
They show a simple one in the post:

> In the example below, we build a model using the TF-GNN Keras API to recommend movies to a user based on what they watched and genres that they liked.

> The code above works great, but sometimes we may want to use a more powerful custom model architecture for our GNNs. For example, in our previous use case, we might want to specify that certain movies or genres hold more weight when we give our recommendation.
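The post's actual snippet isn't reproduced in the comment. For context, a minimal sketch of what a model of that shape could look like in the tensorflow_gnn Keras API — the node/edge set names ("user", "movie", "genre", "watched", "liked_genre"), sizes, and dimensions are all made up here, not taken from the post:

    import tensorflow as tf
    import tensorflow_gnn as tfgnn

    # Toy heterogeneous graph: 1 user, 3 movies, 2 genres.
    # Everything is random/hypothetical, for illustration only.
    graph = tfgnn.GraphTensor.from_pieces(
        node_sets={
            "user": tfgnn.NodeSet.from_fields(
                sizes=tf.constant([1]),
                features={tfgnn.HIDDEN_STATE: tf.random.normal([1, 8])}),
            "movie": tfgnn.NodeSet.from_fields(
                sizes=tf.constant([3]),
                features={tfgnn.HIDDEN_STATE: tf.random.normal([3, 8])}),
            "genre": tfgnn.NodeSet.from_fields(
                sizes=tf.constant([2]),
                features={tfgnn.HIDDEN_STATE: tf.random.normal([2, 8])}),
        },
        edge_sets={
            "watched": tfgnn.EdgeSet.from_fields(
                sizes=tf.constant([3]),
                adjacency=tfgnn.Adjacency.from_indices(
                    source=("user", tf.constant([0, 0, 0])),
                    target=("movie", tf.constant([0, 1, 2])))),
            "liked_genre": tfgnn.EdgeSet.from_fields(
                sizes=tf.constant([2]),
                adjacency=tfgnn.Adjacency.from_indices(
                    source=("user", tf.constant([0, 0])),
                    target=("genre", tf.constant([0, 1])))),
        })

    # One round of message passing: update the user's state from the
    # movies it watched and the genres it liked.
    update = tfgnn.keras.layers.GraphUpdate(node_sets={
        "user": tfgnn.keras.layers.NodeSetUpdate(
            {"watched": tfgnn.keras.layers.SimpleConv(
                 tf.keras.layers.Dense(16, activation="relu"), "mean",
                 receiver_tag=tfgnn.SOURCE),
             "liked_genre": tfgnn.keras.layers.SimpleConv(
                 tf.keras.layers.Dense(16, activation="relu"), "mean",
                 receiver_tag=tfgnn.SOURCE)},
            tfgnn.keras.layers.NextStateFromConcat(
                tf.keras.layers.Dense(8))),
    })

    graph = update(graph)
    # Updated user embedding, ready to score against candidate movies.
    user_embedding = graph.node_sets["user"][tfgnn.HIDDEN_STATE]

Scoring the user embedding against candidate movie states (e.g. with a dot product) would then give the recommendation; swapping SimpleConv for a custom convolution with learned per-edge weights is presumably where the post's "certain movies or genres hold more weight" comes in.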

Couldn't you use a regular DNN with a one-hot encoding of all the movies a user has seen (and the corresponding genres)? And boosting could give more weight to certain movies or genres.
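For concreteness, the baseline being described could look something like this (a minimal sketch; vocabulary sizes and layer widths are made up):

    import tensorflow as tf

    # Each user is a fixed-length multi-hot vector: 1s for watched
    # movies and liked genres, 0s elsewhere.
    NUM_MOVIES, NUM_GENRES = 5000, 20

    inputs = tf.keras.Input(shape=(NUM_MOVIES + NUM_GENRES,))
    x = tf.keras.layers.Dense(256, activation="relu")(inputs)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    # Score every movie; train against held-out watches.
    outputs = tf.keras.layers.Dense(NUM_MOVIES, activation="sigmoid")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")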


It appears the difference lies in the depth of knowledge available to the model. A one-hot encoding, followed by an embedding and some fully connected layers, only takes the actual titles into account.

A GNN can take into account everything you know about the movies, including incomplete data. So a GNN will see that user A likes everything with actor X, user B really wants the genre to be Y, user C likes actor Z but only in movies from before 2000, and combinations of those. That lets the GNN do better; hell, it can even predict what movie properties would do well, which would be tough to get out of the embedding network.

You could encode all of this data into your embedding input, but the GNN will be a much smaller and much more flexible network.
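A toy numerical illustration of that last point (embeddings are random here, and names like "actor_X" are hypothetical): in a trained graph model, a movie's representation can be aggregated from the entities it links to, so even a movie nobody has watched yet gets a scoreable embedding — which the multi-hot baseline can't do.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 8
    # Stand-ins for learned node states in a trained GNN.
    emb = {
        "actor_X": rng.normal(size=dim),
        "genre_Y": rng.normal(size=dim),
        "pre_2000": rng.normal(size=dim),
    }

    def movie_embedding(attributes):
        # One message-passing step: mean of attribute-node states.
        return np.mean([emb[a] for a in attributes], axis=0)

    user_B = emb["genre_Y"]  # user B's state, dominated by genre Y
    new_movie = movie_embedding(["actor_X", "genre_Y"])  # zero watches
    score = user_B @ new_movie  # dot-product recommendation score
    print(score)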
