
my impression was that this was about the time taken to make each prediction, not to train the model? and yep, looking forward to the paper!



It was based on test-time prediction: given a sentence you have received, how long it takes to compute the prediction with either a bag-of-words model or an LSTM.
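As a rough illustration of the comparison being described (the model sizes, weights, and sentence here are invented for the sketch, not taken from the paper), a bag-of-words prediction is a single averaged matrix product, while an LSTM pays one sequential step per token:

```python
# Minimal numpy sketch: per-sentence inference cost of a bag-of-words
# classifier vs. a single-layer LSTM. All weights are random placeholders.
import time
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, hidden, n_classes = 10_000, 128, 128, 2
emb = rng.standard_normal((vocab, dim))
w_bow = rng.standard_normal((dim, n_classes))

# LSTM parameters, with the four gates (input, forget, cell, output) stacked.
w_x = rng.standard_normal((dim, 4 * hidden)) * 0.1
w_h = rng.standard_normal((hidden, 4 * hidden)) * 0.1
b = np.zeros(4 * hidden)
w_out = rng.standard_normal((hidden, n_classes))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bow_predict(token_ids):
    # Bag of words: average the token embeddings, then one linear layer.
    return emb[token_ids].mean(axis=0) @ w_bow

def lstm_predict(token_ids):
    # LSTM: one recurrent step per token, so cost grows with sentence length.
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for t in token_ids:
        g = emb[t] @ w_x + h @ w_h + b
        i, f, g_c, o = np.split(g, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g_c)
        h = sigmoid(o) * np.tanh(c)
    return h @ w_out

sentence = rng.integers(0, vocab, size=30)  # a 30-token "sentence"

for name, fn in [("bow", bow_predict), ("lstm", lstm_predict)]:
    start = time.perf_counter()
    for _ in range(100):
        fn(sentence)
    print(f"{name}: {(time.perf_counter() - start) / 100 * 1e6:.1f} us/prediction")
```

Timing a plain function call like this ignores serving overheads (batching, serialization, network), which is exactly the gap the "practical example" question below is getting at.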

When you say a practical example, do you mean the scenario where you have an API server running, so that costs such as latency, data transfer, and API overhead are also considered?

Thanks for your feedback!




