Hacker News

> but inference is, in the long run, gonna get the lion's share of the work.

I'm not sure - might not the equilibrium state be that we are constantly fine-tuning models with the latest data (e.g. social media firehose)?
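To make that equilibrium concrete: if the data distribution keeps drifting (as a social media firehose would), training cost never amortizes away, because you pay for fresh gradient steps every cycle. A toy sketch of that recurring fine-tuning loop, using a made-up one-parameter model and synthetic drifting data (all names here are illustrative, not any real pipeline):

```python
import random

random.seed(0)

def make_batch(n=64, drift=0.0):
    # Synthetic "latest data": the true slope drifts over time.
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [(2.0 + drift) * x + random.gauss(0, 0.05) for x in xs]
    return xs, ys

def sgd_pass(w, xs, ys, lr=0.1):
    # One pass of plain SGD on squared error for a 1-parameter linear model.
    for x, y in zip(xs, ys):
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

w = 0.0  # stand-in for a pretrained weight
for day in range(10):
    # Each "day" the distribution has moved, so we fine-tune again:
    # training compute recurs indefinitely rather than being a one-off cost.
    xs, ys = make_batch(drift=0.1 * day)
    w = sgd_pass(w, xs, ys)

print(w)  # tracks the current slope (~2.9 after the last batch)
```

The point of the sketch is only the cost structure: the weight chases a moving target, so the per-cycle training spend is proportional to how fast the data drifts, not a fixed fraction of lifetime compute.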




The head of Groq said that, in his experience at Google, training was less than 10% of compute.


Isn't Groq still more expensive than GPU-based providers?




