
It's not trivial given current supply bottlenecks, not to mention research expertise.


I don't feel like compute for pretraining the model was a huge constraint?

The supply bottlenecks have been around commercializing the ChatGPT product at scale.

But I don't think pretraining the underlying model was on the same order of magnitude, right?


Control of the supply is with Microsoft, who are likely falling on Sam's side here.



