
These won't be smaller, I guess, given we keep the number of parameters the same.

In the pre-LLM era (let's say 2020), the hardware used to look decently powerful for most use cases (disks in the hundreds of GBs, a dozen or two GB of RAM, and quad- or hex-core processors), but with the advent of LLMs, even disk drives start to look pretty small, let alone compute and memory.
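For scale, the raw weight footprint is just parameter count times bytes per parameter. A minimal back-of-the-envelope sketch (the 7B/70B/405B sizes and the precision choices are illustrative assumptions, not figures from the thread; real deployments also need optimizer state, activations, KV cache, etc.):

    # Raw weight storage for a fixed parameter count at common precisions.
    GIB = 1024**3

    def weight_bytes(n_params: float, bits_per_param: int) -> float:
        """Bytes needed just to store the weights."""
        return n_params * bits_per_param / 8

    for n_params in (7e9, 70e9, 405e9):    # hypothetical 7B, 70B, 405B models
        for bits in (16, 8, 4):            # fp16, int8, int4
            gib = weight_bytes(n_params, bits) / GIB
            print(f"{n_params/1e9:>5.0f}B @ {bits:>2}-bit: {gib:8.1f} GiB")

Even at 4-bit, a 70B model needs ~33 GiB for the weights alone, which is already a large fraction of a 2020-era consumer disk's working space and well past typical RAM.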




And cache! The talk in AI hardware now is "how do we fit these darn things inside SRAM?"
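To see why that's the framing: even small models dwarf on-chip SRAM. A rough sketch for scale; the capacities below are approximate publicly quoted figures (on the order of 50 MiB of L2 on an H100-class GPU, ~230 MiB per Groq LPU chip, ~40 GiB across a Cerebras wafer), treated here as assumptions, not exact specs:

    # Compare one model's weight storage against rough on-chip SRAM budgets.
    MIB, GIB = 1024**2, 1024**3

    sram_budgets = {
        "GPU L2 cache (~50 MiB)":    50 * MIB,
        "Groq LPU chip (~230 MiB)":  230 * MIB,
        "Cerebras wafer (~40 GiB)":  40 * GIB,
    }

    model_bytes = 7e9 * 2   # a 7B-parameter model in fp16 (2 bytes/param)

    for name, budget in sram_budgets.items():
        ratio = model_bytes / budget
        print(f"{name}: 7B fp16 model is {ratio:,.1f}x this capacity")

A 7B fp16 model is hundreds of times larger than a GPU's L2, which is why SRAM-heavy designs either tile weights through cache, shard the model across many chips, or go wafer-scale.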



