Hacker News
stefs | 8 months ago | on: Llama 3.1 405B now runs at 969 tokens/s on Cerebra...
Well, we can run AI inference on our desktops for $500 today, just with smaller models and far more slowly.
ryao | 8 months ago
There is no need to use smaller models. You can run even the biggest models, such as Llama 3.1 405B, on a fairly low-end desktop today:
https://github.com/lyogavin/airllm
However, it will be far slower, as you said.
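For context, airllm's trick is to stream transformer layers through memory one at a time instead of holding the whole model resident, trading speed for a tiny memory footprint. Below is a minimal toy sketch of that idea in plain Python; the two 2x2 "layers", the JSON-on-disk format, and the function names are all made up for illustration and are not airllm's actual API.

```python
# Toy illustration of layer-by-layer offloading: persist each layer's
# weights to disk, then stream them through memory one at a time during
# the forward pass. The tiny 2x2 "layers" here are invented for the demo.
import json
import os
import tempfile

def save_layers(dirname, layers):
    # Write each layer's weight matrix to its own file on disk.
    for i, w in enumerate(layers):
        with open(os.path.join(dirname, f"layer_{i}.json"), "w") as f:
            json.dump(w, f)

def matvec(w, x):
    # Plain matrix-vector product: one row dot-product per output element.
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def streamed_forward(dirname, n_layers, x):
    # Only one layer's weights are resident in memory at any moment.
    for i in range(n_layers):
        with open(os.path.join(dirname, f"layer_{i}.json")) as f:
            w = json.load(f)   # load layer i from disk
        x = matvec(w, x)       # apply it to the running activation
        del w                  # drop it before loading the next layer
    return x

with tempfile.TemporaryDirectory() as d:
    # Two tiny 2x2 "layers": a scale-by-2, then a sum/difference mix.
    layers = [[[2, 0], [0, 2]], [[1, 1], [1, -1]]]
    save_layers(d, layers)
    print(streamed_forward(d, 2, [1.0, 2.0]))  # prints [6.0, -2.0]
```

The peak memory cost is one layer's weights plus the activation vector, which is why a 405B-parameter model can limp along on a desktop: disk I/O per layer per token dominates, hence the "far slower" caveat.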