Hacker News
brokencode | 1 day ago | on: Lossless LLM compression for efficient GPU inferen...
Yeah, they’re saying that this compression is almost as good as is theoretically possible without losing any information.
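The "theoretically possible" limit for lossless compression is the Shannon entropy of the data. The article's actual method isn't shown here, but a generic sketch (plain Python, not from the paper; the skewed toy distribution is made up for illustration) shows what "almost as good as theoretically possible" means: compare a real compressor's output size against the i.i.d. entropy lower bound.

```python
import random
import zlib
from collections import Counter
from math import log2

def entropy_bits(data: bytes) -> float:
    """Shannon entropy lower bound, in bits, for a memoryless byte source."""
    counts = Counter(data)
    n = len(data)
    return -n * sum(c / n * log2(c / n) for c in counts.values())

# Toy data: i.i.d. symbols with probabilities 1/2, 1/4, 1/8, 1/8
# -> entropy is exactly 1.75 bits/symbol.
data = bytes(random.Random(0).choices(range(4), weights=[8, 4, 2, 2], k=10_000))

bound = entropy_bits(data) / 8          # entropy bound in bytes (~2188 B)
compressed = len(zlib.compress(data, 9))

print(f"original: {len(data)} B, entropy bound: {bound:.0f} B, zlib: {compressed} B")
```

No lossless compressor can beat the entropy bound on average; "near-optimal" means landing within a small overhead of it, which is the claim being paraphrased in the comment above.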