jprafael on March 24, 2023 | on: LoRA: Low-Rank Adaptation of Large Language Models
Computing gradients is easy/cheap. What this technique solves is that you no longer need to store gradients and optimizer state for the full weight matrices, only for the small low-rank adapters, which saves expensive GPU RAM and lets you fine-tune on commodity hardware.
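A minimal PyTorch sketch of where the savings come from (the LoRALinear wrapper and its hyperparameters are illustrative, not from the paper): the pretrained weight W is frozen, so autograd allocates no gradient buffer for it and the optimizer keeps no per-parameter state for it; only the two small rank-r matrices A and B are trained.

  import torch
  import torch.nn as nn

  class LoRALinear(nn.Module):
      # Hypothetical minimal wrapper: y = W x + (alpha/r) * B A x,
      # with W frozen and only A, B trainable (LoRA's reparameterization).
      def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
          super().__init__()
          self.base = base
          for p in self.base.parameters():
              p.requires_grad = False  # frozen: no grad buffer, no optimizer state
          # d*r + r*k trainable params instead of d*k for the full matrix.
          self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
          self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))  # B = 0 at init
          self.scale = alpha / rank

      def forward(self, x):
          # Base output plus the scaled low-rank correction.
          return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

For a 4096x4096 layer with r=8, that is ~65K trainable parameters instead of ~16.8M, so gradient and Adam state memory shrink by the same factor.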