Hacker News
IanCal | 10 months ago | on: Fine tune a 70B language model at home
You can take an existing 70B model and train it to do a more specific task. You're teaching it the task, but you're relying on the foundation model for the base understanding of the world, words, and so on.
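The idea in the comment above can be sketched in miniature: fine-tuning means continuing gradient descent from pretrained weights on a small task dataset, rather than training from scratch. This toy one-parameter model is purely illustrative (the data and numbers are invented); a real 70B fine-tune would use a framework such as Hugging Face transformers/PEFT.

```python
# Toy illustration of fine-tuning: start from "pretrained" weights and
# continue gradient descent on a small task-specific dataset.
# All data here is invented for illustration only.

def sgd_fit(w, data, lr=0.01, steps=200):
    """One-parameter linear model y = w * x, trained with plain SGD."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of squared error (w*x - y)^2
            w -= lr * grad
    return w

# "Pretraining": broad data teaches the general mapping (here, y = 3x).
pretrain_data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w_pretrained = sgd_fit(0.0, pretrain_data)

# "Fine-tuning": a handful of task examples (y = 3.5x) nudge the already
# good pretrained weight, needing far fewer steps than a cold start.
task_data = [(1.0, 3.5), (2.0, 7.0)]
w_finetuned = sgd_fit(w_pretrained, task_data, steps=50)

print(round(w_pretrained, 2), round(w_finetuned, 2))  # → 3.0 3.5
```

The point is only the shape of the procedure: the expensive run establishes the base weights once, and the cheap task-specific run adjusts them slightly.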
qsi | 10 months ago
OK, that makes sense. Thank you!