Hacker News
benob · 6 months ago | on: SeedLM: Compressing LLM Weights into Seeds of Pseu...
A variant I have been thinking of: each parameter matrix (or block) is the sum of a random matrix (generated from a seed) and a low-rank matrix (a LoRA). I'd like to experiment with training from scratch in that setting.
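A minimal sketch of that idea, assuming NumPy and illustrative names (`SeedPlusLowRank`, the rank and scale values are arbitrary): the dense part is regenerated deterministically from a stored seed, so only the seed and the two low-rank factors would need to be kept.

```python
import numpy as np

def seeded_random_matrix(seed, shape, scale=0.02):
    # Deterministically regenerate the frozen "random" part from its seed.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape) * scale

class SeedPlusLowRank:
    """Parameter matrix = frozen seeded random matrix + trainable low-rank term.

    Storage cost: one integer seed plus A (d_out x r) and B (r x d_in),
    instead of a full d_out x d_in dense matrix.
    """
    def __init__(self, seed, d_out, d_in, rank):
        self.seed, self.shape = seed, (d_out, d_in)
        # LoRA-style init: A starts at zero so the initial matrix is purely
        # the seeded random part; B gets a small random init.
        self.A = np.zeros((d_out, rank))
        self.B = np.random.default_rng(seed + 1).standard_normal((rank, d_in)) * 0.02

    def materialize(self):
        # Full matrix, rebuilt on the fly; only A (and optionally B) would
        # be updated during training from scratch.
        return seeded_random_matrix(self.seed, self.shape) + self.A @ self.B
```

Since `materialize` regenerates the random part from the seed each call, two calls always agree, and at initialization the matrix equals the seeded random part alone.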
sadiq · 6 months ago
There's a related write-up here you might find interesting:
https://wandb.ai/learning-at-home/LM_OWT/reports/Parameter-s...
It covers some experiments on weight tying, one of which is in fact LoRA combined with random weights.