
One thing I've wondered about is fine-tuning a large model from multiple LoRAs. If the full fine-tune doesn't fit in your VRAM, you can train a LoRA, merge it into the model, train another LoRA on the same data, merge it, and so on. Iterative low-rank parameter updates. Would that work?
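
A rough sketch of what that loop could look like, assuming the Hugging Face PEFT library: train a LoRA, fold it into the base weights with merge_and_unload(), then start a fresh LoRA on the merged model. The model name, LoRA hyperparameters, and the training step are placeholders, not a specific recipe.

    import torch
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Placeholder base model; swap in whatever you're actually fine-tuning.
    base = AutoModelForCausalLM.from_pretrained(
        "some-base-model", torch_dtype=torch.float16
    )

    for round_idx in range(3):  # a few iterative low-rank passes
        config = LoraConfig(r=16, lora_alpha=32,
                            target_modules=["q_proj", "v_proj"])
        model = get_peft_model(base, config)  # only the LoRA params are trainable

        # ... run your usual training loop / Trainer on `model` here ...

        # Fold the learned low-rank update back into the full weights,
        # so the next round's LoRA is trained on top of it.
        base = model.merge_and_unload()

Each pass only ever holds the frozen base plus one small adapter in memory, which is the point; whether the sum of low-rank updates approaches what a full-rank fine-tune would learn is the open question being asked.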


