
Advancements around parameter-efficient fine-tuning came from internet randoms because big companies don't care about PEFT.


... Sort of?

HF is sort of big now. Stanford is well funded and they did PyReft.


HF is not very big, and Stanford doesn't have lots of compute.

Neither of these is even remotely a big lab of the kind I'm discussing.


HF has raised more than $400m. If that doesn't qualify them as "big", I don't know what does.
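
For readers unfamiliar with the term: a minimal sketch of what parameter-efficient fine-tuning looks like in practice, using LoRA via Hugging Face's peft library. The model name and hyperparameter values below are illustrative assumptions, not taken from the thread.

    # Minimal PEFT sketch: LoRA adapters on a small causal LM.
    # Only the low-rank adapter weights are trained; the base model stays frozen.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in model

    config = LoraConfig(
        r=8,                        # rank of the adapter matrices
        lora_alpha=16,              # scaling applied to the adapter output
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # typically well under 1% of all parameters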



