
> there are no regularizers in any modern LLMs.

Using a large & diverse training set is the best regulariser, but I think there are also weight decay and dropout in transformers.
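For concreteness, here's a minimal PyTorch sketch of those two regularizers in a transformer training setup: dropout inside the layers and decoupled weight decay via the optimizer. The hyperparameter values are illustrative, not taken from any particular LLM.

    import torch
    import torch.nn as nn

    # dropout is applied to attention weights and FFN activations inside each layer
    layer = nn.TransformerEncoderLayer(
        d_model=512, nhead=8, dim_feedforward=2048, dropout=0.1,
    )
    model = nn.TransformerEncoder(layer, num_layers=6)

    # AdamW applies decoupled weight decay (an L2-style shrinkage of the weights)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.1)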




RWKV also uses some sort of L2-esque regularization, which was supposedly an idea taken from PaLM (although I can't find a source on this point, other than some message in the RWKV Discord).
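Very roughly, an auxiliary penalty of that flavour might look like the sketch below: a small L2-style term on the output logits added to the usual cross-entropy. This is purely illustrative (the coefficient and exact form are assumptions), not RWKV's or PaLM's actual implementation.

    import torch
    import torch.nn.functional as F

    def loss_with_logit_penalty(logits, targets, penalty_coef=1e-4):
        # standard cross-entropy over the vocabulary
        ce = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
        # small L2-esque penalty discouraging the logits from growing unbounded
        l2_logits = penalty_coef * logits.pow(2).mean()
        return ce + l2_logits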



