Hacker News

On the contrary -- the opposite will happen. There's a decent body of research showing that training foundation models on their own outputs can amplify their capabilities.
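A minimal sketch of the kind of loop meant by "training on their outputs" (self-training with pseudo-labels) -- this is an illustrative toy, not any specific paper's method; the 1-D threshold classifier and the confidence cutoff are assumptions chosen for brevity:

```python
# Toy self-training loop: fit a 1-D threshold classifier on a few
# labeled points, pseudo-label unlabeled points it is confident
# about, then retrain on the enlarged set. All names illustrative.

def fit_threshold(points):
    """Fit a 1-D threshold: midpoint between the two class means."""
    neg = [x for x, y in points if y == 0]
    pos = [x for x, y in points if y == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def predict(threshold, x):
    return 1 if x >= threshold else 0

def confidence(threshold, x):
    # Distance from the decision boundary as a crude confidence proxy.
    return abs(x - threshold)

labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
unlabeled = [0.5, 2.0, 8.0, 9.5]

t = fit_threshold(labeled)
# Keep only high-confidence pseudo-labels, then retrain -- the
# "train on your own outputs" step.
pseudo = [(x, predict(t, x)) for x in unlabeled if confidence(t, x) > 2.0]
t2 = fit_threshold(labeled + pseudo)
print(t, t2, len(pseudo))
```

With a real model the same loop uses softmax confidence instead of boundary distance, and the amplification claim is about repeating it at scale.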

Less common opinion: this is also how you end up with models that have a concept of themselves, which has high economic value.

Even less common opinion: that's really dangerous.



