
I'm completely hooked. This is such a good paper.

The model hallucinating how it thinks through things is particularly interesting - not surprising, but cool to see confirmed.

I would LOVE to see Anthropic feed the replacement features' output back to the model itself and fine-tune it on how it actually thinks / reasons internally, so it can accurately describe how it arrived at its solutions - and then see how that impacts its behavior / reasoning.
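
For what it's worth, here's a rough sketch of what that data pipeline could look like. The helpers `get_replacement_features` and `features_to_explanation` are hypothetical stand-ins for the paper's replacement-model / attribution tooling, not Anthropic's actual API - the point is just building (prompt, grounded self-explanation) pairs for ordinary supervised fine-tuning:

    # Hypothetical sketch: pair prompts with explanations derived from
    # replacement-model features, as fine-tuning targets.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Feature:
        name: str          # human-readable label for an interpretable feature
        activation: float  # how strongly it fired on this prompt

    def get_replacement_features(prompt: str) -> List[Feature]:
        # Stand-in: run the replacement (transcoder) model and return the
        # most active interpretable features for this prompt.
        return [Feature("multi-hop: Dallas -> Texas -> Austin", 0.92)]

    def features_to_explanation(features: List[Feature]) -> str:
        # Turn the active features into a natural-language description of
        # the internal computation, to serve as the training target.
        steps = ", ".join(f.name for f in features)
        return f"Internally I relied on these features: {steps}."

    def build_self_explanation_dataset(prompts: List[str]) -> List[dict]:
        # Pair each prompt with an explanation grounded in the model's
        # actual (replacement-model) computation rather than a confabulated one.
        dataset = []
        for prompt in prompts:
            features = get_replacement_features(prompt)
            dataset.append({
                "prompt": prompt + "\nExplain how you arrived at your answer.",
                "target": features_to_explanation(features),
            })
        return dataset

    if __name__ == "__main__":
        pairs = build_self_explanation_dataset(
            ["What is the capital of the state containing Dallas?"]
        )
        print(pairs[0]["target"])
        # These pairs would then feed a standard supervised fine-tuning run.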
