The finding that it hallucinates how it thinks through things is particularly interesting - not surprising, but cool to see confirmed.
I would LOVE to see Anthropic feed the replacement features' output back to the model itself and fine-tune it on how it actually thinks through / reasons internally, so it can accurately describe how it arrived at its solutions - and then see how that impacts its behavior / reasoning.
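To make that idea a bit more concrete, here's a rough sketch of what building that kind of fine-tuning data could look like. Everything here is hypothetical - the `traces` records are a stand-in for whatever the replacement-model / attribution-graph tooling actually emits, and `describe_trace` is an assumed helper, not a real API.

```python
import json

# Hypothetical shape of an interpretability trace: for each prompt, the
# replacement features that were actually active and their attributions.
traces = [
    {
        "prompt": "What is 36 + 59?",
        "answer": "95",
        "active_features": [
            {"name": "adds numbers ending in 6 and 9", "attribution": 0.42},
            {"name": "sum is roughly 90", "attribution": 0.31},
            {"name": "output ends in 5", "attribution": 0.27},
        ],
    },
]

def describe_trace(trace, top_k=3):
    """Turn the top attributed features into a plain-language description
    of how the answer was actually computed (assumed helper, not a real API)."""
    top = sorted(trace["active_features"],
                 key=lambda f: f["attribution"], reverse=True)[:top_k]
    steps = "; ".join(f["name"] for f in top)
    return f"I arrived at {trace['answer']} via these internal steps: {steps}."

# Build supervised fine-tuning pairs: the question plus a request for an
# explanation, with the target being the trace-derived description rather
# than the model's own (often confabulated) account of its reasoning.
with open("faithful_explanations.jsonl", "w") as f:
    for trace in traces:
        example = {
            "prompt": trace["prompt"] + "\nExplain how you got your answer.",
            "completion": describe_trace(trace),
        }
        f.write(json.dumps(example) + "\n")
```

Whether training on data like this would make the model's self-reports genuinely more faithful, or just better at sounding faithful, is exactly the open question.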