I had a fun experience recently. I asked one of my daughters how many r's there are in strawberry. Her answer? Two ...

Of course, then you ask her to write the word out and the answer gets fixed. Strange, though.



I think that's supposed to be the point of the reasoning functionality, but in practice it mostly just lets the response run longer than it otherwise would: the output gets split into a warm-up pass and a final pass, and the warm-up tokens then sit in context, acting like cached tokens the model can look back at for the answer.

That is to say, you can obtain the same process by talking to "non-reasoning" models.
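For what it's worth, here's a minimal sketch of that claim, assuming a two-pass prompt pattern: the first call produces step-by-step working, and those tokens become ordinary context for the second call. The `llm` callable is a hypothetical stand-in for whatever completion API you use; the toy version at the bottom just counts letters so the sketch runs on its own.

    # Sketch of emulating "reasoning" with a plain (non-reasoning) model:
    # one pass produces intermediate tokens, a second pass answers with
    # those tokens in context. `llm` is a hypothetical completion call.

    def reasoned_answer(llm, question: str) -> str:
        # Pass 1: ask for step-by-step working; these tokens play the
        # role of the hidden "reasoning" tokens in a reasoning model.
        scratchpad = llm(f"Think step by step about: {question}")
        # Pass 2: the scratchpad is now ordinary context the final
        # completion can attend to, much like cached tokens.
        return llm(f"Question: {question}\nWorking: {scratchpad}\nFinal answer:")

    if __name__ == "__main__":
        # Toy stand-in so the sketch runs without any API: it spells the
        # word out in pass 1 and counts letters directly in pass 2.
        def toy_llm(prompt: str) -> str:
            if prompt.startswith("Think"):
                return "Spell it out: s-t-r-a-w-b-e-r-r-y; r appears at positions 3, 8, 9."
            return str("strawberry".count("r"))

        print(reasoned_answer(toy_llm, "How many r's are in 'strawberry'?"))  # -> 3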


To be honest, if a kid asked me how many r's are in strawberry, I would assume they were asking how many r's are at the end and say 2.


I hate to break it to you, but I think your child might actually have been swapped with an LLM at the hospital.



