I guess what is missing for me is that there is no verification of the truth of its responses.
When a human "reasons," they are trying to take some information and find an answer that actually fits. We don't just blurt out something that may or may not fit. Or at least, if we did, we wouldn't call it reasoning.
That an LLM can still produce answers which are correct some of the time is amazing. I just think that without the ability to step back and assess its response, it doesn't hit the bar of reasoning for me.