Hacker News

Describe your process of reasoning, and how it differs from taking inputs and producing outputs.


Sorry, we're discussing GPT and LLMs here, not human consciousness and intelligence.

GPT has been constructed. We know how it was set up and how it operates, and people commenting here should be basically familiar with both. No part of it does any reasoning. Taking in inputs and generating outputs is completely standard for computer programs and in no way qualifies as reasoning.

People only bring in the idea of 'reasoning' for one of two reasons. Either they don't understand how an LLM works and have been fooled by the semblance of reasoning it produces, or, more culpably, they do understand but continue to talk about the LLM 'reasoning' anyway: because they are delusional (they are fantasists), or because they are working to mislead people about the machine's actual capabilities (they are fraudsters).
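To make the "inputs in, outputs out" point concrete, here is a toy sketch of autoregressive decoding. The lookup table and token names are entirely hypothetical, my own illustration standing in for a trained network; a real LLM replaces the table with a learned function, but the control flow is the same plain input-to-output loop:

```python
# Hypothetical next-token table standing in for the trained network.
# (Toy illustration, not GPT's actual mechanism.)
NEXT_TOKEN = {
    ("the",): "cat",
    ("the", "cat"): "sat",
    ("the", "cat", "sat"): "down",
}

def generate(prompt, max_new_tokens=10):
    """Greedy autoregressive decoding: repeatedly pick the next token
    for the current context and append it, until the model has nothing
    more to emit or the length budget runs out."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        nxt = NEXT_TOKEN.get(tuple(tokens))
        if nxt is None:  # no continuation known for this context; stop
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"]))  # → ['the', 'cat', 'sat', 'down']
```

Whatever comes out of a loop like this is a deterministic function of its input (plus, in a real model, a sampling seed); nothing in it corresponds to a distinct 'reasoning' step.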




