
Most people act on gut instincts first as well. Gut instinct = the first semi-random sample from experience (= training data). That's where all the logical fallacies come from. Take the bat-and-ball problem, where 95% of people give an incorrect answer, because most of the time people simply pattern-match too. It saves energy and works well 95% of the time. Just like reasoning LLMs, people can get to the correct answer if they increase their reasoning budget (but often they don't).
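(For anyone who hasn't seen it: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball - how much is the ball? The pattern-matched answer is $0.10. A quick deliberate check - just a throwaway sketch, not anyone's actual test - shows why $0.05 is the answer that satisfies both constraints.)

    # Check both candidate answers against the problem's two constraints:
    # bat + ball == 1.10 and bat == ball + 1.00
    for ball in (0.10, 0.05):
        bat = ball + 1.00
        total = round(bat + ball, 2)
        print(f"ball={ball:.2f} -> total={total:.2f} ({'ok' if total == 1.10 else 'wrong'})")
    # ball=0.10 gives a total of 1.20 (wrong); ball=0.05 gives 1.10 (correct).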

An LLM is a derivative of collective human knowledge, which is itself intrinsically unreliable. Most human concepts are ill-defined, fuzzy, and heavily context-dependent. Human reasoning itself is flawed.

I'm not sure why people expect 100% reliability from a language model that is based on human representations, which themselves cannot realistically be 100% reliable or perfectly well-defined.

If we want better reliability, we need a combination of tools: a "human mind model", which is intrinsically unreliable, plus a set of programmatic tools (much as a human would use a calculator or a program to verify their results). I don't know whether something that works with human concepts can be 100% reliable even in principle. Can a "lesser" mind create a "greater" mind, one free of human limitations? I think it's an open question.
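As a concrete (if toy) illustration of that combination: below is a minimal sketch, where model_guess is a hypothetical stand-in for whatever fuzzy "mind model" you query, and the only reliable part is the deterministic check bolted on after it - the calculator in the analogy.

    # A toy pairing of an unreliable "mind" with a deterministic tool,
    # the way a person double-checks mental arithmetic with a calculator.
    def model_guess(question: str) -> int:
        # Hypothetical stand-in for a model call; it may pattern-match and err.
        return 418  # plausible-looking but wrong answer to 17 * 24

    def calculator_check(a: int, b: int, guess: int) -> bool:
        # The programmatic, fully reliable half of the combination.
        return a * b == guess

    guess = model_guess("What is 17 * 24?")
    if not calculator_check(17, 24, guess):
        guess = 17 * 24  # fall back to the tool's exact result
    print(guess)  # prints 408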



> Most people act on gut instincts first as well

And we intentionally do not hire «most people» as consultants. We want to ask those who are intellectually diligent and talented.

> language model that is based on human representations

The machine is made to process the input - not merely to "intake" it. To create a mimic of the average Joe would be a disservice on both counts: the project was to build a processor, and we refrain from asking the average Joe anyway. The plan can never have been what you described, a mockery of mediocrity.

> we want better reliability

We want the implementation of a well-performing mind - of intelligence. What you described is the "incompetent mind", the habitual fool. The «human mind model» is prescriptive, based on what a properly used mind can do, not descriptive of what sloppy, weak minds do.

> Can a "lesser" mind create a "greater" mind

Nothing says it could not.

> one free of human limitations

Very certainly yes: we can build things with more time, more energy, more efficiency, more robustness, and so on, than humans have.



