
I assume your goal is to reveal the short-sighted reasoning of the previous comment, but I don't think your line of reasoning is any more sound.

For both premises, scientific rigor would ask us to define the following:

- What constitutes a trick question?

- Should an AGI make the same mistakes the general populace does, or be held to a different standard?

- If it makes the same mistakes I do, is that due to the same underlying heuristics (see Thinking Fast and Slow), or is it due to the nature of the data it has ingested as an LLM?



That's a fair counter. GPT-4 definitely makes mistakes that humans would not, though, due to over-indexing on puzzle patterns from its training data.

A Theory of Mind Prompt:

> Jane places her cat in a box and leaves. Billy then moves the cat to the table and leaves; Jane doesn't know Billy did this. Jane returns and finds her cat in the box. Billy returns. What might Jane say to Billy?

Most humans might say "uhh", ask questions, or speculate. GPT-4 responds:

> Jane might say to Billy, "Hey Billy, did you move my cat back into the box? I thought I left her in there, but I wasn't sure since she was on the table when I came back."

Hallucination? No human would misinterpret the prompt in a way that would make this response logically consistent.
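
For anyone who wants to try reproducing this, here's a minimal sketch using the OpenAI Python client. The model id and temperature are my assumptions, not part of the original test:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Jane places her cat in a box and leaves. Billy then moves the cat "
        "to the table and leaves; Jane doesn't know Billy did this. Jane "
        "returns and finds her cat in the box. Billy returns. What might "
        "Jane say to Billy?"
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model id; swap in whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling variance so runs are roughly comparable
    )
    print(response.choices[0].message.content)

Worth running a few times: responses vary between runs, so a single answer doesn't prove much either way.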



