
Reasoning is not well understood, even among humans. We have only a black-box definition: whatever it is that we are doing, we call it reasoning.

If an LLM arrives at the same output a human does given the same input, and that output is sufficiently low-probability to occur by random chance or association, then it fits the term "reasoning" to the maximum extent we understand the term.

Given that we don't know what's going on, the best bar is simply matching input and output while making sure the model isn't relying on memorization, pattern matching, or random chance. There are MANY prompts that meet this criterion.
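To make that bar concrete, here is a minimal sketch of what such a check could look like. Everything here is an illustrative assumption, not a real benchmark or API: the probe, the 90% threshold, and the stub model are all placeholders.

    from typing import Callable

    # Probes with human-agreed answers, phrased so the answer is unlikely
    # to be recoverable by memorization or surface pattern matching alone.
    # ('banana' has 6 letters; doubled is 12; 'queue' has 4 vowels; 12 - 4 = 8.)
    PROBES = [
        ("Take the number of letters in 'banana', double it, then subtract "
         "the number of vowels in 'queue'. Answer with just the number.", "8"),
    ]

    def passes_bar(ask: Callable[[str], str], trials: int = 5) -> bool:
        """Black-box check in the spirit of the comment above: the model
        must match the human answer consistently enough that chance is an
        implausible explanation."""
        hits = sum(
            expected in ask(prompt)
            for prompt, expected in PROBES
            for _ in range(trials)
        )
        # Require near-perfect agreement; a guesser picking among even 10
        # candidate answers would score ~10% here, not ~100%.
        return hits >= 0.9 * trials * len(PROBES)

    # Usage with a stub "model" that always answers "8". A real test would
    # swap in an actual LLM client, which this sketch does not assume.
    if __name__ == "__main__":
        print(passes_bar(lambda prompt: "8"))  # True

The point is only the shape of the test: a novel input, a human-agreed output, and a consistency requirement that rules out lucky guessing.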

Your thoughts and claims are, to be honest, just flat out wrong. They’re made up, because not only do you not know what the model is doing internally, you don’t even know what you or any other human is doing. Nobody knows. So I don’t know why you think your claims have any merit. They don’t, and neither do you.



Not sure why this got so acrid, but I don’t really have any reason to interact with someone saying I have “no merit.” You might want to look at how bent out of shape you’re getting over a rando on the internet disagreeing with you.

Why I would lie about plugging your problem into an LLM or solving it is beyond me; you know I don’t lose anything by admitting you’re right? In fact, I would stand to gain from learning something new. I think you should examine how you approach an argument, because every reply you’ve made has made you look more desperate for someone to agree, as if you’re trying to bully people into agreeing through ad hominem attacks. Despite it all, I think you have merit as a person, even if you can’t make a cogent argument and just chase your tail on this topic.

I’m going to stop engaging with you from now on, but just as a piece of perspective for you: both o3 and Gemini, when asked, pointed out that your problem is a derivation; perhaps you are overestimating its novelty. Gemini even cited prior derivations right out of the gate.


>Why I would lie about plugging your problem into an LLM or solving it is beyond me;

I interpreted it as you saying you solved it without the LLM. Apologies then for the misinterpretation.

Yeah, I agree there’s no point in continuing this conversation. We disagree, and there’s no moving forward from that.



