tripzilch's comments | Hacker News

> In a mirror, the image is reversed left-to-right

lol


but I do actually believe that corporations taking part in and profiting from corruption/etc are just as much to blame as the governments that let it happen, as are the people/voters

unless you somehow think that corruption is morally neutral and only "bad" because there's laws against it?


Who do you think would have ended up paying the 145% tariffs - Apple and Android phone sellers, or consumers?
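
Just as a back-of-the-envelope sketch (the numbers are purely hypothetical: a $500 import cost and the tariff fully passed through to the buyer):

    # toy numbers: the $500 wholesale cost is made up, and full pass-through is assumed
    import_cost = 500.00                       # hypothetical cost of the imported phone
    tariff_rate = 1.45                         # a 145% tariff
    tariff_paid = import_cost * tariff_rate    # $725.00 owed at the border
    landed_cost = import_cost + tariff_paid    # $1225.00 before any retail markup
    print(f"tariff: ${tariff_paid:.2f}, landed cost: ${landed_cost:.2f}")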

This is all on US voters.


Okay, sure. And while I understand and share your frustrations with the outcome of recent US elections, democracy is a bad proxy for morality. Or, to put it bluntly: if 9 out of 10 people voted for gang rape, it's still wrong for a multinational corporation to take advantage of that.


I’m very well aware of the tyranny of the majority. My still-living parents grew up in the Jim Crow South. Everything they are doing now to demonize immigrants and LGBT people, they did in the 80s with “welfare queens” and “Willie Horton”.

The difference is that I don’t expect any better.


No, reasoning is about applying rules of logic consistently, so if you only do it some of the time, that's not reasoning.

If I roll a die and it only _sometimes_ gives the correct answer to a basic arithmetic question, that is exactly why we don't say the die is doing arithmetic.
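
Roughly what I mean, as a toy simulation (the ranges are arbitrary):

    import random

    # let a six-sided die "answer" the question "what is a + b?"
    trials, hits = 10_000, 0
    for _ in range(trials):
        a, b = random.randint(1, 3), random.randint(1, 3)
        if random.randint(1, 6) == a + b:  # the die happens to match the correct sum
            hits += 1
    print(f"the die was 'right' {hits / trials:.0%} of the time")  # ~17%, by pure chance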

It's even worse in the case of LLMs, where the wrong answers aren't caused by pure chance alone, but also by training bias and hallucinations.

You can claim nobody knows the exact definition of reasoning, and maybe there are some edge cases that aren't clearly defined because they belong to philosophy, but applying rules of logic consistently is not something you can do only some of the time and still call it reasoning.

Also, LLMs are generally incapable of saying that they don't know something, cannot know something, can't do something, etc. They would rather try and hallucinate. When they do that, it's not reasoning. And you also can't explain to an LLM how to figure out that it doesn't know something, and then have it actually say it doesn't know instead of making stuff up. If it were capable of reasoning, you should be able to convince it, using _reason_, to do exactly that.

However, you


It fails at deductive reasoning though. Pick a celebrity with non-famous children that don't obviously share their last name or something. If you ask it "who is the child of <celebrity>", it will get it right, because this is in its training data, probably Wikipedia.

If you ask "who is the parent of <celebrity-child-name>", it will often claim to have no knowledge about this person.

Yes, sometimes it gets it right, but sometimes it doesn't. Try a few celebrities.

Maybe the disagreement is about this?

Like if it gets it right a good amount of the time, you would say that means it's (in principle) capable of reasoning.

But I say that if it gets it wrong a lot of the time, that means 1) it's not reasoning in situations where it gets it wrong, but also 2) it's most likely not reasoning in situations where it gets it right either.

And maybe you disagree with that, but then we don't agree on what "reasoning" means. Because I think that consistency is an important property of reasoning.

I think that if it gets "A is parent of B, implies B is child of A" wrong for some celebrity parents, but not for others, then it's not reasoning. Reasoning would mean applying this logical construct as a rule, and if it's not consistent at that, it's hard to argue that it is in fact applying the rule rather than doing who-knows-what that happens to give the right answer some of the time.
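
This is basically the probe I've been doing by hand; sketched out it would look something like this, where ask() is just a stand-in for whatever model or chat API you're testing and the name pairs are placeholders:

    # sketch of the consistency probe; ask() and the pairs are placeholders
    def ask(prompt: str) -> str:
        return ""  # swap in a real call to the model you want to test

    pairs = [
        ("<celebrity>", "<their non-famous child>"),
        # ... a few more, ideally where the child has a different last name
    ]

    for parent, child in pairs:
        forward = ask(f"Who is the child of {parent}?")
        backward = ask(f"Who is the parent of {child}?")
        # if it answers parent -> child but not child -> parent, it isn't applying
        # "A is parent of B, implies B is child of A" as a rule
        print(f"{parent} -> child found: {child in forward} | "
              f"{child} -> parent found: {parent in backward}")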


> AI can realise

wait did AGI happen? which AI is this?

stop anthropomorphizing them

No, they can't. They can generate text that indicates they hallucinated; you can tell them to stop, and they won't.

They can generate text that appears to admit they are incapable of doing a certain task, and you can ask them to do it again, and they will happily try and fail again.

Sorry but give us some examples of an AI "realizing" its own mistakes, learning, and then not making the mistake again.

Also, if this were even remotely possible (which it is not), then we should be able to just get AIs with all of the mistakes pre-made, so they have already learned not to make them again, right? They would have already "realized" and "learned" which tasks they're incapable of, so they would actually refuse or find a different way.

Or is there something special about the way that _you_ show the AI its mistakes that is somehow better at making it "learn" from those mistakes than actually training it is?


> if I'm working on a team and people are writing code how is it any different? Everyone makes mistakes, I make mistakes.

because your colleagues know how to count

and they're not hallucinating while on the job

and if they try to slip an unrelated and subtle bug past you for the fifth time after you've asked them to do a very basic task, there are actual consequences instead of "we just need to check this colleague's code better"


> Google gets a free gift from a multinational bureaucracy and gets to look like a smart company in the process

it would have cost you exactly nothing to not make an unnecessary dig at Europeans, here


Maybe this is anecdotal, but I feel this is dependent on age. What you describe would have been true when I was younger. But nowadays I know quite a few people who have never used cannabis and hardly drink any alcohol, and who still tell me they'd like to try LSD or shrooms some day.


Honestly, you sound like someone who would just as easily try to stop one patent troll only to help another, depending on who pays you the most.


You unfortunately crossed into personal attack and broke the site guidelines repeatedly in this thread. That's not allowed here, regardless of how wrong someone is or you feel they are, so please don't do it again.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Would you say that SHRDLU is capable of reasoning then?

https://en.wikipedia.org/wiki/SHRDLU

Because, whenever you give it a reasoning test, it also seems to do fine.

That is what I meant in my other post: I don't really think that "it seems to do fine" is enough evidence for the extraordinary claim that it can reason.

