
>> AGI is not, and there is no evidence that it is even possible.

> We ourselves are an existence proof that minds like ours can be made, that these minds can be made from matter, and that the design for the organisation of that matter can come from a simple process of repeatedly making imprecise copies and then picking the best at surviving long enough to make more imprecise copies.

I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there's no evidence that humans are capable of designing a new mind, or that it's possible to make a mind that greatly exceeds ours in capability.




> I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there's no evidence that humans are capable of designing a new mind, or that it's possible to make a mind that greatly exceeds ours in capability.

I don't know what you mean by "as-yet circular assumption". (Though in the philosophy of knowledge, the Münchhausen trilemma says that every justification is ultimately either circular, an infinite regress, or dogmatic.)

> there's no evidence that humans are capable of designing a new mind, or that it's possible to make a mind that greatly exceeds ours in capability.

Sounds like you're arguing against ASI, not AGI: G = General, like us; S = Super-, exceeding us.

That said, there's evidence that ASI is also possible: all the different ways in which we've made new minds that do in fact greatly exceed ours in capability.

When I was a kid, "intelligent" was the way we described people who were good at maths, skilled at chess, had good memories or large vocabularies, or knew many languages. Even ignoring the arithmetical component of maths (where a Pi Zero exceeds all of humanity combined, even if each of us were operating at the standard of the current world record holder), we have had programs solving symbolic maths for a long time; chess (and Go, StarCraft, poker, …) have superhuman AIs; and even before GPT, Google Translate already knew more languages than I can remember the names of (even if you filter the list to only those where it was of a higher standard than my second language), a few of them even with augmented-reality image-to-image translation.

And of course, for all the flaws current LLMs have in peak skill, most absolutely have superhuman breadth of knowledge: I can beat GPT-3.5 as a software engineer, at maths and logic puzzles, or at writing stories, but that's basically it.

What we have not made is anything that combines human (or superhuman) peak skill with human-level generality. But saying that having the two parts separately isn't evidence the combination can be done is analogous to looking at 1 gram of enriched uranium and a video of a 50 kg sphere of natural uranium being forced to implode spherically, and saying "there's no evidence that humans are capable of designing an atom bomb, or that it's possible to make an atom bomb that greatly exceeds chemical bombs in yield."


You won't get a proof until the deed is done. But that's the same with nuclear armageddon: you can't be sure it'll happen until after the planet's already glassed. Until then, evidence about the probability of the event is all you have.

> there's no evidence that humans are capable of designing a new mind, or that it's possible to make a mind that greatly exceeds ours in capability

There are plenty of good reasons to assume it's possible, and no evidence suggesting it's not.


"good reasons" sounds like another way of saying "no actual evidence, but a lot of hope". There is no actual evidence that it's possible, certainly not anytime soon. People pushing this narrative that AGI is anywhere close are never people working in the space, it's just the tech equivalent of the ancient aliens guys.


> People pushing this narrative that AGI is anywhere close are never people working in the space

Apart from the most famous AI developer group, which has been saying exactly that since near the beginning of this year, on the back of releasing an AI that's upset a lot of teachers and interview-question writers because it can pass so many of their existing quizzes without the student or candidate needing to understand anything.

I suppose you could argue that they are only saying "AGI could happen soon or far in the future" rather than "it will definitely be soon"…


Yes, the people selling the hammer want you to believe it's a sonic screwdriver. What else is new? You sort of prove my point when your evidence of who is making those claims is the people with a vested interest, not the actual scientists and non-equity developers who do the actual coding.

"But a company said the tech in their space might be ground-breaking earth-shattering life-changing stuff any minute now! What, you think people would just go on the internet and lie!?"


"No Scotsman puts sugar on his porridge."

"But my uncle Angus is a Scotsman and he puts sugar on his porridge."

"But no true Scotsman puts sugar on his porridge."


I haven't set up a No True Scotsman proposition; I made a very clear and straightforward assertion, one that I've challenged others to disprove.

Show me one scientific paper on Machine Learning that suggests it's similar in mechanism to the human brain's method of learning.

It's not a lack of logical or rhetorical means of disproof that's stopping you (i.e. I'm not moving any goalposts); it's the lack of evidence existing. That's not a No True Scotsman fallacy; it's just the thing legitimately not existing.



