
AIs, being digital, have some clear advantages over people: very rapid replication, enormous communication bandwidth, and the potential to expand capacity to global scale, for example. A human-level AI with these features, which are common among ordinary software, would already be superhuman.

People have deep, often implicitly shared values. Most care about human life and their own lives, for example. Given the limited capacity of any one person, people need to cooperate to accomplish major acts. It is thus quite hard to do something extraordinary that conflicts with the values held by most humans.

Side note: there are top AI researchers, including Yann LeCun, who argue that our intelligence is not general, and they make some good points. I think the generality of intelligence is a gradient: ours is not at the very top end of possibilities, but it is clearly more general than that of other animals.




> A human-level AI ...

What evidence do you have for this claim? What are examples of research programs that claim to be working towards this goal, and why should their claims be believed (other than as good marketing material for scaring people)?

I already mentioned mathematical abilities (https://news.ycombinator.com/item?id=23413003): any general intelligence at the level of a human must be able to prove theorems without brute-force search, and I see no evidence that this is possible, or ever will be, with current statistical methods and neural network architectures. Generalizing a bit further, a general intelligence would be able to prove theorems not only in complex analysis but in all domains of mathematics, and again I see no evidence that this is possible with existing techniques and methods. When an AI research lab presents evidence of any of its products being able to derive and prove something like Cauchy's Residue Theorem, then I will have reason to believe artificial intelligence can reach human levels of intelligence.
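For concreteness, here is the standard textbook statement of the theorem I have in mind, written in LaTeX (this is just the usual formulation, not a claim about what any particular system can or cannot do):

  % Cauchy's Residue Theorem (simple, positively oriented contour)
  % Assume: U is a simply connected open subset of C,
  % f is holomorphic on U except at isolated singularities a_1, ..., a_n,
  % and \gamma is a positively oriented simple closed curve in U
  % enclosing each a_k exactly once.
  \oint_{\gamma} f(z) \, dz = 2\pi i \sum_{k=1}^{n} \operatorname{Res}(f, a_k)

Deriving a result like this from the definitions of holomorphy and contour integration, rather than retrieving a memorized proof, is the capability I see no evidence for with current methods.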

My pessimism is not about AI being beneficial; it is about folks claiming that human-level intelligence is possible and that it will be malevolent. My view of AI is the same as Demis Hassabis' view, because AI is just a tool and a tool can't be malevolent:

> "I think about AI as a very powerful tool. What I'm most excited about is applying those tools to science and accelerating breakthroughs" [0]

--

[0]: https://www.techrepublic.com/article/google-deepmind-founder...


By the time a single AI system can prove those theorems and learn to perform well on very unrelated tasks, it could be too late to think about AI Safety.

AI Safety researchers and I are not arguing that AGI will certainly arrive within any specified amount of time; it's just that we can't be sure it won't.

There are a significant number of AI researchers, however, who believe it might get developed within a few decades.

OpenAI, for example, is aiming for it; DeepMind as well. Both have research programs on AI Safety.

Do you have any other objections to the reasoning in the OP (no fire alarm...)? Subjective implausibility is not a good one. Your argument rests on restricting attention to "existing techniques". When thousands of brilliant minds are working in the field and hundreds of good papers are being published every year, how can we be sure there won't be a novel technique that performs outside current limitations within a few decades?

At least two top computer vision researchers I talked with a couple of years ago told me that, five years earlier, they would not have believed their groups could do what they eventually did.

See this thread for more on the rationale for preparation: https://news.ycombinator.com/item?id=23414197



