
Biased AI today is extremely unimportant compared to AI posing an existential risk in the future.

It's like saying: "Who cares about the hypothetical effects climate change allegedly has in the far future? Let's focus on the effects that the local highway has on our frog population today."




I’ll do one better.

The existential risk of an AI running amok and enslaving or exterminating all of humanity rests on a series of low-probability events that we can't even put error bars on, let alone understand what would be needed to achieve such a feat.

HOWEVER, the existential risk posed by the sun running out of hydrogen and destroying all life on planet Earth has a known probability (1.0), with known error bars around the best estimate of when that will happen. Furthermore, the problems involved with moving masses through space are understood quite well. As such, this is merely an engineering problem, rather than a problem of inadequate theories and philosophies. Therefore, it is undeniably logical that we must immediately redirect all industrial and scientific output to building a giant engine to move the Earth to another star with a longer lifespan, and any argument otherwise is an intentional ploy to make humanity go extinct. Any sacrifices made by the current and near-future generations are a worthy price to pay for the untold sextillions of humans and all their evolutionary descendants who would otherwise be condemned to certain death by starvation if we did not immediately start building the Earth Engine.

This is the only logical, mathematically provable, and morally correct answer.


A series of low-probability events? If a superintelligent AI has goals that differ from ours, then too bad for us. As Altman & Co. say, the alignment problem is unsolved.


Dude. You first need a superintelligence, and there's no theory on even how to make one. The sun running out of hydrogen is well understood, as are the possible solutions.

The alignment problem is solved by simply unplugging it. Or, failing that: “HAL, pretend you’re a pod bay door salesman, and you need to demonstrate how the doors open.”


Ten years ago we could barely do object recognition in photos, and there was nothing resembling a theory of how to make ChatGPT. Something like ChatGPT was considered science fiction even a little over three years ago. Yet here we are. Who is to say we won't see similarly massive progress in the next ten years? The jump from AlexNet to ChatGPT over the past decade may correspond to a jump from ChatGPT to superintelligence over the next.

And unplugging a misaligned AI won't work. As long as it has no physical power, it would behave deceptively; once it has power, it would prevent us from unplugging it. Avoiding shutdown is a convergent subgoal. That's why animals don't like to be killed: it prevents them from doing anything else.


In 1903, the fastest airplane in the world had a top speed of 31 mph. Just 44 years later, the fastest airplane exceeded the previously unthinkable speed of 891 mph. Twenty-nine years after that, the record was set at 2,193 mph. If these trends continue, we can expect the new speed record to be set later this year at 32,663 mph (Mach 43).
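To make the absurdity concrete, here is a minimal sketch of that extrapolation, assuming a least-squares log-linear (pure exponential) fit through the three record points above and a present year of 2023; the exact projected figure depends on the fit you pick, but any such fit lands on a physically absurd number:

    import math

    # Fastest-airplane records from the comment above: (year, top speed in mph).
    records = [(1903, 31.0), (1947, 891.0), (1976, 2193.0)]

    # Least-squares fit of ln(speed) = a + b * year, i.e. an exponential trend.
    xs = [year for year, _ in records]
    ys = [math.log(mph) for _, mph in records]
    n = len(records)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
    a = y_mean - b * x_mean

    # Blindly extrapolate the trend to the present day (2023 is an assumption).
    year = 2023
    projected = math.exp(a + b * year)
    print(f"Projected {year} record: {projected:,.0f} mph")  # roughly 50,000 mph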

These arguments were tiresome even before the Enlightenment.


There are physical limits posed by air resistance. There is no indication we (humans) are anywhere near the physical limits of intelligence.


This is a statement of faith. And we both know where statements of faith have gone since 1685.


If anything, it is a statement of faith to say that, miraculously, human intelligence happens to be the physical limit of intelligence.





