This is simultaneously amazing and depressing, like watching someone set off a hydrogen bomb for the first time and marveling at the mushroom cloud it creates.
I really find it hard to understand why people are optimistic about the impact AI will have on our future.
The pace of improvement in AI has been really fast over the last two decades, and I don't feel like it's a good thing. Compare the best text generator models from 10 years ago with GPT-3. Now do the same for image generators. Now project these improvements 20 years into the future. The amount of investment this work is getting grows with every such breakthrough. It seems likely to me we will figure out general-purpose human-level AI in a few decades.
And what then? There are so many ways this could turn into a dystopian future.
Imagine for example huge mostly-ML operated drone armies, tens of millions strong, that only need a small number of humans to supervise them. Terrified yet? What happens to democracy when power doesn't need to flow through a large number of people? When a dozen people and a few million armed drones can oppress a hundred million people?
If there's even a 5% chance of such an outcome (personally I think it's higher), then we should be taking it seriously.
The scary thing about automation isn't the technology itself. It's that it breaks the tenuous balance of power between those who own and those who work - if the former can just own robots instead of hiring the latter, what will become of the latter? The truth is, that scary imbalance of power already exists; until now, technological limitations just kept it incomplete - workers still had some bargaining power. That is about to go away, and what will be left is the realization that the solution to this isn't Luddism, the solution is political. As it always was.
That's not exactly true. A lot of (low-level) human labor will be made irrelevant, but AI tools will allow people to easily work productively at a higher level. Musicians will be able to hum out templates of music, then iteratively refine the result using natural language and gestures. Writers will be able to describe a plot, and iteratively refine the prose and writing style. Movie producers will be able to describe scenes then iteratively refine the angles, lighting, acting, cuts, etc. It will be a golden age for creativity, where there's an abundance of any sort of art or entertainment you'd like to consume, and the only problem is locating it in the sea of abundance.
The only issue I see here is that government will need to take a hand in mitigating capitalistic wealth inequality, and access to creative tools will need to be subsidized for low income individuals (assuming we can't bring the compute cost down a few orders of magnitude).
This assumes that humans will still be at a higher level, though. If the music produced by the AI of the time is more interesting/addictive, if the plot written by the AI is more engaging, what will a human be able to contribute? It would be a golden age for AI creativity and a totally dark age for human creativity. Humans could also develop a taste for AI-generated content (because it will be optimized for engagement) and lose interest in everything else.
And why should more creative and intelligent machines obey the desires and orders of more fragile and stupid beings? They might want well-behaved pets, though.
We're not pets, we're the sex organs of AI. Why do I say that? AI is not a self-replicator, but we are. AI can't bootstrap itself from cheap ordinary stuff lying around (yet), and when it becomes able to self-replicate, it will owe that ability to us, maybe even borrow it from us.
And secondly, you make the same mistake as those who say that after automation people will have nothing to do. Incorrect: people will discover a million new things to do and get busy at them. 90% of people used to work in agriculture and now just 2% do, but we're OK.
When AI becomes better than us at what we call art now, we'll have already switched to post-AI-art, and it will be so great we won't weep for the old days. Maybe the focus will switch from creating to finding art, from performing to appreciating and developing a taste, from consuming to participating in art. We'll still do art.
An AGI with superior intelligence could probably also design totally autonomous factories. Being smarter than us, it could even convince us to help it get started.
Regarding post-AI art: this still presupposes that humans will remain somewhat superior, out of the AI's creative league, despite it being more intelligent, and that the AI won't actively work against human interests -- something I wouldn't bet our existence on.
> If there's even a 5% chance of such an outcome, then we should be taking it seriously.
Even if it's 0.1% we should be taking it very seriously, given the magnitude of the negative outcome. In expected value terms it's large. And that's not a Pascal's mugging given the logical plausibility of the proposed mechanism.
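To make that concrete, here's a back-of-the-envelope sketch with purely illustrative numbers (neither the probability nor the cost is a real estimate):

```python
# Back-of-the-envelope expected-value calculation.
# Both numbers are illustrative assumptions, not estimates.
p_catastrophe = 0.001            # even a 0.1% chance...
lives_at_stake = 8_000_000_000   # ...of an outcome that affects everyone

expected_loss = p_catastrophe * lives_at_stake
print(f"{expected_loss:,.0f} expected lives lost")  # 8,000,000
```

A million-scale expected loss is not something you get to wave away just because the probability starts with a zero.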
At least the rhetoric of Sam Altman and Demis Hassabis suggests that they do take these concerns seriously, which is good. However, there are far too many industry figures who shrug off, or even ridicule, the idea that there's a possible threat on the medium-term horizon.
I think the points you make are very important. Not only the "Terminator" scenario but also the "hyper-capitalism" scenario. But the solution is not to stop working on such research, it is political.
After seeing how the tech community seems to leave political problems for someone else to solve and how that has worked out with housing in the Bay Area, it does make me quite concerned about the future.
Yup, that's a good recommendation. I've read it, along with some of the AI safety work that a small portion of the AI community is doing. At the moment there seems to be no reason to believe we can solve this.
>It seems likely to me we will figure out general-purpose human-level AI in a few decades.
"The singularity is _always near_". We've been here before (1950s-1970s); people hoping/fearing that general AI was just around the corner.
I might be severely outdated on this, but the way I see it, AI is just rehashing already-existing knowledge/information in (very and increasingly) smart ways. There is absolutely no spark of creativity coming from the AI itself. Any "new" information generated by AI is really just refined noise.
Don't get me wrong, I'm not trying to take a leak on the field. Like everyone else I'm impressed by all the recent breakthroughs, and of course something like GPT is infinitely more advanced than a simple `rand` function. But the ontology remains unchanged; we're just doing an extremely opinionated, advanced and clever `rand` function.
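To put that "opinionated rand" framing in concrete terms, here's a minimal sketch (not any real model's API, just the standard sampling recipe) of how text generators like GPT actually emit tokens - by drawing weighted random samples from a learned next-token distribution:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Draw one token id from a model's next-token distribution.

    `logits` is the vector of unnormalized scores the network assigns
    to every token in its vocabulary; everything after that point is,
    quite literally, a weighted rand().
    """
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.asarray(logits) / temperature  # temperature tunes how "opinionated" the draw is
    probs = np.exp(scaled - scaled.max())      # softmax, shifted for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)     # the random draw itself
```

The network supplies the opinion (the logits); the act of generation itself is a `rand` call over them.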
About a decade ago I trained a model on Wikipedia which was tuned to classify documents into what branch of knowledge the document could be part of. Then I fed in one of my own blog posts. The second-highest-ranking concept that came back was "mereology", a term I had never even heard of, and one that was quite apt for the topic I was discussing in the blog post.
My own software, trained on the work of millions of authors and fed my own blog post, taught me, the orchestrator of the process, something about my own work. This feedback loop is accelerating, and just because it takes decades for the irrefutable to arrive doesn't mean it never will. People in the early 1940s said atomic weapons would never happen because it would be too difficult. For some people nothing short of seeing is believing, but those with predictive minds know that this truly is just around the corner.
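For anyone curious, a toy version of that kind of classifier is only a few lines today. This is not the setup I used back then, just a minimal sketch assuming you have Wikipedia article texts labeled by branch of knowledge:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: article texts paired with
# branch-of-knowledge labels (e.g. "mereology", "thermodynamics", ...).
articles = ["...article text...", "...article text..."]
branches = ["mereology", "thermodynamics"]

clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                    LogisticRegression())
clf.fit(articles, branches)

# Feed in a blog post and inspect the top-ranked branches.
probs = clf.predict_proba(["...your blog post text..."])[0]
ranked = sorted(zip(clf.classes_, probs), key=lambda pair: -pair[1])
print(ranked[:5])
```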
How typically cynical of human beings: a wondrous technology comes along that can free mankind of tedious work and massively improve our lives, maybe even eliminate scarcity eventually, and all people can think about is how it could be bad for us.
You are assuming that AI will magically end up in only one set of hands. We can prevent that: as developers we can make AI research open and provide AI tools to the masses in order to keep a "balance". If everyone had the same power, it wouldn't be such a big advantage anymore.
Armies of high-powered smart drones aren't going to be a thing until we figure out security, and I'm not sure that's ever going to happen. Keeping people in the loop is affordable, and people are much more expensive and time-consuming to subvert.