1.: I take issue with expressions of non-concern being dismissed as themselves yet another reason why people should be concerned. I dislike circular reasoning.
2.: There is no uniform agreement that AI is safe; in fact, at least in German hacker circles, quite the opposite. But as far as I can see, the problems that AI risk evangelists emphasize are the wrong ones. Taking over the world, paperclip maximizers etc. ... not going to happen. AI is too stupid for that right now, and the rapid takeoff scenarios are not realistic. But ~30% of all jobs are easily replaceable by current AI technology once the laws and the capital are lined up. AI is also making more and more decisions, legitimizing the biases it was trained or programmed with, because it is "AI" and thus more reliable (/s). And there is a great cargo cult of "data" "science" in development (separate scare quotes intended).
3.: I am starting to dislike any explicit mention of fallacies, especially when the person mentioning them committed one just a few sentences earlier.
I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulation, slowing down AI research and preventing AI misuse? Or are we just going to talk about it on Hacker News and at German hacker meetups?
Because at least OpenAI and other "AI Safety" people are attempting to address this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing... should we get some of the blame for letting the robots proliferate?
Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.
>I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulation, slowing down AI research and preventing AI misuse? Or are we just going to talk about it on Hacker News and at German hacker meetups?
We should not slow it down. We should push forward, educate people about the risks, and keep as much as possible under public scrutiny and in public possession (open source, government grants, out of patents / university patents).
>Because at least OpenAI and other "AI Safety" people are attempting to address this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing... should we get some of the blame for letting the robots proliferate?
Are they though? All I hear and read (for example "Superintelligence") is about the "runaway AI"; very little is about societal risk.
>Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.
Just like electricity ate the world, and steam before it... slowing down is not an option; making sure it ends up benefiting everybody would be the right approach. Pushing for UBI is infinitely more valuable than AI risk awareness, because only one of the two does not depend on technological progress to work / deliver rewards.
No, putting everything in the public domain and handing out a UBI doesn't solve anything at all. It's like worrying about nukes and believing the best solution is to give everyone nukes (AI) and nuclear bunkers (UBI), because "you can't stop progress". And then let's hand out pamphlets telling people how to use nukes safely, even though we know that most people will not read the pamphlets and (since the field is new) even the people writing them may have no idea how to use this tech. Any cargo cult would only grow in number.
Oh, and our "free" nuclear bunkers have to be paid for by the government. There is a chance the bunkers will either be too "basic" to help most people ("Here's your basic income: $30 USD per month! Enjoy!") or so costly that the program will likely be unsustainable. And what if people don't like living in bunkers, etc.?
We are trying to apply quick fixes to the symptoms of a problem... instead of addressing the problem directly. Slowing down is the right option. If that happens, society can slowly adapt to the tech and actually ensure it benefits others, rather than rushing blindly into a technology without full knowledge or understanding of the consequences. AI might be okay, but we need time to adjust to it, and that's the one resource we don't really have.
Of course, maybe we can do all of this: slow down the tech, implement UBI, and have radical AI transparency. If one solution fails, we have two other backups to help us out. We shouldn't put all our eggs in one basket, especially when facing such a complicated and multifaceted threat.
>Are they though? All I hear and read (for example "Superintelligence") is about the "runaway AI"; very little is about societal risk.
You are right, actually. My bad. I was referring to how AI Safety people have created organizations dedicated to their agendas, which is arguably better than simply posting about their fears on Hacker News. But I don't actually hear much about what these organizations are doing other than "raising awareness". Maybe these AI Safety organizations are little more than glorified talking shops.
It is also better to be far ahead of unregulatable rogue states that would continue to work on AI. Secondly, deferring AI to a later point in time might make self-improvement much faster and hence less controllable, since more computational resources would be available due to Moore's Law.