No, putting everything in the public domain and handing out a UBI doesn't solve anything at all. It's like worrying about nukes and believing the best solution is to give everyone nukes (AI) and nuclear bunkers (UBI), because "you can't stop progress". And then, let's hand out pamphlets telling people how to use nukes safely, even though we know that most people will not read the pamphlets and (since the field is new) even the people writing the pamphlets may have no idea how to use this tech. Any cargo cults would only grow in number.
Oh, and our "free" nuclear bunkers have to be paid for by the government. There is a chance of the bunkers either being too "basic" to help most people ("Here's your basic income: $30 USD per month! Enjoy!") or being so costly that the program will likely be unsustainable. And what if people don't like living in bunkers, etc.?
We are trying to apply quick fixes to address symptoms of a problem instead of addressing the problem directly. Slowing down is the right option. If that happens, then society can slowly adapt to the tech and actually ensure it benefits others, rather than rushing blindly into a tech without full knowledge or understanding of the consequences. AI might be okay, but we need time to adjust to it, and that's the one resource we don't really have.
Of course, maybe we can do all of this: slow down tech, implement UBI, and have radical AI transparency. If one solution fails, we have two other backups to help us out. We shouldn't put all our eggs in one basket, especially when facing such a complicated and multifaceted threat.
>Are they though? All I hear and read (for example "Superintelligence") is about the "runaway AI"; very little is about societal risk.
You are right, actually. My bad. I was referring to how AI Safety people have created organizations dedicated to dealing with their agendas, which is arguably better than simply posting about their fears on Hacker News. But I don't hear much about what these organizations are actually doing beyond "raising awareness". Maybe these AI Safety organizations are little more than glorified talking shops.
It is also better to be far ahead of unregulatable rogue states that would continue to work on AI. Secondly, deferring AI to a later point in time might make self-improvement much faster and hence less controllable, since more computational resources would be available due to Moore's Law.