Any general AI, virtual or physical, can sooner or later build a copy of itself, and easily so in the virtual case. Asimov's laws and other kinds of built-in regulation in sci-fi are childish imaginings, because the company or nation that skips them earns a huge advantage. It's going to be the equivalent of nuclear weapons and will need regulation even in consumer tech, which will be hard.
All these contradictions point to a choice: we humans either keep living as mere biological animals, or we augment ourselves by integrating AI features, much as kings once married the women of their enemies to boost their genetic and social standing. I'm for the latter.
So, uh, I'm still at the "I don't even know the extent of my ignorance" phase of understanding current ML tech; I haven't even gotten to linear algebra in my math education. I'm working on it.
However, I do spend my days around (often fixing computers for) people who do seem to understand machine learning... and as far as I can tell, we're still in a phase where machine learning functions like a fancy sort of filter: a way of determining whether a new piece of data is more like one set of training data or another.
I can totally see how that could be super useful in business applications; I could use some sort of ML filter that takes the boss's words and matches them with something I know how to do, or with something an existing ML library can solve. And while I can see how something like this could eventually help to replace me, I don't see what it has to do with artificial consciousness.
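For concreteness, here's a minimal sketch of that "which training set does this resemble" idea, using a nearest-centroid rule on made-up numbers. The arrays and the `classify` function are invented for illustration, not taken from any particular library:

```python
import numpy as np

# A toy "fancy filter": decide whether a new data point looks more like
# training set A or training set B. The data here is hypothetical; the
# arrays stand in for whatever features you'd actually extract.

set_a = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]])  # examples of class A
set_b = np.array([[5.0, 5.5], [5.2, 4.8], [4.9, 5.1]])  # examples of class B

def classify(point):
    """Return the label of whichever training set's centroid is closer."""
    dist_a = np.linalg.norm(point - set_a.mean(axis=0))
    dist_b = np.linalg.norm(point - set_b.mean(axis=0))
    return "A" if dist_a < dist_b else "B"

print(classify(np.array([1.1, 2.0])))  # "A" -- resembles the first set
print(classify(np.array([5.0, 5.0])))  # "B" -- resembles the second set
```

Real systems use far richer models than a centroid comparison, but the job is the same shape: new data in, "more like this pile or that pile" out.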
This is basically the stance of every serious practitioner I've seen (i.e., the stance shared by everyone who works directly with theory/code to train models, as opposed to those who talk about AI at a high level).
Right now, we have these systems that are effectively ungodly complicated spreadsheets. They're great at a variety of tasks, some of which seem impossible for a non-intelligent entity to perform (neural machine translation is wild to me).
But that's all these systems are: super complicated spreadsheets. There's no way for them to start replicating consciousness without massive advances in the field.
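To make the "spreadsheet" framing concrete, here's a toy sketch assuming nothing more than numpy: a two-layer network reduced to the grids of numbers and arithmetic it actually is. The weights are random stand-ins, not a trained model:

```python
import numpy as np

# The "complicated spreadsheet" view: a network is just grids of numbers
# (weights) combined with simple arithmetic. A real model has millions of
# learned values; these are random placeholders.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # layer 1: 4 inputs -> 8 units
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # layer 2: 8 units -> 2 outputs

def forward(x):
    """One pass through the 'spreadsheet': multiply, add, clip, repeat."""
    h = np.maximum(0, x @ W1 + b1)  # ReLU: a cell formula, nothing more
    return h @ W2 + b2

print(forward(np.array([1.0, 0.5, -0.3, 2.0])))
```

Scale those grids up by a few orders of magnitude and you get translation and the rest; the machinery stays multiply-and-add all the way down.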
Having said that, there is a road from where we are to intelligence: if we can create a network that performs arbitrary interactions online, and figure out some way to create a positive feedback loop for intelligence, the way AlphaGo Zero did with its policy network and MCTS, then we might be able to figure it out. But we're so far away from that that I'm not concerned.
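Here's a deliberately toy sketch of that feedback loop, with every name invented for illustration (this is the AlphaGo Zero *shape*, not its code): "search" beats the raw policy by sampling it many times and keeping the best result, and "training" nudges the policy toward what search found, so the next round's search starts from a stronger baseline:

```python
import random

# Toy positive feedback loop: search amplifies the policy, then the
# policy is trained to imitate the search. The 'game' is just picking
# large numbers; search/train are stand-ins for MCTS/gradient descent.

def search(policy, rollouts=50):
    """Stand-in for MCTS: sample the policy many times, keep the best."""
    return max(policy() for _ in range(rollouts))

def train(policy_mean, target):
    """Stand-in for training: nudge the policy toward the search result."""
    return policy_mean + 0.1 * (target - policy_mean)

policy_mean = 0.0
for _ in range(20):
    best = search(lambda: random.gauss(policy_mean, 1.0))  # search beats raw policy
    policy_mean = train(policy_mean, best)                 # policy chases the search

print(round(policy_mean, 2))  # climbs steadily: each round's teacher was stronger
```

The open problem the comment points at is finding an objective like this for general intelligence, where Go hands you a clean win/loss signal for free.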
Concerned? I know which side my bread is buttered on; when the revolution comes, John Connor and I will probably not be friends.
But yeah; as disappointing as I might find it, I kind of think we're heading towards more of a 'Star Trek' dystopia... a universe with continuing ethnic strife and computers that are advanced when it comes to responding to what we want, but that remain tools, without much in the way of a will of their own.
> I thought Star Trek was considered Utopian, at least inside the Federation.
In my comment, I'm implying that any universe where we don't figure out AI, where humans are still in charge, is a sort of dystopia.
To be absolutely clear, it was a poor attempt at a joke. Many of these observations can also be read in a positive light. But I do think that in a lot of ways you can see darkness in the Federation.
They haven't figured out AI, and they still have humans in charge of menial tasks, humans who aren't particularly good at those tasks compared to a computer.[1] I mean, sure, sending people to explore is great, but they also send people to fight, even when the battle is existential. They still have humans in charge, even though those humans are only slightly less corrupt and petty than we are.
They also apparently still have huge issues with racism, even within the Federation. This is the second part of the comparison: I have recently learned that my own society seems to be rather more racist than I thought it was, and that progress is way slower than I initially thought. Star Trek reflects this glacial progress.
[1] Apparently, they have bans on enhancing those humans, even though they have the tech to do it (see Bashir's storyline on DS9). To me, this seems like the worst kind of waste: to have the technology to make us all brilliant, but to leave us all as dullards.
Unless you are more interested in foundational philosophy than solving problems of immediate relevance, I suggest you just ignore the "consciousness" debate. It is mostly in a state where people can't even agree what the question is, but everyone has their favorite answer to it.
I'm trying to educate myself in both philosophy and in engineering; I think it's realistic to expect a person to understand both the arts and the sciences, at least to an undergrad level, and I think there is value in both.
The philosophy of consciousness is interesting, though; the question "what is consciousness?" is interesting and important, and if we want to create consciousness, we need to answer it. Even if consciousness turns out to be an emergent property of something else we do, which is to say, even if we create a machine we'd call conscious by accident, we'd still need to know it when we see it. And right now, I'm not sure that philosophy even has a good "I'll know it when I see it" kind of answer to that question.
But yeah, my response was mostly an attempt to point out that the article is talking about something more like "CASE tools" than like HAL.