Everyone loves to hate on OpenAI and talk about how they're really ClosedAI, an evil corporation vying for power, but the opposite is also interesting to think about. I think it's fair to say that the majority of scientists at OpenAI wouldn't be working there if they knew they were working for an evil corporation. These are some of the brightest people on the planet, yet I've only heard good things about OpenAI leadership, especially Sam Altman, and their commitment to actually guiding AI for the better.
I'm not saying that OpenAI is benevolent, but let's assume so for the sake of argument. They would definitely need real-world experience running commercial AI products, both for the organizational expertise and for greater control over the production of safe and aligned AI technologies. A hypothetical strategy, then, would be to
a) get as much investment/cash as needed to continue research productively (Microsoft investment?)
b) with this cash, do research but turn that research into real-world product as fast as possible
c) and price these products at a loss, so that not only are they the #1 product to use, but other potentially malevolent parties also can't achieve liftoff and dig their own niche into the market
I guess my point is that a company that truly believes AI is a potentially species-ending technology requiring incredible levels of guidance may aim for the same market control and dominance as a party that's just aiming for evil profit. Of course, the road to hell is paved with good intentions and I'm on the side of open source (yay Open Assistant), but it's nevertheless interesting to think about.
> These are some of the brightest people on the planet, yet I've only heard good things about OpenAI leadership
This is a deeply ahistorical take. Lots of technically bright people have been party to all sorts of terrible things.
Don't say that he's hypocritical
Rather say that he's apolitical
"Vunce ze rockets are up, who cares vere zey come down
"Zats not mein department!" says Werner von Braun
While "smart people do terrible things" is an absolutely fair point, it's also the kind of thing I hear AI researchers say, even with similar references.
Sometimes they even say this example in the context of "why human-level AI might doom us all".
>I think it's fair to say that the majority of scientists at OpenAI wouldn't be working there if they knew they were working for an evil corporation.
The majority of scientists will work on anything that brings money, engineers doubly so, and they'll either rationalize the hell out of what they're doing as "good", or be sufficiently politically naive to not even understand the repercussions of what they're building in the first place (and will "trust their government" too)...
> and their commitment to actually guiding AI for the better
I think the Silicon Valley elite's definition of "for the better" means "better for people like us". The popularity of the longtermism and transhumanism cult among them also suggests that they'd probably be fine with AI wiping out much of humanity¹, as long as it doesn't happen to them - after all, they are the elite and the future of humanity, with the billions of (AI-assisted) humans that will exist!
And they'll think it's morally right too, because there are so many utility units to be gained from their (and their descendants') blessed existence.
(¹ setting aside whether that's a realistic risk or not, we'll see)
Lots of people work for organisations they actively think are evil because it's the best gig going; plenty of other people find ways to justify how their particular organisation isn't evil despite all it does so they can avoid the pain of cognitive dissonance and keep getting paid.
My current approval of OpenAI is conditional, not certain. (I don't work there, and I at least hope I will be "team-think-carefully" rather than "team OpenAI can't possibly be wrong because I like them").
Huh? People have historically worked at all kinds of companies and organizations doing evil shit, while knowing they do evil shit, and not even justifying it as "bad but necessary" or via some ideology, just doing it for profit...
Drug cartels have all sorts of engineers on board, for one small example...
Similarly, if you feel the need to fart it COULD be a monkey trying to escape - sure, it's been eggy gases every single time before but THIS TIME COULD BE DIFFERENT!
> These are some of the brightest people on the planet, yet I've only heard good things about OpenAI leadership, especially Sam Altman, and their commitment to actually guiding AI for the better.
Hear hear. It ought to be remembered that there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success than to take the lead in the introduction of a new order of things.