For a company that's executing so well (at least from the outside), shipping fast, growing fast, and ahead of the curve in arguably the hottest segment of the tech market, to do this right now means it must be REALLY bad.
Yeah, this is, like, "we're getting sued for billions of dollars and directors are going to jail" bad.
So my bet is they either lied about how they're using customer data, covered up a massive data breach, or something similar. The only thing that's hard to square is how specific this is to Altman: I'd expect a big scandal to be leaking out by now, and more people to be getting fired.
I bet the covered-up data breach involves not customer data but IP. My #2 theory is that the breach was prompt data, and that it went to a nation-state adversary of the US. Plenty of people put sensitive work info into ChatGPT when they shouldn't.
If she was under investigation, the board would almost certainly bypass her for the interim CEO position, to mitigate the disruption if that investigation also turned out negative. (They might make her CEO after she was cleared, though, if it went the other way.)
Random question: do you have any connection to the Dragon speech-to-text software [0] that was first released in 1997? I've always found that to be an intriguing example of software that was "ahead of its time" (along with "the mother of all demos" [1]). And if so, it's funny to see you replying to the account named after (a typo of) ChatGPT.
Microsoft had inside information about their security, which is why they restricted access. Meanwhile, every other enterprise and gov organisation using ChatGPT is exposed.
If Sam was pursuing profits or growth (even doing a really good job of it) in a way that violated the objectives set by the non-profit board, that could set up this kind of situation.
This, to me, seems like the most likely root cause: Sam was going too far into the "for profit" world, and lied to the board and misled them about his plans.
That's a good point. The abruptness of the firing, combined with calling him "not candid" (corporate speak for "lied"), means it's probably something with legal jeopardy.
That's what the statement says. It would mean not just a misalignment on values but active deception regarding OpenAI's current direction.
The bit about “ability to fulfill duties” sticks out, considering the responsibility and duties of the nonprofit board… not to shareholders, but, ostensibly, to “humanity.”
They fired him for lying. I think GP meant what they said, which is that what he was doing was blatantly lying, rather than whatever softer interpretation can be made for "not consistently candid in his communications".
Yes, it is arguable. OpenAI is nothing more than a really large pile of RAM and storage wrapped around a traditional model that was allowed to ingest the Internet and now barfs pieces back up in prose, making it sound like it came up with the content.
We've somehow reached the point where the arguments for dismissing AI as hype are frequently more out of touch with reality than the arguments that AGI is imminent.
It's the same as Covid, where people said it was either going to kill everyone or was an authoritarian conspiracy. ChatGPT is neither the singularity nor useless junk. I use it every day at work to write code, but it's not about to take my job either.
It's worth noting (though I'm not sure whether this is related) that Discord has announced they're shutting down their ChatGPT-based bot [0], Clyde.
I don't think so. LLMs are absolutely not a scam. There are LLMs that I can and do run on my laptop that come close to GPT-4. Replacing GPT-4 with another LLM is not the hardest thing in the world. I predict that, outside of Microsoft, this won't be felt in the broader tech sector.
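For what it's worth, running one of those local models takes only a few lines with llama-cpp-python. The GGUF path and settings below are illustrative, not a specific recommendation:

```python
# Minimal local-inference sketch with llama-cpp-python.
# The model file is hypothetical; point it at whatever quantized GGUF you have.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-70b-chat.Q4_K_M.gguf",  # illustrative path
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU/Metal when available
)

out = llm(
    "Q: Summarize what an LLM does in one sentence.\nA:",
    max_tokens=128,
    stop=["Q:"],      # stop before the model invents a follow-up question
)
print(out["choices"][0]["text"].strip())
```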
They really aren't nearly as good as GPT-4, though. The best hobbyist stuff we have right now is 70B LLaMA finetunes, which you can run on a high-end MacBook, but I'd say they're only marginally better than GPT-3.5. As soon as they get a hard task that requires reasoning, things break down. GPT-4 is leaps ahead of any of that.
No, not by themselves, they're not as good as GPT-4. (I disagree that they're only "marginally" better than GPT-3.5, but that's a minor quibble.) If you use RAG and other techniques, you can get very close to GPT-4-level performance with other open models. Again, I'm not claiming open models are better than GPT-4, just that you can come close.
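To make the RAG point concrete, the whole trick is roughly this shape; a minimal sketch, assuming sentence-transformers for embeddings and treating the LLM as an opaque callable (the documents and names are made up for illustration):

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Everything here is illustrative: toy documents, a small embedding model,
# and an LLM passed in as a plain callable.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "OpenAI released GPT-4 in March 2023.",
    "LLaMA 2 models range from 7B to 70B parameters.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # vectors are normalized, so dot product = cosine
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str, llm) -> str:
    """Stuff retrieved context into the prompt before asking the model."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm(prompt)
```

The model itself doesn't get any smarter; it just gets relevant text put in front of it at inference time, which covers a surprising number of the cases where a small model otherwise loses to GPT-4.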
That wouldn't be a scam; that would be an invention worthy of a Nobel Prize, and world-altering beyond the impact of AI. I mean, controlling the flow of time without creating a supermassive black hole would allow all sorts of fun exploits in computation alone, not to mention other practical uses like instantly aging cheese or wine.