
For a company that's executing so well (at least from an outside perspective), shipping so fast, growing so fast, and so far ahead of the curve in arguably the hottest segment of the tech market, to do this right now means this must be REALLY bad.


Yeah, this is like, "we are getting sued for billions of dollars and directors are going to jail" bad.

So my bet is they either lied about how they're using customer data, covered up a massive data breach, or something similar. The only thing that's a bit hard to figure is how specific this is to Altman. I'd think a big scandal would be leaking out, and more people would be getting fired.


> covered up a massive data breach or something similar to that

Honest question: do execs or companies in general ever suffer consequences for data breaches? Seems like basically no one cares about this stuff.


Really different between private and public companies. Recent hilarious piece from Matt Levine that was discussed on HN: https://www.bloomberg.com/opinion/articles/2023-11-16/hacker...


I think it depends on whose data was revealed, and the nature of the data.


Clorox just fired their CISO last week for a data breach


A CISO's role is to get fired after a data breach


No, their role is to prevent a data breach from happening in the first place.


Isn't that what CISO roles are for? They aren't real C-suite roles, are they?


Chief Information Scapegoat Officer


Most executives are covered by directors and officers (D&O) insurance, which protects them from personal liability.


I bet the data breach being covered up is not customer data, but IP. My #2 theory is that the breach is prompt data and it went to a nation-state adversary of the US. Plenty of people putting sensitive work info into ChatGPT when they shouldn’t.


I'm betting that ridiculous offer they made last week to cover developer legal fees has already blown up in their face


"they lied about how they are using customer data" -- possibly. it is in the inherent logic of the ai to gobble up as much data as physically possible


Zero percent chance that Ilya (part of the board) would be blindsided by this.


Things are moving faster than any human's ability to keep up.


Yeah, but the CTO is now interim CEO. Hard to imagine her getting the role if that were the case.


Unless she was the one who blew the whistle on him. Here's a hypothetical scenario:

- New feature/product/etc. launch is planned.

- Murati warns Altman that it's not ready yet and there are still security and privacy issues that need to be worked out.

- Altman ignores her warnings, launches anyway.

- Murati blows the whistle on him to the board, tells them that he ordered the launch over her objections.

- Data breach happens. Altman attempts to cover it up. Murati blows the whistle again.

- Board fires Altman, gives Murati the job as it's clear from her whistleblowing that she has the ethics for it at least.

Again, completely hypothetical scenario, but it's one possible explanation for how this could happen.


Or it's the other way around: she wants to launch because of money and investors, and that's all she's really about.

He says fuck them and their money, it's not ready yet, here's a bunch of other things that will make people go wooooow.

She's not happy he does that, because of the future. She convinces the board with money and investors.

The board shits on humanity and goes for money and investors.


“Do a hugely public firing because a feature wasn’t launched” would probably be a first


"Interim CEO", she may also be marked for termination too.


If she was under investigation, the board would almost certainly bypass her for the interim CEO position, to mitigate the disruption if that investigation also turned out negative. (They might make her CEO after she was cleared, though, if it went the other way.)


Random question: do you have any connection to the Dragon speech-to-text software [0] that was first released in 1997? I've always found that to be an intriguing example of software that was "ahead of its time" (along with "the mother of all demos" [1]). And if so, it's funny to see you replying to the account named after (a typo of) ChatGPT.

[0] https://en.wikipedia.org/wiki/Dragon_NaturallySpeaking

[1] https://en.wikipedia.org/wiki/The_Mother_of_All_Demos


If you hunt 2 mosquitoes in your room, do you go to bed after having swatted 1? Certainly not me.


If I haven't found it after thirty seconds, I put on a long-sleeve shirt and open the door...


I usually just cover myself with a sheet, say a Hail Mary, and get back to sleep.

I ain't no Captain Ahab, baby.


This is the most plausible speculation I've read here, really.


It might be related to this: https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...

Microsoft had inside information about their security, which is why they restricted access. Meanwhile, every other enterprise and government organisation using ChatGPT is exposed.


Is Microsoft fully indemnified if they discovered an OpenAI leak, but kept it to themselves?


The board in question is the non-profit board.

If Sam was pursuing profits or growth (even doing a really good job of it) in a way that violated the objectives set by the non-profit board, that could set up this kind of situation.


This, to me, seems like the most likely root cause: Sam was going too far into the "for profit" world, and lied to the board and misled them about his plans.


But would that warrant outright firing him like this? No exit plan where they can give the appearance of leaving on good terms?


That's a good point. The abruptness of the firing, plus calling him "not candid" (corporate speak for "lied"), means it's probably something with legal jeopardy.


That's what the statement says. It would mean not just a misalignment on values but active deception regarding OpenAI's current direction.

The bit about “ability to fulfill duties” sticks out, considering the responsibility and duties of the nonprofit board… not to shareholders, but, ostensibly, to “humanity.”


To make an example out of him?


They clearly did it in a hurry, this is a “pants on fire” firing, not a difference of opinion over his leadership and direction.


I assume you mean ‘hair on fire,’ which suggests a move done in a panic. “Pants on fire” means the subject is lying.


They fired him for lying. I think GP meant what they said, which is that what he was doing was blatantly lying, rather than whatever softer interpretation can be made for "not consistently candid in his communications".


Actually I did mix up “hair on fire” and “pants down”. Fortunately, “pants on fire” still works in context.


Ok, but it fits so well with "liar, liar, pants on fire" and their reason for dismissal.


Why would they want to do that? That doesn't benefit anyone.


Seems the most plausible and, frankly, ideal explanation.


> arguably the hottest segment of the tech market

Yes, it is arguable. OpenAI is nothing more than a really large pile of RAM and storage around a traditional model that was allowed to ingest the Internet and barfs pieces back up in prose, making it sound like it came up with the content.


We've somehow reached the point where the arguments for dismissing AI as hype are frequently more out of touch with reality than the arguments that AGI is imminent.


It's the same as Covid, where people said it was going to kill everyone or was an authoritarian conspiracy. ChatGPT is neither the singularity nor useless junk. I use it every day at work to write code, but it's not about to take my job either.


No country in Africa starts with a "K".


I'll bite: Kegypt


"No wireless. Less space than a Nomad. Lame."


Hey that’s Google also.


Also Redditors.


Savage but entirely justified


It absolutely does come up with its own content. It's especially evident when it confabulates APIs.


If that counts as coming up with its own content, Markov bots have been doing it for decades.
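For the record, "coming up with content" that way takes about twenty lines of Python. A toy word-bigram Markov generator (the corpus and all names here are made up for illustration):

    import random
    from collections import defaultdict

    def train(text):
        # Map each word to the list of words observed to follow it.
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, start, length=20):
        # Walk the chain, sampling a random observed successor at each step.
        out = [start]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the model ingests the internet and the model barfs the internet back up"
    print(generate(train(corpus), "the"))

Novel-looking output, zero understanding.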


This should be upvoted as the comment of the year.


IIRC, in a video I saw a couple of days ago, Noam Chomsky referred to LLMs as plagiarists.


Are Google search results plagiarism?


At least there's no accurate way for a search engine to check for originality. It's like asking a machine to evaluate other machines.

Here's the top-most featured snippet when I google if programming languages had honest slogans: https://medium.com/nerd-for-tech/if-your-favourite-programmi...

Half of the above post is plagiarised from my 2020 post: https://betterprogramming.pub/if-programming-languages-had-h...


Web search results are (or at least are supposed to be) attributed by their very nature, and definitely not presented as original creative work.


It's worth noting (though I'm not sure whether this is related) that Discord has announced that they're shutting down their ChatGPT-based bot[0], Clyde.

[0]: https://uk.pcmag.com/ai/149685/discord-is-shutting-down-its-...


I mean, if there's a company that doesn't care about user privacy, that's Discord.


Hmm... I wonder if he made, or was in the process of making, a deal that violated US/China trade policies. Or it could just be a coincidence.


If OpenAI is somehow a scam, there are going to be a lot of tech stocks crashing next week.


I don't think so. LLMs are absolutely not a scam. There are LLMs out there that I can and do run on my laptop that are nearly as good as GPT-4. Replacing GPT-4 with another LLM is not the hardest thing in the world. I predict that, besides Microsoft, this won't be felt in the broader tech sector.


They really aren't nearly as good as GPT-4, though. The best hobbyist stuff we have right now is 70B LLaMA finetunes, which you can run on a high-end MacBook, but I would say they're only marginally better than GPT-3.5. As soon as they get a hard task that requires reasoning, things break down. GPT-4 is leaps ahead of any of that stuff.


No, not by themselves, they're not as good as GPT-4. (I disagree that they're only "marginally" better than GPT-3.5, but that's just a minor quibble.) If you use RAG and other techniques, you can get very close to GPT-4-level performance with other open models. Again, I'm not claiming open models are better than GPT-4, just that you can come close. A rough sketch of what I mean by RAG is below.
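To be concrete, here's a toy version of the retrieval step in Python. The bag-of-words "embedding" and the local_llm callable are stand-ins I made up for illustration; a real setup would use a proper embedding model and whatever local LLM you actually run:

    from math import sqrt

    def embed(text):
        # Toy bag-of-words "embedding"; stand-in for a real embedding model.
        vec = {}
        for word in text.lower().split():
            vec[word] = vec.get(word, 0) + 1
        return vec

    def cosine(a, b):
        # Cosine similarity between two sparse count vectors.
        dot = sum(a[w] * b[w] for w in a if w in b)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query, documents, k=2):
        # Rank documents by similarity to the query and keep the top k.
        q = embed(query)
        return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

    def answer(query, documents, local_llm):
        # Stuff retrieved context into the prompt so a weaker model doesn't
        # have to rely solely on what it memorized during training.
        context = "\n".join(retrieve(query, documents))
        return local_llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

    docs = ["the eiffel tower is in paris", "llamas are south american camelids"]
    # Dummy "model" that just echoes its prompt, so the sketch runs end to end.
    print(answer("where is the eiffel tower", docs, local_llm=lambda p: p))

The point is just that retrieval narrows the problem for the model; the heavy lifting is in the embedding quality and the base model, not the plumbing.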


Yeah, but what could possibly be the scam? OpenAI's product works (most of the time).


The Mechanical Turk really played chess.


Are you suggesting that ChatGPT is secretly backed by humans? That's impossible; it is faster than the fastest humans in many areas.


They invented a relativistic device that slows time inside a chamber. A human can spend a whole day answering a prompt at their leisure.


That wouldn't be a scam; that would be an invention worthy of a Nobel Prize, and world-altering beyond the impact of AI. I mean, controlling the flow of time without creating a supermassive black hole would allow all sorts of fun exploits in computation alone, not to mention other practical uses like instantly aging cheese or wine.


A more plausible theory is that the training actually relies on a ton of human labeling behind the scenes (I have no idea if this is true or not).


> A more plausible theory is that the training actually relies on a ton of human labeling behind the scenes (I have no idea if this is true or not).

Isn't this already generally known to be true (and ironically involving Mechanical Turk-like services)?

Not sure if these are all the same sources I read a while ago, but e.g.:

https://www.theverge.com/features/23764584/ai-artificial-int...

https://www.marketplace.org/shows/marketplace-tech/human-lab...

https://www.technologyreview.com/2022/04/20/1050392/ai-indus...

https://time.com/6247678/openai-chatgpt-kenya-workers/

https://www.vice.com/en/article/wxnaqz/ai-isnt-artificial-or...

https://www.noemamag.com/the-exploited-labor-behind-artifici...

https://www.npr.org/2023/07/06/1186243643/the-human-labor-po...


They're joking.


Altman didn't actually do his job; he just let ChatGPT run the company.


I think an internal scam of OpenAI is more likely than OpenAI being a scam, if “scam” is even the right framing.


it’s not really possible for it to be a scam. if you want to see its product you can go and try it yourself


Or, Sam really was the last key thing restraining AI capabilities from exploding upwards, and the AI just engineered his departure.


Yeah, this is bad.



