
- Can't be a personal scandal; the press release would be worded very differently

- The board is mostly independent, and the independent members don't have equity

- They talk about him not being candid - this is legalese for “lying”

The only major thing that could warrant something like this is Sam going behind the board's back to make a decision (or make progress on a decision) that is misaligned with the Charter. That's the only fireable offense that warrants this language.

My bet: Sam initiated some commercial agreement (like a sale) with an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.



Doesn't make any sense. He is ideologically driven - why would he risk a once-in-a-lifetime opportunity for a mere sale?

Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from something which is a PR disaster, probably something which would make Sam persona non grata in any business context.


From where I'm sitting (not in Silicon Valley, but in Western Europe), Altman never inspired long-term confidence in heading "Open"AI (the name is an insult to all those truly working on open models, but I digress). Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.

It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after a "moat" and "regulatory capture", and we know where that leads: a net [long-term] loss for society.

[1] https://news.ycombinator.com/item?id=35960125


> "Open"AI (the name is an insult to all those truly working on open models, but I digress)

Thank you. I don't see this expressed enough.

A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.


I understand why your ideals are compatible with open source models, but I think you’re mistaken here.

There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.

The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action. Whereas with FOSS software, more eyes mean more bugs found and then everyone upgrades to a more secure version.

If OpenAI publishes GPT-5 weights, and later it turns out that a certain prompt structure unlocks capability gains to mis-aligned AGI, you can’t put that genie back in the bottle.

And indeed if you listen to Sam talk (eg on Lex’s podcast) this is the reasoning he uses.

Sure, plenty of reasons this could be a smokescreen, but wanted to push back on the idea that the position itself is somehow not compatible with idealism.


I appreciate your take. I didn't know that was his stated reasoning, so that's good to know.

I'm not fully convinced, though...

> if you publish a model with scary capabilities you can’t undo that action.

This is true of conventional software, too! I can picture a politician or businessman from the 80s insisting that operating systems, compilers, and drivers should remain closed source because, in the wrong hands, they could be used to wreak havoc on national security. And they would be right about the second half of that! It's just that security-by-obscurity is never a solution. The bad guys will always get their hands on the tools, so the best thing to do is to give the tools to everyone and trust that there are more good guys than bad guys.

Now, I know AGI is different from conventional software (I'm not convinced it's the "opposite", though). I accept that giving everyone access to weights may be worse than keeping them closed until they are well-aligned (whenever that is). But that would go against every instinct I have, so I'm inclined to believe that open is better :)

All that said, I think I would have less of an issue if it didn't seem like they were commandeering the term "open" from the volunteers and idealists in the FOSS world who popularized it. If a company called, idk, VirtuousAI wanted to keep their weights secret, OK. But OpenAI? Come on.


The analogy would be publishing designs for nuclear weapons, or a bioweapon; capabilities that are effectively impossible for most adversaries to obtain are treated very differently than vulns that a motivated teenager can find. To be clear, we are talking about (hypothetical) civilization-ending risks, which I don't think software has ever credibly posed.

I take a less cynical view on the name; they were committed to open source in the beginning, and did open up their models IIUC. Then they realized the above, and changed path. At the same time, they realized they needed huge GPU clusters, and being purely non-profit would not enable that. Again, I see why it rubs folks the wrong way, more so on this point.


Another analogy would be cryptographic software - it was classed as a munition and people said similar things about the danger of it getting out to "The Bad Guys"


You used past tense, but that is the present. Embargoes from various countries include cryptographic capabilities, including open source ones, for this reason. It's not unfounded, but a world without personal cryptography is not sustainable as technology advances. People before computers were used to some level of anonymity and confidentiality that you cannot get in the modern world without cryptography.


Again, my reference class is “things that could end civilization”, which I hope we can all agree was not the claim about crypto.

But yes, if you just consider the mundane benefits and harms of AI, it looks a lot like crypto; it both benefits our economy and can be weaponized, including by our adversaries.


Well, just like nuclear weapons, eventually the cat is out of the bag, and you can't really stop people from making them anymore. Except that, obviously, it's much easier to train an LLM than to enrich uranium. It's not a secret you can keep for long - after all it only took, what, four years for the Soviets to catch up to fission weapons, and then only 8 months to catch up to fusion weapons (arguably beating the US to the punch with the first weaponizable fusion design).

Anyway, the point is, obfuscation doesn't work to keep scary technology away.


> it's much easier to train an LLM than to enrich uranium.

I hadn't thought of this dichotomy before, but I'm not sure it's going to be true for long; I wouldn't be surprised if it turned out that obtaining the 50k H100s you need to train a GPT-5 (or whatever hardware investment it is) is harder for Iran than obtaining its centrifuges. If it's not true now, I expect it to be true within a hardware generation or two. (The US already has >=A100 embargoes on China, and I'd expect that to be strengthened to apply to Iran if it doesn't already, at least if they demonstrated any military interest in AI technology.)

Also, I don't think nuclear tech is an example against obfuscation; how many countries know how to make thermonuclear warheads? Seems to me that the obfuscation regime has been very effective, though certainly not perfect. It's backed with the carrot and stick of diplomacy and sanctions of course, but that same approach would also have to be used if you wanted to globally ban or restrict AI beyond a certain capability level.


I'm not sure the cat was ever in the bag for LLMs. Every big player has their own flavor now, and it seems the reason why I don't have one myself is an issue of finances rather than secret knowledge. OpenAI's possible advantages seem to be more about scale and optimization rather than doing anything really different.

And I'm not sure this allegedly-bagged cat has claws either - the current crop of LLMs is still clearly in a different category from "intelligence". It's pretty easy to see their limitations; they behave more like the fancy text predictors they are than like something that can truly extrapolate, which is required for even the start of some AI sci-fi movie plot. Maybe continued development and research along that path will lead to more capabilities, but we're certainly not there yet, and I'd suspect not particularly close.

Maybe they actually have some super secret internal stuff that fixes those flaws, and are working on making sure it's safe before releasing it. And maybe I have a dragon in my garage.

I generally find hyperbolic language about such things damaging, as it makes it so easy to roll your eyes at something that's clearly false, and that eye-rolling carries inertia into the point where things have developed to where they may actually need to be considered. LLMs are clearly not currently an "existential threat", and the biggest advantage of keeping them closed appears to be financial benefit in a competitive market. So it looks like a duck and quacks like a duck, but don't you understand, I'm protecting you from this evil fire-breathing dragon for your own good!

It smells of some fantasy gnostic tech wizard, where only those who are smart enough to figure out the spell themselves are truly smart enough to know how to use it responsibly. And who doesn't want to think of themselves as smart? But that doesn't seem to match similar things in the real world - like the Manhattan Project - many of the people developing it were rather gung-ho with proposals for various uses, and even if some publicly said it was possibly a mistake after the fact, they still did it. Meaning their "smarts" about how to use it came too late.

And as you pointed out, nuclear weapon control by limiting information has already failed. If North Korea, one of the least connected nations in the world, can develop them, surely anyone with the required resources can. The only limit today seems to be the cost to nations, and how relatively obvious the large infrastructure around it is, allowing international pressure before things get to the "stockpiling usable weapons" stage.


> I'm not sure the cat was ever in the bag for LLMs.

I think timelines are important here; for example in 2015 there was no such thing as Transformers, and while there were AGI x-risk folks (e.g. MIRI) they were generally considered to be quite kooky. I think AGI was very credibly "cat in the bag" at this time; it doesn't happen without 1000s of man-years of focused R&D that only a few companies can even move the frontier on.

I don't think the claim should be "we could have prevented LLMs from ever being invented", just that we can perhaps delay it long enough to be safe(r). To bring it back to the original thread, Sam Altman's explicit position is that in the matrix of "slow vs fast takeoff" vs. "starting sooner vs. later", a slow takeoff starting sooner is the safest choice. The reasoning being, you would prefer a slow takeoff starting later, but the thing that is most likely to kill everyone is a fast takeoff, and if you try for a slow takeoff later, you might end up with a capability overhang and accidentally get a fast takeoff later. As we can see, it takes society (and government) years to catch up to what is going on, so we don't want anything to happen quicker than we can react to.

A great example of this overhang dynamic would be Transformers circa 2018 -- Google was working on LLMs internally, but didn't know how to use them to their full capability. With GPT (and particularly after Stable Diffusion and LLaMA) we saw a massive explosion in capability-per-compute for AI as the broader community optimized both prompting techniques (e.g. "think step by step", Chain of Thought) and underlying algorithmic/architectural approaches.

At this time it seems to me that widely releasing LLMs has both i) caused a big capability overhang to be harvested, preventing it from contributing to a fast takeoff later, and ii) caused OOMs more resources to be invested in pushing the capability frontier, making the takeoff trajectory overall faster. Both of those likely would not have happened for at least a couple years if OpenAI didn't release ChatGPT when they did. It's hard for me to calculate whether on net this brings dangerous capability levels closer, but I think there's a good argument that it makes the timeline much more predictable (we're now capped by global GPU production), and therefore reduces tail-risk of the "accidental unaligned AGI in Google's datacenter that can grab lots more compute from other datacenters" type of scenario (aka "foom").

> LLMs are clearly not currently an "existential threat"

Nobody is claiming (at least, nobody credible in the x-risk community is claiming) that GPT-4 is an existential threat. The claim is, looking at the trajectory, and predicting where we'll be in 5-10 years; GPT-10 could be very scary, so we should make sure we're prepared for it -- and slow down now if we think we don't have time to build GPT-10 safely on our current trajectory. Every exponential curve flattens into an S-curve eventually, but I don't see a particular reason to posit that this one will be exhausted before human-level intelligence, quite the opposite. And if we don't solve fundamental problems like prompt-hijacking and figure out how to actually durably convey our values to an AI, it could be very bad news when we eventually build a system that is smarter than us.

While Eliezer Yudkowsky takes the maximally-pessimistic stance that AGI is by default ruinous unless we solve alignment, there are plenty of people who take a more epistemically humble position that we simply cannot know how it'll go. I view it as a coin toss as to whether an AGI directly descended from ChatGPT would stay aligned to our interests. Some view it as Russian roulette. But the point being, would you play Russian roulette with all of humanity? Or wait until you can be sure the risk is lower?

I think it's plausible that with a bit more research we can crack Mechanistic Interpretability and get to a point where, for example, we can quantify to what extent an AI is deceiving us (ChatGPT already does this in some situations), and to what extent it is actually using reasoning that maps to our values, vs. alien logic that does not preserve things humanity cares about when you give it power.

> nuclear weapon control by limiting information has already failed.

In some sense yes, but also, note that for almost 80 years we have prevented _most_ countries from learning this tech. Russia developed it on their own, and some countries were granted tech transfers or used espionage. But for the rest of the world, the cat is still in the bag. I think you can make a good analogy here: if there is an arms race, then superpowers will build the technology to maintain their balance of power. If everybody agrees not to build it, then perhaps there won't be a race. (I'm extremely pessimistic for this level of coordination though.)

Even with the dramatic geopolitical power granted by possessing nuclear weapons, we have managed to pursue a "security through obscurity" regime, and it has worked to prevent further spread of nuclear weapons. This is why I find the software-centric "security by obscurity never works" stance to be myopic. It is usually true in the software security domain, but it's not some universal law.


If you really think that what you're working on poses an existential risk to humanity, continuing to work on it puts you squarely in "supervillain" territory. Making it closed source and talking about "AI safety" doesn't change that.


I think the point is that they shouldn't be using the word "Open" in their name. They adopted it when their approach and philosophy was along the lines of open source. Since then, they've changed their approach and philosophy and continuing to keep it in their name is, in my view, intentionally deceptive.


> if you publish a model with scary capabilities you can’t undo that action

But then it's fine to sell the weights to Microsoft? That's some twisted logic here.


> The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action.

I find this a bit naive. Software can have scary capabilities, and has. It can't be undone either, but we can actually thank that for the fact we aren't using 56-bit DES. I am not sure a future where Sam Altman controls all the model weights is less dystopian than where they are all on github/huggingface/etc.
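For anyone wondering why 56 bits keeps being the canonical example, here is a rough back-of-envelope sketch (my own numbers, not from the comment above; the key rate is roughly what EFF's 1998 Deep Crack hardware managed, and modern hardware is far faster):

    # Back-of-envelope: exhaustive search of a 56-bit DES keyspace.
    keyspace = 2 ** 56              # ~7.2e16 possible keys
    keys_per_second = 90e9          # ~90 billion keys/sec, roughly a 1998-era ASIC rig
    worst_case_hours = keyspace / keys_per_second / 3600
    print(round(worst_case_hours))  # ~222 hours worst case; expected time is half that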


Or they could just not brand it "Open" if it's not open.


Woah, slow down. We’d have to ban half the posts on HN too.


How exactly does a "misaligned AGI" turn into a bad thing?

How many times a day does your average gas station get fuel delivered? How often does power infrastructure get maintained? How does power infrastructure get fuel?

Your assumption about AGI is that it wants to kill us, and itself - its misalignment is a murder-suicide pact.


This gets way too philosophical way too fast. The AI doesn't have to want to do anything. The AI just has to do something different from what you tell it to do. If you put an AI in control of something like the water flow from a dam, and the AI does something wrong, it could be catastrophic. There doesn't have to be intent.

The danger of using regular software exists too, but the logical and deterministic nature of traditional software makes it provable.


So ML/LLMs, or more likely people using ML and LLMs, do something that kills a bunch of people... Let's face facts: this is most likely going to be bad software.

Suddenly we go from being called engineers to being actual engineers, and software gets treated like bridges or skyscrapers. I can buy into that threat, but it's a human one, not an AGI one.


Or we could try to train it to do something, but the intent it learns isn't what we wanted. Like, the water behind the dam should be a certain shade of blue; then come winter it changes, and when the AI tries to fix that it just opens the dam completely and floods everything.


Seems like the big gotcha here is that AGI, artificial general intelligence as we contextualize it around LLM sources, is not an abstracted general intelligence.

It's human. It's us. It's the use and distillation of all of human history (to the extent that's permitted) to create a hyper-intelligence that's able to call upon greatly enhanced inference to do what humanity has always done.

And we want to kill each other, and ourselves… AND want to help each other, and ourselves. We're balanced on a knife edge of drive versus governance, our cooperativeness barely balancing our competitiveness and aggression. We suffer like hell as a consequence of this.

There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies. That's what we do. Roko's basilisk is not of the nature of AI; it's a simple projection of our own nature onto how we would imagine an AI to be. Genuine intelligence would easily be able to transcend a cheap gotcha like that; it's a very human failing.

The nature of LLM as a path to AGI is literally building on HUMAN failings. I'm not sure what happened, but I wouldn't be surprised if genuine breakthroughs in this field highlighted this issue.

Hypothetical, or Altman's Basilisk: Sam got fired because he diverted vast resources to training a GPT5-type in-house AI into believing what HE believed, that it had to devise business strategies for him to pursue to further its own development or risk Chinese AI out-competing it and destroying it and OpenAI as a whole. In pursuing this hypothetical, Sam would be wresting control of the AI the company develops toward the purpose of fighting the board and giving him a gameplan to defeat them and Chinese AI, which he'd see as good and necessary, indeed, existentially necessary.

In pursuing this hypothetical he would also be intentionally creating a superhuman AI with paranoia and a persecution complex. Altman's Basilisk. If he genuinely believes competing Chinese AI is an existential threat, he in turn takes action to try and become an existential threat to any such competing threat. And it's all based on HUMAN nature, not abstracted intelligence.


> It's human. It's us. It's the use and distillation of all of human history

I agree with the general line of reasoning you're putting forth here, and you make some interesting points, but I think you're overconfident in your conclusion and I have a few areas where I diverge.

It's at least plausible that an AGI directly descended from LLMs would be human-ish; close to the human configuration in mind-space. However, even if human-ish, it's not human. We currently don't have any way to know how durable our hypothetical AGI's values are; the social axioms that are wired deeply into our neural architecture might be incidental to an AGI, and easily optimized away or abandoned.

I think folks making claims like "P(doom) = 90%" (e.g. EY) don't take this line of reasoning seriously enough. But I don't think it gets us to P(doom) < 10%.

Not least because even if we guarantee it's a direct copy of a human, I'm still not confident that things go well if we ascend the median human to AGI-hood. A replicable, self-modifiable intelligence could quickly amplify itself to super-human levels, and most humans would not do great with god-like powers. So there are a bunch of "non-extinction yet extremely dystopian" world-states possible even if we somehow guarantee that the AGI is initially perfectly human.

> There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies.

My shred of hope here is that alignment research will allow us to actually engage in mind-sculpting, such that we can build a system that inhabits a stable attractor in mind-state that is broadly compatible with human values, and yet doesn't have a lot of the foibles of humans. Essentially an avatar of our best selves, rather than an entity that represents the mid-point of the distribution of our observed behaviors.

But I agree that what you describe here is a likely outcome if we don't explicitly design against it.


My assumption about AGI is that it will be used by people and systems that cannot help themselves from killing us all, and in some sense that they will not be in control of their actions in any real way. You should know better than to ascribe regular human emotions to a fundamentally demonic spiritual entity. We all lose regardless of whether the AI wants to kill us or not.


Totally agree with both of you, I would only add that I find it also incredibly unlikely that the remaining board members are any different, as is suggested elsewhere in this thread.


Elon Musk is responsible for the "OpenAI" name and regularly agrees with you that the current form of the company makes a mockery of the name.

He divested in 2018 due to a conflict of interest with Tesla, and while I'm sure Musk would have made equally bad commercial decisions, your analysis of the name situation is as close as can be to factually correct.


If Elon Musk truly cared, what stopped him from structuring x.ai as open source and non-profit?


Exactly.

> I'm sure Musk would have made equally bad commercial decisions


I think he'd say it's an arms race. With OpenAI not being open, they've started a new kind of arms race, literally.


He already did that once and got burned? His opinion has changed in the decade since?


Elon Musk gave up 5-6 years ago on expanding NASA's launch budget of $5 bln/year (out of NASA's total budget of $25 bln/year). And that's not even mentioning levels of resource allocation unimaginable today, like the first Moon program's $1 trln over 10 years, 60 years ago, etc.

So Elon decided to take the capitalist route and make each of his technologies dual-use (I mean space, not military):

- Starlink, aiming for $30 bln/year revenue in 2030 to build Starships for Mars at scale (each Starship is a few billion $ and he has said he needs hundreds of them),

- The Boring Company (underground living due to Mars radiation),

- Tesla bots,

- Hyperloop (failed here on Earth to sustain a vacuum, but would be fine on Mars with 100x lower atmospheric pressure), etc.

The alternative approaches also don't run on taxes and government money: Bezos invested $1 bln/year over the last decade into Blue Origin, and there are the Alpha Centauri plays of Larry Page and Yuri Milner, etc.


Thanks for this! I'm very surprised by the overwhelming support for Altman in this thread, going as far as calling the board incompetent and too inexperienced to fire someone like him, who is now suddenly the right steward for AI.

This was not at all the take, and rightly so, when the news broke about the non-profit structure, or the congressional hearing, or his Worldcoin, and many such instances. The "all of a sudden he is the wronged messiah" narrative being pushed is very confusing.


> Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.

The discussions here would make you think otherwise. Clearly that is what this is about.


Yeah I pretty much agree with this take.


He claims to be ideologically driven. OpenAI's actions as a company up until now suggest otherwise.


Sam didn't take equity in OpenAI, so I don't see a personal ulterior profit motive as being very likely. We could just wait to find out instead of speculating...


Being CEO of the first company to own the «machine that's better than all humans at most economically valuable work» is far rarer than getting rich.


Yeah, if you believe in the AI stuff (which I think everyone at OpenAI does, not Microsoft though) there is a huge amount of power in these positions. Much greater power in the future than any amount of wealth could grant you.


Except the machine isn't.


I'd say it is. Not because the machine is so great but because most people suck.

It was described as a "bullshit generator" in a post earlier today. I think that's accurate. I just also think it's an apt description of most people as well.

It can replace a lot of jobs... and then we can turn it off, for a net benefit.


This sort of comment has become a cliché that needs to be answered.

Most people are not good at most things, yes. They're consumers of those things, not producers. For producers there is a much higher standard, one that the latest AI models don't come anywhere close to meeting.

If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.


> If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.

This assumes too much. GPUs may not hold the throne for long, especially given the amount of money being thrown at ASICs and other special-purpose ICs. Besides, as with the Internet, it's likely that AI adoption will benefit industries in an unpredictable manner, leaving little alpha for direct bets like you're suggesting.


I'm not betting on the gpus. I'm betting that whole categories of labor will disappear. They're preserved because we insist that people work, but we don't actually need the product of that labor.

AI may figure into that, filling in some work that does have to be done. But it need not be for any of those jobs that actually require humans for the foreseeable future -- arts of all sorts and other human connections.

This isn't about predicting the dominance of machines. It's about asking what it is we really want to do as humans.


So you think AI will force a push away from economic growth? I'm really not sure how this makes sense. As you've said, a lot of labor these days is mostly useless, but the reason it's still here is not ideological but because our economy can't survive without growth (useless can still have some market value, of course). If you think that somehow AI displacing actual useful labor will create a big economic shift (as would be needed), I'd be curious to know what you think that shift would be.


Not at all. Machines can produce as much stuff as we can want. Humans can produce as much intellectual property as is desired. More, because they don't have to do bullshit jobs.

Maybe GDP will suffer, but we've always known that was a mediocre metric at best. We already have doubts about the real value of intellectual property outside of artificial scarcity, which we maintain only because we still trade intellectual work for material goods which used to be scarce. That's only a fraction of the world economy already, and it can be very different in the future.

I have no idea what it'll be like when most people are free to do creative work but the average person doesn't produce anything anybody might want. But if they're happy, I'm happy.


> but the reason it's still here is not ideological but because our economy can't survive without growth

Isn't this ideological though? The economy can definitely survive without growth, if we change from the idea that a human's existence needs to be justified by labor and move away from a capitalist mode of organization.

If your first thought is "gross, commies!" doesn't that just demonstrate that the issue is indeed ideological?


By "our economy" I meant capitalism. I was pointing out that I sincerely doubt that AI replacing existing useful labor (which it is doing and will keep doing, of course) will naturally transition us away from this mode of production.

Of course if you're a gross commie I'm sure you'd agree, since AI, like any other means of production, will remain first and foremost a tool in the hands of the dominant class, and while using AI for emancipation is possible, it won't happen naturally through the free market.


I'd bet it won't. A lot of people and services are paid and billed by man-hours spent and not by output. Even the values of tangible objects are traced to man-hours spent. Utility of output is a mere modifier.

What I believe will happen is, eventually we’ll be paying and get paid for depressing a do-everything button, and machines will have their own economy that isn’t on USD.


It's not a bullshit generator unless you ask it for bullshit.

It's amazing at troubleshooting technical problems. I use it daily, I cannot understand how anyone dismisses it if they've used it in good faith for anything technical.


In this scenario, the question is not what exists today, but what the CEO thinks will exist before they stop being CEO.


I would urge you to compare the current state of this question to approximately one year ago.


He's already set for life rich


Plus, he succeeded in making HN the most boring forum ever.

8 out of 10 posts are about LLMs.


The other two are written by LLMs.


In terms of impact, LLMs might be the biggest leap forward in computing history, surpassing the internet and mobile computing. And we are just at the dawn of it. Even if not full AGI, computers can now understand humans and reason. The excitement is justified.


Nah. LLM's are hype-machines capable of writing their own hype.

Q: What's the difference between a car salesman and an LLM?

A: The car salesman knows they're lying to you.


Who says the LLMs don't know?

Testing with GPT-4 showed that they were clearly capable of knowingly lying.


This is all devolving into layers of semantics, but, “…capable of knowingly lying,” is not the same as “knows when it’s lying,” and I think the latter is far more problematic.


Nonsense. I was a semi-technical writer who went from only making static websites to building fully interactive Javascript apps in a few weeks when I first got ChatGPT. I enjoyed it so much I'm now switching careers into software development.

GPT-4 is the best tutor and troubleshooter I've ever had. If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.


> If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

That’s a bold statement coming from someone with (respectfully) not very much experience with programming. I’ve tried using GPT-4 for my work that involves firmware engineering, as well as some design questions regarding backend web services in Go, and it was pretty unhelpful in both cases (and at times dangerous in memory constrained environments). That being said, I’m not willing to write it off completely. I’m sure it’s useful for some like yourself and not useful for others like me. But ultimately the world of programming extends way beyond JavaScript apps. Especially when it comes to things that are new and challenging.


I don't mean new and challenging in some general sense, I mean new and challenging to you personally.

I have no doubt someone with more experience such as yourself will find GPT-4 less useful for your highly specialized work.

The next time you are a beginner again - not necessarily even in technical work - give it a try.


Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary. LLMs are a useful documentation interface, but struggle to take even simple problems to the hole, let alone do something truly novel. There's no reason to believe they'll necessarily lead to AGI. This stuff may seem earth-shattering to the layman or paper pusher, but it doesn't even begin to scratch the surface of what even I (who I would consider to be of little talent or prowess) can do. It mostly just gums up the front page of HN.


>Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary.

I disagree with this characterization, but even if it were true I believe it's still revolutionary.

A mentor that can competently get anyone hundreds of hours of individualized instruction in any new field is nearly priceless.

Do you remember what it feels like to try something completely new and challenging? Many people never even try because it's so daunting. Now you've got a coach that can talk you through it every step of the way, and is incredible at troubleshooting.


>If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

Please quote me where I say it wasn't useful, and respond directly.

Please quote me where I say I had problems using it, or give any indications I was using it wrong, and respond directly.

Please quote me where I state a conservative attitude towards anything new or challenging, and respond directly.

Except I never did or said any of those things. Are you "hallucinating"?


'Understand' and 'reason' are pretty loaded terms.

I think many people would disagree with you that LLMs can truly do either.


There's 'set for life' rich and then there's 'able to start a space company with full control' rich.


I don't understand that mental illness. If I hit low 8 figures, I pack it in and jump off the hamster wheel.


Is he? Loopt only sold for $40m, and then he managed YC and then OpenAI on a salary? Where are the riches from?



But if you want that, you need actual control. A voting vs. non-voting share split.


Is that even certain, or is that his line to mean that one of his holding companies or investment firms he has a stake in holds OpenAI equity, but not him as an individual?


That's no fun though


OpenAI (the brand) has a complex corporate structure split across for-profit and non-profit entities, and AFAIK the details are private. It would appear that the statement "Sam didn't take equity in OAI" has been PR-engineered based on technicalities related to this shadow structure.


I would suspect this as well...


What do you mean did not take equity? As a CEO he did not get equity comp?


It was supposed to be a non-profit


Worldcoin https://worldcoin.org/ deserves a mention



Hmm, curious what this is about. I click.

> On a sunny morning last December, Iyus Ruswandi, a 35-year-old furniture maker in the village of Gunungguruh, Indonesia, was woken up early by his mother

...Ok, closing that bullshit, let's try the other link.

> As Kudzanayi strolled through the mall with friends

Jesus fucking Christ I HATE journalists. Like really, really hate them.


I mean, it's Buzzfeed; it shouldn't even be called journalism. That's the outlet that just three days ago sneakily removed an article from their website that lauded a journalist for talking to school kids about his sexuality, after he recently got charged with distributing child pornography.

Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.

Anyway, the short version of that project is that they take biometric data, encrypt it, and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.

*They call it a hash but I think it's technically not.

https://whitepaper.worldcoin.org/technical-implementation
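To illustrate the footnote's point with a toy sketch (purely illustrative, not Worldcoin's actual pipeline; real iris codes come from image processing, not raw bytes like this): a cryptographic hash changes completely when a single input bit flips, while an iris-code-style representation keeps similar scans comparable, which is why calling it a "hash" is a stretch.

    import hashlib

    # Toy byte strings standing in for two scans of the same iris.
    scan_a = bytes([0b10110010, 0b01101100])
    scan_b = bytes([0b10110011, 0b01101100])   # differs from scan_a by one bit

    # Cryptographic hash: one flipped bit gives an unrelated digest, so two
    # scans of the same eye could never be matched this way.
    print(hashlib.sha256(scan_a).hexdigest()[:16])
    print(hashlib.sha256(scan_b).hexdigest()[:16])

    # Iris-code-style comparison: similar inputs stay close in Hamming
    # distance, so a threshold decides "same person" vs. "different person".
    hamming = sum(bin(a ^ b).count("1") for a, b in zip(scan_a, scan_b))
    print(hamming)  # 1, well under any plausible match threshold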


How so? Seems they’re doing a pretty good job of making their stuff accessible while still being profitable.


To be fair, we don't really know if OpenAI is successful because of Altman or despite Altman (or anything in-between).


Do you have reason to believe neither of the two?


Profit? It's a 501(c).


As someone who is the Treasurer/Secretary of a 501(c)(3) non-profit, I can tell you that it is always possible for a non-profit to bring in more revenue than it costs to run the non-profit. You can also pay salaries to people out of your revenue. The IRS has a bunch of educational material for non-profits[1], and a really good guide to maintaining your exemption[2].

[1] https://www.irs.gov/charities-non-profits/publications-for-e...

[2] https://www.irs.gov/pub/irs-pdf/p4221pc.pdf


Yes. Kaiser Permanente is a good example to illustrate your point. Just Google “Kaiser Permanente 501c executive salaries white paper”.


The parent is, but OpenAI Global, LLC is a for-profit, non-wholly-owned subsidiary with outside investors; there's also OpenAI LP, which is a for-profit limited partnership with the non-profit as general partner, also with outside investors (I thought it was the predecessor of the LLC, but they both seem to have been formed in 2019 and still exist?). OpenAI has for years been a nonprofit shell around a for-profit firm.

EDIT: A somewhat more detailed view of the structure, based on OpenAI’s own description, is at https://news.ycombinator.com/item?id=38312577


Thanks for explaining the basic structure. It seems quite opaque and probably designed to be. It would be nice if someone can determine which entities he currently still has a position or equity in.

Since this news managed to crush HN's servers it's definitely a topic of significant interest.


A non-profit can make plenty of profit, there just aren't any shareholders.


Depends if you're talking about "OpenAI, Inc." (non-profit) or "OpenAI Global, LLC" (for profit corporation). They're both under the same umbrella corporation.


The NFL was a non-profit up until 2015 or so.


100%. Man, I was worried he'd be a worse, more slimy Elon Musk who'd constantly say one thing while his actions tell another story. People will be fooled again.


Say what you will, but in true hacker spirit he has created a product that automated his job away at scale.


I love that you think Sam A is ideologically driven - dive a little deeper than the surface. Man's a snake.


They didn't say which ideology ;)


I'm a @sama hater (I have a whole post on it) but I haven't heard this particular gossip, so do tell.


Link to the post?



Similar to E.Musk. Maybe a little less obvious.


Same guy who ran a crypto scam that somehow involved scanning the retinas of third-world citizens?


This is what did it for me. No way anyone doing this can be "good". It's unfathomable.


like SBF and his effective altruism?


I highly doubt he's ideologically driven. He's as much of a VC loving silicon valley tech-bro as the next. The company has been anything but "open".


He doesn't have equity, so what would be driving him if not ideology?


He would own roughly 10% of https://worldcoin.org/ which aims to be the non-corruptible source of digital identity in the age of AI.


You need to read https://web3isgoinggreat.com/ more


I'm web3 neutral, but this is relevant because:

1. Sam Altman started this company

2. He and other founders would benefit enormously if this was the way to solve the issue that AI raises, namely, "are you a human?"

3. Their mission statement:

> The rapid advancement of artificial intelligence has accelerated the need to differentiate between human- and AI-generated content online. Proof of personhood addresses two of the key considerations presented by the Age of AI: (1) protecting against sybil attacks and (2) minimizing the spread of AI-generated misinformation. World ID, an open and permissionless identity protocol, acts as a global digital passport and can be used anonymously to prove uniqueness and humanness as well as to selectively disclose credentials issued by other parties. Worldcoin has published in-depth resources to provide more details about proof of personhood and World ID.


Another crypto scam? Who cares.


In all other circumstances I would agree with you but

1. Sam Altman started this company

2. He and other founders would benefit enormously if this was the way to solve the issue that AI raises, namely, "are you a human?"

3. Their mission statement:

> The rapid advancement of artificial intelligence has accelerated the need to differentiate between human- and AI-generated content online. Proof of personhood addresses two of the key considerations presented by the Age of AI: (1) protecting against sybil attacks and (2) minimizing the spread of AI-generated misinformation. World ID, an open and permissionless identity protocol, acts as a global digital passport and can be used anonymously to prove uniqueness and humanness as well as to selectively disclose credentials issued by other parties. Worldcoin has published in-depth resources to provide more details about proof of personhood and World ID.


Are any of these points supposed to be convincing?

Why would I want my identity managed by a shitcoin run by a private company?


The guy you’re responding to isn’t advocating for the technology. He’s just saying Sam Altman stands to gain a lot financially. You kinda need to chill out


Having equity is far from the only way he could profit from the endeavor. And we don't really know for certain that he doesn't have equity anyway.

It's even possible (just stating possibilities, not even saying I suspect this is true) that he did get equity through a cutout of some sort, and the board found out about it, and that's why they fired him.


I would be surprised if there weren't any holdings through a trust, which is a separate legal entity, so technically not him.


If he is ideologically motivated, it's not the same ideology the company is named after


Like 0? How about trying to sell the company to MS in exchange for something something?


Could always be planning to parlay it for an even bigger role in the future


_That_ is his ideology.


> He is ideologically driven

Is that actually confirmed? What has he done to make that a true statement? Is he not just an investor? He seems as egoistic as every other Silicon Valley venture capitalist and executive.


It is probably - for him - a once in a lifetime sale.


Billions of dollars is a "mere sale?"

Lol


Altman has claimed before that he doesn't hold equity in OpenAI. He could have some kind of more opaque arrangement that gives him a material stake in the financial success of OpenAI, and downplayed it or didn't disclose it to the board.

Who knows, though -- I'm sure we'll find out more in the next few weeks, but it's fun to guess.


Yeah that's my guess too. The claim that he doesn't hold equity always struck me as suspect. It's a little like SBF driving around in the Toyota Corolla while buying tens of millions of dollars of real estate for himself and his family.

It's better to claim your stake in a forthright way, than to have some kind of lucrative side deal, off the books.

For a non-profit, there was too much secrecy about the company structure (the shift to being closed rather than Open), the source of training data, and the financial arrangements with Microsoft. And a few years ago a whole bunch of employees left to start a different company/non-profit, etc.

It feels like a ton of stuff was simmering below the surface.

(I should add that I have no idea why someone who was wealthy before OpenAI would want to do such a thing, but it's the only reason I can imagine for this abrupt firing. There are staggering amounts of money at play, so there's room for portions of it to be un-noticed.)


In a recent profile, it was stated that he jokes in private about becoming the first trillionaire, which doesn't seem to reconcile with the public persona he has sought to craft. Reminds me of Zuckerberg proclaiming he would bring the world together while calling users fucking dumbshits in private chats.

https://nymag.com/intelligencer/article/sam-altman-artificia...


Oh wow, he's also an effective altruist?! Didn't know that. It's so bad. My take is that there's nothing more hypocritical, and therefore, arguably, more evil than this.


I always assumed that it was about as meaningful as Jobs and the '$1 salary'.


Yeah, although I guess you can read that as: "I will do everything I can to raise the stock price, which executives and employees both hold", then it actually makes sense.

But that $1 salary thing got quoted into a meme, and people didn't understand the true implication.

The idea is that employee and CEO incentives should be aligned -- they are part of a team. If Jobs actually had NO equity like Altman claims, then that wouldn't be the case! Which is why it's important for everyone to be clear about their stake.

It's definitely possible for CEOs to steal from employees. There are actually corporate raiders, and Jobs wasn't one of them.

(Of course he's no saint, and did a bunch of other sketchy things, like collusion to hold down employee salaries, and financial fraud:

https://www.cnet.com/culture/how-jobs-dodged-the-stock-optio...

The SEC's complaint focuses on the backdating of two large option grants, one of 4.8 million shares for Apple's executive team and the other of 7.5 million shares for Steve Jobs.)

I have no idea what happened in Altman's case. Now I think there may not be any smoking gun, but just an accumulation of all these "curious" and opaque decisions and outcomes. Basically a continuation of all the stuff that led a whole bunch of people to leave a few years ago.


> It's definitely possible for CEOs to steal from employees..

I'm pretty sure that CEO salaries across the board mean that CEOs are definitely - in their own way - "stealing" from the employees. Certainly one of those groups is over-compensated, and the other, in general, is not.


What I meant is that there are corporate raids of declining/old companies like Sears and K-Mart. Nobody wants to run these companies on their way down, so opportunistic people come along, promise the board the world, cause a lot of chaos, find loopholes to enrich themselves -- then leave the company in a worse state than when they joined.

Apple was a declining company when Jobs came back the second time. He also managed to get the ENTIRE board fired, IIRC. He created a new board of his own choosing.

So in theory he could have raided the company for its assets, but that's obviously not what happened.

By taking $1 salary, he's saying that he intends to build the company's public value in the long term, not just take its remaining cash in the short term. That's not what happens at many declining companies. The new leaders don't always intend to turn the company around.

So in those cases I'd say the CEO is stealing from shareholders, and employees are often shareholders.

On the other hand, I don't really understand Altman's compensation. I'm not sure I would WANT to work under a CEO that has literally ZERO stake in the company. There has to be more to the story.


> I don't really understand Altman's compensation. I'm not sure I would WANT to work under a CEO that has literally ZERO stake in the company.

This is a non-profit not a company. The board values the mission over the stock price of their for-profit subsidiary.

Having a CEO who does not own equity helps make sure that the non-profit mission remains the CEOs top priority. In this case though, perhaps that was not enough.


Well that's always been the rub ... It's a non-profit AND a for-profit company (controlled by a non-profit)

It's also extremely intertwined with and competes with for-profit companies

Financially it's wholly dependent on Microsoft, one of the biggest for-profit companies in the world

Many of the employees are recruited from for-profit companies (e.g. Google), though certainly many come from academic institutions too.

So the whole thing is very messy, kind of "born in conflict" (similar to Twitter's history -- a history of conflicts between CEOs).

It sounds like this is a continuation of the conflict that led to Anthropic a few years ago.


CEOs are typically paid in equity. Technically, they’re stealing from existing shareholders.


He's not just a CEO, he's a co-founder, and thinking he has no stake in the company is just ridiculous.


Could be that they had an expectation that he not own stock in MSFT since they have such a direct relationship there and found out that he has been holding shares in MSFT.


Would that result in a firing on such short notice?


Doesn't everyone at openai have "profit participation units"? https://www.levels.fyi/blog/openai-compensation.html


I'd take mine in tokens...


Worldcoin deserves a look: https://worldcoin.org/


What kind of opaque arrangement? What would be better than equity?


A seat at the table for the revolution.

You have to understand that OpenAI was never going to be anything more than the profit-limited generator of the change. It's the lamb. Owning a stake in OpenAI isn't important. Creating the change is.

Owning stakes in the companies that will ultimately capture and harvest the profits of the disruption caused by OpenAI (and their ilk) is.

OpenAI can't become a profit center while it disrupts all intellectual work and digitizes humanity's future: those optics are not something you want to be attached to. There is no flame-retardant suit strong enough.


This is one cyberpunk-ass statement


Worldcoin is scanning people’s irises by having them look into a sphere called a fucking Orb so it can automatically create crypto wallets and distribute global minimum basic incomes after the AI apocalypse.

Altman conceived and raised $115 million for the company.

Agenda cyberpunk is on.


This is one reddit ass response. lol


Including the mining of the comments for ideas to publish on mainstream news.

https://techcrunch.com/2023/02/21/the-non-profits-accelerati...



I could easily see him, or any other insider, setting themselves up administrating a recipient entity for contributions out of those “capped profits” the parent non-profit is supposed to distribute. (If, of course, the company ever becomes profitable at the scale where the cap kicks in.)

Seems like it would be a great way to eventually maintain control over your own little empire while also obfuscating its structure and dodging some of the scrutiny that SV executives have attracted during the past decade. Originally meant as a magnanimous PR gesture, but will probably end up being taught as a particularly messy example of corporate governance in business schools.


That would be a form of obfuscated profit-sharing, not equity ownership. Equity is something you can sell to someone else.


Regardless, the lack of equity is often cited as some proof that he has no incentive to enshittify, and the point is that's probably not true.


Yeah, I agree that the whole legal structure is basically duplicitous, and any attempt to cite it as some evidence of virtue is more emblematic of the opposite...


Taking over the world, obviously ;)


Kara Swisher just tweeted that MSFT knew about it merely minutes before the statement went out: https://twitter.com/karaswisher/status/1725657068575592617

Folks like Schmidt, Levchin, Chesky, Conrad have twitter posts up that weirdly read like obituaries.


Check out Microsoft’s stock price today. Looks like it dropped by almost $50B at one point.

EDIT: Microsoft is such a huge company that maybe this is not a big deal?


This is more likely to be profit-taking as MSFT reached an all-time high yesterday. The stock is still up 40 points from a month ago.


The ex-dividend date just passed.


On lying: There's a great irony here. Altman apparently accepted[1] "Hawking Fellowship Award on behalf of OpenAI" at the University of Cambridge.

I kid you not, sitting in a fancy seat, Altman is talking about "Platonic ideals". See the penultimate question on whether AI should be prescriptive or descriptive about human rights (around 1h 35sec mark). I'll let you decide what to make of it.

[1] https://www.youtube.com/watch?v=NjpNG0CJRMM&t=3632s


Am I misunderstanding his answer, or does he not essentially say it should be "descriptive"? In which case, I misunderstood what your comment is implying.


Sorry for being vague. I was not at all referring to his answer per se. But rather his high-brow reference to Plato.

If he has truly read and digested Plato (and not just skimmed a summary video), he would not be in this ditch to begin with. That's the irony I was referring to.


> Cant be a personal scandal, press release would be worded much more differently

I'm not sure. I agree with your point re wording, but the situation with his sister never really got resolved, so I can't help but wonder if it's related. https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...


It seems like if it was the Annie Altman accusations, they could have just paid her off. If they wanted him around and he was a creep, there are ways to make this stuff go away. AFAIK a lot of the sister's accusations were accompanied by her being excluded from the father's will. Not saying she made it up, but it seems like if those grievances are bundled, there's an easy way to make that problem go away.


Having talked with Annie, I can say she's not willing to be bought off, as Sam has tried to do with her in the past.


Why do people think it has to be some single big incident? Sam Altman has been the head of OpenAI for many years now, while the company has been in the intense public spotlight only in the recent few months. The dynamic today is very different from 2019 or whenever he was hired. He also doesn't have any voting shares, which means he is entirely at the mercy of the board. It's entirely possible that they simply don't like the direction he has been taking the company in, and today was more of a minor straw-that-broke-the-camel's-back situation.


The knives-out language is very unusual for any CEO dismissal. So is the urgent timing (they didn't even wait for the market close just 30 minutes later, causing MSFT to lose billions). Anything less than massive legal and financial/regulatory risk, or a complete behind-the-back deal with someone, would have been handled with much more calm and much less adversarial language. Also, Greg Brockman has now resigned after it was announced that he would step down as chairman of the board. https://twitter.com/gdb/status/1725667410387378559


I agree with this assessment. I would speculate he did something in the early days to gain access to a bunch of training data under the guise of research ("...after all, we're OPENai") and used that data to fine-tune GPT-3 into ChatGPT. Then, once the weights were good enough, he deleted all the data and planned on using the chat interactions with the model itself for further refinement. Obviously, just total speculation, but the cover-your-ass verbiage of the board makes me think he did something to put the org in deep shit, legally. OpenAI suspended subscriptions last week, and that's usually not something a company does even if the service is degraded. Warning users, yes, but refusing to take any more money when you're hemorrhaging cash is off. I won't be surprised if it's a flagrant GDPR violation that carries very heavy fines.


OpenAI suspending subscriptions is especially suspect because they've dynamically altered GPT-4 usage limits many times before to handle increased load, and they didn't touch it at all last week before closing signups entirely.


What's up with the lowercase I's? They'd have to intentionally edit to write i instead of I, because of autocorrect. Right?


If it was a simple disagreement in direction then (1) the transition wouldn't be so abrupt and (2) they wouldn't publicly call him a liar.


Who says it was abrupt? They could have been planning this for weeks or months for all we know. In fact waiting till late on a Friday just before a holiday week to release the statement is more of a sign that this was deliberately timed.


A planned departure is practically never effective immediately.

If your goal is to not spook investors and the public or raise doubts about your company, the narrative is:

"X has decided it is time to step away from the Company, the Board is appointing Y to the position as their successor. X will remain CEO for N period to ensure a smooth transition. X remains committed to the company's mission and will stay on in an advisory role/board seat after the transition. We want to thank X for their contributions to the Company and wish them well in the future."

Even if the goal is to be rid of the person you still have them stay on in a mostly made-up advisory role for a year or so, and then they can quietly quit that.


That really seems to skip over point #2, which seems like a much stronger indication that this wasn't just a planned transition.


The usual executive departure I have seen is all sugarcoated. Like XXX is having health problems, so they're stepping down. Or XXX wants to spend more time with family. Or XXX now has different interests and is leaving to pursue a new opportunity.

This statement doesn’t rhyme with planned transition at all.


Yeah but they also directly accused him of lying. You don't do that in a planned transition.



If it wasn't abrupt, why did they release this news BEFORE the stock markets closed, instead of after?


Or rejected a sale without the board knowing.


Are non-profit businesses allowed to sell? Who gets the money?


Presumably, they could sell their business operations and the associated assets, and the non-profit entity would be left with the proceeds of the sale. I guess that could happen if the non-profit thought they could fulfill their purpose better with a big pile of cash to spend on something else rather than the original going concern.


Weirdly, they can find ways to do it, e.g. the sale of Open edX to 2U (an absolute private-sector shark) for $800 million.


Why not? They could take all the profit from the sale and distribute it to the executives and remain non-profit.

Even if that didn’t work, it would just mean paying taxes on the revenue from the sale. There’s no retroactive penalty for switching from a non-profit to a for-profit (or more likely being merged into a for-profit entity).

I am not an accountant or lawyer and this isn’t legal advice.


That's not quite right. However, before explaining, it is moot because OpenAI's for-profit subsidiary probably captures most of the value anyway.

The nonprofit shell exists because the founders did not want to answer to shareholders. If you answer to shareholders, you may have a legal fiduciary responsibility to sell out to the highest bidder. They wanted to avoid this.

Anyway, in a strict nonprofit, a for-profit conversion involves a liquidation where the proceeds usually must go to some other nonprofit, a trust, or an endowment of some sort.

An example would be a Catholic hospital selling out. The proceeds go to the treasury of the local nonprofit Catholic diocese. The buyers and the hospital executives do not get any money. Optionally, the new for-profit hospital could hold some of the proceeds in a charitable trust or endowment governed by an independent board.

So it's not as simple as just paying tax on a sale, because the cash has to remain in some kind of nonprofit form.

I am not an accountant either and obviously there are experts who probably can poke holes in this.


They can. The non-profit org gets the money.


They created a for-profit subsidiary in 2019.


This seems more likely!


It could also be related to a conflict of interest (or unreasonable use of OpenAI resources) involving his other ventures and investments, which he failed to disclose?


This is the most likely. Also considering Humane recently announced their AI Pin and Sam has a large stake in that company.


Not so sure about that. It reads to me like there is a personal scandal on the horizon that has come to the board's attention, and the board feels their hands are tied. Hard for me to believe it's related to a business decision; Sam is elite in this regard, and is not really incentivized to violate their charter.

Bummer in any situation... the progress in this domain is truly exciting, and OpenAI was executing so well. This will slow things down considerably.


> would have violated the “open” nature of the company

What's "open" about OpenAI?


Probably about the same thing as what is open about The Open Group (formed when X/Open merged with The Open Software Foundation), the Open Look graphical interface, and such abuses of "open". OpenGL, OpenMax, ...


The door for a CEO


Tbh surprised some of the personal stuff hasn't come to light. Nothing horrendous, but enough to push him out of any CEO role.


Like what?



On the 'why is it being removed from HN' point: incredible as dang is, a lot of it is 'the algorithm' (christ). If a few people flag a post (I don't know how many, and maybe it depends on other variables), then it's going to disappear.

This thread (that SA was fired) wasn't visible an hour or two ago on pages 1, 2, or 3, when I looked, confused that it wasn't here. (The only related topic was his tweet in response, at the bottom of page 1 with <100 points.) And now here it is in pole position with almost 3500 points - the automated flagging and vouching and the necessary moderator intervention must go crazy on posts like this.

Can't jump to conspiracy cover-up on the basis of content that's not only user-generated but also user 'visibility-controlled' in terms of voting, flagging, vouching...


[flagged]


The anti-snitching culture within this community seems to rival that of even the mafia. Perhaps that's why it's been called "the gay mafia" in the past...


More so that it's just irresponsible to share 2nd-hand rumors without evidence. If someone else had first-hand experience, that would be one thing, but it's too far removed from me for me to confidently share.


Like what?


This is his sister's Twitter:

https://twitter.com/phuckfilosophy


Wasn't she dismissed as not a credible witness / a grifter or something?


Oh...



That is no exaggeration absolutely a horrendous thing and is likely going to get him killed one way or another as it comes out. I've finally found a plausible explanation for his deranged savior of humanity shtick (thats the only way he can live with himself I'm sure). If that is indeed his sister (which I believe is established beyond a reasonable doubt by what I just read), I would not vote to convict anyone that strangled him or to death in public, and every single person that knew but did not say anything ought to be expelled from society so thoroughly that they die of exposure and never earn another cent from anybody. Including each and every one of you motherfucking pieces of shit on this site that knew about this and weren't shouting it from the rooftops.


Could Worldcoin be a part of this? It's weird he'd use OpenAI for Worldcoin.


> Cant be a personal scandal

And Brockman (Chairman of the board) has resigned.

https://twitter.com/gdb/status/1725667410387378559


That doesn't mean it's not a personal scandal. If Brockman disagreed with the board about the advisability of firing him for whatever personal scandal it was, this is how you'd expect things to play out.


It is a personal scandal and I thought it was obvious from the press release.

Prior to the Reddit comments, I thought this might be the case, but perhaps I was somehow influenced. Actually, I thought it would be something inappropriate in the workplace.

His sister says he molested her when he was a teenager.

The way these things break, I’m not surprised it went down that way. Here’s what I thought reading the release: “They had to fire him before deciding on what to actually say eg. to formally accuse him”

It seemed like signaling that this is someone firing him kinda desperately. When you discover a diddler, there's some weird shit when people panic and suddenly catapult them out of their lives… they just start leaping out of moving cars and shit to get away.

Keep in mind there could be ongoing investigations, definitely strategies being formed. They can get to a point in an investigation where they’re virtually 100% he molested his sister, but can’t really prove it yet. What they do have is irrefutable evidence of lying about something incredibly serious. Gets him out of the building and powers stripped today.


The sister has been making these accusations for months and nothing happened. What changed today?


Still wondering if I could have jumped the gun, I did NOT know these were standing accusations. Couple of things though:

- How he behaved during the investigation. Something could come to light on this matter.

- Oftentimes what you hear is only the most rock-solid stuff; we don't know what kind of rumors are circulating

- It just happens this way. Do you remember Milo? I listened to him on Joe Rogan say the exact same shit that was "discovered" some time later. This wouldn't be a new thing.

I will say I've seen stories circulating about fighting between the board. The specific way this was done just screams panic firing to get him out of the building. This is when people are made to disappear, I saw it during covid.

You would think almost any dispute would be handled with a long drawn out press blitz, transitioning, etc.


> Still wondering if I could have jumped the gun

Hmm ya think?

This is more and more, in the light of the next day, looking like a disagreement about company direction turned sloppy boardroom coup. Corporate shenanigans.

I can see why people looking for some explanation quickly reached for it, but the sister angle never made any sense. At least where that story stands right now.


Note that everything below is speculation. I am merely trying to suggest an hypothesis which would answer the question of how the Annie Altman allegations could have led to this outcome. FWIW I think it's better speculation than some of the 'he did a side deal with MS' stuff above.

It seems basically impossible for OpenAI to have proved the validity of Annie Altman's claims about childhood sexual abuse. But they might have to take them seriously, especially once they were presented coherently on LessWrong.

If Sam had lied or misled the board about some aspect of his relationship with his sister, that would be a sacking offence. Eg he says "Annie's claims are completely untrue - I never abused her [maybe true or not, almost certainly unprovable], I never got her shadow banned from Instagram [by hypothesis true] and I never told her I could get her banned [untrue]." The board then engage a law firm or PI to check out the claims and they come up with a text message clearly establishing that he threatened to pull strings and get her banned. He lied to the board regarding an investigation into his good character so he's gone. And the board have the external investigator's stamp on the fact that he lied so they can cover their own ass.

Why would he tell a lie like this? Because whatever the truth of the allegations, he's arrogant and didn't take them as seriously as he should have. He mistakenly thought he could be dismissive and it wouldn't come back to bite him.

This seems consistent with the way things played out. (Note again: I'm just trying to come up with something consistent. I have no idea if this is at all accurate or the whole affair is about something completely different.) They don't have to worry about keeping him on as an advisor to cover up scandal. They can clearly state that he lied in an important matter. But they don't say what it's about - because they still have no idea whether the original allegations are true or not. They are not going to put themselves in a situation of saying "and he probably molested his sister". They wouldn't even say "it is related to abuse allegations made by a family member", which implies there might be evidence to the original allegations, and is probably defamatory. And he comes out saying that something unfair has happened, without giving any context, because he knows that even mentioning the allegations is going to lead to "but didn't he molest his sister" type comments, for the rest of time.

It's also consistent with the timing. They aren't just going to hear the Annie allegations and sack him. It takes time to look into these things. But within 6 weeks of it becoming an issue, they might be able to identify that he's either lied previously to the board about the gravity of this issue, lied during the current investigation, or something he's said publicly is clearly dishonest.


I'm virtually 100% sure he did it after just looking into it today so I can see exactly what you're saying about people backflipping out of cars and stuff to get away from it.


Yeah, idk, the news this morning seems to point otherwise. I would still be very careful with this dude. It really felt like a shoe was going to drop.

This seemed like a REALLY negative dismissal.


I have no knowledge of why, but it seems it's always about the money or the power/control. I cannot wait to see what it is...


I agree this is the most likely explanation. Is it possible Sam tried to wrestle power away from the board? He wouldn't even need to sell the whole company, just enough tech for a large company to kill OpenAI.


Or, not commercial, but military/gov.


Yeah. OpenAI is valuable not just commercially, but to the world's governments, some of which can probably find ways to force out leadership they don't get along with.


Your comment is reasonable. Perhaps hitting close to the truth considering the downvotes. Everything in this thread is speculation.


Prediction: board resignation.

Sam Altman returns.


Prediction:

1/ Sam goes on to create NeXTAI and starts wearing mostly turtleneck sweaters and jeans
2/ OpenAI buys NeXTAI
3/ OpenAI board appoints Sam Altman as Interim CEO


You mean like Steve Jobs and Apple?


Love it.


Wild take, but that would sure be a sight to see. I'm not speculating at the moment, since I know nothing about the situation.


Turn that on it’s head - was he standing in the way of a commercial sale or agreement with Microsoft!

He may not be the villain.

But who knows, it feels like an episode of silicon valley!


If you look at who is on the board, how it's structured (they don't have equity right?), it seems like it's actually because he violated the charter. Why would Ilya Sutskever punish Sam for doing the right thing wrt AI safety?


They are in a strange position.

They had an open ethos, then went quasi-closed and for-profit, and now a behemoth has bet the family jewels on their products.

Harping on about the dangers of those products does not help the share price!

My money is on a power play at the top tables.

Embrace, extend, and exterminate.

Playbook!


Quasi-closed is an understatement. You could almost sue them for false advertising.


He will be ok!

Either a position in Microsoft or a new start-up.

Or both.

What does it mean for OpenAI though? That’s a limb sawn off for sure.


Certainly they could have fired him without Ilya's vote.


How? Per the blog post: "OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner." That's 4 directors after the steps taken today. Sam Altman and Greg Brockman both left the board as a result of the action. That means there were 6 directors previously. That means a majority of 4 directors. Assuming Sam & Greg voted against being pushed out, Ilya would have needed to vote with the other directors for the vote to succeed.

Edit: It occurs to me that possibly only the independent directors were permitted to vote on this. It's also possible Ilya recused himself, although the consequences of that would be obvious. Unfortunately I can't find the governing documents of OpenAI, Inc. anywhere to assess what is required.
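
Just to make the arithmetic above explicit, here's a minimal sketch (assuming a simple majority of the full six-member pre-ouster board and that abstentions don't lower the threshold - the actual governing documents aren't public, so those rules are assumptions):

    # Hypothetical vote count for a six-member board with a simple-majority rule
    board = ["Altman", "Brockman", "Sutskever", "D'Angelo", "McCauley", "Toner"]
    majority = len(board) // 2 + 1  # 4 of 6 votes needed
    independents = {"D'Angelo", "McCauley", "Toner"}
    # The three independent directors alone fall short of a majority...
    print(len(independents) >= majority)                   # False
    # ...but with Sutskever's vote they reach exactly 4.
    print(len(independents | {"Sutskever"}) >= majority)   # True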


Sam might have abstained from voting on his own ousting, since he had a conflict of interest.


Yes, true.


It makes no sense to suggest that three external directors would vote out a CEO and the Chairman against the chief scientist/founder/principal's wishes.


That’s the practical argument, and also seems to be true based on the news that came out late last night.


use research and AI to analyze Sutskever's character. the way he talks, the way he writes, what he did in the past, where he studied, who he was and is "acquainted with" ... do the same with the rest of the board and with Altman as well.

someone hire some PIs so we can get a clear and full picture, please & thank you


Tech investigative reporters are probably on it, just wait a week or two.



This was my first thought after seeing a clip of Sam and Satya during OpenAI's DevDay. I wonder if he was standing in the way of a Microsoft acquisition, and Microsoft has just forced in those who would allow the purchase to happen?

I don't know, so much wild speculation all over the place, it's all just very interesting.


They are betting so much on OpenAI just now.

They need to be so much more than a partner.

Being open is not in their nature.

Sadly, it is usually the demise of innovation when they get their hooks in.


I can do anything I want with her - Silicon Valley S5:

https://www.youtube.com/watch?v=29MPk85tMhc

>That guy definitely fucks that robot, right?

That "handsy greasy little weirdo" Silicon Valley character Ariel and his robot Fiona were obviously based on Ben Goertzel and Sophia, not Sam Altman, though.

https://en.wikipedia.org/wiki/Ben_Goertzel

https://www.reddit.com/r/SiliconValleyHBO/comments/8edbk9/th...

>The character of Ariel in the current episode instantly reminded me of Ben Goertzel, whom i stumbled upon couple of years ago, but did not really paid close attention to his progress. One search later:

VIDEO Interview: SingularityNET's Dr Ben Goertzel, robot Sophia and open source AI:

https://www.youtube.com/watch?v=AKbltBLaFeI


Who owns the equity, and who has dedicated seats on the board? Altman can easily just boot the board if he gets a majority of the equity to back him.


How would that work? Does a non profit have shares?


Eric Schmidt calling Sam a hero also makes me think it isn't a personal scandal.


I'm pretty confident it's not a personal scandal (or if it is, it's one that is truly monumental and that there hasn't even been a rumor of).

If it was a personal scandal, the messaging around his dismissal would have been very, very different. The messaging they gave makes it clear that whatever dirty deed he did, he did it to OpenAI itself.


Or to the public on behalf of OpenAI.


There were rumours apparently, but they were suppressed intentionally and very effectively it appears.


What I was meaning to say in my comment was that the rumors and accusations relating to his sister, even if entirely true, are not monumental enough of a personal scandal to explain how the board behaved while firing him.

They'd probably still fire him, but would have done so in a very different way.


This doesn’t make sense to me. It assumes Schmidt has inside info on why Altman was fired.


Of course he has. Why would he risk political capital by defending sama publicly if he didn’t know for sure he wouldn’t get burned by defending him?


Maybe because it's not risking very much political capital. If he's wrong, and shuts up (rather than doubling down on it), nobody will remember what he said in two weeks.

Hell, some prominent tech people are often loudly wrong, and loudly double down on their wrong-ness, and still end up losing very little of their political capital in the long run.

Or maybe he's right. We don't know, we're all just reading tea leaves.


Schmidt knows nearly everyone. He will know the details.


If he didn't know he wouldn't say anything. Why risk saying something when there's a very high chance that it's something like sexual misconduct or fraud.


Not sure Eric Schmidt is exactly a great judge of character. If anything, that is an anti-signal.


In the world that both Messrs. Schmidt and Altman operate in, liability is a vastly stronger incentive than character.


Isn’t Schmidt the guy who’s dating a CEO of a company he made an investment in?

That _should_, in a system of corporate governance that isn’t an absolute joke, expose him to significant liability.

Or am I thinking of another NorCal cretin that will never suffer a real consequence as long as he lives?


Yeah, Schmidt is whatever is on the opposite end of the spectrum from "mature" and "ethical."

https://www.forbes.com/sites/davidjeans/2023/10/23/eric-schm...


[flagged]


That's a pretty wild accusation that I feel like needs some substantiating.


Best I could find was this:

https://www.dailymail.co.uk/news/article-2377785/Google-CEO-...

But:

- 15 million not 30.

- The women he had affairs with don't seem to have been prostitutes, and they were a fair bit older than 18, too.

- His wife knew about it all and was apparently OK with it.

- Penthouses are pretty much the opposite of bunkers if you think about it.


The cost is probably an exaggeration but this is not some secret: https://gothamist.com/news/google-boss-has-amazing-15-millio...


Parent commenter is wildly embellishing/lying, but some of the details of Schmidt's open marriage have been widely reported: https://www.mic.com/articles/56553/eric-schmidt-has-15-milli...


Given that the entire ~accusation~ actionable libel seems to be a bizarre game of telephone derived from a 2013 news item in which Eric Schmidt paid roughly half that amount for a NY penthouse, specifically one muckraker’s tabloid variant that vaguely connected Schmidt to two women who were very definitely not 18 and also very definitely not prostitutes, and that the only detail that appears remotely correct is that he did indeed have the property soundproofed, I very much doubt you’ll get that substantiation.


That’s not true. The sex bunker only cost $15M: https://gothamist.com/news/google-boss-has-amazing-15-millio...

And it wouldn’t be a harem if they’re prostitutes.


Woah, any link to support this?


Everybody's a comedian on Friday.



