Hacker News
OpenAI approached Anthropic about merger (theinformation.com)
115 points by thesecretceo on Nov 21, 2023 | 52 comments



One thing is pretty clear to me now: if even only half of these rumored reactions from the current OAI board are accurate, then they have nobody with legal or business experience, and have consulted nobody who has it, regarding their actions.

Like come on, merger talk at this point is so stupid. Nobody would want to touch something this hot, and the attempt alone just made OAI's situation worse. What are they thinking?


I think it's pretty clear by this point that the board consists of several people who are woefully ignorant about the way the world actually works.


It just blows me away. The unusual board activity is what drew me to this. What I'm also waiting for is an explanation of why/how they canned Brockman as their Chairman of the Board. So far I haven't heard even a hint of it being for cause, and it isn't exactly normal to remove a board member without cause. Their offer to let him remain as a regular employee made what they did all the more eyebrow-raising.


It’s hard to tell from the outside what happened there, or whether the article used a reliable source.


https://twitter.com/willknight/status/1726793735143621058 This guy claims he has "reason" to believe it's a "complete fabrication."

No idea whether to trust this guy or his source more than the original article though.


Feels like sabotage. Adam, the Quora guy, runs a chatbot AI company as well and is on the board.


If they’d wanted to sabotage OpenAI, they could have declared that building GPT-5 is unsafe, pulled the plug, and erased all the backups, no?


They could have. But they wanted to have it too.


But who has the power to remove the board?


Helen and Tasha and possibly Adam.


This just reinforces something I've been thinking for a while. When you look at people at the top of medium to large companies, you'd think they are extremely qualified and got there through merit alone. Although that is certainly true in many cases, it's probably not the case most of the time: a combination of luck and contacts often boosts people into places they have no business being.

In general, people in high places can just surf the waves, go with the flow, and collect cash without really making much difference, relying on the people closer to the work (engineers and frontline managers). But when making the right decisions can decide the fate of a company, you see who's supposed to be there and who isn't.


The fact that it states this came after firing Altman makes the phone call to Dario Amodei seem like the board flailing: "hey, do you want to come back to OpenAI? No? Or maybe you could just merge your company with ours? You still there?"


Apparently they not only asked Dario to become CEO, they also asked Nat Friedman (Former GitHub CEO) and Alex Wang (Scale AI CEO). Flailing indeed.


Can literally anyone provide even a 0.001% plausible sounding reason as to why this all was started in the first place?

Nothing makes sense.


OpenAI was planning to add more members to the board. Meanwhile Sam and GDB are fully productizing the company, even training GPT-5. The safety group has been increasingly concerned and feels the non-profit charter has been slowly losing out to the company's $-minded startup side. They see the writing on the wall, as most employees care about those sweet $ PPUs, and decide on a last-ditch effort, maybe a 25% chance, of averting this successfully. They feel they have to do it; otherwise the company will just be Microsoft accelerating GPT development into our oblivion. They aren’t quite successful.

Remember, this company was founded 8 years ago, with Sama writing “Why You Should Fear Machine Intelligence.”

Although vaguely possible, there’s something else to the story as Emmett Shear said “The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that.”

https://blog.samaltman.com/machine-intelligence-part-1


Only explanation acceptable for me now is Hanlon's razor mixed with a Despair Inc poster.

"Never attribute to malice that which is adequately explained by stupidity."

And

"Meetings: None of us is as dumb as all of us."


Two camps: one was into moving fast, cranking out products and profit. The other group watched from the side, claiming to be worried about the future of humanity. The watching group, though, had the power to fire the leaders of the other group, and they did.

It didn’t have to be any deeper philosophical disagreement or anything, just plain old rivalry and a power grab. Of course it wouldn’t be written down, as it’s just childish, but that’s how people behave, especially when newfound fame and power get to their heads.


The most plausible-sounding reason I’ve heard is that OpenAI was originally a charity. People donated money to a cause. Most who donated got nothing in return. Those who donated a lot got board seats.

Now, the employees of the for-profit arm of the organization are all trying to get rich while those who donated the original money get nothing. But… they still have their board seats, and they’re weaponizing them.


From all the comments and the way things have happened in the last 96 hours, it sounds like the board was entirely incompetent. I’ve interacted with board members of various companies over my career, and everyone on this board is extremely junior.


>I’ve interacted with board members of various companies over my career, and everyone on this board is extremely junior.

Yes, you've hit on the thing that bothered me when I read the members' bios after Altman's firing but couldn't put my finger on until reading your comment. It's not so much what the bios say as what they don't say. There is no greybeard, no seasoned current/former CEO of another company (no, the CEO of Quora does not qualify). There is no VC representative.

I realize that OpenAI has this weird nonprofit/profit ownership arrangement which I still don't get, and that the VC board members/observers are presumably on the profit side, but that still doesn't answer the implied question!



Jeez, these people seem so dumb. Throwing everything at the wall to see what sticks.


Makes some sense.

It would be funny if they agreed to the merger and then mass resigned to join Microsoft also.

And then repeat with a Quora merger and another mass exodus.

Or they could just outsource keeping the lights on to a consulting company like IBM or Accenture or Oracle.

Unfortunately there won’t be a happy ending. Just like there was no happy ending with Java.


Never been happier to be wrong. Altman basically unionized the OpenAI labor force and the investors. He contained the board. Instead of scattering the board to three corners of the planet, it will be further contained by adding six board members, including Sam and Microsoft and hopefully Brockman. Order will have been restored. And they lived happily ever after!


Also, if they are exploring mergers, they might consider merging with a university like Stanford, or with CERN, or a bigger non-profit.


Google and Amazon are major Anthropic investors. Amazon is even going to use Claude in the next few months on Alexa.

Can someone please explain how realistic it is to think that Google, Amazon, and Microsoft would all fund/partner with the same future company?

Is the board that crazy for control?


How would such a merger be good for AI safety? A combined OpenAI-Anthropic would collectively own 6 of the top 6 LLMs on the Chatbot Arena leaderboard [0], which seems rather monopolistic. This would result in less choice for us developers.

Isn't it better to have the top two AI firms both upholding AI safety as a core principle?

To me this weakens the original argument they had for sacking Sam in the first place.

[0] https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...


You have a point, but one could easily frame it differently.

- the openai board is the non-profit, meant to make "good" decisions, and not necessarily business sense decisions.

- altman was fired because that board doesn't feel it can rely on him (both in action and in communication), he'll do what he wants and he won't be obvious about the fact he's not listening to them.

- now altman and co. are essentially going to kill openai as a company, leaving it only with the existing IP, so the board figures that they should bring in people who are interested in working / researching on top of openai's previous work.

- anthropic is composed of people who left openai due to safety concerns, so they're good people to partner with.

The fact that it would be better safety-wise if there was competition (maybe?) is immaterial when facing the fact that 1) OpenAI appears to be collapsing, and 2) Microsoft anyway has rights to OpenAI's work, and Microsoft appears to be preparing to cannibalize OpenAI.


It has never been about AI safety. They just threw their little mission statement on the end like an email signature.

Nothing could possibly weaken their position more than the last 72 hours


In what sense does less choice for developers harm AI safety?

(I'm saying this as someone that doesn't believe in AI safety as a worthwhile goal)


If OpenAI ends up a shell of a company with billions in cloud GPU credits, in some ways merging makes a lot of sense?


Zero chance OpenAI ends up seeing any more compute if they merge.


In this case Microsoft will have zero chance to use IP owned by OpenAI, right? That's how mutual agreements work?

If not, why? And who's responsible for this 'Laundry Buddy' deal with Microsoft then? Isn't it Sam Altman? In that case OpenAI's board did a fair job with their decision.


Maybe they’re aiming to start mining cryptocurrency.


All Sam wanted to do was mine some worldcoin


What would be the most efficient method if the goal was mining as much as possible on Microsoft's dime: spinning up as many miner instances as possible (burning through Azure credits quickly), or keeping a saner number of instances running longer?


Will Knight, a writer for Wired, is stating “I have reason to believe that the OpenAI Anthropic merger story is a complete fabrication”:

https://twitter.com/willknight/status/1726793735143621058


If true, then the board has gone rogue. The only question is why. What could possibly have caused them to do this?


They’re fools who thought they could oust Altman over petty grievances and that no one would care; when it blew up, they struggled to find a solution that saves face and their careers and avoids adverse legal judgments against them individually, and they’re still frantically searching. I mean, they didn’t even have replacements in mind. They are not planners.


It takes talent to destroy 100 billion USD in less than a week. Props to the worst combination of individuals ever to be thought of as competent enough to be given real responsibility.


The OpenAI board seems to have fatally miscalculated the loyalty to Sam A., and in realizing their error is scrambling to undo their miscalculation. The sad thing is it probably was for the right reasons (AI safety over profits), and will result in the precise opposite (unbridled capitalism in AI).


Since the board hasn’t given any remotely convincing explanation for the firing [0], their miscalculation seems to have rather been that nobody would be asking any questions. Which is really bewildering.

[0] see https://news.ycombinator.com/item?id=38356534


For their sake it had better be the AI safety thing, because if it wasn't, they are in a lot of trouble, even if that hasn't registered yet. Board member is not a job without risks.


wasn't over safety though


Yea that's true... Based on new reporting, the firing was over nonsensical reasons for dismissing a CEO who oversaw $90B in value creation: "One explanation was that Altman was said to have given two people at OpenAI the same project.

The other was that Altman allegedly gave two board members different opinions about a member of personnel."


This is what happens with a 5% interest rate. Apparently, you cannot sustain a money-losing venture (like Uber) for too long.


This is a sickness of the highest order.


Written by: Mike Judge


Why in the hell, in light of all of this nonsense, would I ever trust a startup with my money?

OpenAI's board is largely a group of people who have zero skins on the wall. They got their positions by being related or married to people and being born with an entire fucking cupboard of silverware shoved in every orifice on their bodies.

Even Altman was born rich and managed to fail upwards until he didn't. All hail the guy with ten thousand shots at success, everybody.


First time having the curtain pulled back?


No, it's just plainly obvious that none of these people should ever be allowed near a budget, much less corporate governance.


It also looks as if they never even bothered to look up the rule book. They are probably used to just doing as they please and getting away with it. What surprises me is the number of HN'ers who seem to believe that a board can act with impunity, in complete isolation from the fallout of their decisions. That's not the world I live in; I've interacted with various boards over a long career and have never seen anything close to the level of incompetence on display here.



