Hacker News

> The board's job is not to do right

There is why you do something. And there is how you do something.

OpenAI is well within its rights to change strategy even as bold as from a profit-seeking behemoth to a smaller research focused team. But how they went about this is appalling, unprofessional and a blight on corporate governance.

They have blind-sided partners (e.g. Satya is furious), split the company into two camps and have let Sam and Greg go angry and seeking retribution. Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

For me there is no justification for how this all happened.



As someone who has orchestrated two coups in different organizations, where the leadership did not align with the organization's interests and mission, I can assure you that the final stage of such a coup is not something that can be executed after just an hour of preparation or thought. It requires months of planning, and the trigger is only pulled when there is sufficient evidence or justification for such action.

Building support for a coup takes time and must be justified by a pattern of behavior from your opponent, not just a single action. Extensive backchanneling and one-on-one discussions are necessary to gauge where others stand, share your perspective, demonstrate how the person in question is acting against the organization's interests, and seek their support. Initially, this support is not for the coup itself, but rather to ensure alignment of views.

Then, when something significant happens, everything is already in place. You've been waiting for that one decisive action to pull the trigger, which is why everything then unfolds so quickly.


How are you still hireable? If I knew you had orchestrated two coups at previous companies and I were responsible for hiring, you would be radioactive to me. Especially knowing that all that effort went into putting together a successful coup instead of into other work.

Coups, in general, are the domain of the petty. One need only look at Ilya and D'Angelo to see this in action. D'Angelo neutered Quora by pushing out its co-founder, Charlie Cheever. If you're not happy with the way a company is doing business, your best action is to walk away.


Let me pose a hypothetical. Let's say you're a VP or Senior Director. One of your sibling directors or VPs oversees a department and field in which you have intimate domain knowledge, meaning you have a successful track record in that field from both the management side and the IC side.

Now, that sibling director allows a culture of sexual harassment, law-breaking, and toxic throat-slitting behavior. HR and the organization's leadership are aware of this. However, the company is profitable, stable, and happy outside his department. They don't want to rock the boat.

Is it still “the domain of the petty” to have a plan to replace them? To have formed relationships to work around them, and keep them in check? To have enacted policies outside their department to ensure the damage doesn’t spread?

And most importantly, to enact said replacement plan when they fuck up just enough that leadership gives them the side-eye, and you push the issue with your documentation of their various transgressions?

Because that… is a coup. That is a coup that is, at least in my mind, moral and just, leading to the betterment of the company.

“Your best action is to walk away” - Good leadership doesn’t just walk away and let the company and employees fail. Not when there’s still the ability to effect positive change and fix the problems. Captains always evacuate all passengers before they leave the ship. Else they go down with it.


> “Your best action is to walk away” - Good leadership doesn’t just walk away and let the company and employees fail.

Yes, exactly. In fact, it's corruption of leadership.

If an engineer came to the leader with a critical technical problem and said, 'our best choice is to pretend it's not there', the leader would demand more of the engineer. At a place like OpenAI, they might remind the engineer that they are among the world's top engineers at arguably the most cutting-edge software organization anywhere, and that they are expected to deliver solutions to the hardest problems. Throwing your hands up and ignoring the problem is just not acceptable.

Leaders need to demand the same of themselves, and one of their jobs is to solve the leadership problems that are just as difficult as those engineering problems - to deliver leadership results to the organization just like the engineer delivers engineering results, no excuses, no doubts. Many top-level leaders don't have anyone demanding performance of them, and don't hold themselves to the same standards in their job - leadership, management - as they hold their employees.

> Not when there’s still the ability to effect positive change and fix the problems.

Even there, I think you are going too easy on them. Only in hindsight do you maybe say, 'I don't see what could have been done.' In the moment, you say, 'I don't see it yet, so I have to keep looking and innovating and finding a way.'


Max Levchin was an organizer of two coups while at PayPal. Both times, he believed it was necessary for the success of the company. Whether that was correct or not, they eventually succeeded and I don’t think the coups really hurt his later career.


PayPal had an exit, but it absolutely did not succeed in the financial revolution it was attempting. People forget now that OG PayPal was attempting the digital financial revolution that later would be bitcoin’s raison d'être.


Dismissing PayPal as anything but an overwhelming business success takes a lot of confidence. Unless you're Gates or Zuckerberg, etc., I don't know how you can have anything but praise for PayPal from that perspective.

Comparing PayPal's success in digital finance to cryptocurrency's is an admission against interest, as they say in the law.


I think getting to an IPO in any form during the wreckage of the Dotcom crash counts as an impressive success, even if their vision wasn't fully realized.


Yep. PayPal was originally a lot like venmo (conceptually -- of course we didn't have phone apps then). It was a way for people to send each other money online.


Good thing for PayPal that it now owns Venmo :P


PayPal went down the embrace, extend, extinguish route. If it were possible for them to do the same with bitcoin, they would have.


This example seems to be survivorship bias. Personally, if someone approached me to suggest backstabbing someone else, I wouldn't trust that they wouldn't eventually backstab me as well. @bear141 said "People should oppose openly or leave." [1] and I agree completely. That said, don't take vacations! (when Elon Musk was ousted from PayPal in the parent example, etc.)

[1] https://news.ycombinator.com/item?id=38326443


> I wouldn't trust that they wouldn't eventually backstab me as well.

They absolutely would. The other thing you should take away from this is how they'd do it-- by manipulating proxies to do it with/for them, which makes it harder to see coming and impossible to defend against.

Whistleblowers are pariahs by necessity. You can't trust a known snitch won't narc on you if the opportunity presents itself. They do the right thing and make themselves untrustworthy in the process.

(This is IMO why cults start one way and devolve into child sex abuse so quickly-- MAD. You can't snitch on the leader when Polaroids of yourself exist...)

> don't take vacations!

This can get used against you either way, so you might as well take that vacation for mental health's sake.


I had this exact thing happen a few weeks ago in a company that I have invested in. That didn't quite pan out in the way the would-be coup party likely intended. To put it mildly.


You were approached to participate in a coup and therefore had it squashed? Or a CEO was almost removed during their vacation?


The first. And it was a bit tricky because it wasn't initially evident that it was a coup attempt but they gave themselves away. Highly annoying.


Dear god that sounds interesting and yet terrifying.


That's pretty accurate. It could have easily killed the company too.


I feel like, in the parent comment, 'coup' is sort of shorthand for the painful but necessary work of building consensus that it is time for new leadership. 'Necessary' is in the eye of the beholder. These can certainly be petty when they are bald-faced power grabs, but they can equally be noble if the leader is a despot or a criminal. I would also not call Sam Altman's ouster a coup, even if the board was manipulated into ousting him: he was removed by exactly the people who are allowed to remove him. Coups are necessarily extrajudicial.


It also looks like Sam Altman was busy creating another AI company, alongside his creepy WorldCoin venture, wasteful crypto/bitcoin support, and the no less creepy stories of abuse coming from his younger sister.

Work on, or transfer of, intellectual property or good name into another venture, while not disclosing it to OpenAI, is a clear breach of contract.

He is clearly instrumental in attracting investors, talent, and partners, and in the commercialization of technology developed by Google Brain and pushed further by Hinton's students and the OpenAI team. But he was merely present in the room where the veil of ignorance was pushed back. He is replaceable, and another leader, less creepy and with fewer conflicts of interest, may do a better job.

It is no surprise that the OpenAI board attempted to eject him. I hope that this attempt will be a success.


Why is there a presumption that it must take precedence over other work?

I've run or defended against 'grassroots organizational transformations' (a.k.a. coups) at several non-profit organizations, and all of us continued to do our daily required tasks while the politicking was going on.


Because I take any claim that someone can orchestrate a professional coup while doing their other work with the same zeal and focus as before fomenting rebellion about as seriously as people who tell me they can multitask effectively.

It's just not possible. We're limited in how much energy we can bring to daily work, that's a fact. If your brain is occupied both with dreams of king-making and your regular duties at the job, your mental bandwidth is compromised.


> If you're not happy with the way a company is doing business, your best action is to walk away.

This makes no sense at all!


It makes the most sense if you value your own wellbeing over whatever “mission” a company is supposedly chasing.


Are you the sort of person that hires someone that can successfully organize a coup against corporate leadership?

It feels like there is an impedance mismatch here.


I’ve hired people that were involved in palace coups at unicorn startups, twice. Justified or not, those coups set the company on a downward spiral it never recovered from.

I’m not sure I can identify exactly who is liable to start a coup, but I know for sure that I would never, ever hire someone who I felt confident might go down that route.

Startups die from suicide, not homicide.


"Startups die from suicide, not homicide." - That's a great way to put it. 100% true.


> I’ve hired people that were involved in palace coups at unicorn startups, twice...I know for sure that I would never, ever hire someone who I felt confident might go down that route.

So you hired coupers but you would never hire...coupers? Did you not know about their coups cuz that's the only way I can see that makes sense here. Could you clarify this, seems contradictory...

Also, great quote about startup failure :)


These people were early hires at a company I co-founded (but was not in an official leadership role at). They had never pulled a coup before, but they would do so within two years of being hired. The coup didn’t affect me directly, and indeed happened when I was out of the country and was presented as a fait accompli. But nevertheless I left not long thereafter as the company had already begun its downward slide.

The point of my comment was this: in retrospect, I'm not sure there's anything that would have tipped me off to that behavior at the time of interview. But if this were something I could somehow identify, it would absolutely be my #1 red flag for future hires.

Edit: The “twice” part might have made my comment ambiguous. What I meant was after I hired them, these people went on to pull two separate, successive coups, which indicates to me the first time wasn’t an aberration.


You should have made them fake managers, like when Michael Scott appointed Dwight Schrute


'S all good


>So you hired coupers but you would never hire...coupers? Did you not know about their coups cuz that's the only way I can see that makes sense here. Could you clarify this, seems contradictory...

You might have missed this from GP's comment:

>>I’m not sure I can identify exactly who is liable to start a coup

In other words, at least once, these people pulled the wool over the GP's eyes during the hiring process.


That's what I thought ;)


If I'm confident in my competence and the candidate has a trustworthy and compelling narrative about how they undermined incompetent leadership to achieve a higher goal - yep, for sure.


Also, one person's incompetent is another's performer.

Being crosswise in organizational politics does not imply less devotion to organizational goals, but often simply a different interpretation of those goals.


But being in a situation where this was called for twice?

That strikes me as someone who either lacks the ability to do proper due diligence or is a straight-up sociopath looking for weak-willed people they can strong-arm out. Part of the latter is the ability to create a compelling narrative for future marks, to put it bluntly.


The regular HN commenter says "CEOs are bad, useless, and get paid too much," but now, when someone suggests getting rid of one of them, suddenly it's the end of the world.


1. There are different people here with different opinions.

2. CEOs at fast-growing startups are very different from those at large tech companies.


Are you responsible for hiring though?


I agree completely. People should oppose openly or leave.


Aren't you taking sides in a fight without knowing which side was "right"? Or do you believe that loyalty trumps all other values?

At this point I'm in danger of triggering Godwin's Law so I had better stop.


My comment was phrased inappropriately.


As you are new here, I would urge you to read the site's Guidelines [1], which the tone & wording of your comment indicate you have not read.

[1] https://news.ycombinator.com/newsguidelines.html


Ok. Thank you.


All of this is spot on. The key to it all is 'if you strike at the king, you best not miss'.


Going off on a big tangent, but Jiang Zemin made several failed assassination attempts on Xi Jinping, yet was still able to die of old age.


By assassination I assume you mean metaphorical? As in to derail his rise before becoming party leader?


No, literal attempts.

One attempt involved a battleship “accidentally” firing onto another battleship where both Hu Jintao and Xi Jinping were visiting.

https://jamestown.org/program/president-xi-suspects-politica...

Biased source, but she’s able to get a lot of unreported news from the mainland.

https://www.jenniferzengblog.com/home/2021/9/20/deleted-repo...

I will try to find more sources but Google is just shit these days. See my other comment for more.

A big problem is that mainland China is like the hermit kingdom. It’s a black hole for any news the CCP doesn’t want to get out


These are Falun Gong sources. I will not trust Falun Gong's news on China; they are known to create conspiracy stories.


Agreed. Whilst I don’t trust China’s CCP, I sure as heck don’t trust anything from Falun Gong. Those guys are running an asymmetric battle against the Chinese State and frankly they would be capable of saying anything if it helped their cause.


I mean, I would too if my ethnicity were so repressed, along with all the other non-Han Chinese.


Falun Gong is a religion, not an ethnicity, and they are of the cultish variety.

It's like believing the scientology center. Not trustworthy, they have an angle.


1. The sources aren’t limited to falun gong

2. It makes sense given Xi’s current paranoia and constant purges


What is Falun Gong exactly? I never understood what they are.


https://en.wikipedia.org/wiki/Falun_Gong

No guarantees about NPOV on that page.

See also:

https://en.wikipedia.org/wiki/Talk:Falun_Gong

If you want to see what makes Wikipedia tick, that's a great place to start.


Interesting. Wikipedia's description of them as a "new religious movement" is inconsistent with the body of the article. It looks like it started as some kind of chi kung (qigong) exercise and wellness group, but it got big very fast and the Chinese government grew concerned about its popularity. Then, under CCP persecution, it escalated and morphed into a full-blown political dissident movement, which the press initially viewed favorably. Now the Wikipedia article is very unfavorable because of The Epoch Times' misalignment with the press. OK, I think I understand.


I wouldn't trust either the CCP or Falun Gong to speak my weight; they are both power structures, and both engage in all kinds of PR exercises to put themselves in the best light. But to Falun Gong's credit: I don't think they've engaged in massive human rights violations, so they have that going for them. There are certain cult-like aspects to it, though, and 'new religious movement' or not, I think the fewer such entities there are the better (and also fewer of the likes of the CCP, please).


> One attempt involved a battleship “accidentally” firing onto another battleship where both Hu Jintao and Xi Jinping were visiting.

Hm. That really does qualify as an assassination attempt if it wasn't an actual accident. Enough such things happen by accident that the phenomenon has a name.

https://en.wikipedia.org/wiki/Friendly_fire


Search for

  search terms site:nytimes.com
(or bbc.co.uk or apnews.com or another trusted source)


You can safely assume he still had sufficient power to be well protected.


Never heard about this before. Sources?


Google is just really terrible these days.

http://www.indiandefencereview.com/spotlights/xi-jinpings-fi...

http://www.settimananews.it/italia-europa-mondo/the-impossib...

I will try to find better sources. There are more not so great articles in my other comment


I am extremely interested in hearing about these coups and your experience in them, if you'd like and are able to share.


I would never work with you. This is why investors have such a bad reputation. If I had not retained 100% ownership and control of my business, I am sure someone like you would have tossed me out by now.

Focus on results, not political games.


What's funny is that the board is already second-guessing itself and might want Sam back. Sounds like the opposite of what you said here.


I feel like this is something that could play out in a documentary about chimpanzees


Username… checks out?


Even in the HBO show Succession, these things take a season, not an episode


> They have blind-sided partners (e.g. Satya is furious), split the company into two camps and have let Sam and Greg go angry and seeking retribution.

Given the language in the press release, wouldn't it be more accurate to say that Sam Altman, and not the board, blindsided everyone? It was apparently his actions and no one else's that led to the consequence handed out by the board.

> Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

From all current accounts, doesn't that seem like what Altman and his crew were already trying to do and was the reason for the dismissal in the first place?


The only appropriate target for Microsoft's anger would be its own deal negotiators.

OpenAI's dual identity as a nonprofit/for-profit business was very well known. And the concentration of power in the nonprofit side was also very well known. From the media coverage of Microsoft's investments, it sounds as if MSFT prioritized getting lots of business for its Azure cloud service -- and didn't prioritize getting a board seat or even an observer's chair.


Sure, but Microsoft could also walk away today and leave OpenAI high and dry. They hold ALL the power here.


Microsoft terminating the agreement by which they supply compute to OpenAI and OpenAI licenses technology to them would be an existential risk to OpenAI (though other competing cloud providers might step in and fill the gap under similar terms). But whether or not OpenAI ended up somewhere else immediately (the tech eventually would, even if OpenAI failed completely and was dissolved), Microsoft would go from the best-positioned enterprise AI cloud provider to very far behind overnight.

And while that might hurt OpenAI as an institution more than it hurts Microsoft as an institution, the effect on Microsoft's top decision-makers personally vs. OpenAI's top decisionmakers seems likely to be the other way around.


Not if they invested in Sam’s new startup, under agreeable profit-focused terms this time, and all the OpenAI talent (minus Ilya) followed.


At best, that might enable them to eventually come back, once new products are built from scratch, but that takes non-zero time.


Non-zero time, but not a lot either. Main hangup would be acquiring data for training, as their engineers would remember the parameters for GPT-4 and Microsoft would provide the GPUs. But Microsoft with access to Bing and all its other services ought to be able to help here too.

Amateurs on Hugging Face are able to match OpenAI in an impressively short time. Actual former OpenAI engineers with an unlimited budget ought to be able to do as well or better.


Amateurs?


Non-corporate groups.


If OpenAI were in a true crisis, I'm sure Amazon would step in to invest, for exclusive access to GPT-4 (in spite of their Anthropic investment). That would put Azure in a bad place. So, not exactly "all" the power.

Not to mention that, after that, MSFT might be left holding the bag on a bunch of unused compute.


Sam and Greg have already said they’re starting an OpenAI competitor, and at least 3 senior engineers have jumped ship already. More are expected tonight. Microsoft would just back them as well, then take their time playing kingmaker in choosing the winner.


That's true, but Sutskever and co. still have the head start: the models, the training data, the GPT-4 licenses, etc. Their Achilles' heel is the compute, which Microsoft will pull out. Khosla Ventures and Sequoia may sell their OpenAI stakes at a discount, but I'm sure either Google or Amazon will snap them up.

All Sam and Greg really have is the promise of building a successful competitor, with big backing from Microsoft and SoftBank, while OpenAI is the orphan child with the huge estate. Microsoft isn't exactly the kingmaker here.


It doesn’t sound like Sutskever is running anything. OpenAI reportedly put out a memo saying they’re trying to get Sam and Greg back: https://www.theinformation.com/articles/openai-optimistic-it...


Sutskever built the models behind GPT-4, if I recall correctly (all credit to the team, but he's the focal point behind expanding on Google's transformers). I don't see Sam and Greg working with him under the same roof after this fiasco, since he voted them out (he could have been the tiebreaking vote).


OpenAI leadership (board, CEO) didn't say that ... your link said their "Chief Strategy Officer" Jason Kwon said it.

The most likely outcome here does seem to be that Altman/Brockman come back, Sutskever leaves and joins Google, and OpenAI becomes for all intents and purposes a commercial endeavor, with Microsoft wielding a lot more clout over them (starting with one or more board seats).

Big winner in this scenario would be Google.


Sam just posted a selfie wearing an OpenAI guest badge at the SF offices. He's back there for some sort of negotiations.


Could they? I don't know the details of MSFT's contracts with OpenAI... but even if they can legally just walk away, doing so would certainly have some negative impact on MSFT's reputation in future negotiations.


They loved to trot out the “mission” as a reason to trust a for-profit entity with the tech.

Well, this is proof the mission isn’t just MBA bullshit, clearly Ilya is actually committed to it.

This is like if Larry and Sergey had never decided to progressively nerf "don't be evil" as they kept accumulating wealth: they would have had to stage a coup as well. But they didn't; they sacrificed the mission for the money.

Good for Ilya.


I wonder if there's a specific term or saying for that, maybe "projection" or "self-victimization," but not quite: when a person frames others as responsible for a bad thing, when it was they themselves who were doing that very thing in the first place. Maybe "hypocrisy"?


Lack of accountability. Inability to self-reflect.


Probably a little of all of that all bundled up together under the umbrella of cult of personality.


The leaked memo today (which was probably reviewed by legal, unlike yesterday’s press release) says there was no malfeasance.


> split the company into two camps

The split existed long prior to the board action, and extended up into the board itself. If anything, the board action is a turning point toward decisively ending the split and achieving unity of purpose.


Can someone explain the sides? Ilya seems to think transformers could make AGI and they need to be careful? Sam said what? "We need to make better LLMs to make more money."? My general thought is that whatever architecture gets you to AGI, you don't prevent it from killing everyone by chaining it better, you prevent that by training it better, and then treating it like someone with intrinsic value. As opposed to locking it in a room with 4chan.


If I'm understanding it correctly, it's basically the non-profit, AI for humanity vs the commercialization of AI.

From what I've read, Ilya has been pushing to slow down (less of the move fast and break things start-up attitude).

It also seems that Sam had maybe seen the writing on the wall and was planning an exit already, perhaps those rumors of him working with Jony Ive weren't overblown?

https://www.theverge.com/2023/9/28/23893939/jony-ive-openai-...


The non-profit path is dead in the water after everyone realized the true business potential of GPT models.


What is the business potential? It seems like no one can trust it for anything. What do people actually use it for?


Anything that is language-related: extracting summaries, writing articles, combining multiple articles into one, drawing conclusions from really big prompts, translating, rewriting, fixing grammar errors, etc. Half of the corporations in the world have such needs, more or less.


It could easily make better decisions than these board members, for example.


> From what I've read, Ilya has been pushing to slow down

Wouldn’t a likely outcome in that case be that someone else overtakes them? Or are they so confident that they think it’s not a real threat?


I don't think the issue was a technical difference of opinion over whether transformers alone suffice or other architectures are required. It seems the split was over the speed of commercialization and Sam's recent decision to launch custom GPTs and a ChatGPT Store. IMO, the board miscalculated: OpenAI won't be able to pursue its "betterment of humanity" mission without funding, and it seemingly just pissed off its biggest funding source with a move that will also make other would-be investors very skittish.


Making humanity’s current lives worse to fund some theoretical future good (enriching himself in the process) is some highly impressive rationalisation work.


Try to tell that to the Effective Altruism crowd.


Literally any investment is a diversion of resources from the present (harming the present) to the future, e.g. planting grains for next year rather than eating them now.


There is a difference between investing in a company that is developing AI software in a widely accessible way that improves everyone's lives, and a company that pursues software to put entire sectors out of work for the profit of a dozen investors.


"Put out of work" is a good thing. If I make a new JS library that means a project which used to take 10 devs now takes 5, I've put 5 devs out of work. But I've also made the world a more efficient place, and those 5 devs can go do some other valuable thing.


What percent of those devs don’t do a valuable thing and become homeless?

Maybe devs are a bad example, so replace them with “retail workers” in your statement if it helps.

Is “put out of work” a good thing with no practical limits?


Yes, the ideal is that when most jobs are genuinely automated, we can finally afford UBI.


Who can afford it? When LawyerAI and AccountAI are used by all of the mega corps to find more and more tax loopholes and many citizens can’t work then where will UBI come from?


And people with money will want to make UBI happen because...?


Here's the discussion on the EA forum if anyone is interested: https://forum.effectivealtruism.org/posts/HjgD3Q5uWD2iJZpEN/...

I think the EA movement has been broadly skeptical towards Sam for a while -- my understanding is that Anthropic was founded by EAs who used to work at OpenAI and decided they didn't trust Sam.


My thought exactly. Some people don’t have any problem with inflicting misery now for hypothetical future good.


> Making humanity’s current lives worse to fund some theoretical future good

Note that this clause would describe any government funded research for example.


> locking it in a room with 4chan.

Didn’t Microsoft already try this experiment a few years back with an AI chatbot?


> Didn’t Microsoft already try this experiment a few years back with an AI chatbot?

You may be thinking of Tay?

https://en.wikipedia.org/wiki/Tay_(chatbot)


That’s the one.


I don't think it has to be unfettered progress that Ilya is slowing down for. I could imagine there is a push to hook more commercial capabilities up to the output of the models, and it could be that Ilya doesn't think they are competent/safe enough for that.

I think danger from AGI often presumes the AI has become malicious, but the AI making mistakes while in control of say, industrial machinery, or weapons, is probably the more realistic present concern.

Early adoption of these models as controllers of real world outcomes is where I could see such a disagreement becoming suddenly urgent also.


> treating it like someone with intrinsic value

Do you think that if chickens treated us better, with intrinsic value, we wouldn't kill them? For the AGI superhuman x-risk folks, that's the bigger argument.


I think if I were raised by chickens that treated me kindly and fairly, then yes, I would not harm chickens.


They'll treat you kindly and fairly, right up to your meeting with the axe.


That's literally what we already do to each other. You think the 1% care about poor people? Lmao. The rich lobby and manufacture race and other wars to distract from the class war; they're destroying our environment and numbing our brains with opiates like TikTok.


No disagreement here.


> OpenAI is well within its rights to change strategy even as bold as from a profit-seeking behemoth to a smaller research focused team. But how they went about this is appalling, unprofessional and a blight on corporate governance.

This wasn't a change of strategy; it was a restoration of it. OpenAI was structured with a 501(c)(3) in oversight from the beginning exactly because they wanted to prioritize using AI for the good of humanity over profits.


This isn't going to make me think in any way that OpenAI will return to its more open beginnings. If anything, it shows me they don't know what they want.


I agree. They've had tension between profit motive and the more grandiose thinking. If they'd resolved that misalignment early on they wouldn't be in this mess.

Note I don't particularly agree with their approach, just saying that's what they chose when they founded things, which is their prerogative.


Yet they need massive investment from Microsoft to accomplish that?

> restoration

Wouldn’t that mean that over the long term they will just be outcompeted by the profit-seeking entities? It’s not like OpenAI is self-sustainable (or even can be if they chose the non-profit way).


>Yet they need massive investment from Microsoft to accomplish that?

Massive spending is needed for any project as massive as "AI", so what are you even asking? A "feed the poor" project does not expect to make a profit, but, yes, it needs large cash infusions...


That as a non profit they won’t be able to attract any sufficient amounts of money?


Or talent...


> a blight on corporate governance

> They have blind-sided partners (e.g. Satya is furious)

> the threat that a for-profit version of OpenAI dominates the market

It's seeming like corporate governance and market domination are exactly the kind of thing the board are trying to separate from with this move. They can't achieve this by going to investors first and talking about it - you think Microsoft isn't going to do everything in their power to prevent it from happening if they knew about it? I think their mission is laudable, and they simply did it the way it had to be done.

You can't slowly untangle yourself from one of the biggest companies in the world while it is coiling around your extremely valuable technology.


In other words, it’s unheard of for a $90B company with weekly active users in excess of 100 million. A coup leaves a very bad taste for everyone: employees, users, investors and the general public.

When a company experiences this level of growth over a decade, the board evolves with the company. You end up with board members that have all been there, done that, and can truly guide the management on the challenges they face.

OpenAI's hypergrowth meant it didn’t have the time to do that. So the board that was great for a $100 million, even a billion $ startup falls completely flat for 90x the size.

I don’t have faith in their ability to know what is best for OpenAI. These are uncharted waters for anyone though. This is an exceptionally big non-profit with the power to change the world - quite literally.


Why do you think someone who could be CEO of a $100 million company would be qualified to run a billion dollar company?

Not providing this kind of oversight is how we get disasters like FTX and WeWork.


And yet it’s very heard of for corporations to poison our air and water, cut corners and kill people, and lie, cheat, and steal. That happens every day and nobody cares.

And yet four people deciding to put something - anything - above money is somehow a disaster.

Give me a break.


"And there is how you do something"

Sorry I don't see the 'how' as necessarily appalling.

The less appalling alternative could have been weeks of discussions and the board asking for Sam's resignation to preserve the decorum of the company. How would that have helped the company? The internal rift would have spread, employees would have gotten restless, leading to reduced productivity and shipping.

Instead, isn't this a better outcome? There is immense short-term pain, but there is no ambiguity and the company has set a clear course of action.

To affirm that the board has caused a split in the company is quite preposterous, unless you have first-hand information that such a split has actually happened. As far as public information is concerned, three researchers have quit so far, and you have this from one of the EMs:

"For those wondering what’ll happen next, the answer is we’ll keep shipping. @sama & @gdb weren’t micro-managers. The comes from the many geniuses here in research product eng & design. There’s clear internal uniformity among these leaders that we’re here for the bigger mission."

This snippet in fact shows the genius of Sam and gdb: how they enabled the teams to run even in their absence. Is it unfortunate that the board fired Sam? From the engineer's and builder's perspective, yes; from the long-term AGI research perspective, I don't know.


> They have … split the company into two camps

By all accounts, this split happened a while ago and led to this firing, not the other way around.


The split happened at the management/board level.

And instead of resolving this and presenting a unified strategy to the company they have instead allowed for this split to be replicated everywhere. Everyone who was committed to a pro-profit company has to ask if they are next to be treated like Sam.

It's incredibly destabilising and unnecessary.


> Everyone who was committed to a pro-profit company has to ask if they are next to be treated like Sam.

They probably joined because it was the most awesome place to pursue their skills in AI, but they _knew_ they were joining an organization with explicitly not a profit goal. If they hoped that profit chasing would eventually win, that's their problem and, frankly, having this wakeup call is a good thing for them so they can reevaluate their choices.


The possibility of getting fired is an occupational hazard for anyone working in any company, unless something in your employment contract says otherwise. And even then, you can still be fired.

Biz 101.

I don't know why people even need to be explained this, except for ignorance of basic facts of business life.


Let the two sides now create separate organizations and pursue their respective pure undivided priority to the fullest. May the competition flow.


> e.g. Satya is furious

Oh! So now you've got him furious? When just yesterday he made a rushed statement to stand by Mira.

https://blogs.microsoft.com/blog/2023/11/17/a-statement-from...


>They have blind-sided partners

This is the biggest takeaway for me. People are building businesses around OpenAI APIs and now they want to suddenly swing the pendulum back to being a fantasy AGI foundation and de-emphasize the commercial aspect? Customers are baking OpenAI's APIs into their enterprise applications. Without funding from Microsoft their current model is unsustainable. They'll be split into two separate companies within 6 months in my opinion.


I'm sure my coworkers at [retailer] were not happy to be even shorter staffed than usual when I was ambush fired, but no one who mattered cared, just as no one who matters cares when it happens to thousands of workers every single day in this country. Sorry to say, my schadenfreude levels are quite high. Maybe if the practice were TRULY verboten in our society... but I guess "professional" treatment is only for the suits and wunderkids.


I have noticed you decided to use several German words in your reply. Trying not to be petty, but at least you should attempt to write them correctly: it's either Wunderkind (the German word for a child prodigy) or the English translation, wonder kid.


You are correct, though I must be Mandela Effect-ing, because I could have sworn that "wunderkid" was an accepted American English corruption of the original term, a la... Well, "a la" (à la).

My use of "schadenfreude", in general, can be attributed largely to Avenue Q and Death Note. Twice is coincidence.

EDIT: I just noticed "verboten." Now I'm worried.


And the stupid thing is, they could have just used the allegations his sister made against him as the reason for the firing and ridden off into the sunset, scot-free.


I'm glad they didn't. She has enough troubles without a target like that on her back.


I thought the for-profit AI startup with no higher purpose was OpenAI itself.


OpenAI is a nonprofit charity with a defined charitable purpose. It has a for-profit subsidiary that is explicitly subordinated to the purpose of the nonprofit, to the extent that investors in the subsidiary are advised in the operating agreement to treat their investments more like donations: the firm will prioritize the charitable function of the nonprofit, which retains full governance power over the subsidiary, over returning profits, which it may never do.


It is, only it has an exotic ownership structure. Sutskever has just used the features of that structure to install himself as the top dog. The next step is undoubtedly packing the board with his loyalists.

Whoever thinks you can tame a 100 billion dollar company by putting a "non-profit" in charge of it, clearly doesn't understand people.


>Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

Is Microsoft a higher purpose?


You're entitled to your opinions.

But as far as I can tell, unless you are in the exec suites at both OpenAI and at Microsoft, these are just your opinions, yet you present them as fact.


The way Altman behaved and manipulated the board to form this Frankenstein company is also appalling. I think it's clear now that the OpenAI board are not business people, and they had no idea how to work with someone as cold and manipulative as Altman; thus they blundered and made fools of themselves, as often happens to the naive.


> Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

If it was so easy to go to the back of the queue and become a threat, Open AI wouldn't be in the dominant position they're in now. If any of the leavers have taken IP with them, expect court cases.


You assume they were indeed blindsided, which I very much doubt.

I think it’s a good outcome overall. More decentralization and focused research, and a new company that focuses on product.


Keep in mind that the rest of the board members have ties to US intelligence. Something isn't right here.


Do you have citations for that? That’s interesting if true


I'm pretty sure Joseph Gordon-Levitt's wife isn't a CIA plant.


She works for RAND Corporation


There had better be US intelligence crawling all over the AI space, otherwise we are all in very deep shit.



