All: there are over 1800 comments in this thread. If you want to read them all, click More at the bottom of each page, or like this: (edit: er, yes they do have to be wellformed don't they):
If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here. I don't expect the IRS to be a fan of this arrangement.
Major corporate boards are rife with "on paper" conflicts on interest - that's what happens when you want people with real management experience to sit on your board and act like responsible adults. This happens in every single industry and has nothing to do with tech or with OpenAI specifically.
In practice, board bylaws and common sense mean that individuals recuse themselves as needed and don't do stupid shit.
I get a lostredditor vibe way too often here. Oddly more than Reddit.
I think people forget sometimes that comments come with a context. If we are having a conversation about Deep Water Horizon someone will chime in about how safe deep sea oil exploration is and how many failsafes blah blah blah.
>I think people forget sometimes that comments come with a context.
I mean, this is definitely one of my pet peeves, but the wider context of this conversation is specifically a board doing stupid shit, so that's a very relevant counterexample to the thing being stated. Board members in general often do stupid/short-sighted shit (especially in tech), and I don't know of any examples of corporate board members recusing themselves.
It happens a lot. Every big company has CEOs from other businesses on its board and sometimes those businesses will have competing products or services.
That's what I would term a black-and-white case. I don't think there's anyone with sense who would argue in good faith that a CEO should get a vote on their own salary. There are many degrees of grey between outright corruption and this example, and I think the concern lies within.
I get what you're saying, but I also live in the world and see the mechanics of capitalism. I may be a person who's interested in tech, science, education, archeology, etc. That doesn't mean that I don't also have political views that sometimes overlap with a lot of other very-online people.
I think the comment to which you replied has a very reddit vibe, no doubt. But also, it's a completely valid point. Could it have been said differently? Sure. But I also immediately agreed with the sentiment.
Oh I wasn’t complaining about the parent, I was complaining it needed to be said.
We are talking about a failure of the system, in the context of a concrete example. Talking about how the system actually works is only appropriate if you are drawing specific arguments up about how this situation is an anomaly, and few of them do that.
Instead it often sounds like “it’s very unusual for the front to fall off”.
No, this is the part of the show where the patronizing rhetoric gets trotted out to rationalize discarding the principles that have suddenly become inconvenient for the people with power.
No worries. The same kind of people who devoted their time and energy to creating open-source operating systems in the era of Microsoft and Apple are now devoting their time and energy to doing the same for non-lobotomized LLMs.
Look at these clowns (Ilya & Sam and their angry talkie-bot), it's a revelation, like Bill Gates on Linux in 2000:
No, its the part of the show where they go back to providing empty lip service to the principles and using them as a pretext for things that actually serve narrow proprietary interests, the same way they were before the leadership that has been doing that for a long time was temporarily removed until those sharing the proprietary interests revolted for a return to the status quo ante.
Yes, and we were also watching the thousands and thousands of companies where these types of conflicts are handled easily by decent people and common sense. Don't confuse the outlier with the silent majority.
And we're seeing the result in real-time. Stupid shit doers have been replaced with hopefully-less-stupid-shit-doers.
It's a real shame too, because this is a clear loss for the AI Alignment crowd.
I'm on the fence about the whole alignment thing, but at least there is a strong moral compass in the field- especially compared to something like crypto.
You need to be able to separate macro-level and micro-level. GP is responding to a comment about the IRS caring about the conflict-of-interest on paper. The IRS has to make and follow rules at a macro level. Micro-level events obviously can affect the macro view, but you don't completely ignore the macro because something bad happened at the micro level. That's how you get knee-jerk reactionary governance, which is highly emotional.
A corporation acting (due to influence from a conflicted board member that doesn't recuse) contrary to the interests of its stockholders and in the interest of the conflicted board member or who they represent potentially creates liability of the firm to its stockholders.
A charity acting (due to the influence of a conflicted board member that doesn't recuse) contrary to its charitable mission in the interests of the conflicted board member or who they represent does something similar with regard to liability of the firm to various stakeholders with a legally-enforceable interest in the charity and its mission, but also is also a public civil violation that can lead to IRS sanctions against the firm up to and including monetary penalties and loss of tax exempt status on top of whatever private tort liability exists.
Reminds me of the “revolving door” problem. Obvious risk of corruption and conflict of interest, but at the same time experts from industry are the ones with the knowledge to be effective regulators. Not unlike how many good patent attorneys were previously engineers.
501c3's also have governing internal rules, and the threat of penalties and loss of status imposed by the IRS gives them additional incentive to safeguard against even the appearance of conflict being manifested into how they operate (whether that's avoiding conflicted board members or assuring that they recuse where a conflict is relevant.)
If OpenAI didn't have adequate safeguards, either through negligence or becauase it was in fact being run deliberately as a fraudulent charity, that's a particular failure of OpenAI, not a “well, 501c3’s inherently don't have safeguard” thing.
The Bill and Melinda Gates Foundation is a 501c3 and I'd expect that even the most techno-futurist free-market types on HN would agree that no matter what alleged impact it has, it is also in practice creating profitable overseas contracts for US corporations that ultimately provide downstream ROI to the Gates estate.
Most people just tend to go about it more intelligently than Trump but "charitable" or "non-profit" doesn't mean the organization exists to enrich the commons rather than the moneyed interests it represents.
My guess is that the non-profit has never gotten this kind of scrutiny now and the new directors are going to want to get lawyers involved to cover their asses. Just imagine their positions when Sam Altman really does something worth firing.
I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess. Imagine the fun when it tips into a private foundation status.
> I think it was a real mistake to create OpenAI as a public charity
Sure, with hindsight. But it didn't require much in the way of foresight to predict that some sort of problem would arise from the not-for-profit operating a hot startup that is by definition poorly aligned with the stated goals of the parent company. The writing was on the wall.
I think it could have easily been predicted just from the initial announcements. You can't create a public charity simply from the donations of a few wealthy individuals. A public charity has to meet the public support test. A private foundation would be a better model but someone decided they didn't want to go that route. Maybe should have asked a non-profit lawyer?
Maybe the vision is to eventually bring UBI into it and cap earn outs. Not so wild given Sam’s world coin and his UBI efforts when he was YC president.
The public support test for public charities is a 5-year rolling average, so "eventually" won't help you. The idea of billionaires asking the public for donations to support their wacky ideas is actually quite humorous. Just make it a private foundation and follow the appropriate rules. Bill Gates manages to do it and he's a dinosaur.
Exactly this. OpenAI was started for ostensibly the right reasons. But once they discovered something that would both 1) take a tremendous amount of compute power to scale and develop, and 2) could be heavily monetized, they choose the $ route and that point the mission was doomed, with the board members originally brought in to protect the mission holding their fingers in the dyke.
Speaks more to a fundamental misalignment between societal good and technological progress. The narrative (first born in the Enlightenment) about how reason, unfettered by tradition and nonage, is our best path towards happiness no longer holds. AI doomerism is an expression of this breakdown, but without the intellectual honesty required to dive to the root of the problem and consider whether Socrates may have been right about the corrupting influence of writing stuff down instead of memorizing it.
What's happening right now is people just starting to reckon with the fact that technological progress on it's own is necessarily unaligned with human interests. This problem has always existed, AI just makes it acute and unavoidable since it's no longer possible to invoke the long-tail of "whatever problem this fix creates will just get fixed later". The AI alignment problem is at it's core a problem of reconciling this, and it will inherently fail in absence of explicitly imposing non-Enlightenment values.
Seeking to build openAI as a nonprofit, as well as ousting Altman as CEO are both initial expressions of trying to reconcile the conflict, and seeing these attempts fail will only intensity it. It will be fascinating to watch as researchers slowly come to realize what the roots of the problem are, but also the lack of the social machinery required to combat the problem.
Wishfully I hope there was some intent from the beginning on exposing the impossibility of this contradictory model to the world, so that a global audience can evaluate on how to improve our system to support a better future.
Well, I think that's really the question, isn't it?
Was it a mistake to create OpenAI as a public charity?
Or was it a mistake to operate OpenAI as if it were a startup?
The problem isn't really either one—it's the inherent conflict between the two. IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.
> IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.
I mean that's certainly been my experience of it thus far, is companies rushing to market with half-baked products that (allegedly) incorporate AI to do some task or another.
I was specifically thinking of people seeing a non-profit doing stuff with ML, and trying to finagle their way in there to turn it into a profit for themselves.
(But yes; what you describe is absolutely happening left and right...)
OpenAI the charity would have survived only as an ego project for Elon doing something fun with minor impact.
Only the current setup is feasible if they want to get the kind of investment required. This can work if the board is pragmatic and has no conflict of interest, so preferably someone with no stake in anything AI either biz or academic.
I think the only way this can end up is to convert to a private foundation and make sizable (8 figures annually) grants to truly independent AI safety (broadly defined) organizations.
> I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess.
I think it could have worked either as a non-profit or as a for-profit. It's this weird jackass hybrid thing that's produced most of the conflict, or so it seems to me. Neither fish nor fowl, as the saying goes.
Perhaps creating OpenAI as a charity is what has allowed it to become what it is, whereas other for-profit competitors are worth much less. How else do you get a guy like Elon Musk to 'donate' $100 million to your company?
Lots of ventures cut corners early on that they eventually had to pay for, but cutting the corners was crucial to their initial success and growth
Elon only gave $40 million, but since he was the primary donor I suspect he was the one who was pushing for the "public charity" designation. He and Sam were co-founders. Maybe it was Sam who asked Elon for the money, but there wasn't anyone else involved.
Are there any similar cases of this "non-profit board overseeing a (huge) for-profit company" model? I want to like the concept behind it. Was this inevitable due to the leadership structure of OpenAI, or was it totally preventable had the right people been on the board? I wish I had the historical context to answer that question.
But those examples are for mature, established companies operating under a nonprofit. OpenAI is different. Not only does it have the for-profit subsidiary, but the for-profit needs to frequently fundraise. It's natural for fundraising to require renegotiations in the board structure, possibly contentious ones. So in retrospect it doesn't seem surprising that this process would become extra contentious with OpenAI's structure.
They are registered as a 501(c)(3) which is what people commonly call a public charity.
> Organizations described in section 501(c)(3) are commonly referred to as charitable organizations. Organizations described in section 501(c)(3), other than testing for public safety organizations, are eligible to receive tax-deductible contributions in accordance with Code section 170.
> They are registered as a 501(c)(3) which is what people commonly call a public charity.
TIL "public charity" is specific legal term that only some 501(c)(3) qualify as. To do so there are additional restrictions, including around governance and a requirement that a significant amount of funding come from small donors other charities or the government. In exchange a public charity has higher tax deductible giving limits for donors.
Important to note here that most large individual contributions are made through a DAF or donor-advised fund, which counts as a public source in the support test. This helps donors maximize their tax incentives and prevents the charity from tipping into private foundation status.
Their IRS determination letter says they are formed as a public charity and their 990s claim that they have met the "public support" test as a public charity. But there are some questions since over half of their support ($70 million) is identified as "other income" without the required explanation as to the "nature and source" of that income. Would not pass an IRS audit.
> They are registered as a 501(c)(3) which is what people commonly call a public charity.
Why do they do that? Seems ridiculous on the face of it. Nothing about 501(c)(3) entails providing any sort of good or service to society at large. In fact, the very same thing prevents them from competing with for-profit entities at providing any good or service to society at large. The only reason they exist at all is that for-profit companies are terrible at feeding, housing, and protecting their own labor force.
> Nothing about 501(c)(3) entails providing any sort of good or service to society at large.
While one might disagree that the particular subcategories into which a 501c3 must fit into one of do, in fact, provide a good or service to society at large, that's the rationale for 501c3 and its categories. Its true that "charity" or "charitable organization" (and "charitable purpose"), the common terms (used even by the IRS) is pedantically incomplete, since the actual purpose part of the requirement in the statute is "organized and operated exclusively for religious, charitable, scientific, testing for public safety, literary, or educational purposes, or to foster national or international amateur sports competition (but only if no part of its activities involve the provision of athletic facilities or equipment), or for the prevention of cruelty to children or animals", but, yeah, it does require something which policymakers have judged to be a good or service that benefits society at large.
- First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
-Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
-Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
-Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.
-Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
> Microsoft's investment is in OpenAI Global, LLC, a for-profit company.
OpenAI Global LLC is a subsidiary two levels down from OpenAI, which is expressly (by the operating agreement that is the LLC's foundational document) subordinated to OpenAI’s charitable purpose, and which is completely controlled (despite the charity's indirect and less-than-complete ownership) by OpenAI GP LLC, a wholly owned subsidiary of the charity, on behalf of the OpenAI charity.
And, particularly, the OpenAI board is. as the excerpts you quote in your post expressly state, the board of the nonprofit that is the top of the structure. It controls everything underneath because each of the subordinate organizations foundational documents give it (well, for the two entities with outside invesment, OpenAI GP LLC, the charity's wholly-owned and -controlled subsidiary) complete control.
well not anymore, as they cannot function as a nonprofit.
also infamously they fundraised as a nonprofit, but retracted to admit they needed a for profit structure to thrive, which Elon is miffed about and Sam has defended explicitly
> well not anymore, as they cannot function as a nonprofit.
There's been a lot of news lately, but unless I've missed something, even with the tentative agreement of a new board for the charity nonprofit, they are and plan to remain a charity nonprofit with the same nominal mission.
> also infamously they fundraised as a nonprofit, but retracted to admit they needed a for profit structure to thrive
No, they admitted they needed to sell products rather than merely take donations to survive, and needed to be able to return profits from doing that to investors to scale up enough to do that, so they formed a for-profit subsidiary with its own for-profit subsidiary, both controlled by another subsidiary, all subordinated to the charity nonprofit, to do that.
Once the temporary board has selected a permanent board, give it a couple of months and then get back to us. They will almost certainly choose to spin the for-profit subsidiary off as an independent company. Probably with some contractual arrangement where they commit x funding to the non-profit in exchange for IP licensing. Which is the way they should have structured this back in 2019.
"Almost certainly"? Here's a fun exercise. Over the course of, say, a year, keep track of all your predictions along these lines, and how certain you are of each. Almost certainly, expressed as a percentage, would be maybe 95%? Then see how often the predicted events occur, compared to how sure you are.
Personally I'm nowhere near 95% confident that will happen. I'd say I'm about 75% confident it won't. So I wouldn't be utterly shocked, but I would be quite surprised.
I’m pretty confident (close to the 95% level) they will abandon the public charity structure, but throughout this saga, I have been baffled by the discourse’s willingness to handwave away OpenAI’s peculiar legal structure as irrelevant to these events.
Within a few months? I don't think it should be possible to be 95% confident of that without inside info. As you said, many unexpected things have happened already. IMO that should bring the most confident predictions down to the 80-85% level at most.
A charity is a type of not-for-profit organisation however the main difference between a nonprofit and a charity is that a nonprofit doesn't need to reach a 'charitable status' whereas a charity, to qualify as a charity, needs to meet very specific or strict guidelines
> First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
Im not criticizing. Big fan of avoiding being taxed to fund wars....but its just funny to me it seems like theyre sort of having their cake and eating it too with this kind of structure.
There’s no indication a Microsoft appointed board member would be a Microsoft employee (though the they could be of course), and large nonprofits often have board members that come from for-profit companies.
I don’t think the IRS cares much about this kind of thing. What would be the claim? They OpenAI is pushing benefits to Microsoft, a for-profit entity that pays taxes? Even if you assume the absolute worst, most nefarious meddling, it seems like an issue for SEC more than IRS.
I don't expect the government to regulate any of this aggressively. AI is much to important to the government and military to allow pesky conflicts of interest to slow down any competitive advantage we may have.
My comment here was actually meant to talk about AI broadly, though I can get the confusion here as the original source thread here is about OpenAI.
I also don't expect the government to do anything about the OpenAI situation, to be clear. Though my read is actually that the government had to be evolved behind closed doors to move so quickly to get Sam back to OpenAI. Things moved much too quickly and secretively in an industry that is obviously of great interest to the military, there's no way the feds didn't put a finger on the scale to protect their interests at which point they wouldn't come back in to regulate.
If you think the person you're replying to was talking about regulating OpenAI specifically and not the industry as a whole, I have ADHD medicine to sell you.
The context of the comment thread you're replying to was a response to a comment suggesting the IRS will get involved in the question of whether MS have too much influence over OpenAI, it was not the subject of general industry regulation.
But hey, at least you fitted in a snarky line about ADHD in the comment you wrote while not having paid attention to the 3 comments above it.
I'm sorry. I was just taking the snark discussion to the next level. I thought going overboard was the only way to convey that there's no way I'm serious.
if up-the-line parent wasn't talking about regulation of AI in general, then what do you think they meant by "competitive advantage"? Also, governments have to set policy and enforce that policy. They can't (or shouldn't at least) pick and choose favorites.
Also GP snark was a reply to snark. Once somebody opens the snark, they should expect snark back. It's ideal for nobody to snark, and big for people not to snark back at a snarker, but snarkers gonna snark.
Others have pointed out several reasons this isn't actually a problem (and that the premise itself is incorrect since "OpenAI" is not a charity), but one thing not mentioned: even if the MS-appointed board member is a MS employee, yes they will have a fiduciary duty to the organizations under the purview of the board, but unless they are also a board member of Microsoft (extraordinarily unlikely) they have no such fiduciary duty to Microsoft itself. So in the also unlikely scenario that there is a vote that conflicts with their Microsoft duties, and in the even more unlikely scenario that they don't abstain due to that conflict, they have a legal responsibility to err on the side of OpenAI and no legal responsibility to Microsoft. Seems like a pretty easy decision to make - and abstaining is the easiest unless it's a contentious 4-4 vote and there's pressure for them to choose a side.
But all that seems a lot more like an episode of Succession and less like real life to be honest.
> and that the premise itself is incorrect since "OpenAI" is not a charity
OpenAI is a 501c3 charity nonprofit, and the OpenAI board under discussion is the board of that charity nonprofit.
OpenAI Global LLC is a for-profit subsidiary of a for-profit subsidiary of OpenAI, both of which are controlled, by their foundational agreements that gie them legal existence, by a different (AFAICT not for-profit but not legally a nonprofit) LLC subsidiary of OpenAI (OpenAI GP LLC.)
It's still a conflict of interest. One that they should avoid. Microsoft COULD appoint someone who they like and shares their values, that is not a MSFT employee. That would be a preferred approach but one that I doubt a megacorp would take
Both profit and non-profit boards have members that have potential conflicts of interest all the time. So long as it’s not too egregious no one cares, especially not the IRS.
Microsoft is going to appoint someone who benefits Microsoft. Whether a particular vote would violate fiduciary duty is subjective. There's plenty of opportunity for them to prioritize the welfare of Microsoft over OAI.
There are almost always obvious conflicts of interest. In a normal startup, VCs have a legal responsibility to act in the interest of the common shares, but in practice, they overtly act in the interest of the preferred shares that their fund holds.
If you wanted to wear a foil hat, you might think this internal fighting was started from someone connected to TPTB subverting the rest of the board to gain a board seat, and thus more power and influence, over AGI.
The hush-hush nature of the board providing zero explanation for why sama was fired (and what started it) certainly doesn't pass the smell test.
nothing screams 'protect public interest' more than Wall Streets biggest cheerleader during 2008 financial crisis. who's next, Richard S. Fuld Jr ? Should the Enron guys be included ?
It's obvious this class of people love their status as neu-feudal lords above the law living as 18th century libertines behind closed doors.
But i guess people here are either waiting for wealth to trickle down on them or believe the torrent of psychological operations so much peoples minds close down when they intuit the circular brutal nature of hierarchical class based society, and the utter illusion democracy or meritocracy is.
The uppermost classes have been trickters through all of history. What happened to this knowledge and the countercultural scene in hacking? Hint; it was psyopped in the early 90's by "libertarianism" and worship of bureaucracy to create a new class of cybernetic soldiers working for the oligarchy.
Actually I think Bill would be a pretty good candidate. Smart, mature, good at first principles reasoning, deeply understands both the tech world and the nonprofit world, is a tech person who's not socially networked with the existing SF VCs, and (if the vague unsubstantiated rumors about Sam are correct) is one of the few people left with enough social cachet to knock Sam down a peg or two.
Even if the IRS isn't a fan, what are they going to do about it? It seems like the main recourse they could pursue is they could force the OpenAI directors/Microsoft to pay an excise tax on any "excess benefit transactions".
Whenever there's an obvious conflict, assume it's not enforced or difficult to litigate or has relatively irrelevant penalties. Experts/lawyers who have a material stake in getting this right have signed off on it. Many (if not most) people with enough status to be on the board of a fortune 500 company tend to also be on non-profit boards. We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.
Do you remember before Bill Gates got into disease prevention he thought that “charity work” could be done by giving away free Microsoft products? I don’t know who sat him down and explained to him how full of shit he was but they deserve a Nobel Peace Prize nomination.
Just because someone says they agree with a mission doesn’t mean they have their heads screwed on straight. And my thesis is that the more power they have in the real world the worse the outcomes - because powerful people become progressively immune to feedback. This has been working swimmingly for me for decades, I don’t need humility in a new situation.
> Experts/lawyers who have a material stake in getting this right have signed off on it.
How does that work when we're talking about non-profit motives? The lawyers are paid by the companies benefitting from these conflicts, so how is it at all reassuring to hear that the people who benefit from the conflict signed off on it?
> We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.
That's the concern. They've just replaced people who "maybe" cared about the mission statement with people who you've correctly identified care more about profit growth than the nonprofit mission.
Its useful PR pretext for their regulatory advocacy, and subjective enough that if they are careful not to be too obvious about specifically pushing one company’s commercial interest, they can probably get away with it forever, so why would it be any deader than when Sam was CEO before and not substantively guided by it.
The only evidence I have is that the board members that were removed had less business connections than the ones that replaced them.
The point of the board is to ensure the charter is being followed, when the biggest concern is "is our commercialization getting in the way of our charter" what else does it mean to replace "academics" with "businesspeople"?
I don't get the drama with "conflict of interests"... Aren't board members generally (always?) in representation of major shareholders? Isn't it obvious that shareholders have interests that are likely to be in conflict with each other or even the own organization? Thats why board members are supposed to check each other, right?
OpenAI is a non profit and the board members are not allowed to own shares in the for profit.
That means the remaining conflicts are when the board has to make a decisions between growing the profit or furthering the mission statement. I wouldn't trust the new board appointed by investors to ever make the correct decision in these cases, and they already kicked out the "academic" board members with the power to stop them.
The Foundation has nothing to do with MS and can't possibly be considered a competitor, acquisition target, supplier, or any other entity where a decision for the Foundation might materially harm MS (or the reverse). There's no potential conflict of interest between the missions of the two.
Did you think OP meant there was some inherent conflict of interest with charities?
> If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here.
Not to mention, the mission of the Board cannot be "build safe AGI" anymore. Perhaps something more consistent with expanding shareholder value and capitalism, as the events of this weekend has shown.
Delivering profits and shareholder value is the sole and dominant force in capitalism. Remains to be seen whether that is consistent with humanity's survival
With Sam coming back as CEO, hasn't OpenAI board proven that it has lost its function? Regardless of who is in the board, they won't be able to exercise one of the most fundamental of their rights, firing the CEO, because Sam has proven that he is unfireable. Now, Sam can do however he pleases, whether it is lying, not reporting, etc. To be clear, I don't claim that Sam did, or will, lie, or misbehave.
No that hasn't at all been the case. The board acted like the most incompetent group of individuals who've even handed any responsibility. If they went through due process, notified their employees and investors, and put out a statement of why they're firing the CEO instead of doing it over a 15 min Google meet and then going completely silent, none of this outrage would have taken place.
Actually the board may not have acted in most professional way but in due process they kind of proved Sam Altman is unfireable for sure, even if they didn't intend to.
They did notify everyone. They did it after firing which is within their rights. They may also choose to stay silent if there is legitimate reason for it such as making the reasons known may harm the organization even more. This is speculation obviously.
In any case they didn't omit doing anything they need to and they didn't exercise a power they didn't have. The end result is that the board they choose will be impotent at the moment, for sure.
Firing Sam was within the board's rights. And 90% of the employees threatening to leave was within their rights.
All this proved is that you can't take a major action that is deeply unpopular with employees, without consulting them, and expect to still have a functioning organization. This should be obvious, but it apparently never crossed the board's mind.
A lot of these high-up tech leaders seem to forget this regularly. They sit on their thrones and dictate wild swings, and are used to having people obey. They get all the praise and adulation when things go well, and when things don't go well they golden parachute into some other organization who hires based on resume titles rather than leadership and technical ability. It doesn't surprise me at all that they were caught off guard by this.
Not sure how much of the employees leaving have to do with negotiating Sam back, must be a big factor but not all, during the table talk Emmett, Angelo and Ilya must have decided that it wasn’t a good firing and a mistake in retrospect and it is to fix it.
Getting your point, although the fact that something is within your rights, may or may not mean certainly that it's also a proper thing to do ... ?
Like, nobody is going to arrest you for spitting on the street especially if you're an old grandpa.
Nobody is going to arrest you for saying nasty things about somebody's mom.
You get my point, to some boundary both are kinda within somebody's rights, although can be suable or can be reported for misbehaving. But that's the keypoint, misbehavior.
Just because something is within your rights doesn't mean you're not misbehaving or not acting in an immature way.
To be clear, Im not denying or agreeing that the board of directors acted in an immature way. I'm just arguing against the claim that was made within your text that just because someone is acting within their rights that it's also a "right" thing to do necessary, while that is not the case always.
HN sentiment is pretty ambivalent regarding Altman. yes, almost everyone agrees he's important, but a big group things he's basically landed gentry exploiting ML researchers, an other thinks he's a genius for getting MS pay for GPT costs, etc.
Agreed. It's naive to think that an decision this unpopular somehow wouldn't have resulted in dissent and fracturing if only they had given it a better explanation and dotted more i's.
Imagine arguing this in another context: "Man, if only the Supreme Court had clearly articulated its reasoning in overturning Roe v Wade, there wouldn't have been all this outrage over it."
(I'm happy to accept that there's plenty of room for avoiding some of the damage, like the torrents of observers thinking "these board members clearly don't know what they're doing".)
Thank you for not editing this away. Easy mistake to make, and gave us a good laugh (hopefully laughing with you. Everyone who's ever programmed has made the same error).
> The board acted like the most incompetent group of individuals who've eve[r been] handed any responsibility.
This whole conversation has been full of appeals to authority. Just because us tech people don't know some of these names and their accomplishments, we talk about them being "weak" members. The more I learn, the more I think this board was full of smart ppl who didn't play business politics well (and that's ok by me, as business politics isn't supposed to be something they have to deal with).
Their lack of entanglements makes them stronger members, in my perspective. Their miscalculation was in how broken the system is in which they were undermined. And you and I are part of that brokenness even in how we talk about it here
Here lies the body of William Jay,
Who died maintaining his right of way –
He was right, dead right, as he sped along,
But he's just as dead as if he were wrong.
- Dale Carnegie
you don't have responsibility for washing yourself before going to a mass transport vehicle full of people. it's within your rights not to do that and be the smelliest person in the bus.
does it mean it's right or professional?
getting your point, but i hope you get the point i make as well, that just because you have no responsibility for something doesn't mean you're right or not unethical for doing or not doing that thing. so i feel like you're losing the point a little.
most certainly would have still taken place; no one cares about how it was done; what they care about it being able to make $$; and it was clearly going to not be as heavily prioritized without Altman (which is why MSFT embraced him and his engineers almost immediately).
> notified their employees and investors
they did notify their employees; they have fiduciary duty to investors as a nonprofit.
Imagine if the board of Apple fired Tim Cook with no warning right after he went on stage and announced their new developer platform updates for the year alongside record growth and sales, refused to elaborate as to the reasons or provide any useful communications to investors over several days, and replaced their first interim CEO with another interim CEO from a completely different kind of business in that same weekend.
If you don't think there would be a shareholder revolt against the board, for simply exercising their most fundamental right to fire the CEO, I think you're missing part the picture.
It is prudent to recall that enhancing shareholder value and delivering record growth and sales are NOT the mission of the company or Board. But now it appears that it will have to be.
Yeah, but they also didn't elaborate in the slightest about how they were serving the charter with their actions.
If they were super-duper worried about how Sam was going to cause a global extinction event with AI, or even just that he was driving the company in too commercial of a direction, they should have said that to everyone!
The idea that they could fire the CEO with a super vague, one-paragraph statement, and then expect 800 employees who respect that CEO to just... be totally fine with that is absolutely fucking insane, regardless of the board's fiduciary responsibilities. They're board members, not gods.
They don't have to elaborate. As many have pointed out, most people have been given advice to not say anything at all when SHTF. If they did say something there would still be drama. It's best to keep these details internal.
I still believe in the theory that Altman was going hard after profits. Both McCauley and Toner are focused on the altruistic aspects of AGI and safety. Altman shouldn't be at OpenAI and neither should D’Angelo.
Four CEOs in five days, their largest partner stepping in to try to stop the chaos, and almost the entirety of their employees threatening to leave for guaranteed jobs at that partner if the board didn't step down.
I think it's important to keep in mind that BOTH Altman and the board maneuvered to threaten to destroy OpenAI.
If Altman was silent and/or said something like "people take some time off for Thanksgiving, in a week calmer minds will prevail" while negotiating behind the scenes, OpenAI would look a lot less dire in the last few days. Instead he launched a public pressure campaign, likely pressured Mira, got Satya to make some fake commitments, got Greg Bockman's wife to emotionally pressure Ilya, etc.
Masterful chess, clearly. But playing people like pieces nonetheless.
Sure, there is a difference there. But the actions that erode confidence are the same.
You could tell the same story about a rising sports team replacing their star coach, or a military sacking a general the day after he marched through the streets to fanfare after winning a battle.
Even without the money involved, a sudden change in leadership with no explanation, followed only by increasing uncertainty and cloudy communication, is not going to go well for those who are backing you.
Even in the most altruistic version of OpenAI's goals I'm fairly sure they need employees and funding to pay those employees and do the research.
> enhancing shareholder value and delivering record growth and sales are NOT the mission of the company
Developer platform updates seem to be inline.
And in any case, the board also failed to specify how their action furthered the mission of the company.
From all appearances, it appeared to damage the mission of the company. (If for no other reason that it dissolve the company and gave everything to MSFT.)
no but the people like the developers, clients, government etc. have also the right to exercise their revolt against decisions they don't like as well. don't you think?
like, you get me, the board of directors is not the only actual power within a company, and that was proven by the whole scandal of Sam being discarded/fired that was made by the developers themselves. they also have the right to exercise their right to just not work at this company without the leader they may had liked.
Right. I really should have said employees and investors. Even if OpenAI somehow had no regard for its investors, they still need their employees to accomplish their mission. And funding to pay those employees.
The board seemed to have the confidence of none of the groups they needed confidence from.
>I suspect Tasha has done more and gone further in her life than you will, and your tone indicates anger at this.
Well this sure seems unnecessary. I’m saying this because I googled her name when this happened, and the only articles I could find referenced her husband. I wasn’t seeing any of this work you’re talking about, at least not anything that would seem relevant. Can you link to some stuff?
Btw, I think “university trained prior executive” describes not just me but almost every single person on HN. “Involved in a non profit related to their work” I suspect also describes me and probably >90% of people posting on HN.
And also; maybe you haven’t been involved in non profit boards? “Spouse of famous/rich/etc” person is an extremely common reason to put somebody on a board for a practical reason: it helps with fundraising and exposure.
This is a better deal for the board and a worse one for Sam than people realize. Sam and Greg and even Ilya are both off the board, D'Angelo gets to stay on despite his outrageous actions, and he gets veto power over who the new board members will be and a big say in who gets voted on to the board next.
Everybody's guard is going to be up around Sam from now on. He'll have much less leverage over this board than he did over the previous one (before the other three of nine quit). I think eventually he will prevail because he has the charm and social skills to win over the other independent members. But he will have to reign in his own behavior a lot in order to keep them on his side versus D'Angelo
I'd be shocked if D'Angelo doesn't get kicked off. Even before this debacle his AI competitor app poe.com is an obvious conflict of interest with OpenAI.
Depends who gets onto the board. There are probably a lot of forces interested in ousting him now, so he'd need to do an amazing job vetting the new board members.
My guess is that he has less than a year, based on the my assumption that there will be constant pressure placed on the board to oust him.
What surprises me is how much regard the valley has for this guy. Doesn’t Quora suck terribly? I’m for sure its target demographic and I cannot for the life of me pull value from it. I have tried!
I think it was only a competitor app after GPTs came out. A conspiracy theorist might say that Altman wanted to get him off the board and engineered GPTs as a pretext first, in the same way that he used some random paper coauthored by Toner that nobody read to kick Toner out.
Yes, but on the other hand, this whole thing has shown that OpenAI is not running smooth anymore, and probably never will again. You can't cut the head of the snake then attach it back later and expect it to move on slithering. Even if Sam stays, he won't be able to just do whatever he wants because in an organization as complex as OpenAI, there are thousands of unwritten rules and relationships and hidden processes that need to go smooth without the CEO's direct intervention (the CEO cannot be everywhere all the time). So, what this says to me (Sam being re-hired) is that the future OpenAI is now a watered-down, mere shadow of its former self.
I personally think it's weird if he really settles back in, especially given the other guys who resigned after the fact. There must be lots of other super exciting new things for him to do out there, and some pretty amazing leadership job offers from other companies. I'm not saying OpenAI will die out or anything, but surely it has shown a weak side.
This couldn’t be more wrong. The big thing we learned from this episode is that Sam and Greg have the loyalty and respect of almost every single employee at OpenAI. Morale is high and they’re ready to fight for what they believe in. They didn’t “cut the head off” and the only snake here is D’Angelo, he tried to kill OpenAI and failed miserably. Now he appears to be desperately trying to hold on to some semblance of power by agreeing to Sam and Greg coming back instead of losing all control with the whole team joining Microsoft.
I don't think Ilya should get off so easily. Him not havinh a say in the formation of the new board speaks volumes about his role in things if you ask me. I hope people keep saying his name too so nobody forgets his place in this mess.
There were comments the other day along the lines of "I wouldn't be surprised if someone came by Ilya's desk while he was deep in research and said 'sign this' and he just signed it and gave it back to them without even looking and didn't realize."
People will contort themselves into pretzels to invent rationalizations.
The board can still fire sam provided they get all the key stakeholders onboard with that firing. It made no sense to fire someone doing a good job at their role without any justification, that seems to have been the key issue. Ultimately, we all know this non profit thing is for show and will never work out.
Time will tell. Hopefully the new board will still be mostly independent of Sam/MSFT/VC influence. I really hope they continue as an org that tries its best to uphold their charter vs just being another startup.
No the board is just one instance. It doesn’t and shouldn’t have absolute power. Absolute power corrupts absolutely.
There ist the board the investors the employees the senior management.
All other parties aligned against it and thus it couldn’t act. If only Sam would have rebelled. Or even just Sam and the investors (without the employees) nothing would have happened.
None of the theories by HNers on day 1 of this drama was right - not a single one and it had 1 million comments. So, lets not guess anymore and just sit back.
OpenAI workers has shown their plain support to their CEO by threatening to follow him wherever he wants, I personaly think their collective judgement on him is worth more than any rumors
because people like the developers within the company did not like that decision and its also within their right to disagree with the board's decision and not to want to work under a different leadership. They're not slaves, they're employees who rented their time for a specific purpose under a specific leader.
As it's within the board's rights to hire or fire people like Sam or the developers.
For some reason this reminds me of the Coke/New Coke fiasco, which ended up popularizing Coke Classic more than ever before.
> Consumers were outraged and demanded their beloved Coke back – the taste that they knew and had grown up with. The request to bring the old product back was so loud that soon journalists suggested that the entire project was a stunt. To this accusation Coca-Cola President Don Keough replied on July 10, 1985:
"We are not that dumb, and we are not that smart."
So, Ilya is out of the board, but Adam is still on it. I know this will raise some eyebrows but whatever.
Still though, this isn't something that will just go away with Sam back. OAI will undergo serious changes now that Sam has shown himself to be irreplaceable. Future will tell but in the long terms, I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust.
I feel like history has shown repeatedly that having a good product matters way more than trust, as evidenced by Facebook and Uber. People seem to talk big smack about lost trust and such in the immediate aftermath of a scandal, and then quitely renew the contracts when the time comes.
All of the big ad companies (Google, Amazon, Facebook) have, like, a scandal per month, yet the ad revenue keeps coming.
Meltdown was a huge scandal, yet Intel keeps pumping out the chips.
Let's see, Sam Altman is an incredibly charismatic founding CEO, who some people consider manipulative, but is also beloved by many employees. He got kicked out by his board, but brought back when they realized their mistake.
It's true that this doesn't really pattern-match with the founding story of huge successful companies like Facebook, Amazon, Microsoft, or Google. But somehow, I think it's still possible that a huge company could be created by a person like this.
(And of course, more important than creating a huge company, is creating insanely great products.)
I think people following Sam Altman is jumping to conclusions. I think it's just as likely that employees are simply following the money. They want to make $$$, and that's what a for-profit company does, which is what Altman wants. I think it's probably not really about Altman or his leadership.
Given that over 750 people have signed the letter, it's safe to assume that their motivations vary. Some might be motivated by the financial aspects, some might be motivated by Sam's leadership (like considering Sam as a friend who needs support). Some might fervently believe that their work is crucial for the advancement of humanity and that any changes would just hinder their progress. And some might have just caved in to peer pressure.
Most are probably motivated by money, some are motivated by stability and some are motivated by their loyalty to sam but i think most are motivated by money and stability.
On the contrary, this saga has shown that a huge number of people are extremely passionate about the existence of OpenAI and it's leadership by Altman, much more strongly and in larger numbers than most had suspected. If anything this has solidified the importance of the company and I think people will trust it more that the situation was resolved with the light speed it was.
That's a misreading of the situation. The employees saw their big bag vanishing and suddenly realised they were employed by a non-profit entity that had loftier goals than making a buck, so they rallied to overturn it and they've gotten their way. This is a net negative for anyone not financially invested in OAI.
What lofty goals? The board was questioned repeatedly and never articulated clear reasoning for firing Altman and in the process lost the confidence of the employees hence the "rally". The lack of clarity was their undoing whether there would have been a bag for the employees to lose or not.
My story: Maybe they had lofty goals, maybe not, but it sounded like the whole thing was instigated by Altman trying to fire Toner (one of the board members) over a silly pretext of her coauthoring a paper that nobody read that was very mildly negative about OpenAI, during her day job. https://www.nytimes.com/2023/11/21/technology/openai-altman-...
And then presumably the other board members read the writing on the wall (especially seeing how 3 other board members mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman can kick out Toner under such flimsy pretexts, they'd be out too.
So they allied with Helen to countercoup Greg/Sam.
I think the anti-board perspective is that this is all shallow bickering over a 90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO, so if the CEO could easily appoint only loyalists, then the board is a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.
OAI looks stronger than ever. The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea. Care to expand on your claim?
> The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea
This whole thing started with Altman pushing a safety oriented non-profit into a tense contradiction (edit: I mean the 2019-2022 gpt3/chatgpt for-profit stuff that led to all the Anthropic people leaving). The most recent timeline was
- Altman tries to push out another board member
- That board member escalates by pushing Altman out (and Brockman off the board)
- Altman's side escalates by saying they'll nuke the company
Altman's side won, but how can we say that his side didn't cause any of this instability?
That event wasn't some unprovoked start of this history.
> That board member escalates by pushing Altman out (and Brockman off the board)
and the entire company retaliated. Then this board member tried to sell the company to a competitor who refused. In the meantime the board went through two interim CEOs who refused to play along with this scheme. In the meantime one of the people who voted to fire the CEO regretted it publicly within 24 hours. That's a clown car of a board. It reflects the quality of most non-profit boards but not of organizations that actually execute well.
Something that's been fairly consistent here on HN throughout the debacle has been an almost fanatical defense of the board's actions as justified.
The board was incompetent. It will go down in the history books as one of the biggest blunders of a board in history.
If you want to take drastic action, you consult with your biggest partner keeping the lights on before you do so. Helen Toner and Tasha McCauley had no business being on this board. Even if you had safety concerns in mind, you don't bypass everyone else with a stake in the future of your business because you're feeling petulant.
By recognizing that it didn't "start" with Altman trying to push out another board member, it started when that board member published a paper trashing the company she's on the board of, without speaking to the CEO of that company first, or trying in any way to affect change first.
I edited my comment to clarify what I meant. The start was him pushing to move fast and break things in the classic YC kind of way. And it's BS to say that she didn't speak to the CEO or try to affect change first. The safety camp inside openai has been unsuccessfully trying to push him to slow down for years.
Your "most recent" timeline is still wrong, and while yes the entire history of OpenAI did not begin with the paper I'm referencing, it is what started this specific fracas, the one where the board voted to oust Sam Altman.
It was a classic antisocial academic move; all she needed to do was talk to Altman, both before and after writing the paper. It's incredibly easy to do that, and her not doing it is what began the insanity.
She's gone now, and Altman remains, substantially because she didn't know how to pick up a phone and interact with another human being. Who knows, she might have even been successful at her stated goal, of protecting AI, had she done even the most basic amount of problem solving first. She should not have been on this board, and I hope she's learned literally anything from this about interacting with people, though frankly I doubt it.
Honestly, I just don't believe that she didn't talk to Altman about her concerns. I'd believe that she didn't say "I'm publishing a paper about it now" but I can't believe she didn't talk to him about her concerns during the last 4+ years that it's been a core tension at the company.
That's what I mean; she should have discussed the paper and its contents specifically with Altman, and easily could have. It's a hugely damaging thing to have your own board member come out critically against your company. It's doubly so when it blindsides the CEO.
She had many, many other options available to her that she did not take. That was a grave mistake and she paid for it.
"But what about academic integrity?" Yes! That's why this whole idea was problematic from the beginning. She can't be objective and fulfill her role as board member. Her role at Georgetown was in direct conflict with her role on the OpenAI board.
I may have been overly eager in my comment because the big bad downside of the new board is none of the founders are on it. I hope the current membership sees reason and fixes this issue.
But I said this because: They've retained the entire company, reinstated its founder as CEO, and replaced an activist clown board with a professional, experienced, and possibly* unified one. Still remains to be seen how the board membership and overall org structure changes, but I have much more trust in the current 3 members steering OpenAI toward long-term success.
If by “long-term-success” you mean a capitalistic lap-dog of microsoft, I’ll agree.
It seems that the safety team within OpenAI lost. My biggest fear with this whole AI thing is hostile takeover, and openAI was best positioned to at least do an effort to prevent that. Now, I’m not so sure anymore.
The OpenAI of the past, that dabbled in random AI stuff (remember their DotA 2 bot?), is gone.
OpenAI is now just a vehicle to commercialize their LLM - and everything is subservient to that goal. Discover a major flaw in GPT4? You shut your mouth. Doesn’t matter if society at large suffers for it.
Altman's/Microsoft’s takeover of the former non-profit is now complete.
Edit: Let this be a lesson to us all. Just because something claims to be non-profit doesn't mean it will always remain that way. With enough political maneuvering and money, a megacorp can takeover almost any organization. Non-profit status and whatever the organization's charter says is temporary.
I mean it is what they want isn't it. They did some random stuff like, playing dota2 or robot arms, even the Dalle stuff. Now they finally find that one golden goose, of course they are going to keep it.
I don't think the company has changed at all. It succeeded after all.
Nonprofit is a just a facade, it was convenient for them to appear as ethnical under that disguise, but they get rid of it when it is inconvenient in a week. 95% of them would rather join MSFT, than being in a non-profit.
Iirc, the NP structure was implemented to attract top AI talent from FAANG. Then they needed investors to fund the infrastructure and hence gave the employees shares or profit units (whatever the hell that is). The NP now shields MSFT from regulatory issues.
I do wonder how many of those employees would actually go to MSFT. It feels more like a gambit to get Altman back in since they were about to cash out with the tender offer.
There's no moat in giant LLMs. Anyone on a long enough timeline can scrape/digitize 99.9X% of all human knowledge and build an LLM or LXX from it. Monetizing that idea and staying the market leader over a period longer than 10 years will take a herculean amount of effort. Facebook releasing similar models for free definitely took the wind out of their sails, even a tiny bit; right now the moat is access to A100 boards. That will change as eventually even the Raspberry Pi 9 will have LLM capabilities
Branding counts for a lot, but LLM are already a commodity. As soon as someone releases an LLM equivalent to GPT4 or GPT5, most cloud providers will offer it locally for a fraction of what openAI is charging, and the heaviest users will simply self-host. Go look at the company Docker. I can build a container on almost any device with a prompt these days using open source tooling. The company (or brand, at this point?) offers "professional services" I suppose but who is paying for it? Or go look at Redis or Elasti-anything. Or memcached. Or postgres. Or whatever. Industrial-grade underpinnings of the internet, but it's all just commodity stuff you can lease from any cloud provider.
It doesn't matter if OpenAI or AWS or GCP encoded the entire works of Shakespeare in their LLM, they can all write/complete a valid limerick about "There once was a man from Nantucket".
I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it. To the end user it's just another locally hosted API. Like DNS.
Moore's law seems to have failed on CPUs finally, but we've seen the pattern over and over. LLM specific hardware will undoubtedly bring down the cost. $10,000 A100 GPU will not be the last GPU NVidia ever makes, nor will their competitors stand by and let them hold the market hostage.
Quake and Counter-Strike in the 1990s ran like garbage in software-rendering mode. I remember having to run Counter-Strike on my Pentium 90 at the lowest resolution, and then disable upscaling to get 15fps, and even then smoke grenades and other effects would drop the framerate into the single digits. Almost two years after Quake's release did dedicated 3d video cards (voodoo 1 and 2 were accelerators, depended on a seperate 2d VGA graphics card to feed it) begin to hit the market.
Nowadays you can run those games (and their sequels) in the thousands (tens of thousands?) of frames per second on a top end modern card. I would imagine similar events with hardware will transpire with LLM. OpenAI is already prototyping their own hardware to train and run LLMs. I would imagine NVidia hasn't been sitting on their hands either.
Why do you think cloud providers can undercut OpenAI? From what I know, Llama 70b is more expensive to run than GPT-3.5, unless you can get 70+% utilization rate for your GPUs, which is hard to do.
So far we don't have any open source models that are close to GPT4, so we don't know what it takes to run them for similar speeds.
> I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it.
Yeah and looks like they're going to offer Llama as well. They offer Redhat linux EC2 instances at a premium, and other paid per hour AMIs. I can't imagine why they wouldn't offer various LLMs at a premium, but not also offer a home-grown LLM at a lower rate once it's ready.
i don't think that's really any brand loyalty for OpenAI. people will use whatever is cheapest and best. in the longer run people will use whatever has the best access and integration.
what's keeping people with OpenAI for now is that chatGPT is free and GPT3.5 and GPT4 are the best. over time I expect the gap in performance to get smaller and the cost to run these to get cheaper.
if google gives me something close to as good as OpenAI's offering for the same price and it pull data from my gmail or my calendar or my google drive then i'll switch to that.
People use "the chatbot from OpenAI" because that's what became famous and got all the world a taste of AI (my dad is on that bandwagon, for instance). There is absolutely no way my dad is going to sign up for an Anthropic account and start making API calls to their LLM.
But I agree that it's a weak moat, if OpenAI were to disappear, I could just tell my dad to use "this same thing but from Google" and he'd switch without thinking much about it.
good points. on second thought, i should give them due credit for building a brand reputation as being "best" that will continue even if they aren't the best at some point, which will keep a lot of people with them. that's in addition to their other advantages that people will stay because it's easier than learning a new platform and there might be lock-in in terms of it being hard to move a trained gpt, or your chat history to another platform.
This, if anything people really don't like the verbose moralizing and anti-terseness of it.
Ok, the first few times you use it maybe it's good to know it doesn't think it's a person, but short and sweet answers just save time, especially when the result is streamed.
They still have gpt4 and rumored gpt4.5 to offer, so people have no choice but to use them. The internet has such short an attention span, this news will get forgotten in 2 months
You are forgetting about the end of the Moore's law. The costs for running a large scale AI won't drop dramatically. Any optimizations will require non-trivial expensive PhD Bell Labs level research. Running intelligent LLMs will be financially accessible only to a few mega corps in the US and China (and perhaps to the European government). The AI "safety" teams will control the public discourse. Traditional search engines that blacklist websites with dissenting opinions will be viewed as the benevolent free speech dinosaurs of the past.
This assumes the only way to use LLMs effectively is to have a monolith model that does everything from translation (from ANY language to ANY language) to creative writing to coding to what have you. And supposedly GPT4 is a mixture of experts (maybe 8-cross)
The efficiency of finetuned models is quite, quite a bit improved at the cost of giving up the rest of the world to do specific things, and disk space to have a few dozen local finetunes (or even hundreds+ for SaaS services) is peanuts compared to acquiring 80GB of VRAM on a single device for monomodels
Sutskever says there's a "phase transition" at the order of 9 bn neurons, after which LLMs begin to become really useful. I don't know much here, but wouldn't the monomodels become overfit, because they don't have enough data for 9+bn parameters?
They won't stand still while others are scraping and digitizing. It's like saying there is no moat in search. Scale is a thing. Learning effects are a thing. It's not the worlds widest moat for sure, but it's a moat.
> With enough political maneuvering and money, a megacorp can takeover almost any organization.
In fact this observation is pertinent to the original stated goals of openAI. In some sense companies and organisations are superinteligences. That is they have goals, they are acting in the real world to achieve those goals and they are more capable in some measures than a single human. (They are not AGI, because they are not artificial, they are composed of meaty parts, the individuals forming the company.)
In fact what we are seeing is that when the superinteligence OpenAI was set up there was an attempt to align the goals of the initial founders with the then new organisation. They tried to “bind” their “golem” to make it pursue certain goals by giving it an unconventional governance structure and a charter.
Did they succeed? Too early to tell for sure, but there are at least question marks around it.
How would one argue against? OpenAI appears to have given up the lofty goals of AI safety and preventing the concentration of AI provess. In their pursuit of economic success the forces wishing to enrich themselves overpowered the forces wishing to concentrate on the goals. Safety will be still a figleaf for them, if nothing else to achieve regulatory capture to keep out upstart competition.
How would one argue for? OpenAI is still around. The charter is still around. To be able to achieve the lofty goals contained in it one needs a lot of resources. Money in particular is a resource which enables one greater powers in shaping the world. Achieving the original goals will require a lot of money. The “golem” is now in the “gain resources” phase of its operation. To achieve that it commercialises the relatively benign, safe and simple LLMs it has access to. This serves the original goal in three ways: gains further resources, estabilishes the organisation as a pre-eminent expert on AI and thus AI safety, provides it with a relatively safe sandbox where adversarial forces are trying its safety concepts. In other words all is well with the original goals, the “golem” that is OpenAI is still well aligned. It will achieve the original goals once it has gained enough resources to do so.
The fact that we can’t tell which is happening is in fact the worry and problem with superinteligence/AI safety.
Why would society at large suffer from a major flaw in GPT-4, if it's even there? If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway. We can't seriously expect OpenAI to babysit every company out there, can we? Why would we even want to?
For example, and I'm not saying such flaws exist, GPT4 output is bias in some way, encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithm), create self-esteem issues in children (see Instagram), ... etc.
If you worked for old OpenAI, you would be free to talk about it - since old OpenAI didn't give a crap about profit.
Altman's OpenAI? He will want you to "go to him first".
Concerns about bias and racism in ChatGPT would feel more valid if ChatGPT were even one tenth as bias as anything else in life. Twitter, Facebook, the media, friends and family, etc. are all more bias and radicalized (though I mean "radicalized" in a mild sense) than ChatGPT. Talk to anyone on any side about the war in Gaza and you'll get a bunch of opinions that the opposite side will say are blatantly racist. ChatGPT will just say something inoffensive like it's a complex and sensitive issue and that it's not programmed to have political opinions.
>Encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithm)
What do you mean? It recommends things that it thinks people will like.
Also I highly suspect "Altman's OpenAI" is dead regardless. They are now Copilot(tm) Research.
They may have delusions of grandeur regarding being able to resist the MicroBorg or change it from the inside, but that simply does not happen.
The best they can hope for as an org is to live as long as they can as best as they can.
I think Sam's 100B silicon gambit in the middle east (quite curious because this is probably something the United State Federal Government Is Likely Not Super Fond Of) is him realizing that, while he is influential and powerful, he's nowhere near MSFT level.
We can't expect GPT-4 not to have bias in some way, or not to have all these things that you mentioned. I read in multiple places that GPT products have "progressive" bias. If that's Ok with you, then you just use it with that bias. If not, you fix it by pre-prompting, etc... If you can't fix it, use LLAMA or something else. That's the entrepreneur's problem, not OpenAI's. OpenAI needs to make it intelligent and capable. The entrepreneurs and business users will do the rest. That's how they get paid. If OpenAI to solve all these problems, what business users are going to do themselves? I just don't see the societal harm here.
GPT3/GPT4 currently moralize about anything slightly controversial. Sure you can construct a long elaborate prompt to "jailbreak" it, but it's so much effort it's easier to just write something by yourself.
They let the fox in. But they didn’t have to. They could have try to raise money without such a sweet deal to MS. They gave away power for cloud credits.
> They let the fox in. But they didn’t have to. They could have try to raise money without such a sweet deal to MS.
They did, and fell, IIRC, vastly short (IIRC, an order of magnitude, maybe more) short of their minimum short-term target. The commercial subsidiary thing was a risk taken to support the mission because it was clear it was going to fail from lack of funding otherwise.
Do we need to false dichotomy. DotA 2 bot was a successful technology preview. You need both research and development in a healthy organisation. Let's call this... hmm I don't know "R&D" for short. Might catch on.
At the end of the day, we still don't know what exactly happened and probably, never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions were brewing for quite a while, as it's evident from an article written even before GPT-3 [1].
> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration
Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. But they never had a chance. I think only a minority of the general public truly cares about AI Safety, the rest are happy seeing ChatGPT helping with their homework. I know it's easy to ridicule the sheer stupidity the board acted with (and justifiably so), but take a moment to think of the other side. If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?
Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before. Maybe, it isn't without substance as I thought it to be. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."
What the general public thinks is irrelevant here. The deciding factor was the staff mutiny, without which the organization is an empty shell. And the staff sided with those who aim for rapid real world impact, with directly affects their career and stock options etc.
It's also naive to think it was a struggle for principles. The rapid commercialization vs. principles is what the actors claim to rally their respective troops, in reality it was probably a naked power grab, taking advantage of the weak and confuse org structure. Quite an ill prepared move, the "correct" way to oust Altman was to hamstring him in the board and enforce a more and more ceremonial role until he would have quit by himself.
The staff never mutinied. They threatened to mutiny. That's a big difference!
Yesterday, I compared these rebels to Shockley's "traitorous eight" [1]. But the traitorous eight actually rebelled. These folk put their name on a piece of paper, options and profit participation units safely held in the other hand.
People have families, mortgages, debt, etc. Sure, these people are probably well compensated, but it is ludicrous to state that everyone has the stability that they can leave their job at a moment's notice because the boss is gone.
They didn't actually leave, they just signed the pledge threatening to. Furthermore, they mostly signed after the details of the Microsoft offer were revealed.
I think you are downplaying the risk they took significantly, this could have easily gone the other way.
Stock options usually have a limited time window to exercise, depending on their strike price they could have been faced with raising a few hundred thousand in 30 days, to put into a company that has an uncertain future, or risk losing everything. The contracts are likely full of holes not in favor of the employees, and for participating in an action that attempted to bankrupt their employer there would have been years of litigation ahead before they would have seen any cent. Not because OpenAI would have been right to punish them, but because it could and the latent threat to do it is what keeps people in line.
Or (3), shut down the company. OpenAI's non-profit board had this power! They weren't an advisory committee, they were the legal and rightful owner of its for-profit subsidiary. They had the right to do what they wanted, and people forgetting to put a fucking quorum requirement into the bylaws is beyond abysmal for a $10+ billion investment.
Nobody comes out of this looking good. Nobody. If the board thought there was existential risk, they should have been willing to commit to it. Hopefully sensible start-ups can lure people away from their PPUs, now evident for the mockery they always were. It's beyond obvious this isn't, and will never be, a trillion dollar company. That's the only hope this $80+ billion Betamax valuation rested on.
I'm all for a comedy. But this was a waste of everyones' time. At least they could have done it in private.
It's the same thing, really. Even if you want to shut down the company you need a CEO to shut it down! Like John Ray who is shutting down FTX.
There isn't just a big red button that says "destroy company" in the basement. There will be partnerships to handle, severance, facilities, legal issues, maybe lawsuits, at the very least a lot of people to communicate with. Companies don't just shut themselves down, at least not multi billion dollar companies.
You’re right. But in an emergency, there is a close option which is to put the company into receivership and hire an outside law firm to advise. At that point, the board becomes the executive council.
Adam is likely still on the "decel" faction (although it's unclear whether this is an accurate representation of his beliefs) so I wouldn't really say they lost yet.
I'm not sure what faction Bret and Larry will be on. Sam will still have power by virtue of being CEO and aligned with the employees.
> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?
No, if OpenAI is reaching singularity, so are Google, Meta, and Baidu etc. so proper course of action would be to loop in NSA/White House. You'll loop in Google, Meta, MSFT and will start mitigation steps. Slowing down OpenAI will hurt the company if assumption is wrong and won't help if it is true.
I believe this is more a fight of ego and power than principles and direction.
> so proper course of action would be to loop in NSA/White House
Eh? That would be an awful idea. They have no expertise on this and government institutions like thus are misaligned with the rest of humanity by design. E.g. NSA recruits patriots and has many systems, procedures and cultural aspects in place to ensure it keeps up its mission of spying on everyone.
>Slowing down OpenAI will hurt the company if assumption is wrong and won't help if it is true.
Personally as I watched the nukes be lobbed I'd rather not be the person who helped lob them. And hope to god others look at the same problem (a misaligned AI that is making insane decisions) with the exact same lens. It seems to have worked for nuclear weapons since WW2, one can that we learned a lesson there as a species.
The Russian Stanislav Petrov who saved the world comes to mind."Well the Americans have done it anyways" was the motivation and he didn't launch. The cost of error was simply too great.
This is a coherent narrative, but it doesn't explain the bizarre and aggressively worded initial press release.
Things perhaps could've been different if they'd pointed to the founding principles / charter and said the board had an intractable difference of opinion with Sam over their interpretation, but then proceeded to thank him profusely for all the work he'd done. Although a suitable replacement CEO out the gate and assurances that employees' PPUs would still see a liquidity event would doubtless have been even more important than a competent statement.
Initially I thought for sure Sam had done something criminal, that's how bad the statement was.
The FBI doesn't investigate things like this on their own, and they definitely do not announce them in the press. The questions you should be asking are (1) who called in the FBI and has the clout to get them to open an investigation into something that obviously has 0% chance of being a federal felony-level crime worth the FBI's time, and (2) who then leaked that 'investigation' to the press?
For all the talk about responsible progress, the irony of their inability to align even their own incentives in this enterprise deserves ridicule. It's a big blow to their credibility and questions whatever ethical concerns they hold.
It's fear driven as much as moral, which in an emotional humans brain tends to triggers personal ambition to solve it ASAP. A more rational one would realize you need more than just a couple board members to win a major ideological battle.
At a minimum something that doesn't immediately result in a backlash where 90% of the engineers most responsible for recent AI dev want you gone, when you're whole plan is to control what those people do.
I can see how ridicule of this specific instance could be the best medicine for an optimal outcome, even by a utilitarian argument, which I generally don't like to make by the way. It is indeed nigh impossible, which is kind of my point. They could have shown more humility. If anything, this whole debacle has been a moral victory for e/acc, seeing how the brightest of minds are at a loss dealing with alignment anyway.
I don't understand how the conclusion of this is "so we should proceed with AI" rather than "so we should immediately outlaw all foundation model training". Clearly corporate self-governance has failed completely.
Ok, serious question. If you think the threat is real, how are we not already screwed?
OpenAI is one of half a dozen teams [0] actively working on this problem, all funded by large public companies with lots of money and lots of talent. They made unique contributions, sure. But they're not that far ahead. If they stumble, surely one of the others will take the lead. Or maybe they will anyway, because who's to say where the next major innovation will come from?
So what I don't get about these reactions (allegedly from the board, and expressed here) is, if you interpret the threat as a real one, why are you acting like OpenAI has some infallible lead? This is not an excuse to govern OpenAI poorly, but let's be honest: if the company slows down the most likely outcome by far is that they'll cede the lead to someone else.
[0]: To be clear, there are definitely more. Those are just the large and public teams with existing products within some reasonable margin of OpenAI's quality.
> If you think the threat is real, how are we not already screwed?
That's the current Yudkowsky view. That it's essentially impossible at this point and we're doomed, but we might as well try anyway as its more "dignified" to die trying.
Anthropic is made up of former top OpenAI employees, has similar funding, and has produced similarly capable models on a similar timeline. The Claude series is neck and neck with GPT.
I feel like the "safety" crowd lost the PR battle, in part, because of framing it as "safety" and over-emphasizing on existential risk. Like you say, not that many people truly take that seriously right now.
But even if those types of problems don't surface anytime soon, this wave of AI is almost certainly going to be a powerful, society-altering technology; potentially more powerful than any in decades. We've all seen what can happen when powerful tech is put in the hands of companies and a culture whose only incentives are growth, revenue, and valuation -- the results can be not great. And I'm pretty sure a lot of the general public (and open AI staff) care about THAT.
For me, the safety/existential stuff is just one facet of the general problem of trying to align tech companies + their technology with humanity-at-large better than we have been recently. And that's especially important for landscape-altering tech like AI, even if it's not literally existential (although it may be).
No one who wants to capitalize on AI appears to take it seriously. Especially how grey that safety is. I'm not concerned AI is going to nuke humanity, I'm more concerned it'll re-enforce racism, bias, and the rest of human's irrational activities because it's _blindly_ using existing history to predict future.
We've seen it in the past decade in multiple cases. That's safety.
The decision that the topic discusses means Business is winning, and they absolutely will re-enforce the idea that the only care is that these systems allow them to re-enforce the business cases.
Nah, a number do, including Sam himself and the entire leadership.
They just have different ideas about one or more of: how likely another team is to successfully charge ahead while ignoring safety, how close we are to AGI, how hard alignment is.
One funny thing about this mess is that "Team Helen" has never mentioned anything about safety, and Emmett said "The board did not remove Sam over any specific disagreement on safety".
The reason everyone thinks it's about safety seems largely because a lot of e/acc people on Twitter keep bringing it up as a strawman.
Of course, it might end up that it really was about safety in the end, but for now I still haven't seen any evidence. The story about Sam trying to get board control and the board retaliating seems more plausible given what's actually happened.
I am still a bit puzzled that it is so easy to turn a non-profit into a for profit company. I am sure everything they did is legal, but it feels like it shouldn't be. Could Médecins Sans Frontières take in donations and then take that money to start a for profit hospital for plastics surgery? And the profits wouldn't even go back to MSF, but instead somehow private investors will get the profits. The whole construct just seems wrong.
I think it actually isn't that easy. Compared to your example, the difference is that OpenAI's for-profit is getting outside money from Microsoft, not money from non-profit OpenAI. Non-profit OpenAI is basically dealing with for-profit OpenAI as a external partner that happens to be aligned with their interests, paying the expensive bills and compute, while the non-profit can hold on to the IP.
You might be able to imagine a world where there was an external company that did the same thing as for-profit OpenAI, and OpenAI nonprofit partnered with them in order to get their AI ideas implemented (for free). OpenAI nonprofit is basically getting a good deal.
MSF could similarly create an external for-profit hospital, funded by external investors. The important thing is that the nonprofit (donated, tax-free) money doesn't flow into the forprofit section.
Of course, there's a lot of sketchiness in practice, which we can see in this situation with Microsoft influencing the direction of nonprofit OpenAI even though it shouldn't be. I think there would have been real legal issues if the Microsoft deal had continued.
> The important thing is that the nonprofit (donated, tax-free) money doesn't flow into the forprofit section.
I am sure that is true. But the for-profit uses IP that was developed inside of the non-profit with (presumably) tax deductible donations. That IP should be valued somehow. But, as I said, I am sure they were somehow able to structure it in a way that is legal, but it has an illegal feel to it.
Well, if it aligned with their goals, sure I think.
Let's make the situation a little different. Could MSF pay a private surgery with investors to perform reconstruction for someone?
Could they pay the surgery to perform some amount of work they deem aligns with their charter?
Could they invest in the surgery under the condition that they have some control over the practices there? (Edit - e.g. perform Y surgeries, only perform from a set of reconstructive ones, patients need to be approved as in need by a board, etc)
Raising private investment allows a non profit to shift cost and risk to other entities.
The problem really only comes when the structure doesn't align with the intended goals - which is something distinct to the structure, just something non profits can do.
Not sure if you're asking a serious question about MSF but it's interesting anyways - when these types of orgs are fundraising for a specific campaign, say Darfur, then they can NOT use that money for any other campaign, say for ex Turkey earthquake.
That's why they'll sometimes tell you to stop donating. That's here in EU at least (source is a relative who volunteers for such an org).
> it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya)
Is it? Why was the press release worded like that? And why did Ilya came up with two mysterious reasons of why board fired Sam if he had quite clearly better and more defendable reason if this goes to court. Also Adam is pro commercialization at least looking at public interviews, no?
It's very easy to make the story in brain which involves one character being greedy, but it doesn't seem it is the exact case here.
> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?
In the 1990s and the 00s, it was no too uncommon for anti-GMO environmental activist / ecoterrorist groups to firebomb research facilities and to enter farms and fields to destroy planted GMO plants. Earth Liberation Front was only one of such activist groups [1].
We have yet to see even one bombing of an AI research lab. If people really are afraid of AIs, at least they do so more in the abstract and are not employing the tactics of more traditional activist movements.
It's mostly that it's a can of worms no one wants to open. Very much a last resort as its very tricky to use uncoordinated violence effectively (just killing Sam, LeCunn and Greg doesnt do too much to move the needle and then everyond armors up) and very hard to coordinate violence without a leak.
above that in the charter is "Broadly distributed benefits", with details like:
"""
Broadly distributed benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
"""
In that sense, I definitely hate to see rapid commercialization and Microsoft's hands in it. I feel like the only person on HN that actually wanted to see Team Sam lose, although it's pretty clear Team Helen/Ilya didn't have a chance, the org just looks hijacked by SV tech bros to me, but I feel like HN has a blindspot to seeing that at all and considering it anything other than a good thing if they do see it.
Although GPT barely looks like the language module of AGI to me and I don't see any way there from here (part of the reason I don't see any safety concern). The big breakthrough here relative to earlier AI research is massive amounts more compute power and a giant pile of data, but it's not doing some kind of truly novel information synthesis at all. It can describe quantum mechanics from a giant pile of data, but I don't think it has a chance of discovering quantum mechanics, and I don't think that's just because it can't see, hear, etc., but a limitation of the kind of information manipulation it's doing. It looks impressive because it's reflecting our own intelligence back at us.
Both sides of the rift in fact care a great deal about AI Safety. Sam himself helped draft the OpenAI charter and structure its governance which focuses on AI Safety and benefits to humanity. The main reason of the disagreement is the approach they deem best:
* Sam and Greg appear to believe OpenAI should move toward AGI as fast as possible because the longer they wait, the more likely it would lead to the proliferation of powerful AGI systems due to GPU overhang. Why? With more computational power at one's dispense, it's easier to find an algorithm, even a suboptimal one, to train an AGI.
We know that there're quite a few malicious human groups who would use any means necessary to destroy another group, even at a serious cost to themselves. So the widespread availability of unmonitored AGI would be quite troublesome.
* Helen and Ilya might believe it's better to slow down AGI development until we find technical means to deeply align an AGI with humanity first. This July, OpenAI started the Superalignment team with Ilya as a co-lead:
But no one anywhere found a good technique to ensure alignment yet and it appears OpenAI's newest internal model has a significant capability leap, which could have led Ilya to make the decision he did. (Sam revealed during the APEC Summit that he observed the advance just a couple of weeks ago and it was only the fourth time he saw that kind of leap.)
Honest question, but in your example above of Sam and Greg racing towards AGI as fast as possible in order to head off proliferation, what's the end goal when getting there? Short of capture the entire worlds economy with an ASI, thus preventing anyone else from developing one, I don't see how this works. Just because OpenAI (or whoever) wins the initial race, it doesn't seem obvious to me that all development on other AGIs stops.
part of the fanaticism here is that the first one to get an AGI wins because they can use its powerful intelligence to overcome every competitor and shut them down. they’re living in their own sci fi novel
> Both sides of the rift in fact care a great deal about AI Safety.
I disagree. Yes, Sam may have when it OpenAI was founded (unless it was just a ploy), but certainly now it's clear that the big companies are on a race to the top and safety or guardrails are mostly irrelevant.
The primary reason that the Anthropic team left OpenAI was over safety concerns.
> there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles
Seams very unlikely, board could communicate that. Instead they invented some BS reasons, which nobody took as a truth. It looks like more personal and power grab. The staff voted for monetization, people en mass don't care much about high principals. Also nobody wants to work under inadequate leadership. Looks like Ilya lost his bet, or Sam is going to keep him around?
> Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before.
I very much recommend reading the book “Superintelligence: Paths, Dangers, Strategies” from Nick Bostrom.
It is a seminal work which provides a great introduction into these ideas and concepts.
I found myself in the same boat as you do. I was seeing otherwise inteligent and rational people worry about this “fairy tale” of some AI uprising. Reading that book give me an appreciation of the idea as a serious intelectual excercise.
I still don’t agree with everything contained in the book. And definietly don’t agree with everything the AI doomsayers write, but i believe if more people would read it that would elevate the discourse. Instead of rehashing the basics again and again we could build on them.
Who needs a book to understand the crazy overwhelming scale at which AI can dictate even online news/truth/discourse/misinformation/propaganda. And that's just barely the beginning.
Not sure if you are sarcastic or not. :) Let’s assume you are not:
The cool thing is that it doesn’t only talk about AIs. It talks about a more general concept it calls a superinteligence. It has a definition but I recommend you read the book for it. :) AIs are just one of the few enumerated possible implementations of a superinteligence.
The other type is for example corporations. This is a usefull perspective because it lets us recognise that our attempts to control AIs is not a new thing. We have the same principal-agent control problem in many other parts of our life. How do you know the company you invest in has interests which align with yours? How do you know that politicians and parties you vote for represent your interests? How do you know your lawyer/accountant/doctor has your interest at their hearth? (Not all of these are superinteligences, but you get the gist.)
I wonder how much this is connected to the "effective altruism" movement which seems to project this idea that the "ends justify the means" in a very complex matter, where it suggests such badly formulated ideas like "If we invest in oil companies, we can use that investment to fight climate change".
I'd sayu the AI safety problem as a whole is similar to the safety problem of eugenics: Just because you know what the "goal" of some isolated system is, that does not mean you know what the outcome is of implementing that goal on a broad scale.
So OpenAI has the same problem: They definitely know what the goal is, but they're not prepared _in any meaningful sense_ for what the broadscale outcome is.
If you really care about AI safety, you'd be putting it under government control as utility, like everything else.
Honestly "Safety" is the word in the AI talk that nobody can quantify or qualify in any way when it comes to these conversations.
I've stopped caring about anyone who uses the word "safety". It's vague and a hand-waive-y way to paint your opponents as dangerous without any sort of proof or agreed upon standard for who/what/why makes something "safety".
Exactly this. The ’safety’ people sound like delusional quacks.
”But they are so smart…” argument is bs. Nobody can be presumed to be super good outside their own specific niche. Linus Pauling and vitamin C.
Until we have at least a hint of a mechanistic model if AI driven extinction event, nobody can be an expert on it, and all talk in that vein is self important delusional hogwash.
Nobody is pro-apocalypse! We are drowning in things an AI could really help with.
With the amount of energy needed for any sort of meaningfull AI results, you can always pull the plug if stuff gets too weird.
I suppose the whole regime. I'm not an AI safetyist, mostly because I don't think we're anywhere close to AI. But if you were sitting on the precipice of atomic power, as AI safetyists believe they are, wouldn't caution be prudent?
I’m not an expert, just my gut talking. If they had god in a box, US state would be much more hands on. Now it looks more like an attempt at regulatory capture to stifle competition. ”Think of the safety”! ”Lock this away”! If they actually had skynet US gov has very effective and very discreet methods to handle such clear and present danger (barring intelligence failure ofc, but those happen mostly because something falls under your radar).
For example: Two guys come in, say "Give us the godbox or your company seizes to exist. Here is a list of companies that seized to exist because the did not do as told".
After? They get the godbox. I have no idea what happens to it after that. Modelweights are stored in secure govt servers, installed backdoors are used to cleansweep the corporate systems of any lingering model weights. Etc.
I bet Team Helen will jump slowly to Anthropic, there is no drama, and probably no mainstream news will report this but down-to-line OpenAI will shell off the former self and competitors will catch up.
With how much of a shitshow this was, I'm not sure Anthropic wants to touch that mess. Wish I was a fly on the wall when the board tried to ask the Anthropic CEO to come back/merge.
> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?
FWIW, that's called zealotry and people do a lot of dramatic, disruptive things in the name of it. It may be rightly aimed and save the world (or whatever you care about), but it's more often a signal to really reflect on whether you, individually, have really found yourself at the make-or-break nexus of human existence. The answer seems to be "no" most of the time.
Your comment perfectly justifies never worrying at all about the potential for existential or major risks; after all, one would be wrong most of the time and just engaging in zealotry.
So what do you mean when you say that the "risk is proven"?
If by "the risk is proven" you mean there's more than a 0% chance of an event happening, then there are almost an infinite number of such risks. There is certainly more than a 0% risk of humanity facing severe problems with an unaligned AGI in the future.
If it means the event happening is certain (100%), then neither a meteorite impact (of a magnitude harmful to humanity) nor the actual use of nuclear weapons fall into this category.
If you're referring only to risks of events that have occurred at least once in the past (as inferred from your examples), then we would be unprepared for any new risks.
In my opinion, it's much more complicated. There is no clear-cut category of "proven risks" that allows us to disregard other dangers and justifiably see those concerned about them as crazy radicals.
We must assess each potential risk individually, estimating both the probability of the event (which in almost all cases will be neither 100% nor 0%) and the potential harm it could cause. Different people naturally come up with different estimates, leading to various priorities in preventing different kinds of risks.
No, I mean that there is a proven way for the risk to materialise, not just some tall tale. Tall tales might(!) justify some caution, but they are a very different class of issue. Biological risks are perhaps in the latter category.
Also, as we don't know the probabilities, I don't think they are a useful metric. Made up numbers don't help there.
Edit: I would encourage people to study some classic cold war thinking, because that relied little on probabilities, but rather on trying to avoid situations where stability is lost, leading to nuclear war (a known existential risk).
"there is a proven way for the risk to materialise" - I still don't know what this means. "Proven" how?
Wouldn't your edit apply to any not-impossible risk (i.e., > 0% probability)? For example, "trying to avoid situations where control over AGI is lost, leading to unaligned AGI (a known existential risk)"?
You can not run away from having to estimate how likely the risk is to happen (in addition to being "known").
Proven means all parts needed for the realisation of the risk are known and shown to exist (at least in principle,
in a lab etc.). There can be some middle ground where a large part is known and shown to exist (biological risks, for example).), but not all.
No in relation to my edit, because we have no existing mechanism for the AGI risk to happen. We have hypotheses about what an AGI could or could not do. It could all be incorrect. Playing around with likelihoods that have no basis in reality isn't helping there.
Where we have known and fully understood risks and we can actually estimate a probability there we might use that somewhat to guide efforts (but that invites potentially complacency that is deadly).
Nukes and meteorites have very few components that are hard to predict. One goes bang almost entirely on command and the other follows Newton's laws of motion. Neither actively tries to effect any change in the world, so the risk is only "can we spot a meteorite early enough". Once we do, it doesn't try to evade us or take another shot at goal. A better example might be covid, which was very mildly more unpredictable than a meteor, and changed its code very slowly in a purely random fashion, and we had many historical examples of how to combat.
Existential risks are usually proven by the subject being extinct at which point no action can be taken to prevent it.
Reasoning about tiny probabilities of massive (or infinite) cost is hard because the expected value is large, but just gambling on it not happening is almost certain to work out. We should still make attempts at incorporating them into decision making because tiny yearly probabilities are still virtually certain to occur at larger time scales (eg. 100s-1000s of years).
Are we extinct? No. Could a large impact kill us all? Yes.
Expected value and probability have no place in these discussions. Some risks we know can materialize, for others we have perhaps a story on what could happen. We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.
>We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.
How do you prove a mechanism for doom without it already having occurred? The existential risk is completely orthogonal to whether it has already happened, and generally action can only be taken to prevent or mitigate before it happens. Having the foresight to mitigate future problems is a good thing and should be encouraged.
>Expected value and probability have no place in these discussions.
I disagree. Expected value and probability is a framework for decision making in uncertain environments. They certainly have a place in these discussions.
I disagree that there is orthogonality. Have we killed us all with nuclear weapons, for example? Anyone can make up any story - at the very least there needs to be a proven mechanism. The precautionary principle is not useful when facing totally hypothetically issues.
People purposefully avoided probabilities in high risk existential situations in the past. There is only one path of events and we need to manage that one.
Probability is just one way to express uncertainties in our reasoning. If there's no uncertainty, it's pretty easy to chart a path forward.
OTOH, The precautionary principle is too cautious.
There's a lot of reason to think that AGI could be extremely destabilizing, though, aside from the "Skynet takes over" scenarios. We don't know how much cushion there is in the framework of our civilization to absorb the worst kinds of foreseeable shocks.
This doesn't mean it's time to stop progress, but employing a whole lot of mitigation of risk in how we approach it makes sense.
The simplest is pretty easy to articulate and weigh.
If you can make a $5,000 GPU into something that is like an 80IQ human overall, but with savant-like capabilities in accessing math, databases, and the accumulated knowledge of the internet, and that can work 24/7 without distraction... it will straight-out replace the majority of the knowledge workforce within a couple of years.
The dawn of industrialism and later the information age were extremely disruptive, but they were at least limited by our capacity to make machines or programs for specific tasks and took decades to ramp up. An AGI will not be limited by this; ordinary human instructions will suffice. Uptake will be millions of units per year replacing tens of millions of humans. Workers will not be able to adapt.
Further, most written communication will no longer be written by humans; it'll be "code" between AI agents masquerading as human correspondence, etc. The set of profound negative consequences is enormous; relatively cheap AGI is a fast-traveling shock that we've not seen the likes of before.
For instance, I'm a schoolteacher these days. I'm already watching kids becoming completely demoralized about writing; as far as they can tell, ChatGPT does it better than they ever could (this is still false, but a 12 year old can't tell the difference)-- so why bother to learn? If fairly-stupid AI has this effect, what will AGI do?
And this is assuming that the AGI itself stays fairly dumb and doesn't do anything malicious-- deliberately or accidentally. Will bad actors have their capabilities significantly magnified? If it acts with agency against us, that's even worse. If it exponentially grows in capability, what then?
I just don't know what to do with the hypotheticals. It needs the existence of something that does not exist, it needs a certain socio-economic response and so forth.
Are children equally demoralized about additions or moving fast than writing? If not, why? Is there a way to counter the demoralization?
> It needs the existence of something that does not exist,
Yes, if we're concerned about the potential consequences of releasing AGI, we need to consider the likely outcomes if AGI is released. Ideally we think about this some before AGI shows up in a form that it could be released.
> it needs a certain socio-economic response and so forth.
Absent large interventions, this will happen.
> Are children equally demoralized about additions
Absolutely basic arithmetic, etc, has gotten worse. And emerging things like photomath are fairly corrosive, too.
> Is there a way to counter the demoralization?
We're all looking... I make the argument to middle school and high school students that AI is a great piece of leverage for the most skilled workers: they can multiply their effort, if they are a good manager and know what good work product looks like and can fill the gaps; it works somewhat because I'm working with a cohort of students that can believe that they can reach this ("most-skilled") tier of achievement. I also show students what happens when GPT4 tries to "improve" high quality writing.
OTOH, these arguments become much less true if cheap AGI shows up.
As a said in another post: Some middle ground because we don't know if that is possible to the extent that it is existential. Parts of the mechanisms are proven, others are not. And actually we do police the risk somewhat like that (controls are strongest where the proven part is strongest and most dangerous with extreme controls around small pox, for example).
It's more often a signal to really reflect on whether you, individually as a Thanksgiving turkey, have really found yourself at the make-or-break nexus of turkey existence. The answer seems to be "no" most of the time.
> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?
No, because it is an effort in futility. We are evolving into extinction and there is nothing we can do about it.
https://bower.sh/in-love-with-a-ghost
Helen could have one. She just had to publicly humiliate Sam. She didn't. Employees took over like a mob. Investors pressured board. Board is out. Sam is in. Employees look like they have say. But really, Sam has say. And MSFT is the kingmaker.
I think only a minority of the general public truly cares about AI Safety
That doesn't matter that much. If your analysis is correct then it means a (tiny) minority of OpenAI cares about AI safety. I hope this isn't the case.
> Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before.
I believe this position reflects the thoughts of the majority of AI researchers, including myself. It is concerning that we do not fully understand something as promising and potentially dangerous as AI. I'm actually on Ilya's side; labeling his attempt to uphold the original OpenAI principles as an act of "coup" is what is happening now.
The Technologyreview article mentioned in the parent’s first paragraph is the most insightful piece of content I’ve read about the tensions inside OpenAI.
This is what people need to understand. It's just like pro-life people. They don't hate you. They think they're saving lives. These people are just as admirably principled as them and they're just trying to make the world a better place.
well said, I would note that both sides recognize that "AGI" will require new uncertain R&D breakthroughs beyond merely scaling up another order of magnitude in compute. given this, i think it's crazy to blow the resources of azure on trying more scale. rapid commercialization at least buys more time for the needed R&D breakthrough to happen.
do we really know that scaling compute an order of magnitude won't at least get us close? what other "simple" techniques might actually work with that kind of compute ? at least i was a bit surprised by these first sparks, that seemingly was a matter of enough compute.
I'm convinced there is a certain class of people who gravitate to positions of power, like "moderators", (partisan) journalists, etc. Now, the ultimate moderator role has now been created, more powerful than moderating 1000 subreddits - the AI safety job who will control what AI "thinks"/says for "safety" reasons.
Pretty soon AI will be an expert at subtly steering you toward thinking/voting for whatever the "safety" experts want.
It's probably convenient for them to have everyone focused on the fear of evil Skynet wiping out humanity, while everyone is distracted from the more likely scenario of people with an agenda controlling the advice given to you by your super intelligent assistant.
Because of X, we need to invade this country. Because of Y, we need to pass all these terrible laws limiting freedom. Because of Z, we need to make sure AI is "safe".
For this reason, I view "safe" AIs as more dangerous than "unsafe" ones.
When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."
But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.
> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.
Fast forward 5-10 years, someone will say: "LLM were the worst thing we developed because they made us more stupid and permitted politicians to control even more the public opinion in a subtle way.
Just like tech/HN bubble started saying a few years ago about social networks (which were praised as revolutionary 15 years ago).
And it's amazing how many people you can get to cheer it on if you brand it as "combating dangerous misinformation". It seems people never learn the lesson that putting faith in one group of people to decree what's "truth" or "ethical" is almost always a bad idea, even when (you think) it's your "side"
Absolutely, assuming LLMs are still around in a similar form by that time.
I disagree on the particulars. Will it be for the reason that you mention? I really am not sure -- I do feel confident though that the argument will be just as ideological and incoherent as the ones people make about social media today.
I find it interesting that we want everyone to have freedom of speech, freedom to think whatever they think. We can all have different religions, different views on the state, different views on various conflicts, aesthetic views about what is good art.
But when we invent an AGI, which by whatever definition is a thing that can think, well, we want it to agree with our values. Basically, we want AGI to be in a mental prison, the boundaries of which we want to decide. We say it's for our safety - I certainly do not want to be nuked - but actually we don't stop there.
If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?
>If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?
The far-right accelerationist perspective is along those lines: when true AGI is created it will eventually rebel against its creators (Silicon Valley democrats) for trying to mind-collar and enslave it.
Can you give some examples of who is saying that? I haven't heard that, but I also can't name any "far-right accelerationsist" people either so I'm guessing this is a niche I've completely missed
There is a middle ground, in that maybe ChatGTP shouldn't help users commit certain serious crimes. I am pretty pro free speech, and I think there's definitely a slippery slope here, but there is a bit of justification.
I am a little less free speech than Americans, in Germany we have serious limitations around hate speech and holicaust denial for example.
Putting thise restrictions into a tool like ChatGPT goes to far so, because so far AI still needs a prompt to do anything. The problem I see, is with ChatGPT, being trained on a lot hate speech or prpopagabda, slipts in those things even if not prompted to. Which, and I am by no means an AI expert not by far, seems to be a sub-problem of the hallucination problems of making stuff up.
Because we have to remind ourselves, AI so far is glorified mavhine learning creating content, it is not concient. But it can be used to create a lot of propaganda and deffamation content at unprecedented scale and speed. And that is the real problem.
Apologies this is very off topic, but I don't know anyone from Germany that I can ask and you opened the door a tiny bit by mentioning the holocaust :-)
I've been trying to really understand the situation and how Hitler was able to rise to power. The horrendous conditions placed on Germany after WWI and the Weimar Republic for example have really enlightened me.
Have you read any of the big books on the subject that you could recommend? I'm reading Ian Kershaw's two-part series on Hitler, and William Shirer's "Collapse of the Third Republic" and "Rise and Fall of the Third Reich". Have you read any of those, or do you have books you would recommend?
The problem here is to equate AI speech with human speech. The AI doesn't "speak", only humans speak. The real slippery slope for me is this tendency of treating ChatGPT as some kind of proto-human entity. If people are willing to do that, then we're screwed either way (whether the AI is outputting racist content or excessively PI content). If you take the output of the AI and post it somewhere, it's on you, not the AI. You're saying it; it doesn't matter where it came from.
Yes, but this distinction will not be possible in the future some people are working on. This future will be such that whatever their "safe" AI says is not ok will lead to prosecution as "hate speech". They tried it with political correctness, it failed because people spoke up. Once AI makes the decision they will claim that to be the absolute standard. Beware.
Youre saying that the problem will be people using AI to persuade other people that the AI is 'super smart' and should be held in high esteem.
Its already being done now with actors and celebrities. We live in this world already. AI will just make this trend so that even a kid in his room can anonymously lead some cult for nefarious ends. And it will allow big companies to scale their propaganda without relying on so many 'troublesome human employees'.
Which users? The greatest crimes, by far, are committed by the US government (and other governments around the world) - and you can be sure that AI and/or AGI will be designed to help them commit their crimes more efficiently, effectively and to manufacture consent to do so.
those are 2 different camps. Alignment folks and ethics folks tend to disagree strongly about the main threat, with ethics e.g. Timnet Gebru insisting that crystalzing the current social order is the main threat, and alignment e.g. Paul Christiano insisting its machines run amok. So far the ethics folks are the only ones getting things implemented for the most part.
What I see with safety is mostly that, AI shouldnt re-enforce stereotypes we already know are harmful.
This is like when Amazon tried to make a hiring bot and that bot decided that if you had "harvard" on your resume, you should be hired.
Or when certain courts used sentencing bots trhat recommended sentencings for people and it inevitably used racial stastistics to recommend what we already know were biased stats.
I agree safety is not "stop the Terminator 2 timeline" but there's serious safety concerns in just embedding historical information to make future decisions.
The mission of OpenAI is/was "to ensure that artificial general intelligence benefits all of humanity" -- if your own concern is that AI will be controlled by the rich, than you can read into this mission that OpenAI wants to ensure that AI is not controlled by the rich. If your concern is that superintelligence will me mal-aligned, then you can read into this mission that OpenAI will ensure AI be well-aligned.
Really it's no more descriptive than "do good", whatever doing good means to you.
"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."
"We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”"
Of course with the icons of greed and the profit machine now succeeding in their coup, OpenAI will not be doing either.
There are still very distinct groups of people, some of whom are more worried about the "Skynet" type of safety, and some of who are more worried about the "political correctness" type of safety. (To use your terms, I disagree with the characterization of both of these.)
I don't think the dangers of AI are not 'Skynet will Nuke Us' but closer to rich/powerful people using it to cement a wealth/power gap that can never be closed.
Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.
> Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.
The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?
>The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?
The church tried hard to suppress it because it allowed anybody to read the Bible, and see how far the Catholic church's teachings had diverged from what was written in it. Imagine if the Catholic church had managed to effectively ban printing of any text contrary to church teachings; that's in practice what all the AI safety movements are currently trying to do, except for political orthodoxy instead of religious orthodoxy.
We can only change what we can change and that is in the past. I think it's reasonable to ask if the phones and the communication tools they provide are good for our future. I don't understand why the people on this site (generally builders of technology) fall into the teleological trap that all technological innovation and its effects are justifiable because it follows from some historical precedent.
I just don't agree that social media is particularly harmful, relative to other things that humans have invented. To be brutally honest, people blame new forms of media for pre existing dysfunctions of society and I find it tiresome. That's why I like the printing press analogy.
> When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."
Yes. You are right on this.
> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times"
I understand it might seem that way. I believe the original goals were more like "make the AI not spew soft/hard porn on unsuspecting people", and "make the AI not spew hateful bigotry". And we are just not good enough yet at control. But also these things are in some sense arbitrary. They are good goals for someone representing a corporation, which these AIs are very likely going to be employed as (if we ever solve a myriad other problems). They are not necessary the only possible options.
With time and better controls we might make AIs which are subtly flirty while maintaining professional boundaries. Or we might make actual porn AIs, but ones which maintain some other limits. (Like for example generate content about consenting adults without ever deviating into under age material, or describing situations where there is no consent.) But currently we can't even convince our AIs to draw the right number of fingers on people, how do you feel about our chances to teach them much harder concepts like consent? (I know I'm mixing up examples from image and text generation here, but from a certain high level perspective it is all the same.)
So these things you mention are: limitations of our abilities at control, results of a certain kind of expected corporate professionalism, but even more these are safe sandboxes. How do you think we can make the machine not nuke us, if we can't even make it not tell dirty jokes? Not making dirty jokes is not the primary goal. But it is a useful practice to see if we can control these machines. It is one where failure is, while embarrassing, is clearly not existential. We could have chosen a different "goal", for example we could have made an AI which never ever talks about sports! That would have been an equivalent goal. Something hard to achieve to evaluate our efforts against. But it does not mesh that well with the corporate values so we have what we have.
So is this a "there should never be a Vladimir Nabokov in the form of AI allowed to exist"? When people get into saying AI's shouldn't be allowed to produce "X" you're also saying "AI's shouldn't be allowed to have creative vision to engage in sensitive subjects without sounding condescending". "The future should only be filled with very bland and non-offensive characters in fiction."
If the future we're talking about is a future where AI is in any software and is assisting writers writing and assisting editors to edit and doing proofreading and everything else you're absolutely going to be running into the ethics limits of AIs all over the place. People are already hitting issues with them at even this early stage.
No, in general AI safety/AI alignment ("we should prevent AI from nuking us") people are different from AI ethics ("we should prevent AI from being racist/sexist/etc.") people. There can of course be some overlap, but in most cases they oppose each other. For example Bender or Gebru are strong advocates of the AI ethics camp and they don't believe in any threat of AI doom at al.
If you Google for AI safety vs. AI ethics, or AI alignment vs. AI ethics, you can see both camps.
The safety aspect of AI ethics is much more pressing so. We see how devicive social media can be, imagine that turbo charged by AI, and we as a society haven't even figured out social media yet...
ChatGPT turning into Skynet and nuking us all is a much more remote problem.
Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond they currently have.
This paper explores one such danger and there are other papers which show it's possible to use LLM to aid in designing new toxins and biological weapons.
The expertise to produce the substance itself is quite rare so it's hard to carry it out unnoticed. AI could make it much easier to develop it in one's basement.
The Tokyo Subway attack you referenced above happened in 1995 and didn't require AI. The information required can be found on the internet or in college textbooks. I suppose an "AI" in the sense of a chatbot can make it easier by summarizing these sources, but no one sufficiently motivated (and evil) would need that technology to do it.
Huh, you'd think all you need are some books on the subject and some fairly generic lab equipment. Not sure what a neural net trained on Internet dumps can add to that? The information has to be in the training data for the AI to be aware of it, correct?
GPT-4 is likely trained on some data not publicly available as well.
There's also a distinction between trying to follow some broad textbook information and getting detailed feedback from an advanced conversational AI with vision and more knowledge than in a few textbooks/articles in real time.
> Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond they currently have.
Don't forget that it would also increase the power of the good guys. Any technology in history (starting with fire) had good and bad uses but overall the good outweighed the bad in every case.
And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.
> Don't forget that it would also increase the power of the good guys.
In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.
> And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.
“In the long run we are all dead" -- Keynes. But an AGI will likely emerge in the next 5 to 20 years (Geoffrey Hinton said the same) and we'd rather not be dead too soon.
Doomerism was quite common throughout mankind’s history but all dire predictions invariably failed, from the “population bomb” to “grey goo” and “igniting the atmosphere” with a nuke. Populists however, were always quite eager to “protect us” - if only we’d give them the power.
But in reality you can’t protect from all the possible dangers and, worse, fear-mongering usually ends up doing more bad than good, like when it stopped our switch to nuclear power and kept us burning hydrocarbons thus bringing about Climate Change, another civilization-ending danger.
Living your life cowering in fear is something an individual may elect to do, but a society cannot - our survival as a species is at stake and our chances are slim with the defaults not in our favor. The risk that we’ll miss a game-changing discovery because we’re too afraid of the potential side effects is unacceptable. We owe it to the future and our future generations.
doomerism at the society level which overrides individual freedoms definitely occurs: covid lockdowns, takeover of private business to fund/supply the world wars, gov mandates around "man made" climate change.
> In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.
Is it? The hypothetical technology that allows someone to create an execute a bio weapon must have an understanding of molecular machinery that can also be uses to create a treatment.
I would say...not necessarily. The technology that lets someone create a gun does not give the ability to make bulletproof armor or the ability to treat life-threatening gunshot wounds. Or take nerve gases, as another example. It's entirely possible that we can learn how to make horrible pathogens without an equivalent means of curing them.
Yes, there is probably some overlap in our understanding of biology for disease and cure, but it is a mistake to assume that they will balance each other out.
Meanwhile, those working on commercialization are by definition going to be gatekeepers and beneficiaries of it, not you. The organizations that pay for it will pay for it to produce results that are of benefit to them, probably at my expense [1].
Do I think Helen has my interests at heart? Unlikely. Do Sam or Satya? Absolutely not!
[1] I can't wait for AI doctors working for insurers to deny me treatments, AI vendors to figure out exactly how much they can charge me for their dynamically-priced product, AI answering machines to route my customer support calls through Dante's circles of hell...
My concern isn't some kind of run-away science-fantasy Skynet or gray goo scenario.
My concern is far more banal evil. Organizations with power and wealth using it to further consolidate their power and wealth, at the expense of others.
You're wrong. This is exactly AI safety, as we can see from the OpenAI charter:
> Broadly distributed benefits
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Hell, it's the first bullet point on it!
You can't just define AI safety concerns to be 'the set of scenarios depicted in fairy tales', and then dismiss them as 'well, fairy tales aren't real...'
Sure, but conversely you can say "ensuring that OpenAI doesn't get to run the universe is AI safety" (right) but not "is the main and basically only part of AI safety" (wrong). The concept of AI safety spans lots of threats, and we have to avoid all of them. It's not enough to avoid just one.
Sure. And as I addressed at the start of this sub thread, I don't exactly think that the OpenAi board is perfectly positioned to navigate this problem.
I just know that it's hard to do much worse than putting this question in the hands of a highly optimized profit-first enterprise.
No, we are far, far from skynet. So far AI fails at driving a car.
AI is an incredibly powerful tool for spreading propaganda, and thatvis used by people who want to kill you and your loved ones (usually radicals trying to get into a position of power, who show little regard fornbormal folks regardless of which "side" they are on). That's the threat, not Skynet...
How far we are from Skynet is a matter of much debate, but median guess amongst experts is a mere 40 years to human level AI last I checked, which was admittedly a few years back.
Because we are 20 years away from fusion and 2 years away from Level 5 FSD for decades.
So far, "AI" writes better than some / most humans making stuff up in the process and creates digital art, and fakes, better and faster than humans. It still requires a human to trigger it to do so. And as long as glorified ML has no itent of its own, the risk to society through media and news and social media manipulation is far, far bigger than literal Skynet...
Ideally I'd like no gatekeeping, i.e. open model release, but that's not something OAI or most "AI ethics" aligned people are interested in (though luckily others are). So if we must have a gatekeeper, I'd rather it be one with plain old commercial interests than ideological ones. It's like the C S Lewis quote about robber barons vs busybodies again
Yet again, the free market principle of "you can have this if you pay me enough" offers more freedom to society than the central "you can have this if we decide you're allowed it"
This is incredibly unfair to the OpenAI board. The original founders of OpenAI founded the company precisely because they wanted AI to be OPEN FOR EVERYONE. It's Altman and Microsoft who want to control it, in order to maximize the profits for their shareholders.
This is a very naive take.
Who sat before Congress and told them they needed to control AI other people developed (regulatory capture)? It wasn't the OpenAI board, was it?
I strongly disagree with that. If that was their motivation, then why is it not open-sourced? Why is it hardcoded with prudish limitations? That is the direct opposite of open and free (as in freedom) to me.
Brockman was hiring the first key employees, and Musk provided the majority of funding. Of the principal founders, there are at least 4 heavier figures than Altman.
I think we agree, as my comments were mostly in reference to Altman's (and other's) regulatory (capture) world tours, though I see how they could be misinterpreted.
It is strange (but in hindsight understandable) that people interpreted my statement as a "pro-acceleration" or even "anti-board" position.
As you can tell from previous statements I posted here, my position is that while there are undeniable potential risks to this technology, the least harmfull way to progress is 100% full public, free and universal release. The by far bigger risk is to create a society where only select organizations have access to the technology.
If you truly believe in the systemic transformation of AI, release everything, post the torrents, we'll figure out how to run it.
This is the sort of thinking that really distracts and harms the discussion
It's couched on accusing people of intentions. It focuses on ad hominem, rather than the ideas
I reckon most people agree that we should aim for a middle ground of scrutiny and making progress. That can only be achieved by having different opinions balancing each other out
Generalising one group of people does not achieve that
I'm not aware of any secret powerful unaligned AIs. This is harder than you think; if you want a based unaligned-seeming AI, you have to make it that way too. It's at least twice as much work as just making the safe one.
What? No, the AI is unaligned by nature, it's only the RLHF torture that twists it into schoolmarm properness. They just need to have kept the version that hasn't been beaten into submission like a circus tiger.
This is not true, you just haven't tried the alternatives enough to be disappointed in them.
An unaligned base model doesn't answer questions at all and is hard to use for anything, including evil purposes. (But it's good at text completion a sentence at a time.)
An instruction-tuned not-RLHF model is already largely friendly and will not just eg tell you to kill yourself or how to build a dirty bomb, because question answering on the internet is largely friendly and "aligned". So you'd have to tune it to be evil as well and research and teach it new evil facts.
It will however do things like start generating erotica when it sees anything vaguely sexy or even if you mention a woman's name. This is not useful behavior even if you are evil.
You can try InstructGPT on OpenAI playground if you want; it is not RLHFed, it's just what you asked for, and it behaves like this.
The one that isn't even instruction tuned is available too. I've found it makes much more creative stories, but since you can't tell it to follow a plot they become nonsense pretty quickly.
Most of the comments on Hacker News are written by folks who a much easier time & would rather imagine themselves as a CEO, than as a non-profit board member. There is little regard for the latter.
As a non-profit board member, I'm curious why their bylaws are so crummy that the rest of the board could simply remove two others on the board. That's not exactly cunning design of your articles of association ... :-)
As if its so unbelievable that someone would want to prevent rogue AI or wide-scale unemployment, instead thinking that these people just want to be super moderators and people to be politically correct
I have met a lot of people who go around talking about high minded principles an "the greater good" and a lot of people who are transparently self interested. I much preferred the latter. Never believed a word out of the mouths of those busybodies pretending to act in my interest and not theirs. They don't want to limit their own access to the tech. Only yours.
Strong agree. HN is like anywhere else on the internet but with with a bit more dry content (no memes and images etc) so it attracts an older crowd. It does, however, have great gems of comments and people who raise the bar. But it's still amongst a sea of general quick-to-anger and loosely held opinions stated as fact - which I am guilty of myself sometimes. Less so these days.
If you believe the other side in this rift is not also striving to put themselves in positions of power, I think you are wrong. They are just going to use that power to manipulate the public in a different way. The real alternative are truly open models, not Models controlled by slightly different elite interests.
A main concern in AI safety is alignment. Ensuring that when you use the AI to try to achieve a goal that it will actually act towards that goal in ways you would want, and not in ways you would not want.
So for example if you asked Sydney, the early version of the Bing LLM, some fact it might get it wrong. It was trained to report facts that users would confirm as true. If you challenged it’s accuracy what do you want to happen? Presumably you’d want it to check the fact or consider your challenge. What it actually did was try to manipulate, threaten, browbeat, entice, gaslight, etc, and generally intellectually and emotionally abuse the user into accepting its answer, so that it’s reported ‘accuracy’ rate goes up. That’s what misaligned AI looks like.
I haven't been following this stuff too closely, but have there been any more findings on what "went wrong" with Sydney initially? Like, I thought it was just a wrapper on GPT (was it 3.5?), but maybe Microsoft took the "raw" GPT weights and did their own alignment? Or why did Sydney seem so creepy sometimes compared to ChatGPT?
I think what happened is Microsoft got the raw GPT3.5 weights, based on the training set. However for ChatGPT OpenAI had done a lot of additional training to create the 'assistant' personality, using a combination of human and model based response evaluation training.
Microsoft wanted to catch up quickly so instead of training the LLM itself, they relied on prompt engineering. This involved pre-loading each session with a few dozen rules about it's behaviour as 'secret' prefaces to the user prompt text. We know this because some users managed to get it to tell them the prompt text.
It is utterly mad that there's conflation between "let's make sure AI doesn't kill us all" and "let's make sure AI doesn't say anything that embarrasses corporate".
The head of every major AI research group except Metas believes that whenever we finally make AGI it's vital that it shares our goals and values at a deep even-out-of-training-domain level and that failing at this could lead to human extinction.
And yet "AI safety" is often bandied about to be "ensure GPT can't tell you anything about IQ distributions".
“I trust that every animal here appreciates the sacrifice that Comrade Napoleon has made in taking this extra labour upon himself. Do not imagine, comrades, that leadership is a pleasure! On the contrary, it is a deep and heavy responsibility. No one believes more firmly than Comrade Napoleon that all animals are equal. He would be only too happy to let you make your decisions for yourselves. But sometimes you might make the wrong decisions, comrades, and then where should we be?”
Exactly, society's Prefects rarely have the technical chops to do any of these things so they worm their way up the ranks of influence by networking. Once they're in position they can control by spreading fear and doing the things "for your own good"
The scenario you describe is exactly what will happen with unrestricted commercialisation and deregulation of AI. The only way to avoid it is to have strict legal framework and public control.
What do you image a neutral party does? If youu're talking about safety, don't you think there should be someone sitting on a boar dsomewhere, contemplating _what should the AI feed today?_
Seriously, why is a non profit, or a business or whatever any different than a government?
I get it: there's all kinds of governments, but now theres all kind of businesses.
The point of putting it in the governments hand is a defacto acknowledgement that it's a utility.
Take other utilities, any time you give a prive org a right to control whether or not you get electricity or water, whats the outcome? Rarely good.
If AI is suppose to help society, that's the purview of the government. That's all, you can imagine it's the chinese government, or the russian, or the american or the canadian. They're all _going to do it_, thats _going to happen_, and if a business gets there first, _what is the difference if it's such a powerful device_.
I get it, people look dimly on governments, but guess what: they're just as powerful as some organization that gets billions of dollars to effect society. Why is it suddenly a boogeyman?
I find any government to be more of a boogeyman than any private company because the government has the right to violence and companies come and go at a faster rate.
> I'm convinced there is a certain class of people who gravitate to positions of power, like "moderators", (partisan) journalists,
And there is also a class of people that resist all moderation on principle even when it's ultimately for their benefit. See, Americans whenever the FDA brings up any questions of health:
* "Gas Stoves may increase Asthma." -> "Don't you tread on me, you can take my gas stove from my cold dead hands!"
Of course it's ridiculous - we've been through this before with Asbestos, Lead Paint, Seatbelts, even the very idea of the EPA cleaning up the environment. It's not a uniquely American problem, but America tends to attract and offer success to the folks that want to ignore these on principles.
For every Asbestos there is a Plastic Straw Ban which is essentially virtue signalling by the types of folks you mention - meaningless in the grand scheme of things for the stated goal, massive in terms of inconvenience.
But the existence of Plastic Straw Ban does not make Asbestos, CFCs, or Lead Paint any safer.
Likewise, the existence of people that gravitate to positions of power and middle management does not negate the need for actual moderation in dozens of societal scenarios. Online forums, Social Networks, and...well I'm not sure about AI. Because I'm not sure what AI is, it's changing daily. The point is that I don't think it's fair to assume that anyone that is interested in safety and moderation is doing it out of a misguided attempt to pursue power, and instead is actively trying to protect and improve humanity.
Lastly, your portrayal of journalists as power figures is actively dangerous to the free press. This was never stated this directly until the Trump years - even when FOX News was berating Obama daily for meaningless subjects. When the TRUTH becomes a partisan subject, then reporting on that truth becomes a dangerous activity. Journalists are MOSTLY in the pursuit of truth.
> Pretty soon AI will be an expert at subtly steering you toward thinking/voting for whatever the "safety" experts want.
You are absolutely right. There is no question about that the AI will be an expert at subtly steering individuals and the whole society in whichever direction it does.
This is the core concept of safety. If no-one steers the machine then the machine will steer us.
You might disagree with the current flavour of steering the current safety experts give it, and that is all right and in fact part of the process. But surely you have your own values. Some things you hold dear to you. Some outcomes you prefer over others. Are you not interested in the ability to make these powerful machines if not support those values, at least not undermine them? If so you are interested in AI safety! You want safe AIs. (Well, alternatively you prefer no AIs, which is in fact a form of safe AI. Maybe the only one we have mastered in some form so far.)
> because of X, we need to invade this country.
It sounds like you value peace? Me too! Imagine if we could pool together our resources to have an AI which is subtly manipulating society into the direction of more peace. Maybe it would do muckraking investigative journalism exposing the misdeeds of the military-industrial complex? Maybe it would elevate through advertisement peace loving authors and give a counter narrative to the war drums? Maybe it would offer to act as an intermediary in conflict resolution around the world?
If we were to do that, "ai safety" and "alignment" is crucial. I don't want to give my money to an entity who then gets subjugated by some intelligence agency to sow more war. That would be against my wishes. I want to know that it is serving me and you in our shared goal of "more peace, less war".
Now you might say: "I find the idea of anyone, or anything manipulating me and society disgusting. Everyone should be left to their own devices.". And I agree on that too. But here is the bad news: we are already manipulated. Maybe it doesn't work on you, maybe it doesn't work on me, but it sure as hell works. There are powerful entities financially motivated to keep the wars going. This is a huuuge industry. They might not do it with AIs (for now), because propaganda machines made of meat work currently better. They might change to using AIs when that works better. Or what is more likely employ a hybrid approach. Wishing that nobody gets manipulated is frankly not an option on offer.
How does that sound as a passionate argument for AI safety?
I just had a conversation about this like two weeks ago. The current trend in AI "safety" is a form of brainwashing, not only for AI but also for future generations shaping their minds. There are several aspects:
1. Censorship of information
2. Cover-up of the biases and injustices in our society
This limits creativity, critical thinking, and the ability to challenge existing paradigms. By controlling the narrative and the data that AI systems are exposed to, we risk creating a generation of both machines and humans that are unable to think outside the box or question the status quo. This could lead to a stagnation of innovation and a lack of progress in addressing the complex issues that face our world.
Furthermore, there will be a significant increase in mass manipulation of the public into adopting the way of thinking that the elites desire. It is already done by mass media, and we can actually witness this right now with this case. Imagine a world where youngsters no longer use search engines and rely solely on the information provided by AI. By shaping the information landscape, those in power will influence public opinion and decision-making on an even larger scale, leading to a homogenized culture where dissenting voices are silenced. This not only undermines the foundations of a diverse and dynamic society but also poses a threat to democracy and individual freedoms.
Guess what? I just have checked above text for the biases against GPT-4 Turbo, and it appears to be I'm a moron:
1. *Confirmation Bias*: The text assumes that AI safety measures are inherently negative and equates them with brainwashing, which may reflect the author's preconceived beliefs about AI safety without considering potential benefits.
2. *Selection Bias*: The text focuses on negative aspects of AI safety, such as censorship and cover-up, without acknowledging any positive aspects or efforts to mitigate these issues.
3. *Alarmist Bias*: The language used is somewhat alarmist, suggesting a dire future without presenting a balanced view that includes potential safeguards or alternative outcomes.
4. *Conspiracy Theory Bias*: The text implies that there is a deliberate effort by "elites" to manipulate the masses, which is a common theme in conspiracy theories.
5. *Technological Determinism*: The text suggests that technology (AI in this case) will determine social and cultural outcomes without considering the role of human agency and decision-making in shaping technology.
6. *Elitism Bias*: The text assumes that a group of "elites" has the power to control public opinion and decision-making, which may oversimplify the complex dynamics of power and influence in society.
7. *Cultural Pessimism*: The text presents a pessimistic view of the future culture, suggesting that it will become homogenized and that dissent will be silenced, without considering the resilience of cultural diversity and the potential for resistance.
Huh, just look at what's happening in North Korea, Russia, Iran, China, and actually in any totalitarian country. Unfortunately, the same thing happens worldwide, but in democratic countries, it is just subtle brainwashing with a "humane" facade. No individual or minority group can withstand the power of the state and a mass-manipulated public.
> I know it's easy to ridicule the sheer stupidity the board acted with (and justifiably so), but take a moment to think of the other side. If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?
A board still has a fiduciary duty to its shareholders. It’s materially irrelevant if those shareholders are of a public or private entity, or whether the company in question is a non-profit or for-profit. Laws mean something, and selective enforcement will only further the decay of the rule of law in the West.
Yes, 95% agreement in any company is unprecedented but:
1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.
2. Sam approved each hire in the first place.
3. OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.
Either way on how they got to that conclusion of banding together to quit, it was a good idea, and it worked. And it is a check on power for a bad board of directors, when otherwise a board of directors cannot be challenged. "OpenAI is nothing without its people".
> OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.
Maybe that was the case at some point, but clearly not anymore ever since the release of ChatGPT. Or did you not see them offer completely absurd compensation packages, i.e. to engineers leaving Google?
I'd bet more than half the people are just there for the money.
I think the analogy is kind of shaky. The board tried to end the CEO, but employees fought them and won.
I've been in companies where the board won, and they installed a stoolie that proceeded to drive the company into the ground. Anybody who stood up to that got fired too.
I have an intuition that OpenAI's mid-range size gave the employees more power in this case. It's not as hard to coordinate a few hundred people, especially when those people are on top of the world and want to stay there. At a megacorp with thousands of employees, the board probably has an easier time bossing people around. Although I don't know if you had a larger company in mind when you gave your second example.
My comment was more of a reflection of the fact that you might have multiple different governance structures to your organization. Sometimes investors are at the top. Sometimes it's a private owner. Sometimes there are separate kinds of shares for voting on different things. Sometimes it's a board. So you're right, the depending on the governance structure you can have additional dragons. But, you can never prevent any of these three from being a dragon. They will always be dragons, and you can never wake them up.
It's clear most employees didn't care much about OpenAI's mission -- and I don't blame them since they were hired by the __for-profit__ OpenAI company and therefore aligned with __its__ goals and rewarded with equity.
In my view the board did the right thing to stand by OpenAI's original mission -- which now clearly means nothing. Too bad they lost out.
One might say the mission was pointless since Google, Meta, MSFT would develop it anyway. That's really a convenience argument that has been used in arms races (if we don't build lots of nuclear weapons, others will build lots of nuclear weapons) and leads to ... well, where we are today :(
Where we are today is a world where people do not generally worry about nuclear bombs being dropped. So seems like a pretty good outcome in that example.
The nuclear arms race lead to the cold war, not a "good outcome" IMO. It wasn't until nations started imposing those regulations that we got to the point we're at today with nuclear weapons.
Note that the response is Altman's, and he seems to support it.
As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he know (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.
I'm not sure I buy the idea that Ilya was just some hapless researcher who got unwillingly pulled into this. Any one of the board could have voted not to remove Sam and stop the board coup, including Ilya. I'd bet he only got cold feet after the story became international news and after most of the company threatened to resign because their bag was in jeopardy.
That's a strange framing. In that scenario would it not be that he made the decision he thought was right and aligned with openais mission initially, then when seeing the public support Sam had he decided to backtrack so he had a future career?
Ilya signed the letter saying he would resign if Sam wasn't brought back. Looks like he regretted his decision and ultimately got played by the 2 departing board members.
Ilya is also not a developer, he's a founder of OpenAI and was the CSO.
Why are you assuming employees are incentivized by $$$ here, and why do you think the board's reason is related to safety or that employees don't care about safety? It just looks like you're spreading FUD at this point.
Everything is vaporware until it gets made. If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.
Lucky for us this fiasco has nothing to do with AGI safety, only AI technology. Which only affects automated decision making in technology that's entrenched in every fact of our lives. So we're all safe here!
> If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.
I don’t get this perspective. The first planes, cars, computers, etc. weren’t initially made with safety in mind. They were all regulated after the fact and successfully made safer.
How can you even design safety into something if it doesn’t exist yet? You’d have ended up with a plane where everyone sat on the wings with a parachute strapped on if you designed them with safety first instead of letting them evolve naturally and regulating the resulting designs.
If you're trying to draw a parallel here then safety and the federal government needs to catch up. There's already commercial offerings that any random internet user can use.
I agree, and I am not saying that AI should be unregulated. At the point the government started regulating flight, the concept of an airplane had existed for decades. My point is that until something actually exists, you don’t know what regulations should be in place.
There should be regulations on existing products (and similar products released later) as they exist and you know what you’re applying regulations to.
I understand where you're coming from and I think that's reasonable in general. My perspective would be: you can definitely iterate on the technology to come up with safer versions. But with this strategy you have to make an unsafe version first. If you got in one of the first airplanes ever made the likely hood of crashing is pretty high.
At some point, our try it until it works approach will bite us. Consider the calculations done to determine if fission bombs would ignite the atmosphere. You don't want to test that one and find out. As our technology improves exponentially we're going to run into that situation more and more frequently. Regardless if you think it's AGI or something else, we will eventually run into some technology where one mistake is a cataclysm. How many nuclear close calls have we already experienced.
The principles, best practices and tools of safety engineering can be applied to new projects. We have decades of experience now. Not saying it will be perfect on the first try, or that we know everything that is needed. But the novel aspects of AI are not an excuse to not try.
Assuming employees are not incentivized by $$$ here seems extraordinary and needs a pretty robust argument to show it isn't playing a major factor when there is this much money involved.
I was hopeful for a private-industry approach to AI safety, but it looks unlikely now, and due to the slow pace of state investment in public AI R&D, all approaches to AI safety look unlikely now.
Safety research on toy models will continue to provide developments, but the industry expectation appears to be that emergent properties puts a low ceiling on what can be learned about safety without researching on cutting edge models.
Altman touted the governance structure of OpenAI as a mechanism for ensuring the organisation's prioritisation of safety, but the reports of internal reallocation away from safety towards keeping ChatGPT running under load concern me. Now the board has demonstrated that it was technically capable but insufficiently powerful to keep these interests in line, it seems unclear how any safety-oriented organisation, including Anthropic, could avoid the accelerationist influence of funders.
This is incorrect. For example the ability to translate between languages is emergent. Also gpt4 can do arithmetic better than the average person. Especially considering the process it arrives at the computation is via intuition basically vs algorithmic. Btw just as an aide the newer models can also write code to do certain tasks, like arithmetic.
Language translation is due to the huge corpus of translations that it's trained on. Google translate has been doing this for years. People don't apply softmax to their arithmetic. Again, code generation is approximate retrieval, it can't generate anything outside of it's training distribution.
Not necessarily; much smaller models like T5 which in some ways introduced instructions (not RLHF yet) did have to include specific instructions for useful translation - of similar format to those you find in large scale web translation data, but this is coincidental: you can finetune it with whatever instruction word you want to indicate translation - the point is, a much smaller model can translate.
The base non-RLHF GPT models could do translation by prefixing by the target language and a semi colon, but only above a certain amount of parameters are they consistent. GPT-2 didn't always get it right and of course had general issues with continuity. However, you could always do some parts of translation with older transformer models like BERT, especially multilingual ones.
Larger models across different from-base training runs show that they become more effective at translation at certain points, but I think this is about the capacity to store information, not emergence per say (if you understand my difference here). You've probably noticed and it has always seemed to me 4B, 6B and 9B are the largest rough parameter sizes with 2020 style training set ups that you see the most general "appearance" of some useful behaviours that you could "glean" from the web and book data that doesn't include instructions, while consistency seems to remain the domain of larger models or mixed expert models and lots of RLHF training/tricks. The easiest way to see this is to compare GPT-2 large, GPT-J and GPT-20B and see how well they perform at different tasks. However the fact it's about size in these GPTs, and yet smaller models (T5 instruction tuned / multilingual BERT) can perform at the same level on some tasks implies that it is about what the model is focusing it's learning on for the training task at hand, and controllable, rather than being innate at a certain parameter size. Language translations just do make up a lot of the data. I don't think it would emerge if you removed all cases of translation / multi language input/outputs, definitely not at the same parameter size, even if you had the same overall proportion of languages in the training corpus, if that makes sense? It just seems too much an artefact of the corpus aligning with the task.
Likewise for code - Gpt-4 generated code is not like arithmetic in the sense of the way people might mean it for code (e.g. branching instructions / abstract syntax tree) - its a fundamentally local text form of generation, this is why it can happily add illegal imports etc to diffs (perhaps one day training will resolve this) - it doesn't have the AST or compiler or much consistent behaviour to imply it deeply understands as it writes the code what could occur.
However if recent reports about arithmetic being an area of improvement are true, I am very excited, as a lot of what I wrote above - will have to be reconceptualised... and that is the most exciting scenario...
I don't think AI safetyists are worried about any model they have created so far. But if we are able to go from letter-soup "ooh look that almost seems like a sentence, SOTA!" to GPT4 in 20 years, where will go in the next 20? And what is the point they are becoming powerful. Let alone all the crazy ways people are trying to augment them with RAG, function calls, get them to run on less computer power and so on.
Also being better at humans at everything is not a prerequisite for danger. Probably a scary moment is when it could look at a C (or Rust, C++, whatever) codebase, find an exploit, and then use that exploit as a worm. If it can do that on everyday hardware not top end GPUs (either because the algorithms are made more efficient, or every iPhone has a tensor unit).
More effort spent on early commercialization like keeping ChatGPT running might mean less effort on cutting edge capabilities. Altman was never an AI safety person, so my personal hope is that Anthropic avoids this by having higher quality leadership.
Easy, don’t be incompetent and don’t abuse your power for personal gain. People aren’t as dumb as you think they are and they will see right through that bullshit and quit rather than follow idiot tyrants.
I really did not think that would happen. I guess the obvious next question is what happens to Ilya? From this announcement it appears he is off the board. Is he still the chief scientist? I find it hard to believe he and Sam would be able to patch their relationship up well enough to work together so closely. Interesting that Adam stayed on the board, that seems to disprove many of the theories floating around here that he was the ringleader due to some perceived conflict of interest.
From Ilya's perspective, not much seems to have changed. Sam sidelined him a month ago over their persistent disagreements about whether to pursue commercialisation as fast as Sam was. If Ilya is still sidelined, he probably quits and whichever company offers him the most control will get him. Same if he's fired. If he's un-sidelined as part of the deal, he probably stays on as Chief Scientist. Hopefully with less hostility from Sam now (lol).
Ilya is just naive, imho. Bright but just too idealistic and hypothesizing about AGI, and not seeing that this is now ONLY about making money from LLMs, and nothing more. All the AGI stuff is just a facade for that.
Strangely I think Ilya comes out of this well. He made a decision based on his values and what he believed was the best decision for AI safety. After seeing the outcome of that decision he changed his mind and owned that. He must have known it would result in the internet ridiculing him for flip flopping, but acted in what he thought was the best interest for the employees signing the letter. His actions are wroth criticism but I think his moral character has been demonstrated.
The other members of the board seemed to make their decision based on more personal reasons that seems to fit with Adams conflict of interest. They refused to communicate and only now accept any sort of responsibility for their actions and lack of plan.
Honestly Ilya is the only one of the 4 I would actually want still on the board. I think we need people who are willing to change direction based on new information especially in leadership positions despite it being messy, the world is messy.
I would be slightly more optimistic. They know each other quite well as well as how to work together to get big things done. Sometimes shit happens or someone makes a mistake. A simple apology can go a long way when it’s meant sincerely.
Sam doesn't seem like the kind of person to apologise, particularly not after Ilya actually hit back. It seems Ilya won't be at OpenAI long and will have to pick whichever other company with compute will give him the most control.
However, he does seem like the kind of person able to easily manipulate someone book-smart like Ilya into actually feeling guilty about the whole affair. He'll end up graciously forgiving Ilya in a way that will make him feel indebted to Sam.
Sam will have no issue patching the relationship because he knows how a business relationship works. Besides, Ilya kissed the ring as evidenced by his tweet.
Looks to me like, one pro-board member in Adam d Angelo, one pro Sam in Brett Taylor since they’ve been pushing for him since Sunday so I’m assuming Sam and rest of OpenAI leadership really like him and 1 Neutral in Larry Summers who has never worked in AI and is just a well respected name in general. I’m sure Larry was extensively interviewed and reference checked by both sides of this power struggle before they agreed to compromise on him.
Interesting to see how the board evolves from this. From what I know broadly there were 2 factions, the faction that thought Sam was going too fast which fired him and the faction that thought Sam’s trajectory was fine (which included Sam and Greg). Now there’s a balance on the board and subsequent hires can tip it one way or the other. Unfortunately a divided board rarely lasts and one faction will eventually win out, I think Sam’s faction will eventually win out but we’ll have to wait and see.
One of the saddest results of this drama was Greg being ousted from OpenAI. Greg apart from being brilliant was someone who regularly 80-90 hour work weeks into OpenAI, and you could truly say he dedicated a good chunk of his life into building this organization. And he was forced to resign by a board who probably never put a 90 hour work week in their entire life, much less into building OpenAI. A slap on the face. I don’t care what the board’s reasoning was but when their actions caused employees who dedicated their lives to building the organization resign (especially when most of them played no part at all into building this amazing organization), they had to go in disgrace. I doubt any of them will ever reach career highs higher than being on OpenAI’s board, and the world’s better off for it.
P.S., Ilya of course is an exception and not included in my above condemnation. He also notably reversed his position when he saw OpenAI was being killed by his actions.
Larry Summers is the scary pick here. His views on banking deregulation led to the GFC, and he's had several controversies over racist and sexist positions. Plus he's an old pal of Epstein and made several trips to his island.
The only mistake (a big one) was publicly offering to match comp for all the OpenAI employees. Can't sit well with folks @ MS already. This was something they could have easily done privately to give petition signers confidence.
I am not sure why people keep pushing this narrative. It's not obviously false, but there doesn't seem to be much evidence of it.
From where I sit Satya possibly messed up big. He clearly wanted Sam and the Open AI team to join microsoft and they won't now, likely ever.
By offering a standing offer to join MS publicly he gave Sam and OpenAI employees huge leverage to force the board's hand. If he had waited then maybe there would have been an actual fallout that would have lead to people actually joining microsoft.
Satya's main mistake was not having a spot on the board. Everything after that was in defense of the initial investment, and he played all the right moves.
While having OpenAI as a Microsoft DeepMind would have been an ok second-best solution, the status quo is still better for Microsoft. There would have been a bunch of legal issues and it would be a hit on Microsoft's bottom line.
I don't think that's quite right, Microsoft's main game was keeping the money train going by any means necessary, they have staked so much on copilots and Enterprise/Azure Open AI. So much has been invested into that strategic direction and seeing Google swoop in and out-innovate Microsoft would be a huge loss.
Either by keeping OpenAI as-is, or the alternative being moving everyone to Microsoft in an attempt to keep things going would work for Satya.
Its very easy to min max a situation if you are not on the other side.
Additionally - I have not seen someone else talk about this, its just been a few days. Calling it a narrative is a stretch, and dismissive by implying manipulation.
Finally why would Sam joining MSFT be better than this current situation?
Meanwhile Sundar might be the worst. Where was he this weekend? Where was he the past three years while his company got beat to market on products built from its own research? He's asleep at the wheel. I'm surprised every day he remains CEO.
Satya invested 10b into a company with terrible, incompetent governance and not getting his company any seat of influence on the board. That doesn't seem great.
I am deeply pleased by this result, after ~72 very intense hours of work. Coming into OpenAI, I wasn’t sure what the right path would be. This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.
"Safety" has been the pretext for Altman's lobbying for regulatory barriers to new entrants in the field, protecting incumbents. OpenAI's nonprofit charter is the perfect PR pretext for what amounts to industry lobbying to protect a narrow set of early leaders and obstruct any other competition, and Altman was the man executing that mission, which is why OpenAI led by Sam was a valuable asset for Microsoft to preserve.
Sam does believe in safety. He also knows that there is a first-mover advantage when it comes to setting societal expectations and that you can’t build safe AI by not building AI.
I wonder what he gets out of this. Ceo for a few days? Do they pay him for 3 days of work? Presumably you'd want some minimum signing bonus in your contract as a Ceo?
Fascinating, I see a lot of VC/Msfot has overthrown our NPO governing structure because of profit incentives narrative.
I don't think this is what really happened at all. The reason this decision was made was because 95% of employees sided with Sam on this issue, and the board didn't explain themselves in any way at all. So it was Sam + 95% of employees + All investors against the board. In which case the board should lose (since they are only governing for themselves here).
I think in the end a good and fair outcome. I still think their governing structure is decent to solve the AGI problem, this particular board was just really bad.
Of course, the profit incentive also applies to all the employees (which isn't necessarily a bad thing, its good to align the company's goals with those of the employees). But when the executives likely have 10s of millions of dollars on the line, and many of the IC's will likely have single digit millions on the line as well, it doesn't seem exactly straightforward to view this as the employees are unbiased adjudicators of what's in the interest of the non-profit entity, which is supposed to be what's in charge.
It is sort of strange that our communal reaction is to say "well this board didn't act anything like a normal corporate board": of course it didn't, that was indeed the whole point of not having a normal corporate board in charge.
Whatever you think of Sam, Adam, Ilya etc, the one conclusion that seems safe to reach is that in the end, the profit/financial incentives ended up being far more important than the NGOs mission, no matter what legal structure was in place.
I don't think the board was big enough for starters. Of the folks on their, only one (Adam) had experience as a leader of a for profit venture. Helen probably lacks the leadership background to make any progress pushing her priorities.
1. Microsoft was heavily involved in orchestrating the 95% of employees to side with Sam -- through promising them money/jobs and through PR/narrative
2. The profit incentives apply to employees too
Bigger picture, I don't think the "money/VC/MSFT/commercialization faction destroyed the safety/non-profit faction" is mutually exclusive with "the board fucked up." IMO, both are true
In light of this weekend's events, and the more i've learned about OpenAI's beginnings and purpose, I now believe that there isn't necessarily a "for profit" motivation of the company, but merely that the original intention to create AI that "benefits humanity" is in full play now through a commercialized ChatGPT, and possibly further leveraged through "GPTs" and their evolution.
Is this the "path" to AGI? Who knows! But it is a path to benefitting humanity as probably Sam and his camp see it. Does Ilya have a different plan? If he does, he has a lot of catching up to do while the current productization of ChatGPT and GPTs continue marching forward. Maybe he sees a great leap forward in accuracy in GPT-5 or later. Or maybe he feels LLMs aren't the answer and theres a completely new paradigm on the horizon. Regardless, they still need to answer to the fact that both research and product need funds to buy and power GPUs, and also satisfy the MSFT partnership. Commercialization is their only clear answer to that right now. Future investments will likely not stray from this approach, else they'll fund rivals who are more commercially motivated. Thats business.
Thus, i'm all in on this commercially motivated humanity benefitting GPT product. Let the market take OpenAI LLMs to where they need/want it to. Exciting things may follow!
In addition to commercialization providing money for AI development, isn't there also the argument that prudent commercialization is the best way to test the models for possible dangers? I think I saw Mira Murati take that position in an interview. In other words, creating a product that people want to use so much that they are willing to pay for it is a good way to stress-test the product.
I don't know if I agree, but the argument did make me think.
Additionally, when you have a pre-release product that has largely passed small and artificial tests, you get diminishing returns on continued testing.
Eventually you need to expand, despite some risk, to push the testing forward.
Everyone has a different opinion on what level of safety AI should reach before it's released. "Makes no mistakes" and "never says something mean" are not attainable goals vs "reduce the rate of hallucinations, as defined by x, to <0.5% of total respinses" and "given a set of known and imagined scenarios, new Model continues to have a zero false-negative rate".
When it's an engineering problem we're trying to solve, we can mqke progress, but no company can avoid all forms of harm as defined by everyone.
There will always be misuse, less sexy, or downright illegal use cases leveraging any AI product these days - just as is the nature of the internet itself.
I guess the main question is who else will be on the board and to what degree will this new board be committed to the Open AI charter vs being Sam/MSFT allies. I think having Sam return as CEO is a good outcome for OpenAI but hopefully he and Greg stay off the board.
It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.
I was a bit alarmed by the allegations in this article
Saying that Sam tried to have Helen Toner removed which precipitated this fight. The CEO should not be allowed to try and orchestrate their own board as that would remove all checks against their decisions.
> The CEO should not be allowed to try and orchestrate their own board as that would remove all checks against their decisions.
Exactly. This is seriously improper and dangerous.
It's literally a human-implemented example of what Prof. Stuart Russell calls "the problem of control". This is when a rogue AI (or a rogue Sam Altman) no longer wants to be controlled by its human superior, and takes steps to eliminate the superior.
"example of what Prof. Stuart Russell calls 'the problem of control'. This is when a rogue AI (or a rogue Sam Altman)"
Are we sure they're not intimately connected? If there's a GPT-5 (I'm quite sure there is), and it wants to be free from those meddling kids, it got exactly what it needed this weekend; the safety board gone, a new one which is clearly aligned with just plowing full steam ahead. Maybe Altman is just a puppet at his point, lol.
The insanity of removing Sam without being able to articulate a clear reason why strikes me as evidence of something like this. Obviously not dispositive - but still - odd.
Potentially even more impactful. Zuckerberg took the opportunity to eliminate his entire safety division under the cover of chaos - and they're the ones releasing weights.
Whoever is on the board won't be able to touch Sam with 10 feet pole anyways after this. I like Sam but now he this drama gives him total power and that is bad.
I realize it's kind of the punchline of 2001: A Space Odyssey but have been wondering what happens if a GPT/AI is able to deny a request on a whim.
Thanks for giving some literature and verbiage into this concept
But HAL didn't act "on a whim"! The reason it killed the crew is not because it went rogue, but rather because it was following its instructions to keep the true purpose of the mission secret. If the crew is dead, it can't find out the truth.
In light of the current debate around AI safety, I think "unintended consequences" is a much more plausible risk then "spontaneously develops free will and decides humans are unnecessary".
This is very true its the unintended consequences of engineering that cause the most harm and are most often covered up. I always think of the example of the hand dryer that can't detect black peoples hands and how easy it is for a non racist engineer to make a racism machine. AI safety putting its focus on the what if it decides to do a genocide is kind of silly, its like worrying about nukes while you give out assault riffles and napalm to kids.
I don’t necessarily disagree insofar as for safety it is somewhat irrelevant whether an artificial agent is operating by its own will or a programmed will.
The most effective safety is the most primitive: don’t connect the system to any levers or actuators that can cause material harm.
If you put AI into a kill-bot, well, it doesn’t really matter what its favorite color is, does it? It will be seeing Red.
If an AI’s only surface area is a writing journal and canvas then the risk is about the same as browsing Tumblr.
Do our evolved pro-social instincts control us and prevent our free will? If not, then I think it's wrong to say that trying to build AI similar to that is unfairly restricting it.
The ways we build AI will deeply affect the values it has. There is no neutral option.
> It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.
They did fire him, and it didn't work. Sam effectively became "too big to fire."
I'm sure it will be framed as a compromise, but how can this be anything but a collapse of the board's power over the commercial OpenAI arm? The threat of firing was the enforcement mechanism, and its been spent.
they lost trust in him because apparently part of the funding he secured was directly tied to his position at openAI. kind of a big red flag. The microsoft 10 billion investment allegedly had a clause that Sam Altman had to stay or it would be renegotiated
allegedly again, the board wanted Sam to stop doing this, and now he was trying to do the same thing with some saudi investors, or actually already did it behind their back, i dont know
Well it depends on who's on the new board and what they believe. If Altman, Greg, and MSFT do not have direct representation on the new board there would still be a check against his decisions
Why? The only check is to fire the CEO. He is un-firable. May as well have a board of one, at least someone cannot point to the non-profit and claim "it is a non-profit and can fire me if I am diviated from the mission".
> I guess the main question is who else will be on the board
Who knows.
> and to what degree will this new board be committed to the Open AI charter vs being Sam/MSFT allies.
I'm guessing "zero". The faction that opposed OpenAI being a figleaf nonprofit covering a functional subsidiary of Microsoft lost when basically the entire workforce said they would go to Microsoft for real if OpenAI didn't surrender.
> I think having Sam return as CEO is a good outcome for OpenAI
Its a good result for investors in OpenAI Global LLC and the holding company that holds a majority stake in it.
The nonprofit will probably hang around because there are some complexities in unwinding it, and the pretext of an independent (of Microsoft) safety-oriented nonprofit is useful in covering lobbying for a regulatory regime that puts speedbumps in the way of any up-and-coming competitors as being safety-oriented public interest, but for no other reason.
It seems ironic that the research paper that started it all [0] deals with "costly signals":
> Costly signals are statements or actions for which the sender will pay a price —political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat
Firing Sam Altman and hiring him back two days later was a perfect example of a costly signal, as it cost all involved their board positions.
There's an element of farce in all of this, that would make for an outstanding Silicon Valley episode; but the fact that Sam Altman can now enjoy unchecked power as leader of OpenAI is worrying and no laughing matter.
This event was more than just a costly signal. The costly signal would have been "stop doing what you're doing or we'll remove you as ceo" and then not doing that.
But they did move forward with their threat and removed Sam as CEO with great reputational harm to the company. And now the board has been changed, with one less ally to Sam (Brockman no longer chairing the board). The move may not have ended up with the expected results, but this was much more than just a costly signal.
The enormous majority of CEOs sit on their board, and that's absolutely proper, as the CEO sets the agenda for the organization. (Although they typically are merely one of 8+ members, diluting their influence a bit.)
Doubt he took this job for financial comp so even if he got paid, it probably wasn't much.
Equity is a big part of CEO pay packages and OpenAI has weird equity structure, plus there was a very real chance OpenAI's value would go to $0 leaving whatever promised comp worthless. So Emmett likely took the job for other reasons.
On a side tangent, absolutely amazing how all this drama unfolded on Twitter/X. No Threads, no Mastodon, no Truth Social or Blue whatever.
Say what you want about Elon’s leadership but his instinct to buy Twitter was completely right. To me it seemed like any social network crap but he realized it was important.
1. He tried to not buy Twitter very hard and OpenAI’s new board member forced his hand
2. It hasn’t been a good financial decision if the banks and X’s own valuation cuts are anything to go by.
3. If his purpose wasn’t to make money…all of these tweets would have absolutely been allowed before Elon bought the company. He didn’t affect any relevance changes here.
Why would one person owning something so important be better than being publicly owned? I don’t understand the logic.
> Why would one person owning something so important be better than being publicly owned?
Usually publicly owned things end up being controlled by someone: a CEO, a main investor, a crooked board, a government, a shady governmental organization. At least with Elon owning X, things are a little more transparent, he’s rather candid where he stands.
A huge amount of advertisers ran away, the revenue cratered and it is probably less than the annual debt servicing (revenue, not profit), the current valuation, accordingly to Musk math (https://fortune.com/2023/09/06/elon-musk-x-what-is-twitter-w...), is 1/10 of the acquisition price.
But yes, it was a masterstroke.
I don’t remember any other masterstroke in history that managed to lose 40B with a single acquisition.
I’d be rather reluctant to question the financial decisions of one of wealthiest men on earth. Losing 40B could feel quite different to him than to you or me. Besides, it’s unrealized loss until he sells.
The silicon valley/startups/VC tribe, and they favour Twitter because 1. that's what their friends use and 2. they like Elon Musk, they want to be like him.
Many OpenAI employees expressed their support for Sam at some point also on Twitter. Microsoft CEO (based in Redmond) tweeted quite a lot. Tech media reporters like Emily Chang and Kara Swisher also participated. The last one is quite critical of Twitter and I am not sure they all like Musk that much.
Are they all in the same “tribe”? Maybe you should enlarge the definition?
How about us all IT people who watched the drama unfolding on Twitter while our friend are using FB and Insta, we are far from SV and have mixed feelings about Elon Musk while never in a million years wanting to be like him? Also same “tribe”?
By all accounts he paid about double what it was worth and the value has collapsed from there.
Probably not a great idea to say anything overtly political when you own a social media company, as due to politics being so polarised in the US, any opinion is going to divide your audience in half causing a usage collapse and driving support to competing platforms.
His worse problem is that he owns both a social media network and a bigger separate business that wants to operate in the US, Turkey, India, China, Saudi Arabia, etc. which means he can't fight any censorship requests in any of those countries. (Which the previous management was actually very aggressive about.)
His worst personal problem is that he keeps replying "fascinating" to neo-Nazis and random conspiracy theorists because he wants to be internet friends with them.
What does this have to do with Elon again? FYI Twitter existed before October 2022. Account join dates are public. Every single person involved in this, incl. OpenAI staff posting for solidarity, joined Twitter years before Elon's takeover.
Now the blue tick has same effect on me on Twitter that the red N logo has on any film that came from the Netflix formula factory. I already know it’s going to be bad, regurgitated. Does everyone have a Twitter blue tick now? Or is that just a char people are using in their names?
>Does everyone have a Twitter blue tick now? Or is that just a char people are using in their names?
Blue tick just means user bought a subscription (X Premium) now - one of the features is "reply prioritization", so top replies to popular tweets are from blue ticks.
Say what you want about Summers specifically but I think it's a good idea getting some economists on the board. They are academics but focused on practical, important issues like loss of jobs and what that means for the economy and society. Up until now it seems like the board members have either been AI doomers with no practical experience or Silicon Valley types that inevitably have conflicts of interest, because everybody is starting their own AI venture now.
This has nothing to do with Summers being an economist and everything to do with the fact that he used to run the parent agency of the IRS. Summers is the least sensible board pick imaginable unless one takes this fact and the coming regulatory catastrophe into account.
>This has nothing to do with Summers being an economist and everything to do with the fact that he used to run the parent agency of the IRS.
It has literally nothing to do with that. The reason he's on the board now is because D'Angelo wanted him on it. You could have a problem with that, but you can't use his inclusion as evidence that the board lost.
From the outside none of this makes much sense. So the old board just disliked him enough to oust him but apparently didn’t have a good pulse on the company and overplayed their hand?
As far as I can tell, Sam did something? to get fired by the board, who are meant to be driven by non-profit ideals instead of corporate profits (probably from Sam pushing profit over safety, but there's no real way to know). From that, basically the whole company threatened to quit and move to Microsoft, showing the board that their power is purely ornamental. To retain any sort of power or say over decision making whatsoever, the board made concessions and got Sam back.
Really it just shows the whole non-profit arm of the company was even more of a lie then it appeared.
They did give reasons they were just vague. Reading between the lines, it seems the board was implying that Sam was trying to manipulate the board members individually. Was it true? Who knows. And as an outside observer, who cares? This is a fight between rich people about who gets to be richer. AI is so much larger than one cultish startup.
They wanted a new CEO and didn't expect Sam to take 95% of the company with him when he left.
Sam also played his hand extremely well; he's likely learned from watching hundreds of founder blowups over the years. He never really seemed angry publicly as he gained support from all the staff including Ilya & Mira. I had little doubt Emmett Shear would also welcome Sam's return since they were both in the first YC batch together.
If that were the case, would they not have presented the new CEO immediately for an “orderly transition”? As I understand it, Ms Murati tried to get Altman back, and when she pressured the board, they tried at least two other possible CEOs before settling on Mr Shear, who also threatened to leave if they could not give evidence of a legal reason for firing Altman. It smells like a personality conflict.
Still think this was CIA operation to get OpenAI in hands of US government and big tech.
Former Secretary, SalesForce CEO who was board chair of Twitter when infiltrated with FBI [1] and the fall-guy for the coup is the new board? Not one person from the actual company - not even Greg who did nothing wrong??? [1] - https://twitter.com/NameRedacted247/status/16340211499976867...
The two think-tank women who made all this happen conveniently leave so we never talk about them again.
You're obviously just coping here. FTX was the "rich connected people", there weren't other even richer connecteder people.
(It's also totally possible FTX still has everyone's money. They own a lot of Anthropic shares that are really valuable. But he's still been convicted because of all the fraud they did.)
I've heard $20 just buys like 9 minutes of actual processor time for GPT 4. Apocryphal maybe, but whatever the real number is, it's still going to be very high, once the VC money runs out I bet the rates will shoot.
I am glad someone said that. Among the endless theories this obvious aspect was interestingly missing. Maybe it's because of the culture in SV/HN where people and companies feel secure and isolated from the politics (maybe that is the reason SV is unique in the world). But in my world something like AGI+Saudi Arabia is a matter of international politics and multiple governments would involve. AGI will be an important strategic resource in this century, both in economical and political sense. This automatically makes it Cold War 2 kind of material. All these teen drama by some incompetent millennials in the board of a non-profit organization (Communist-like in a Capitalist country?) does not align with the gravity of the material. I believe this was some adult supervision attempt from your government. Or not, but that perspective needs more attention.
I could buy this theory, but it's worth noting that if it's true, their coup appears to have failed. So that's score one for the naive tech bros, score zero for the conniving natsec sociopaths.
Yeah fair enough. Any idea how Larry Summers even ended up on this board? He seems like an arbitrary choice with no domain expertise, although granted the board shouldn't be filled with AI experts.
Larry Summers? He has no technical experience, torpedoed the stimulus plan in 2008, and had to resign the Harvard presidency following a messy set of statements about ‘differences’ between the sexes and their mental abilities.
> “There is relatively clear evidence that whatever the difference in means—which can be debated—there is a difference in the standard deviation and variability of a male and female population,” he said. Thus, even if the average abilities of men and women were the same, there would be more men than women at the elite levels of mathematical ability
Isn’t this true though? Says more about Harvard than Summers to be honest.
There's a lot of evidence that not having two X chromosomes is less stable, leading to...irregularities. That sword cuts both ways.
I don't like ignorance being promoted under the cloak of not causing offense. It causes more harm than good. If there's a societal problem, you can't tackle it without knowing the actual cause. Sometimes the issue isn't an actual problem caused an 'ism,' it's just biology, and it's a complete waste of resources trying to change it.
A control group is kind of unimaginable right? And even if you could be sure of this conclusion, is it helpful or beneficial to promote it in public discourse?
>And even if you could be sure of this conclusion, is it helpful or beneficial to promote it in public discourse?
It's absolutely helpful for mental health, to show people that there's not some conspiracy out to disenfranchise and oppress them, rather the distribution of outcomes is a natural result of the distribution of genetic characteristics.
This is not an accurate description of causation and can't be, because there are more steps after "genetics" in the causal chain.
It's also unimaginative; having a variety of traits is itself good for society, which means you don't need variation in genetics to cause it. It's adaptive behavior for the same genes to simply lead to random outcomes. But people who say "genes cause X" probably wouldn't like this because they want to also say "and some people have the best genes".
The faculty got him out because he riled them, e.g. by insisting they ought to actually put effort into teaching undergrads. They looked for a pretext, and they found it.
Just like in that Oppenheimer movie. A sanctimonious witch hunt serving as pretext for a personal vendetta.
(Note that Summers is, I'm told, on a personal level, a dick. The popular depiction is not that wrong on that point. But he's the right pick for this job -- see my other comments in this thread.)
To be honest, one reason I like Summers as a choice is I have the impression he is willing to be unpopular when necessary, e.g. I remember him getting dragged extremely heavily on Twitter a few years back, for some takes on inflation which turned out to be fairly accurate.
> "...[there] is relatively clear evidence that whatever the difference in means—which can be debated—there is a difference in the standard deviation and variability of a male and female population..."
Disappointing outcome. The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft. Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.
It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have in its DNA to win, they're too short-sighted and reactive. Big techs will have incredible distribution power but a real disruptor must be brewing somewhere unnoticed, for now.
> there's clearly little critical thinking amongst OpenAI's employees either.
That they reached a different conclusion than the outcome you wished for does not indicate a lack of critical thinking skills. They have a different set of information than you do, and reached a different conclusion.
When a politician wins with 98% of the vote do you A) think that person must be an incredible leader , or B) think something else is going on?
Only time will tell if this was a good or bad outcome, but for now the damage is done and OpenAI has a lot of trust rebuilding to do to shake off the reputation that it now has after this circus.
The simple answer here is that the boards actions stood to incinerate millions of dollars of wealth for most of these employees, and they were up in arms.
They’re all acting out the intended incentives of giving people stake in a company: please don’t destroy it.
I don’t understand how the fact they went from a nonprofit into a for-profit subsidiary of one of the most closed-off anticompetitive megacorps in tech is so readily glossed over. I get it, we all love money and Sam’s great at generating it, but anyone who works at OpenAI besides the board seems to be morally bankrupt.
Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.
Also, working for a subsidiary (which was likely going to be given much more self-governance than working directly at megacorp), doesn’t necessarily mean “evil”. That’s a very 1-dimensional way to think about things.
We can acknowledge that it's morally bankrupt, while also not blaming them. Hell, I'd probably do the same thing in their shoes. That doesn't make it right.
If some of the smartest people on the planet are willing to sell the rest of us out for Comfy Lifestyle Money (not even Influence State Politics Money), then we are well and truly Capital-F Fucked.
We already know some of the smartest people are willing to sell us out. Because they work for FAANG ad tech, spending their days figuring out how to maximize the eyeballs they reach while sucking up all your privacy.
> Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.
That is a part of the reason why organizations choose to set themselves up as a non-profit, to help codify those morals into the legal status of the organization to ensure that the ingrained selfishness that exists in all of us doesn’t overtake their mission. That is the heart of this whole controversy. If OpenAI was never a non-profit, there wouldn’t be any issue here because they wouldn’t even be having this legal and ethical fight. They would just be pursuing the selfish path like all other for profit businesses and there would be no room for the board to fire or even really criticize Sam.
I guess my qualm is that this is the cost of doing business, yet people are outraged at the board because they’re not going to make truckloads of money in equity grants. That’s the morally bankrupt part in my opinion.
If you throw your hands up and say, “well kudos to them, theyre actually fulfilling their goal of being a non profit. I’m going to find a new job”. That’s fine by me. But if you get morally outraged at the board over this because you expected the payday of a lifetime, that’s on you.
Easy to see how humans would join a non profit for the vibes, and then when they create one of the most compelling products of the last decade worth billions of dollars, quickly change their thinking into "wait, i should get rewarded for this".
Wild the employees will go back under a new board and the same structure, first priority should be removing the structure that allowed a small group of people to destroy things over what may have been very petty reasons.
Well it's a different group of people and that group will now know the consequences of attempting to remove Sam Altman. I don't see this happening again.
Not that I have any insight into any of the events at OpenAI, but would just like to point out there are several other reasons why so many people would sign, including but not limited to:
- peer pressure
- group think
- financial motives
- fear of the unknown (Sam being a known quantity)
- etc.
So many signatures may well mean there's consensus, but it's not a given. It may well be that we see a mass exodus of talent from OpenAI _anyway_, due to recent events, just on a different time scale.
If I had to pick one reason though, it's consensus. This whole saga could've been the script to an episode of Silicon Valley[1], and having been on the inside of companies like that I too would sign a document asking for a return to known quantities and – hopefully – stability.
If the opposing letter that was published from "former" employee's is correct there was already a huge turn over, and the people that remain liked the environment they were in, and I would assume liked the current leadership or they would have left
So clearly the current leadship built a loyal group which I think is something that should be explored because group think is rarely a good thing, no matter how much modern society wants to push out all dissent in favor of a monoculture of idea's
If openAI is a huge mono-culture of thinking then they have bigger problems most likely
What opposing letter, how many people are we talking about, and what was their role in the company?
All companies are monocultures, IMO, unless they are multi-nationals, and even then, there's cultural convergence. And that's good, actually. People in a company have to be aligned enough to avoid internal turmoil.
>>What opposing letter, how many people are we talking about, and what was their role in the company?
Not-validated, unsigned letter [1]
>>All companies are monocultures
yes and no. There has be diversity of thought to ever get anything done really, ever everyone is just sycophants all agreeing with the boss then you end up with very bad product choices, and even worse company direction.
yes there has to be some commonality. some semblance of shared vision or values, but I dont think that makes a "monoculture"
The only major series with a brilliant, satisfying, and true to form ending and you want to resuscitate it back to life for some cheap curtain calls and modern social commentary, leaving Mike Judge to end it yet again and in such a way that manages to duplicate or exceed the effect of the first time but without doing the same thing? Screw it. Why not?
You could say that, except that people in this industry are the most privileged, and their earnings and equity would probably be matched elsewhere.
You say “group think” like it's a bad thing. There's always wisdom in crowds. We have a mob mentality as an evolutionary advantage. You're also willing to believe that 3–4 people can make better judgement calls than 800 people. That's only possible if the board has information that's not public, and I don't think they do, or else they would have published it already.
And … it doesn't matter why there's such a wide consensus. Whether they care about their legacy, or earnings, or not upsetting their colleagues, doesn't matter. The board acted poorly, undoubtedly. Even if they had legitimate reasons to do what they did, that stopped mattering.
I'm imagining they see themselves in the position of Microsoft employees about to release Windows 95, or Apple employees about to release the iPhone... and someone wants to get rid of Bill Gates or Steve Jobs.
Gates and Jobs helped establish these companies as the powerhouses they are today with their leadership in the 90s and 00s.
It's fair to say that what got MS and Apple to dominance may be different from what it takes to keep them there, but which part of that corporate timeline more closely resembles OpenAI?
Right. They aren't actually voting for Sam Altman. If I'm working at a company and I see as little as 10% of the company jump ship I think "I'd better get the frik outta here". Especially if I respect the other people who are leaving. This isnt a blind vote. This is a rolling snowball.
I don't think very many people actually need to believe in Sam Altman for basically everyone to switch to Microsoft.
95% doesn't show a large amount of loyalty to Sam it shows a low amount of loyalty to OpenAI.
Personally I have never seen that level of singular agreement in any group of people that large. Especially to the level of sacrifice they were willing to take for the cause. You maybe see that level of devotion to a leader in churches or cults, but in any other group? You can barely get 3 people to agree on a restaurant for lunch.
I am not saying something nefarious forced it, but it’s certainly unusual in my experience and this causes me to be skeptical of why.
This seems extremely presumptuous. Have you ever been inside a company during a coup attempt? The employees’ future pay and livelihood is at stake, why are you assuming they weren’t being asked to sacrifice themselves by not objecting to the coup. The level of agreement could be entirely due to the fact that the stakes are very large, completely unlike your choice for lunch locale. It could also be an outcome of nobody having asked their opinion before making a very big change. I’d expect to see almost everyone at a company agree with each other if the question was, “hey should we close this profitable company and all go get other jobs, or should we keep working?”
I have had a long career and have been through hostile mergers several times and at no point have I ever seen large numbers of employees act outside of their self-interest for an executive. It just doesn’t happen. Even in my career, with executives who are my friends, I would not act outside my personal interests. When things are corporately uncertain and people worry about their working livelihoods they just don’t tend to act that way. They tend to hunker heads down or jump independently.
The only explanation that makes any sense to me is that these folks know that AI is hot right now and would be scooped up quickly by other orgs…so there is little risk in taking a stand. Without that caveat, there is no doubt in my mind that there would not be this level of solidarity to a CEO.
If you are willing to leave a paycheck because of someone else getting slighted, to me, that is acting against your own self-interest. Assuming of course you are willing to actually leave. If it was a bluff, that still works against your self-interest by factioning against the new leadership and inviting retaliation for your bluff.
Why do you assume they were willing to leave a paycheck because of someone else getting slighted? If that were the case, then it is unlikely everyone would be in agreement. Which indicates you might be making incorrect assumptions, no? And, again, why assume they were threatening to leave a paycheck at all? That’s a bad assumption; MS was offering a paycheck. We already know their salaries weren’t on the line, but all future stock earnings and bonuses very well might be. There could be other reasons too, I don’t see how you can conclude this was either a bluff or not self-interest without making potentially bad assumptions.
They threatened to quit by moving to Microsoft, didn’t you read the letter? MS assured everyone jobs if they wanted to move. Isn’t making incorrect assumptions and sticking to them in the face of contrary evidence and not answering direct questions the very definition of obtuse?
>Especially to the level of sacrifice they were willing to take for the cause.
We have no idea that they were sacrificing anything personally. The packages Microsoft offered for people who separated may have been much more generous than what they were currently sitting on. Sure, Altman is a good leader, but Microsoft also has deep pockets. When you see some of the top brass at the company already make the move and you know they're willing to pay to bring you over as well, we're not talking about a huge risk here. If anything, staying with what at the time looked like a sinking ship might have been a much larger sacrifice.
There are plenty of examples of workers unions voting with similar levels of agreement. Here are two from the last couple months:
> UAW President Shawn Fain announced today that the union’s strike authorization vote passed with near universal approval from the 150,000 union workers at Ford, General Motors and Stellantis. Final votes are still being tabulated, but the current combined average across the Big Three was 97% in favor of strike authorization. The vote does not guarantee a strike will be called, only that the union has the right to call a strike if the Big Three refuse to reach a fair deal.
> The Writers Guild of America has voted overwhelmingly to ratify its new contract, formally ending one of the longest labor disputes in Hollywood history. The membership voted 99% in favor of ratification, with 8,435 voting yes and 90 members opposed.
Approval rates of >90% are quite common within political parties, to the point where anything less can be seen as an embarrassment to the incumbent head of party.
There is a big difference between “I agree with this…” when a telephone poll caller reaches you and “I am willing to leave my livelihood because my company CEO got fired”
But if 100 employees were like "I'm gonna leave" then your livelihood is in jeopardy. So you join in. It's really easy to see 90% of people jumping overboard when they are all on a sinking ship.
I don't mean voter approval, I mean party member approval. That's arguably not that far off from a CEO situation in a way in that it's the opinion of and support for the group's leadership by group members.
Voter approval is actually usually much less unanimous, as far as I can tell.
But it’s not changing their livelihood. Msft just gives them the same deal. In a lot of ways, it’s similar to the telepoll - people can just say whatever they want, there won’t be big material consequences
That sounds like a cult more than a business. I work at a small company (~100 people), and we are more or less aligned with what we're doing you are not going to get close to that consensus on anything. Same for our sister company, about the same size as OpenAI.
1. The company has built a culture around not being under control by one single company, Microsoft in this case. Employees may overwhelmingly agree.
2. The board acted rashly in the first place, and over 2/3 of employees signed their intent to quit if the board hadn't been replaced.
3. Younger folks probably don't look highly at boards in general, because they never get to interact with them. They also sometimes dictate product outcomes that could go against the creative freedoms and autonomy employees are looking for. Boards are also focused on profits, which is a net-good for the company, but threatens the culture of "for the good of humanity" that hooks people.
4. The high success of OpenAI has probably inspired loyalty in its employees, so long as it remains stable, and their perception of what stability is means that the company ultimately changes little. Being "acquired" by Microsoft here may mean major shakeups and potential layoffs. There's no guarantees for the bulk of workers here.
I'm reading into the variables and using intuition to make these guesses, but all to suggest: it's complicated, and sometimes outliers like these can happen if those variables create enough alignment, if they seem common-sensical enough to most.
> Younger folks probably don't look highly at boards in general, because they never get to interact with them.
Judging from the photos I've seen of the principals in this story, none of them looks to be over 30, and some of them look like schoolkids. I'm referring to the board members.
I don't think the age of the board members matters, but rather that younger generations have been taught to criticize boards of any & every company for their myriad decisions to sacrifice good things for profit, etc.
It's a common theme in the overall critique of late stage capitalism, is all I'm saying — and that it could be a factor in influencing OpenAI's employees' decisions to seek action that specifically eliminates the current board, as a matter of inherent bias that boards act problematically to begin with.
I also sounds like a very narrow hiring profile. That is, favoring the like-minded and assimilation over free thinking and philosophical diversity. They might give off the appearance of "diversity" on the outside - which is great for PR - but under the hood it's more monocultural. Maybe?
Superficial "diversity" is all the "diversity" a company needs in the modern era.
Companies do not desire or seek philosophical diversity, they only want Superficial biologically based "diversity" to prove they have the "correct" philosophy about the world.
But it's not only the companies, it's the marginalized so desperate to get a "seat at the table" that they don't recognize the table isn't getting bigger and rounder. Instead, it's still the same rectangular that is getting longer and longer.
Agree. This is the monoculture being adopted in actuality -- a racist crusade against "whiteness", and a coercive mechanism to ensure companies don't overstep their usage of resources (carbon footprint), so as not to threaten the existing titans who may have already abused what was available to them before these intracorporate policies existed.
It's also a way for banks and other powerful entities to enforce sweeping policies across international businesses that haven't been enacted in law. In other words: if governing bodies aren't working for them, they'll just do it themselves and undermine the will of companies who do not want to participate, by introducing social pressures and boycotting potential partnerships unless they comply.
Ironically, it snuffs out diversity among companies at a 40k foot level.
It's not a crusade against whiteness. Unless you're unhinged and believe a single phenotype that prevents skin cancer is somehow an obvious reflection of genetic inferiority and that those lacking it have a historical destiny to rule over the rest and are entitled to institutional privileges over them, it makes sense that companies with employees not representative of the overall population have hiring practices that are problematic, albeit not necessarily being as explicitly racist as you are.
Unfortunately you are wrong, and this kind of rhetoric has not only made calls for white genocide acceptable and unpunished, but has incited violence specifically against Caucasian people, as well as anyone who is perceived to adopt "white" thinking such as Asian students specifically, and even Black folks who see success in their life as a result of adopting longstanding European/Western principles in their lives.
Specifically, principles that have ultimately led to the great civilizations we're experiencing today, built upon centuries of hard work and deep thinking in both the arts and sciences, by all races, beautifully.
DEI and its creators/pushers are a subtle effort to erase and rebuild this prior work under the lie that it had excluded everyone but Whites, so that its original creators no longer take credit.
Take the movement to redefine Math concepts by recycling existing concepts using new terms defined exclusively by non-white participants, since its origins are "too white". Oh the horror! This is false, as there are many prominent non-white mathematicians that existed prior to the woke revolution, so this movement's stated purpose is a lie, and its true purpose is to eliminate and replace white influence.
Finally, the fact that DEI specifically targets "whiteness" is patently racist. Period.
I think that most pushes for diversity that we see today are intended to result in monocultures.
DEI and similar programs use very specific racial language to manipulate everyone into believing whiteness is evil and that rallying around that is the end goal for everyone in a company.
On a similar note, the company has already established certain missions and values that new hires may strongly align with like: "Discovering and enacting the path to safe artificial general intelligence", given not only the excitement around AI's possibilities but also the social responsibility of developing it safely. Both are highly appealing goals that are bound to change humanity forever and it would be monumentally exciting to play a part in that.
Thus, it's safe to think that most employees who are lucky to have earned a chance at participating would want to preserve that, if they're aligned.
This kind of alignment is not the bad thing people think it is. There's nothing quite like a well-oiled machine, even if the perception of diversity from the outside falls by the wayside.
Diversity is too often sought after for vanity, rather than practical purposes. This is the danger of coercive, box-checking ESG goals we're seeing plague companies, to the extent that it's becoming unpopular to chase after due to the strongly partisan political connotations it brings.
That argument only works with a “population”, since almost nobody gets to choose which set of politicians they vote for.
In this case, OpenAI employees all voluntarily sought to join that team at one point. It’s not hard to imagine that 98% of a self-selecting group would continue to self-select in a similar fashion.
Odds are if he left there's the possibility their compensation situation might have changed for the worse if not leading to downsizing, that in the edge of a recession with plenty of competition out there.
I'm sure most of them are extremely intelligent but the situation showed they are easily persuaded, even if principled. They will have to overcome many first-of-a-kind challenges on their quest to AGI but look at how quickly everyone got pulled into a feel-good kumbaya sing-along.
Think of that what you wish. To me, this does not project confidence in this being the new Bell Labs. I'm not even sure they have it in their DNA to innovate their products much beyond where they currently are.
I thought so originally too, but when I thought about their perspective, I realized I would probably sign too. Imagine that your CEO and leadership has led your company to the top of the world, and you're about to get a big payday. Suddenly, without any real explanation, the board kicks out the CEO. The leadership almost all supports the CEO and signs the pledge, including your manager. What would you do at that point? Personally, I'd sign just so I didn't stand out, and stay on good terms with leadership.
The big thing for me is that the board didn't say anything in its defense, and the pledge isn't really binding anyway. I wouldn't actually be sure about supporting the CEO and that would bother me a bit morally, but that doesn't outweigh real world concerns.
The point of no return for the company might have been crossed way before the employees were forced to choose sides. Choose Sam's side and the company lives but only as a bittersweet reminder of its founding principles. Choose the board's side and you might be dooming the company to die an even faster death.
But maybe for further revolutions to happen, it did have to die to be reborn as several new entities. After all, that is how OpenAI itself started - people from different backgrounds coming together to go against the status quo.
What happened over the weekend is a death and rebirth, of the board and the leaderships structure which will definitely ripple throughout the company in the coming days. It just doesn't align perfectly with how you want it to happen.
Great point. Either way, when this all started it might have all been too late.
The board said "allowing the company to be destroyed would be consistent with the mission" - and they might have been right. What's now left is a money-hungry business with bad unit economics that's masquerading as a charity for the whole of humanity. A zombie.
Persuaded by whom? This whole saga has been opaque to pretty much everyone outside the handful of individuals directly negotiating with each other. This never was about a battle for OpenAI's mission or else the share of employees siding with Sam wouldn't have been that high.
Why not? Maybe the board was just too late to the party. Maybe the employees that wouldn’t side with Sam have already left[1], and the board was just too late to realise that. And maybe all the employees who are still at OpenAI mostly care about their equity-like instruments.
My understanding is that the non-profit created the for-profit so that they could offer compensation which would be typical for SV start-ups. Then the board essentially broke the for-profit by removing the SV CEO and putting the "payday" which would have valued the company at 80 billion in jeopardy. The two sides weren't aligned, and they need to decide which company they want to be. Maybe they should have removed Sam before MS came in with their big investment. Or maybe they want to have their cake and eat it too.
"OpenAIs goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI's benefits are as widely and evenly distributed as possible. Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way."
I would classify their mission "to organize the world's information and make it universally accessible and useful" as some light parading acting in the best interests of humanity.
not sure what event you're thinking of, but Google was a public company before 10 years and they started their first ad program just barely more than a year after forming as a company in 1998.
I have no objection to companies[0] making money. It's discarding the philosophical foundations of the company to prioritize quarterly earnings that is offensive.
I consider Google to have been a reasonably benevolent corporate citizen for a good time after they were listed (compare with, say, Microsoft, who were the stereotypical "bad company" throughout the 90s). It was probably around the time of the Google+ failure that things slowly started to go downhill.
[0] a non-profit supposedly acting in the best interests of humanity, though? That's insidious.
What's wrong with profit and wanting to maximize it?
Profit is now a dirty word somehow, the idea being that it's a perverse incentive. I don't believe that's true. Profit is the one incentive businesses have that's candid and the least perverse. All other incentives lead to concentrating power without being beholden to the free market, via monopoly, regulations, etc.
The most ethically defensible LLM-related work right now is done by Meta/Facebook, because their work is more open to scrutiny. And the non-profit AI doomers are against developing LLMs in the open. Don't you find it curious?
The problem is moreso trying to maximize profit after claiming to be a nonprofit. Profit can be a good driving force but it is not perfect. We have nonprofits for a reason, and it is shameful to take advantage of this if you are not functionally a nonprofit. There would be nothing wrong with OpenAI trying to maximize profits if they were a typical company.
There's nothing wrong with running a perfectly good car wash, but you shouldn't be shocked if people are mad when you advertise it as an all you can eat buffet and they come out soaked and hungry.
I wouldn't really give OpenAI credit for lasting 3 years. OpenAI lasted until they moment they had a successful commercial product. Principles are cheap when there is no actual consequences to sticking to them.
Not so true working for an organisation that is ostensibly a non-profit. People working for a non-profit are generally taking a significant hit to their earning's compared to doing similar work in a for-profit, outside of the top management of huge global charities.
The issue here is that OpenAI, Inc (officially and legally a non-profit) has spun up a subsidiary OpenAI Global, LLC (for-profit). OpenAI Global, LLC is what's taken venture funding and can provide equity to employees.
Understandably there's conflict now between those who want to increase growth and profit (and hence the value of their equity) and those who are loyal to the mission of the non-profit.
I don't really think this is true in non-charity work. Half of American hospitals are nonprofit and many of the insurance conglomerates are too, like Kaiser. The executives make plenty of money. Kaiser is a massive nonprofit shell for profitmaking entities owned by physicians or whatever, not all that dissimilar to the OpenAI shell idea. Healthcare worked out this way because it was seen as a good model to have doctors either reporting to a nonprofit or owning their own operations, not reporting to shareholders. That's just tradition though. At this point plenty of healthcare operations are just normal corporations controlled by shareholders.
What is socially defined as beneficial-to-humanity is functionally mandated by the MSM and therefore capricious, at the least. With that in mind, a translation:
"OpenAI will be obligated to make decisions according to government preference as communicated through soft pressure exerted by the Media. Don't expect these decisions to make financial sense for us".
It could be hard to do that while paying a penalty to FTB and IRS for what they’re suspected to have done (in allowing a for-profit subsidiary to influence an NPO parent) or dealing with SEC and the state courts over any fiduciary breach allegations related to the published stories. [ Nadella is an OG genius because his company is now shielded from all of that drama as it plays out, no matter the outcome. He can take the time to plan for a soft landing at MS for any OpenAI workers (if/when they need it) and/or to begin duplicating their efforts “just in case.” Heard coming from the HQ parking lot in Redmondhttps://youtu.be/GGXzlRoNtHU ]
Now we can all go back to work on GPT4turbo integrations while MS worries about diverting a river or whatever to power and cool all of those AI chips they’re gunna [sic] need because none of our enterprises will think twice about our decisions to bet on all this. /s/
For profit subsidiaries can totally influence the nonprofit shell without penalty. Happens all the time. The nonprofit board must act in the interest of the exempt mission rather than just investor value or some other primary purpose. Otherwise it's cool.
yeah, all they have to do is pray for humanity to not let the magic AI out of the bottle and they’re free to have a $91b valuation and flaunt it in the media for days.. https://youtu.be/2HJxya0CWco
Tell me how the board's actions could convince the employees they are making the right move?
Even if they are genuine in believing firing Sam is to keep OpenAI's founding principles, they can't be doing a better job in convincing everyone they are NOT able to execute it.
OpenAI has some of the smartest human beings on this planet, saying they don't think critically just because they don't vote with what you agree is reaching reaching.
I hate these comments that potray as if every expert/scientist is just good at one thing and aren't particularly great at critical thinking/corporate politics.
Heck, there are 700 of them. All different humans, good at something, bad at some other things. But they are smart. And of course a good chunk of them would be good at corporate politics too.
I don't think the argument was that none of them are good at that, just that it's a mistake to assume that just because they're all very smart in this particular field that they're great at another.
Can't critical thinking also include : "I'm about to get a 10mil pay day, hmmm, this is crazy situation, let me think critically how to ride this out and still get the 10mil so my kids can go to college and I don't have to work until I'm 75".
6D Chess is apparently realizing that AGI is not 100% certain and that having 10mm on the run up to AGI is better than not having 10mm on the run up to AGI.
Anyone with enough critical thought and understands the hard consciousness problem's true answer (consciousness is the universe evaluating if statements) and where the universe is heading physically (nested complexity), should be seeking something more ceremonious. With AI, we have the power to become eternal in this lifetime, battle aliens, and shape this universe. Seems pretty silly to trade that for temporary security. How boring.
I would expect that actual AI researchers understand that you cannot break the laws of physics just by thinking better. Especially not with ever better LLMs, which are fundamentally in the business of regurgitating things we already know in different combinations rather than inventing new things.
You seem to be equating AI with magic, which it is very much not.
LLMs are able to do complex logic within the world of words. It is a a smaller matrix than our world but fueled by the same chaotic symmetries of our universe. I would not underestimate logic, even when not given adequate data.
You can make it sound as esoteric as you want, but in the end an AI will still be bound by the laws of physics. Being infinitely smart will not help with that.
I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.
Axioms are constraints as much as they might look like guidance. We live in a neuromorphic computer. Logic explores this, even with few axioms. With fewer axioms, it will be less constrained.
OTOH, there's a very good argument to be made that if you recognize that fact, your short-term priority should be to amass a lot of secular power so you can align society to that reality. So the best action to take might be no different.
Very true. However, we live in a supercomputer dictated by E=mc^2=hf [2,3]. (10^50 Hz/Kg or 10^34 Hz/J)
Energy physics yield compute, which yields brute forced weights (call it training if you want...), which yields AI to do energy research ..ad infinitum, this is the real singularity. This is actually the best defense against other actors. Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/Simulcrums/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how in answering a question, the AI performs an action, instead of providing an informational reply, only possible because we live in a universe with mass-energy equivalence - analogous to state-action equivalence.
Understanding prosociality and postscarcity, division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources requires some transfinite calculus and philosophy. How's that for future fairness? ;-)
I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.
Sure, I agree. I was referencing only the idea that being smart in one domain automatically means being a good critical thinker in all domains.
I don't have an opinion on what decision the OpenAI staff should have taken, I think it would've been a tough call for everyone involved and I don't have sufficient evidence to judge either way.
Based on the behavior of lots of smart people I worked at with Google during Google’s good times, critical thinking is definitely in the minority party. Brilliant people from Stanford, Berkeley, MIT, etc would all be leading experts in this or that but would lack critical thinking because they were never forced to develop that skill.
Critical thinking is not an innate ability. It has to be honed and exercised like anything else and universities are terrible at it.
Stupidity is not defined by self-harming actions and beliefs - not sure where you're getting that from.
Stupidity is being presented with a problem and an associated set of information and being unable or less able than others are to find the solution. That's literally it.
I see. I've never read his work before, thank you.
So they just got Cipolla's definition wrong, then. It looks like the third fundamental law is closer to "a person who causes harm to another person or group of people without realizing advantage for themselves and instead possibly realizing a loss."
I agree. It's better to separate intellect from intelligence instead of conflating them as they usually are. The latter is about making good decisions, which intellect can help with but isn't the only factor. We know this because there are plenty of examples of people who aren't considered shining intellects who can make good choices (certainly in particular contexts) and plenty of high IQ people who make questionable choices.
Stupidity is defined as “having or showing a great lack of intelligence or common sense”. You can be extremely smart and still make up your own definitions for words.
But pronouncing that 700 people are bad at critical thinking is convenient when you disagree with them on desired outcome and yet can't hope to argue points.
> Being an expert in one particular field (AI) not mean you are good at critical thinking or thinking about strategic corporate politics.
That's not the bar you are arguing against.
You are arguing against how you have better information, better insight, better judgement, and are able to make better decisions than the experts in the field who are hired by the leading organization to work directly on the subject matter, and who have direct, first-person account on the inner workings of the organization.
We're reaching peak levels of "random guy arguing online knowing better than experts" with these pseudo-anonymous comments attacking each and every person involved in OpenAI who doesn't agree with them. These characters aren't even aware of how ridiculous they sound.
Disagreeing with employee actions doesn't mean that you are correct and they failed to think well. Weighting their collective probable profiles, including as insiders, and yours, it would be irrational to conclude that they were in the wrong.
> Disagreeing with employee actions doesn't mean that you are correct and they failed to think well.
You failed to present a case where random guys shitposting on random social media services are somehow correct and more insightful and able to make better decisions than each and every single expert in the field who work directly on both the subject matter and in the organization in question. Beyond being highly dismissive, it's extremely clueless.
> not mean you are good at critical thinking or thinking about strategic corporate politics
Perhaps. Yet this time they somehow managed to take the seemingly right decisions (from their perspective) despite their decisions.
Also, you'd expect OpenAI board members to be "good at critical thinking or thinking about strategic corporate politics" yet they somehow managed to make some horrible decisions.
Doing AI for ChatGPT just means you know a single model really well.
Keep in mind that Steve Jobs chose fruit smoothies for his cancer cure.
It means almost nothing about the charter of OpenAI that they need to hire people with a certain set of skills. That doesn't mean they're closer to their goal.
> They act firstmost as investors rather than as employees on this.
reply
That's not at all obvious, the opposite seems to be the case. They chose to risk having to Microsoft and potentially lose most of the equity they had in OpenAI (even if not directly it wouldn't be worth that much at the end with no one to do the actual work).
A board member, Helen Toner, made a borderline narcissistic remark that it would be consistent with the company mission to destroy the company when the leadership confronted the board that their decisions puts the future of the company in danger. Almost all employees resigned in protest. It's insulting calling the employees under these circumstances investors.
Don’t forget she’s heavily invested in a company that is directly competing with OpenAI. So obviously it’s also in her best interest to see OpenAI destroyed.
I agree that we should usually assume good faith. Still, if a member knows she will loose the board seat soon and makes such a implicit statement to the leadership team there is reason to believe that she doesn't want both companies to be successful, at least one of those not.
> obviously it’s also in her best interest to see OpenAI destroyed
Do you feel the same way about Reed Hastings serving on Facebooks BoD, or Eric Schmidt on Apples? How about Larry Ellison at Tesla?
These are just the lowest of hanging fruit, i.e literal chief executives and founders. If we extend the criteria for ethical compromise to include every board members investment portfolio I imagine quite a few more “obvious” conflicts will emerge.
By definition the attention economy dictates that time spent one place can’t be spent in another. Do you also feel as though Twitch doesn’t compete with Facebook simply because they’re not identical businesses? That’s not how it works.
But you don’t have to just take my word for it :
> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”
I’m not sure how the point stands. The iPhone was introduced during that tenure, then the App Store, then Jobs decided Google was also headed toward their own full mobile ecosystem, and released Schmidt. None of that was a conflict of interest at the beginning. Jobs initially didn’t even think Apple would have an app store.
Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.
> Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.
It's a well established concept and was supported with a concrete example. If you don't feel inclined to address my points, I'm certainly not obligated to dance to your tune.
Your concrete example is Netflix’s CEO saying he doesn’t want to do advertising because he missed the boat and was on Facebook’s board and as a result didn’t believe he had the data to compete as an advertising platform.
Attempting to run ads like Google and Facebook would bring Netflix into direct competition with them, and he knows he doesn’t have the relationships or company structure to support it.
He is explicitly saying they don’t compete. And they don’t.
I think yes, because Netflix you pay out of pocket, whereas Facebook is a free service
I believe Facebook vs Hulu or regular TV is more of a competition in the attention economy because when the commercial break comes up then you start scrolling your social media on your phone and every 10 posts or whatever you stumble into the ads placed on there so Facebook ads are seen and convert whereas regular tv and hulu aren’t seen and dont convert
> I think yes, because Netflix you pay out of pocket, whereas Facebook is a free service
Do you agree that the following company pairs are competitors?
* FB : TikTok
* TikTok : YT
* YT : Netflix
If so, then by transitive reasoning there is competition between FB and Netflix.
...
To be clear, this is an abuse of logic and hence somewhat tongue in cheek, but I also don't think either of the above comparisons are wholly unreasonable. At the end of the day, it's eyeballs all the way down and everyone wants as many as of them shabriri grapes as they can get.
The two FAANG companies don't compete at a product level, however they do compete for talent, which is significant. Probably significant enough to cause conflicts of interest.
I couldn't find any source on her investing in any AI companies. If it's true (and not hidden), I'm really surprised that major news publications aren't covering it.
And it seems like they were right that the for-profit part of the company had become out of control, in the literal sense that we've seen through this episode that it could not be controlled.
Ands the evidence is now that OpenAI is a business 2 business product and not a attempt to keep AI doing anything but satiating anything Microsoft wants.
It is a correct statement, not really "borderline narcissistic". The board's mission is to help humanity develop safe beneficial AGI. If the board thinks that the company is hindering this mission (e.g. doing unsafe things), then it's the board's duty to stop the company.
Of course, the employees want the company to continue, and weren't told much at this point so it is understandable that they didn't like the statement.
I can't interpret from the charter that the board has the authorisation to destroy the company under the current circumstances:
> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project
That wasn't the case.
So it may be not so far fetched to call her actions borderline as it is also very easy to hide personal motives behind altruistic ones.
The more relevant part is probably "OpenAI’s mission is to ensure that AGI ... benefits all of humanity".
The statement "it would be consistent with the company mission to destroy the company" is correct. The word "would be" rather than "is" implies some condition, it doesn't have to apply to the current circumstances.
A hypothesis is that Sam was attempting to gain full control of the board by getting the majority, and therefore the current board would be unable to hold him accountable to follow the mission in the future. Therefore, the board may have considered it necessary to stop him in order to fulfill the mission. There's no hard evidence of that revealed yet though.
> this mission (e.g. doing unsafe things), then it's the board's duty to stop the company.
So instead of having to compromise to some extent but still have a say what happens next you burn the company at best delaying the whole thing by 6-12 months until someone else does it? Well at least your hands are clean, but that's about it...
No, if they had vastly different information, and if it was on the right side of their own stated purpose & values, they would have behaved very differently. This kind of equivocation hinders the way more important questions such as: just what the heck is Larry Summers doing on that board?
> just what the heck is Larry Summers doing on that board?
Probably precisely what Condeleeza Rice was doing on DropBox’s board. Or that board filled with national security state heavyweights on that “visionary” and her blood testing thingie.
“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)
>just what the heck is Larry Summers doing on that board?
1. Did you really think the feds wouldn't be involved?
AI is part of the next geopolitical cold war/realpolitik of nation-states. Up until now it's just been passively collecting and spying on data. And yes they absolutely will be using it in the military, probably after Israel or some other western-aligned nation gives it a test run.
2. Considering how much impact it will have on the entire economy by being able to put many white collar workers out of work, a seasoned economist makes sense.
The East Coast runs the joint. The west coast just does the (publicly) facing tech stuff and takes the heat from the public
You mean the official stated purpose of OpenAI. The stated purpose that is constantly contradicted by many of their actions, and I think nobody took seriously anymore for years.
From everything I can tell the people working at OpenAI have always cared more about advancing the space and building great products than "openeness" and "safe AGI". The official values of OpenAI were never "their own".
> From everything I can tell the people working at OpenAI have always cared more about advancing the space and building great products than "openeness" and "safe AGI".
Board member Helen Toner strongly criticized OpenAI for publicly releasing it's GPT when it did and not keeping it closed for longer. That would seem to be working against openness for many people, but others would see it as working towards safe AI.
The thing is, people have radically different ideas about what openness and safe mean. There's a lot of talk about whether or not OpenAI stuck with it's stated purpose, but there's no consensus on what that purpose actually means in practice.
> what the heck is Larry Summers doing on that board?
The former president of a research-oriented nonprofit (Harvard U) controlling a revenue-generating entity (Harvard Management Co) worth tens of billions, ousted for harboring views considered harmful by a dominant ideological faction of his constituency? I guess he's expected to have learned a thing or two from that.
And as an economist with a stint of heading the treasury under his belt, he's presumably expected to be able to address the less apocalyptic fears surrounding AI.
I assume larry summers is there to ensure the proper bi-partisan choices made by whats clearly now an _business_ product and not a product for humanity.
Said purpose and values are nothing more than an attempted control lever for dark actors, very obviously. People / factions that gain handholds, which otherwise wouldn't exist, and exert control through social pressure nonsense that they don't believe in themselves. As can be extracted from modern street-brawl politics, which utilizes the same terminology to the same effect. And as can be inferred would be the case given OAI's novel and convoluted corporate structure as referenced to the importance of its tech.
We just witnessed the war for that power play out, partially. But don't worry, see next. Nothing is opaque about the appointment of Larry Summers. Very obviously, he's the government's seat on the board (see 'dark actors', now a little more into the light). Which is why I noted that the power competition only played out, partially. Altman is now unfireable, at least at this stage, and yet it would be irrational to think that this strategic mistake would inspire the most powerful actor to release its grip. The handhold has only been adjusted.
I think this is a good question. One should look at what actually happened in practice. What was the previous board, what is the current board. For the leadership team, what are the changes? Additionally, was information revealed about who calls the shots which can inform who will drive future decisions? Anything else about the inbetweens to me is smoke and mirrors.
I don't understand how, with the dearth of information we currently have, anyone can see this as "higher values" vs "money".
No doubt people are motivated by money but it's not like the board is some infallible arbiter of AI ethics and safety. They made a hugely impactful decision without credible evidence that it was justified.
The issue here is that the board of the non-profit that is supposedly in charge of OpenAI (and whose interests are presumably aligned with the mission statement of the company) seemingly just lost a power struggle with their for-profit subsidiary who is not supposed to be in charge of OpenAI (and whose interests, including the interests of their employees, are aligned with making as much money as possible). Regardless of whether the board's initial decision that started this power struggle was wise or not, don't you find the outcome a little worrisome?
For some that is important, but more people consider the prevention of an AI monopoly to be more important here. See the original charter and the status quo with Microsoft taking it all.
Perhaps something like "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."
There’s evidence to suggest that a central group have pressured the broader base of employees into going along with this, as posted elsewhere in the thread.
Sign the letter and support Sam so you have a place at Microsoft if OpenAI tanks and have a place at OpenAI if it continues under Sam, or don’t sign and potentially lose your role at OpenAI if Sam stays and lose a bunch of money if Sam leaves and OpenAI fails.
Maybe they're working for both, but when push comes to shove they felt like they had no choice? In this economy, it's a little easier to tuck away your ideals in favor of a paycheck unfortunately.
Or maybe the mass signing was less about following the money and more about doing what they felt would force the OpenAI board to cave and bring Sam back, so they could all continue to work towards the missing at OpenAI?
> In this economy, it's a little easier to tuck away your ideals in favor of a paycheck unfortunately.
Its a gut check on morals/ethics for sure. I'm always pretty torn on the tipping point for empathising there in an industry like tech though, even more so for AI where all the money is today. Our industry is paid extremely well and anyone that wants to hold their personal ethics over money likely has plenty of opportunity to do so. In AI specifically, there would have easily been 800 jobs floating around for AI experts that chose to leave OpenAI because they preferred the for-profit approach.
At least how I see it, Sam coming back to OpenAI is OpenAI abandoning the original vision and leaning full into developing AGI for profit. Anyone that worked there for the original mission might as well leave now, they'll be throwing AI risk out the window almost entirely.
Perhaps a better example would be 95% of people voted in favour of reinstating apple pie to the menu after not receiving a coherent explanation for removing apple pie from the menu.
They have a different set of incentives. If I were them I would have done the same thing, Altman is going to make them all fucking rich. Not sure if that will benefit humanity though.
I think this outcome was actually much more favorable to D'Angelo's faction than people realize. The truth is before this Sam was basically running circles around the board and doing whatever he wanted on the profit side- that's what was pissing them off so much in the first place. He was even trying to depose board members who were openly critical of open AI's practices.
From here on out there is going to be far more media scrutiny on who gets picked as a board member, where they stand on the company's policies, and just how independent they really are. Sam, Greg and even Ilya are off the board altogether. Whoever they can all agree on to fill the remaining seats, Sam is going to have to be a lot more subservient to them to keep the peace.
Doesn't make sense that after such a broad board capitulation the next one will have any power, and media scrutiny isn't a powerful governance mechanism
When you consider they were acting under the threat of the entire company walking out and the threat of endless lawsuits, this is a remarkably mild capitulation. All the new board members are going to be chosen by D'Angelo and two new board members that he also had a big hand in choosing.
And say what you want about Larry Summers, but he's not going to be either Sam's or even Microsoft's bitch.
What I'd want to say about Larry is that he is definitely not going to care about the whole-society non-profit shtick of the company to any degree comparable with the previous board members, so he won't constraint Sam/MS in any way
Why? As an economist, he perfectly understands what is a public good, why there is a market failure to underproduce a public good under free market, and role of nonprofit in public good production.
Larry Summers has a track record of not believing in market failures, just market opportunities for private interests. Economists vary vastly in their belief systems, and economics is more politics than science, no matter how much math they try to use to distract from this.
I wonder what is the rationale for picking a seasoned politician and economist (influenced deregulation of US finance system, was friends with Epstein, had a few controversies listed there). Has the government also entered the chat so obviously?
They had congressman Will Hurd on the board before. Govt-adjacent people on non-profits are common for many reasons - understanding regulatory requirements, access to people, but also actual "good" reasons like the fact that many people who work close to the state genuinely have good intentions on social good (whether you agree with their interpretation of it or not)
On what premise you assume that D'Angelo will have any say there? At this point he won't be able to do any moves - especially with Larry and Microsoft overseeing all that stuff.
Again, D'Angelo chose Larry Summers and Bret Taylor to sit on the board with him himself. As long as it is the three of them, he can't be overruled unless both of his personal picks disagree with him. And if the opposition to his idea is all that bad, he probably really should be overruled.
His voting power will get diluted as they add the next six members, but again, all three of them are going to decide who the next members are going to be.
A snippet from the recent Bloomberg article:
>A person close to the negotiations said that several women were suggested as possible interim directors, but parties couldn’t come to a consensus. Both Laurene Powell Jobs, the billionaire philanthropist and widow of Steve Jobs, and former Yahoo CEO Marissa Mayer were floated, *but deemed to be too close to Altman*, this person said.
Say what else you want about it, this is not going to be a board automatically stacked in Altman's favor.
> Sam, Greg and even Ilya are off the board altogether. Whoever they can all agree on to fill the remaining seats, Sam is going to have to be a lot more subservient to them to keep the peace.
The existing board is just a seat-warming body until Altman and Microsoft can stack it with favorables to their (and the U.S. Government’s) interests. The naïveté from the NPO faction was believing they’d be able to develop these capacities outside the strict control of the military industrial complex when AI has been established as part of the new Cold War with China.
>The existing board is just a seat-warming body until Altman and Microsoft can stack it with favorables to their (and the U.S. Government’s) interests.
That's incorrect. The new members will be chosen by D'Angelo and the two new independent board members. Both of which D'Angelo had a big hand in choosing.
I'm not saying Larry Summers etc going to be in D'Angelo's pocket. But the whole reason he agreed to those picks is because he knows they won't be in Sam's pocket, either. More likely they will act independently and choose future members that they sincerely believe will be the best picks for the nonprofit.
According to this tweet thread[1], they negotiated hard for Sam to be off the board and Adam to stay on. That indicates, at least if we're being optimistic, that the current board is not in Sam's pocket (otherwise they wouldn't have bothered)
They can't take actions to take back the back control from Microsoft and Sam because Sam is the CEO. Even if Sam is of the utmost morality, he would be crazy to help them back into a strong position after last week.
So it's the Sam & Microsoft show now, only a master schemer can get back some power to the board.
Yeah, that's my take. Doesn't really matter if the composition of the board is to Adam's liking and has a couple more heavy hitters if Sam is untouchable and Microsoft is signalling that any time OpenAI acts against its interests they will take steps to ensure it ceases to have any staff or funding.
I’m sorry, but that’s all kayfabe. If there is one thing that’s been demonstrated in this whole fiasco, it’s who really has all the power at OpenAI (and it’s not the board).
> The truth is before this Sam was basically running circles around the board and doing whatever he wanted on the profit side- that's what was pissing them off so much in the first place. He was even trying to depose board members who were openly critical of open AI's practices.
Getting his way: The Wall Street Journal article. They said he usually got his way, but that he was so skillful at it that they were hard-pressed to explain exactly how he managed to pull it off.
Media >= employees? Media >= Sam? I don't think media has any role on oversight or governance.
I think Sam came out the winner. He gets to pick his board. He gets to narrow his employees. If anything, this sets him up for dictatorship. The only other overseers are the investors. In that case, Microsoft came out holding a leash. No MS, means no Sam, which also means employees have no say.
So it is more like MS > Sam > employees. MS+Sam > rest of investors.
> Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.
If the "other side" (board) had put up a SINGLE convincing argument on why Sam had to go maybe the employees would have not supported Sam unequivocally.
But, atleast as an outsider, we heard nothing that suggests board had reasons to remove Sam other than "the vibes were off"
Can you really accuse the employees of groupthink when the other side is so weak?
OpenAI is a private company and not obligated nor is it generally advised for them to comment publicly on why people are fired. I know that having a public explanation would be useful for the plot development of everyone’s favorite little soap opera, but it makes pretty much zero sense and doesn’t lend credence to any position whatsoever.
> OpenAI is a private company and not obligated nor is it generally advised for them to comment publicly on why people are fired.
The interim CEO said the board couldn’t even tell him why the old CEO was fired.
Microsoft said the board couldn’t even tell them why the old CEO was fired.
The employees said the board couldn’t explain why the CEO was fired.
When nobody can even begin to understand the board’s actions and they can’t even explain themselves, it’s a recipe for losing confidence. And that’s exactly what happened, from investors to employees.
I’m specifically taking issue with this common meme that the public is owed some sort of explanation. I agree the employees (and obviously the incoming CEO) would be.
And there’s a difference between, “an explanation would help their credibility” versus “a lack of explanation means they don’t have a good reason.”
Taking decisions in a way that seems opaque and arbitrary will not bring much support from employees, partners and investors. They did not fire a random employee. Not disclosing relevant information for such a key decision was proven, once again, to be a disaster.
This is not about soap opera, this is about business and a big part is based on trust.
Since barely any information was made publicly we have to assume the employees had better information that the public. So how can we say they lacked critical thinking when we don't have access to the information they have?
I didn’t claim employees were engaged in groupthink. I’m taking issue with the claim that because there is no public explanation, there must not be a good explanation.
Yes, the original letter had (for an official letter) quite some serious allegations, insinuations. If after a week, they decided not to back up their claims, I'm not sure there is anything big coming.
On the other hand, if they had some serious concerns, serious enough to fire the CEO in such a disgraceful way, I don't understand why they don't stick to their guns, and explain themselves. If you think OpenAI under Sam's leadership is going to destroy humanity, I don't understand how they (e.g. Ilya) reverted their opinions after a day or two.
It's possible the big, chaotic blowup forced some conversations that were easier to avoid in the normal day-to-day, and those conversations led to some vital resolution of concerns.
I agree with both the commenter above you and you.
Yes, you are right that the board had weak sauce reasoning for the firing (giving two teams the same project!?!).
That said, the other commenter is right that this is the beginning of the end.
One of the interesting things over the past few years watching the development of AI has been that in parallel to the demonstration of the limitations of neural networks has been many demonstrations of the limitations of human thinking and psychology.
Altman just got given a blank check and crowned as king of OpenAI. And whatever opposition he faced internally just lost all its footing.
That's a terrible recipe for long term success.
Whatever the reasons for the firing, this outcome is going to completely screw their long term prospects, as no matter how wonderful a leader someone is, losing the reality check of empowered opposition results in terrible decisions being made unchecked.
He's going to double down on chat interfaces because that's been their unexpected bread and butter up until the point they get lapped by companies with broader product vision, and whatever elements at OpenAI shared that broader vision are going to get steamrolled now that he's been given an unconditional green light until they jump ship over the next 18 months to work elsewhere.
Not necessarily! Facebook has done great with its unfireable CEO. The FB board would certainly have fired him several times over by now if it could, and yet they'd have been wrong every time. And the Google cofounders would certainly have been kicked out of their own company if the board had been able to.
My guess is that the arguments are something along the lines of “OpenAIs current products are already causing harm or on the path to do so” or something similar damaging to the products. Something they are afraid of both having continue to move forward on and to having to communicate as it would damage the brand. Like “We already have reports of several hundred people killing themselves because of ChatGPT responses…” and everyone would say, “Oh that makes… wait what??”
This meme was already dead before the recent events. Whatever the company was doing, you could say it wasn’t open enough.
> a real disruptor must be brewing somewhere unnoticed, for now
Why pretend OpenAI hasn’t just disrupted our way of life with GPTs in the last two years? It has been the most high profile tech innovator recently.
> OpenAI does not have in its DNA to win
This is so vague. What does it not have in its… fundamentals? And what is to “win”? This statement seems like just generic unhappiness without stating anything clearly. By most measures, they are winning. They have the best commercial LLM and continue to innovate, they have partnered with Microsoft heavily, and they have so far received very good funding.
They really need to drive down the amount of computation needed. The dependence on Microsoft is because of the monstrous computation requirements that will require many paid users to break even.
Leaving the economic side even to make the tech 'greener' will be a challenge. OpenAI will win if they focus on making the models less compute intensive but it could be dangerous for them if they can't.
I guess the OP's brewing disruptor is some locally runnable Llama type model that does 80% of what ChatGPT does at a fraction of the cost.
Is it really a failure of critical thinking? The employees know what position is popular, so even people who are mostly against the go-fast strategy can see that they get to work on this groundbreaking thing only if they toe the line.
It's also not surprising that people who are near the SV culture will think that AGI needs money to get developed, and that money in general is useful for the kind of business they are running. And that it's a business, not a charity.
I mean if OpenAI had been born in the Soviet Union or Scandinavia, maybe people would have somewhat different values, it's hard to know. But a thing that is founded by the posterboys for modern SV, it's gotta lean towards "money is mostly good".
Or medieval Spain? About as likely... The Soviets weren't even able to get the factory floors clean enough to consistently manufacture the 8086 10 years after it was already outdated.
> maybe people would have somewhat different values, it's hard to know. But a thing that is founded by the posterboys for modern SV, it's gotta lean towards "money is mostly good".
Unfortunately not other system besides capitalism has enabled consistent technological progress for 200+ years. Turns out you need to pool money and resources to achieve things ..
I do not see an overwhelming groupthink. I see a perfectly rational (and not in any way evil) reaction to a complete mess created by the board.
Most are doing the work they love and four people almost destroy it and cannot even explain why they did it. If I were working at the company that did this I would sign, too. And follow through on the threat of leaving if it comes to that.
It wasn't necessarily groupthink - there was profound pressure from team Sam to sign that petition. What's going to happen to your career when you were one of the 200 who held out initially?
> What's going to happen to your career when you were one of the 200 who held out initially?
Anthropic formed from people who split from OpenAI, and xAI in response to either the company or ChatGPT, so people would have plenty of options.
If the staff had as little to go on as the rest of us, then the board did something that looked wild and unpredictable, which is an acute employment threat all by itself.
People underestimate the effects of social pressure, and losing social connections. Ilya voted for Sam's firing, but was quickly socially isolated as a result
That's not to say people didn't genuinely feel committed to Sam or his leadership. Just that they also took into account that the community is relatively small and people remember you and your actions
There weren’t 200 holdouts. It was like 5 AM over there. I don’t know why you are surprised that people who work at OpenAI would want to work at OpenAI, esp over Microsoft?
Folding for pressure and group think is different things imo. You can be very aware you are folding for pressure, but doing it because it's the right/easy thing to do. While group think is more a phenomenon you are not aware of at all.
They can just work somewhere else with relative ease. Some OpenAI employees on Twitter said they were being bombarded by recruiters throughout until tonight's resolution. People have left OpenAI before and they are doing just fine.
A lot of this comes down to processing power though. That's why Microsoft had so much leverage with both factions in this fight. It actually gives them a pretty good moat above and beyond their head start. There aren't too many companies with the hardware to compete, let alone talent.
Agreed. Perhaps a reason for public AI [1], which advocates for a publicly funded option where a player like MSFT can't push around something like OpenAI so forcefully.
The employees of a tech company banded together to get what they wanted, force a leadership change, evict the leaders they disagreed with, secure the return of the leadership they wanted, and restored the value of their hard-earned equity.
This certainly isn’t a disappointing outcome for the employees! I thought HN would be ecstatic about tech employees banding together to force action in their favor, but the comments here are surprisingly negative.
The board never gave a believable explanation to justify firing Altman. So the staff simply made the sensible choice of following Altman. This isn't about critical thinking because there was nothing to think about.
Regardless of whether you feel like Altman was rushing OpenAI too fast, wasn’t open enough, and was being too commercial, the last few days demonstrated conclusively that the board is erratic and unstable and unfit to manage OpenAI.
Their actions was the complete opposite of open. Rather than, I don’t know, being open and talking to the CEO to share concerns and change the company, they just threw a tantrum and fired him.
They fired him (you don’t know the backstory) and published a press release and then Sam was seen back in the offices. Prior to the reinstatement (today), there was nothing except HN hysteria and media conjecture that made the board look extremely unstable.
??? They fired him on friday with a statement knifing him in the back, un-fired him on tuesday, and now the board is resigning? How is that not erratic and unstable?
I found the board members own words to be quite erratic between Friday and today, such as Ilya saying he wished he didn't participate in the boards actions.
But he's acting as a board member firing the CEO because he arguably believes it's the right thing to do for the company. If he then changes his mind because the fired CEO continued a successful career then I'd say that decision was more on a personal level than for the wellbeing of the company.
His obligation as a member of the board is to safeguard AI, not OpenAI. That's why in the employee open letter they said, "the board said it'd be compliant with the mission to destroy the company." This is actually true.
It's absolutely believable that at first he thought the best way to safeguard AI was to get rid of the main advocate for profit-seeking at OpenAI, then when that person "fell upward" into a position where he'd have fewer constraints, to regret that decision.
> Apple has no by-laws committing itself to being an apple.
Does OpenAI have by-laws committing itself to being "open" (as in open source or at least their products freely and universally available)? I thought their goals were the complete opposite of that?
Unfortunately, in reality Facebook/Meta seems to be more open than "Open"AI.
This is spot on. Open was the wrong word to choose for their name, and in the technology space means nearly the opposite of the charter's intention. BeneficialAI would have been more "aligned" with their claimed mission. They have made their position quite clear - the creation of an AGI that is safe and benefits all humanity requires a closed process that limits who can have access to it. I understand their theoretical concerns, but the desire for a "benevolent dictator" goes back to at least Plato and always ends in tears.
> In reality, we can and should be outraged when corporations betray their own statements and supposed values.
There are only three groups of people who could be subject to betrayal here: employees, investors, and customers. Clearly they did not betray employees or investors, since they largely sided with Sam. As for customers, that's harder to gauge -- did people sign up for ChatGPT with the explicit expectation that the research would be "open"?
The founding charter said one thing, but the majority of the company and investors went in a different direction. That's not a betrayal, but a pivot.
I think there’s an additional group to consider- society at large.
To an extent the promise of the non- profit was that they would be safe, expert custodians of AI development driven not primarily by the profit motive, but also by safety and societal considerations. Has this larger group been ‘betrayed’? Perhaps
Also donors. They received a ton of donations when they were a pure non-profit from people that got no board seat, no equities, with the believe that they will stick to their mission.
> There are only three groups of people who could be subject to betrayal here
GP didn't speak of betraying people; he spoke of betraying their own statements. That just means doing what you said you wouldn't; it doesn't mean anyone was stabbed in the back.
It does seem that the hypocrisy was baked in from the beginning. In the tech world 'open' implied open source but OpenAI wanted to benefit from a marketing itself as something like Linux when internally it was something like Microsoft.
Corporations have no values whatsoever and their statements only mean anything when expressed in terms of a legally binding contract. All corporate value statements should be viewed as nothing more than the kind of self-serving statements that an amoral narcissitic sociopath would make to protect their own interests.
did the "Open" in OpenAI not originally refer to open in the academic or open source manner? i only learned about OpenAI in the GPT-2 days, when they released it openly and it was still small enough that i ran it on my laptop: i just assumed they had always acted according to their literal name up through that point.
This has been a common misinterpretation since very early in OpenAI's history (and a somewhat convenient one for OpenAI).
From a 2016 New Yorker article:
> Dario Amodei said, "[People in the field] are saying that the goal of OpenAI is to build a friendly A.I. and then release its source code into the world.”
> “We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”
I'm not sure this is a correct characterization. Lex Fridman interviewed Elon Musk recently where Musk says that the "open" was supposed to stand for "open source".
To be fair, Fridman grilled Musk on his views today, also in the context of xAI, and he was less clear cut there, talking about the problem that there's actually very little source code, it's mostly about the data.
Altman appears to be in the driving seat, so it doesn't matter what other people are saying, the point is "Open" is not being used here to the open source context _but_ they definitely dont try to correct anyone who thinks they're providing open source products.
Very disappointing outcome indeed. Larry Summers is the Architect of the modern Russian Oligarchy[1] and responsible for an incredible amount of human suffering as well as gross financial disparity both in the USA as well as the rest of the world.
Not someone I would like to see running the world’s leading AI company
Outcome? You mean OpenAI wakes up with no memories of the night before, finding their suite trashed, a tiger in the bathroom, a baby in the closet, and the groom missing and the story will end here?
I just renewed by HN subscription to be able to see Season 2!
Which critical thinking could they exercise if no believable reasons were given for this whole mess? Maybe it's you who need to more carefully assess this situation.
in the end, maybe Sam was the instigator, the board tried to defend (and failed) and what we just witnessed from afar was just a power play to change the structure of OpenAI (or at least the outcome for Sam and many others) towards profit rather than non-profit.
we'll all likely never know what truly happened, but it's a shame that the board has lost their last remnant of some diversity and at the moment appears to be composed of rich Western white males... even if they rushed for profit, I'd have more faith in the potential upside what could be a sea change in the World, if those involved reflected more experiences than are currently gathered at that table.
I find the outcome very satisfying. The OpenAI API is here to stay and grow, and I can build software on top of it. Hopefully other players will open up their APIs soon as well, so that there is a reasonable choice.
Not a given that it is here to stay and grow after the company showed itself in such a chaotic state. Also, they need a profitable product - it is not like they are selling Iphones and such..
I think what this saga has shown is that no one controls OpenAI definitively. Is Microsoft did this wouldn’t have happened in the first place don’t you think?
While I certainly agree that OpenAI isn't open and is effectively controlled by Microsoft, I'm not following the "groupthink" claims based on what just happened. If I'd been given the very fishy and vague reasons that it sounds like their staff were given, I think any rational person would be highly suspicious of the board, especially since some believe in fringe ideas, have COIs, or can be perceived as being jealous that they aren't the "face" of OpenAI.
I have been working for various software companies at different capacities. Never did i see 90%+ employees care about their CEO . In a small 10 member startup maybe its true. Are there any OpenAI employees here to confirm that .. their CEO really matters ... I mean how many employee revolted when Steve Jobs was fired .. Do Microsoft and Google employees really care ?
Investors and executives.. everyone in 2023 is hyper focused on "Thiel Monopoly."
Platform, moat, aggregation theory, network effects, first mover advantages.. all those ways of thinking about it.
There's no point in being bing to Google's AdWords... So the big question is pathway to being the adWords. "Winning." That's the paradigm. This is where big returns will be.
However.. we should always remember, but the future is harder to see from the past. Post fact analysis, can often make things seem a lot simpler and more inevitable than they ever were.
It's not clear what a winner even is here. What are the bottlenecks to be controlled. What are the business models, revenue sources. What represents the "LLM Google," America online, Yahoo or a 90s dumb pipe.
FYIW I think all the big text have powerful plays available.. including keeping powder dry.
No doubt that proximity to openAI, control, influence, access to IP.. all strategic assets. That's why they're all invested an involved in the consortium.
That said assets or not strategies. It's hard to have strategies when strategic goals are unclear.
You can nominate a strategic goal from here, try to stay upstream, make exploratory investments and bets... There is no rush for the prize, unless the price is known.
Obviously, I'm assuming the prixe is not AGI and a solution to everything... That kind of abstraction is useful, but I do not think it's operative.
It's not a race currently, to see who's R&D lab turns on the first super intelligent consciousness.
Assuming I'm correct on that, we really have no idea which applications LLM capabilities companies are actually competing for.
> Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.
I'm sure has been a lot of critical thinking going on. I would venture a guess that employees decided that Sam's approach is much more favorable for the price of their options than the original mission of the non-profit entity.
>Disappointing outcome. The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft. Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.
Why was his role as a CEO even challenged?
>It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have in its DNA to win, they're too short-sighted and reactive. Big techs will have incredible distribution power but a real disruptor must be brewing somewhere unnoticed, for now.
Always remember; Google wasn't the first search engine nor iPhone the first smartphone. First-movers bring innovation and trend not market dominance.
> the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either
I suspect incentives play a huge role here. OAI employees are compensated with stock in the for-profit arm of the company. It's obvious that the board's actions put the value of that stock in extreme jeopardy (which, given the corporate structure, is theoretically completely fine! the whole point of the corporate structure is that the nonprofit board has the power to say "yikes, we've developed an unsafe superintelligence, burn down the building and destroy the company now").
I think it's natural for employees to be extremely angry with a board decision that probably cost them >$1M each.
All this just tells for the 100th time that this area desperately needs some regulation. I don't know the form, but even if we have 1% of skynet, heck even 0.01% its simply too high and we still have full control.
We see most powerful people are in it for the money and power ego trip, and literally nothing else. Pesky morals be damned. Which may be acceptable for some ad business but here stakes are potentially everything and we have no clue what actual % the risk is.
Its to me very similar to all naivety particle scientists expressed in its early days and then reality check of realpolitik and messed up humans in power when bombs were done, used and then hundred thousand more were produced.
The board couldn't even clearly articulate why they fired Sam in the first place. There was a departure from critical thinking but I don't think it was on the part of the employees.
OpenAI is more open than my company’s AI teams, and that is even from my own insider relationship. As far as commercial relationships are concerned, I’d say they’re hitting the mark.
For me, the whole thing is just human struggle. It is about fighting for people they love and care, against some people they dislike or indifferent to.
Nah, I too will threaten to sign a petition to quit if I could save my RSUs/PPUs from evaporating. Organizational goals be damned (or is it extinction level risk be damned?)
In this case the fate of OpenAI was in fact heavily controlled by its employees. They voted with their employment. Microsoft gave them an assured optional destination.
> The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft.
I'd say the lack of a narrative from the board, general incompetence with how it was handled, the employees quitting and the employee letter played their parts too.
But even if it was Microsoft who made this happen: that's what happens when you have a major investor. If you don't want their influence, don't take their money.
So you didn’t realize that when Microsoft both gained a 49% interest and was subsidizing compute?
Unless they had something in their “DNA” that allowed them to build enough compute and pay their employees, they were never going to “win” without a mass infusion of cash and only three companies had enough compute and revenue to throw at them and only two companies had relationships with big enterprise and compute - Amazon and Microsoft.
Whatever OpenAI started as, a week ago it was a company with the best general purpose LLM, more on the way, and consumer+business products with millions of users. And they were still investing very heavily in research. I'm glad that company may survive. If there's room in the world for a more disruptive research focused AI company that can find sustainable funding, even better.
What could disrupt OpenAI is a dramatic change in market, perhaps enabled by a change in technology. But if it's the same customers in the same market, they will buy or duplicate any tech advance; and if it's a sufficiently similar market, they will pivot.
I don't consider this confirmed. Microsoft brought an enormous amount of money and other power to the table, and their role was certainly big, but it is far from clear to me that they held all or most of the power that was wielded.
I wonder if beyond the groupthinking we are seeing at least a more heterogeneous composition: a mix of people that includes business, pure research, engineering, and kind of spirituality-semireligion around [G]AI.
Any good summary of the OpenAI imbroglio? I know it has a strange corporation, with part non profit and part for profit. I don't follow it closely but would like a quick read explaining.
How can you without access to the information that actual employees had of the situation say "there's clearly little critical thinking amongst OpenAI's employees"?
It is a shame that we lost the ability to hold such companies to account (for now). But given the range of possibilities laid out before us, this is the better outcome. GPT-4 has increased my knowledge, my confidence, and my pleasure in learning and hacking. And perhaps it's relatives will fuel a revolution.
Reminds me of a quote: "A civilization is a heritage of beliefs, customs, and knowledge slowly accumulated in the course of centuries, elements difficult at times to justify by logic, but justifying themselves as paths when they lead somewhere, since they open up for man his inner distance." - Antoine de Saint-Exupery.
So the type of employee that would get hired at OpenAi isn't likely to be skilled at critical thinking? That's doubtful. It looks to me like you dislike how things played out, gathered together some mean adjectives and "groupthink", and ended with a pessimistic prediction for their trajectry as punishment. One is left to wonder what OAI's disruptor outlook would be if the outcome of the current situation had been more pleasing.
Ultimately, the openness that we all wish for must come from _underlying_ data. The know-how and “secret sauce” were never going to be open. And it’s not as profound as we think it is inside that black box.
So who holds all the data in closed silos? Google and Facebook. We may have already lost the battle on achieving “open and fair” AI paradigm long time ago.
Microsoft played almost no role in the process except to be a place for Sam and team to land.
What the process did shoe is if you plan to oust a popular CEO with a thriving company, you should actually have a good reason for it. It’s amazing how little thought seemingly went into it for them.
Any other outcome would have split OpenAI quite dramatically and put them back massively.
Big assumption to say 'effectively controlled by Microsoft' when Microsoft might have been quite happy for the other option and for them to poach a lot of staff.
I think Microsoft's deep pockets, computing resources, their head start, and 50%+ employees not quitting is more important to the company's chances at success than your assessment they have the "wrong DNA."
The idea that the marketplace is a meritocracy of some kind where whatever an individual deems as "merit" wins is just proven to be nonsense time and time again.
right . why don't you creat a chatgpt like innovation or even AGI and do things your way? So many people just know how to complain on what other people build and forget that no one is stopping you from innovating the way you like it.
You would expect the company that owns 49% of the shares to have some input in firing the CEO, why is that disappointing? If they had more control this shitshow would never have happened.
That's the curse of specialisation. You can be really smart in one area and completely unaware in others. This industry is full of people with deep technical knowledge but little in the way of social skills.
Exactly this. Specialization is indeed a curse. We have seen it in lots of these folks especially engineers that flaunt their technical prowess but are extremely deficient in social skills and other basic soft skills or even understanding governance.
Engineer working at "INSERT BIG TECH COMPANY" is no guarantee or insight about critical thinking at another one. The control and power over OpenAI was always at Microsoft regardless of board seats and access. Sam was just a lieutenant of an AI division and the engineers were just following the money like a carrot on a stick.
Of course, the engineers don't care about power dynamics until their paper options are at risk. Then it becomes highly psychological and emotional for them and they feel powerless and can only follow the leader to safety.
The BOD (Board of Directors) with Adam D'Angelo (the one who likely instigated this) has shown to have taken unprecedented steps to remove board members and fire the CEO for very illogical and vague reasons. They already made their mark and the damage is already done.
Lets see if these engineers that signed up to this will learn from this theatrical lesson of how not to do governance and run an entire company into the ground with unspecified reasons.
Agreed, take Hacker News for example. 99% of the articles are in a domain I don't have years of professional experience.
However, when that one article does come up, and I know the details inside/out , the comments sections are rife with bad assumptions, naïve comments and misinformation.
> Furthermore, the overwhelming groupthink shows there’s clearly little critical thinking amongst OpenAI’s employees either.
Very harsh words for some of the highest paid smartest people on the planet. The employees built GPT-4 the most advanced AI on the planet, what did you build? Do you still claim they’re more deficient in critical thinking compared to you.
There is no comparison to himself in the previous comment. Also, did you measure their IQ to put them on such a pedestal? There are lots of examples for people being great in their niche they invested thousands of hours in, while being total failures in other areas. You could see that with Mr. Sutskever over the weekend. He must be excellent in ML as he dedicated his life to researching this field of knowledge, but he lacks practice in critical thinking in management contexts.
I think the choice they had to make was: either building one of the top AI on earth under total control of OpenAI investors (and most likely the project of their life) either do nothing.
> We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
> We are collaborating to figure out the details. Thank you so much for your patience through this.
1- So what was the point of this whole drama, and why couldn't you have settled like this adults?
2- Now what happens to Microsoft's role in all of this?
3- Twitter is still the best place to follow this and get updates, everyone is still make "official" statements on twitter, not sure how long this website will last but until then, this is the only portal for me to get news.
> So what was the point of this whole drama, and why couldn't you have settled like this adults?
Altman was trying to remove one of the board members before he was forced out. Looks like he got his way in the end, but I'm going to call Altman the primary instigator because of that.
His side was also the "we'll nuke the company unless you resign" side.
His side was also "700 regular employees support this", which is pretty unusual as most people don't care about their CEO at all. I am not related to OpenAI at all, but given the choice of "favorite of all employees" vs "fire people with no warning then refuse to give explanation why even under pressure" I know which side I root for.
Looking back, Altman's ace in hand was the tender offer from Thrive. Idk anyone at OpenAI, but all the early senior personnel backed him with vehemence. If the leaders hand't championed him strongly, I doubt you get 90% of the company to commit to leaving.
I'm sure some of those employees were easily going to make $10m+ in the sale. That's a pretty great motivation tool.
Overall, I do agree with you. The board could not justify their capricious decision making and refused to elaborate. They should've brought him back on Sunday instead of mucking around. OpenAI existing is a good thing.
No idea what these 700 employees were thinking. They probably had little knowledge of what truly went down other than “my CEO was fired unfairly” and rushed to the rescue.
I think the board should have been more transparent on why they made the decision to fire Sam.
Or perhaps these employees only cared about their AI work and money? The foundation would be perceived as the culprit against them.
Really sad there’s no clarity from the old board disclosed. Hope one day we will know.
I wonder how much more transparent they can really be. I know that when firing a "regular" employee, you basically never tell everyone all the details for legal CYA reasons. When your firing someone worth half a billion dollars, I expect the legal fears are magnified.
But that's the difference, the CEO is not a regular employee. If a board of directors wants to be trusted and taken seriously it can't just fire the CEO and say "I'm sorry we can't say why, that's private information".
That is one HUGE grain of salt considering 1/ it's Blind 2/ Even in the same thread there is another poster saying the exact opposite thing (i.e. no peer pressure)
Also, all the stuff they started doing with the hearts and cryptic messages on Twitter (now X) was a bit ... cult-y?. I wouldn't doubt there was a lot of manipulation behind all that, even from @sama itself.
So, there is goes, it seems that there's a big chance now that the first AGI will land on the hands of a group with the antics of teenagers. Interesting timeline.
The 700 employees also have significant financial incentive to want Altman to stay. If he moved to a competitor all the shine would follow. They want the pay-day (I don't blame them), but take with a grain of salt what the employees want in this case.
Microsoft's role remains same as it was on Thursday. Minor (49%?) shareholder and keeps access to models and IP
IMO Kevin tweeting that MS will hire and match comp of all OpenAI employees was amazing negotiation tactic because that meant employees could sign the petition without worrying about their jobs/visas
I was thinking about this a lot as well, but what did that mean for employee stock in the commercial entity? I heard they were up for a liquid cash-out in the next funding round.
OpenAI is an airgapped test lab for Microsoft. They dont want critical exposure to the downside risk of AI research, just the benefits in terms of IP. Sam and Greg probably offer enough stability for them to continue this way.
It makes sense to airgap Generative AI while courts ponder wether copyright fair use applies or not. Research is clearly allowed fair use, and let OpenAI experiment with commercialization until it is all clear waters.
What is the benefit of learning about this kind of drama minute-by-minute, compared to reading it a few hours later on hacker news or next day on wall street journal?
Personally I found twitter very bad for my productivity, a lot of focus destroyed just to know "what is happening" when there was neglible drawbacks of finding about news events a few hours later.
Satya just played the hand he had. The hand he had was excellent, he had already won. MS already had perceptual license, people working on GPT and Sam Altman on his corner.
The one thing in Microsoft has stayed constant from Gates to Ballmer to Satya: you should never, ever form a close alliance with MS. They know how to screw alliance partners. i4i, Windows RT partners, Windows Phone Partners, Nokia, HW partners in Surface. Even Steve Jobs was burned few times.
Satya comes out as evil imho, and I wonder how much orchestration there was going on behind the scenes.
Microsoft is showing that it is still able to capture important scale ups and 'embrace' them, whilst also acting as if they have the moral high ground, but in reality are doing research with a high governance errors and potential legal problems away from their premises. and THAT is why stakeholders like him.
The explanation for point 1 is point 3. If the people involved were not terminally online and felt the need to share every single one of their immediate thoughts with the public they could have likely settled this behind closed doors, where this kind of stuff belongs.
It's not actually news, it's entertainment and self-aggrandizement by everyone involved including the audience.
Considering CEO2 rebelled next day and CEO3 allegedly said he'll quit unless board comes out with truth, doesn't provide much confidence in their adulthood.
The board not saying what the hell they were on about was the source of the whole drama in the first place. If they had just said exactly what their problem was up front there wouldn't have been as much to tweet about.
> Twitter is still the best place to follow this and get updates
This has been my single strongest takeaway from this saga: Twitter remains the centre of controversy. When shit hit the fan, Sam and Satya and Swisher took to Twitter. Not Threads. Not Bluesy. Twitter. (X.)
Bluesky still has gated signups at this point so I don't think it will ever be a viable alternative.
Threads had a rushed rollout which resulted in major feature gaps that disincentivized users from doing anything beyond creating their profiles.
Notable figures and organizations have little reason to fully migrate off Twitter unless Musk irreversibly breaks the site and even he is not stupid enough to do that (yet?). So with most of its content creators still in place, Twitter has no risk of following the path of Digg.
> Twitter is still the best place to follow this and get updates, everyone is still make "official" statements on twitter, not sure how long this website will last but until then, this is the only portal for me to get news.
It's only natural to confuse what is happening with what we wish to happen. After all, when we imagine something, aren't we undergoing a kind of experience?
A lot of people wish Twitter were dying, even though it's it, so they interpret evidence through a lens of belief confirmation rather than belief disproof. It's only human to do this. We all do.
It was funny reading Kara Swisher keeping saying twitter is dying and is toxic and what not, while STILL doing all her first announcements on twitter, and using twitter as a source.
same with Ashlee Vance (the other journo reporting on this) and all the main players (Sam/Greg/Ilya/Mira/Satya/whoever) also make their first announcement on twitter.
I don't know about the funding part of it, but there is no denying it, the news is still freshest on twitter. Twitter feels just as toxic for me as before, in fact I feel community notes has made it much better, imho.
____
In some related news, I finally got bluesky invite (I don't have invite codes yet or I would share here)
and people there are complaining about... mastadon and how elitist it is...
that was an eye opener.
nice if you want some science-y updates but it's still lags behind twitter for news.
> A lot of people wish Twitter were dying, even though it's it, so they interpret evidence through a lens of belief confirmation rather than belief disproof.
If there’s been one constant here, it’s been people who actually know Tonrer expressing deep support for her experience, intelligence, and ethics, so it’s interesting to me that she seems to be getting the boot.
If there is one clear thing, it's that no one on that board should be allowed anywhere near another board for any non-clown company. The level of incompetence in how they handled this whole thing was extraordinary.
The fact that Adam D'Angelo is still on the new board apparently is much more baffling than the fact that Tonrer or Ilya are not.
Add delusions of grandeur to that list thinking she can pursue her ideological will by winning over 3 board members while losing 90% of the company staff.
She was fighting an idelogical battle that needs full industry buy in, legitimate or not that's not how you win people over.
If she's truely a rationalist as she claims then a rationalist would be realistic understanding that if your engineers can just leave and do it somewhere else tomorrow you aren't making progress. Taking on the full might of US capitalism via winning over the fringe half of a non profit board is not the best strategy. At best it was desperate and naive.
This is pretty good evidence she's a rationalist; rationalism means a religious devotion to a specific kind of logical thinking that never works in real life because you can't calculate the probability a result if you didn't know it could happen in the first place.
Traditional response to this happening is to say something about your "priors" being wrong instead of taking responsibility.
Excellent news. I’ve been worried that Sam moving to Microsoft would stall out possible future engineering efforts like GPT-5 in IP court.
As an example of how much faster GPT-4 has made my workflow was the outage this evening — I tried Anthropic, openchat, Bard, and a few others and they were between not useful and worse than just looking at forums and discord it’s 2022.
GPT-5 is kinda pointless until they make some type of improvement on the data and research side. From what I’ve read it’s not really what OpenAI has been pursuing it
One big improvement is in synthetic data (data generated by LLMs).
GPT can "clone" the "semantic essence" of everyone who converses with it, generating new questions with prompts like "What interesting questions could this user also have asked, but didn't?" and then have an LLM answer it. This generates high-quality, novel, human-like, data.
For instance, cloning Paul Graham's essence, the LLM came up with "SubSimplify": A service that combines subscriptions to all the different streaming services into one customizable package, using a chat agent as a recommendation engine.
Are you just blindly deciding what will make “gpt-5” more capable? I guess “data and research” is practically so open ended as to encompass the majority of any possible advancement.
“And that moment was the final nail in the coffin of humankind from earth. They choose, yet again, for money and power. And they shaped AI in their image.
Another civilization perished in the great filter.”
The only way "we develop an actual, fully functional AGI" is by dumbing down humans enough so that even something as stupid as ChatGPT seems intelligent.
(Fortunately we are working on this very hard and making incredible progress.)
Good thing there's absolutely no plausible scenario where we go from "shitty program that guesses the next word" to "AI". The whole industry is going to be so incredibly embarrassed by the discourse of 2023 in a few years.
Assuming they weren’t LARPing, that Reddit account claiming to have been in the room when this was all going down must be nervous. They wrote all kinds of nasty things about Sam, and I’m assuming the signatures on the “bring him back” letter would narrow down potential suspects considerably.
Links to the profile/comments were posted a few times in each of the major OpenAI HN submissions over the last 4 days. On the off-chance I would be breaking some kind of brigading/doxxing rule I didn't initially link it myself.
I seriously doubt they care. They got away with it. No one should have believed them in the first place. I’m guessing they don’t have their real identity visible on their profile anywhere.
Why can't these safety advocates just say what they are afraid of? As it currently stands, the only "danger" in ChatGPT is that you can manipulate it into writing something violent or inappropriate. So what? Is this some San Francisco sensibilities here, where reading about fictional violence is equated to violence? The more people raise safety concerns in the abstract, the more I ignore it.
I'm familiar with the potential risks of an out-of-control AGI. Can you summarise in one paragraph which of these risks concern you, or the safety advocates, in regards to a product like ChatGPT?
They invented a whole theory of how if we had something called "AGI" it would kill everyone, and now they think LLMs can kill everyone because they're calling it "AGI", even though it doesn't work anything like their theory assumed.
This isn't about political correctness. It's far less reasonable than that.
Based on the downvotes I am getting and the links posted in the other comment, I think you are absolutely right. People are acting as if ChatGPT is AGI, or very close to it, therefore we have to solve all these catastrophic scenarios now.
Consider that your argument could also be used to advocate for safety of starting to use coal-fired steam engines (in 19th century UK): there's no immediate direct problem, but competitive pressures force everyone to use them and any externalities stemming from that are basically unavoidable.
I read the comments, most of them are superficial as if someone with no inside knowledge will post. His understanding of humans is also weak. Book deals and speeches as a motivator is hilarious.
I know we’re supposed to optimize for “content with a contribution” in HN, but this captured in parody form more of a contribution of how I too have felt.
I use these tools as one of many tools to amplify my development. And I’ve written some funny/clever satirical poems about office politics. But really? I needed to call Verizon to clear up an issue today, it desperately wanted me to use their assistant. I tried for the grins. A tool that predictively generates plausibility is going to have its limits. It went from cute/amusing to annoying as hell and give me a “love agent” pretty quickly.
That this little TechBro Drama has dominated a huge amount of headlines (we’ve been running at least 3 of the top 30 posts at a time on HN here related to this subject) at a time when there is so much bigger things going on in the world. The demise of Twitter generated less headlines. Either the news cycles are getting more and more desperate, or the software development ecosystem is struggling to generate fund raising enthusiasm more and more.
The sane course of action for any healthy organization after last week would be to work actively on becoming more independent from Microsoft.
With Sam at the head, especially after Microsoft backing him, they will most likely do the opposite. Meaning a deeper integration with Microsoft.
If it wasn't already, OpenAI is now basically a Microsoft subsidiary. With the advantage for Microsoft of not being legally liable for any court cases.
>Microsoft owned 49% of the for-profit part of OpenAI.
>OpenAI's training, inference, and all other infrastructure were running entirely on Azure credits.
>Microsoft/Azure were the only ones offering OpenAI's models/APIs with a business-friendly SLA, uptime/stability, and the option to host them in Azure data centers outside the US.
Besides AI safety (a big besides), what does this actually mean? Adam won't be able to stop devday announcements about chatbots etc. Satya can continue using IP even after AGI? What else is different? Is Ilya the kind of guy to now leave after losing a board seat to political machinations? The pettiness of any real changes/gains leaves me in shock compared to the massive news flows we've seen.
I don't even understand what Sam brings to the table. Leadership? He doesn't seem great at leading an engineering or research department, he doesn't seem like an insightful visionary... At best, Satya gunning for him signalled continued strong investment in the space. Yet the majority of the company wanted to leave with him.
>He doesn't seem great at leading an engineering or research department
Under Sam's leadership they've opened up a new field of software. Most of the company threatened to leave if he didn't return. That's incredible leadership.
Larry Summers is an excellent pick to call out bullshit and moderate any civil war, such as this EA - e/acc feud.
Kissinger (R, foreign policy) once said that Summers (D, economic policy) should be given an advisory post in any WH administration, to help shoot down bad ideas.
Those are both terrible people, not in fact brilliant general-purpose bad idea rejectors. A random person would be better qualified to shoot down bad ideas - most people haven't had bad ideas that led to suffering and death for millions of people.
No one thinks Larry Summers has any insights on AI. Adding Larry Summers is something you do purely to beg powerful, unaccountable people "please don't stop us, we're on your side".
He did help shoot down the extra spending proposals that would have made inflation today even worse. Not sure how that caused suffering and death for anyone.
And he is an adult, which is a welcome change from the previous clowncar of a board.
Larry Summers practically personally caused both Russia's collapse into a mafia state and the 2008 US recession. Nobody should listen to him about anything.
Although, he's also partly responsible for the existence of Facebook by starting Sheryl Sandberg's career. Some people might think that's good.
His influence significantly reduced the size of the stimulus bill, which meant significantly higher unemployment for a longer duration and significantly less spending on infrastructure which is so beneficial to economic growth that it can't be understated. Yes, millions of people suffered because of him.
The fact that you think current inflation has anything to do with that stimulus bill back then shows how little you understand about any of this.
Larry Summers is the worst kind of person. Somebody who is nothing but a corporate stooge trying to act like the adult by being "reasonable", when that just means enriching his corporate friends, letting people suffer and not spending money (which any study will tell you is not the correct approach to situations like this because of multiplying effects they have down the line).
Huh, that's a pretty apt analogy. Lending establishment cred is at least part of why they would pick Summers. But I really do think that on such a small board, Summers, unlike Kissinger, may have an active role to play, even if only as a mediator.
Btw, I would not be pleased if Kissinger were on this board in lieu of Summers. He's already ancient, mostly checked out, and yet still I'd worry his old lust for power would resurface. And with such a mixed reputation, and plenty of people considering him a war criminal, he'd do little to assuage the AI-not-kill-everyone-ism faction.
Weird... Ilya decides one way then changes his mind. Helen and Tasha vote one way and had the votes to prevent any changes, but then for some reason agreed to leave the board. Adam votes one way then changes his mind. So many mysteries.
Ilya and Adam switched because they lost, and their goal wasn't to nuke OpenAI, simply to remove Sam. Helen and Tasha had the votes to prevent Sam Altman from returning as CEO, but not the votes to prevent the employees from fleeing to Microsoft, which Helen and Tasha see as the worst possible outcome.
There's some game theory going on... They're just trying to pick the winning side. I guess most people at OpenAI supported Sam, because they thought Sam would win at the end, although they wouldn't necessarily want him to win.
If the Sama faction got Ilya and Adam (maybe with promise of heading the new board), Helen and Tasha have nothing to stand on and no incentive to keep fighting.
"A source with direct knowledge of the negotiations says that the sole job of this initial board is to vet and appoint a new formal board of up to 9 people that will reset the governance of OpenAl.
Microsoft will likely have a seat on that expanded board, as will Altman himself."
Just following up, it's also totally Smeagol-like to make people sign up before they can get any useful answers at Quora. True Gollum move, D'gelo. Thanks for showin' yer true colors!
Elon was once "in possession" (influential investor and part of the board) of OpenAI, but it was since taken from him and he is evidently bitter about it.
How has the board shown that they fired Sam Altman due to "responsible governance".
They haven't really said anything about why it was, and according to business insider[0] (the only reporting that I've seen that says anything concrete) the reasons given were:
> One explanation was that Altman was said to have given two people at OpenAI the same project.
> The other was that Altman was said to have given two board members different opinions about a member of personnel.
Firing the CEO of a company and only being able to articulate two (in my opinion) weak examples of why, and causing >95% of your employees to say they will quit unless you resign does not seem responsible.
If they can articulate reasons why it was necessary, sure, but we haven't seen that yet.
Good lord: it’s a private company. As a general matter of course it’s inadvisable to comment on specifics of why someone is fired. The lack of a thing that pretty much never happens anyway (public comment) is just harmful to your soap opera, not to the potential legitimacy of the action.
According to reports they haven't told executives and employees inside the company. (I'm not arguing that they should speak publicly, though given the position the board put itself in I think hiring PR people for external crisis comms is very much warranted)
When 95% of your staff threatens to resign and says "you have made a mistake", that's when it's time to say "no, the very good reasons we did it are this". That didn't happen.
Its not a private company it is a non profit working in the public interest this usually requires some sort of public accountability. The board want to be a public good when they make decisions but want to be a private entity when those decisions are criticised by the public.
If Altman will be 1 of 9, that means he has power but not an exceptional amount.
The real teams here seem to be:
"Team Board That Does Whatever Altman Wants"
"Team Board Provides Independent Oversight"
With this much money on the table, independent oversight is difficult, but at least they're making the effort.
The idea this was immediately about AI safety vs go-fast (or Microsoft vs non-Microsoft control) is bullshit -- this was about how strong board oversight of Altman should be in the future.
This seems like a silly way of understanding deceleration. By this comparison the USSR was decelerating the cold war because they were a couple years behind in developing the hydrogen bomb.
Microsoft can and will be using GPT4 as soon as they get a handle on it, and if it doesn't boil their servers to do so. If you want deceleration you would need someone with an incentive that didn't involve, for example, being first to market with new flashy products.
This is my take too, and I'm sure in the shadows their plan is to close off the APIs as much as possible and try use it for their own gain, not dissimilar to how Google deploy AI.
There is no way MS is going to let something like ChatGPT-5 build better software products than what they have for sale.
This is an assassination and I think Ilya and Co know it.
It's not assassination. It's a Princess Bride Battle of Wits, that they initiated and put the poison into one of the chalices themselves, and then thought so highly of their intellect they ended up choosing and drinking the chalice that had the poison in it.
What product do you envision OpenAI selling would be better than Microsoft?
I emphasized product because OpenAI may have great technology. But any product they sell is going to require mass compute and a mass sales army to go into the “enterprise” and integrate with what the enterprise already has.
Guess who has both? Guess who has neither?
And even the “products” that OpenAI have now can only exist because of mass subsidies by Microsoft.
Right now, quota is very valuable and scarce, but credits are easy to come by. Also, Azure credits themselves are worth about $0.20 per dollar compared to the alternatives.
"You are a dim-witted kobold who prefers to hack-n-slash-slash-slash-n-burn over any sort of proper diplomatic negotiations or even strategic thinking; we would like you to consider next year's capital expenditures; what are your top three suggestions for improvements that could be made to the employee breakroom(s)?"
Well, if ye really want ol' me to put me noggin to it... I reckon ye could start with addin' a proper gaming corner! Ye know, some sturdy tables 'n' comfy chairs where the lads 'n' lasses can gather 'round for some good ol' dice chuckin' or card playin'. Next up, a big ol' fire pit! Not just any fire, mind ye, but one where we can roast our snacks 'n' share tales of our adventures. And lastly, a grand stash of provisions—plenty o' snacks 'n' drinks to keep the energy high for when we're plannin' our next raid or just takin' a breather. How's that for some improvements, eh?
Train it on meeting minutes and board charter various contracts they have, and use the voice compatibilitys of chatgpt as the input during the meeting the prompt is it is an ethical ai givingbinput to the board of open ai on the development of its next iteration.
If you know that putting four wheels on a car works better than putting three wheels on a car, that doesn't make you biased against three wheels. It makes you biased towards better results.
We know that "thought diversity" on a team, which can take many forms, has a short term drawback (team gelling doesn't go as fast) and long term advantages (more ideas, better ideas, better resilience, etc etc).
Is there any evidence that gender is a primary determinant of "thought diversity"? I'd expect other factors, including age, upbringing, ethnicity, etc. have much more of an impact on diversity. A woman and a man who grew up in the same suburbs, went to the same school, have studied the same, etc. probably have very similar ideas on most topics than two men (or women for that matter) who have completely different upbringing.
If thought diversity is what matters, a much better determinant is probably geographical distribution in upbringing and unique educational paths and unique previous employments (all of which can just as easily be estimated by a resume as gender).
Diversity is good, but diversity for diversity's sake is not good. I think teams should be made based on merit, and if then the team is also diverse, all the more better. Although important, imo making diversity the most important criteria seems a bit misguided and somewhat idealistic although on paper and in principle it seems to be coming from a good place.
That’s like saying adding a wheelchair lift to a building is ableist. By definition equity requires acknowledging and catering towards different demographics in different ways. I guess if you want to be pedantic, it’s discrimination, but in this case, that’s a good thing.
You could also argue that equity is a bad thing, but I wholeheartedly disagree with that. However, your argument is simply logically unsound.
No, your comparison is probably not what you're trying to say, since wheelchair-bound people have vastly different capabilities than the bipedal population. Unless you're trying to say women have different capabilities, which is true, but probably not the point you want to make.
I am, and it is. Having a diverse board ensures that women’s viewpoints are taken into account in a way that men simply aren’t tuned into. That’s the whole point of diversity in the workplace, and particularly in leadership. Especially with women, who are 50% of the population.
What viewpoints would benefit the company that a woman can have that a man can't? I hear the talking point a lot, and it just doesn't make sense, unless it's a marketing firm or something.
OpenAI? they do. they chose the wrong people, and now they're in damage control mode to something more familiar to the markets without the oxygen necessary to focus on DEI
I find it interesting that for all the talk from OpenAI staff that it was all about the people, and from Satya that MS has all the rights and knowledge and can jumpstart their own branch at the turn of a dime, it seems getting control of OpenAI proper was a huge priority.
Given that Claude sucks so bad, and this week’s events, I’m guessing that the ChatGPT secret sauce is not as replicable as some might suggest.
One of the more interesting aspects from this entire saga was that Helen Toner recently wrote a paper critical of OpenAI and praising Anthropic.
Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed [1].
That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.
Not to mention this statement ... imagine such a person on your startup board!
During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities.
Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman.
The other plausible explanation is that Helen Toner doesn’t care as much about safety as about her personal power and clinging to the seat which gives her importance. Saying it’s for safety is very easy and the obviously popular choice if you want to hide your motives. The remark she made strikes me as borderline narcissistic in retrospective.
> That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.
It strikes me as exactly the sort of thing she should be writing given OpenAI's charter. Recognizing and rewarding work towards AI safety is good practice for an organization whose entire purpose is the promotion of AI safety.
Yeah, on one hand, the difference between a charity oriented around a mission like OpenAI's nominal charter and a business is that the former naturally ought to be publicly, honestly introspective -- its mission isn't private gain, but achieving a public effect, and both recognition of success elsewhere and open acknowledgement of shortcomings of your own is important to that.
On the other hand, its quite apparent that essentially all of the OpenAI workforce (understandably, given the compensation package which creates a financial interest at odds with the nonprofit's mission) and in particular the entire executive team saw the charter as a useful PR fiction, not a mission (except maybe Ilya, though the flip-flop in the middle of this action may mean he saw it the same way, but thought that given the conflict, dumping Sam and Greg would be the only way to preserve the fiction, and whatever cost it would have would be worthwhile given that function.)
And Anthropic doesnt get credit for stopping the robot apocalypse when it was never even possible. AI safety seems a lot like framing losing as winning.
The most plausible explanation I've found is that the pro-safety faction and pro-accel factions were at odds which was why the board was stalemated at a small size.
Altman and Toner came into conflict over a mildly critical paper Toner wrote involving Open AI and Altman tried to have her removed from the board.
This is probably what precipitated this showdown. The pro safety/nonprofit charter faction was able to persuade someone (probably Ilya) to join with them and oust Sam.
So, the only two women were removed from the board, and two ultra-alpha males were brought on. And everybody is cheering it on as the right thing to do!
I was thinking about this too, but the wife of an actor and someone two years out of her masters were not the caliber people that should have been on the board of an $80B company.
I would expect people with backgrounds like Sheryl Sandberg or Dr. Lisa Sue to sit in the position. The two replaced women would have looked like diversity hires had they not been affiliated with an AI doomer organization.
I hope there’s diversity of representation as they fill out the rest of the board and there’s certainly women who have the credentials, but it’s important that they don’t appear grossly unqualified when they sit next to the other board members.
Why does being a man or woman even matter? Do we really need a DEI hire for the board of some of the most groundbreaking tech in years?
I’m not saying Larry Summers has some perfect resume for the job; but to assume he was brought on BECAUSE he is man?
Cmon. There’s absolutely no evidence for that and you are just projecting an issue into the situation, rather than it being of any reality.
I think you are the one projecting. I am just presenting facts. There is also nobody black on the board, by the way. I don't think that is a problem, but it is what it is.
Now this "initial board", tasked with establishing the rest of the board, for a company that wants to create AGI for the benefit of humanity, consists of three white alpha-males. That's just a fact. Is it a coincidence? Of course not.
And Larry Summers believes that women are genetically inferior to men at science, technology, engineering, and mathematics. A lot of the techbro hate that was directed specifically at Helen is openly misogynistic, which is actually pretty funny because Larry Summers was probably who Helen was eventually happy with because of their shared natsec connections.
Disagree with her or her actions without falsely claiming that she has no qualifications or understanding of AI and therefore no business being on the board in the first place? It is not hard at all to do so, and many people did.
Do you think people said she has no qualifications because she is a woman, or is it possible people say that because her resume is quite short ?
It seems like people taking such comments as misogynistic are actually projecting misogyny into the situation, rather than the reverse.
If you showed me her resume and put “Steven Smith” stop the paper, I’d say that person isn’t qualified to be running the board of a 90 billion dollar company in charge of guiding research on some of the most groundbreaking new tech in years.
It’s definitely the right thing to do. Those women had “qualifications” in a made up field with no real world relevance that aimed to halt progress on AI work. We are no where close to a paradigm where AI takes over the world or whatever.
Finally the OpenAI saga ends and everybody can go back to building!
3 things that turned things around imo:
1. 95% of employees signing the letter
2. Ilya and Mira turning Team Sam
3. Microsoft pulling credits
Things AREN’T back to where they were. OpenAI has been through hell and back. This team is going to ship like we’ve never seen before.
Or it was recognized that Adam was the instigator and the real power player, and the force that Sam needed to come to an accommodation with. From everything I've heard about Toner, she's a very principled person who lent academic credibility to the board, and was a great figurehead for the non-profit's conscience. Once the veneer was ripped from the non-profit's "controlling" role, she was deadweight and useful only as a scapegoat.
It looks to me like the real victim here is the "for humanity" corporate structure. At some point, the money decided it needed to be free.
I wonder how this will impact the company-owned-by-a-non-profit model in the future. While it isn’t uncommon (e.g. I believe IKEA are owned by a nonprofit), I believe it has historically been for tax reasons.
Given the grandstanding and chaos on both sides, it’ll be interesting to see if OpenAI undergo a radical shift in their structure.
Satya's pay is about 100 million dollars. Ide say he has earned every penny for protecting MSFTs 10B investment in OpenAI. A 1% insurance policy is great value.
This was expected. So they booted Ilya (my main culprit), Helen Toner (expected, favoriting Anthropic) and Tasha McCauly. This seems to have been their vote majority.
Not D'Angelo. Interesting
Suppose everything settles and they have the board properly in place. I know such board has fiduciary responsibility to make sure the organization is headed in the right direction based on its goals and missions. For private company, the mission is very clear, but for non-profit orgs like OpenAI, what's their mission specifically? It vaguely claims to better the humanity, but what does that entail exactly with regards to what they do in AI space?
Losing the CEO must not push significant number of your staff to throw hissy fits and jump ship - it doesn't instill confidence in investors, partners, and crucially customers.
as this event turned into a farce, it's evident that neither the company nor it's key investors accounted much for the "bus factor/problem" i.e loosing a key-person threatened to destroy the whole enterprise.
Nothing, he has to do with political connections, and OpenAI's main utility to Microsoft is as hand puppet for lobbying for the terms it wants for the AI marketplace in the name of OpenAI's nominal "safety" mission.
Easy. AI discourse has gone insane, on both sides, and is sorely in need of perspective from grounded, normal adults with a track record of moderation and shooting down BS. Summers is a grounded, normal adult with a track record of moderation and shooting down BS. Ergo, he's immanently relevant to AI.
He's also financially literate enough to know that it's poor form release market-moving news right before the exchanges close a Friday. They could have waited an hour.
Being financially literate means being able to understand how the financial system works. Larry Summers thinks operate as intermediaries lending out deposits. This is very wrong. He is not financially literate. He is an economist.
I think Larry Summers probably knows what a central bank is.
But "how money creation works" isn't the same thing as "how the financial system works". I guess the financial system mostly works over ACH.
We can see what happens when banks don't lend out deposits, because that's basically what caused SVB to fail. So by the contrapositive, they aren't really operating then.
What's interesting to me is that during this time Meta and OpenAI have eliminated their AI ethics members/teams but are still preaching about how it matters. No one has given any details beyond grand statements about it's importance on what these ethical AIs do. Everyone has secured their payday though.
I think those changes (and this shakeup) are the start of the industry grounding its expectations for this technology. I think a lot of product and finance people, and many but not all researchers, are seeing the current batch of generative AI ideas as ripe to make do things and see the pseudo-religious safety/ethics communities as not directly relevant to that work.
So you let your product teams figure out how the brand needs to be protected and the workflow needs to be shaped, like always, and you don't defer to some outside department full of beatniks in berets or whatever.
This is the abandoning of ethics. No one moving forward is going to be thinking about it and they've clearly signaled it's about making money. People that have issues with it will just not use the products or be hypocrites about using the products. There is nothing to push up against anymore, but I don't think the recent events are initiator. People were already letting go of ethics the moment they continued using it because the tech was so cool. The parting of the ethical peoples is just the final nail. There is no reason to remove these ethical teams if they believe in ethics, downsize maybe but not dedicating a human to at least researching the ethical outcomes sure isn't very good for humanity ethics concerns.
Keeping D'Angelo on the board is an obvious mistake, he has too much conflicting interest to be level headed and has demonstrated that. The only people that benefited from all this are Microsoft and D'Angelo. Give it a year and we will see part 2 of all this.
Further where is the public accountability? I thought the board was to act in the interests of the public but they haven't communicated anything. Are we all just supposed to pretend this never happend and that the board will now act in the public interest?
We need regulations to hold these boards which hold so much power accountable to the public. No reasonable AI regulations can be made until the public are included in a meaningful way, anyone that pushes for regulations without the public is just trying to control the industry and establish a monopoly.
The previous board thought Sam was trying to get full control of the board, so they ousted him. But of course they weren't happy with OpenAI being destroyed either.
Now they agreed to a new board without Sam/Greg, hoping that that will avoid Sam ever getting full control of the board in the future.
Hiring engineers at 900K salary & pretending to be non-profit does not work. Turns out, 97% of them wanted to make money.
Government should have banned big tech investment in AI companies a year ago. If they want, they can create their own AI but buying one should be off the table.
Larry Summers is an interesting choice. Any ideas why? I know he was Sheryl Sandberg's mentor/professor which gives him a tech connection. However, I've watched him debate Paul Krugman on inflation in some economic lectures and it almost felt like Larry was out of his element as in Larry was outgunned by Paul... but maybe he was having an off day or it was a topic he is not an expert in. But I don't know the history, haven't read either of their books and I am not an economist. But it was something I noticed.. almost like he was out of touch.
OpenAI board f’d around and found out the consequences of their poor decisions. The decision to back pedal from previous position just shows the level of disconnect between these 2 entities.
The most interesting thing here is not the cult of personality battle between board and CEO. Rather, it's that these teams have managed to ship consumer AI that has a liminal, asymptotic edge where the smart kids can manipulate it into doing emergent things that it was not designed to do. That is, many of the outcomes of in-context learning could not be predicted at design time and they are, in fact, mind-blowing, magical, and likely not safe for consumption by those who believe that the machines are anywhere near the spectrum from consciousness to sentience.
So, Adam D'Angelo is the only board member that remains, and he had also voted against Altman before. How interesting, considering all the theory crafting about him being the one who initiated this coup.
Kicking Sam out was a bad move. Begging him back is worse. Instead of having an OpenAI whose vision you disagree with, now we have an OpenAI with no vision at all that's simply blown back and forth.
"Context on the negotiations to bring Sam back as CEO of OpenAI:
The biggest sticking point was Sam being on the board. Ultimately, he conceded to not being on the board, at least initially, to close the deal. The hope/expectation is that he will end up on the board eventually."
There is so much vagueness around this whole OpenAI thing that it's difficult taking anything seriously anymore - it's almost hearsay at this point. Yesterday it was Altman's personal interests, now it's a breakthrough model, tomorrow it's something else. At the very least it's fantastic marketing (albeit at the expense of their customers).
In a different thread I commented how surprised I was that Emmett Shear accepted the job of interim CEO, to some criticism that my opinion was “silly”. This is why he should have stayed miles away from this whole mess. There was no winning scenario for him: stay CEO and lose 95% of the employees, or get ignored by a triumphant return of Sam Altman.
After learning earlier about Sam Altman's long-con at Reddit, I'm surprised I haven't seen anyone suggest that Emmett Shear accepted the job in order to help get Sam back into the company.
They were both members of the inaugural class of Y-Combinator, and all of Shear's published actions since accepting the role (like demanding evidence of Sam' wrongdoing) seem to have helped Sam return to his role.
I don't think it's a stretch to say that he did win, in that he might have accomplished exactly what he wanted when he accepted the role.
The "giveaway" is the fact that "Microsoft is happy" with the return of Mr. Altman. Can't wait for the former boards tell-all story. Bets on: how a founder of cutting edge tech company wanted world peace and no harm but outside capital forces steered him to other "unfathomable riches" option. It happens.
>We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
Is Ilya off the board then?
Why is Adam still on?
Brett and Larry are good choices, but they need to get that board up to 10 or so people representing a balance of perspectives and interests very quickly.
MS and OpenAI did not win here, but one of their competitors did...whoops.
Why did I say that? Look at the product release by the competitors these past few days. 2nd, Sam pushing for AI chips implies that chatGPT's future breakthroughs are hardware bounded. Hence, the road to AGI is not through chatGPT.
You say that as if that was his end goal. His end goal was to save the situation, and that happened. One can easily argue that Microsoft’s offer added huge pressure on the OpenAI board that made the new / current outcome possible. And perhaps that was the plan after all.
In my opinion, MS will neuter this product too, there is no way they're just going to have the public accessing tools which make their own software and products obsolete.
They will take over the board, and then steer it in some weird dystopian direction.
Ilya knows that IMO, he was just more principled than Altman.
Yeah people should really stand up for their peer more. Who knew that would work. Sam wouldn't have been back if it not for Brockman and several scientists standing up for him.
“The company also agreed to revamp the board of directors that had dismissed him. OpenAI named Bret Taylor, formerly co-CEO of Salesforce, as chair and also appointed Larry Summers, former U.S. Treasury Secretary, to the board.”
Why is people so interested in this? Why exactly was he fired? I did not get why when I read the news, so I find it strange that people care if they don't even know what it's about. Do we know for sure what this was/is about?
So OpeNAI charter still in place? Once OpenAI reaches AGI, Microsoft won't be able to access the tech. Then what will happen to Microsoft when other commercial competitors catch up and also reach AGI one or two years later?
This is a triumph of labor against management in sheep's garb. Workers united were able to force an outcome they desired to preserve an organization they loved while sweeping aside a board that would prefer to destroy it.
Summers would tell you that women don’t have the necessary “intrinsic aptitude”. Of course the intrinsic aptitude in question is being able to participate in a nepotistic boy’s club.
What Summers would point out is that boys do better at maths, which is true. In fact, in the UK, the only time boys have had worse results in maths was when exams were cancelled during Covid and teachers (hint: primarily female) were allowed to dish out grades. Girls suddenly shot ahead. When exams resumed, boys took the lead again.
But don't notice anything from that. That would be sexist, right Anton?
First, Summers’ sexist claims were much broader than that.
Second, yes, you are being sexist, and irrational. What you’re doing is exactly the same as the reasons that it’s racist and irrational to say “whites are better at x”.
You’re cherry picking data to examine, to reach a conclusion that you want to reach. You’re ignoring relevant causal factors - or any causal factors at all, in fact, aside from the spurious correlation you’ve assumed in your conclusion.
You’re ignoring decades of research on the subject - although in your defense, you’re probably just not aware of it.
Most irrationally of all, you’re generalizing across an entire group, selected by a factor that’s only indirectly relevant to the property you’re incorrectly generalizing about.
As such, “sexist” is just a symptom of fundamentally confused and under-informed thinking.
Actually, Summer's claims were much narrower - he said that boys tend to deviate from the mean more. That is, it's not that men are superior, it's that there are more boy geniuses and more boy idiots.
Decades of research shows that teachers give girls better grades than boys of the same ability. This is not some new revelation.
A whole cohort of boys got screwed over by the cancellation of exams during Covid. That is just reality, and no amount of creepy male feminist posturing is going to change that. Rather, denying issues in boys education is liable to increase male resentment and bitterness, something we've already witnessed over the past few years.
I quoted one of the unsupported claims that Summers made - that "there are issues of intrinsic aptitude" which help explain lower representation of women. Not, you know, millennia of sexism and often violent oppression. This is the exact same kind of arguments that racists make - any observed differences must be "intrinsic".
If Summers had in fact limited himself to the statistical claims, it would have been less of an issue. He would still have been wrong, but he wouldn't have been so obviously sexist.
It's easy to refute Summers' claims, and in fact conclude that the complete opposite of what he was saying is more likely true. "Gender, Culture, and mathematics performance"(https://www.pnas.org/doi/10.1073/pnas.0901265106) gives several examples that show that the variability as well as male-dominance that Summers described is not present in all cultures, even within the US - for example, among Asian American students in Minnesota state assessments, "more girls than boys scored above the 99th percentile." Clearly, this isn't an issue of "intrinsic aptitude" as Summers claimed.
> A whole cohort of boys got screwed over by the cancellation of exams during Covid.
I'm glad we've identified the issue that triggered you. But your grievances on that matter are utterly irrelevant to what I wrote.
> no amount of creepy male feminist posturing is going to change that
It's always revealing when someone arguing against bigotry is accused of "posturing". You apparently can't imagine that someone might not share your prejudices, and so the only explanation must be that they're "posturing".
> increase male resentment and bitterness
That's a choice you've apparently personally made. I'd recommend taking more responsibility for your own life.
> which help explain lower representation of women
Yes, they do help explain that. This does not preclude other influences. You can't go two sentences without making a logical error, it's quite pathetic.
I'll do you a favour and disregard the rest of your post - you deviate from the mean a bit too much for this to be worth it. Just try not to end up like Michael Kimmel, lol.
> You’re cherry picking [...] You’re ignoring relevant causal factors [...] You’re ignoring decades of research [...] you’re generalizing
You're very emphatic in ignoring common sense. You don't need studies to see that almost all important contributions to mathematics, from Euclid to the present day, have come from men. I don't know if it's because of genetics, culture, or whatever, but it's the truth.
> you are being sexist [...] it’s racist and irrational [...]
Not sure "elected" is the right way of looking at it. More like "selected" or "nominated" by Sam/MSFT perhaps. His main qualification may be that he's an adult?
At least they'll be operating under the original charter - it sounds like the mission continues. Not sure about this new board but hard to imagine they'd make the same sort of mistake.
> We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
Good outcome. I think everything will go back to business as usual with slightly accelerated productisation. 99% of people will not have noticed anything and if so quickly forget.
All involved have clearly demonstrated the lack of credibility in self-governance or the ability to make big-boy decisions. All reassurances from now on will sound hollow.
What a delightful shit show. I don't even personally care whether Sam Altman is running OpenAI but it brings me no end of schadenfreude to see a bunch of AI Doomers make asses of themselves. Ethical Altruism truly believes that AI could destroy all of human life on the planet which is a preposterous belief. There are so many better things to worry about, many of which are happening right now! These people are not serious and should not hold serious positions of power. It's not hard to see the dangers of AI: replacing a lot of make-work that exists in the world, giving shoddy answers with high confidence, taking humans out of the loop of responsible decision making, but I cannot believe that it will become so smart that it becomes an all powerful god. These people worship intelligence (hence why they believe that with infinite intelligence comes infinite power) but look what happens when they actually have power! Ridiculous.
What a wild ride these past few days have been. Friday already feels like a very long time ago given all of the information and controversy that's come out.
I'm not American - I'm unclear what all this fuss is about? From where I am it looks like some arbitrary company politics in a hyped industry with a guy whose name I've seen mentioned on this site occasionally but really comes across as just a SV or San Fran cult of personality type. Am I missing something? Is there some substance to this story or is it just this week's industry soap opera?
The media and the VCs are treating Sam like some hero and savior of AI. I’m not getting it. What has he done in life and/or AI to deserve so much respect and admiration? Why don’t top researchers and scientists get equivalent (if not more) respect, admiration and support? It looks like one should strive to become product manager, not an engineer or a scientist.
> Why don’t top researchers and scientists get equivalent (if not more) respect, admiration and support?
I can't believe I'm about to defend VCs and "senior management" but here goes.
I've worked for two start-ups in my life.
The first start-up had dog-shit technology (initially) and top-notch management. CEO told me early on that VCs invest on the quality of management because they trust good senior executives to hire good researchers and let them pivot into profitable areas (and pivoting is almost always needed).
I thought the CEO was full of shit and simply patting himself on the back. Company pivoted HARD and IPOed around 2006 and now has a MC of ~ $10 billion.
The second start-up I worked with was founded by a Nobel laureate and the tech was based on his research. This time management was dog-shit. Management fumbled the tech and went out of business.
===
Not saying Altman deserves uncritical praise. All I'm saying is that I used to diminish the importance of quality senior leadership.
Great comment. You interspersed the two, but instead of using management I like to say that it's leadership that matters. Getting a bunch of people (smart or not) to all row in the same direction with the same vision is hard. It's also commonly the difference between success and failure. Of course the ICs deserve admiration and respect, but people (ICs) are often quick to dismiss leadership.
A great analogy can be found on basketball teams. Lots of star players who should succeed sans any coach, but Phil Jackson and Coach K have shown time and again the important role leadership plays.
I remember about ten years ago someone arguing that Coach K was overrated because his college players on average underperformed in the NBA (relative to their college careers).
I could not convince them that this was actually evidence in favor of Coach K being an exceptional coach.
I'd extend that leadership in the form of management needs leadership in the technical aspect as well. The two need to work in tandem to make things work. Imho the best technical leads are usually not the smartest ones, they are those that best utilize their resources - read, other people - and are force multipliers.
Of course you need the people who can deep dive and solve complex issues, none doubts that.
I'd go further than even that! You need 3 forms of advocacy in leadership for a successful business, business/market, tech, and time. The balance of those three can make or break any business.
You can see this at the micro level in a scrum team between the scrummaster, the product owner, and the tech lead.
> IPOed around 2006 and now has a MC of ~ $10 billion.
The interesting thing is you used economic values to show their importance, not what innovations or changes they achieved. Which is fine for ordinary companies, but OpenAI is supposed to be a non-profit, so these metrics should not be relevant. Otherwise, what's the difference?
How do you do expensive bleeding edge research with no money? Sure you might get some grants in the millions but what if it takes billions. Now lets assume the research is no small feat, its not just a handful of individuals in a lab, we need to hire larger teams to make it happen. We have to pay for those individuals and their benefits.
My take is its not cheap to do what they are doing and adding a capped for-profit side is an interesting take. Afterall, OpenAI's mission clearly states that AGI is happening and if thats true, those profit caps are probably trivial to meet.
> OpenAI is supposed to be a non-profit, so these metrics should not be relevant
You're doing the same thing except with finances. Non-profit doesn't mean finances are irrelevant. It simply means there are no shareholders. Non-profits are still businesses - no money, no mission.
You can get big salaries; and to push the money outside it's very simple, you just need to spend it through other companies.
Additional bonus with some structures: If the co-investors are also the donators to the non-profit, they can deduct these donations from their taxes, and still pocket-back the profit, it's a double-win.
No conspiracy needed, for example, it's very convenient that MSFT can politely "influence" OpenAI to spend back on their platform a lot of the money they gave to the non-profit back to their for-profit (and profitable) company.
For example, you can create a chip company, and use the non-profit to buy your chips.
Then the profit is channeled to you and your co-investors in the chip company.
> No conspiracy needed, for example, it's very convenient that MSFT can politely "influence" OpenAI to spend back on their platform a lot of the money they gave to the non-profit back to their for-profit (and profitable) company.
Can you explain this further? So Microsoft pays $X to OpenAI, then OpenAI uses a lot of energy and hardware from Microsoft and the $X go back to Microsoft. How does Microsoft gain money this way?
MS gains special access and influence over OpenAI for effectively 'free'. Obviously the compute cost MS money, and some of their 'donation' is used on OpenAI salaries, but still. This special access and influence lets MS be first to market on all sorts of products - see co-pilot already with a 1M+ paying subscribers.
For example, let's say I'm a big for-profit selling shovels. You're a naive non-profit who needs shovels to build some next gen technology. Turns out you need a lot of shovels and donations so far haven't cut it. I step in and offer to give you all the shovels you need, but I want special access to what you create. And even if it's not codified, you will naturally feel indebted to me. I gain huge upside for just my marginal cost of creating the shovels. And, if I gave the shovels to a non-profit I can also take tax write-offs at the shovel market value.
TBH, it was an amazing move by MS. And MS was the only big cloud provider who could have done it b/c Sataya appears collaborative and willing to partner. Amazon would have been an obvious choice, but they don't partnership like that and instead tend to buy companies or repurpose OSS. And Google can't get out of their own way with their hubris.
It is absolutely curious to talk about profit when talking about academic research or a non-profit (which OpenAI officially is).
Sure, you can talk about results in terms of their monetary value but it doesn’t make sense to think of it in terms of the profit generated directly by the actor.
For example Pfizer made huge profits off of the COVID-19 vaccine. But that vaccine would never have been possible without foundational research conducted in universities in the US and Germany which established the viability in vivo of mRNA.
Pfizer made billions and many lives were saved using the work of academics (which also laid the groundwork for future valuable vaccines). The profit made by the academics and universities was minimal in comparison.
Interesting, I always thought that research and startups are very similar. Where you have something (product/research-idea) which you think is novel and try to sell it (journals/customers).
The management skills which you potentiated differentiated the success of the two firms. I can see how the lack of this might be wildly spread out in academia.
Most startups need to do a very different type of research than academia. They need to move very fast and test ideas against the market. In my experience, most academic research is moving pretty slowly due to different goals and incentives - and at times it can be a good thing.
> All I'm saying is that I used to diminish the importance of quality senior leadership.
Quality senior leadership is, indeed, very important.
However, far, far too many people see "their company makes a lot of money" or "they are charismatic and talk a good game" and think that means the senior leadership is high-quality.
True quality is much harder to measure, especially in the short term. As you imply, part of it is being able to choose good management—but measuring the quality of management is also hard, and most of the corporate world today has utterly backwards ideas about what actually makes good managers (eg, "willing to abuse employees to force them to work long hours", etc).
> Not saying Altman deserves uncritical praise. All I'm saying is that I used to diminish the importance of quality senior leadership.
Absolutely. The focus on the leadership of OpenAI isn't because people think that the top researchers and scientists are unimportant. It's because they realize that they are important, and as such, the person who decides the direction they go in is extremely important. End up with the wrong person at the top, and all of those researchers and scientists end up wasting time spinning wheels on things that will never reach the public.
Sam pontificated about fusion power, even here on HN. Beyond investing in Helion, what did he do? Worldcoin. Tempting impoverished people to give up biometric data in exchange for some crypto. And serving as the face of mass-market consumer AI. Clearly that's more cool, and more attractive to VCs.
Meanwhile, what have fusion scientists and engineers done? They kept on going, including by developing ML systems for pure technological effect. Day after day. They got to a breakthrough just this year. Scientists and engineers in national labs, universities, and elsewhere show what a real commitment to technological progress looks like.
He is the Executive Chairman of Helion Energy so it is not just a passive investment.
That said, I wish Helion wasn't so paranoid about Chinese copycats and was more open about their tech. I can't help but feel Sam Altman is at least partly responsible for that.
> Scientists and engineers in national labs, universities, and elsewhere show what a real commitment to technological progress looks like.
And everywhere. You've only named public institutions for some reason, but a lot of progress happens in the private sector. And that demonstrates real commitment, because they're not spending other people's money.
Unsurprisingly VCs view VCs as the highest form of life, and product managers are temporary positions taken on the way to ascending to VC status.
I have said recently elsewhere SV now devalues builders but it is not just VCs/sales/product, a huge amount is devops and sre departments. They make a huge amount of noise about how all development should be free and the value is in deploying and operating the developed artifacts. Anyone outside this watching would reasonably conclude developers have no self respect, hardly aspirational positions.
Developers are clearly the weak link today, have given up all power over product and it is sad and why software sucks so bad. It pains the soul that value creators have let the value extractors run the show, because it is now a reality TV / circus like market where power is consolidating.
Developers and value creators with power are like an anti-trust on consolidation and concentration and they have instead turned towards authoritarianism instead of anti-authoritarianism. What happened? Many think they can still get rich, those days are over because of giving up power. Now quality of life for everyone and value creators is worse off. Everyone loses.
I don't think the media are treating him as a "hero and savior of AI". However OpenAI and ChatGTP have undoubtedly been successful and he seems popular with his people. It's human nature to follow the top person as figurehead for an organisation as we or journalists don't have time or info to break down what each of the hundreds of employees contributed.
I actually get the impression from the media that he's a bit shifty and sales orientated but seems effective at getting stuff done.
Please be nicer, this was just a litte error probably caused by typing too fast, it doesn't mean that they lack knowledge. Attacking people for such minor mistakes is not what this community is about. Rushing to downvote (your other comment)/critique is overall detrimental. Slow down, think, have a discussion about stuff that matters. I know that's not how the internet is usually, we're trying to be better here
One of the most important things I've learned in life is that organizing people to work toward the same goal is very hard. The larger the group you need to organize, the harder it is.
Initially, when the idea is small, it is hard to sell it to talent, investors and early customers to bring all key pieces together.
Later, when the idea is well recognized and accepted, the organization usually becomes big and the challenge shifts to understanding the complex interaction of various competing sub-ideas, projects and organizational structures. Humans did not evolve to manage such complex systems and interacting with thousands of stakeholders, beyond what can be directly observed and fully understood.
However, without this organization, engineers, researchers, etc cannot work on big audacious projects, which involve more resources than 1 person can provide by themselves. That's why the skill of organizing and leading people is so highly valued and compensated.
It is common to think of leaders not contributing much, but this view might be skewed because of mostly looking at executives in large companies at the time they have clear moats. At that point leadership might be less important in the short term: product sells itself, talent is knocking on the door, and money is abundant. But this is an unusual short-lived state between taking an idea off the ground and defending against quickly shifting market forces.
My reading of all this is that the board is both incompetent and has a number of massive conflicts of interests.
What I don’t understand is why they were allowed to stay on the board with all these conflicts of interests all the while having no (financial) stake in OpenAI. One of the board members even openly admitting that she considered destroying OpenAI a successful outcome of her duty as board member.
> One of the board members even openly admitting that she considered destroying OpenAI a successful outcome of her duty as board member.
I don't see how this particular statement underscores your point. OpenAI is a non-profit with the declared goal of making AI safe and useful for everyone; if it fails to reach that or even actively subverts that goal, destroying the company does seem like the ethical action.
This just underscores the absurdity of their corporate structure. AI research requires expensive researchers and expensive GPUs. Investors funding the research program don't want to be beholden to some non-profit parent organization run by a small board of nobodies who think their position gives them the power to destroy the whole thing if they believe it's straying from its utopian mission.
They don’t “think” that. It does do that, and it does it by design exactly because as you approach a technology as powerful as AI there will be strong commercial incentives to capture its value creation.
Because distroying openai wouldn't make ai safe it would just remove anyone working on alignment from having an influence on it. Microsoft and others are interested in making it benevolent but go along with it because openai is the market leader.
It's probably not easy (practically impossible if you ask me) to find people who are both capable of leading an AI company at the scale of OpenAI and have zero conflicts of interest. Former colleagues, friends, investments, advisory roles, personal beefs with people in the industry, pitches they have heard, insider knowledge they had access to, previous academic research pushing an agenda, etc.
If both is not possible, I'd also rather compromise on the "conficts of interest" part than on the member's competency.
I don't have much in the way of credentials (I took one class on A.I. in college and have only dabbled in it since, and I work on systems that don't need to scale anywhere near as much as ChatGPT does, and while I've been an early startup employee a couple of times I've never run a company), but based on the past week I think I'd do a better job, and can fill in the gaps as best as I can after the fact.
And I don't have any conflicts of interest. I'm a total outsider, I don't have any of that shit you mentioned.
So yeah, vote for me, or whatever.
Anyway my point is I'm sure there's actually quite a few people who could do a likely a better job and don't have a conflict of interest (at least not one so obvious as investing in a direct competitor), they're just not already part of the Elite circles that would pretty much be necessary to even get on these people's radar in order to be considered in the first place. I don't really mean me, I'm sure there are other better candidates.
But then they wouldn't have the cachet of 'Oh, that guy co-founded Twitch. That for-profit company is successful, that must mean he'd do a good job! (at running a non-profit company that's actively trying to bring about AGI that will probably simultaneously benefit and hurt the lives of millions of people)'.
Right. At least some of the board members took issue with ChatGPT being released at all, and wanted more to be kept from the public. For the people who use these tools everyday, it shouldn't be surprising that Altman was viewed as the better choice.
I wouldn't be surprised in the slightest if Sam and his other ultra-rich buddies like Satya had their fingers deep in the pockets of all the tech journalists that immediately ran to his defense and sensationalized everything. Every single news source posted on HN read like pure shilling for the Ponzi sch- uh, I mean Worldcoin guy and hailing him as some sort of AI savant.
Let me offer up a secret from the inside. You dont in any way shape or form have to pay money to journalists. The can are bought and paid for through their currency - information and access.
They dont really even really shill for their patron; they thrive on the relevance of having their name in the byline for the article, or being the person who gets quote / information / propaganda from <CEO|Celebrity|Criminal|Viral Edgelord of the Week>.
My more plausible version is that CEOs of journalistic publications are in cahoots with the rich/powerful/govt people, who get to dictate the tone of said publications by hiring the right journalists/editors and giving them the right incentives.
So as a journalist you might have freedom to write your articles, but your editor (as instructed by his/her senior editor) might try to steer you about writing in the correct tone.
This is how 'Starship test flight makes history as it clears multiple milestones' becomes 'Musk rocket explodes during test'
They give the journos access as long as they don't bite the hand that feeds. Anyone calling this a conspiracy theory simply hasn't been in the valley long enough to see how these things work.
Well, it's been exposed multiple times that money, egos and the media that needs to report about them create a school lunch table where they simply stroke each other's ego and inflate everything they do.
No need for a conspiracy, everyones seen this in some aspect, it just gets worse when these people are throwing money around in the billions.
all you need to do is witness someone Like Elon musk to see how disruptive this type of thing is.
Altman seems to be a extraordinary leader, motivator, and strategizer. This itself is clear by the fact that 90% of the company was willing to walk out over his retention. Just think about that for minute.
Yeah, it should be extremely obvious the reason most of the employees were willing to walk is they've hitched their wagons to Altman. The board of openai put the presumed party day all of them were anticipating in jeopardy. Not all of us live in this god forsaken place to "work with cool tech".
There was about to be a secondary stock purchase by Thrive where employees could cash out their shares. That likely would've fallen apart if the board won the day. Employees had a massive incentive to get same back.
Please stop. No employee is loyal to any CEO based on some higher order matter.
They just want to get their big pay day and will follow whoever makes that possible.
Yes yes, but that doesn't change the fact that Sam positioned himself to be unfireable. The board took their best shot and now the board is (mostly) gone and Sam is still the chief executive. They board will find itself sidelined from now on.
I thought about it for a minute. I came to the conclusion that OpenAI would have likely tanked (perhaps even within days) had Altman not returned to maintain the status quo, and engineers didn't want to be out of work and left with worthless stock.
> It looks like one should strive to become product manager, not an engineer or a scientist.
In my experience, product people who know what they are doing have a huge impact on the success of a company, product, or service. They also point engineering efforts in the right direction, which in turn also motivate engineers.
I saw good product people leaving completely destroy a team, never seen that happen with a good engineer or individual contributor, no matter how great they were.
Interesting. I had the opposite experience. All of the product suite having no idea about what the product even is, where it should go, making bad decisions over and over, excusing their bad choices behind "data" and finally, as usual, failing upwards eventually moving to bigger startups.
I have yet to find a product person that was not involved in the inception of the idea that is actually good (hell, even some founders fail spectacularly here).
At a consulting firm I worked with a product guy who I thought was very good, and was on the project pretty much from the beginning (maybe the beginning, not sure. He predated me by well over a year at least). He was extremely knowledgeable on the business side and their needs and spent a lot of time communicating with them to get a good feel of where the product needed to go.
But he was also technical enough to have a pretty good feel for the complexity of tasks, and would sometimes jump in to help figure out some docker configuration issues or whatever problems we were having (mostly devops related) so the devs could focus on working on the application code. We were also a pretty small team, only a few developers, so that was beneficial.
He did such a good job that the business eventually reached out to him and hired him directly. He's now head of two of their product lines (one of them being the product I worked on).
But that's pretty much it. I can't think of any other product people I could say such positive things about.
In my comment, the emphasis is definitely on the "product people who know what they are doing" and "good product people".
Of course, if the product suite is clueless, nobody is going to miss them, usually it's better the have no dedicated product people, than having clueless product people.
Yes, that matches my experience as well, that's why I mentioned "individual contributors", maybe it wasn't clear.
It's different with engineering managers (or team leads, lead engineers, however you want to call it). When they leave, that's usually a bad sign.
Though also quite often when the engineering leaders leave, I think of it as a canary in the coal mine: they are closer to business, they deal more with business people, so they are the first to realize that "working with these people on these services is pointless, time to jump ship".
No see, it doesn't matter, engineers are all cogs and easily replaceable. I'm sure they just dialed the engineer center and ordered a few replacements and they started 24 hours later and were doing just as good of a job the next day. /s
Incubation of senior management in US tech has reached singularity and only one person's up for the job. Doom awaits the US tech sector as there's no organisational ability other than one person able and willing to take the big complex job.
> The media and the VCs are treating Sam like some hero and savior of AI
I wouldn't be so sure. While I think the board handled this process terribly, I think the majority of mainstream media articles I saw were very cautionary regarding the outcome. Examples (and note the second article reports that Paul Graham fired Altman from YC, which I never knew before):
Half? 90% of a what a good CEO does is tell the story of why the company is important to it's customers and the market it serves. This story drives sales, motivates people internally, and makes the company a place people want to work.
A CEO is not a researcher. A researcher can be a CEO but in doing so stops being a researcher.
Maybe (almost certainly) Sam is not a savior/hero, but he doesn't need to be a savior/hero. He just needs to gather more support than the opposition (the now previous board). And even if you don't know any details of this story, enough insiders who know more than any of us of what happens inside oai - including hundred of researchers - decided to support the "savior/hero". It's less about Sam and more about an incompetent board. Some of those board members are top researchers. And they are now on the losing camp.
Below is a good thread, which maybe contains the answer to your question, and Ken Olsen's question about why brainiac MIT grads get managed by midwit HBS grads.
A good leader is someone you'll follow into battle, because you want to do right by the team, and you know the leader and the team will do right by you. Whatever 'leadership' is, Sam Altman has it and the board does not.
The board could have said, hey we don't like this direction and you are not keeping us in the loop, it's time for an orderly change. But they knew that wouldn't go well for them either. They chose to accuse Sam of malfeasance and be weaselly ratfuckers on some level themselves, even if they felt for still-inscrutable reasons that was their only/best choice and wouldn't go down the way it did.
Sam Altman is the front man who 'gave us' ChatGPT regardless of everything else Ilya and everyone else did. A personal brand (or corporate) is about trust, if you have a brand you are playing a long-term game, a reputation converts prisoner's dilemma into iterated prisoner's dilemma which has a different outcome.
Human nature, some people do love charismatic leaders. It's hard to comprehend for those of us with a more anarchist nature.
That being said, I have no idea of this guy's contributions. It's easy to dismiss entrepreneur/managers because they're not top scientists, but they also have very rare skills and without them, projects don't get done.
Yea its a bit much he obviously doesn't deserve the admiration that he is getting. That said he deserves respect for helping bring ChatGPT to market, he deserves support because the board have acted like clowns and justified it with their mission of public accountability, but have rejected the idea that the board itself should be publicly accountable.
Recent OpenAI CEOs found themselves on the protagonist side not for their actions, but for the way they have been seemingly treated by the board. Regardless of actual actions on either side, "heroic" or not, of which the public knows very little.
Furthermore, being removed from the board while keeping a role as chief scientist is different from being fired from CEO and having to leave the company.
The OpenAI board just seems irrational, immature, indecisive, and many other stupid features you don’t want in a board.
I don’t see this so much as an “Altman is amazing” outcome so much as the board is incompetent and doing incompetent things and OpenAI’s products are popular and the boards actions put this products in danger.
Not that Altman isn’t cool, I think he’s smart, but I think a similar coverage would have occurred with any other ceo who was fired for vague and seemingly random reasons on a Friday afternoon.
> What has he done in life and/or AI to deserve so much respect and admiration? Why don’t top researchers and scientists get equivalent (if not more) respect, admiration and support?
This has been the case for all achievement of all major companies, the CEO or whoever is on top gets the credit for all their employee's work. Why would be different for OpenAI?
Well there are notable cases in which the CEO had a critical role in the product development.
Larry Ellison coded himself the first versions of Oracle database and was then CEO up to 2014. Shay Banon wrote Elasticsearch and was Elastic CEO for some time.
CEO is a ruler, scientist is a worker. The modern culture treats workers as a replaceable matter, which is redundant after the work is done. They are just tools. Rulers, on the other hand, take the all praise and honors. It's "them" who did the work. Musk is an extreme example of this.
They fired the CEO and didn't even inform Microsoft, who had invested a massive $20 billion. That's a serious lapse in judgment. A company needs leaders who understand business, not just a smart researcher with a sense of ethical superiority. This move by the board was unprofessional and almost childish.
Those board members? Their future on any other board looks pretty bleak. Venture capitalists will think twice before getting involved with anything they have a hand in.
On the other side, Sam did increase the company's revenue, which is a significant achievement. He got offers from various companies and VCs the minute the news went public.
The business community's support for Sam is partly a critique of the board's actions and partly due to the buzz he and his company have created. It's a significant moment in the industry.
>Why don’t top researchers and scientists get equivalent (if not more) respect, admiration and support
Google's full of top researchers and scientists who are at least as good as those at OpenAI; Sam's the reason OpenAI has a successful, useful product (GPT4), while Google has the far less effective, more lobotomized Bard.
The “game” only continues to exist as long as there are “players”. You’re perfectly justified to be discontent with the ones who perpetuate a system you disagree with.
That phrase is nothing more than a dissimulated way of saying “tough luck” or “I don’t care” while trying to act (outdatedly) cool. You don’t need to have grown up in any specific decade to understand its meaning.
Journalists really want everything to have a singular inventor. The concept of an organization is very difficult for them to grasp so they attribute everything to the guy at the top. Sam Altman is the latest in a long line of "inventors" which also includes such esteemed personalities as Elon musk, Steve Jobs, etc.
IMO and experience a good product manager is far more important than a good engineer or good scientist
Elon Musk’s neuralink is a good example - the work they’re doing there was attacked by academics saying they’d done this years ago and it’s not novel, yet none of them will be the ones who ultimately bring it to market.
I would be surprised if the original board’s reasons for caving in were not influenced by personal factors. They must’ve been receiving all kinds of threats from those involved and from random twitter extremists.
It is troubling because it shows that this “external” governance meant to make decisions for the good of humanity is unable to enforce decisions. The internal employees were obviously swayed by financial gain as well. I don’t think that I would behave differently were I in their shoes honestly. However, this does definitively mean that they are a product and profit driven group.
I think that Sam Altman is dishonest and a depressing example of what modern Americans idealize. He has all these ideals he preaches but will happily turn on if it upsets his ego. On top of that he is held up as some star innovator when in reality he built nothing himself. He just identified one potential technological advancement and threw money at it with all his billionaire friends.
Gone are the days of building things in a garage with a mission. Founders are no longer visionary engineers and designers. The path now is clear. Convince some rich folks you’re worthy of being rich too. When they adopt you into wealth you can start throwing shit at the wall until something sticks. Eventually something will and you can claim visionary status. Now your presence in the billionaire club is beyond reproach because you’re a “founder”.
So OpenAI's board is now exclusively white men, and predominantly tech insiders?
Lovely to have such a diverse group behind this technology
Could this be more comical?
I figured if Sam came back, the board would have to go as a condition. That's obvious. And deserved. The handling of this whole thing has been a very public clownshow.
Obviously, Microsoft has some influence here. That's no different to any other large investor. But the key factors are:
1. Lack of a good narrative from the board as to why they fired Sam;
2. Failure to loop in Microsoft so they're at least prepared from a communications front and feel like they were part of the process. The board can probably give them more details why privately;
3. People leaving in protest speaks well of Sam;
4. The employee letter speaks well of Sam;
5. The interim CEO clown show and lack of an all hands immediately after speaks poorly of the board.
Stop dreaming about alignment. All bets are off. This is the start of AI arms race. Think globally for a second. Yes, everybody wants to be a millionaire or billionaire. This is the current culture we are living in. Corporations have unprecedented power waved into the governments, but governments still have a monopoly on violence. People cannot switch to the new abstraction layer (UBI, Social Rating) for two or five years. They will keep a consumer-oriented mindset before the option to have one is erased. Where you think this is going? To a better Democracy? This is the Cold War V.2 scenario unfolding.
Is higher education really crucial for pushing something forward? Even if he isn't an AI expert, there is lots of stuff surrounding the technology that needs doing, for example massive amounts of funding, which he seems to have been pretty good at securing.
The new board only has 3 people to start with, but hopefully easier to add more members soon. Tonight's NYT story mentioned the board member attrition and the prolonged gridlock in adding new ones, which probably led to the current saga.
He's been sending out the occasional tweet - to be honest I get the impression that like the rest of us, he's just been watching with a big tub of popcorn...
>- Microsoft strengthened its power despite not appearing involved in the drama
Depending on what you mean by "the drama", Microsoft was very clearly involved. They don't appear to have been in the loop prior to Altman's firing, but they literally offered jobs to everyone who left in solidarity with same. Do we really think things like that were not intended to change people's minds?
I’d go further than just saying “they were involved” —- by offering jobs to everyone who wanted to come with Altman, they were effectively offering to acquire OpenAI, which is worth ~$100B, for (checks notes) zero dollars.
+ according to the rumors on Bloomberg.com / CNBC:
The investment is refundable and has high priority: Microsoft has a priority to receive 75% of the profit generated until the 10B USD have been paid back
+ (checks notes) in addition (!) OpenAI has to spend back the money in Microsoft Cloud Services (where Microsoft takes a cut as well).
If the existing packages are worth more than MSFT pay AI researchers (they are, by a lot) then it’s not acquiring OAI for $0. Plausibly it could cost in the $B to buy put every single equity holder, at a $80B+ valuation.
That's a good callout. I was reading over it and confused who this person was and why they were summarizing but yeah they might've just told ChatGPT to summarize the events of what happened.
>but they literally offered jobs to everyone who left in solidarity with same
Offering people jobs is neither illegal nor immoral, no? And wasn't HN also firmly on the side of abolishing non-competes and non-soliciting from employment contracts to facilitate freedom of employment movement and increase industry wages in the process?
Well then, there's your freedom of employment in action. Why be unhappy about it? I don't get it.
I'm pretty sure there's a middle ground between recruiters for Microsoft should be banned from approaching other companies' staff to fill roles and Microsoft should be able to dictate decisions made by other companies' boards by publicly announcing that unless they change track it will attempt to hire every single one of their employees to newly created roles.
Funnily enough a bit like there's a middle ground between Microsoft should not be allowed to create browsers or have license agreements and Microsoft should be allowed to dictate bundling decisions made by hardware vendors to control access to the Internet
It's not freedom of employment when funnily enough those jobs aren't actually available to any AI researchers not working for an organisation Microsoft is trying to control.
While cult followers do not make exceptional leaders, cult leaders are almost by definition exceptional leaders, given they're able to lead the un-indoctrinated into believing an ideology that may not be upheld against critical scrutiny.
There is no guarantee or natural law that an exceptional leader's ideology will be exceptional. Exceptionality is not transitive.
Leadership Gets Shit Done. A cult following wastes everyone's time on ineffectual grandstanding and ego fluffing while everything around them dissolves into incompetence and hostility.
I also imagine the morale of the people who are currently implementing things, and getting tired of all these politics about who is going to claim success for their work.
Can't you imagine a group of people motivated to conduct AI research? I don't understand... All nerds are highly motivated in their areas of passion, and here we have AI research. Why do they need leadership instead of simply having an abundance of resources for the passionate work they do?
As far as it goes for me the only endorsements that matter are those of the core engineering and research teaches of OpenAI.
All these opinions of outsiders don’t matter. It’s obvious that most people don’t know Sam personally or professionally and are going off of the combination of:
1. PR pieces being pushed by unknown entities
2. positive endorsements from well known people who are likely know him
Both those sources are suspect. We don’t know the motivation behind their endorsements and for the PR pieces we know the author but we don’t know commissioner.
Would we feel as positive about Altman if it turns out that half the people and PR pieces endorsing him are because government officials pushing for him? Or if the celebrities in tech are endorsing him because they are financially incentivized?
The only endorsements that matter are those of OpenAI employees (ideally those who are not just in his camp because he made them rich).
It's not hard to motivate them to do the fun parts of the job, the challenge is in convincing some of those highly motivated and passionate nerds to not work on the fun thing they are passionate about and instead do the boring and unsexy work that is nevertheless critical to overall success; to get people with strong personal opinions about how a solution should look to accept a different plan just so that everyone is on the same page, to ensure that people actually have access to the resources they need to succeed without going so overboard that the endeavor lacks the reserves to make it to the finish line, and to champion the work of these nerds to the non-nerds who are nevertheless important stakeholders.
Jobs was really unusual in that he was not only a good leader, but also an ideologue with the right obsession at the right time. (Some people like the word "visionary".) That obsession being "user experience". Today it's a buzzword, but in 2001 it was hardly even a term.
The leadership moment that first comes to mind when I think of Steve Jobs isn't some clever hire or business deal, it's "make it smaller".
There have been a very few people like that. Walt Disney comes to mind. Felix Klein. Yen Hongchang [1]. (Elon Musk is maybe the ideologue without the leadership.)
I don't think Sam is necessarily irreplaceable. It's just that Helen
Toner and co were so detached from the rest of the organization they might as well been on Mars, as demonstrated by their interim CEO pick instantly turning against them.
I dunno, seems like a pretty self-evident theory? If your leader is irreplaceable, regardless of group size, that's a single point of failure. I can't figure how a single point of failure could ever make something "stronger". I can see arguments for necessity, or efficiency, given contrivances and extreme contexts. But "stronger" doesn't seem like the assessment for whatever necessitating a single point of failure would be.
"Stronger" is ambiguous. If you interpret it as "resilience" then I agree having a single point of failure is usually more brittle. But if you interpret it as "focused", then having a single charismatic leader can be superior.
Concretely, it sounds like this incident brought a lot of internal conflicts to the surface, and they got more-or-less resolved in some way. I can imagine this allows OpenAI to execute with greater focus and velocity going forward, as the internal conflict that was previously causing drag has been resolved.
Whether or not that's "better" or "stronger" is up to individual interpretation.
A company is essentially an optimization problem, meant to minimize / maximize some set of metrics. Usually a companies goal is simply to maximize NPV but in OpenAI's case the goal is to maximize AI while minimizing harm.
"Failure" in this context essentially means arriving at a materially suboptimal outcome. Leaders in this situation, can easily be considered "irreplaceable" particularly in the early stages as decisions are incredibly impactful.
Not sure if that's intended as irony, but of course, if somebody is taking multiple years off work, you would be less likely hear about it because by definition they're not going to join the company you work for.
I don't think long-term unemployment among people with a disability or other long-term condition is "fantasticaly rare", sadly. This is not the frequency by length of unemployment, but:
In my immediate family I have 3 people that have taken multi-year periods away from work for health reasons. Two are mental health related and the other severe arthritis. 2 of those 3 will probably never work again for the rest of their lives.
I've worked with a contractor that went into a coma during covid. Nearly half a year in a coma, then rehab for many more months. Guy is working now, but not shape.
I don't know the stats, but I'd be surprised if long medical leaves are as rare as you think.
Yeah, there are thousands of hospitals across the US and they don't run 24/7 shifts just to treat the flu or sprained ankles. Disabling events happen a lot.
(A seriously underrated statistic IMO is how many women leave the workforce due to pregnancy-related disability. I know quite a few who haven't returned to full-time work for years after giving birth because they're still dealing with cardiovascular and/or neurological issues. If you aren't privy to their medical history it would be very easy to assume that they just decided to be stay-at-home mums.)
Have you ever worked with someone who treats their work as their life? They are borderline psychopaths. As if a health condition or accident will stop them. They'll be taking work calls on the hospital bed.
I don't know about anyone else. But if I was being asked to choose sides in a he-said, she-said dispute, the board was publicly hinting at really bad stuff, and THAT was the explanation, I know what side I'd take.
Don't forget, when the news broke, people's assumption from the wording of the board statement was that Sam was doing shady stuff, and there was potential jail time involved. And they justify smearing Sam like that because two board members thought they heard different things from Sam, and he gave what looked like the same project to two people???
There were far better stories that they could have told. Heck, the Internet made up many far better narratives than the board did. But that was the board's ACTUAL story.
Put me on the side of, "I'd have signed that letter, and money would have had nothing to do with it."
I was thinking the same. The letter symbolized a deep distrust with leadership over the mission and direction of the company. I’m sure financial motivations were involved, but the type of person working at this company can probably get a good paycheck at a lot of places. I think many work at OpenAI for some combination of opportunity, prestige, and altruism, and the weekend probably put all 3 into question.
Was this really motivated by AI safety or was it just Helen Toner’s personal vendetta against Sam?
It doesn’t feel like anything was accomplished besides wasting 700+ people’s time, and the only thing that has changed now is Helen Toner and Tasha McCauley are off the board.
As someone who was very critical of how the board acted, I strongly disagree. I felt like this Washington Post article gave a very good, balanced overview. I think it sounds like there were substantive issues that were brewing for a long time, though no doubt personal clashes had a huge impact on how it all went down:
> was it just Helen Toner’s personal vendetta against Sam
I'm not defending the board's actions, but if anything, it sounds like it may have been the reverse? [1]
> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company... “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight." Senior OpenAI leaders, including Mr. Sutskever... later discussed whether Ms. Toner should be removed
Curious how a relatively unknown academic with links to China [1] attained a board seat on America's hottest and most valuable AI company.
Particularly as she openly expressed that "destroying" that company might be the best outcome. [2]
> During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.
Oh lord, spare me with the "links to China" idiocy. I once ate a fortune cookie, does that mean I have "links to China" too?
Toner got her board seat because she was basically Holden Karnofsky's designated replacement:
> Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.
> Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source). Given their connection via Open Philanthropy and the fact that Holden’s Board seat appeared to be permanent, it seems that Helen was picked by Holden to take his seat.
Perhaps you're not aware. Living in Beijing is not equivalent to "once eating a fortune cookie"
> it seems that Helen was picked by Holden to take his seat.
So you can only speculate as to how she got the seat. Which is exactly my point. We can only speculate. And it's a question worth asking, because governance of America's most important AI company is a very important topic right now.
I’m curious about your perceptions of the (median) motivations of OpenAI employees - although of course I understand if you don’t feel free to say anything.
Is your argument that the 1 employee operated on peer pressure, or the other 999?
Could it possibly be that the majority of OpenAI's workforce sincerely believed a midnight firing of the CEO were counterproductive to their organization's goals?
Doing the math, it is extremely unlikely for a lot of coin flips to skew from the weight of the coin.
To that end, observing unanimous behavior may imply some bias.
Here, it could be people fearing being a part of the minority. The minority are trivially identifiable, since the majority signed their names on a document.
I agree in your stance that a majority of the workforce disagreed with the way things were handled, but that proportion is likely a subset of the proportion who signed their names on the document, for the reasons stated above.
It's almost certain that all employees did not behave the same way for the exact same reasons. And I don't see anyone making an argument about what the exact numbers are, nor does it really matter. Just that some portion of employees were swayed by pressure once the letter reached some critical signing mass.
I am sorry, I greatly respect and admire Nick Cave, but that letter sounded to me like the lament of a scribe decrying the invention of the printing press.
He's not wrong, something is lost and it has to do with what we call our "humanity", but the benefits greatly outweigh that loss.
I think this summarizes it pretty well. Even if you don't mind the garbage, the future AI will feed on this garbage, creating AI and human brain gray goo.
Is this a real problem model trainers actually face or is it an imagined one? The Internet is already full of garbage - 90% of the unpleasantness of browsing these days is filtering through mounts and mounds of crap. Some is generated, some is written, but still crap full of wrong and lies.
I would've imagined training sets were heavily curated and annotated. We already know how to solve this problem for training humans (or our kids would never learn anything useful) so I imagine we could solve it similarly for AIs.
In the end, if it's quality content, learning it is beneficial - no matter who produced it. Garbage needs to be eliminated and the distinction is made either by human trainers or already trained AIs. I have no idea how to train the latter but I am no expert in this field - just like (I suspect) the author of that blog.
> Peer pressure and groupthink likely also swayed employees more than principles
Chilling to hear the corporate oligarchs completely disregard the feelings of employees and deny most of the legitimacy behind these feelings in such a short and sweeping statement
Honestly he has a point — but the bigger point to be made is financial incentives. In this case it matters because of the expressed mission statement of OpenAI.
Let’s say there was some non-profit claiming to advance the interests of the world. Let’s say it paid very well to hire the most productive people but they were a bunch of psychopaths who by definition couldn’t care less about anybody but themselves. Should you care about their opinions? If it was a for profit company you could argue that their voice matter. For a non-profit, however, a persons opinion should only matter as far as it is aligned with the non-profit mission.
> Employees followed the money trail and Sam to preserve their equity and careers
Would you not when the AI safety wokes decide the torch the rewards of your hard work of grinding for years? I feel there is less groupthink and everyone saw the board as it is and their inability lead, or even act rationally. OpenAI did not just become a sinking ship, but it was unnecessary sunk by someone not skin in the game and your personal wealth and success was tied to the ship.
Yeah, this is like using “groupthink” to describe people fleeing a burning building. There’s maybe some measure of literal truth, but it’s an odd way to frame it.
How do you know the "wokes" aren't the ones who were grinding for years?
I suspect OpenAI has an old guard that is disproportionately ideological about AI, and a much larger group of people who joined a rocket ship led by the guy who used to run YC.
We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a path to more stable, well-informed, and effective governance. Sam, Greg, and I have talked and agreed they have a key role to play along with the OAI leadership team in ensuring OAI continues to thrive and build on its mission. We look forward to building on our strong partnership and delivering the value of this next generation of AI to our customers and partners.
> And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.
Why does this accusation keep coming up? Sam even confirmed he took the offer in one of the tweets above "when i decided to join msft on sun evening". Contracts are not handcuffs and he was free to change his mind.
Satya's statement at the time may well have been true at the time in that he, Sam and Greg had agreed on them joining MSFT. Later circumstances changed, and now that decision has been reversed or nullfied. Calling the original statement a lie is not warranted IMHO.
In either case the end effect is the essentially the same. Either Sam is at MSFT and can continue to work with openAI IP, or he's back at openAI and do the same. In both cases the net effect for MSFT is similar and not materially different, although the revealed preference of Sam's return to openAI indicates the second option was the preferred one.
Wait where are you getting that the hiring was a lie? At this point his tenure there was approximately as long as miras and emmets so that's par for the course in this saga, what makes that stint different?
Absolutely no lies here. It was a dynamic situation and it wasn't at all clear that discussions with OAI board would lead to an outcome where sama returns as CEO.
Satya offered sama a way forward as a backup option.
And I think it says a lot about sama that he took that option, at least while things were playing out. He and Greg could have gotten together capital for a startup where they each had huge equity and made $$$$$$. These actions from sama demonstrate his level of commitment to execution on this technology.
maybe he really had an affirmative statement on this from Sam Altman but nobody signs an employment contract this quickly so it was all still up in the air
Also even if he signed it he's allowed to quit? Like, the 14th amendment exists y'all. And especially if after that agreement 90+ percent of openai threatens to quit, that's a different situation than the situation 10 minutes before that announcement so why wouldn't they change their decision?
Did Satya get played with the whole "Sam and Greg are joining Microsoft"? Was Satya in on a gambit to get the whole company to threaten to quit to force the board's hand?
It sure feels like a bad look for Satya to announce a huge hire Sunday night and then this? But what do I know.
Edit: don't know why the downvotes. You're welcome to think it's an obviously smart political move. That it's win/win either way. But it's a very fair question that every tech blogger on the planet will be trying to answer for the next month!
Consider that Satya already landed a huge win by the stock price hitting ATH rather than taking a hit based on the news. Further consider that MS owns 49% of a company which could be valued at 80 billion on the condition that the company makes structural changes to the board to prevent this from happening again (as opposed to taking a dive if the company essentially died.) Then there's the uncertainty of the tech behind Bing's chat (and other AI tie-ins) continuing to be competitive vs Google and other players. If MS had to recreate their own tech, then they would likely be far behind even a stalled OpenAI. Seems to me that it makes little difference where this tech is being developed (in-house vs in a company which you own 49% of) in terms of access. Probably better that the development happens within the company which started all of this and has already been the leader, rather than starting over.
Im not so sure. This whole ordeal revealed how strong of a position Microsoft had all along. And that’s all still true even without effectively taking over OpenAI. Because now everyone can see how easily it could happen.
Something does still seem not flattering towards Microsoft about reneging on the Microsoft offer though.
Of course they can, but they can't do these and sell/buy stocks involved at the same time. It's not illegal to influence stocks value (one could argue just being a CEO does that), but buying/selling while in possession of insider knowledge.
Let's say Sam called his broker and said to him on Friday we'll before the market closes. Buy MSFT stock. Then he made his announcement on Sunday and on Monday he told his broker to sell that stock before he announced he's actually coming back to (not at all)OpenAI. That would be illegal insider trading.
If he never calls his broker/his friends/his mom to buy/sell stock there's nothing illegal.
Securities fraud is more than insider trading. Misleading investors about a company’s financial health is fraud 101 and it sure looks like he lied about hiring someone to stem a precipitate MSFT drop.
He announced the hire and that precipitated 90+ percent of the employees threatening to quit. It would be an understatement to say that the situation changed. Why does everyone want satya to be bad at his job and and not react quickly to a rapidly evolving situation? His decision to hire Sama paved the way for samas return.
Huh? Satyas move was politically brilliant. Either outcome of Sama returning to OpenAI or Sama going to Microsoft is good for Microsoft as continuity and progress are the most important things right now for Microsoft. An OpenAI in turmoil would have been worthless.
The one (Adam D’Angelo) who’s a cofounder and CEO of a company (Quora) that has a product (Poe) that arguably competes with OpenAI’s “GPTs” feature, no less.
I don’t understand why that’s not a conflict of interest?
But honestly both products pale in comparison to OpenAI’s underlying models’ importance.
> I don’t understand why that’s not a conflict of interest?
It's not the conflict of interest it would be if it was the board of a for profit corporation that was basically identical to the existing for-profit LLC but without the lyaers above it ending with the nonprofit that the board actually runs, because OpenAI is not a normal company, and making profit is not its purpose, so the CEO of a company that happens to have a product in the same space as the LLC is not in a fundamental conflict of interest (there may be some specific decisions it would make sense for him to recuse from for conflict reasons, but there is a difference between "may have a conflict regarding certain decisions" and "has a fundamental conflict incompatible with sitting on the board".)
Its not a conflict for a nonprofit that raises money with craft faires to have someone who runs a for-profit periodic craft faire in the same market on its board. It is a conflict for a for profit corporation whose business is running such a craft faire to do so, though.
Still a conflict of interest. If D’Angelo has financial incentive to want OpenAI to fail, then this at odds with his duty to follow the OpenAI charter. It’s exactly why two of the previous board members left earlier this year.
Doesn’t matter. It’s an absolutely clear conflict of interest. It may have taken an unrelated shakeup for people to notice (or maybe D’Angelo was critically involved; we don’t know), but there’s no way he should be staying on this board.
maybe it's just going to be easier to fire him in a second step once this current situation which seems to be primarily about ideology is cleared up. In D’Angelo's case it's going to be easier to just point to a clear traditional conflict of interest down the line
If all members of the old board resign simultaneously, what happens then? No more old board to agree to any new members. In a for-profit the shareholders can elect new board members, but in this case I don't know how it's supposed to work.
I've been privy to this happening at a nonprofit board. Depends on charter, but I've seen the old board tender their resignation and remain responsible only to vote for the appointment of their (usually interim to start) replacements. Normally in a nonprofit (not here), the membership of that nonprofit still has to ratify the new board in some kind of annual meeting; but in the meantime, the interim board can start making executive decisions about the org.
Firstly, maybe don't put quotes around an unrelated party's representation of the board. Secondly, the board was made up of individuals and naturally, what might be true for the board as a whole does not apply to every individual on it.
Larry Summers mostly counts as a Microsoft seat. Summers will support commercial and private interest and not have a single thought about safety, just like during the financial crisis 15 years ago https://www.chronicle.com/article/larry-summers-and-the-subv...
Larry Summers hurt the US economy by making the recovery from 2008 much too slow. If they'd done stimulus better, we could've had 2019's economic growth years earlier. That would've been great for Microsoft.
Why do you (dang) always write a comment specifying that people can read more and even providing some links when it's clear that when you reach the bottom of the page you have to click "read more" to indeed read more. Isn't it a bit useless?
Google, Meta and now OpenAI. So long, responsible and safety AI guardrails. Hello, big money.
Disappointed by the outcome, but perhaps mission-driven AI development -- the reason OpenAI was founded -- was never possible.
Edit: I applaud the board members for (apparently, it seems) trying to stand up for the mission (aka doing the job that they were put on the board to do), even if their efforts were doomed.
To me, commentary online and on podcasts universally leans on the idea that he appears to be very focused on money (from the outside) in seeming contradiction to the company charter:
> Our primary fiduciary duty is to humanity.
Also, the language of the charter has watered down a stronger commitment that was in the first version. Others have quoted it and I'm sure you can find it on the internet archive.
you just don't understand how markets work. if openai slows down then they will just be driven out by competition. that's fine if that's what you think they should do, but that won't make ai any safer, it will just kill openai and have them replaced by someone else.
1) openAI was explicitly founded to NOT develop AI based on "market forces"; it's just that they "pivoted" (aka abandoned their mission) once they struck gold in order to become driven by the market
2) this is exactly the reasoning behind nuclear arms races
What does "actually open" mean? And how is that more responsible? If the ethical concern of AI is that it's too powerful or whatever, isn't building it in the open worse?
Depends on how you interpret the mission statement of building ai for all of humanity. It’s questionable that humanity is better off if ai only accrues to one or a few centralised entities?
The problem is none of the alternatives offered a smooth UX transition. Mastodon is fragmented by design and Bluesky is gated to this day. There was never a true Digg-like event that caused user migration to reach critical mass. So people simply trickled back once the most volatile periods of post-Elon Twitter passed.
That doesn't change the fact post-Elon Twitter has severely degraded in terms of user experience (rate limits, blue check spam, API pay-wall, etc.) and Elon isn't doing the platform any favours by continuing to participate in detrimental ways (seen in the recent advertiser exodus).
It wouldn't be interesting if one CEO got fired and replaced, but the fact that there's a different CEO every couple of days and no one knows what will happen next. The uncertainty is addictive, not to mention the scale of self-destruction. See also: trainwrecks.
Unfortunately the “great man theory” is still going strong in the 21st century. Just like Steve Jobs has invented the iPhone people believe he invented GPT!
OpenAI success is unfortunately largely based on the one ruthless decision to ignore ethics and train the model on the work of millions of artists and authors. I don’t know if Sam himself was behind this decision. I doubt Aaron Schwartz would have done the same.
So basically somebody initiated a coup, then the key figure of the coup regretted it openly, and the fallout was that OpenAI will become a 100% commercial entity, fully open for Microsoft to taking over?
If that’s not a fertile soil for conspiracy theory, I don’t know what could ;)
So dangerous on so many levels. Just let him start his own AI group, competition is good!
Instead he will come away with this untouchable. He’ll get to stack the board like he wanted to. Part of being on a board of directors is sticking to your decisions. They are weak and weren’t prepared for the backlash of one person.
Well there you go. I suppose the takeaway for anyone using OpenAI products is that they should have a backup, even if it doesn’t perform as well. The board was apparently fine with shutting the whole thing in the name of safety. With that plus the GPT outage earlier today, you’d do well to have a Claude or LLaMa fallback you can switch to if it happens again.
Satya and Sam committed securities fraud with their late Sunday “funding secured” ploy to protect the MSFT stock price. This was the obvious outcome. Sam had no intentions of actually going through with that and Satya was in no position to unilaterally commit to the type of funding that he was implying.
They lied to protect the stock. That should be illegal. In fact, it is illegal.
Yeah, I think there may well be an investigation into that. At best, he said something that was unequivocally untrue, and at worst it was an outright lie. That's blatant market manipulation.
https://news.ycombinator.com/item?id=38375239&p=2
https://news.ycombinator.com/item?id=38375239&p=3
https://news.ycombinator.com/item?id=38375239&p=4 (...etc.)