OpenAI Is a Strange Nonprofit (bloomberg.com)
35 points by colonCapitalDee on Nov 21, 2023 | 9 comments



And it is easy to see how the board’s view of the mission could conflict with the staff’s views of their jobs...

From the board’s perspective ... will have a staffing problem ... those staff will probably be more enthusiastic about AI, generally, than the mission calls for.

From the staff’s perspective, the board is a bunch of outsiders ... driven by an abstract sense of mission. Which kind of is the job of a nonprofit board, but which will reasonably annoy the staff.

Yesterday virtually all of OpenAI’s staff signed an open letter to the board ... the letter claims that the board “informed the leadership team that allowing the company to be destroyed ‘would be consistent with the mission.’” Yes! I mean, the board might be wrong about the facts, but in principle it is absolutely possible that destroying OpenAI’s business would be consistent with its mission. If you have built an unsafe AI, you delete the code and burn down the building. The mission is conditional — build AGI if it is safe — and if the condition is not satisfied then you go ahead and destroy all of the work. That is the board’s job. It’s the board’s job because it can’t be the staff’s job, because the staff is there to do the work, and will be too conflicted to destroy it. The board is there to supervise the mission.


The real problem, in my view, is that the board's stated mission is simply impossible. The amount of potential profit precludes ever "burning it all down" regardless of how unsafe it may be or how earnest the board is. The Big Boys would never let that sort of money escape them.

Assuming AGI is possible at all, this is why something like OpenAI should never have even been started if the concern was anything like ensuring safety or even being a net benefit to the world.

Fortunately, I think AGI isn't possible (at least not within a short enough timeline to matter). But I do think there's been nothing but recklessness in terms of what has already been accomplished. In the short term, uninvolved others are inevitably going to suffer. I have a problem with any group of people deciding that others are expendable.


>"But I do think there's been nothing but recklessness in terms of what has already been accomplished."

Could you elaborate on this? Specifically, which accomplishment(s) do you consider to be reckless?


This technology threatens to put too many people out of work too quickly, and there's a large risk of widespread harm resulting. It's reckless to move forward with something like this while failing to take any steps to mitigate the risk. Worse, they seem to totally disregard this risk, justifying that disregard with some hand-wavy "it'll all be better in the long term" stuff.


> Fortunately, I think AGI isn't possible

"Fortunately"? That's a strange way to phrase that. Most people in 1928 thought that the repeat of the Great War isn't possible but it happened anyhow in just 10 years. Heck, nobody thought WWI, in the way it actually happened, could be possible! A world-wide war, in a globalized (yes, it was globalized about as much as in the 1990s) economy? A war where people would just keep dying on the field from both sides without any territorial advances, and keep doing this for years, and the involved countries mostly just waiting to see who will first run out of the resources (spoiler: Germany and Austria ran out first, because the US got involved), material and human? Nah, that's just un-scientific fantasy only some silly socialists like Wells could come up with.


> If you have built an unsafe AI, you delete the code and burn down the building

But it doesn't need to get to that end point. If the board thought that the decisions the CEO had made, or was likely to make in the future, would not lead to the core goal of "building AGI that is safe and benefits all of humanity" - because, for example, future work would be dominated by ties to Microsoft, or he would be distracted by building other for-profit companies on top of OpenAI's work - then they would also be right to sack him, even if it meant destroying the profitability and growth of the company.

I think the structure of OpenAI was a hideous idea - but I can't help but feel the board may have acted according to the rules they'd been given.



It’s great to see Matt Levine chime in; he often has an incredibly well-thought-out and informed view that is also just incredibly funny to read. That said, I am disappointed that he latched on to the entirely imagined idea that this was “profit vs. safety”, which isn’t supported by anything and is just baseless speculation cooked up by spectators.

It’s even directly contrary to the statements from the board and the new CEO, and there is no reason to believe they would lie and hide it if this were in fact their reasoning.


FYI, you seem to be shadowbanned. I vouched for this comment, but you should probably reach out to hn@ycombinator.com.

