
From NYT article [1] and Greg's tweet [2]

"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”

Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.

He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."

[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...

[2] https://twitter.com/gdb/status/1725736242137182594



So they didn't even give Altman a chance to defend himself against the accusation of lying ("not consistently candid," as they put it). Wow.


Another source [1] claims: "A knowledgeable source said the board struggle reflected a cultural clash at the organization, with Altman and Brockman focused on commercialization and Sutskever and his allies focused on the original non-profit mission of OpenAI."

[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...


TY for sharing. I found this to be very enlightening, especially when reading more about the board members who were part of the ouster.

One of the board members who fired him co-signed these AI principles (https://futureoflife.org/open-letter/ai-principles/), which are very much in line with safeguarding general intelligence.

Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress” and basically debunks that stance...and quite well, I might add.


So the argument against AI regulations crippling R&D is that China is currently far behind and also faces its own weird government pressures? That's a big gamble: applying very long-term regulations (as they always end up being) to a short-term window, betting on the predictions of a non-technical board member.

There's far more to the world than China, on top of that. Importantly, developments happen both inside and outside the scope of regulatory oversight (usually only heavily commercialized products face scrutiny), and China itself will eventually catch up to the average - progress is rarely a non-stop hockey stick; it plateaus. LLMs might already be hitting a wall (https://twitter.com/HamelHusain/status/1725655686913392933).

The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace. And as we've seen repeatedly in history, regulations stick around for decades after their utility has long passed. They are not levers that go up and down; they go in one direction, and maybe after many, many years of damage they might be adjusted - but usually only after ten starts/stops and half-baked non-solutions papered over as real solutions, if at all.


> The Chinese are experts at copying and stealing Western tech.

Sure that's been their modus operandi in the past, but to hold an opinion that a billion humans on the other side of the Pacific are only capable of copying and no innovation of their own is a rather strange generalization for a thread on general intelligence.


Well, I guess (hope) no one thinks it is due to genetic deficits preventing disruptive innovations from coming out of (mainland) China.

It is rather a cultural/political thing. Free thinking and stepping out of line are very dangerous in an authoritarian society. Copying approved tech, on the other hand, is safe.

And this culture has not changed in China lately - rather the opposite. Look what happened to the Alibaba founder, or why there is no more Winnie the Pooh in China.


This seems to make more sense. Perhaps it has to do with OpenAI not being "open" anymore. Not supporting and getting rid of the OpenAI Gym was certainly a big change in the company's direction.


I'm confused. It's usually the other way around; the good guy is ousted because he is hindering the company's pursuit of profit.


This time he was ousted because he was hindering the pursuit of the company's non-profit mission. We've been harping on the non-openness of OpenAI for a while now, and it sounds like the board finally had enough.


"This time he was ousted because he was hindering the pursuit of the company's non-profit mission. "

This is what is being said. But I am not so sure the real reasons discussed behind closed doors are the same. We will find out if OpenAI does indeed open itself up more; till then I remain sceptical, because a lot of power and money is at stake here.


Those people aren't about openness. They seem to be members of the "AI will kill us all" cult.

The real path to AI safety is regulating applications, not fundamental research, and making fundamental research very open (which they are against).


That's what it's looking like to me. It's going to be as beneficial to society as putting Greenpeace in charge of the development of nuclear power.

The singularity folks have been continuously wrong about their predictions. A decade ago, they were arguing the labor market wouldn't recover because the reason for unemployment was robots taking our jobs. It's unnerving to see these people gaining traction while actively working against technological progress.


I want you to be right. But why do you think you're more qualified to say how to make AI safe than the board of a world-leading AI nonprofit?


Literal wishful thinking ("powerful technology is always good") and vested interests ("I like building on top of this powerful technology"), same as always.


Because I work on AI alignment myself and had been training LLMs long before Attention is All You Need came out (which cites some of my work).


Someone is going to be right, but we also know that experts have been known to be wrong in the past, oftentimes to catastrophic effect.


In this case, the company is a non-profit, so it is indeed the other way around



It is not that simple. https://openai.com/our-structure

The board is for the non-profit that ultimately owns and totally controls the for-profit company.

Everyone who works for or invests in the for-profit company has to sign an operating agreement stating that the for-profit actually does not have any responsibility to generate profit and that its primary duty is to fulfill the charter and mission of the non-profit.


Then what's the point of the for-profit?


> Then what's the point of the for-profit?

To allow OpenAI to raise venture capital, which lets them exchange equity for money (i.e., distribute [future] rights to profit to shareholders).


If you don’t know anything, why are you posting


Yeah, I thought that was the most probable reason, especially since these people don't have any equity, so they have no interest in the commercial growth of the org.

Apparently Microsoft was also blindsided by this.

https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...


So it looks like they did something good.


Yes. They freed Sam and Greg from their shackles and gave a clear indicator that OAI engineers should jump ship into their new venture. We all win.


Perhaps joining Bret Taylor and his friend from Google X? Can’t imagine what those brains might come up with.


If you want AI to fail, then yes.


Melodrama has no place in the AI utopia.


The only thing utopian ideologies are good for is finding 'justifications' for murder. The "AI utopia" will be no different. De-radicalize yourself while you still can.


> The only thing utopian ideologies are good for is finding 'justifications' for murder.

This seems more like your personal definition of "utopian ideology" than an actual observation of the world we live in.


It seems like an observation to me. Let’s take the Marxist utopian ideology. It led to 40 - 60 million dead in the Soviet Union (Gulag Archipelago is an eye opening read). And 40 - 80 million dead in Mao Zedong’s China. It’s hard to even wrap my mind around that amount of people dead.

Then a smaller example: the Matthias cult from the "Kingdom of Matthias" book, started around the same time as Mormonism, which led to a murder. Or the Peoples Temple cult, with 909 dead in a mass suicide. The communal aspects of these give away their "utopian ideology".

I’d like to hear where you’re coming from. I have a Christian worldview, so when I look at these movements it seems they have an obvious presupposition on human nature (that with the right systems in place people will act perfectly — so it is the systems that are flawed not the people themselves). Utopia is inherently religious, and I’d say it is the human desire to have heaven on earth — but gone about in the wrong ways. Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal.


"I have a Christian worldview"

We are quite OT here, but I would say Christianity in general is a utopian ideology as well: all humans could be living in peace and harmony, if they would just believe in Jesus Christ. (I know there are differences, but this is the essence of what I was taught.)

And well, how many were killed in the name of the Lord? Quite a lot, I think. Now you can argue those were not really Christians. Maybe. But Marxists argue the same of the people responsible for the gulags. (I am not a Marxist, btw.)

"Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal."

And it simply depends on the specific utopian ideal. A good utopian concept/dream takes humans as they are - and still finds ways to improve living conditions for everyone. Not every utopia claims to be an eternal heaven for everyone; there are more realistic concepts out there.


You could also credit Marxism for workers rights.

Having utopian ideologies NEVER doing good in the world would require some very careful boundary drawing.


Kibbutz?


Huh, I've read Marx and I don't see the utopianism you're referencing.

What I do see is "classism is the biggest humanitarian crisis of our age" and "solving the class problem will improve people's lives," but nowhere do I see a claim that non-class problems will cease to exist. People will still fight, get upset, struggle - just not on class terms.

Maybe you read a different set of Marx's writing. Share your reading list if possible.


This article gives a clear view of Marx's and Engels's view of utopianism vs. the other utopian socialists [1]: Marx was not opposed to utopianism per se, but rather to utopias whose ideas did not come from the proletariat. Yet you're right that he was opposed to the views of the other utopian socialists, and there is tension among the different socialist thinkers of that time. (I do disagree with the idea that refusing to propose an ideal negates one from in practice having a utopic vision.)

That said my comment was looking mainly at the result of Marxist ideology in practice. In practice millions of lives were lost in an attempt to create an idealized world. Here is a good paper on Stalin’s utopian ideal [2].

[1] https://www.jstor.org/stable/10.7312/chro17958.7?searchText=...

[2] https://www.jstor.org/stable/3143688?seq=1


That makes sense. It would be like being able to attribute deaths due to Christianity to the Bible because there is a genealogy of ideas?


I know we are a bit off topic. It seems it would be more like if several prominent followers of Jesus committed mass genocide in their respective countries within a century of his teachings. Stalin is considered Marxist-Leninist.


Oh ok. That makes sense. That's because if someone has an idea that causes a lot of immediate harm then the idea is wrong, but if there is a gap then it is not?


Utopian ideologies are also useful when raising funds from SoftBank and ARK


Yeah, AI will totally fail if people don't ship untested crap at breakneck speed.

Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!


All AI and all humanity hallucinates, and AI that doesn't hallucinate will functionally obsolete human intelligence. Be careful what you wish for, as humans are biologically incapable of not "hallucinating".


GPT is better than an average human at coding. GPT is worse than an average human at recognizing bounds of its knowledge (i.e. it doesn't know that it doesn't know).

Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.

If you just use The Pile as a training dataset, AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.
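
To make that concrete, here is a minimal sketch of the next-token objective being described (assuming PyTorch, with a tiny LSTM standing in for the transformer; toy sizes, none of this is OpenAI's actual code). The loss only rewards guessing the next token of whatever corpus you feed in, so the corpus is the only place where quality enters.

    import torch
    import torch.nn as nn

    vocab_size, d_model = 50_000, 256            # toy sizes, purely illustrative

    class TinyLM(nn.Module):                     # stand-in for a GPT-style model
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.rnn = nn.LSTM(d_model, d_model, batch_first=True)
            self.head = nn.Linear(d_model, vocab_size)

        def forward(self, tokens):               # tokens: (batch, seq_len)
            h, _ = self.rnn(self.embed(tokens))
            return self.head(h)                  # logits over the next token

    model = TinyLM()
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
    loss_fn = nn.CrossEntropyLoss()

    # Stand-in for a tokenized corpus batch (The Pile, a textbook set, whatever).
    batch = torch.randint(0, vocab_size, (8, 128))
    inputs, targets = batch[:, :-1], batch[:, 1:]    # predict token t+1 from tokens <= t

    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()
    optimizer.step()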

Is that the only way to train an AI? No. E.g. check the "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 - a small model trained on a high-quality dataset can beat much bigger models at code generation.

So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?


Being better than the average human at coding is as easy as being better than the average human at surgery. Until it's better than actual skilled programmers, the people who are programming for a living are still responsible for learning to do the job well.


Because people here are into tech? That's pretty much the whole point of this site.

Just imagine if we all only used proven products - no trying out cool experimental or incomplete stuff.


Without supposing we're on this trajectory, humans no longer needing to focus on being productive is how we might be able to focus on being better humans.


Well, that's the goal isn't it? Having AI take over everything that needs doing so that we can focus on doing things we want to do instead.


Some humans hallucinate more than others


humanity is capable of taking feedback, citing its sources, and not outright lying

these models are built to sound like they know what they are talking about, whether they do or not. this violates our basic social coordination mechanisms in ways that usually only delusional or psychopathic people do, making the models worse than useless


Nobody's forcing anybody to use these tools.

They'll improve hallucinations and such later.

Imagine people not driving the Model T because it didn't have an airbag, lmao. Things take time to be developed and perfected.


The Model T killed a _lot_ of people, and almost certainly should have been banned: https://www.detroitnews.com/story/news/local/michigan-histor...

If it had been, we wouldn't now be facing an extinction event.


Yea, change is bad.


Numerically, most change is bad.


And yet we make progress. It seems we've historically mostly been effective at hanging on to positive change, and discarding negative change


Yes, but that's an active process. You can't just be "pro change".

Occasionally, in high risk situations, "good change good, bad change bad" looks like "change bad" at a glance, because change will be bad by default without great effort invested in picking the good change.


You weren't around when Web 2.0 and the whole modern internet arrived, were you? You know, all the sites that you consider stable and robust now (Google, YT and everything else) shipped with a Beta sign plastered onto them.


I first got internet access in 1999, IIRC.

Web sites were quite stable back then - not really much less stable than they are now. E.g. Twitter now has more issues than web sites I used often back in the 2000s.

They had "beta" sign because they had much higher quality standards. They warned users that things are not perfect. Now people just accept that software is half-broken, and there's no need for beta signs - there's no expectation of quality.

Also, being down is one thing; sending random crap to a user is completely another. E.g. consider web mail: if it is down for an hour, that's kinda OK. If it shows you random crap instead of your email, or sends your email to the wrong person, that would be very much not OK - and that's the sort of issue OpenAI is having now. Nobody complains that it's down sometimes, but it returns erroneous answers.


But it’s not supposed to ship totally “correct” answers. It is supposed to predict which text is most likely to follow the prompt. It does that correctly, whether the answer is factually correct or not.


If that is how it marketed itself, with the big disclaimers tarot readers use (this is just for entertainment, not meant to be taken as factual advice), it might be doing a lot less harm. But Sam Altman would make fewer billions, so that is apparently not an option.


Chat-based AI like ChatGPT are marketed as an assistant. People expect that it can answer their questions, and often it can answer even complex questions correctly. Then it can fail miserably on a basic question.

GitHub Copilot is an auto-completer, and that's, perhaps, a proper use of this technology. At this stage, make auto-completion better. That's nice.

Why is it necessary to release "GPTs"? This is a rush to deliver half-baked tech, just for the sake of hype. Sam was fired for a good reason.

Example: somebody markets a GPT called "Grimoire" as a "100x Engineer". I gave it the task of making a simple game, and it just gave a skeleton of code instead of an actual implementation: https://twitter.com/killerstorm/status/1723848549647925441

Nobody needs this shit. In fact, AI progress can happen faster if people do real research instead of prompting GPTs.


Needlessly pedantic. Hold consumers accountable too. "Durr I thought autopilot meant it drove itself. Manual, nah brah I didn't read that shit, reading's for nerds. The huge warning and license terms, didn't read that either dweeb. Car trying to stop me for safety if I take my hands off the wheel? Brah I just watched a Tiktok that showed what to do and I turned that shit offff".


Perhaps we need a better term for them then. Because they are immensely useful as is - just not as a, say, Wikipedia replacement.


You could also say that shipping social media algorithms with unknown effects on society as a whole are why we're in such a state right now. Maybe we should be more careful next time around.


This is not a story about AI.

It's a story about greed, vanity, and envy.

Impossible to be more human than that.


> Sutskever and his allies focused on the original non-profit mission of OpenAI.

Seems reasonable; I mean, that's why Sutskever joined in the first place?


Not just Sutskever: other top researchers also joined the then-nascent OpenAI team for the same reason. Most of them are on record saying they turned down much bigger paychecks.

The problem I see is that the astronomical costs of training and inference warrant a for-profit structure like the one Sam put up. It was a nice compromise, I thought; but of course, Sutskever thinks otherwise.


Maybe Sutskever is finished with his LLM experiments and now has other interests and ideas to pursue, while Sam was keen to make money and stay on the same trajectory. Microsoft also felt the same way.

Could see this


The commercial shift started quite some time ago; what's the point of firing them now?

And why such a controversial wording around Altman?

Why fire Brockman too?


Brockman quit, he wasn’t fired.


He was removed from one of his roles (chairman) and quit the other (president) if I understand correctly.


If true, this gives me hope the "Open" can return to OpenAI.


Given the board members’ focus on safety, maybe not that likely.


Open source is the only path to broad public accountability, which is a prerequisite for safety.


Microsoft won't be happy about this


What is bad for Microsoft is good for the world.


It's hard to believe that Altman was fired over his stance on commercialisation.


The fact that the press release is 50% dedicated to repeating that OpenAI is supposed to be a non-profit and help all of humanity isn't enough for you to believe this is the reason?


The abruptness of the firing and the fact that they give his lying to the board as the reason is why I don't believe that this is over a general disagreement on direction.


They have to say the reason is a fireable offense or he can sue them. Or will be more likely to win if he does.


It's exactly the other way around - if they dismiss him for a vague general reason, they're much less exposed to litigation than they would be if they falsely accused him of lying.


You are 100% correct here, which is how we can reasonably conclude that the accusations were not false.


If the accusations by the board are true, that doesn't explain why Brockman and a few of the senior researchers quit as a response to all of this.


Them leaving does not imply the accusations are false. They may like him, they may dislike the new boss regardless of the accusations, or they may dislike the overall future direction. They may think they would be fired sometime later regardless.


As another comment below mentioned, Elon Musk hinted at this in his interview with Lex Fridman.

Specifically, he mentioned that OpenAI is supposed to be open source and non-profit. Pursuing profit and making it closed-source brings "bad karma".


Why can't someone use money from the for-profit side to fund the non-profit work again once others have caught up? The only moat seems to be the research time invested.


Many believe that race dynamics are bad, so have the goal of going as slowly and carefully as possible.

The split between e/acc (gotta go fast) and friendly AI/Coherent Extrapolated Volition (slow and cautious) is the first time in my life I've come down on the (small-c) conservative side of a split. I don't know if that's because I'm just getting older and more risk averse.


What a hypocritical board, firing them after massive commercial success!

Classic virtue signalling for the sake of personal power gains as so often.


What's hypocritical about a non-profit firing a leader who wanted lots of profits?


Didn't think I'd need to explain this:

The hypocritical part is doing so right AFTER beginning to take off commercially.

An honorable board with backbone would have done so at the first inkling of commercialization instead (which would have been 1-2 years ago).

Maybe you can find a better word for me but the point should be easily gotten ...


OpenAI hasn't made billions in profits. Their operating costs are huge and I'm pretty sure they're heavily reliant on outside funding.


Which puts into question the whole non-profitness anyway, but that aside:

They have still been operating pretty much like a for-profit for years now so my point still stands.


Your point hinged on billions in profit, which you just made up or assumed to be true for some reason. I don't think any of your points stand. Don't use facts you haven't checked as preconditions for points you want to make.


[flagged]


A non-profit doesn’t have to offer their services for free, they can cover their expenses.

A profit driven company will often offer their services below cost in order to chase away the competition and capture users.


Right.

Which is why the board's accusations against Sam are a farce as far as we can tell.


Have they gotten specific yet? Last I heard was the whole “not sufficiently candid” thing, which is really nebulous; hard to call it a farce really. It is a “to be continued.”

I’m going to wait and see before I get too personally attached to any particular position.


To think that "Non-Profit" means "Free" is pretty naive. There are operating costs to maintain millions of users. That doesn't mean they are trying to profit.


Exactly.

So what's Sam's crime exactly, trying to cover the costs?


Again, conjecture with no supporting evidence.


Not sure what you're trying to say.

Clearly, under Altman, OpenAI has been massively successful one way or another, correct?

Now they boot him and claim moral superiority? Really?


I mean, as far as I know the guy hasn't written a single line of code.


Three other board members stepped down this year. It might not have been possible before.


Ofc it's "not possible" in that it may incur personal costs.

But it's the honorable thing to do if you truly believe in something.

Otherwise it's just virtue signalling.


No, they may literally have not had the votes.


Almost more of a "takeover" by the board after it's successful lol


I am going to go out on a limb here and speculate... this was because of the Microsoft CEO's surprise party-crashing at OpenAI's first Developer Conference...


Kara Swisher was told the dev conference was "an inflection point", so it's not that speculative.


I doubt this was a surprise; I'm sure Sam was well aware of the concerns, repeatedly ignored them, and even doubled down, putting OpenAI's mission in jeopardy.

Many politically aligned folks will leave, and OAI will go back and focus on mission.

New company will emerge and focus on profits.

Overall probably good for everyone.


Why would employees be consulted before being fired?


Because board members are not employees, or not just employees. They're part of the democratic governance of an organization.

The same way there's a big difference between firing a government employee and expulsion of a member of Congress.


Wow, that is actually the first time I hear someone use democracy and corporation unironically together...

In a sense, board members have even less protection than rank and file. So no, nothing special happening at OpenAI other than a founder CEO being squeezed out - not the first nor the last one. And personal feelings never factor into that kind of decision.


Ha, true. Well, I did say "democratic governance", not "democracy" itself.

Substitute "rules of order" or "parliamentary procedure" if you like. At the end of the day, it's majority vote by a tiny number of representatives. Whether political or corporate.


Is that news to you? Corporate governance is structured pretty much the same as parliamentary democracies: the C-suite is the cabinet, the board of directors is the parliament/house of representatives, and the shareholders are the public/voters.


would be hilarious if Altman was directly hired by Microsoft to head their AI teams now.


He may have had ample chance before.


Sam's sad face in the NYT article is pretty priceless.


[flagged]


Google Meet is quite good, much better than Teams, IME.


Yup, it's my default for most meetings; share a link and it just works fine.


OpenAI also uses Google Forms -- here's what you get if you click the feedback form if your question gets flagged as violating openAI's content policies https://docs.google.com/forms/d/e/1FAIpQLSfml75SLjiCIAskEpzm...


I think the shock is about the privacy risks.


minus the irony that it doesn't run on 32-bit Chrome and I had to load Edge at work to use it


What should they use? Self hosted Jitsi?


I mean, presumably Teams?


Haven't these people suffered enough!?


In my experience Teams is great for calls (both audio and video), horrible for chat. I guess because it's built on top of Skype codebase? (just a guess)

But it's out of the scope for this discussion.


The chat portion of Teams is so very poorly designed compared to other corporate chat systems I've used.

I mean even copy and paste doesn't work correctly. You highlight text, copy it and Teams inserts its own extra content in there. That's basic functionality and it's broken.

Or you get tagged into conversations with no way to mute them. For a busy chat, that alert notification can be going off continuously. Of course the alert pop-up has been handily placed to cover the unmute icon in calls, so when someone asks you a question you can't answer them.

Teams feels like a desperate corporate reaction to Slack with features added as a tickbox exercise but no thought given to actual usability.

I never thought that Slack or whatever Google's chat system is currently called was in any way outstanding until I was made to use the dumpster fire that is Teams.

It's a classic example of where the customers, corporate CTOs, are not the end users of a product.


I hope you'll never have to use Webex.


Sweet fuck, after covid I forgot about Webex. I think I might have PTSD from that.

The Teams/Zoom/other platform arguments have nothing on how unfriendly, slow, and just overall trash Webex is.


Working at a company that still uses it, but with a change on the horizon.

It still, in the year 2023, plays an unmutable beep noise for every single participant that joins, with no debouncing whatsoever.


It astounded me that that company was either unwilling or unable to cash in on work from home during covid.

That has to be among history's biggest missed opportunities for a tech company.

Anyone here associated with them? Why didn't they step up?


I can relate


teams is the absolute worst


Have you used Google meet though? Even teams isn't that bad.


All I notice is that my time going from calendar to Teams call is ~30 seconds due to slow site loading and extra clicks. Calendar to Meet call is two clicks and loads instantly with sane defaults for camera/microphone settings. It's significantly better than teams or zoom in those regards.


If you're fully immersed in the Microsoft ecosystem, going from your Outlook calendar to a Teams call is a single click, and the desktop app doesn't take as long to get into the call.


If you're fully immersed in the Microsoft ecosystem I pray for you


I use both and shudder every time I am forced to use the lame web-app alternatives to Word, Excel & PowerPoint on desktop - mostly because my child's school runs on the web alternatives. Ironically, even on Android, Outlook seems to be the only major client that actually provides a unified inbox across mail accounts, due to which I switched and now use my Gmail accounts through it.


I have used both, and vastly prefer Google Meet. I prefer something that works in Firefox.


Even Zoom works well in Firefox. Still prefer the UX of Google Meet though.


What’s the issue with Meet? It always seems to work when I need it.


Having used both in a professional capacity I have to say Teams is shockingly worse than Google Meet.

I've never had my laptop sound like an Apache helicopter while on a call with Google Meet, yet simply having Teams open had me searching for a bomb shelter.


Teams sucks compared to Meet, IMHO.


Given the GP's username, maybe some Wakandan tech?


We at dyte.io are planning to launch something here! Hoping to solve all the challenges people face with Teams, Meet, Zoom, etc.


Shall we jump on a dyte? Gets reported to HR for unwanted advances


Shall we jump on a dyte? Sure, can you swim though?


How are you going to break into and differentiate yourself in an already oversaturated market of video call competitors?


All video call software sucks in various ways. Corporate IT throttling, filtering, and analyzing traffic with a mishmash of third-party offerings "to increase security" does not help.


Keet [1] doesn't suck. Fully encrypted, peer to peer. Bandwidth only limited by what the parties to the call have access to.

[1] https://keet.io/


> [...] Fully encrypted, peer to peer. [...]

Those are the two features the average user cares about least. Most users are happy if sound and video work instantly, always. Maybe some marketing department should focus on that?

(I don't know Keet; yes, encryption is still an important feature.)


Peer to peer makes it as fast as possible because it doesn't have to pass through a 3rd party's servers (which, for cost reasons, normally limit the bandwidth of the communication channel they are serving).

This is just like when you pull down a torrent. You can do it as fast as your bandwidth and the bandwidth of the peers who are seeding it to you allow. Which can be blazingly fast.
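
A back-of-envelope sketch of that point, with made-up numbers (the connection speeds and the per-user relay cap below are assumptions for illustration, not anything about Keet's actual service): the relay's per-user bandwidth becomes the bottleneck, whereas a direct link is only limited by the peers themselves.

    def transfer_seconds(size_mb: float, *hop_mbps: float) -> float:
        """Time to move size_mb megabytes across hops; the slowest hop dominates."""
        return size_mb * 8 / min(hop_mbps)

    file_mb = 500                                 # hypothetical shared recording
    sender_up, receiver_down = 100.0, 200.0       # assumed home connections, Mbit/s
    relay_per_user = 10.0                         # assumed per-user cap on a relay server

    print(transfer_seconds(file_mb, sender_up, receiver_down))                  # direct: 40s
    print(transfer_seconds(file_mb, sender_up, relay_per_user, receiver_down))  # relayed: 400s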


Then market it as "fast". Nobody (except a few zealots) cares about the implementation details.


I'm not marketing it (I'm a user, not a developer). And I would think that HN is exactly the forum where ppl care about the implementation details.


Google meet is excellent for videoconferencing actually.


power hijack by the doomers. too bad the cat is out of the bag already


Quite possible actually; this seems to be becoming a really hot political potato, with at least 3 types of ambition running it: 1. Business, 2. Regulatory, 3. 'Religious/Academic'. By the latter I mean that the divide between AI doomerists and others is caused by insubstantiable dogma (doom/nirvana).


> insubstantiable dogma (doom/nirvana)

What do you mean by this? Looks like you're just throwing out a diss on the doomer position (most doomers don't think near future LLMs are concerning).


Neither AI fears nor the singularity is substantiated. Hence the discussion is a matter of taste and opinion, not of facts. They are substantiated once one or the other comes to fruition. The fact that it's a matter of taste and opinion only makes the discussion that much more heated.


Wouldn't this put AI doomerism in the same category as nuclear war doomerism? E.g. a thing that many experts think logically could happen and would be very bad but hasn't happened yet?


I'm unaware of an empirical demonstration of the feasibility of the singularity hypothesis. Annihilation by nuclear or biological warfare on the other hand, we have ample empirical pretext for.

We have ample empirical pretext to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, etc.... those are demonstrably real, but also obviously very different in kind from generalized AI doomsday.


That's an excellent example of why AI doomerism is bogus, completely unlike nuclear war fears, which weren't.

Nuclear war had very simple mechanistic concept behind it.

Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).

Nuclear war experts were actually experts in a system whose outcome you could compute to a very high degree.

There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.

You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damages and injury.

Yes, it's plausible a lone genius could manufacture something horrible in their garage and let it rip. But this is in the domain of 'fictional what-ifs'.

Nobody factors in that, in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a megavaccine and just print it out at your local health center's biofab. Megabomb? Possible even today, but that's why stuff like uranium is tightly controlled. Etc., etc.). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.

AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns up, then engage the policy wonks, once solid mechanistic principles can be referred to.


> There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.

You don't seem actually familiar with doomer talking points. The classic metaphor is that you might not be able to say how specifically Magnus Carlsen will beat you at chess if you start the game with him down a pawn, while nonetheless knowing he probably will. Predicting the exact moves isn't necessary to predict the outcome.

The main way doomers think ASI might kill everyone is mostly via the medium of communicating with people and convincing them to do things, mostly seemingly harmless or sensible things.

It's also worth noting that doomers are not (normally) concerned about LLMs (at least, any in the pipeline), they're concerned about:

* the fact we don't know how to ensure any intelligence we construct actually shares our goals in a manner that will persist outside the training domain (this actually also applies to humans, funnily enough: you can try instilling values into them with school or parenting, but despite sharing our mind design they still do unintended things...). And indeed, optimization processes (such as evolution) have produced optimization processes (such as human cultures) that don't share the original one's "goals" (hence the invention of contraception, and almost every developed country having below-replacement fertility).

* the fact that recent history has had the smartest creature (the humans) taking almost complete control of the biosphere with the less intelligent creatures living or dying on the whims of the smarter ones.


In my opinion, if either extreme turns out to be correct it will be a disaster for everyone on the planet. I also think that neither extreme is correct.


this is why you don't bring NGO types into your board, and you especially don't give them power to oust you.


What does “your” board mean in this context? Who’s “your”?

The CEO just works for the organization and the board is their boss.

You’re referencing a founder situation where the CEO is also a founder who also has equity and thus the board also reports to them.

This isn’t that. Altman didn’t own anything, it’s not his company, it’s a non-profit. He just works there. He got fired.


I believe Altman had some ownership; however, it is a general lesson about handing over substantial power to laymen who are completely detached from the actual ops & know-how of the company.


Nobody handed over power. Presumably they were appointed to the board to do exactly what they did (if this theory holds), in which case this outcome would be a feature, not a bug.


There’s no such thing as owning a non-profit.


> this is why you don't bring NGO types into your board

OpenAI is an NGO…?


That is neither stated nor implied, unless you’re simply making the objection, “But OpenAI _is_ nongovernmental.”

Most readers are aware they were a research and advocacy organization that became (in the sense that public benefit tax-free nonprofit groups and charitable foundations normally have no possibility of granting anyone equity ownership nor exclusive rights to their production) a corporation by creating one; but some of the board members are implied by the parent comment to be from NGO-type backgrounds.


I'm not sure I understand what you're saying. Perhaps you could point out where your perspective differs from mine? As I see it: OpenAI _is_ a non-profit, though it has an LLC it wholly controls that doesn't have non-profit status. It never "became" for-profit (IANAL, but is that even possible? It seems like it should not be); the only thing that happened is that the LLC was allowed to collect some "profit" - but that in turn would go to its owners, primarily the non-profit. As far as I'm aware, the board that went through this purge _was_ the non-profit's board (does the LLC even have a board?)

From the non-profit's perspective, it sounds pretty reasonable to self-police and ensure there aren't any rogue parts of the organization going off and working at odds with the overall non-profit's formal aims. It's always been weird that the OpenAI LLC seemed so commercially focused even when that might conflict with its sole controller's interests; notably, the LLC very explicitly warned investors that the NGO's mission took precedence over profit.


My objection is that OpenAI, at least to my knowledge, still is a non-profit organization that is not part of the government and has some kind of public benefit goals - that sounds like an NGO to me. Thus appointing “NGO types” to the board sounds reasonable: They have experience running that kind of organization.

Many NGOs run limited liability companies and for-profit businesses as part of their operations, that’s in no way unique for OpenAI. Girl Scout cookies are an example.



