
On the contrary, the video you linked to is likely to be part of the lie that ousted Altman.

He's also said very recently that to get to AGI "we need another breakthrough" (source: https://garymarcus.substack.com/p/has-sam-altman-gone-full-g...).

To predicate a company as massive as OpenAI on a premise you know not to be true seems like a big enough lie.



Fair enough, but having worked at an extremely secretive FAANG myself, "we need XYZ" is the kind of thing I'd expect to hear if you have XYZ internally but don't want to reveal it yet. It could mean "we need XYZ relative to the previous product," or more specifically "we need a breakthrough beyond LLMs, and we recently made a major breakthrough unrelated to LLMs." I'm not saying that's the case, but I don't think the signal-to-noise ratio in his answer is very high.

More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else including the for-profit part is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp

Either the board doesn't actually share that goal, or what you are saying shouldn't matter to them. Sam isn't an engineer; it's not his job to make the breakthrough, only to keep the lights on until someone does, if you take their mission literally.

Unless you're arguing that Sam claimed they were closer to AGI to the board than they really are (rather than hiding anything from them) in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?

As I said, I hope you're right, because the alternative is a lot scarier.


I think my point is different from what you're breaking down here.

The only way OpenAI was able to sell MS and others on the 100x capped non-profit and other BS was the AGI/superintelligence narrative. Sam was that salesman. And Sam does seem to sincerely believe that AGI and superintelligence lie on OpenAI's path, which makes him a perfect salesman.

But then... maybe that AGI conviction was oversold? To a level some would have interpreted as "less than candid." That's my claim.

Speaking as a technologist actually building AGI up from animal-level intelligence, following evolution (and as a result totally discounting superintelligence), I do think Sam's AGI claims veered close enough to the edge of reality to count as lies.


Both factions in this appear publicly to see AGI as imminent, and mishandling its imminence to be an existential threat; the dispute appears to be about what to do about that imminence. If they didn't both see it as imminent, the dispute would probably be less intense.

This has something of the character of a doctrinal dispute among true believers in a millennial cult.


I totally agree.

They must be under so much crazy pressure at OpenAI that it really is like a cult. I'm glad to see the snake finally eat itself. Hopefully that will return some sanity to our field.


Sam has been doing a pretty damn obvious charismatic cult leader thingy for quite a while now. The guy is dangerous as fuck and needs to be committed to an institution, not given any more money.



