Maybe this is for the best. The ones who are in for the money can go to MS, and the ones with a higher calling can stick with OpenAI and we'll see who wins.
I'm convinced an actual, true open approach (ideally open source) will win out. Everyone thinks MS has executed a masterstroke, but more likely they've shot their load too early, with LLMs only being a signpost on the way to higher-order AI (AGI is an ill-defined fantasy).
That's just one view (as is mine), no one knows what's actually happening.
In my view Altman represents the 'let's get lots of money' side of things and not much else. The deals with MS, Middle Eastern financiers, SoftBank, and a Jony Ive collab make that pretty clear.
Maybe it's not that simple, but I'd say it's broadly correct.
It seems reasonable to say that AGI will take a ton of resources. You'll need investors for power, GPUs, researchers, data, and the list goes on. It's a lot easier to get there with viable commercial products than handouts.
I'd be willing to bet that between Sam's approach and the theorized approach of the OpenAI board we're discussing, Sam's approach has a higher chance of success.
Looking at humans, how they're trained, and their wetware makes me believe that AGI, as most people understand it, i.e. a superhuman, human-like intelligence, will never exist. There will be powerful AI, but it won't be human-like in the way people think about it now.
Even so, I was expecting more dissimilarities, or at least types of inappropriateness that are very human. Humans are a broad bunch; there's no reason the LLMs wouldn't just default to snarky and lazy, like the example from OpenAI Dev Day of someone who fine-tuned on their Slack messages, asked the model to write something, and it replied "Sure, I'll do it in the morning".
Despite people calling LLMs stochastic parrots and autocomplete on steroids, ChatGPT behaves as if it is trying to answer rather than merely trying to continue the text the user enters. I find this surprising.
Precisely. Breakthroughs are often cleverer than brute-force, "throw more compute/tokens at it" approaches. Turning some crucial algorithm from O(n) to O(log n) could be an unlock worth trillions of dollars of compute time.
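To make the scale of that kind of asymptotic win concrete, here is a toy sketch (my own illustration, not anything from the thread): searching a sorted million-element list costs up to a million comparisons with an O(n) linear scan, but only about twenty with an O(log n) binary search.

```python
import bisect

def linear_search(xs, target):
    """O(n): scan every element until we hit the target."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    """O(log n): repeatedly halve a sorted search space via bisect."""
    i = bisect.bisect_left(xs, target)
    if i < len(xs) and xs[i] == target:
        return i
    return -1

xs = list(range(1_000_000))
# Both find the same index; one inspects ~10^6 elements, the other ~20.
assert linear_search(xs, 765_432) == binary_search(xs, 765_432) == 765_432
```

The same gap compounds when the operation runs billions of times: an O(n) inner loop that becomes O(log n) can turn an economically infeasible workload into a cheap one, which is the sense in which such an algorithmic unlock could be "worth trillions" relative to buying more compute.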
If this were true, Yudkowsky's MIRI would have solved AGI a decade ago. Turns out you need money and lots of compute power, not just people sitting around talking about the issues.
> If this were true, Yudkowsky's MIRI would have solved AGI a decade ago.
Isn't MIRI focussed on “trustworthy reasoning”, not AGI more generally, and doesn't it see untrustworthy AGI as an undesirable thing to develop, even as an instrumental step?
So, literally, isn't “solving AGI” an explicit anti-goal?