I don't know... There are plenty of otherwise capable adults who just get home from work and watch TV. They either never, or extremely rarely, indulge in hobbies, go see a concert, or even go out to meet others. Not that TV can't be art and challenge us, but let's be honest, 99% of it is not that.
There is legal precedent establishing healthcare providers' obligations in life-threatening situations. The same moral responsibility should exist when insurers deny lifesaving care; it's just hidden behind bureaucracy.
The interactive element itself could be really powerful. If you could put some restrictions on how much of the answer it can give you all at once, it's the perfect incremental learning tool.
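To sketch what I mean (all names here are hypothetical, and call_llm is just a stand-in for whatever chat-completion client you'd actually use), a wrapper that asks the model for graded hints up front and then releases only one per request might look something like this:

```python
# Rough sketch of an "incremental hints" tutor (hypothetical names throughout).
# Idea: get a full, step-by-step solution from the model once, but only reveal
# it to the learner one small hint at a time so they can attempt each step.

def call_llm(prompt: str) -> str:
    """Placeholder for whatever chat-completion client/API you use."""
    raise NotImplementedError

class IncrementalTutor:
    def __init__(self, question: str, max_steps: int = 5):
        # Ask the model up front for numbered hints, smallest reveal first.
        prompt = (
            f"Break the solution to the following problem into at most "
            f"{max_steps} numbered hints, each revealing only slightly more "
            f"than the last. Do not give the final answer before the last hint.\n\n"
            f"Problem: {question}"
        )
        raw = call_llm(prompt)
        # Naive parse: one hint per numbered line ("1. ...", "2. ...", ...).
        self.hints = []
        for line in raw.splitlines():
            line = line.strip()
            if line and line[0].isdigit() and "." in line:
                self.hints.append(line.split(".", 1)[1].strip())
        self.revealed = 0

    def next_hint(self) -> str:
        """Reveal exactly one more hint per call; never dump the whole answer."""
        if self.revealed >= len(self.hints):
            return "No more hints; try writing out your full solution."
        hint = self.hints[self.revealed]
        self.revealed += 1
        return hint
```

The restriction lives in the wrapper rather than the model, which is the point: the learner has to come back and ask again between steps.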
I'm okay with watching the majority of action movies, but I distinctly remember watching this fight scene in a Bourne movie and not having a clue what was going on. The constant camera changes, short shot length, and shaky cam just confused the hell out of me.
I thought it was brilliant. Notice there’s no music. It’s one of the most brutal action scenes I know. Brutal in the sense of how honest it felt about direct combat.
Skeptic and completely reactionary. I had to unfollow him on Twitter because he always has to have a "take" on every AI headline, and he often contradicts himself, flipping between "AI is useless" and "AI is a huge threat".
The people behind LLMs believe in the technology so thoroughly that they are pushing it to do things that aren't safe. So it can easily be true that it's both overhyped and in need of better regulation. The fact that LLMs can't actually solve the problems they are being used for is, in fact, a problem that may require government intervention.
If LLMs aren't fit for the types of problems they're being used for, what would the government be needed for? Users would pretty quickly give up and move on if LLMs continue to suck at solving the problems people want them to solve.
Not if those people are "true believers". Once you get emotionally attached to the idea that something must work, you don't respond to failures rationally.
If we're delving into the realm of belief and therefore religion, pulling in the government to shut it down goes against everything that America was (supposedly) founded on.
That's just one view (as is mine), no one knows what's actually happening.
In my view Altman represents the "let's get lots of money" side of things and not much else. The deals with MS, ME financiers, SoftBank, and a Jony Ive collab make that pretty clear.
Maybe it's not that simple, but I'd say it's broadly correct.
It seems reasonable to say that AGI will take a ton of resources. You'll need investors for power, GPUs, researchers, data, and the list goes on. It's a lot easier to get there with viable commercial products than with handouts.
I'd be willing to bet that between Sam's approach and the theorized approach of the OpenAI board we're discussing, Sam's approach has a higher chance of success.
Looking at humans, how they're trained and their wetware, makes me believe that AGI as most people understand it, i.e. a superhuman, human-like intelligence, will never exist. There will be powerful AI, but it won't be human-like in the way people think about it now.
Even so, I was expecting more dissimilarities, or at least types of inappropriateness that are very human. Humans are a broad bunch; there's no reason the LLMs wouldn't just default to snarky and lazy, like the example from OpenAI Dev Day where someone fine-tuned on their Slack messages, asked the model to write something, and it said "Sure, I'll do it in the morning".
Despite people calling them stochastic parrots and autocomplete on steroids, ChatGPT behaves as if it is trying to answer rather than merely continue the text the user enters. I find this surprising.
Precisely. Breakthroughs are often cleverer than brute-force, "throw more compute/tokens at it" approaches. Turning some crucial algorithm from O(n) to O(log n) could be an unlock worth trillions of dollars in compute time.
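Just to make the O(n) vs O(log n) contrast concrete with a toy example (this is illustrative only, nothing to do with any particular model's internals): linear scan vs binary search over a sorted list.

```python
# Toy illustration of the O(n) -> O(log n) point:
# finding a value in a sorted list by linear scan vs binary search.

from bisect import bisect_left

def linear_find(sorted_xs, target):
    # O(n): worst case touches every element.
    for i, x in enumerate(sorted_xs):
        if x == target:
            return i
    return -1

def binary_find(sorted_xs, target):
    # O(log n): each comparison halves the remaining search space.
    i = bisect_left(sorted_xs, target)
    if i < len(sorted_xs) and sorted_xs[i] == target:
        return i
    return -1

xs = list(range(10_000_000))
# Same answer, but ~10 million comparisons vs ~24.
assert linear_find(xs, 9_999_999) == binary_find(xs, 9_999_999)
```

Scale that kind of gap up to whatever the crucial inner loop of training or inference turns out to be, and you get the "worth trillions" framing.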
You don't understand how much value was lost, even if OpenAI perfectly migrates over to Microsoft (it will be messy). Sam had no incentive to not continue with the existing OpenAI structure.
I'd like to hear more about the board's argument before deciding that this was "virtuous board vs greedy capitalist". The motivations of both sides are still unclear.
Seems unusual for a nonprofit not to have a written investigative report or performance review conducted by a law firm or auditor. Similar to what happened with Stanford's ousted president, but on a faster timeline if the matter is more pressing.
Nothing can take away your ability to have incredible experiences, except if the robots kill us all.