There is legal precedent establishing healthcare providers' obligations in life-threatening situations. The same moral responsibility should exist when insurers deny lifesaving care; it's just hidden behind bureaucracy.
The interactive element itself could be really powerful. If you could put some restrictions on how much of the answer it can give you all at once, it's the perfect incremental learning tool.
I'm okay with watching the majority of action movies, but I distinctly remember watching this fight scene in a Bourne movie and not having a clue what was going on. The constant camera changes, short shot lengths, and shaky cam just confused the hell out of me.
I thought it was brilliant. Notice there’s no music. It’s one of the most brutal action scenes I know. Brutal in the sense of how honest it felt about direct combat.
A skeptic and completely reactionary. I had to unfollow him on Twitter because he always has to have a "take" on every AI headline, and he often contradicts himself, flip-flopping between "AI is useless" and "AI is a huge threat".
The people behind LLMs believe in it so thoroughly that they are pushing it to do things that aren’t safe. So it can easily be true that it’s both overhyped and needs better regulation. The fact that LLMs can’t actually solve the problems they are being used for is, in fact, a problem that may require government intervention.
If LLMs aren't fit for the types of problems they're being used for, what would the government be needed for? Users would pretty quickly give up and move on if LLMs continue to suck at solving the problems people want them to solve.
Not if those people are "true believers". Once you get emotionally attached to the idea that something must work, you don't respond to failures rationally.
If we're delving into the realm of belief and therefore religion, pulling in the government to shut it down goes against everything that America was (supposedly) founded on.
That's just one view (as is mine), no one knows what's actually happening.
In my view Altman represents the 'let's get lots of money' side of things and not much else. The deals with MS, Middle East financiers, and SoftBank, plus a Jony Ive collab, make that pretty clear.
Maybe it's not that simple, but I'd say it's broadly correct.
It seems reasonable to say that AGI will take a ton of resources. You'll need investors for power, GPUs, researchers, data, and the list goes on. It's a lot easier to get there with viable commercial products than handouts.
I'd be willing to bet that between Sam's approach and the theorized approach of the OpenAI board we're discussing, Sam's approach has a higher chance of success.
Looking at humans, how they're trained and their wetware, makes me believe that AGI as most people understand it, i.e. a superhuman-like intelligence, will never exist. There will be powerful AI, but it won't be human-like in the way people think about it now.
Even so, I was expecting more dissimilarities, or even just types of inappropriateness that are very human. Humans are a broad bunch; there's no reason the LLMs wouldn't just default to snarky and lazy, like the example from the OpenAI Dev Day of someone who fine-tuned on their Slack messages, asked it to write something, and it said "Sure, I'll do it in the morning".
Despite people calling them stochastic parrots and autocomplete on steroids, ChatGPT is behaving like it is trying to answer rather than merely trying to continue the text the user enters. I find this surprising.
Precisely. Breakthroughs are often cleverer than brute-force "throw more compute/tokens at it" approaches. Turning some crucial algorithm from O(n) to O(log n) could be an unlock worth trillions of dollars in compute time.
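As a hedged illustration (my own toy example, not from the thread): the textbook version of an O(n) → O(log n) unlock is replacing a linear scan with binary search over sorted data.

```python
import bisect

def linear_find(sorted_items, target):
    # O(n): check every element until we hit the target
    for i, x in enumerate(sorted_items):
        if x == target:
            return i
    return -1

def binary_find(sorted_items, target):
    # O(log n): bisect halves the search space each step
    i = bisect.bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(0, 1_000_000, 2))  # sorted even numbers
# Same answer, wildly different step counts: ~500,000 vs ~20 comparisons
assert linear_find(data, 999_998) == binary_find(data, 999_998) == 499_999
```

At 500k elements the linear version does hundreds of thousands of comparisons where binary search does about twenty; at the scale of LLM training runs, that kind of asymptotic win is what the comment is pointing at.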
You don't understand how much value was lost, even if OpenAI perfectly migrates over to Microsoft (it will be messy). Sam had no incentive to not continue with the existing OpenAI structure.
I'd like to hear more about the board's argument before deciding that this was "virtuous board vs greedy capitalist". The motivations of both sides are still unclear.
Seems unusual for a nonprofit not to have a written investigative report or performance review conducted by a law firm or auditor, similar to what happened with Stanford's ousted president, but expedited if matters are more pressing.
Sometimes the lock screen won't show anything, and you can't unlock the phone; it's just a wallpaper. I had to enable face unlock and auto unlock in order to use the phone. Starting the camera app with the power button and trying to access photos from there gets the fingerprint reader to work and unlocks it.
Oh fuck me. Given that I'm on automatic updates and the update has already been downloaded, next time I reboot my phone it will switch to Android 14 automatically. :(
If you use multiple profiles, you could end up anywhere between being locked out of your main account and having the phone soft- or hard-bricked. See the sibling link from Ars.
I really liked Android, but at a certain point I just couldn't deal with the constant buggy releases, security flaws, and short update cycle anymore. Especially things like Pixel phones crashing when calling 911 seem ridiculous in 2023. Maybe 10 years ago, when I had time to play around with ROMs and nobody really needed to reach me 24/7, that would have been fine, but now I need a device that "just works", and that's iPhone.
It is great that we have choice, but "just works" is not really the whole truth. It just works as long as you do exactly what Apple wants you to do. Step outside what they deem correct usage and Android "just works" much, much better. For you this might be perfect, but to me having zero choice is anything but. A phone without F-Droid is, to me, a brick.
That’s why I simply rock two old smartphones (an Android Nexus 6P and a 1st-gen iPhone SE). And as I look around, I use them much more effectively than everyone I know with much more modern phones and just one device. My iPhone just works (and does it very well), while my Nexus can do whatever it wants in terms of instability; I don’t care. I use it as a pocket computer, with F-Droid and its wonderful apps. On top of that I have an old low-end Samsung smartphone (someone just gave it to me) that I re-flashed with LineageOS; it has its own use case, slightly different from the Nexus. I really can't see any need for an extra phone, as those three cover all my use cases at the moment.
It would be nice if that's what they were advertised as, not as top-end premium Android devices with a polished experience as a major selling point.
But I agree, I had a Pixel 6 briefly when my S23 was in repair for a week and it was an incredibly buggy experience in comparison, although it was quite pretty.