
There is legal precedent establishing healthcare providers' obligations in life-threatening situations. The same moral responsibility should exist when insurers deny lifesaving care; it's just hidden behind bureaucracy.


The interactive element itself could be really powerful. If you could put some restrictions on how much of the answer it gives you at once, it would be the perfect incremental learning tool.


It's funny that if I had to describe my entire career, it would probably be something like software janitor/maintenance worker.

I guess I should have pursued a PhD when I was younger.


In another universe, this comment would be "With low pay and few academic jobs, going for a PhD was the worst decision of my life."


My brain processes too slowly for modern action movies.

I can tell what's going on, but I always end up feeling agitated.


I'm okay with watching the majority of action movies, but I distinctly remember watching this fight scene in a Bourne movie and not having a clue what was going on. The constant camera changes, short shot lengths, and shaky cam just confused the hell out of me.

https://youtu.be/uLt7lXDCHQ0?si=JnVMjmu0WgN5Jr5e&t=70


I thought it was brilliant. Notice there’s no music. It’s one of the most brutal action scenes I know. Brutal in the sense of how honest it felt about direct combat.


I'm glad we're finally getting away from the 00's shaky cam era.


He's a skeptic, and completely reactionary. I had to unfollow him on Twitter because he always has to have a "take" on every AI headline, and he often contradicts himself, flipping between "AI is useless" and "AI is a huge threat".


It was super weird to see him align with the "regulate AI" people after years of being one of the main "this is going nowhere" guys.


The people behind LLMs believe in it so thoroughly that they are pushing it to do things that aren’t safe. So it can easily be true that it’s both overhyped and needs better regulation. The fact that LLMs can’t actually solve the problems they are being used for is, in fact, a problem that may require government intervention.


If LLMs aren't fit for the types of problems they're being used for, what would the government be needed for? Users would pretty quickly give up and move on if LLMs continue to suck at solving the problems people want them to solve.


Not if those people are "true believers". Once you get emotionally attached to the idea that something must work, you don't respond to failures rationally.


If we're delving into the realm of belief and therefore religion, pulling in the government to shut it down goes against everything that America was (supposedly) founded on.


I think his stance is pro-regulation, but not in the same way that AI doomers and OpenAI want.


Yes, skepticism is healthy and useful, but this sounds more like intellectual dishonesty.


I think his position is that "AI is not intelligent but is nevertheless dangerous, and even more so because people think it's intelligent".


I tend not to listen to people who have no fucking clue what they're talking about.


This is such a complete misreading of what's happening. Not sure why I keep seeing this on HN.


That's just one view (as is mine), no one knows what's actually happening.

In my view, Altman represents the 'let's get lots of money' side of things and not much else. The deals with MS, Middle East financiers, and SoftBank, plus the Jony Ive collab, make that pretty clear.

Maybe it's not that simple, but I'd say it's broadly correct.


It seems reasonable to say that AGI will take a ton of resources. You'll need investors for power, GPUs, researchers, data, and the list goes on. It's a lot easier to get there with viable commercial products than handouts.

I'd be willing to bet that between Sam's approach and the theorized approach of the OpenAI board we're discussing, Sam's approach has a higher chance of success.


Since AGI isn't a thing, no one knows what it will look like or if it will even exist.

The biggest breakthroughs in science do not come from those with the most money. It's all ideas.


OTOH humans are a non-artificial GI, and we can use ourselves as an anchor for estimates of what we'd need for an artificial equivalent.

About 1000x the complexity of GPT-3, and much slower, would be the best guess right now.
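
For a rough sense of where a figure like 1000x might come from, here's a back-of-envelope sketch (my own numbers, using common public estimates of synapse counts; nothing authoritative):

  # Back-of-envelope only; both figures are rough public estimates.
  brain_synapses = 1e14    # ~100 trillion synapses in a human brain
  gpt3_params = 1.75e11    # GPT-3's reported 175 billion parameters

  ratio = brain_synapses / gpt3_params
  print(f"~{ratio:.0f}x")  # ~571x, i.e. on the order of 1000x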


Looking at humans, how they're trained, and their wetware makes me believe that AGI, as most people understand it, i.e. a superhuman-like intelligence, will never exist. There will be powerful AI, but it won't be human-like in the way people think about it now.


That definition ought to be reserved for ASI (S meaning super) not AGI (G meaning general).

That said, I agree "human like" is unlikely, although LLMs and diffusion models are much closer than I was expecting.


That's because their training source is human output. Human In Human Out (HIHO).


Even so, I was expecting more dissimilarities, or at least kinds of inappropriateness that are very human. Humans are a broad bunch; there's no reason LLMs wouldn't just default to snarky and lazy, like the example from the OpenAI Dev Day of someone who fine-tuned on their Slack messages, asked it to write something, and got back "Sure, I'll do it in the morning".

Despite people calling them stochastic parrots and autocomplete on steroids, ChatGPT is behaving like it is trying to answer rather than merely trying to continue the text the user enters. I find this surprising.


Precisely. Breakthroughs are often cleverer than brute-force, “throw more compute/tokens at it” approaches. Turning some crucial algorithm from O(n) to O(log n) could be an unlock worth trillions of dollars in compute time.
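
As a toy illustration of that kind of asymptotic jump (my own sketch, nothing LLM-specific): finding an item in a sorted list by scanning is O(n), while bisection is O(log n).

  import bisect

  def linear_find(sorted_xs, target):
      # O(n): inspects elements one at a time
      for i, x in enumerate(sorted_xs):
          if x == target:
              return i
      return -1

  def binary_find(sorted_xs, target):
      # O(log n): halves the remaining search space each step
      i = bisect.bisect_left(sorted_xs, target)
      return i if i < len(sorted_xs) and sorted_xs[i] == target else -1

At a billion items, that's the difference between ~10^9 comparisons and ~30.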


You don't understand how much value was lost, even if OpenAI perfectly migrates over to Microsoft (it will be messy). Sam had no incentive to not continue with the existing OpenAI structure.


Sam could potentially have made hundreds of millions of dollars; that's a big incentive not to continue.


No one knows.


I'd like to hear more of the board's argument before deciding that this was "virtuous board vs greedy capitalist". The motivations of both sides are still unclear.


Seems unusual for a nonprofit not to have a written investigative report or performance review conducted by a law firm or auditor. Similar to what happened with Stanford's ousted president, but more expedited if matters are more pressing.


Android 14 has been a buggy mess for me on the Pixel 7 Pro. It's confounding that Google is still pushing out buggy Android releases.


I upgraded to 14 on my Pixel 7 Pro a few days ago and haven't noticed any problems so far. What are the worst issues I'm likely to encounter?


Sometimes the lock screen won't show anything and you can't unlock the phone; it's just a wallpaper. I had to enable face unlock and auto unlock in order to use the phone at all. The workaround: start the camera app with the power button, try to access photos from there, and that gets the fingerprint reader to work and unlocks it.


> What are the worst issues I am likely to encounter?

Complete loss of data. https://arstechnica.com/gadgets/2023/10/android-14s-ransomwa...

Back up your data while you can.


Oh fuck me. Given that I'm on automatic updates and the update has already been downloaded, next time I reboot my phone it will switch to Android 14 automatically. :(

(And yup, I'm using work profiles and all that.)


Good thing you can still back up your data. It's possible it doesn't affect everyone; I'm not sure. Lemme know how it goes when you reboot?

Or avoid rebooting somehow (if it's not automatic) in the hope they somehow fix it...


So far I've avoided rebooting but it looks like the GrapheneOS devs had already fixed the bug a while ago:

https://grapheneos.social/@GrapheneOS/111309676504712576


I've noticed that tapping notifications has a significant delay (350ms or so) before it opens any app since upgrading to 14.


If you use multiple profiles, you could end up anywhere from locked out of your main account to soft- or even hard-bricked. See the sibling's link from Ars.


I really liked Android, but at a certain point I just couldn't deal with the constant buggy releases, security flaws, and short update cycles anymore. Especially things like Pixel phones crashing when calling 911 seem ridiculous in 2023. Maybe 10 years ago, when I had time to play around with ROMs and nobody really needed to reach me 24/7, that would have been fine, but now I need a device that "just works", and that's the iPhone.


It is great that we have choice, but "just works" is not really the whole truth. It just works as long as you do exactly what Apple wants you to do. Step outside what they deem correct usage, and Android "just works" much, much better. For you this might be perfect, but to me having zero choice is perfectly bad. A phone without F-droid is, to me, a brick.


That’s why I simply rock two old smartphones (an Android, a Nexus 6P, and a 1st-gen iPhone SE). And as I look around, I use them much more effectively than everyone I know with far more modern phones, when they have just one device. My iPhone just works (and it does it very well), while my Nexus can do whatever it wants in terms of instability; I don’t care. I use it as a pocket computer, with F-droid and its wonderful apps. On top of that I have an old low-end Samsung smartphone (someone just gave it to me); I re-flashed it with LineageOS, and it has its own use case, slightly different from the Nexus's. I really can't see any need for an extra phone, as those three cover all my use cases at the moment.


Pixel phones are the bleeding edge though. You are basically beta testing for the other Android vendors.


It would be nice if that's what they were advertised as, rather than as top-end premium Android devices with a polished experience as a major selling point.

But I agree, I had a Pixel 6 briefly when my S23 was in repair for a week and it was an incredibly buggy experience in comparison, although it was quite pretty.

