psbp's comments

The process of thinking and exploring ideas is inherently enriching.

Nothing can take away your ability to have incredible experiences, unless the robots kill us all.


I don't know... There are plenty of otherwise capable adults who just get home from work and watch TV. They either never, or extremely rarely, indulge in hobbies, go see a concert, or even go out to meet others. Not that TV can't be art and challenge us, but let's be honest: 99% of it is not that.


I have been this person. I can say that it's not a time of my life I look back on fondly.


There is legal precedent establishing healthcare providers' obligations in life-threatening situations. The same moral responsibility should exist when insurers deny lifesaving care; it's just hidden behind bureaucracy.


The interactive element itself could be really powerful. If you could put some restrictions on how much of the answer it can give you all at once, it could be the perfect incremental learning tool.
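A minimal sketch of how that restriction might work, assuming the OpenAI Python client; the system prompt, model name, and one-hint-per-turn rule are my own illustrative assumptions, not a known product feature:

    # Hypothetical sketch: an LLM tutor that reveals at most one hint per turn.
    # Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are a tutor. Never give the full answer. Each reply may "
        "contain at most one small hint that moves the student one step "
        "closer to solving the problem on their own."
    )

    def next_hint(history: list[dict], question: str) -> str:
        """Request exactly one incremental hint, given the conversation so far."""
        messages = [{"role": "system", "content": SYSTEM_PROMPT}, *history,
                    {"role": "user", "content": question}]
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        )
        return response.choices[0].message.content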


It's funny: if I had to describe my entire career, it would probably be something like "software janitor/maintenance worker".

I guess I should have pursued a PhD when I was younger.


In another universe, this comment would be "With low pay and few academic jobs, going for a PhD was the worst decision of my life."


My brain processes too slowly for modern action movies.

I can tell what's going on, but I always end up feeling agitated.


I'm okay with watching the majority of action movies, but I distinctly remember watching this fight scene in a Bourne movie and not having a clue what was going on. The constant camera changes, short shot lengths, and shaky cam just confused the hell out of me.

https://youtu.be/uLt7lXDCHQ0?si=JnVMjmu0WgN5Jr5e&t=70


I thought it was brilliant. Notice there’s no music. It’s one of the most brutal action scenes I know. Brutal in the sense of how honest it felt about direct combat.


I'm glad we're finally getting away from the '00s shaky-cam era.


He's a skeptic and completely reactionary. I had to unfollow him on Twitter because he always has to have a "take" on every AI headline, and he often contradicts himself, swinging between "AI is useless" and "AI is a huge threat".


It was super weird to see him align with the "regulate AI" people after years of being one of the main "this is going nowhere" guys.


The people behind LLMs believe in it so thoroughly that they are pushing it to do things that aren’t safe. So it can easily be true that it’s both overhyped and needs better regulation. The fact that LLMs can’t actually solve the problems they are being used for is, in fact, a problem that may require government intervention.


If LLMs aren't fit for the types of problems they're being used for, what would the government be needed for? Users would pretty quickly give up and move on if LLMs continue to suck at solving the problems people want them to solve.


Not if those people are "true believers". Once you get emotionally attached to the idea that something must work, you don't respond to failures rationally.


If we're delving into the realm of belief and therefore religion, pulling in the government to shut it down goes against everything that America was (supposedly) founded on.


I think his stance is pro-regulation, but not in the same way that AI doomers and OpenAI want.


Yes, skepticism is healthy and useful, but this sounds more like intellectual dishonesty.


I think his position is that "AI is not intelligent but is nevertheless dangerous, and even more so because people think it's intelligent".


I tend not to listen to people who have no fucking clue what they're talking about.


This is such a complete misreading of what's happening. Not sure why I keep seeing this on HN.


That's just one view (as is mine); no one knows what's actually happening.

In my view, Altman represents the 'let's get lots of money' side of things and not much else. The deals with MS, Middle East financiers, and SoftBank, plus a Jony Ive collab, make that pretty clear.

Maybe it's not that simple, but I'd say it's broadly correct.


It seems reasonable to say that AGI will take a ton of resources. You'll need investors for power, GPUs, researchers, data, and the list goes on. It's a lot easier to get there with viable commercial products than handouts.

I'd be willing to bet that between Sam's approach and the theorized approach of the OpenAI board we're discussing, Sam's approach has a higher chance of success.


Since AGI isn't a thing, no one knows what it will look like or if it will even exist.

The biggest breakthroughs in science do not come from those with the most money. It's all ideas.


OTOH humans are a non-artificial GI, and we can use ourselves as an anchor for estimates of what we'd need for an artificial equivalent.

About 1000x the complexity of GPT-3, and much slower, would be the best guess right now.
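A rough back-of-envelope consistent with that guess, using widely cited figures (a human brain has on the order of 10^14 synapses; GPT-3 has 1.75x10^11 parameters) and the loose assumption that one synapse is roughly comparable to one parameter:

    # Back-of-envelope: human brain vs. GPT-3, treating a synapse as a very
    # loose stand-in for a parameter. Both figures are rough, commonly cited
    # estimates, not precise measurements.
    HUMAN_SYNAPSES = 1e14       # ~100 trillion synapses
    GPT3_PARAMETERS = 1.75e11   # 175 billion parameters

    ratio = HUMAN_SYNAPSES / GPT3_PARAMETERS
    print(f"brain / GPT-3: ~{ratio:.0f}x")  # ~571x, same order as the 1000x guess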


Looking at humans, how they're trained, and their wetware makes me believe that AGI, as most people understand it, i.e. a superhuman-like intelligence, will never exist. There will be powerful AI, but it won't be human-like in the way people think about it now.


That definition ought to be reserved for ASI (S meaning super), not AGI (G meaning general).

That said, I agree "human-like" is unlikely, although LLMs and diffusion models are much closer than I was expecting.


That's because their training source is human output. Human In Human Out (HIHO).


Even so, I was expecting more dissimilarities, or even just types of inappropriateness that are very human. Humans are a broad bunch; there's no reason LLMs wouldn't just default to snarky and lazy, like the example from the OpenAI Dev Day where someone fine-tuned on their Slack messages, asked it to write something, and it said "Sure, I'll do it in the morning".

Despite people calling them stochastic parrots and autocomplete on steroids, ChatGPT is behaving like it is trying to answer rather than merely trying to continue the text the user enters. I find this surprising.


Precisely. Breakthroughs are often cleverer than brute-force, "throw more compute/tokens at it" approaches. Turning some crucial algorithm from O(n) to O(log n) could be an unlock worth trillions of dollars in compute time.
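As a toy illustration of the kind of jump being described (nothing specific to LLMs): swapping a linear scan for binary search over sorted data turns O(n) lookups into O(log n), and the speedup factor grows with n:

    # Toy O(n) vs. O(log n) comparison: membership lookup in a sorted list,
    # linear scan vs. binary search (stdlib bisect).
    import bisect
    import math

    def linear_contains(items: list[int], target: int) -> bool:
        """O(n): inspect elements one by one (items must be sorted)."""
        for item in items:
            if item == target:
                return True
            if item > target:  # sorted, so we can stop early
                return False
        return False

    def binary_contains(items: list[int], target: int) -> bool:
        """O(log n): halve the search space at every step."""
        i = bisect.bisect_left(items, target)
        return i < len(items) and items[i] == target

    n = 10**9
    print(f"speedup factor at n = {n:.0e}: ~{n / math.log2(n):,.0f}x")  # ~33 million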


You don't understand how much value was lost, even if OpenAI perfectly migrates over to Microsoft (it will be messy). Sam had no incentive to not continue with the existing OpenAI structure.


Sam could potentially have made hundreds of millions of dollars; that's a big incentive not to continue.


No one knows.


I'd like to hear more about the board's argument before deciding that this was "virtuous board vs greedy capitalist". The motivations of both sides are still unclear.


Seems unusual for a nonprofit not to have a written investigative report or performance review conducted by a law firm or auditor. Similar to what happened with Stanford's ousted president, but more expedited if matters are more pressing.

