
Paul Graham came out publicly to defend Sam, and we instantly have this blog post about, wait, just a sec, let's dissect actually why Sam is still evil.

Can we believe that Sam could actually be a good person? Today, Kara Swisher said on her podcast Pivot, "Every time I tell people I actually like Sam, they become wildly offended."


You’re applying pretty black-and-white moral values to a post that, at least to me, didn’t read that way at all. One can like Sam Altman as a person while wishing he were more transparent in some of his business dealings.


The post may not be at the extremes, but if the author has been following the issue as closely as it seems, they must be aware that there are people boldly proclaiming Sam Altman to be a sociopath on a daily basis.

The issue has become polarized, for reasons I don't rightly understand, but nevertheless this is where we have ended up.

Anyone writing on the topic in this environment would be well advised to be clear about what they are saying, what they take issue with, and what the appropriate remedy would be.

Just throwing out insinuations in an "I'm just asking questions" manner doesn't in itself condemn a person. It isn't happening in isolation, though. No snowflake believes itself to be responsible for the avalanche.


> Anyone writing on the topic in this environment would be well advised to be clear about what they are saying, what they take issue with, and what the appropriate remedy would be.

They literally did all of that, though?


It's insane how far people are willing to project their feelings onto Altman. Look at this quote in this thread.

"But it's worth noting that much of Sam Altman's presentation is just a mask (one he puts a ton of effort into and is good at maintaining), even if he's still less evil than the Sacklers or a mob boss."

How do people even come up with this narrative?


> Can we believe that Sam could actually be a good person?

Depends on what good means to you. This is a person we have evidence of repeatedly using these kinds of underhanded techniques. Maybe he's not physically hurting anyone, but this is a person I would avoid.


> these kinds of underhanded techniques

What in the article is underhanded? Worst case, he has undisclosed conflicts of interest.

> this is a person I would avoid

Does Altman have a Trump-like wake of ruined careers and lost riches among former allies? Everyone he's been close to seems to have done well from it.


> Worst case, he has undisclosed conflicts of interest.

Given the size of these deals, that's kind of a big deal. It isn't an "oops, it slipped my mind" little conflict of interest kind of thing, imo.


> Given the size of these deals, that's kind of a big deal

For investors, sure. (And the investors are more than fine with Altman, warts and all.) For the public, eh.


Are you saying the public won't or shouldn't care? Altman wants the public's trust when telling us which regulations should and should not be made. Dishonesty is relevant.


How many of them were made to sign egregious and secret non-disparagement agreements?


Honestly, I think his shady handling of the ScarJo thing is what is shifting the tide.

He very clearly isn't being honest there, and it's so obvious that many people are starting to question everything he says.


No, if anything that’s a pretty fake controversy too.

Ricardo Montalban had a great quote about the life stages of an actor, enumerating them as follows:

1. Who is Ricardo Montalban?

2. Get me Ricardo Montalban.

3. Get me a Ricardo Montalban type.

4. Get me a young Ricardo Montalban.

5. Who is Ricardo Montalban?

As far as I can tell, Johansson’s complaint is that when OpenAI reached out to her for voice acting and she turned them down, they instead got a Scarlett Johansson type, and that OpenAI should be categorically prohibited from hiring any voice actor who sounds like her at all. Which is not how acting has ever worked, but for some reason the topic of artificial intelligence gets a lot of people worked up to the point of artificial stupidity.


Midler v. Ford is more legally relevant than Ricardo Montalban's wit. In short, deliberate mimicry is not allowed.

OpenAI's public claims about how they produced the Sky voice followed Johansson's public statement. They could be true or false. We don't know what claims or evidence they gave Johansson's counsel.


> As far as I can tell, Johansson’s complaint is that when OpenAI reached out to her for voice acting and she turned them down, they instead got a Scarlett Johansson type

Agreed, and according to Midler v. Ford that is not permitted:

"We hold only that when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California."


I’ve never seen that standard applied to acting. Particularly voice acting. Midler v. Ford was a dispute between a singer and a company that used a sound-alike singer, singing a cover of a Bette Midler song in a commercial, to falsely imply a sponsorship that didn’t exist. Totally different case.



There’s a difference between impersonating someone with the explicit intention of falsely giving the impression they are involved in a project when they are not, and simply hiring an actor (or a voice actor) who can give a performance that’s similar to or reminiscent of another's.

> You've never seen it, huh? Google is super easy to use.

Please stop acting like an asshole.


I'm sorry, friend, from my perspective it seems quite obvious you're not arguing in good faith, and I have no interest in chasing your goalposts.

You take care now.


> from my perspective it seems quite obvious you're not arguing in good faith

I’m arguing in good faith, in the sense that I’m expressing my own genuine rationale for my own opinions. Your inability to cope with that and remain civil is the only show of bad faith in this entire discussion.


You started this conversation calling my position a "pretty fake controversy".

You take care now. I will not be responding to you further.


You really struggle with the fact that other people disagree with you, don't you?


LOL. Take care.


It’s quite a few things. I didn’t find his claims of ignorance around the non-competes, for example, particularly compelling.

But all of that is quite separate from these conflicts, which are entirely a matter for Altman and his investors, investors who have no reason to complain about him.


Agreed.

But when you see him being duplicitous in some situations, it's hard not to suspect it bleeds into other situations.


> What in the article is underhanded? Worst case, he has undisclosed conflicts of interest.

undisclosed = underhanded

> Does Altman have a Trump-like wake of ruined careers and lost riches among former allies? Everyone he's been close to seems to have done well from it.

I'm not talking about Trump, and I don't think Trump should be a reference for what is or isn't acceptable.


People are angry about the "Open"AI debacle, and he's been publicly in favor of being very paternalistic and controlling (likely for his material benefit as much as for safety). It's fair that he's taken some flak for those things; he's trying to control the direction of society at large, and people want a say. I don't think he's evil, but I can see why people would perceive him as paternalistic or even a bit patronizing.


> Paul Graham came out publicly to defend Sam, and we instantly have this blog post about, wait, just a sec, let's dissect actually why Sam is still evil

I'm not seeing good and evil in this post. It's calling Sam out for not being transparent. Given he's elevated OpenAI, in public testimony, to an extinction-level threat to humanity, that lack of transparency is of public concern.

Not being transparent doesn't make him evil, doesn't mean he is unlikeable and doesn't per se mean he's dishonest. (Though OpenAI and he do have a likeability problem, at least in politics, albeit one I think they can fix.)


That’s exactly what I was going for — the issue of transparency here. It wasn’t dissecting why he’s “bad”, it’s that the public statements don’t match up with financial realities.

Maybe next time I could press more about the transparency factor, but I thought it was concise enough.


> public statements don’t match up with financial realities

Has he ever pleaded poverty?


Obviously not, but as I state in the article, he has pleaded:

> “He owns no stake in the ChatGPT developer, saying he doesn’t want the seductions of wealth to corrupt the safe development of artificial intelligence, and makes a yearly salary of just $65,000.”

According to OpenAI themselves.

So he takes a “low” salary and no ownership so as, according to him and the company, not to let the pursuit of financial gain influence his decisions — yet that omits a significant part of the truth.

I’ll stop short of calling it a flat-out lie, but a mischaracterization of reality for sure.


> I’ll stop short of calling it a flat-out lie, but a mischaracterization of reality for sure.

In my opinion a mischaracterisation of reality is just a lie with layers of indirection to weasel oneself out of it. It's definitely a lie.


> albeit one I think they can fix

Money, a little image coaching, and have Aaron Sorkin write a movie.

(We've seen a similar situation before.)


I'd love for Altman to explain why they decided to start Worldcoin in Africa, offering bigger and bigger signup incentives for their whole retinal-scanning thing, to the point where at times it could be two months' wages for some people...


Keep in mind, when Swisher says she likes Sam, what she means is, to quote her Twitter: "Sam Altman is no different than most of the talented ones, which is to say, aggressive, sometimes imperious and yes, self-serving."


People don't divide cleanly into "good" and "evil" buckets, and CEOs in general tend to be ruthless deal-makers. But it's worth noting that much of Sam Altman's presentation is just a mask (one he puts a ton of effort into and is good at maintaining), even if he's still less evil than the Sacklers or a mob boss.


Given the accounts of his sister, and now Helen Toner, no. We can be sure that he is evil.


> Can we believe that Sam could actually be a good person?

Ok, but based on what?


> let's dissect actually why Sam is still evil.

I never said Sam is evil, nor anything close.


I mean, yeah, same thought after seeing the signatories. What are some of the cliches being used around here? Toothpaste is out of the tub? Arrow has left the bow. The dye is cast. The ship has sailed. (Thanks ChatGPT).


The confetti has left the cannon.[0]

[0] https://news.ycombinator.com/item?id=35346683


If ChatGPT told you "the dye is cast", there's hope after all, because it's die, not dye.


The pee is in the pool. The black swan has left the barn.

And yeah, I had a laugh at the signatories. Of course my heart goes out to the non-billionaires that might be out of a job. Or maybe us lucky duckies are going to travel the world on our new basic income trust funds?


> Toothpaste is out of the tub.

Please don't correct that.


The genie is out of the bottle. [1]

[1] No AI was involved in the creation of this reply. ;-)


Agreed - can you think of any other model that has had such an unrestricted release? Open means available for wide use.


Go take a look at the content of Civitai. Take everything you see there, and imagine what happens if you start prompting it with words that indicate things which may not be legal for you to see images of.

Please show me a viable harm from GPT-4 that is greater than the potential harm from open-sourced image generators with really good fine-tuning. I'll wait, most likely forever.


Stable Diffusion v1.4, v2.1

LLaMA


LLaMA is technically limited to researchers, etc...


Tell that to the magnet link I clicked on


The actually open models like BLOOM?


Where is the noscript/basic (x)html interop support?


Exactly. This isn't a leetcode problem where all you have to do is re-run the function, or do it iteratively vs. recursively.


Not sure what you mean, but, for example, two separate competitors to DALL-E were released within months (SD and MJ). Arguably, both of these have since surpassed DALL-E's capabilities/ecosystem.

Not sure why ChatGPT will be any different.


> Not sure why ChatGPT will be any different.

LLMs take vastly more resources to train and run than image generators. You can do quite a bit with SD on a few-year-old 4GB laptop GPU (that’s what I use mostly, though I’ve set up an instance with a better GPU on Compute Engine that I can fire up, too).
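
For what it’s worth, a minimal sketch of that kind of low-VRAM SD setup, assuming the Hugging Face diffusers library (the model ID, prompt, and filename here are just illustrative):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load SD v1.4 in half precision to roughly halve VRAM usage.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # Compute attention in slices: a bit slower, but a much smaller
    # peak memory footprint, which is what makes ~4GB cards workable.
    pipe.enable_attention_slicing()

    image = pipe("a watercolor of a lighthouse at dusk").images[0]
    image.save("lighthouse.png")

Half precision plus attention slicing is the usual combination for squeezing SD onto small consumer GPUs.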

GPT-NeoX-20B – an open (as in Open Source, not OpenAI) LLM intended as a start toward competing with GPT-3 (but still well behind, and smaller) – requires a minimum of 42GB of VRAM and 40GB of system RAM to run for inference. The resources-times-time cost of training LLMs is…immense. The hardware cost alone of trying to catch up to ChatGPT is enormous, and unless a radical new approach is found that provides good results at insanely lower resource requirements, you aren’t going to have an SD-like community pushing things forward.

Will there be competition for ChatGPT? Yes, probably, but don’t expect it to look like the competition for Dall-E.


Interests align - those who keep the secrets are better off for keeping them (not just to avoid the punishment for breaking NDAs, but for the fruits (pun intended) of eventually revealing something wonderful).


Apple TV+ has a show about this.



Ya, very r/oddlyspecific.


Umm, tech salaries are indeed not falling in the SF Bay Area, thank you very much.


> Similar to balancing other social issues, I don't believe private companies should make all of the decisions on their own.

Then give up majority control.

This article needs a lesson in compassion. It's all "no we're not", instead of "we understand why people feel this way, and this is what we need to do".


I think one of the comments said "vulgar and predatory". I second this. You don't need a bootcamp to break into Data Science, or Data Engineering, or Data anything.


This is my main gripe with YC touting startups as a great place to join but not helping employees negotiate fair equity. Yes, founders deserve credit because they took the risk, but they need to spread the wealth for execution. It should almost be illegal to give employees such low equity relative to what founders get.


This is the way.


People have been saying these kinds of things about Apple forever. Betting against a $2 trillion company? Not sure about that.

