> Anthropic’s CEO, Dario Amodei, was once the VP of research at OpenAI, and reportedly split with the firm after a disagreement over OpenAI’s roadmap — namely its growing commercial focus.
So he's now the CEO of Anthropic, a company selling AI services?
Claude is amazing, and we use its Teams plan here at the office extensively (having switched from ChatGPT, since Claude is vastly better at technical material and ad copy writing).
But, Anthropic definitely has a commercial motive... no?
I'm not saying a commercial motive is a bad thing - hardly... but this quote seems to be odd given the circumstances.
Pivoting from "for all mankind" to "all for myself" would make me deeply uncomfortable, too. The change from one position to the other, not either position in any absolute sense, is the concerning part.
This is also a great point. I ranted at length about this when the OpenAI news broke last week, but to cut it short: it's a little troubling to see the company founded on the ethos "for-profit AI work is incredibly dangerous" transition to a for-profit AI firm openly engaged in an arms race. Not just engaged, inciting...
No it’s good to try to build tech that helps people. Doesn’t mean such declarations need to be taken at face value, but being baseline-cynical is generally unwarranted, undesirable, and uninteresting.
The baseline stance for tech startups is wanting to solve a problem in the world and profit from the value that provides. And thankfully, most of the time, those motives don't conflict. Even a mundane business like my local grocery store solves the problem of curating a selection of food producers, buying in bulk to ensure a sustained supply at a reasonable price, and making it available to me close to my home. That is tremendous value! And for that they make their markup. They aren't necessarily solving other social problems like food scarcity or maximizing nutrition or whatever; instead they focus on what their paying customers want to buy. But there is still a meeting in the middle of value being created.
Cynicism doesn't necessarily protect you from getting scammed, but it does absolutely prevent you from accessing any upside there is to be had in the world :)
The upside being people flocking from MLM to DeFi to LLMs like headless chickens while I watch in amusement?
The only downside for me is having been involved in all these projects and knowing enough to innovate. At least I do try to warn people before we proceed.
It sounds like they are at least trying to build on the notion of being a public benefit corporation, and create a business that won't devolve into "the chart must go up and to the right each quarter."
Time will tell of course, OpenAI was putatively started with good, non-profit intentions.
You're absolutely correct that they're a for-profit firm, but you're missing that they were founded specifically over safety concerns. Basically it's not just "commercial motive" in general, it's the sense that OpenAI was only paying lipservice to safety work as a marketing move.
* Acting in accordance with declared motivations is a demonstration of integrity.
* Acting towards hidden motivations that oppose your declared motivations is deceptive action.
Honest people don't want to lead and be responsible for deceptive action, even if the action is desirable.
For these types of people, it is often better to leave a place that requires them to act deceptively in favor of one that will let them operate with integrity.
Even if the end goal is the same, eg: to make money.
At least Anthropic is honest about their intentions, though. That would be enough for me to leave OpenAI. Hey, you want to commercialize it? Sure, but don't hide behind lies.
> I'm not saying a commercial motive is a bad thing - hardly... but this quote seems to be odd given the circumstances.
It only seems odd to you because you are reading much more into it than he's ever said, like "AI should never be commercialized under any circumstances and it is impossible to do so correctly". Then yes, it would be hypocritical. But he didn't say that and never has; and Anthropic thinks they are doing it right.
Also, Anthropic, even though it has the same conflict of interest as OpenAI, seems to be addressing safety concerns more earnestly. They are listening to their internal team and innovating in the space (especially in a practical sense, not just future AGI concerns). Massively funding a team (or at least announcing that you will) and then ignoring it is much worse.
I think the whole situation, where they got some serious investment from SBF and then he got indicted, pushed them into commercialising their tech so they could have more standard sources of funding.
I'm not sure what Anthropic means when they say safety. I remember them doing good, non-censorship work in this field, but I also pay for ChatGPT instead of Claude because Claude is just so censored and boring.
In my experience it is better at not sounding like an LLM wrote it, even without being directed to not sound like an LLM. It's better able to find and maintain the desired tone (playfulness, silly, professional, a mixture of, etc) with minor prompting. It also seems better at understanding your business/company and helping craft adcopy that's on-message/theme.
We used ChatGPT's Teams plan too with GPT4, but were sold on Claude almost immediately. Admittedly we have not used GPT4o recently, so we can't compare.
With technical information, Claude is vastly better at providing accurate information, even about lesser-known languages/stacks. For example, its ability to discuss and review code written in Gleam, Svelte, TypeSpec and others is impressive. It is also, in our experience, vastly better at "guided/exploratory learning", where you probe with questions as you go down a rabbit hole.
Is it always accurate? Of course not, but we've found it to be on average better at those tasks than ChatGPT.