Well… yes definitely. You are worth more than your salary, otherwise your employer wouldn’t keep you around.
The problem is that it’s definitely a lot easier to just cruise along, punch 40 hours, and leave work at work. Startup founder life is rough. I don’t blame anyone for not wanting to extract that worth from themselves.
It is not easy to cruise along in some large corps. You always have to be on guard about layoffs or managers throwing you under the bus.
Not always true but it only needs to happen once to you for your 40 hour cruising to end. At that point, you will question why you spent so much time slaving away.
Because in the US at least the probability of you breaking your leg and going into medical debt then losing everything because you don't have insurance is far higher.
And yet the majority of Americans cannot afford a surprise $1,000 expense.
My daughter broke her leg just over a month ago. I have decent insurance, and my out-of-pocket expenses were still close to $2,000 even with in-network coverage. I think a lot of HNers are somewhat wealthy middle class+ with cash in the bank, or not from the US, so they have no clue how out of control these kinds of expenses are.
Why quit? Most people will mess up #2 or execute badly on #3, and even then #4 is unlikely to happen when FAANGs can just copy you and put you out of business instead of buying you out and rewarding this behavior.
#5 will almost certainly happen even if you keep doing #1, earning a huge salary.
The numbers and risks just don’t make sense anymore.
...because there's a glut of investor money chasing "AI", raising money for your new AI startup is not particularly hard if you are linked with any of the big names in AI.
Even if the startup fails, the founder could have been paying themselves a market-rate salary with the added possibility of an FU-money exit. The downsides are limited.
I think you underestimate the probability of little to no return, and ignore opportunity cost.
I've seen several startup founders pay themselves below market for years because they have all that equity, only to find out, a couple of down rounds later, that it wasn't worth it. They tried, though.
Because everybody’s so busy selling shovels that no one’s digging for gold, so there’s no one to sell shovels to.
That’s not totally true, but we’re all here trying to outsmart one another in order to have a Sure Thing. Well I’m here to tell you that there are no Sure Things in life except for death and taxes, so take a risk, jump on a random plot of land, and start digging.
Because of the FTC, BigTech is more reluctant to do an acquisition.
So like in the case of Inflection.ai, Microsoft just did an acquihire of the entire team and “licensed” the technology/IP from what now is a shell of a company.
Up through #3, this is literally the playbook for the Google Research team (minus the one who took a job at OpenAI) that authored the Attention Is All You Need paper.
This seems like a general problem for a company like this: your great success demonstrates that your employees are incredibly valuable. And sometimes businesses may say things like "our people are our greatest asset", but that's not actually true. They're not an asset. Your IP is an asset, and you own that. But your people are often capable of going elsewhere and producing equally good work, and they're likely to do that if they get a better offer.
It's no different than professional sports. When you win the Super Bowl, for instance, the value of many of your players (employees) suddenly jumps. At that point it's infeasible to pay everyone what they can realistically get on the market. Thus you identify and pay your most essential assets, and the others leave and receive huge pay bumps.
I think it's very valuable for the industry, and as long as the teams picking up these employees are able to utilize them effectively, it will result in broader innovation.
> At that point it's infeasible to pay everyone what they can realistically get on the market.
Largely, because of salary caps created fairly explicitly to break the winning -> popularity -> money -> monopolizing talent -> winning positive feedback loop.
Success feedback loops aren't quite so constrained in most other industries.
Were there instances where a sports club actually didn't want to win some title to prevent exactly this from happening? How would the sports association react if the club decided to throw the match for this reason?
You assume that IP, once created, stands on its own. The programming-as-theory-building view suggests that IP will degrade once the people who have robust mental models of how it was created and works leave. So software/IP in essence has two critical parts that need to be there: the actual work that was created, and the robust mental models living in the heads of the people who continue to work on those projects.
It also goes against the idea that programmers are replaceable cogs that management can change out as and when they want to.
I think you're misunderstanding the GP. He's not saying the people aren't valuable; he's saying they aren't assets. That is to say, they aren't property.
Which isn't quite what 'asset' means, but it is the central meaning; using it on people has always felt squicky to me.
In my experience this kind of mass exodus is more associated with failures in leadership/management than an inevitable result of success. For example, OpenAI has been far more successful than Stability and its senior employees obviously have lots of options, but its staff turnover ratio feels much smaller than Stability's.
While true for existing IP for industries like pharma, for industries like AI, your IP isn’t worth anything unless it keeps evolving. If you lose your key researchers, you’re done.
They're not an asset, but they're a hell of a lot more valuable than the IP. Ask anyone who runs a research org. The IP is nice, but it depreciates rapidly. It's the fastest-depreciating asset ever. The value is all in the machine (network of talented people, culture) that builds the machine (IP).
Interesting. It's not mentioned in the article, but the team behind the recently released "Stable Cascade" models has left Stability as well.
Personally, I assume Stability has hit a scaling/money issue on image generation models, which is why they might be pivoting to other domains where you can still make headlines spending <$50M on training runs, like their 3D models.
Sad to hear, they really created some awesome models for the community. Looks like their entire diffusion team left. Do we know where they went? Hopefully another organisation pops up without the incompetent Discord-based leadership, one that actually finds a way to finance itself while releasing weights in some capacity.
It makes me wonder if I should switch from my current position as a Java dev to something related to AI and machine learning. It seems that is going to be the future now, and there seems to be a big need for people with that knowledge right now.
Yes, if you're willing to invest X years into becoming relevant on the AI/ML market, and you believe that in X years there will still be too few AI/ML specialists, then it's a great idea.
There's a huge amount of help needed in AI/ML in non-researcher roles (infrastructure, pipelines, testing, etc). I'd say that the bulk of AI/ML orgs are actually just that, in my experience.
(I'd further argue that the number of actual "AI researchers" that actually contribute enough to justify their mega-million comp packages is small indeed... but that's a separate topic)
To the parent: “AI” in 2024 is just using REST APIs. Don’t bother inventing new models, just get up to speed on what the various vendors are good at and the basic techniques (particularly Agents/ReAct and RAG) and you’ve learned enough. The demand for AI is almost unlimited right now, and it’s definitely not too late. It’s like 2009 in the iPhone dev era; plenty of time to quickly learn ObjC.
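To make the "just REST APIs plus RAG" point concrete, here is a toy sketch of the RAG pattern: retrieve the most relevant snippets, then stuff them into the prompt you'd send to a vendor's API. The word-overlap scoring here is a deliberately naive stand-in (real systems use embeddings and a vector store), and none of the names below refer to any specific vendor's API.

```python
# Toy RAG sketch: retrieve relevant snippets, then build an augmented prompt.
# Retrieval here is naive word-overlap scoring; real systems use embeddings
# and a vector store, and would POST the final prompt to a vendor's REST API.

def _words(s: str) -> set[str]:
    """Lowercase and strip basic punctuation so 'RAG?' matches 'RAG'."""
    return set(s.lower().replace("?", "").replace(".", "").split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by shared words with the query; return the top k."""
    q = _words(query)
    return sorted(docs, key=lambda d: len(q & _words(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context to the user's question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Stable Diffusion is a text-to-image model released by Stability AI.",
    "RAG augments a language model with retrieved documents.",
    "ReAct interleaves reasoning steps with tool calls.",
]
prompt = build_prompt("What is RAG?", docs)
```

The point of the sketch is that the "AI" part is a network call; the engineering is in retrieval, prompt assembly, and glue code, which any working dev can pick up quickly.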
Dotcoms, then ad tech (the money of Web 2.0), crypto/blockchain, AI/ML...
If you are hearing about it, and you aren't already in, it might be too late.
That having been said, it's a prime time to have a project that you can bootstrap into a startup. You don't need to be Google; small teams can clear a few mill a year with the right product and not get much larger.
I'd say this is a "yes", and remember that there's a ton of work that doesn't involve actually training models. Every company right now has a big need for help in infrastructure, data pipelines, ops, monitoring + dashboards, internal tools, CI/CD, release management, etc.
The way job hunting is in the industry now, with recruiters and employers relying on automated systems to filter resumes, it couldn't hurt. Even if it's just something to slap on your resume to get noticed. Most of the people responsible for hiring in our industry don't know or understand what they're hiring for anymore. AI isn't changing job roles so much as it's changing job titles.
Their value proposition thus far has been "Give stuff away".
I'm grateful for it, and I think they should have government funding if necessary -- the upsides are huge, just little of it goes to Stability -- but it's easy to see why investors would be wary.
It's readily apparent that any sufficiently useful AI/CV/ML should be deployed to solve specific, immediate, niche problems of individual people that they're willing to pay small amounts of money or assign referral commissions for:
- Product personal shopper-recommender goes out and crawls reviews and pricing to find what and where someone should buy something based on a prompt.
- Where to live.
- Where to go out.
- What to have for dinner.
- Find a plumber, carpenter, etc. and do open source due-diligence on them.
- How to optimally invest money based on circumstances, assumptions, and speculative outcome distribution.
- How to redecorate a room.
In a business context, there are many classes of problems AI can semi-automate including:
- Decision support
- Feature prioritization based on support data and social media sentiment
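As a toy illustration of the feature-prioritization idea above: count how often each feature is mentioned in support tickets that contain negative language, then rank features by complaint volume. The keyword list is a crude stand-in; a real pipeline would use a sentiment model or an LLM, and all names here are made up for the example.

```python
# Toy feature prioritization from support data: rank features by how often
# they appear in negative-sounding tickets. The NEGATIVE keyword set is a
# crude proxy for a real sentiment model or LLM classifier.

NEGATIVE = {"broken", "crash", "slow", "confusing", "hate"}

def prioritize(tickets: list[str], features: list[str]) -> list[tuple[str, int]]:
    """Return (feature, negative-mention count) pairs, highest count first."""
    scores = {f: 0 for f in features}
    for ticket in tickets:
        words = set(ticket.lower().split())
        if words & NEGATIVE:  # ticket sounds like a complaint
            for feature in features:
                if feature in ticket.lower():
                    scores[feature] += 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

tickets = [
    "The export feature is broken again",
    "Export is slow and confusing",
    "Love the new dashboard",
]
ranking = prioritize(tickets, ["export", "dashboard"])
```

The "AI" upgrade is just swapping the keyword check for a model call; the surrounding ranking and reporting logic stays the same.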
Yeah. Every time Emad has talked about their business model changes I've always thought "...that's it?"
I personally think they're missing some low-hanging fruit, though I suppose it might be in the name of "safety." I believe Stability (maybe Clipdrop?) at least did have some sort of paid LoRA or other training, but I tried it and it was awful. Considering they made the models, surely they would have the absolute best insight into fully fine-tuning them and could roll out a service to do so.
I think they talked about doing it B2B at a presumably much higher cost, but a consumer facing, easy to use way of doing it would at least pull in some money. Of course, the second someone uses that to train child porn or whatever they'll be in hot water.
Hey now, there's a real lack of information in this tweet. Just a link to a paywalled article from a dodgy financial journal.
This is the second time this has been referenced on the front page, and I still don't know who they're talking about or what the details are (and I'm not about to give money to Forbes, of all people).
The CEO of Stability seems to be a rather controversial person at the helm of a rather controversial company. I'm honestly surprised more people haven't left the company by now, unless they had some absurd incentives to stick around.
1. Work at a top AI lab/startup, earning a huge salary.
2. Identify a demand in the market.
3. Quit the lab/startup, launch your own startup.
4. Get acquired by some established AI company or FAANG tech megacorp.
5. Retire.