> make the balance between capital and labor even more uneven.
I think it's interesting to note that as open source models evolve and proliferate, the capital required for a lot of ventures goes down, which levels the playing field.
When I can talk to one agent-with-a-CAD-integration and have it design a gadget for me, ship the design off to a 3D printer, and then have another agent write the code to run on the gadget, I'll be able to build entire ventures that today would require VC funding and a team.
When intellectual capital is democratized, financial capital loses just a bit of power...
What value do you bring to the venture, though? What makes your venture more likely to succeed than anybody else's, if the barrier is that low? I mean, I'll tell you: if anyone can spend $100 to design the same new gadget, the winner is going to be whoever can spend a million on production (to get economies of scale) and marketing. Currently, financial capital needs your brain, so you can leverage that. But if they can use a brain in the cloud instead, they're going to do just that. Sure, you can use it and design anything you can imagine, but nobody is going to pay you for it unless you, yourself, bring some irreplaceable value to the table.
Since everyone has AI, it stands to reason that humans still make the difference. That is why I don't think companies will be able to automate software dev too much; they would be cutting away the one advantage they could have over their competition.
It stands to reason that humans will make the difference if they can do things that the AI cannot. The more capable the AI gets, however, the fewer humans will meet that threshold, and those who don't will be the ones to lose out. Capital, on the other hand, will always make a difference.
At present, if you have financial capital and need intellectual capital, you need to find people willing to work for you and pay them a lot of money. With enough progress in AI, you can get the intellectual capital from machines instead, for a lot less. What loses value is human intellectual capital. Financial capital just gained a lot of power: it can now substitute for intellectual capital.
Sure, you could pretend this means you'll be able to launch a startup without any employees, and so will everyone. But why wouldn't Sam Altman or whoever just start AI Ycombinator with hundreds of thousands of AI "founders"? Do you really think it would be more "democratic"?
> But why wouldn't Sam Altman or whoever just start AI Ycombinator with hundreds of thousands of AI "founders"? Do you really think it would be more "democratic"?
AI is useful in much the same way as Linux:
- can run locally
- empowers everyone
- need to bring your own problem
- need to do some of the work yourself
The moral is you need to bring your own problem to benefit. The model by itself does not generate much benefit. This means AI benefits are distributed like open-source ones.
Those points are true of current AI models, but how sure are you they will remain true as technology evolves?
Maybe you believe that they will always stay true, that there's some ineffable human quality that will never be captured by AI, and that value creation will always be bottlenecked by humans. That would be nice.
But even if you still need humans in the loop, it's not clear how "democratizing" this would be. It might sound great if in a few years you and everyone else can run an AI on their laptop that is as good as a great technical co-founder that never sleeps. But note that this means someone who owns a data center can run the equivalent of the entire current technical staff of Google, Meta, and OpenAI combined. Doesn't sound like a very level playing field.
I think the marginal cost of developing complex software goes down, thereby making it affordable to a greater market. There will still be a need for skilled software engineers to understand domains, limitations of AI, and how to harness and curate AI to develop custom apps. Maybe software engineering for the masses. Local small businesses can now maybe afford to take on custom software projects that were previously unthinkable.
> There will still be a need for skilled software engineers to understand domains, limitations of AI, and how to harness and curate AI to develop custom apps.
Will there be a need for fewer engineers, though? That's the question. And the competition among those who remain employed would be fierce, way worse than today.
I think it might be useful to look at this as multiple forces at play.
One force is a multiplier of a software engineer’s productivity.
Another force is the pressure of the expectation for constant, unlimited increases in profits. This pressure forces CEOs and managers to look for cheaper alternatives to expensive software engineers, ultimately aiming to eliminate the position and the expense. The lie that this is a possibility draws huge investments.
And another force is the infinite number of applications of software, especially well-designed, truly useful software.
I'd be a hypocrite if I didn't admit I use AI daily in my job, and it's indeed a multiplier of my productivity. The tech is really cool and getting better.
I also understand AI is one step closer for the everyday Jane or Joe Doe to do cool and useful stuff which was out of reach before.
What worries me is the capitalist, business-side forces at play, and what they will mean for my job security. Is it selfish? You bet! But if I don't advocate for me, who will?
Jevons' paradox says that you're probably wrong. But I'm worried about the same thing. The moat around human superiority is shrinking fast. And when it's gone, we may get more software, but will we need humans involved?
AI doesn't have needs or desires; humans do. And no matter how hyped one might be about AI, we're far away from creating an artificial human. As long as that's true, AI is a tool to make humans more effective.
That's fair, but the question was whether AI would destroy or create jobs.
You might speculate about a one-person megacorp where everything is done by AIs that a single person runs.
What I'm saying is that we're very far from this, because the AI is not a human that can make the CEO's needs and desires their own and execute on them independently.
Humans are good at being humans because they've learned to play a complex game, which is to pursue one's needs and desires in a partially adversarial social environment.
This is not at all what AI today is being trained for.
Maybe a different way to look at it, as a sort of intuition pump: If you were that one-man company, and you had an AGI that would correctly answer any unambiguously stated question you could ask, at what point would you need to start hiring?
You're taking this to an extreme; I don't think anyone is talking about replacing all engineers with a single AI computer doing the work for a one-person mega-corporation.
The actual question, which is much more realistic, is whether an average company of, let's say, 50 engineers will still need to hire those 50 engineers if AI turns out to be such an efficiency multiplier.
In that case, you will no longer need 10 people to complete 10 tasks in a given unit of time but perhaps only 1 engineer + AI compute to do the same. Not all businesses can continue scaling forever, so it's pretty expected that those 9 engineers will become redundant.
You took me too literally there, that was intended as a thought experiment to explore the limits.
What I was getting at was the question: If we feel intuitively that this extreme isn't realistic, what exactly do we think is missing?
My argument is, what's missing is the human ability to play the game of being human, pursuing goals in an adversarial social context.
To your point more specifically: Yes, that 10-person team might be replaceable by a single person.
More likely than not however, the size of the team was not constrained by lack of ideas or ambition, but by capital and organizational effectiveness.
This is how it's played out with every single technology so far that has increased human productivity: such technologies increase the demand for labor.
Put another way: Businesses in every industry will be able to hire software engineering teams that are so good that in the past, only the big names were able to afford them. The kind of team required for the digital transformation of every old fashioned industry.
In my 10-person team example, what, in your opinion, would the company do with the other 9 people once the AI proves its value in that team?
Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored/done, so the focus of those 9 people will just be shifted to some other topic?
Let's say I am a business owner with a popular product, a backlog of 1000 bugs, and a team of 10 engineers. The engineers are busy juggling features and bug fixes at the same time. Now let's assume that we have an AI model that relieves 9 out of 10 engineers from cleaning up the bug backlog, and we need only 1 or 2 engineers reviewing the code that the AI model spits out for us.
What concrete type of work is left for the other 9 engineers at that point?
Assuming that the team, as you say, is not constrained by a lack of ideas or ambition, and the feature backlog is somewhat indefinite in that regard, I think the real question is whether there's a market for those ideas. If there's no market for those ideas, then there's no business value ($$$) created by those engineers.
In that case, they become a plain cost, so what is the business incentive to keep them?
> Businesses in every industry will be able to hire software engineering teams that are so good that in the past, only the big names were able to afford them
Not sure I follow this example. Companies will still hire engineers, but IMO at a much lower capacity than was required up until now. Your N SQL experts are now replaced by the model. Your M Python developers are now replaced by the model. Your engineer doing PR review is now replaced by the model. The heck, even your SIMD expert now seems to be replaced by the model too (https://github.com/ggerganov/llama.cpp/pull/11453/files). Those companies will no longer need M + N + ... engineers to create the business value.
> Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored/done, so the focus of those 9 people will just be shifted to some other topic?
Yes, that's what I'm saying, except that this would hold over an economy as a whole rather than within every single business.
Some teams may shrink. Across industry as a whole, that is unlikely to happen.
The reason I'm confident about this is that this exact discussion has happened many times before in many different industries, but the demand for labor across the economy as a whole has only grown. (1)
"This time it's different" because the productivity tech in question is AI? That gets us back to my original point about people confusing AI with an artificial human. We don't have artificial humans, we have tools to make real humans more effective.
Hypothetically, you could be right. I don't know if "this time will be different", nor am I trying to predict what will happen on the global economic scale. That's out of my reach.
My question is rather of a much narrower scope and much more concrete and tangible - and yet I haven't been able to find any good answer for it, or strong counter-arguments if you will. If I had to guess, my prediction would be that many engineers will need to readjust their skills or even requalify for some other type of work.
It should be obvious that technology exists for the sake of humans, not the other way around, but I have already seen an argument for firing humans in favour of LLMs since the latter emit less pollution.
LLMs do not have desires, but their existence alters the desires of humans, including the ones in charge of businesses.
I agree the latter part is a risk to consider, but I really think getting an AI to replace human jobs on a vast scale will take much more than just training a bit more.
You need to train on a fundamentally different task, which is to be good at the adversarial game of pursuing one's needs and desires in a social environment.
And that doesn't yet take into account that the interface to our lives is largely physical, we need bodies.
I'm seeing us on track to AGI in the sense of building a universal question answering machine, a system that will be able to answer any unambiguously stated question if given enough time and energy.
Stating questions unambiguously gets pretty difficult fast even where it's possible (often it isn't even possible), and getting those answers is just a small part of being a successful human.
PS: Needs and desires are totally orthogonal to AI/AGI. Every animal has them, but many animals don't have high intelligence. Needs and desires are a consequence of our evolutionary history, not our intelligence. AGI does not need to mean an artificial human. Whether to pursue or not pursue that research program is up to us, it's not inevitable.
To be clear, I'm not arguing humans will stop being involved in software engineering completely. What I fear is that the pool of employable humans (as code reviewers, prompt engineers and high-level "solution architects") will shrink, because fewer will be needed, and that this will cause ripples in our industry and affect employment.
We know this isn't far-fetched. We have strong evidence to suspect that, during the big layoffs of a couple of years ago, FAANG companies and startups colluded to lower engineer salaries across the board, and that their excuse ("the economy is shrinking") was flimsy at best. Now AI presents them with another powerful tool to reduce salaries even more, with a side dish of reducing the size of the cost center that is programmers and engineers.
Honestly, I wasn't even talking about jobs with that. I worry about an intelligent IoT controlled by authoritarian governments or corporate interests. Our phones have already turned society into a panopticon, and that can get much worse when AGI lands.
But yes, the job thing is concerning as well. AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today. It seems that we're heading inexorably towards dystopia.
> AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today
That's the part I really don't believe. I'm open to being wrong about this, the risk is probably large enough to warrant considering it even if the probability of this happening is low, but I do think it's quite low.
We don't actually have to build artificial humans. It's very difficult and very far away. It's a research program that is related to but not identical to the research program leading to tools that have intelligence as a feature.
We should be, and in fact we are, building tools. I'm convinced that the mental model many people here and elsewhere are applying is essentially "AGI = artificial human", simply because the human is the only kind of thing in the world that we know that appears to have general intelligence.
But that mental model is flawed. We'll be putting intelligence in all sorts of places that are not similar to a human at all, without those devices competing with us at being human.
To be clear, I'm much more concerned about the rise of techno-authoritarianism than employment.
And further ahead, where I said your original take might not age well; I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.
And nobody needs to set out to build that. We just need to build tools. And then, one day, an AGI writes a virus and hacks the all-too-networked and all-too-insecure planet.
> I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.
I know scifi isn't authoritative, and is no more than human fears made into fiction, but have you read Philip K. Dick's short story "Autofac"?
It's exactly what you describe. The AI he describes isn't evil, nor does it seek our extinction. It actually wants our well-being! It's just that it has taken over all of the planet's resources and insists on producing and making everything for us, so that humans have nothing left to do. And they cannot break the cycle, because the AI is programmed to only transition power back to humans "when they can replicate Autofac output", which of course they cannot, because all the raw resources are hoarded by the AI, which is vastly more efficient!
I think that science fiction plays an important role in discourse. Science fiction authors spend years deeply contemplating the potential future consequences of technology and packaging them into compelling stories. This gives us a shorthand for talking about positive outcomes we want to see and negative outcomes we want to avoid. People who argue against scifi with a dismissal that "it's just fiction" aren't participating in good faith.
On the other hand, it's important not to pay too close attention to the details of scifi. I find myself writing a novel, and I'm definitely making decisions in support of a narrative arc. Having written the comment above... that planetary factory may very well become the third faction I need for a proper space opera. I'll have to avoid that PKD story for the moment, I don't want the influence.
Though to be clear, in this case, that potentiality arose from an examination of technological progress already underway. For example, I'd be very surprised if people aren't already training LLMs on troves of viruses, metasploit, etc. today.
I think we're talking about different time scales - I'm talking about the next few decades, maybe two or three, essentially the future of our generation specifically. I don't think what you're describing is relevant on that time scale, and possibly you don't either.
I'd add though that I feel like your dystopian scenario probably reduces to a Marxist dystopia where a big monopolist controls everything.
In other words, I'm not sure whether that Earth-spanning autonomous system really needs to be an AI or requires the development of AI or fancy new technology in general.
In practice, monopolies like that haven't emerged, thanks to competition and regulation, and there isn't a good reason to assume it would be different with AI either.
In other words, the enemies of that autonomous system would have very fancy tech available to fight it, too.
I'm not fussy about who's in control. Be it global or national; corporate or governmental; communist or fascist. But technology progresses more or less uniformly across the globe and systems are increasingly interconnected. An AGI, or even a poor simulacrum cobbled together from LLMs with internet access, can eventually hack anything that isn't airgapped. Even if it doesn't have "thoughts" or "wants" or "needs" in some philosophical sense, the result can still be an all-consuming paperclip maximizer (but GPUs, not paperclips). And every software tool and every networked automated system we make can be used by such a "mind."
And while I want to agree that we won't see this happen in the next 3 decades, networked automated cars have already been deployed on the streets of several cities, and people are eagerly integrating LLMs into what seems to be any project that needs funding.
It's tempting to speculate about what might happen in the very long run. And different from the jobs question, I don't really have strong opinions on this.
But it seems to me like you might not be sufficiently taking into account that this is an adversarial game; i.e. it's not sufficient for something just to replicate, it needs to also out-compete everything else decisively.
It's not clear at all to me why an AI controlled by humans, to the benefit of humans, would be at a disadvantage to an AI working against our benefit.
Agreed on all but one detail. Not to put too fine a point on it, but I do believe that the more emergent concern is AI controlled by a small number of humans, working against the benefit of the rest of humanity.
In the AI age, those who own the problems stand to own the AI benefits. Utility is in the application layer, not the hosting or development of AI models.
this is a better world. we can work a few hours a week and play tennis, golf, and argue politics with our friends and family over some good cheese and wine while the bots do the deployments.
We're already there in terms of productivity. The problem is the inordinate number of people doing nothing useful yet extracting huge amounts. Think most of finance for example.
If it's any consolation, if the extra productivity does materialize and kills the number of SWE jobs, I don't see why this dynamic shouldn't happen in almost all white-collar jobs across the private sector (government sectors are pretty much protected no matter what happens). There'll be a decreasing demand for lawyers, accountants, analysts, secretaries, HR personnel, designers, marketers, etc. Even doctors might start feeling this eventually.
no I think more engineers. especially those who can be a jack-of-all-trades. if a software project that normally takes 1 year of custom development can be done in 2 months, then that project is affordable to a wide array of businesses that could never fund that kind of project before.
I can see more projects being deployed by smaller businesses that would otherwise not be able to afford them.
But how will this translate to engineering jobs? Maybe there will be AI tools to automate most of the stuff a small business needs done. "Ah," you may say, "I will build those tools!". Ok. Maybe. How many engineers do you need for that? Will the current engineering job market shrink or expand, and how many non-trash, well paid jobs will there be?
I'm not saying I know for sure how it'll go, but I'm concerned.
By the way, car mechanics (especially independent ones, your average garage mechanic) understand less and less about what's going on inside modern cars. I don't want this to happen to us.
would be similar to solution engineers today. you build solutions using ai. think about all the moving parts to building a complex business app. user experience, data storage, business logic, reporting, etc. etc. the engineer can orchestrate the ai to build the solution and validate its correctness.
I fear even this role will need way fewer people, meaning the employment pool will heavily shrink, and those competing for a job will need to accept lower paychecks.
like someone said above. demand is infinite. imagine a world where the local AI/Engineer tech is as ubiquitous as the uber driver. don't think it will necessarily create smaller paychecks. hard to say. But I see demand skyrocketing for customized software that can be provided at 1/10 of today's costs.
We are far away from that though. As an enterprise software/data engineer, AI has been great in answering questions and generating tactical code for me. Hours have turned into minutes. It even motivated me to work on side projects because they take less time.
You will be fine. Embrace the change. It's good for you. Will lead to personal growth.
I'm not at all convinced demand is infinite, nor that this demand will result in employment. This feels like begging the question. This is precisely what I fear won't happen!
Also, I don't want to be a glorified uber driver. It's not good for me and not good for the profession.
> As an enterprise software/data engineer, AI has been great in answering questions and generating tactical code for me. Hours have turned into minutes.
I don't dispute this part, and it's been this way for me too. I'm talking about the future of our profession, and our job security.
> You will be fine. Embrace the change. It's good for you. Will lead to personal growth.
We're talking at cross-purposes here. I'm concerned about job security, not personal growth. This isn't about change. I've been almost three decades in this profession, I've seen change. I'm worried about this particular thing.
3 decades. me too. since '97. maybe uber driver was a bad example. what about having a work model similar to a lawyer's, whereby one can specialize in creating certain types of business or personal apps at a high hourly rate?
I get this argument, but it feels we cannot always reason by analogy. Some jumps are qualitatively different. We cannot always claim "this didn't happen before, therefore it won't happen now".
Of course assemblers didn't create fewer programming jobs, nor did compilers or high level languages. However, with "NO CODE" solutions (remember that fad?) there was an attempt at reducing the need for programmers (though not completely taking them out of the equation)... it's just that NO CODE wasn't good enough. What if AI is good enough?
> I'm worried these technologies may take my job away
The way I look at this is that with the release of something like DeepSeek, the possibility of running a model offline and locally to work _for_ you while you are sleeping, doing groceries, or spending time with your kids/family is coming closer to reality.
If AI is able to replace me one day I'll be taking advantage of that way more efficiently than any of my employee(s).
You won't be happy doing a robot's job either, at least not for long.
In the ideal case, we won't be dependent on the unwilling labor of other humans at all. Would you do your current job for free? If not -- if you'd rather do something else with your productive life -- then it seems irrational to defend the status quo.
One thing's for certain: ancient Marxist tropes about labor and capital don't bring any value to the table. Abandon that thinking sooner rather than later; it won't help you navigate what's coming.
That's not historically what's happened though, is it? We've had plenty of opportunities to reduce the human workload through increased efficiency. What usually happens is people demand more - faster deliveries, more content churn; and those of us who are quite happy with what we have are either forced to adapt or get left behind while still working the same hours.
Jevons' paradox really does work for everything, not just in the way people have used it this last week in terms of GPU demand. People always demand more, and thus there is an endless amount of work to be done.
We don't have enough because the productivity improvements are not shared with the working class. The wealth gap increases, people work the same. This is historically what has happened and it's what will happen with AI. The next generations will never have the opportunity to retire.
Because billionaires think that you are a horse and that the best course of action is to turn you into glue while they hope AGI lets them live forever.
Billionaires don't think about you at all. That's what nobody seems to get.
We enjoy many luxuries unavailable even to billionaires only a few decades ago. For this trend to continue, the same thing needs to happen in other sectors that happened in (for example) the agricultural sector over the course of the 20th century: replacement of human workers by mass automation and superior organization.
In the past, human workers were displaced. The value of their labour for certain tasks became lower than what automation could achieve, but they could still find other things to do to earn a living. What people are worrying about here is what happens when the value of human labour drops to zero, full stop. If AI becomes better than us at everything, then we will do nothing, we will earn nothing, and we will have nothing that isn't gifted to us. We will have no bargaining power, so we just have to hope the rich and powerful will like us enough to share.
If anything like that had actually happened in the past, you might have a point. When it comes to what happens when the value of human labor drops to zero, my guess is every bit as good as yours.
I say it will be a Good Thing. "Work" is what you call whatever you're doing when you'd rather be doing something else.
The value of our labour is what enables us to acquire things and property, with which we can live and do stuff. If your labour is valueless because robots can do anything you can do better, how do you get any of the possessions you require in order to do that something else you'd rather be doing? Capitalism won't just give them to you. If you do not own land, physical resources or robots, and you can't work, how do you get food? Charity? I'd argue there will need to be a pretty comprehensive redistribution scheme for the people at large to benefit.
What we see through history is that human labour cost goes up and machine cost goes down.
Suppose you want to have your car washed. Hiring someone to do that will most likely give the best result: fewer physical resources used (soap, water, wear on cloth), less wear and tear on the car's surface, less pollution, and possibly a better result.
Still the benefit/cost equation is clearly in favor of the machine when doing the math, even when using more resources in the process.
What is lacking in our capitalist economic system is that hiring people to perform services is punished with much higher taxes compared to using a machine, which is often even tax-deductible. That way, the machine brings benefits only to its user (often a wealthier person), not so much to society as a whole. If only someone could find a solution to this tragedy.
Forgetting the offhand implication that $6,000 is not out of reach for anyone, this will do nothing. If we're really taking this to its natural conclusion, that AI will be capable of doing most jobs, companies won't care that you have an AI. They will not assign you work that can be done with AI. They have their own AI. You will not compete with any of them, and even if you find a novel way to use it that gives you the gift of income, that won't be possible for even a small fraction of the population to replicate.
You can keep shoehorning lazy political slurs into everything you post, but the reality is going to hit the working class, not privileged programmers casually dumping 6 grand so they can build their CRUD app faster.
But you're essentially arguing for Marxism in every other post on this thread, whether you realize it or not.
Yeah, there's always some reason why you can't do something, I guess... or why The Man is always keeping you down, even after putting capabilities into your hands that were previously the exclusive province of mythology.
I prefer not to use -ists and -isms. I read that Marx wrote that he was not a Marxist. Surely his studies and literature got used as a frame of reference for a rather wide set of ideologies. Maybe someone with a deeper background on the topic can chime in with ideas?
I'm worried these technologies may take my job away and make the balance between capital and labor even more uneven.
Why should I be happy?