This is why agentic AI will likely cause a cataclysm in white-collar labor soon. The reality is that a lot of jobs just need "OK" performers, not excellent ones, and the tipping point will come when the average AI is more useful than the average human.
I had a similar conversation with my CEO today - how does the incoming crop of college grads deal with the fact that AI can do a lot of entry-level jobs? This is especially timely for me as my son is about to enter college.
So I ended up posing the question to Claude and the response was “figure out how to work with me or pick a field I can’t do” which was pretty much a flex.
On some level, though this isn't quite what the person you're replying to was saying, it doesn't really matter whether AI actually can do any entry-level jobs. What matters is whether potential employers think it can.
To impact the labor market, they don't have to be correct about AI's performance, just confident enough in their high opinions of it to slow or stop their hiring.
Maybe in the long term, this will correct itself after the AI tools fail to get the job done (assuming they do fail, of course). But that doesn't help someone looking for a job today.
Customer service, entry-level sales, junior data/business specialist:
- Ada's LLM chatbot does a good enough job to meet service expectations.
- AgentVoice lets you build voice/SMS/email agents and run cold sales and follow-ups (there are probably better options; it was just the first one I found).
- Dot (getdot.ai) gives you an agent in Slack that can query and analyze internal databases, answering many entry level kinds of data questions.
Does that mean these entry-level jobs go away? Honestly, probably not. Fewer people will be hired at any one company, but more companies will be able to create hybrid junior roles that look like an office manager or general operations specialist with superpowers, and entry-level folks will step quickly up a level of abstraction.
Thank you for mentioning some cool projects, but they all seem to target very specific use cases, not necessarily ones handled by junior roles. I guess PaaS services like Heroku/Render/Fly took away junior DevOps roles then, but at least PaaS don't hallucinate or generate infra that is subtly wrong in non-obvious ways.
Paradoxically, the hardest jobs to automate seem to be physical ones. A white-collar worker is threatened by AI; a blue-collar worker, not as much. I can totally envision AI software engineers (they're already okay if you check their work), but as of yet there are no AI plumbers or mechanics. Maybe there won't be, given the costs associated with producing physical machines vs software ones.
Your average white collar worker is certainly challenged, but I think the talent of neurodiverse people is going to become even more vital as average-ability people are more and more challenged. Of course, there's the saying:
"A man is his own easiest dupe, because what he wishes to be true, he will generally believe to be true." and I'm neurodivergent, so it makes sense that my assumption that shit'll probably turn out okay for me is a foregone conclusion.
It's just a matter of time. Your statement assumes AI won't help to develop robotics.
Robotics is the big unlock of AI, since the world is continuous and messy, not discrete. Training a massively complex equation to handle this is actually a really good approach.
I'm not sure about that. For robots to actually be economically useful is a high bar - more so than you might think. It isn't just our brains but our strength, metabolism, and more, all in a single package.
For example, you need them to:
- Run all day (and maybe all night too, which MAY be an advantage over humans) despite high energy requirements in varied environments. In many settings this means much better power sources than current battery technology, especially where power is not provisioned (e.g. many different sites) or where power lines are a hazard.
- Have low failure rates. Unlike software, failing fast and iterating are not usually options in the physical domain. Failure sometimes has permanent and far-reaching costs (e.g. resource wastage, environmental contamination, loss of life).
- Be lightweight and agile. This goes a little against point 1, because batteries are heavy. Many environments where blue-collar workers go are tight, have limited weight-bearing capacity, etc.
- Handle "snowflake" situations. Even in house repair there are different standards over the years, hacks, and age-related quirks that mean what is safe to do in one residence isn't in another. The physical world is generally like this.
- Unlike software, iterating on robot models is expensive, slow, capital-intensive, and subject to the laws of physics. The rate of change between models will be slower as a result, giving people time to adapt to the disruption. Think in terms of manufacturing timelines.
- Anecdotally, many tradespeople I know, after talking to tech people, hate AI and would never let robots onto their sites to learn how they do things. Given that many owners are also workers (more small businesses), the alignment between worker and business owner here is stronger than in a typical large organisation. They don't want to destroy their own moat just because "it's cool", unlike many tech people.
I can think of many, many more reasons. Humans evolved precisely for physical, high-dexterity work requiring hand-eye co-ordination, much more so than for white-collar intelligence (i.e. Moravec's paradox). In all honesty, I'm wondering at this stage whether I should move to a trade, despite liking my SWE career. Even if robots do take over, it will happen much more slowly, allowing me as a human to adapt at pace.
From a very inhuman perspective, and one I don't find appropriate for general use: a human physical worker is a high capital and operational expense. A robot may not have such high costs in the end.
Before a human physical worker can start being productive, they need to be educated for 10-16+ years, while being fed, clothed, sheltered and entertained. Then they require ongoing income to fund their personal food, clothing and shelter, as well as many varieties of entertainment and community to maintain long-term psychological well-being.
A robot strips so much of this down to energy in, energy out. The durability and adaptability of a robot can be optimized for the kinds of work it will do, and unit economics will find a way to make the capital cost of preparing a robot for service accessible.
Emotional opinions on AI aside, I think we will see many additional high-tech support options in the coming decade for physical trades and design trades alike.
While I agree with you, this cost isn't really borne by the people employing the human. Maybe by the community, the taxpayer, even parents, but not the employer. As such, the costs you mention are "sunk" - in the end, as an employer, I either take on a human ready to go or try to develop robots. That cost is effectively subsidized via community agreement, not just for economic reasons but for societal ones. Generally, as a trades employer, I'm not "big tech" with billions of dollars in my back pocket to spend on R&D long shots like AI/Google DeepMind/etc that most people thought would never go anywhere (i.e. the AI winter) - I'm usually a small business servicing a given area.
I'm not saying the robots aren't coming - just that it will take longer, and being disrupted last gives you the most opportunity to extract higher income for longer and to shift from labor to capital for your income. I wouldn't be surprised if robots make no real inroads into the average person's life in the coming decade, for example. As intellectual fields are disrupted, purchasing power will transfer to the rest of society, including people not yet affected by the robots, making capital accumulation even easier for them at the expense of AI-disrupted fields.
Assuming capitalism, it is a MUCH safer path to provide for yourself and others in a field that is comparatively scarce and in high demand. Scarcity and barriers to entry (i.e. moats) are rewarded through higher prices/wages/etc. Efficiency, while beneficial for society as a whole (output per resource increases), tends to punish the efficient, since their product is comparatively less scarce than others'. Given the same purchasing power (money supply), this makes intelligence goods cheaper and less-disrupted goods more expensive, all else being equal. I find tech people often don't have a good grasp of how efficiency and "cool tech" interact with economics and society in general.
In the age of AI, the per-unit value of education and intelligence diminishes relative to other economic traits (e.g. dexterity, social skills, physical fitness). It's almost ironic that the intellectuals themselves, from a capitalistic viewpoint, will be the ones to destroy their own social standing and worth relative to others. Nepotism, connections and skilled physical labor will have a greater advantage in the new world compared to STEM/intelligence-based fields. I'll be telling my kids to really think before taking on a STEM career, for example - AI punishes this career path economically and socially, IMO.
There are more options than those two; there's a reason "spanner in the works" is a colloquialism. Humans become disagreeable when our status is challenged, and many people are very attached to the status of "employed".
That's easy. The CEO has authority and social connections, has done mutually beneficial deals, has the soft skills/position to command authority over others, has leverage over others, etc - all of which are economic assets. In an AI world this skill set is comparatively MORE scarce than intelligence-based skills (e.g. coding, math, physics) and so will attract a greater premium. Nepotism and other economic advantages will play a bigger role in an AI world.
AI rewards the skills it does not disrupt. Trades, salespeople, deal makers, hustlers, etc will do well in the future, at least relative to knowledge workers and academics. There will be disruptors who get rich for sure (e.g. AI developers) for a period of time, until they too make themselves redundant, but on average their wealth gain is dwarfed by the whole industry's decline.
Another case of tech workers equating worth to effort and output, when really, in our capitalist system, worth correlates with scarcity. How hard you work or how much you produce has little to do with who gets the wealth.
Claude isn't wrong. The baseline for entry level has just risen. The problem isn't that it's risen (that happened continuously even before LLMs), but the speed at which it's rising.
They are already good at criminal activities such as phishing. That bar is rather low, especially once you scale up (hitting 100 people and successfully scamming 1 is still a great ROI with cheap small models).
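The ROI claim above can be made concrete with some back-of-envelope arithmetic. All the numbers below (success rate, payout, per-message cost) are hypothetical placeholders for illustration, not measured figures:

```python
def scam_roi(targets: int, hit_rate: float, payout_per_hit: float,
             cost_per_message: float) -> float:
    """Return the revenue-to-cost ratio of a bulk phishing run."""
    revenue = targets * hit_rate * payout_per_hit
    cost = targets * cost_per_message
    return revenue / cost

# Hypothetical numbers: 100 targets, a 1% success rate, $500 per
# successful scam, and $0.01 of model/compute cost per message.
ratio = scam_roi(targets=100, hit_rate=0.01,
                 payout_per_hit=500.0, cost_per_message=0.01)
print(ratio)  # 500.0 - $500 returned on $1 of spend
```

Even if the assumed success rate were ten times lower, the ratio would still be well above break-even, which is the point being made: cheap small models shift the economics, not the technique.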
But I don't see what governments can really do about it. I mean, sure, they can ban the models, but enforcing such a ban is another matter - the models are already out there; each is just a large file, easy to torrent, etc. The code needed to run them is also out there and open source. Cracking down on top-end hardware (and note that at this point that means not just GPUs but high-end PCs and Macs as well!) is easier to enforce but will piss off a lot more people.
Maybe I'm missing something, but we seem to be a long way off from the wave of AI replacing a lot of jobs, or at least my job. By title I'm a Software Engineer. But the work that I do here, that we do, well frankly, it's a mess. Maybe AI can crank out code, but that's actually not the hardest part of the job or the most time-consuming part. Maybe AI will accelerate certain aspects but overall, we will all be expected to do more. Spelling and grammar checkers are great. But when you're writing five times the amount you used to write, you barely even notice.
A surprising number of jobs could probably be done with AI right now, depressingly enough. Look at programming. Yes, AI is nowhere near as good as a decent programmer, can't handle rarer or more esoteric languages and frameworks well, and struggles to fix its own issues in many circumstances. That's not good enough for a high-level FAANG job or a very technical field with exact requirements.
But there are lots of 'easy' development roles that could be mostly or entirely replaced by it nonetheless. Lots of small companies that just need a boring CRUD website/web app that an AI system could probably throw together in a few days, small agency roles where 'moderately customised WordPress/Drupal/whatever' is the norm and companies that have one or two tech folks in-house to handle some basic systems.
All of these feel like they could be mostly replaced by something like Claude, with maybe a single moderately skilled dev there to fix anything that goes wrong. That's the sort of work that's at risk from AI, and it's a larger part of the industry than you'd imagine.
Heck, we've already seen a few companies replacing copywriters and designers with these systems because the low quality slop the systems pump out is 'good enough' for their needs.
There's quite a few companies (consulting companies/IT staffing) that make tons of money doing staff aug etc. for non-"tech" companies. Many of these companies have notoriously poor reputations for low-quality work/running out the clock while doing little actual work.
From experience dealing with a few of these companies, there's almost no chance that "vibe coding" whatever thing is going to be anything other than a massive improvement over what they'd otherwise deliver.
Thing is, the companies hiring these firms aren't competent to begin with, otherwise they'd never hire them in the first place. Maybe this actually disrupts those kinds of models (I won't hold my breath).