The decline of computers as a general-purpose technology (2021) (acm.org)
242 points by yeesian on Oct 21, 2023 | 231 comments



I understand this is ACM, but I would say the decline has strictly been a failure of software.

Specifically, corporate IT's failure to deliver general purpose development tools usable by anyone in a company to make computers do work for them (aka programming).

There's still a ton of general value that could be delivered by general purpose processors/computers.

But instead, most non-programmers live in a Kafkaesque reality where the only programs they have access to don't do what they want, and so their work becomes "working around the bad program."

I helped a lady on Friday at a midsized bank whose job it was to take a set of dates in an Excel file, search for each one individually in an internal search tool, then Ctrl+F the resulting report and verify that every code listed in the original document exists in the report.

For 100+ entries.

Every month.

In 2023.

What the fuck are we doing as a society, that this lady, with a machine that could compute the above in a fraction of a second, has to spend a decent chunk of her work time doing this?

Example specifically chosen because I've seen innumerable similar "internal development will never work on this, because it's too small of an impact" issues at every company.

Computers should work for us. All of us. Not the other way around.


Hypothetically, if she figured out how to automate it on her own, would she get any kind of reward for it? If not, why should she automate it?

Another example: if there's some repetitive task that you automate at a software dev job, would you get rewarded for figuring out how to automate it? The answer obviously depends on culture; at my current gig you just get brushed off

Seems to me like you’re assuming that the economy incentivizes efficiency. Imo it doesn’t, it’s just a bunch of stupid rich people sloshing money around (for the most part). None of it makes any sense


I was a temporary worker at a field dispatch office. We had to telephone all of our technicians and enter their time into a time sheet. This was approximately 1990, and it turns out all the technicians had email. Since the time-recording format was very simple, I worked out a shell script to let them enter their time in an email message and mail it to me; my forwarding rules would pipe it through the shell script, and a couple of ANSI codes later, the time was entered automatically into the system. I would check it for accuracy, but it saved me having to Tab/Enter about 25 times per employee just to get to the part where I entered their time. Literally, it was five minutes per person (including voice comms time) reduced to 30 seconds.

A senior dispatcher got wind of it. She went to the boss’ boss’ boss, and complained that “this boy is going to computerize us out of a job.”

It wasn't long before I was summoned to appear. Three things happened at that meeting. The über-boss told me to keep it on the down low. He also gave me a copy of his shell script programming book. And finally, he told me to get the heck out of there, go back to school, and stop putzing around at a job like this before I ended up becoming a middle manager like him, in a career with low reward.


> if she figured out how to automate it on her own, would she get any kind of reward for it?

More likely she would be hauled up in front of a gang of rabid, frightened, myopic and punitive ICT managers where she'd be reprimanded for breaking a litany of "policies".

In these kinds of places you are not supposed to think. There, computers are not intelligence amplifiers or "bicycles for the mind"; they are more akin to the presses and looms in dark satanic mills.


I synthesize proteins to sell on an industrial scale. When I started working here a few years ago I noticed much of the programming/prep work could be easily automated, so I did. In many cases I could at least double productivity.

None of these improvements ever resulted in getting a raise, or a faster promotion, or getting to go home earlier. All it earned me was extra time to do other people's work, or to stare at the floor and look like a lazy unproductive employee.

I never had any real expectations otherwise; I did it out of boredom. But still I feel stupid for having done it at all.


> Hypothetically, if she figured out how to automate it on her own, would she get any kind of reward for it? If not, why should she automate it?

The reward is more time. Now where that time is spent is another story.


Sounds like you're implying that that newly spared time would be hers to spend and not her employers?


If she keeps her mouth shut about it, definitely.


Of course. She was presumably hired for regular office work, not R&D.


why not? she needs to be rewarded for that automation somehow.


Realistically that reward is going to be:

1. more work

2. nice words and then more work

3. nice words, more work and then she is asked to do the same for another person's task who is then fired

Sorry to be cynical here, but it is very rare for management to reward the few people who do a better-than-good job with more free time or less work.


I believe dingi is perfectly aware of the realistic outcomes, and is instead describing the normative ones, i.e. what you need if you want an enterprise that actually self-improves, where the best employees aren't actively looking for the door whilst concealing things from management.

It is, however, wholly a management problem to find those actually-rewarding rewards.

Crazy idea: what if employees retained IP of any spontaneous, self-directed R&D? You could then license their tech and make the role redundant, something any ruthless capitalist would consider a win. The employee can go job-hunting with a small passive income and glittering CV, which means they're much more likely to actually tell you so you can get that outsourcing win.

In reality, it seems far too many businesses have moats as a result of excellent decisions made by founders in the past, and so can't be outcompeted by companies that do manage talent better.


She doesn't have to tell anyone


Exactly my point. But if you are a manager, you might wanna consider if that is the incentive structure you wanna have at your organization: People who do good work get "punished" with more work, people who keep it low don't.

I certainly have left jobs because of that.


I think about incentive structures from an opportunity cost standpoint too.

Everyone has finite amounts of time they can spend on work.

Ceteris paribus, if you spend 90% of your time working and 10% politicking, at most companies you will be out promoted by someone who spends 60% of their time working and 40% politicking.

The parallel IC track that tech popularized solves this to some degree... but most non-tech companies don't have that track.


What mechanisms successfully address this?


> She doesn't have to tell anyone

She doesn't, because there is an app on her work machine that gathers all kinds of data on how that computer is used. Sooner or later someone will notice, and there will be consequences. There are no trophies for running unauthorized software on the company's data and documents.


More time, yes, probably doing something else equally tedious, or looking for a job.


> Hypothetically, if she figured out how to automate it on her own, would she get any kind of reward for it? If not, why should she automate it?

One would hope she would be able to move on to more meaningful work, rather than doing drudgery while waiting for someone to eliminate her when someone figures out it can be automated.

That said, a lot of people don't know how to automate processes and it would likely face some push back since the process would be non-standard and tied to an individual. That can have long term costs should the knowledge be lost or a software upgrade breaks something.


> Hypothetically, if she figured out how to automate it on her own, would she get any kind of reward for it? If not, why should she automate it?

the reward is the intrinsic satisfaction of a job well done, and something to put on the CV for the next job.

automating dumb bureaucratic shit is how I learned to code in the "real world".


> something to put on the CV for the next job.

spoiler alert: you can put anything on your CV without actually doing it. I've had the experience of dealing with people who I strongly doubt actually did this. From the looks of it, they have been rewarded for it for more than a decade.

> the reward is the intrinsic satisfaction of a job well done,

just because it applies to you does not mean it's a universally shared experience.

for some it's not even satisfaction that they get from it; it's frustration and a feeling of being cu*ed, where they put in the effort and someone else gets the reward.


I had a job like this as a student. I automated it and didn't tell anyone and used the newly won time to do other stuff.


I hate this take on optimization.

She would of course get a raise or would succeed otherwise in our economy.

The only reason why I'm as successful as I am is that people understand that I'm so good in optimizing shit that they give me raises.

And the time alone that I have to myself to skill up, instead of wasting it on repetitive things, is ridiculous if you think about return on investment and reinvestment.


> She would of course get a raise or would succeed otherwise in our economy.

Are you kidding? I'm happy your optimisations have been recognised and rewarded, because that's how it should be, but this lady would almost certainly just get more work to fill that newly freed time, for no additional compensation.


Many companies don’t even give top performers raises large enough to cover inflation. Who are you kidding here


I get bonuses because my manager knows very well he has much bigger issues when I leave.

You have to have a certain amount of flexibility of course and be willing to quit.

I'm still very sure that good performance is overall much more beneficial than staying and acting at the bottom.

You are hurting yourself doing mundane tasks while the other person gets compound interest and new skills.


Your situation is unusual and you should be grateful. The person described by op is a quasi government employee likely working for a TBTF institution that does not do this


> if she figured out how to automate it on her own, would she get any kind of reward for it? If not, why should she automate it?

Because it makes her job easier or improves her performance?

That's why I automate things at work, anyway. Being rewarded by my company for doing it doesn't really enter into the equation for me.


What happens to her if her performance is improved?


That entirely depends on the company she works for, and her own disposition.


Quite likely she will be punished for it, directly or indirectly. This is why it’s bad to be an employee. If you earn a living by knitting scarves in your house and work out a way to make them faster or better, or both, you’ll make more money or have more free time. If you knit scarves for a salary you’ll probably suffer for doing it faster or better.


> Quite likely she will be punished for it, directly or indirectly.

Not at most modern companies in the real world.

Jobs have already been so hyper-specialized that you have minimal staff "managing" large portions of the company, amortized over a large number of locations / amount of business.

Consequently, if a job task is taken off their plate, there are innumerable additional tasks to backfill the free time.

And critically, tasks that are probably more intellectually fulfilling than the lowest-hanging-fruit rote tasks that are automated.


"assuming the economy incentivizes efficiency"

In the long run it does. Today's stupid rich people are tomorrow's "my grandfather was x and now I'm middle class"

Happens more and faster than you think.

Economic disparity metrics suffer from selection bias.

But beyond that, I think that attitude comes from misunderstanding and confusing an ideal of "fairness" with efficiency.

Efficiency and fairness are far and away not the same thing.

Autocracies, for example, can be very efficient.

The same applies to economic competition. Scams and cons are efficient. Crime is efficient. Corruption... All are very efficient at redistributing wealth. So government is needed to enforce fairness.

But when it comes to the example problem here of adding excess value to a low-value job, the efficiency of the market is usually acting at the level of the firm, rather than within it.

People are naturally lazy, and for most people, the imagined ungratefulness of a company paired with an inflated view of their own value causes them to not even try and certainly not persist at innovating.


> Another example, if there’s some repetitive task that you automate at a software dev job, would you get rewarded for figuring out how to automate it? The answer is obviously dependent on culture, at my current gig you just get brushed off

The only reward I need for automating a tedious task is my own sanity.


What the fuck are we doing as a society that we have such a system of perverse incentives in place?


Our society is geared -- at all levels -- towards minimizing expense regardless of the impact on quality or even on society in general. We're all racing to the bottom.


Giving people raises for automating things is minimizing expense.

Suppose Alice is making $40,000/year and comes up with a way to automate a third of her job. So you start paying her $50,000/year and give her some other work to do. Then Bob and Carol each find a way to automate a third of their own jobs, so now they all make $50,000 and have made Don redundant.

The company is now paying $150,000 in total salary instead of $160,000 and only has to pay insurance and provide office space for three employees instead of four. Meanwhile the workers who found ways to improve efficiency are making more money and have the incentive to do it again and get another raise.

Companies may not actually do this, but those companies are mismanaged and putting themselves at a competitive disadvantage.


Bob, Carol and Alice might not feel comfortable with costing Don his job.

This can be for purely empathetic reasons, but also because they realize they’re all potential future Dons if these automations continue.


> they realize they’re all potential future Dons if these automations continue.

That isn't how the economy works. Bob, Carol, Alice and the boss are now collectively making $40,000+/year more than they did before. Where does this money go? To buy something they couldn't previously afford, made by Don at his new job somewhere else, which was created to meet the increase in demand caused by the increase in productivity.

The only way automation reduces employment is if it makes goods cost less and then people choose to work less because working less still allows them to buy everything they want. But in general people don't do that. Given the choice between having the same stuff they have now and working fewer hours or having more stuff than they do now and working the same number of hours, they pick the second one.


It's called capitalism. Some confuse that with the more neutral concept of a market economy.

In capitalism, individual wealth accumulation is the primary goal, not employing and paying people living wages.

Since capitalism is a disequilibrium state, full employment is not expected. Busy work is part of the system.


It is not entirely correct to describe the social contract within a corporation as capitalist if you are salaried and compensation is indirectly tied to performance.


While this is true and arguably of more importance, the thesis of the article was something quite specific: "the economic cycle that has led to the usage of a common computing platform, underpinned by rapidly improving universal processors, is giving way to a fragmentary cycle, where economics push users toward divergent computing platforms driven by special purpose processors". This is likely to exacerbate the issue you describe, but it is not caused by it.

I don't think your observation was any less true 20 years ago either. There hasn't been a "decline", as such - more of a failure to realize potential. Office Space (1999) was full of people doing mindless busywork that could easily be automated.


From a hardware perspective, the ACM cycle discussed is tightly tied to continued general purpose CPU performance gains.

There are many pure-performance classes of software, where more performance = more value. Those classes have been diverging since the 80s (media playback), 90s (graphics), 00s (mobile), and ~10s (gpgpu).

But there are other classes that are functionality limited. E.g. electronic medical record or enterprise resource planning.

If software functionality were more plastic or expanded in those, the same general purpose performance would then be more valuable, and investment would also be incentivized.

Accepting the inevitability of divergent, harder-to-user-program platforms while we still have a lot of value on the table feels premature.

And it feels like it bodes badly for further losses of user computing sovereignty, as hyper-optimized hardware makes cloud platforms more attractive and kills off user-owned processing.


There are many situations where manually dealing with 100+ records is a decent tradeoff. I feel people in our field overestimate the actual complexity of many dumb-looking tasks.

From your bank employee example, it looks like your solution would be to open the bank's internal tool to her Excel instance, have it somehow find the right records and inject the right values, while checking the listed codes. That means either a series of APIs, if they care about data exposure, or huge CSV exports that they then need to track?

And as it seems to be important enough to warrant verification, it means your program is not some random batch, it needs significant validation.

Then picture the internal tool changing its output format. Reports getting categorized differently because of organizational change. New fields being added to follow new legislation. The tool could break at any of these changes, and the lady is left with a broken system, except now a lot of money has been invested, it's supposed to have solved her problem, she might not have the same access to data she had before the integration, and overall the situation could be worse than before.

That feels like a bleak scenario, but it happens all the time as well.


It’s basically a JOIN, the most fundamental data manipulation operator. If you can’t automate this, then you don’t “get” computers at all.

I once had two guys arguing next to me about how it was impossible to produce a simple report about VM storage utilisation in under a month. I produced the report, printed it out, and shoved it in their hands just to shut them up while they were still busy arguing. A “month” of work required five joins.


> It's basically a JOIN

The comment you're replying to just explained why this is not "basically a JOIN." There are two unrelated systems (not SQL databases) which don't expose any APIs to the analyst.


It's still basically a join. If it's on a computer, it can be converted to text files, and then it can be JOINed, one way or another.
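
A minimal sketch of that, assuming both systems can at least export to CSV (the file and column names below are made up for illustration):

    import csv

    # Hypothetical export of the Excel file, saved as codes.csv with a "code" column.
    with open("codes.csv", newline="") as f:
        expected = {row["code"].strip() for row in csv.DictReader(f)}

    # Hypothetical export of the internal tool's report, report.csv, also with a "code" column.
    with open("report.csv", newline="") as f:
        found = {row["code"].strip() for row in csv.DictReader(f)}

    # The "join": which expected codes are missing from the report?
    missing = sorted(expected - found)
    print("All codes present." if not missing else f"Missing codes: {missing}")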


In these cases, the work is usually more about connecting the systems together than about the data manipulation itself.


No, that's just made-up work that's not required to solve the problem. The systems are already connected enough by virtue of being accessible by the same person on the same computer at the same time. Turning this into a software project and building Proper Integration only creates a tight coupling between the systems, which will keep producing bugs and require ongoing maintenance.


My sibling from another mother. That's exactly what we did.


[flagged]


This is a pointlessly mean and unpleasant comment. It was polite discussion up until that point.


I think you're right. I've sometimes been in the situation of having to maintain a script that depends on APIs or data formats that change from time to time without warning. A fair amount of skill and effort is required. (There is skill involved in writing the script so that it is maintainable and not too fragile but still detects when something has gone wrong.) If the script is doing a task that has to be completed on a particular day every month, rather than whenever the script hacker is next available, then you'd have to choose between paying the hacker a reasonable retainer or having someone around who you know can still do the task manually, which means you might have to get them to do the task manually from time to time to stay in practice.

A dilemma I keep facing in practice is converting a text that someone else has produced in their own special way, like the ghastly cluttered HTML from a word processor. For example, do I attempt to use a Perl script to automatically convert some of the mark-up, or do I throw away all the mark-up and put back the italics by hand, with the risk of making a mistake? (If anyone knows of a good configurable or modifiable tool for converting cluttered HTML into sane HTML/XML I'd be interested to hear about it. My Perl scripts for doing it are not great.)


Over the years, I've internalized a few rules to soften what you point out.

* Understand both the cost of making a mistake and the cost of the process not running. Be aggressive on things that are 'nice to have' or have downstream manual re-validation. Be conservative on things that are 'must have' and immediately affect an outcome.

* First, do no harm -- program defensively and break execution on any violated expectations

* Understandable error messages at the user level are gold. The user has no idea what "schema mismatch" means. They can understand "Column 3 had unexpected value. Was expecting 'X'. Saw: 'Y'" (a minimal sketch follows below)
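
A minimal sketch of the last two rules together (the column name and values are made up):

    # Break execution on violated expectations, with an error message
    # the user can act on (rather than a stack trace or "schema mismatch").
    def check_cell(row_number, column_name, actual, expected):
        if actual != expected:
            raise SystemExit(
                f"Row {row_number}, column '{column_name}' had an unexpected value. "
                f"Was expecting '{expected}'. Saw: '{actual}'. Stopping before anything is written."
            )

    # Hypothetical usage: validate a status column before processing the row.
    check_cell(3, "Status", actual="Pending", expected="Approved")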

--

On ugly HTML, that's a bad one, because of how ugly it can get.

I've only done toy work with it, but I hear good things about Beautiful Soup (Python). https://beautiful-soup-4.readthedocs.io/en/latest/
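
For the cluttered-HTML case, the rough shape of a whitelist-based cleanup with it might look like this (the tag whitelist is just an example, and real word-processor output will need more care):

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    KEEP = {"p", "em", "i", "strong", "b", "ul", "ol", "li"}  # example whitelist

    def sanitize(html):
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup.find_all(True):
            if tag.name not in KEEP:
                tag.unwrap()     # drop the tag itself but keep its text/children
            else:
                tag.attrs = {}   # strip style=, class="MsoNormal", etc.
        return str(soup)

    messy = '<p class="MsoNormal"><span style="mso-fareast-font-family:Calibri">Hello <i>world</i></span></p>'
    print(sanitize(messy))  # <p>Hello <i>world</i></p>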


Sounds like a good usecase for an LLM.


Perhaps, but while LLMs may be slightly better (or slightly worse) than average humans, 20 instances of the same LLM will have correlated errors, whereas 20 humans will be… well, not entirely[0] uncorrelated, but less correlated, in their errors.

[0] We'll still sometimes get an entire office of telephone help-desk people who are all convinced that $1/megabyte is the same as $0.01/megabyte, or whatever that story was a few decades back (I can't Google it any more to find it, there was a recording of the phone call).


it was 0.01¢ and the rep / software was billing it as $0.01 - saying 0.01 of a penny is $0.01.

I may be off by a factor of 10.


We were able to fix it up with some automation in a couple days.

Critically, still running visually on her machine, and with internal resources for lifecycle support if the process changes.


Finding internal resources to fix the tool when things change is very prescient.

Growing up, my mom performed a job where she would essentially copy car crash data from one system into another, row by row. As a teenager learning to program, it was obvious to me this could be easily automated. She (perhaps rightfully) didn't want to run anything I cooked up on her company's computer!

A year or so later she was laid off along with many others in the company performing similar roles.

And this is the problem, isn't it? Once the job is automated, the analyst is no longer necessary. You really need to "learn to fish."


Do you have any guess how long it will take to break even? Presumably you charged some amount to spend a few days automating it and hopefully it frees her up to do something more valuable for the company.

Sometimes the reason things aren't automated is because there isn't anything more valuable for that person to do so automation doesn't have a positive return on investment.


Can't speak for them, but I've done office automation work with a breakeven measured in days.


I’ve automated things where the breakeven will occur decades from now. It was fun though, the chance of errors is decreased, and I learned a bunch of new stuff.

Sometimes it’s about the journey and not the destination.


Calculating breakeven ROI is part of the push from a lot of companies to start citizen developer initiatives.

The point being that the "I" is also flexible -- an existing employee fooling around to learn automation whenever they have downtime is much cheaper than hiring a consultant.

Sure, the automation is worse and delivered more slowly. But you can get automations from a lot of people in parallel, plus you're upskilling your workforce.


It was an internal citizen developer initiative that I was supporting externally. Don't know her salary, so couldn't say.

> because there isn't anything more valuable for that person to do

I've seen an interesting shift post-COVID where (remaining) ops workers are often more coordinators of / responsible for their area of responsibility.

Which is a distinction, because in that sort of job, not having Task X leads directly into finding something else you can be doing to improve your area.


> Critically, still running visually on her machine, and with internal resources for lifecycle support if the process changes.

That's such an elegant and perfect solution. She keeps track of what's happening, it all runs through her GUI so worst* case scenario she does it herself again, and she can get help if/when she needs it.

* the actual worst case would be the script going berserk, but as she's not the one who wrote it, she should be shielded from the blame if that ever happens


>I helped a lady on Friday at a midsized bank whose job it was to take a set of dates in an Excel file, search each one individually into an internal search tool, then Ctrl+f the result report and verify that every code listed in the original document exists in the report.

I happen to work in a midsized bank, and we have TONS of these sorts of manual processes. My favorite unintuitive fact is that 90% of them turn out to be some outdated compliance process that nobody remembered a person even did, and that is no longer necessary.

That's also usually the reason why something is "too small of an impact". Having 2 engineers come in to figure out that something is obviously just busywork, and then try to run some political process to convince leadership, is very expensive.


When I stumble across things like this, I always remember this scene from Babylon 5 (and Peter Jurasik's delivery!)

(Context: they've found an attache and friend they thought was dead, alive but forgotten in a prison cell) https://m.youtube.com/watch?v=kCj-Rnd5SsA


That was pretty amazing. I have to watch this show some day.


150% would recommend. I've watched it through 3 times over the years. Can't say that for any other show!

If you do, commit to watching through the end of the second season, before you make a call.

It takes that long for the show to find its feet and the overarching plot to get really going.

And expect a lot of cheesy 90s stuff. Although way less than Star Trek!

But on the whole, IMHO, it's incredibly modern and prescient for when it was released. Far fewer disconnected "monster/problem of the day" than e.g. X-Files, and even those usually still have character interactions that do important world building.


It would be nice if there was a general expectation by now that people would have a sort of “intro to scripting” type class in high school. Not everyone has to be a programmer, in the same way not everyone needs to be an author, but everybody ought to be able to write a memo, email, or shopping list.


Teaching people a scripting language when the language has already been chosen for them, the platform has been prepared, the data preformatted, etc. might be easy - though a non-trivial share of my classmates couldn't comprehend a for loop.

To have real transferrable programming skill - I am not talking about the kind that will get you a software engineering job, just basic problem solving - you need to know an awful lot about how computers work. Otherwise you'll quickly get overwhelmed by errors you can't comprehend at all stages, not being able to get data in the right format, not knowing what tools are suitable for what problems, etc.


It might be the case that a better language for this sort of thing would have to be designed. Maybe somebody needs to take another crack at the sort of Visual Basic/Excel type environment.

I think someone on this site chimes in occasionally with a sort of “spreadsheet, but each cell runs a line of Python” type program they’ve been working on (although I can’t remember the name unfortunately). That could be a good basis…


Spreadsheets might already be that better environment. You can solve a load of problems with some smarts and just spreadsheets (no Visual Basic or so necessary).

But that doesn't mean _teaching_ people how to solve problems is any easier.


VB (6?) may have been copy-and-paste friendly back in the day, but with the advent of and continuing "work" on .NET, VB is just as impenetrable as all the others now.


Look at all the other classes people are already sitting through for high school and not learning anything either. What makes you think this one would be any different?

Already around the world high schools all have something highfalutin like 'teaching critical thinking' in their curriculum.


Add it to reach the average-smart kids, those without internal knowledge of where to look or internal motivation to learn (but still will take the advanced classes for a grade).


Literacy rates in the US are pretty high I think, compared to where we were before k-12 schooling became standard.

And most people can do basic arithmetic, right?

The goal, I think, to start should be to get to the point where an office of 10 or so people has at least one or two who can string together basic scripts.


> Literacy rates in the US are pretty high I think, compared to where we were before k-12 schooling became standard.

'Post hoc ergo propter hoc.' https://en.wikipedia.org/wiki/Post_hoc_ergo_propter_hoc

By and large, kids from well-off parents were always more literate, and they still are. And the US is a lot richer now than it used to be.


If schools aren’t even teaching basic things like arithmetic and literacy, then what are they? Just a place to park kids while their parents work? In that case, might as well also do some programming classes, right? If the other option is learning nothing.

I mean, if you want to totally revamp the education system, I’m here for it, but I guess that seems like a bigger project to me.


Bryan Caplan's Case Against Education might be a good place to start, if you want to have a broader discussion about the education system.

And yes, baby sitting is a big part of the package that schools do provide in practice.

> In that case, might as well also do some programming classes, right?

I don't think programming classes would do any more damage than the current curriculum.

> If the other option is learning nothing.

Well, giving the kids free time or more physical activity might also be useful?


So your reasoning is "elites have always been more educated then average, so standard education cannot have had an effect in average literacy"? It is far more reasonable (and verifiable) to conclude that widespread standard education is more than just correlated with higher average rates of literacy.


Bryan Caplan's Case Against Education might be a good place to start. The author has collected a lot of statistics.


For the second time in as many days, I find myself saying on HN: your argument only makes sense if you reduce all societal logic to economic logic. What a way to live.


> What makes you think this one would be any different?

Because unlike almost all other classes, that one would be immediately beneficial in daily life, in a direct and apparent way.


No it won't, unless we build infrastructure for them to work with.

Even worse, given modern security standards, I wouldn't be surprised if programming and scripting were to be prohibited in the next ten years or so.


It seems like a negative feedback loop or something, nobody uses these sorts of features, so nobody implements them, and then the skills are less useful…


Microsoft Excel (or cloud-based spreadsheets) will always be allowed. Even if they remove scripting abilities, a clever person can remove a lot of office busy work with just basic Excel functionalities like pivot tables.


Basic scripting abilities would be beneficial years later for the subset of the kids who become office drones.

I would agree with your assessment about 'immediately beneficial in daily life', if you were talking about a course offered to adults who are already working a white collar job.


So, banks do actually do a lot of this. They hire people who are not really programmers by background and then teach them what they call robotic automation.

This has nothing to do with actual robots, which can make it confusing if you come across this without having the right context. What they mean by this is they have a commercial tool that implements a very simple scripting language, often in a structured way so you can edit it without using an editor. The tool has lots of commands for automating GUI actions like "find this window by title", "click that button", "load this web page", "find text that looks like this", "loop" and so on.

The idea here is that a lot of processes involve moving data from one app or window to another in a repetitive manner, so if you have a programming language optimized for that and designed for beginners, you can automate a lot of stuff. It's not a scripting language like Python, it's totally focused on driving GUIs. And it works! Robotic automation is a huge deal in some companies.
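
To give a flavor of what those GUI commands look like outside the commercial tools, here's a rough open-source analogue using pyautogui (not what banks actually buy, just an illustration; the button screenshot and the text typed are made up):

    import pyautogui  # pip install pyautogui

    # "Find this button on screen, click it, type a value, submit."
    # 'search_button.png' is a hypothetical screenshot of the button to find.
    location = None
    try:
        location = pyautogui.locateCenterOnScreen("search_button.png")
    except pyautogui.ImageNotFoundException:
        pass  # older versions return None instead of raising
    if location is None:
        raise SystemExit("Could not find the search button on screen.")

    pyautogui.click(location)                     # click the button we found
    pyautogui.write("2023-10-21", interval=0.05)  # type a date into the focused field
    pyautogui.press("enter")                      # submit the search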

I helped an ex-girlfriend learn this stuff at one point. She sort of bluffed her way through the interview, they were looking for people with more computing experience but she was a woman so the hiring was more or less automatic despite her knowing nothing about computers at all (woke companies ...) Despite the tool being designed for such people she still found it really really hard though! Some people just really struggle with structured thinking of any kind. Writing down instructions in a formal way was just totally alien to her. The tool tried its best to help users avoid mistakes but fundamentally you still have to be able to think in terms of loops, lists, multi-step processes and so on. I think she did eventually get there but it took a long time and she failed the internal exams the first time.


Take it a step further and teach actual data analysis and statistics that are not coin flip/compound interest plug and chug into the calculator problems. No matter what sort of job you do in life, being able to pull a csv from all your different credit or debit accounts and draw some inferences from that would be so useful.


Just because something fancy (or even basic) is on the curriculum doesn't mean that anyone is learning anything.

No matter how well meaning, high school barely teaches most people anything in practice.


I’m not sure what “actual data analysis” is exactly, but it sounds like something that would be better handled in a stats or science class.

Though, if programming were taught as a basic problem solving tool like writing and arithmetic, just stuck in the toolbox, a science class would probably benefit from having it there. For example, if it was expected that a kid could do programming as well as they can write a lab report, then it would probably be easy enough to add some sort of “call Numpy to do a linear regression” type section at the end of every lab report.

This sort of programming is an everyday force multiplier, and it ought to be taught as such.
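
For concreteness, a minimal sketch of what that "call Numpy to do a linear regression" step could look like at the end of a lab report (the measurements are made up):

    import numpy as np

    # Hypothetical lab data: applied mass (kg) vs. spring extension (cm).
    mass = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
    extension = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

    # Fit a straight line: extension ≈ slope * mass + intercept.
    slope, intercept = np.polyfit(mass, extension, 1)
    print(f"slope = {slope:.2f} cm/kg, intercept = {intercept:.2f} cm")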


When every workflow and every bit of data passes through computers (even if, as often in German bureaucracy, just to be printed out and filed), everyone is a programmer. Living in denial and programming ad hoc, unspecified, untested programs in very indirect and flawed ways is much worse than just teaching everyone the basics of computing.

Computers are vital and unavoidable technology. Once it was arcane knowledge to read and write, to make calculations. It will be trivial in the future to know the basics of computer science and scripting.


What I learnt from the emergence of chatGPT was that way more people were allergic to writing even a small note than I thought.


Kind of a counterpoint: I've heard countless stories (albeit, in Japan) of people doing things like "manually" doing additions of numbers they already have in a spreadsheet instead of just asking Excel to do the addition in what, two clicks, because they "can't trust the computer to do it right" which sounds like a BS rationalization to justify that they wouldn't have much of a job if the computer was doing it.


Back in 2009, I visited Tokyo for work. What I learned from the folks in my local office is the deployment software my company made tooling for was not very popular.

The reasons were mainly that most of that software was made by Western (English speaking, specifically) countries and they did not trust it. They were not exactly wrong - software tends to work best on English systems, unless it was developed by a local company. I heard of some really embarrassing Korean translation issues that even as a person just with Google as my Korean language skill, I could validate the translation issue. Like Korean 101 kinda mistakes.

So Japanese companies were using vastly inferior deployment software, which would basically RDP into an interactive session and replay inputs, because that software worked fine in Japanese.

...Or also very common was sneakernet deployment with DVD's and thumbdrives, making the lowest workers do the running around.

*I do not know the current state of software deployment in the Japanese market.


If I was a busy decision maker at a major Japanese firm with a dozen direct reports and a hundred balls in the air simultaneously, and if I could even spot translation issues in a few minutes on the polished demo presented to upper management, then there's no way I would ever assume anything is correct with the software when I'm not looking.

Let alone my subordinate's subordinate's subordinate who would actually be using the software day in day out with their job on the line.

So it's a very sensible heuristic. If it's supposed to be a serious enterprise software product, offered for sale in Japan, then spending a few million dollars to hire expert technical translators to double check everything should not be an issue.


I mean, not exactly the same thing, but just last night Apple Maps was directing me to turn down a street that had clearly been closed for some time. Computers "make mistakes" all the time in most people's experience, and after you get burned a few times (especially if you get badly burned, like getting fired over the "mistake") you learn to not blindly trust computers.


I once got fired because I upgraded a company's wifi network from a single Apple wifi bridge (the residential one!) to a managed Linksys or Netgear deployment, and there was a bug in the firmware. I don't recall the bug, but it made the chief lawyer's Apple laptop drop connectivity to their printer or something a few times the first day. I opened a support ticket, and got confirmation of the bug - at around 8PM, forwarded the email to my boss, and was fired the next afternoon.

what a crap company. They folded within 4 months IIRC, and listed me as the "financial contact" for the fiber optic internet links - got a bunch of nasty phone calls later in the year about needing to pay 5 digits in billing arrears!


Being a developer for a number of decades has firmly taught me to never blindly trust computers. Even if the software is flawless (which is literally never the case), there's still the GIGO problem which your Apple Maps example demonstrates.


Perhaps some folks here are too young to remember the Pentium FDIV bug[1], but it's not complete BS to distrust the calculations.

1 https://en.wikipedia.org/wiki/Pentium_FDIV_bug


I’m sure FDIV-like bugs are absolutely eclipsed by typos and misclicks from doing things manually.


Yeah, but typos and misclicks don't scale. What was that saying? "To err is human, but to really foul things up you need a computer."


IIRC, Excel uses floating point for calculations, so they're not wrong.


It also uses them for phone numbers, if you're not careful.

I have, in real life, received an SMS from… let's see… 4.4786e+11 welcoming me to Germany and giving me a link to the German COVID quarantine and testing rules (I was already in Germany at the time, I just turned off Airplane mode for the first time in 6 months).


Anyone who trusts computers to do non-integer math correctly does not know anything about floating point numbers.
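
The classic demonstration, in Python here but the same in any language using IEEE 754 doubles:

    # 0.1 and 0.2 have no exact binary representation, so the sum isn't exactly 0.3.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False

    # For money, use exact decimal arithmetic instead.
    from decimal import Decimal
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True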


I trust them, but they compute with a different kind of number.


> What the fuck are we doing as a society, that this lady, with a machine that could compute the above in a fraction of a second, has to spend a decent chunk of her work time doing this?

To put it bluntly: do you want her to be unemployed instead?


I was going to say that - I'm increasingly subscribing to the "bullshit jobs" theory:

* productivity for goods is high and availability is high (we can discuss inequality and distribution)

* we just don't need everybody to work 40hrs a week to obtain collectively good standard of living

* but as a society we are uncomfortable with universal basic income

* so instead we basically do UBI through inefficient, unnecessary jobs

That's my current perspective at least. As I continue to grow older and slide into cynicism though, I'm likely to switch to "everybody is idiots and everything is screwed up" theory more and more :-D


> so instead we basically do UBI through inefficient, unnecessary jobs.

Your first 3 bullet points make sense, but this last one is where I think the normal theory behind "bullshit jobs" really falls apart. Every individual business has a large, strong economic incentive to not hire these bullshit jobs if they don't need to, so why should we think they would be so generous to engage in this "bullshit jobs charity"?

I think what is really happening is that as society advances, lots of rules, regulations and processes just build up and up over time, and this complexity eventually becomes self-sustaining, even if it greatly diverges from the original intent of the original rules.

Case in point (which is a common case I know) is the complete, total insanity of the US healthcare system, specifically how healthcare is paid for. Literally every single person I know that has ever gotten into some aspect of healthcare payments (potentially with an idea to improve the madness) eventually comes to the conclusion "The whole system is completely fucked, it should all just be torn down and greatly simplified." The problem, though, is now there are a ton of entrenched interests who depend on that complexity for their livelihood (I heard it referred to as "an abusive relationship" - Company A exists to "simplify some aspects of healthcare payments", which means they depend on the underlying complexity in the first place to exist), so there are no real incentives from the people that control the levers to simplify. So a lot of those bullshit jobs come about to manage that complexity, but it's not like some company thought "let's hire people to do busywork so they'll have employment."


Whilst business owners have a financial incentive not to employ people, there are competing incentives. For example, non-owners are incentivised to have more people reporting to them because it increases their status and status is usually correlated with salary. In fact, when you are rich enough that money doesn't mean anything any more, you may as well have a bunch of people employed to do nothing but make you feel important and powerful.


> In fact, when you are rich enough that money doesn't mean anything any more, you may as well have a bunch of people employed to do nothing but make you feel important and powerful.

I’d just like to proactively coin the term “Ego Engineer” for the post-AGI world.


Traditionally they'd be called "elevator operator" or "doorman" or some equally mundane/superfluous job.


Interesting choices, given that both of those are putting humanity into what would otherwise be purely mechanical jobs.

Practicality of having someone able to adapt to novel situations aside, there's something to be said for just having a human interaction.


From a purely profit focused standpoint, these jobs should very rarely exist. And yet, due to other incentives they do. E.g. because the owner might like being greeted on their way to work.


> I think what is really happening is that as society advances, lots of rules, regulations and processes just build up and up over time, and this complexity eventually becomes self-sustaining, even if it greatly diverges from the original intent of the original rules.

This is one theory, I think a slightly different explanation could be that most corporations are too large for the people making decisions regarding things like layoffs to be able to have a clear picture of what each employee is doing

Also like a sibling comment said, there are also conflicting incentives like middle management engaging in empire building. Because of this, there isn't any vertical enforcement or clarity

Really interesting how much better complexity scales in software than it does in business


You're conflating the internal (lack of) logic from the perspective of the person employed in a bullshit job with the external logic for the existence of the position. Businesses hire people for a multitude of economically sound reasons, and very often, from the perspective of the person, the job is utterly meaningless except for providing them with a paycheck. That leads to psychic pain. The economic logic is there and moving the needle, but the social, cultural, and psychological logic is lacking. That is the problem. I think the theory is sound: that disconnect between economic logic and other levels of meaning is at the source of the problem. There's no grounds to dismiss it by reducing all logic to economic logic.


Yes, that's how it is. For every reform that would benefit society as a whole, there is now a tiny minority of certain losers with a deeply entrenched lobby against the new and for the old. Be it fossil fuels, health care, banking, peace in the Middle East, nuclear technology, the use of genetic engineering in plant breeding, electric vehicles, and so on.

I don't think UBI would change that, but UBI might have a chance to change the perception of one's job as a bullshit job (they say that's 40% of the workforce).


> certain losers with a deeply entrenched lobby

Excellent point and phrasing! And it suggests what needs to be done to move things along: (a) make them not losers & (b) decrease the ability to entrench lobbying.

To (a), I don't think we do enough of directly buying out stakeholders. E.g. if we know moving to single-payer insurance would be beneficial to the system as a whole (we probably don't, but just an example), why not explicitly pay off private insurance companies, phased out over 10 years? Everyone would still win!

To (b), the key is the corruption of should-be-objective decisions. To me, regulatory-industry revolving doors are the most important, because they're the greatest source of invisible-at-the-time corruption (e.g. you sway a decision... 15 years later you get a cushy "retirement job" in industry). Fundamentally, I don't think there's a way you get around that, absent banning it (above a certain level of regulatory authority) and substituting equivalent economic compensation. Continuously finding selfless, competent people to staff regulatory services is not a sustainable model. So pay them so the "selfless" isn't a requirement (even if you have to tax industry more heavily to do so).


GP's last point isn't to be taken literally. It's just the short and snappy summary of what took you many words to describe.


Who's saying that lady's job is bullshit just because it can be automated? There is real value in paying some up front cost in detection/monitoring/validation of data to prevent mistakes. Just because software engineers would consider her job "beneath" them because they could do it faster or better does not make it not worth doing.

There are mega-efficient factory farms in Nebraska that are immensely productive per-human involved, but that doesn't mean what hobby farmers in Connecticut or subsistence farmers in Africa are doing is bullshit.

I think "bullshit jobs" are less of a thing than people believe. It's just that people devalue assembly-line or cog-in-machine work, even when they're the ones doing it - especially in the US where we grow up wanting to be a unique success story like an entertainment celebrity or rich entrepreneur and so that kind of work is antithetical to our idea of a successful person. Fact of the matter is, machines needs cogs, and we don't have an infinite capacity to replace human cogs with automated ones.


Does she find that task enjoyable and fulfilling? If not, it's bullshit.

And I'd bet money on it NOT being enjoyable and fulfilling. Humans almost universally hate being cogs doing the same repetitive trivial action over and over.


Do janitors find their job enjoyable and fulfilling? Probably not. Their job is still important for the functioning of society, so I wouldn't call it bullshit.


I was a janitor for a long time in my younger years, and I actually did find it enjoyable and fulfilling. Just sayin'.


What did you enjoy about it? Curious!


I enjoyed that what I did made a real impact on people, I enjoyed the sense of satisfaction from making a space nice again, and I enjoyed that it wasn't intellectually demanding, which gave me plenty of time every workday to think about hard programming problems.

There's a real sense of accomplishment in it, and it's nice that people can immediately see and enjoy the results of your hard work. There's also something about doing manual labor that is satisfying.

And I enjoyed being "invisible" around people who used the spaces I cleaned. You usually don't exist, so people tend to continue to speak to each other as if nobody else is listening. I learned a lot about people that way.


The 'probably not' bit of your reasoning is self serving.


> To put it bluntly: do you want her to be unemployed instead?

Fine. Then write her a script and continue to pay her a salary. Hell, let her keep coming into work if she wants to. Functionally, what’s the difference to the business?

Presumably she’s still going to want to maintain social relationships in her life, which means she’s got an intrinsic incentive to do things to benefit someone else even absent financial incentive.

But now we’ve effectively got an atomic form of UBI, which scares people. Yet 1 person’s worth of work is being done for the cost of 1 salary.

In fact, if she starts doing something else productive to society with her time, you’ve got 2 people’s worth of work being done for the cost of 1 salary.

And if someone automates that and she switches to something else, it increases to a 3:1 ratio.

Where is her employment status a problem, except to the sensibilities of people who believe that everyone needs to be “employed”?

If no one ever automated what she was doing for fear of her losing her livelihood, then the maximum productivity we’d ever expect to see is 1:1.

Seems like not having UBI or a stronger safety net is creating a perverse incentive for economic inefficiency.


> continue to pay her a salary

Why would any company do that?


This is the "lump of labor" fallacy.

If she turns up to work reliably, and stays on-task, there are no end of other things she could be doing that increase general welfare more than this.


Maybe, but I'm not sure everyone has the inclination to become a programmer.

And the whole thread was about how computers should work for everyone, including this lady whose job could be automated.

Which begs the question: do most people actually want general purpose computing? Or does the average human prefer "apps"?


In this scenario, she had plenty of other high-value work to perform if she never had to do this again.

Which is the other side of the coin of the ubiquity of this problem -- modern businesses are so optimized on the people side that roles are collapsed down to 1-2 people, and therefore those people inevitably catch "oh, and also do this" could-be-automated work.


Who said anything about her becoming a programmer?

An interior decorator, or a gardener, or a nurse aide, or a yoga instructor, or a nail technician: they all add more to human welfare than this task.

If she wants to become an architect, or a water system engineer, more power to her!


Whatever you list here, she'll be competing for an entry-level position with much younger people, who have more time, more energy, lower expenses and no obligations.

The older one is, the more time one spent in one line of work before being forced to find something else to do, the harder it hits. So sure, maybe you can switch to landscaping in your 50s, but that also means you and your family suddenly being kicked down one or two economic classes.


I figure experience is valuable to potential employers in two ways: (1) what you can do & (2) what you know.

Usually, switching industries past a certain point of tenure wipes out (2), as so much of the expert knowledge is specific.

In this case, she obviously wasn't only a spreadsheet-reconciler: this lady knows thousands of things about banking that I don't. And probably that new, entry-level people at her company have no idea about.

Ergo, the company would get the most benefit from assigning her new responsibilities where that information is valuable. Specifically, something in middle management where her intuition of "... that doesn't sound quite right, tell me more about..." would help prevent bad decisions.


Not disagreeing with your point in general, but:

Not everyone can become an architect or water systems engineer at 50, after having worked a "general assistant" type office job for many years.

I think that (and its consequences) might be the biggest short term societal risk of automation in an aging society.

How would you solve this problem?


Why can't you become these things at 50? Considering you can become them at 23, having worked perhaps in fast food for a few years prior?


Good question! Wait until you are 50, maybe you can answer it then ;-)


I (and most of my social circle) am over 50, and I can answer that question: there's no solid reason you can't train for and be successful in a different career later in life. I've seen it happen too often to think otherwise.

Whether or not you want to is an entirely different question, of course.


Not there yet, but slowly getting there. Of course you can train for a different career after you are 50, but you will also have a good idea what kind of career is not a good fit for you (anymore). So just because certain careers are now looking for people, doesn't mean that these are a good fit for you.


Yes indeed!

Our youth is when the majority of what we do is try things out to see what fits and what doesn't. In our older years, most of that experimentation is behind us and we have a pretty solid idea of what fits us and what doesn't.

The trick is that the amount of experimentation should never be reduced to zero.


I'd imagine the over-X disinclination to retrain is mostly a discomfort with being uncomfortable.

By X, in a single career, you've generally got a pretty good handle on things (and life).

In contrast to 20/30, when everything is constantly new and you are constantly uncomfortable.

Thus, big difference in how well the uncomfortable muscle is exercised. 20/30 -> novel situation you suck at -> business as usual. Over X -> " -> WTF.

And, as you say, the key to retaining that capability is... make sure you continue exercising that muscle!


I appreciate that question!

Something something about age and responsibilities and tiredness and not having trained for that, but... it's worth it to think about it, anyway — at least as risk-mitigation for yourself, going forward.

Maybe we'll all end up as plumbers?


If there are resources available, some entrepreneur will figure out a way to make use of them.


If the average human were allowed to achieve their most base preferences, we'd all be half a ton in weight, floating around in chairs, stupefied at some screen, just as depicted in WALL-E.


I actually don't think this is true at all. While there will always be a percentage of people who would prefer that, my observations are that most people don't. They tend to have interests that drive them to put time and effort into doing things, instead.


If I ever luck into wealth, I have a mad scientist social experiment I want to run.

Advertise a job with middling-but-not-exceptional pay, and try the following with a few hundred people.

   - They report at exactly 9:00 am
   - They bring nothing with them
   - They enter a featureless room
   - They sit at a featureless desk
   - They sit there for 8 hours
   - They have 30 minutes for lunch
   - They have access to a featureless bathroom
   - They leave at the end of the day
   - They get paid weekly
The question is: how long would people willingly stay in that "job"?

My hypothesis is that it's a bimodal distribution. Some set of people would be perfectly fine with it, indefinitely. Some people would go batshit insane and quit quickly.


I wouldn't even consider such a job. For me, the worst possible job is one where I have nothing to do. I've had those, and they are a nightmare. I'd much rather be seriously overworked!


After a week or two, they'd probably call Sherlock Holmes.


Yes. A human mind is a terrible thing to waste. Plenty of people don't look for more fulfilling work because what they already have is "good enough" and it brings home a paycheck that supports the people who depend on them. Oftentimes, people need a push, and doing so can be an act of compassion. Laying people off is not some death sentence; it's a disruptive event that well-adjusted people will recover from. That's ultimately their private responsibility, not the company's, certainly not the company's in a day and age where few people work their entire career for the same employer anymore.

Furthermore, a society where there is massive unemployment but society's economic engines are humming along, mostly automated, for the benefit of a select private few, is a society where such engines are ripe for nationalization, whether that be through public ownership or public seizure of the majority of the profits via taxation, either one of which could support basic income. So again, not necessarily a bad thing in the end.

And if you think people blowing their basic income checks on panem et circenses represents some kind of civic death, I would point to the current alternative, people working bullshit jobs of questionable value, and ask if that really represents the civic virtue you're trying to uphold.


> it's a disruptive event that well-adjusted people will recover from.

Some of us are not well adjusted. Some of us are barely hanging on, barely able to manage the demands of our lives and our society.

Being laid off at this point would be worse than disruptive for me; it would be an outright disaster.


Let's create a hypothetical McDonald's worker, imbued at birth with extraordinary intelligence and capability.

They have a family and are the primary means of support.

They're laid off: the parent post's "act of compassion".

What happens?

In the US, they have unemployment benefits. They have SNAP.

That's probably it as far as immediate. They hypothetically have access to Section 8 housing, but that's contingent on finding availability (current wait-lists are long and conditions of properties variable).

There are some academic / training programs (merit and need-based), but I believe those are all state-administered?

IOW, in the US, not great. I don't have high confidence that housing and initial retraining would be financially/practically attainable to allow them to take advantage of their abilities.

Honestly, in the US, most reliable option for that person would be to join the military.


> Oftentimes, people need a push, and doing so can be an act of compassion.

Only if those people have asked for the push. If they haven't, then it's not at all an act of compassion.


She will not be unemployed in this situation, nor will her peers in similar situations. Available labor is always gobbled up and not left idle for long. Case in point: we've obviated jobs like the horseshoe maker and the stable boy, and yet the unemployment rate today, at 3.8% or so, is half of what it was in the 1900s, when we had all these horse and stable jobs.


You also have to take income into account, not just the unemployment rate. For those earning below the median, growth has been close to non-existent, if not negative, over the last 40-50 years.


In the long term, yes. But shouldn't we have some compassion for those who get screwed in the short term? The horseshoe makers who were thrown out of work at the time may not have been able to find decent new employment. We have copious modern-day examples of jobs that have been obsoleted and those who were immediately affected still haven't been able to recover.

These sorts of labor shifts can't avoid harming those who get the short end of the stick.

Saying this is not to say that such changes shouldn't happen. It's inevitable and necessary in order for society to adapt to changing conditions. But I very often see a disregard for the reality that people really do get hurt badly in these shifts.


Long-training-time jobs are the worst for this, e.g. medicine.

By the time you mint a professional in medical something, they've put a decade+ into training for it. And in the US model, incurred substantial debt doing so.

One reason I'm more sympathetic to the artificial medical supply/demand management shenanigans.


Jobs do not disappear, they just change


They can be fully automated out of existence, e.g. the jobs of Herb Strewer and link-boy.

And, indeed, that old job called "computer".


> do you want her to be unemployed instead?

Yes.

Compassion and reason both repel me, as a humanist and computer scientist, from the thought of another human being pointlessly wasting her life for the mere "necessity" of money to live because broken software has become an excuse for this.

It is an inhuman spectacle, and the notion of "employment" here is so weak as to have no force in the argument. An unemployed person at least has the opportunity to redefine their life, whereas one who is tricked daily into believing their bullshit job is "valuable" is being robbed, no matter the remuneration.

Further, as a technical person, it disgusts me to see the tools we created so comprehensively misused by idiots who are unable to intelligently deploy them - or rather deploy them as instruments of abuse rather than for human development and progress.


It's hard to believe this is an actual opinion a human holds, and it reeks of someone who has never gone through particularly hard times.

Redefine their life? Good luck doing that when you suddenly don't have an income because you were automated out of employment. People who are able to recover from these events and reinvent themselves are the exception.

Any job is better than no job and it's not your place to judge the value of someone's life. If said person is making an income, can tolerate their job, and enjoys their time outside of work that's all anyone can really ask for.


I used to point at office buildings and tell people none of the jobs there are real: they are all software that didn't get written. One logistics friend argued his well-paid job was very important, but with very little help he automated it in two weeks; the idea had just never occurred to him. I told him not to tell anyone, but he did, and 300 people got fired.


But surely she had the tools to do that in Excel itself, no? This is something I see a lot in software, where people devalue and trivialize the skills and abilities needed to make software simply because they know something is possible to do in software, and because, as software workers, they most likely had a natural predilection toward the kind of thought processes that led them to pursue software work.

I think it's fallacious to think everybody has the natural disposition and curiosity to want to write software. Writing has been around forever, and most people don't want to do it beyond purely functional communication, and even those who do want to go beyond that are mostly terrible at it. That lady could have automated her job if she really wanted to and was motivated enough to do it; she simply preferred to be a ditch digger rather than design a ditch-digging machine.

It doesn't help that "designing ditch-digging machines" is in such high demand that most companies are really only able to hire mediocre people to fill those roles. Leonardo da Vinci is not working on SOX compliance at a fucking bank.

There's a good chance the payoff for that bank automating "take a set of dates in an Excel file, search each one individually in an internal search tool, then Ctrl+F the result report and verify that every code listed in the original document exists in the report" isn't even there. The first attempt will likely have insufficient monitoring or verification to catch silent failures, introducing an expensive issue caught months after the fact that leads to a huge remediation scramble and requires more development than initially expected to address. Add to that the constant maintenance burden of updating the software and making sure it continues to run (which also involves a rotating cast of programmers joining, needing to learn the software, and leaving). Leonardo might do it right the first time, but he's not doing it for cheap and not going to make it his life's work. Joe Schmoe can do it cheaper, but it's going to take a while, carry an ongoing maintenance burden, and probably involve some mistakes along the way. That lady can do it for even cheaper than Joe Schmoe can, without many headaches.


You see this in manufacturing as well. You’d think robots would be everywhere but often, like way more than you’d imagine, it’s cheaper to hire a human to do it.


Yeah, it's easy to forget that humans themselves are pretty amazing technology. You can program them with relatively vague instructions using natural language, they come with built-in general intelligence so you can delegate things for them to figure out and expect them to adapt their instructions to the situation, and they come with amazingly precise and dextrous arms. They generally cost under $150k/unit/year for most manual applications, and you pretty much know what you're getting up front versus hardware/software purchased off the shelf, not to mention the capital cost of acquiring one is pretty low because they already exist and just need to be hired.


I think part of that is how these sorts of businesses choose to organize their money. A manufacturer might budget themselves to be quite lean, so lean that for any sort of improvement like automation with robots, they have to hire an expensive consultant to tell them they have to hire an expensive robot company to build them expensive robots with expensive service contracts. So of course in that model the human is cheaper. However, if the manufacturer instead took all that money they would have spent on the expensive consultant, and opened up their own "internal" automating wing that would just look for things within the company to replace with a robot they create and service internally, maybe the math would pencil out another way.


There's another aspect to it as well.

It's really common to see jobs that people are doing that appear to be easy to replace with robots. But if you actually try to develop automation to do it, you very commonly hit a bunch of gotchas that make the automation infeasible. Sometimes this is a systemic thing (you can't effectively automate this without having to change too much of the rest of the process at the same time, which makes it economically a nonstarter), but more often than you'd think it's because there's an aspect of the job which doesn't seem particularly hard or important because humans do it without even thinking about it -- but to computers and robots, it's incredibly difficult.


The way fertility rates are going, not for much longer.


This has to do with the fact that robots are still pretty bad at basic tasks such as picking items from a container, or handling fabric. Just have a look at the state of the art in garment manufacturing robots. For most tasks, human muscle and sense of touch are way ahead of any robot.


Thus, over time, most people working on commercial robots favor increasing the minimum wage :-)


Someone told me their job was mainly checking Word documents for discrepancies. Basically diffing them, but manually, by reading.

I showed them how you could diff them automatically. This was not appreciated.

“Cool, IT dude, now I lost my job.” And they were probably right if anybody cared enough to look at what they actually do.
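The core of it really is only a few lines. A rough sketch in Python, assuming the python-docx package and made-up file names:

    import difflib
    import docx  # pip install python-docx

    def paragraphs(path):
        # pull the visible paragraph text out of a .docx file
        return [p.text for p in docx.Document(path).paragraphs]

    old = paragraphs("report_v1.docx")
    new = paragraphs("report_v2.docx")

    # print a unified diff of the two documents, one paragraph per line
    for line in difflib.unified_diff(old, new, "v1", "v2", lineterm=""):
        print(line)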


A large amount of inefficiency is actually a chain of accountability. Trust, and failing that assigning blame, and trusting those assignments, is the root of so. much.


The whole of life is built on inefficiency. Earth is just a filter for slowing down the diffusion of the Sun's energy, stepping it through layers of ecosystem complexity before it is lost to the vacuum. I have no idea why people can be so obsessed with order, control, and efficiency when life itself is just the interval of disorder, freedom, and delay in energy dissipation.


This sounds like a perfect use case for automation via recorded macros. The failure (though perhaps solutions exist) is the lack of such facilities in our desktop OSes.


AFAIK, macOS is the only OS to come with this sort of thing out of the box (and it is supposed to work consistently across all applications; good luck with that on Linux).

https://en.m.wikipedia.org/wiki/Automator_(macOS)

https://en.m.wikipedia.org/wiki/AppleScript


That is what I had in mind as I wrote it. Sad that many macOS apps now are built with Electron, or we simply use web browsers to access public or internal apps.


Using web browsers to access apps is actually a boon in this regard: the whole UI is exposed via the DOM, and you can use JavaScript to interact with it for macros -- and it's cross platform as it works on any OS that can run a web browser!

In contrast, Automator/AppleScript only works on macOS, afaict.
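As a rough sketch of the idea (driven here from Python via Selenium rather than pure browser-console JavaScript; the URL and selectors are made up):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://internal.example.com/search")

    for date in ["2023-01-31", "2023-02-28"]:
        box = driver.find_element(By.NAME, "query")  # hypothetical search box
        box.clear()
        box.send_keys(date)
        box.submit()
        # or poke the DOM directly with a snippet of JavaScript
        rows = driver.execute_script(
            "return [...document.querySelectorAll('#results tr')]"
            ".map(r => r.innerText)")
        print(date, len(rows), "rows")

    driver.quit()

Brittle, of course, but the point stands: the DOM gives you a hook that native desktop apps often don't.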


But then you get the ugliest DOM. I left Facebook because I was trying to curate my friends list (2k+) down to only the people I know in the real world, but the unfriend process was a dark pattern and automating it was more work than it was worth. Instagram was worse.


There is a free Power Automate Desktop for Windows.


Macros are generally locked down in enterprise network environments due to the risk of acting as a malware vector.


> but I would say the decline has strictly been a failure of software.

I would argue it's a failure of policy, and more specifically certain people enforcing a policy top-down on an industry they have little, if any, competence in.

The tools are still out there, and those of us who know what we are doing balk at those managers (not programmers, BTW), telling us "you don't need that."

We need to wrest control back from the managers, at all levels, and tell them to fuck off while we get real work done.


And don't forget that automating this process could also include a host of error checking she is simply unable to do because she is a human and not a computer.


This article has absolutely nothing to do with what you’re saying. This article is about how certain types of calculations are better suited and are more efficient on specialized processors. I don’t know what finding dates in Excel has to do with this.


I've seen this before. My experience is that the leadership who approves improvements is too removed to care. This is one reason B2B SaaS generally targets decision makers.


That lady could have automated that with just a little glue code. Let the program do the work and use that time to do nothing. Nobody's complaining.
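Something along these lines, say; just a sketch, assuming openpyxl, a column of dates next to a column of expected codes, and a hypothetical run_internal_search() wrapper around whatever the bank's internal tool actually is:

    from openpyxl import load_workbook

    def run_internal_search(date):
        # placeholder: however the internal tool is really queried
        # (HTTP call, command-line export, saved report, ...)
        raise NotImplementedError

    ws = load_workbook("dates.xlsx").active

    for date_cell, codes_cell in ws.iter_rows(min_row=2, max_col=2, values_only=True):
        report = run_internal_search(date_cell)
        expected = [c.strip() for c in str(codes_cell).split(",")]
        missing = [c for c in expected if c not in report]
        print(date_cell, "OK" if not missing else f"MISSING: {missing}")

Most of the effort is in run_internal_search(), which is exactly the part the manual Ctrl+F workflow papers over.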


There should be a Tell HN that is just an 800-comment-long thread full of these stories because I love them.


> There should be a Tell HN that is just an 800-comment-long thread full of these stories because I love them.

As gratifying as they are, they get repetitive. Once you've heard one, you've heard them all.

I'd much rather hear about the place that fixed a policy problem that was blocking such progress in the first place. Software is easy, getting people (and orgs) to change is hard, tedious, boring, but absolutely necessary, and more enduring than a lot of software fixes.


I agree that software is easy, and managing (human) processes is hard. Just the other day I advocated for a Free Management Foundation [1], in the same vein as the Free Software Foundation. Anyone care to spend their life on that?

[1] https://news.ycombinator.com/item?id=37968979


For some reason that reminded me of the professor I had for administrative law in law school. He was a member of something called the Administrative Office of the United States Courts or something like that. Their entire job was to be a government run think tank that produced reports about government administration.

He really, really loved administrative law.


I believe that about repetitiveness. It’s just that I’m not in software or computers, so I haven’t been subject to them all.

Although I was a lawyer, and I can tell you that human beings come up with an infinite variety of ways to do weird things, which gives me hope that there may be a weird software story out there for you, that you’ve never heard. :)


I've been a dev for a very long time, and I still regularly hear weird software stories from colleagues that are not just "the same old thing" again. They're a pretty small percentage -- maybe 2-5%? -- but despite the low flow rate, the well never seems to actually empty.


Perhaps this is either:

A. Where AI comes in to do this, and then the rest of her job as well.

B. Why programming should be taught as a basic subject in primary school.


> set of dates in an Excel file

PTSD intensifies


This fragmentation was predicted well over a decade ago, but we're finally seeing mass market realization of these predictions.

The most relevant idea here is Koomey's Law [1], which postulates that the "number of computations per joule of energy dissipated doubled about every 1.57 years." This law is essentially a combination of Moore's Law and Dennard scaling. As transistors get smaller, Moore's Law predicts increases in the number of transistors on a chip, while Dennard scaling predicts decreasing power usage per transistor. For many years, Moore's Law really just tracked with Dennard scaling; CPU on-die areas weren't getting substantially larger (to my knowledge), but transistors were getting smaller and more efficient, so more could fit in a given area.

However, Dennard scaling also says that the power density of a transistor remains constant. As transistors became smaller, more of the die became occupied by transistors, causing more heat dissipation per unit area. After exhausting cooling options (think back to the TDP of those CPUs back in the late-90s/early 2000s), Dennard scaling died in the mid-to-late 2000s because of thermal issues.
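For concreteness, the textbook constant-field scaling rules work out like this (just a toy check in Python, with k as the shrink factor per generation):

    k = 1.4          # one generational shrink, roughly

    C = 1 / k        # capacitance tracks linear dimensions
    V = 1 / k        # supply voltage scales down with dimensions
    f = k            # switching frequency rises as delay shrinks
    area = 1 / k**2  # transistor area

    power = C * V**2 * f    # dynamic power per transistor: ~1/k^2
    density = power / area  # power density: ~1, i.e. unchanged

    print(round(power, 2), round(density, 2))  # 0.51 1.0

The breakdown came when voltage stopped scaling (leakage): with V held roughly constant, per-transistor power no longer falls as 1/k^2 while area still does, so density climbs and you hit the thermal wall.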

However, just because heat can't be dissipated doesn't mean CPU manufacturers won't try to cram more and more transistors in a given area. As a result, a modern CPU will not use all its transistors at once because it would overheat, creating "dark silicon". Once in that paradigm, the next logical design approach was prioritizing application-specific transistor clusters, the extent of which we're now seeing.

While I generally agree with the authors, the key in these circumstances is to find ways to combine the fragments and make a better whole. Perhaps we'll have a "brain board" attached to the motherboard, where sockets on the brain board allow user-specific processor cores to be installed. Not everyone needs an i9 or even an i7, and maybe not everyone needs advanced ML performance.

[1] https://en.wikipedia.org/wiki/Koomey%27s_law


> However, Dennard scaling also says that the power density of a transistor remains constant. As transistors became smaller, more of the die became occupied by transistors, causing more heat dissipation per unit area. After exhausting cooling options (think back to the TDP of those CPUs back in the late-90s/early 2000s), Dennard scaling died in the mid-to-late 2000s because of thermal issues.

Dennard scaling says that power usage per transistor per clock cycle is constant; not power density. As transistors became smaller, an x% reduction in transistor area meant an x% reduction in power per transistor per clock cycle, which allowed clock cycles to increase dramatically to consume the same fixed heat dissipation budget.

The breakdown was not from there being too many transistors per square millimeter, but from the breakdown of the direct relationship of transistor size to power consumption per clock cycle due to current leakage. An x% reduction in transistor area no longer causes an x% reduction in power consumption per clock cycle, meaning operating clock frequency is now fixed at your ability to dissipate heat, not your ability to shrink transistors.


Good point. Leakage current is the main issue these days, but that problem is only indirectly linked to power density.

I've explained it your way in the past, but the Wikipedia article for Dennard scaling [1] says that the power density of a transistor remains constant over time, which is true given that the underlying transistor technology has constant resistive losses. My understanding is that transistors using GaN have higher power density than "traditional" transistors, which means Dennard scaling certainly breaks down.

[1] https://en.wikipedia.org/wiki/Dennard_scaling


The dark silicon isn't necessarily application-specific, it's also used by duplicating a particular circuit and then rotating which duplicate does the computing, so that the other areas have time to cool down a bit before they're used again.


I largely don't blame the specialized processors; they exist to fill a need. The transition to mobile/tablet has been very much a producer-consumer relationship. In the old days, normal users would one day "View source", dabble in HTML, make their own web pages, and journey into writing their own programs. There's no parallel to this on mobile today. Perhaps the next generation, which grew up on Roblox and on end-user visual development tools that have yet to succeed, will.

Is there a web-app like MS Access or Visual Basic today, maybe Airtable, others?


The problem was the smartphone.

With that, closed platforms became the standard.

Non-productive hardware is now ubiquitous, and most of it can't be fixed.

The open RISC-V tablet is the only exit from this mess.

Forget the ARM and x86 empires; they will only exploit you as a consume-only slave.

It needs a working GPU and a swappable battery though!


What difference does the ISA make in this respect?


Very little; the broad strokes of a RISC ISA were nailed down between the 6502 and the ARM1 (1975-85), and the conclusion is that 32-bit is enough for eternity. Same with IPv4: just add a byte for the internal mask (/24) and be done with it; long (64-bit) is way overkill... and IPv6 chose 128-bit. I say 4GB RAM per process is enough.

Competition (also eternal growth) only works when you have infinite free energy. Now we are at the end of the curve. Current RISC-V (64-bit) is not perfect by any means (it lacks vector/SIMD), BUT it removes the real morons (lawyers and "owners") from the picture somewhat.

The practical evidence is: VisionFive 2 software is going forward fast, while the GPU is stalled for some reason, probably related to Imagination (PowerVR, dumped by Apple and bought by the Chinese government) being British, but who knows.

The PineTab-V uses the same chip (JH7110), so it's also gridlocked by the GPU drivers. The TH1520 also uses an Imagination GPU, a bit beefier; that one will have more trouble since the CPU is a new architecture, while the JH7110 uses the same one as the SiFive Unmatched.

I tried to help by building my 3D MMO engine for them. I sent the profiling data, they wanted to see the OpenGL call stack, and I lost hope because it works on ALL machines I have tried (PC Nvidia/Radeon, Raspberry Pi 4, and Jetson Nano).

I mean, they have the game executable right there, a 10MB download away?

Popcorn time at the end of the world; at least I tried.


I think the theory is that an open ISA like RISC-V would promote a similar culture of openness around the software and hardware implemented with it.

I'm skeptical that is how it would play out in practice though.


But x86 is the PC, there are ARM notebooks, and a smartphone is just a computer with a touchscreen and telephony.


I question the article's framing of CPUs as "universal" and GPUs as "specialized". In theory, they can both do any computation, so they differ only in their performance characteristics, and the deep learning revolution has shown that there is a wide range of practical workloads that is non-viable on CPUs. The reason OpenAI runs GPT-4 on GPUs isn't that it's faster than running it on CPUs--they do it because they _can't_ practically run GPT-4 on CPUs.

So what's going on is not a shift away from the universality of CPUs, but a realization that CPUs weren't as universal as we thought. It would be nice though if a single processor could achieve the best of both worlds.


> [...] differ only in their performance characteristics [...]

... but that is exactly why CPUs are considered "universal" and GPUs as "specialized".

The whole concept of specialized hardware is to do fewer things more efficiently, and in tons of applications that means the problem suddenly becomes feasible. That has always been the case. Not sure what the deep learning revolution has shown in regards to this.


> In theory, they can both do any computation, so they differ only in their performance characteristics

But surely, the exact same thing can be said when comparing any two different machines that engage in computations and can do conditional branching.

GPUs can be used for any computational task that a general CPU can be used for, but a GPU is optimized so it will do certain sorts of tasks much better (and as a consequence will be worse at other kinds of tasks). CPUs are meant to be adequate (if not spectacular) at any sort of task.

It seems to me characterizing GPUs as "specialized" and CPUs as "not specialized" is entirely correct.


Indeed. Though I did like the rest of the article, three of the authors' pillars to define specialization are dubious:

> 1. substantial numbers of calculations can be parallelized

> 2. the computations to be done are stable and arrive at regular intervals ('regularity')

> 3. relatively few memory accesses are needed for a given amount of computation ('locality')

Where (1) fails, any modern multicore + SIMD + ILP desktop/console/mobile CPU will run at a tiny fraction of its peak throughput. While sufficiently small serial tasks still complete in "good enough" time, the same could be said of running serial programs on a GPU (in fact this is sometimes required in GPU programming). People routinely (and happily) use PL implementations which are ~100x slower than C. The acceptability of ludicrous under-utilization factors depends on the tininess of your workload and the amount of time to kill. Parallelism is used broadly for performance; it's about as un-specialized as you can get!

(2) and (3) are really extensions of (1), but both remain major issues for serial implementations too. There mostly aren't serial or parallel applications; rather, it's a factor in algorithm selection and optimization. Almost anything can be made parallel. Naturally you specialize HW to extract high performance, which requires parallelism, for specialized HW as for everywhere else.

The authors somewhat gesture towards the faults of their definition of "specialized" later on. Truly specialized HW trades much (or all) programmability in favor of performance, a metric which excludes GPUs from the last ~15 years:

> [The] specialization with GPUs [still] benefited a broad range of applications... We also expect significant usage from those who were not the original designer of the specialized processor, but who re-design their algorithm to take advantage of new hardware, as deep learning users did with GPUs.


You can't run a GPU without a CPU, but you can run a CPU without a GPU.

Could we change GPUs so they become more general purpose? Yeah, and we already have, by gluing the GPU to a CPU, i.e. integrated graphics; lots of CPUs have that. But when you do that we call it a CPU and not a GPU, so as soon as you make a GPU general purpose, we start calling it a CPU.


I would also question the framing, because the systems the specialized hardware is running are the most general software systems we've ever created.


Ditch your phone, go back to your desktop, problem solved. Mobile has been a regression in pretty much everything except corporate profits.

The focus of the article is wrong. PCs, for example, are not getting more specialized; they are gaining new capabilities with additional specialized GPUs. As others have said, the problem does not lie in the processors but in the software and UI, which have been steadily dumbing down since "smart"phones were introduced. Phones have perfectly capable general computing processors, but corporate decisions limit their usage.


Amen to this, I absolutely hate what mobile phones and their attendant giant megacorps did to the internet and computing in general.


The PinePhone has a lot of software releases, and postmarketOS has plenty of supported hardware.

"Dumbing down" is a necessity for extended demographics; that's a trend from long ago.

The article explains reality and predicts a proliferation of specialized hardware. For example, was sound ever viable on the CPU? GPUs were specialized, became more universal with compute, and were extended with video encoders, RTX, and DLSS.


I agree -- this is exactly why when my current smartphone dies, I'm going to be switching to the dumbest phone I can find and start carrying a real pocket computer along with it.


And a camera; and a wallet with all the cards you use? Maybe the landscape of feature phones is different now, but I only knew of one or two (Kyocera) that allowed tethering, so now you also have to figure out how to get internet onto your pocket computer: another SIM, another bill. Perhaps you need a GPS device too (my car has one built in, but my wife's doesn't, for example).

I was going to just post a link to a picture of my cellphone graveyard drawer; I've been expressing this same sentiment for the 12 or so years I've owned a smartphone. I even bought two Nikon 1 cameras to obviate the need for a decent cellphone camera. I have a serviceable netbook (eeePC), too.

The most expensive cellphone I ever bought was $300. In theory, I could carry a Nintendo Switch, a Nikon 1, a feature phone, a GPS unit, and a netbook around with me... or this absolute garbage OnePlus that I hate using enough that I only use it when necessary, like for SMS/MMS or something as silly as making or receiving a telephone call.


I realize I'm unusual for the HN demographic, but...

I don't really use the camera on my smartphone, so I see no need to replace it. I carry a wallet with the few cards I need daily anyway, so no change there (and even if I didn't need to carry cards, I'd still need somewhere to keep cash). I don't use my phone to pay for things or access services, so I'm not concerned about doing those things when mobile. I can do what I need to do online from my desktop machine at home.

I do need GPS, but it will be in my pocket computer, so that's fine. I don't need to have online access for this to be useful because I really only use GPS for specific planned activities anyway, so I can preload any maps I might need.

> i only knew of 1 or two (kyocera) that allowed tethering, so now you also have to figure out how to get internet onto your pocket computer - another sim, another bill.

I'm not particularly worried about having the pocket computer be always connected to the internet. I can easily get by without having a constant connection. The pocket computer will have Wifi, which will cover situations where I want to have the machine connect to the internet.


> and start carrying a real pocket computer along with it

So why wouldn't you buy Librem 5 or Pinephone, which are pocket computers running GNU/Linux which have a phone functionality?


Because I don't want the computer to have phone functionality. Also, I can get a better computer for the money if it isn't married to a phone.


Would you mind linking to a good pocket computer for buying?


I would if I could. There are a number of models available, but I have no real insight as to which are the best. I'm building my own for this endeavor.


I don't know if there's good justification yet for the claim that computers are shifting from general (CPU) chips to more specialized (GPU) chips.

I'm not sure classifying the GPU as "specialized" is right: it's essentially a chip that can do lots of small computations rather than big ones like the CPU does. To me, the trend looks more like a shift from centralized to distributed models of computing, which I think is also backed up by data processing tools like Spark that distribute over multiple computers.

I saw a "SQL process unit" announced recently[1] which I guess really is a move towards specialized compute. I haven't heard of much uptake in it yet though, so I guess time will tell.

[1] https://www.neuroblade.com/product/


Discussed at the time (of the article):

The decline of computers as a general-purpose technology - https://news.ycombinator.com/item?id=26238376 - Feb 2021 (218 comments)


The failure is primarily due to stagnation in single core speed improvements since 2003, compared to previous decades.

Yes, we have more cores now, and single cores have gotten faster, but nothing like the scale of the 1990s, when raw clock speeds increased 150x.

Moore's law is about the number of transistors, not the speed of the device. More cores are good, but software is hard to parallelize, despite decent efforts from the CS community.

There have also been efforts to put FPGAs into general purpose CPUs. These have also not taken off.

For most users, a single core CPU running at 30GHz would appear faster than 32 cores running at 2GHz.


> For most users, a single core CPU running at 30GHz would appear faster than 32 cores running at 2GHz.

This is not certain except for single-threaded programs that don't access main memory, for which the claim is trivial and mostly academic. It is well known that increasing CPU frequency by a factor of k does not always increase performance by the same factor k, mainly because memory access speed does not increase in the same way.

"Appearing faster" depends on instruction translation, exploitation of parallelism, and memory access speed getting faster in some commensurate way, and nobody has yet built a machine doing all those things at close to 30GHz to see how fast it would be. It's possible that while a very small CPU could run at 30GHz, other bigger and more distant ingredients can't, e.g. main memory (heat and frequency causing too many errors to be practical). Present-day CPUs are already too fast for the main memory they use. In intensive data processing, the CPU often wastes most of its time waiting for memory. Making it wait faster won't help performance.


The title is misleading. It should read: "The Decline of the CPU as a General Computing Device."

But then, I guess not many people would mind it.

A computer? I consider all forms of computing units today to be computers: microcontrollers, single-board computers, CPU +/- GPU +/- Neural Engines, etc.

What about the math-coprocessor in the 90s? Did they cause a decline of computer as a general computing device? Of course not.


It's way too premature to say we are turning into a fragmentation cycle regarding bespoke hardware. GPUs are still universal computers; they just use a different model with different performance characteristics (and applications, which stem from those performance characteristics and also from the inertia/maturity of tooling) than CPUs. ASICs and specialized processors have been a thing for a long time, and their application to some "new" things like crypto, HFT, and a handful of cloud optimizations is hardly a new trend when seen in the larger context of their being used all the time for now-boring, once-also-new technologies like networking hardware and embedded applications.

I'd argue that the last three decades have actually had massive shifts towards more generalized and ubiquitous computing that may continue even further. First, with the introduction of PCs the computing landscape shifted away from bespoke hardware and software to more standardized plug-and-play hardware and software. With the internet, everybody got a middleware called a "web browser" that standardized UI development on a single computing model. With mobile computing, we got a more restricted model of userspace that standardized what applications could do and even how (app review, software signing) you could do it. Cloud did the same things to various extents to server software. Except for at the height of Windows' PC dominance, it's never been easier to write software usable by such a large portion of the market, and the absolute number of devices and users that applies to is probably an order of magnitude more than existed during peak-Windows.

Everybody in tech knows single-core cpu performance gains are slowing down and that the remaining incremental improvements will eventually end too, so what comes next is on peoples' minds. IMO this article is jumping the gun though - it's still way too big an undertaking to use specialized computers (and I don't count GPUs as these) for all but the largest scale or most performance sensitive use cases, as it's always been.


The last two decades, at least, have moved towards purpose-built consumer devices backed by general-purpose servers.

GPUs, CPUs, and ASICs are run by vendors rather than end customers. The end customer runs a full computer, but has no rights to it because it's called a smartphone rather than a computer.


Machine learning is a massive counter example to this trend. Specialised deep learning hardware is just supporting an even more general GPT.


> the economic cycle that has led to the usage of a common computing platform, underpinned by rapidly improving universal processors, is giving way to a fragmentary cycle, where economics push users toward divergent computing platforms driven by special purpose processors

Where does RISC-V and all its extensions land on the GPT <-> fragmentation spectrum?


Is this because people, in efforts to secure their long-standing jobs, have made software more complex and less effective?


It's an odd use of language. I'm typing this from a MacBook that seems pretty general purpose, but the article seems to be arguing it's not, because it has both a regular CPU and some GPU/parallel stuff, and the latter apparently doesn't count. But it mostly seems quite general purpose to me.


Another way this article could have been written: The rise of general purpose computing on GPUs.

They are equating CPUs with computers, and I don't think that is quite right.


The decline is due to vested interests. More specifically: to make money.



