Hacker News

Computers are fine, but the total computerisation of human society may not be. That's actually what the article should have explored, in my opinion. Computers are neither ruthless nor kind; their controllers, humans, are. So the question becomes: who controls them, how sophisticated they (the computers) become, and how much they infiltrate our world. A simple thought... should everything that can theoretically be controlled by computer algorithms be made so? Should computers use ML for as many tasks as possible, even though that can lead to wrong decisions on occasion?



Institutions made of people have limited reach. Human resources are finite, there's a physical limit on the scale of what they can do. A computerized society is limitless. A state might want to spy on everybody but it's impossible to do so due to limited manpower. Computers remove those limits and allow states to implement global surveillance.


Computerized societies are still limited by their resources. We are all seeing now the effects of a society that thinks it can print money without debasing its currency.


Computers still scale much faster than humans.


Whether computers count as ruthless depends on fine details of your definition of ruthless.

If you mean the mental/emotional state necessary for a human to act without regard for consequences of their actions to other people, then computers are not ruthless.

If you just mean acting without consideration of how their actions will affect people, then computers fit this definition of ruthless perfectly by virtue of not being able to consider the consequences of their actions on other people, because we don’t know how to program them to consider how their actions will affect other people.


> because we don’t know how to program them to consider how their actions will affect other people.

We don't have to. For example, a ruthless train door could be made less ruthless by having sensors that respond to someone who is nanoseconds late and open the door, just that once, like its elevator-door brethren, but only once.
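The "kind, but only once" policy above can be sketched in a few lines. This is a toy illustration, not real door firmware; the class and method names are invented for the example.

```python
# Hypothetical door controller: grants exactly one "grace" reopen per stop
# when a sensor detects a late passenger. All names here are invented.
class GraceDoor:
    def __init__(self):
        self.grace_used = False

    def on_closing(self, passenger_detected: bool) -> str:
        """Decide what the door does while it is closing."""
        if passenger_detected and not self.grace_used:
            self.grace_used = True  # kindness, but only once
            return "reopen"
        return "close"

    def on_departure(self):
        """Reset the grace allowance for the next stop."""
        self.grace_used = False


door = GraceDoor()
print(door.on_closing(True))  # first late passenger: "reopen"
print(door.on_closing(True))  # second attempt: "close" — ruthless again
```

The single boolean is exactly the limit the parent comment jokes about: the hostage-taker who keeps triggering the sensor gets no second grace.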

The problem is that "inherently opportunistic people" will take advantage of the machine kindness and take an entire train full of people hostage.

The oppressive train door is a dictator everyone loves, maybe if they were also razor sharp, people would give them the respect they deserve. phwump. guillotrains. on time, every time.


But doing so isn’t the default state for computers. Someone needs to think about it and put effort into building it: a concerned citizen who wants to make the system less ruthless, instrumentation of the system to allow for that, and cooperation between all parties to facilitate such changes. None of it happens by default. So while it’s possible, with enough effort, to make a system not behave ruthlessly, it is ruthless by default, for better or worse.


> If you mean the mental/emotional state necessary for a human to act without regard for consequences of their actions to other people, then computers are not ruthless.

I don't understand, what am I missing?

I've never seen a computer have any regard for the consequences of its decisions. Computers are thus completely ruthless.


I think the GP is pointing to the fact that, given computers' inability to evaluate the consequences of their programming, ruthlessness is a concept that cannot apply to them: by definition it involves disregard for those consequences, not ignorance of them.



I think malice and ruthlessness are being conflated in this thread. Showing no pity or compassion (the definition of ruthless) does apply to a computer, because it can do neither. Being actively malicious while disregarding consequences, however, probably only applies to the humans controlling it, because the computer is ignorant of those consequences, as you say.


> So the question then becomes, who controls them and how sophisticated they (the computers) become...

There's also the concept, unexplored by the article, of inadvertent or emergent control.

E.g., Dept A encodes Rule 1 and Dept B independently encodes Rule 2, yet applied together Rules 1 and 2 produce an unexpected outcome. In that case, neither Dept A nor Dept B (the ostensible controllers) could be said to actually control the outcome.
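A toy sketch of that interaction, assuming two hypothetical pricing rules (the departments, rules, and numbers are all invented for illustration):

```python
# Two rules, each reasonable on its own, written by departments
# that never coordinated. All names and thresholds are invented.
def rule_a(price: float) -> float:
    """Dept A: loyalty members pay half price."""
    return price * 0.5

def rule_b(price: float) -> float:
    """Dept B: orders under $20 get a flat $15 small-order credit."""
    return price - 15 if price < 20 else price

price = 30.0
# Alone, Rule A charges $15 and Rule B leaves $30 untouched.
# Composed, the $30 order becomes $15, then the credit kicks in:
final = rule_b(rule_a(price))
print(final)  # 0.0 — the order is free, which neither department intended
```

Neither department's rule is wrong in isolation; the "free order" outcome only emerges from the composition, so neither ostensibly controls it.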


There is also the question of increasing human-machine integration, eventually resulting in a direct read-write interface to the brain (see Neuralink, OpenWater, and VALVE's research and goals), in the context of proprietary software under corporate and government control.


The alternative mechanisms for decision-making also make wrong decisions on occasion, especially when that alternative is a human.


infiltrate? They're not spies. They're savants at best.



