
Yeah, but I kind of want my diagnostician to be obsoleted by something orders of magnitude better.


A human can be held accountable for making mistakes and killing someone. A large language model has no concept of guilt and cannot be held accountable for making what we consider a mistake that leads to someone's death.


The chance of a doctor being held accountable for the medical errors they make is lower than you might expect. I could tell you a story about that. I lost my eyesight at the age of 5 because I happened to meet the wrong doctor at the wrong time, and was abused for his personal experimentation needs. No consequences, simply because high ranking people are more protected than you would hope.


This is very true, and many people don't know this. A tremendous amount of damage is inflicted by medical errors, particularly against low income people and those least able to get justice. It's wrong to reduce people to being just another body to experiment with or make money from. But good luck holding anyone in the system accountable.

A lot of patients don't know who they are dealing with nor their history. And it can be really hard to find out or get a good evaluation. Many people put too much faith in authority figures, who may not have their best interests in mind or who are not the experts they claim or appear to be.


The chance of a machine being held accountable is zero as the concept is inapplicable.


Medical error is the third leading cause of death in the US, at least. Given that data, I am assuming the chance of a human being held accountable for their errors in medicine is also almost zero. It might not be completely zero, but I think the difference is effectively negligible.


Many have no idea about this. Medical error is right there behind cancer and heart attacks. But there is way too much shoulder shrugging when it happens. Then on to the next.


> I think the difference is effectively negligible.

The difference is categorical: humans are responsible whether they are held to account or not. An automated system effectively dissipates this responsibility across the system such that it is inherently impossible to hold any human accountable for the error, regardless of desire.


It will have to pay out of its blockchain wallet, which naturally it will have. /s


Sorry to hear that. The current medical system is a joke and fails people at every stage


The difference is that you could find the person responsible. Contrast that with the DMV, which can't be held accountable for fouling up your registration.


And what difference does it make if you can find the individual responsible, only to discover that the system protects him from liability? What I am trying to say here is, there isn't much difference between zero and almost zero.


Don't worry, now there will be an extra layer of indirection.


Medical error is the third leading cause of death in the US. It doesn't really look to me like doctors are being held accountable for their mistakes.

Which isn't to say that they even should, really. It's complicated. You don't want a doctor to be so afraid of making a mistake that they do nothing, after all.


I'd much prefer a lower chance of dying with no accountability over a higher chance of dying with more accountability for whoever is responsible.


Humans making decisions in high stakes situations do so in a context where responsibility is intentionally diffuse, to the point where it is practically impossible to hold someone accountable, except by picking someone at random as a scapegoat in situations where "something" needs to be done.

Killing people with AI is only a lateral move.


Doctors are only held accountable when they do something negligent or something that they "should have known" was wrong. That's a pretty hard thing to prove in a field like medicine where there are very few absolutes. "Amputated the wrong limb" is one thing, but "misdiagnosed my condition as something else with very similar symptoms" is the more common case, and also the case where it's difficult to attribute fault.


Well, the kinds of things we hold people responsible for are errors from negligence and malicious errors. The reasons people do stuff like that are complicated, but I think it boils down to being limited agents trying to fulfill a complex set of needs.

So where does guilt come in? It's not like you expect a band saw to feel guilt, and it's unclear how that would improve the tool.


At some degree of success, I will take the risk. The contract will probably offer it.


I agree. My guess is that the hospital will have to carry mandatory insurance. Let's wait until the insurance for AI is cheaper than paying a human.

The advantages of humans are:

* They can give a bullshit explanation of why they made a mistake. My guess is that in the future AI will gain introspection and/or learn to bullshit excuses.

* You can hang them in the public square (or send them to jail). Sometimes the family and/or the press want someone to blame. This is more difficult to solve and will need a cultural change or the creation of Scapegoats as a Service.


We can hold those operating or training the AI model accountable.


What's the difference between suing your doctor's liability insurance and suing your AI's liability insurance?


The owner/operator of said machine can and will.


An AI trained on the past work of diagnosticians doesn't really render diagnosticians obsolete.



