
I just hope someone is double checking whatever notes are taken. My experience with LLMs tells me they're fantastically bad at remembering, and even worse at producing factual stuff that can be used without context.



No one will be double checking. On at least three occasions now I’ve had to correct monumental fuck-ups in my and my ex-wife’s medical records. One nearly led to a transfusion of an incompatible blood type.

I suspect this will lead to a decline in accountability with there being another party to blame rather than the medical professional.

The LLM did it, not me.


Alluding to FAFO, a computer cannot find out, so a computer shouldn’t fuck around.

I’m hoping for a lot of legal precedent showing that an AI cannot be blamed, especially in a medical context.


I would hope that would be the case but a conservative safety culture is unfortunately built on piles of dead people.


Companies and people should have liability, but mere tools like AIs should not.

How would that even work?


In the simplest way - if you allow AI to make decisions, you're responsible. Like this https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-th...

So far we're doing pretty well with that idea globally (I've not seen any case go the other way in court).


I mean how would it work, if you tried to hold the AI liable?


Liability for the company selling the AI, I'd presume.


And that's perfectly acceptable, if everyone involved agreed beforehand.


Ah, I misunderstood. That is an interesting idea to consider.


Liability should imo be placed on those that selected the tools and arranged their implementation without providing due care and procedures to ensure the validity of output data.


> No one will be double checking.

Yeah no, I can tell you from experience with a clinic that things are checked. Let's talk about the real issues and how to enforce double-checking for people who would ignore it. Hyperbole like that is not helpful.

But I wonder if those systems should randomly insert obvious markers to help here. "This has not been read." "Doctor was asleep at the wheel." "Derp derp derp." - like the fake gun images that airport security has to mark.
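
A minimal sketch of that canary idea, assuming the notes system lets you hook the draft before sign-off (the function names and the 5% rate are made up for illustration):

    import random

    CANARY = "[CANARY - this draft was not reviewed by a clinician]"

    def insert_canary(draft_note: str, rate: float = 0.05) -> str:
        # Occasionally append a conspicuous marker to a generated draft note.
        if random.random() < rate:
            return draft_note + "\n" + CANARY
        return draft_note

    def reviewer_caught_it(signed_note: str) -> bool:
        # If the canary survives into the signed-off record, nobody read it.
        return CANARY not in signed_note

You'd then audit how often canaries make it into signed-off records, the same way the TSA tracks missed test images.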


You are really naive if you think checking anything is universally done to any standard anywhere on the planet. And writing it off as hyperbole is dismissive.

I’ve literally been in a meeting with a doctor in the last week who wrote something down wrong on the damn computer in front of me. And I’m talking a specialist consultant. I’m sure if I hadn’t mentioned it, the incorrect data would have been checked over and over again to make sure it was correctly incorrect…


You're oversimplifying this. So I'm not a doctor, but close enough to this system. First, you've definitely got professions/situations where checking is done. See flight and surgery checklists. Of course there will be mistakes and we'll never reach 100% compliance, but that's a given.

But then there are secondary effects, like how much time your doctor has and how prepared they are. In practice, the notes take time. If you're unlucky, you'll be seen late on a busy day, and your notes will be written many hours later from recollection. In that case, even an imperfect system can increase overall quality if it enables faster turnaround. I know of cases where the automatic note generation did catch issues which the doctor had simply forgotten about.

The individual stories are brutal, but overall they say very little - was that the only mistake that doctor made in their life, or are they making 10 a day? In general we have to accept mistakes happen and build a system that catches them or minimises the impact.


> I’ve literally been in a meeting with a doctor in the last week who wrote something down wrong on the damn computer in front of me. And I’m talking a specialist consultant. I’m sure if I hadn’t mentioned it, the incorrect data would have been checked over and over again to make sure it was correctly incorrect…

Far be it from me to suggest that doctors aren't both fallible, and subject to arrogance that makes it harder for them to catch their mistakes—what highly skilled professionals are immune?—but "doctors make mistakes" is, while doubtless completely true, a very different claim from "doctors don't check things."


Agreed, but now the doctor has to do her job and the AI's job.

Cory Doctorow wrote about it a while back. I think it was this article "Humans are not perfectly vigilant" [0]. It explains how technology is supposed to help humans be better at their jobs, but instead we're heading in a direction where AIs are doing the work but humans have to stand beside them to double check them.

[0] https://pluralistic.net/2024/04/01/human-in-the-loop/


For what it's worth, the tech will only improve over time and looking at the birth rates, humans will only become more and more overworked and less reliable as the years go by. There should be a point where it just makes sense to switch even if it still makes mistakes.


Depends on the system I guess, but I'm familiar with a local one which is very much tuned for just summarising/rewriting. It seems very good at what it's doing and since it's working from the transcripts, it's actually picking up some things the doctors were not concentrating on because it wasn't the main issue. I've never seen doctors so keen to adopt a new tech before.


Which one? And what sorts of things is it picking up on?


https://www.lyrebirdhealth.com/ and I know of at least one case where the patient mentioned something relevant, but not connected to the main issue they were talking about. The doctor missed it because it wasn't the main topic, but the transcript and the notes included it.


Yeah, Lyrebird Health... been hearing some crazy-ass stuff about them - wouldn't be surprised if they were in YC soon.


To be very frank, a lot of these logging and note-taking requirements have already led to many mistakes, and we already have many case studies of how the system does not care to remedy such things. I can easily see how AI will be adopted here and its mistakes glossed over the same way we glossed over such mistakes before.


I just think medical notes are something you shouldn't legally be able to gloss over. A simple typo in a dosage will kill someone.



