
I'm saying getting close to perfection is truly dangerous territory, because everyone gets very complacent at that point.

As a concrete example: https://www.cbsnews.com/news/pilots-fall-asleep-mid-flight-1...

We are already there with humans. Most people take a doctor at their word and don't bother to get a second opinion.


At that point, you've at least consulted with a medically trained professional who holds a license (which they have to regularly renew), has to complete annual CME, can be disciplined by a medical board, carries medical malpractice insurance, etc.

There should be requirements for any AI tool provider in the medical space to go through something like an IRB (https://en.wikipedia.org/wiki/Institutional_review_board) given they're fundamentally conducting medical experimentation on patients, and patients should have to consent to its use.


In the context described, it's acting as a tool for a doctor. AI scribes are not conducting experiments.


The use of the AI to treat patients is a medical experiment.


Any change to the practice is an experiment.


Exactly. If you have any kind of illness that is displaying atypical symptoms or is otherwise rare, your life is in your own hands. Even something that is somewhat common, like EDS, can get you killed by doctors missing the signs. Keep a printout of your own symptoms as they evolve over time, and immediately bring up anything that conflicts with what the doctor says.
