Ontario family doctor says new AI notetaking saved her job (globalnews.ca)
260 points by davidbarker 9 months ago | 263 comments



As someone who used to be a human scribe for emergency department doctors, I would say the ER doctors shared the same experience as the one depicted in the news story. For a doctor who was nearing retirement age or had very poor keyboard proficiency, scribes were a godsend.

Doctors worried less about documentation and focused more on patient care. Of course they still wrote things like admission orders, prescriptions, and nurse orders; scribes were often told to avoid doing these, though sometimes the doctor was okay with it. I personally just told the doctors to do those themselves: too much legal liability for both parties.

But capturing all of the details of the history of present illness (HPI), patient medical history, review of systems, physical exam, procedures (if any), and medical decision making during a patient visit can be time consuming. The time required grows significantly with more complex cases, especially ones with more than one procedure.

Documenting it so that it can be billed appropriately is also crucial.
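
For a sense of how much structure a single chart note carries, here is a rough sketch of those sections as a plain Python data class (field names are illustrative, not from any real EMR):

    from dataclasses import dataclass, field

    @dataclass
    class EDChartNote:
        # One ED visit note; each section below also affects billing.
        hpi: str                                           # history of present illness
        medical_history: list = field(default_factory=list)
        review_of_systems: dict = field(default_factory=dict)
        physical_exam: dict = field(default_factory=dict)
        procedures: list = field(default_factory=list)     # each may bill separately
        mdm: str = ""                                      # medical decision making
        billing_codes: list = field(default_factory=list)  # e.g. ICD/CPT codes

Every one of those fields has to be filled in per visit, which is where the time goes.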

Prior to scribes, doctors described forgoing documentation altogether at the end of a shift and writing it up the next day or the next shift.

Obviously, this produced very poor documentation and opened the doctor up to legal issues if they arose. Ambulance chasers can easily call out inconsistencies between doctor notes and patient outcomes, and cross-check them against nursing notes.

Additionally, this caused cases to be billed incorrectly or “down coded” as well.

I don’t agree with allowing private companies unfettered/unregulated access to what is considered private medical information. But with appropriate controls on the data and who has access to it, with full transparency, I think it can help alleviate physician burnout.


> For a doctor that was nearing the age of retirement or had very poor keyboard proficiency...

I went to a doctor recently who was typing very fast. I kidded her that she could type nearly as fast as I do, and yet she was also using one of those automated note-taking thingamajigs (holding some kind of microphone in her hand).

She'd type manually for stuff like the email subject: click, click, one sentence here; click, one word there. For that kind of thing it's too slow to dictate one word, then click, and so on.

But in that big empty text field? She'd dictate.

As a bonus, as her patient I got to hear her report (or "my" report, if you want) in real time.


I once had to go to a walk-in clinic to deal with a minor emergency. The doctor who attended me was literally typing with his pointer fingers.


Were they slow? There are actually plenty of people who can do that at a reasonable or even fast pace. Of course lots of people type reasonably quickly with two thumbs too (or, in the case of this comment, a left thumb and right pointer finger).


It’s always slower than typing properly.


I don't type properly (use about half my fingers) but I find 80wpm is usually fast enough.


Using half your fingers is different from using two.


I can type with two fingers faster than I can type with full hands. I’m still trying to get my touch typing up to the speed of my two fingers.

I spent two years typing with just my forefingers, and can do 80wpm with 99%+ accuracy, and I don't look at the keyboard.

With two hands I’m still at 60-65 roughly and with worse accuracy but I’m sure it will come with time.


Doesn't sound real. I've typed correctly my whole life and have a lower WPM than that. You'd need to move your fingers at superhuman speeds to achieve that with just two fingers.


I spent literally thirty years typing with two forefingers, and note, I didn’t need to look at the keyboard. Also worked as a software engineer that whole time. So I had a lot of practice.

It’s why it’s taken me so long to try and adapt to touch typing. Too much muscle memory to rewire.


Sure, in the hypothetical where the person also decided to learn to type properly. If someone is plenty fast for their needs then it doesn't really matter, though.


Fun fact. An old coworker typed like that. He's a software engineer.


>> Documenting it so that it can be billed appropriately is also crucial.

This is literally the only reason why scribes, or pretty much any EMR, exist.

EMRs are an accounting tool, there to avoid "down coding" and legal exposure. Plain and simple.

There's a reason that doctors would not adopt EMRs for the longest time (at least in the US; I don't know about other nations) until actual legislation was passed forcing them to do so or get shut out of Medicare.

The EMR, and therefore most note-taking, is not a value add for the patient or the doctor. It's just the only way the doctor gets paid.

The EMR of the future will not have an encounter in a computer. It will be computerless. AI will act like Zapier between payor APIs, the doctor's invoicing, and the patient's chart.


My wife is a vet, where the legal penalties are far lower, but she contends that proper medical records are critical; it's one of her most important asks of colleagues.


A medical record can be paper or electronic. The key question is whether that record is proper or not.

Most EMRs are full of copy-pastes from staff paid $15 per hour, and of e-notes that doctors complete in under 3 minutes per patient the night after the encounter happened. Those are hardly proper records.


> The EMR, and therefore most note-taking, is not a value add for the patient

Since patients’ medical histories are relevant to treating medical issues, having quick access to accurate medical histories is a value add for the patient.

Just the simple fact a pharmacist does not have to decipher chicken scratch to get people the right medicine is a value add for patients.

My family chooses to frequent doctor groups that integrate with the local hospitals’ EMRs because it allows the hospital doctors immediate access to all the information they need in the event of an emergency.


That can easily be solved without an EMR. In fact, that's a good AI use case.


This doesn't make any sense... an AI is going to be reading the EMRs.


Sorry, I'm talking about traditional EMR.

But you are correct, the 'record' will be a storage service with no front-end inputs required from the doctor. That's the point: inputs and outputs without a keyboard or mouse, just natural language.

In effect, EMRs would become voice-enabled AI chatbots.


> The EMR, and therefore most note-taking, is not a value add for the patient or the doctor. It's just the only way the doctor gets paid.

I don't think this is true, as proper note-taking and documentation is required by law in many countries where the doctor is not paid per patient/procedure.


Digital records are valuable for health in several cases, for example, when the patient needs or wants all their records.


OK, but you need to clarify your statement. As you said,

>>> note taking and documentation is required by law.

So then, the EMR is not a value add, which is also what I wrote. It just so happens that in the US the primary reason is to get paid, and it's also why some doctors still eke out a living without an EMR (but still with some kind of paper medical files).


Do EMRs improve the standard of care?


Big time.

In some cases where hospitals do paper charting, it can be mischarted.

Or there can be issues with medication not being recorded as given, and the patient can miss their dose because a nurse thinks it was given. There are cases where such confusion is tolerated, depending on the patient.


>>> Or there can be issues with medication not being recorded as given, and the patient can miss their dose because a nurse thinks it was given.

Partially true. First, there's no need to record anything in an EMR; this is a big misunderstanding. The act of sending the script is itself the record. There's no need to do data entry in the EMR.

You can test this by simply attempting to send eScripts outside of the EMR. PBM systems like Surescripts will alert the provider of any problems (multiple scripts for the same meds, reactions, etc.) as they prescribe in any other third-party system.

Even a paper script needs to be adjudicated via the PBM by the pharmacy, which means there's already a PBM record the moment that script is created and picked up. That's how most doctors and pharmacies know whether patients are picking up their meds.

Now if a doctor is fully paper-based, then you do have a problem, because there's no feedback to a paper record if the doctor fails to log into the PBM system to check for drugs dispensed. In this case the EMR may appear superior on this front, but it also introduces its own set of problems, such as a very common one: patients missing their scripts because the pharmacy is out of meds, causing multiple scripts to be sent and back-and-forths with busy doctors. That is never a problem with a paper script.

Perhaps one of the very few value adds is turning MD scribble into something legible, but that's something that can easily be solved without any clicks, or an EMR in the middle.


> First, there's no need to record anything in an EMR; this is a big misunderstanding. The act of sending the script is itself the record. There's no need to do data entry in the EMR.

Not quite. Hospital EMRs now have barcoding and scanning for timed doses being delivered, and to make sure they actually were; I saw it first-hand in the past year.
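
For flavor, a hypothetical sketch of what that bedside check does (names invented, not any vendor's actual API): scan the wristband and the med, match them against an active order, and let the scan itself create the administration record:

    from datetime import datetime, timedelta

    def verify_and_record(wristband_id, med_barcode, orders, now=None):
        # Returns (ok, reason); `orders` maps patient id -> list of order dicts.
        now = now or datetime.now()
        for order in orders.get(wristband_id, []):
            if order["med_barcode"] != med_barcode:
                continue  # right patient, different medication
            if order.get("administered_at"):
                return False, "already recorded as given"
            window = timedelta(minutes=order.get("window_min", 60))
            if abs(now - order["due_at"]) > window:
                return False, "outside the dosing window"
            order["administered_at"] = now  # the scan itself is the record
            return True, "recorded"
        return False, "no active order for this patient/medication"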

This is a shadow working culture issue, not a technical one.

Hospital workplace cultures can be quite toxic, and that plays out in varying degrees of horrible for certain segments of the population 60 percent of the time, every time.

In hospitals, medications are usually administered by a nurse.

Since it's a problem that can be casually looked away from because it doesn't impact one group, it can be downplayed.


Absolutely. They can allow providers to easily track their adherence to patient quality measures, which directly affects their income.


The question is whether they improve the standard of care.

Quality measures produce numbers, so the bean counters at CMS are satisfied.

For example: before, I was not submitting any quality measures, but my patient satisfaction was sky high and I had the lowest complication rate for years.

Now I report quality measures, but as a result of the documentation requisites and reporting requirements, I have less time to see patients, and therefore make more mistakes.

My quality measures are good because I'm talking to patients about quitting smoking and getting leaner, but I was already doing that previously. Objectively, since I now have less time due to EMR requisites, my patients are worse off than before, and it shows: slightly more complications, and my patients aren't as happy because their waiting times are longer (and getting worse).


It’s not the sole reason, but given that billing is based on it (ICD codes), I can see your point.

I remember the bean counters from the hospital going over with scribes and doctors how to “properly” document all the necessary elements to bill for a procedure. All of it was very, very boring.

“We can bill for ED physician prelim reads of radiology studies but it needs to have at least 3 or more findings documented. “

So let’s say an ED physician orders a chest x-ray to r/o pneumonia vs bronchitis. They have to document in the chart something like: “3 view chest xray; no infiltrates, no pleural effusion, no pneumothorax; received by X doctor”.

In reality, most doctors would just put “reviewed 3v cxr, no infiltrates”; in their view there's no need to document negative findings that contribute nothing.

Prior to EMRs, handwritten notes and charts were absolutely god-awful to read. Physician and nurse shorthand is not standardized, plus some doctors’ handwriting is just atrocious.

EMRs helped standardize communication between multiple parties (both medical and non-medical). Like others have mentioned, it could have been a way to track standards of care across multiple hospitals or across the US. Unfortunately, with the gold rush to get products out the door, all we got was 9-10 different proprietary EMR systems with no interoperability.

A CT scan performed in Virginia needs to get reviewed by a doctor in California? The hardcopy of the images uses some proprietary viewer that is only accessible within the system the Virginia hospital used. I remember a few cases of this when we had transfers from out of state: either the receiving physician got their own additional scan and official read, or waited for the physical images from the other hospital. I think a CT can be hundreds of images/slices. (Don’t quote me on this, it was a decade ago, lol)
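
(Aside: the pixel data itself is usually standard DICOM; the lock-in is in the proprietary viewers and transfer systems wrapped around it. A sketch of reading an exported CT series with the pydicom library, directory layout assumed:)

    from pathlib import Path
    import pydicom

    # Read every slice in the exported series, then sort head-to-foot
    slices = [pydicom.dcmread(p) for p in sorted(Path("ct_series").glob("*.dcm"))]
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    print(f"{len(slices)} slices for patient {slices[0].PatientName}")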


If a doctor can capture all their notes verbally, they'll forget less and be more likely to ensure that follow-ups, referrals, etc. actually go out.


If the AI is hosted in-house, I think it has some very wonderful potential. Unfortunately, lots of organizations are more worried about covering their asses legally than actually protecting the data.


Yeah but you can't get that genie back in the bottle. Once these data are made available to private industry it's only a matter of lobbyists chipping away at whatever protections were initially legislated. See Also: your TV, phone, car, fucking doorbell, and fridge are all spying on you.


The HIPPA rules on health data are fairly strict; otherwise your doctor, hospital, and so on would already be doing wrong. Personally I have a dumb TV, car, doorbell, and fridge, so not much spying there. The phone I'm less sure of.


HIPPA rules are easily circumvented unless you as a patient are paying attention: I can't tell you how many forms I've opted out of that wanted to explicitly export my data to third parties and partners that are not HIPPA compliant. And at least for my healthcare providers that use MyCharts, they like to make it part of the e-checkin workflow, with no option to refuse. So you're forced to go up to the desk to check in and explicitly reject it each and every time. It's a healthcare dark pattern.

And then there are online providers like better health that don't have the option to opt out at all. So you just have to avoid them entirely.


Wait—this doesn't make any sense. I'm a physician and have a lot of experience dealing with protected health information. Third parties are required to sign a HIPAA BAA and obligated to uphold privacy/security standards equal to that of your physician and hospital. Can you provide some specific examples of the third parties you're talking about?

MyChart itself is a component of Epic (the EMR) and is absolutely HIPAA compliant. Every healthcare institution I've worked with has taken HIPAA and privacy/security regarding patient data extremely seriously. Non-HIPAA compliant vendors are an immediate non-starter and don't even enter discussions when looking at new products.


I'm a retired software developer that's worked in just about every healthcare-adjacent industry segment you can imagine with HIPAA compliance being an evergreen issue. I know how this sausage gets made on the back end and let me tell you regardless of what impression has been made to you about compliance and safeguards the reality behind the scenes is always messy. Hundreds of gigs of unanonymized user data lying around on developer machines, getting tossed out accidentally during equipment rollouts, leaky API implementations, half-assed compliance testing, lack of meaningful continuous oversight, vendor services with varying levels of compliance hot-glued together on the back end, outright theft of data, this list is incomplete. I'd recommend a weapons-grade dose of skepticism over any claims of meaningful data privacy as the last 30 years have consistently and comprehensively shown that anything that gets digitized eventually gets outed if there's a financial motivation to do so.


Yes, I do believe what you're telling me—the state of healthcare tech is definitely leagues behind general consumer tech. However, I do think this is a meaningfully different class of issue when it comes to patients' perceptions and actual harms. Patients are afraid of having their health data used against them. For example, revealing medical conditions to potential employers, revealing health information to friends/family, etc. There's a growing mistrust of healthcare institutions in recent years, and there are unfounded accusations of healthcare institutions selling data for financial gain like social media companies and even the DMV (https://www.caranddriver.com/features/a32035408/dmv-selling-...). This class of nefarious patient data privacy/security negligence effectively doesn't happen. I've treated patients who are illegal US immigrants, I see patients who use and possess illegal drugs while in the hospital, but they're not reported to anyone. They're simply treated and discharged. Unfortunately, a growing number of patients don't believe this is the case, and we see substantial disparities in levels of care provided to these patients who fear healthcare.

I'm not at all dismissing how terrible it is that healthcare tech companies can be lax with patient data. This absolutely needs to be better! But at the same time, this sounds more like incompetence than active malice. Practically speaking, a patient is extremely unlikely to experience actual harm because a developer accidentally took patient data home on a personal laptop. That said, I would love to hear more about the kinds of violations you've seen in your time in health tech. I work with third-party vendors at a healthcare institution, and I absolutely want to figure out how to fix this.


With stuff like illegal drugs, so long as records are retained by the hospital, what prevents the feds from coming in later with a warrant to go through them?


HIPAA trumps the warrant. HIPAA is serious when it says patient data can only be shared for treatment, payment, and hospital operations purposes. Law enforcement is not an allowable reason to disclose patient information without their permission.


In the US a mere warrant is not enough to pierce doctor-patient privilege, AFAIK. At least I would hope it would take a subpoena or a court order or something of the ilk.

Now, though, if a third party accidentally leaks your patient info, or lead pipes are involved...


I don’t usually comment on these posts, but as a HIPAA compliance practitioner working with covered entities (and also business associates), I have to take a contrarian view of HIPAA compliance efforts by providers. HIPAA is mostly a “check the box” type of compliance effort, as opposed to building a “culture of compliance.” Most compliance efforts stop at the technology barrier. For business associates, the compliance dynamic is even worse. While the larger BAs do generally comply, because their focus is generally on the technology, for midsized and smaller BAs the CE will in most cases take it at face value that the BA is compliant. But there is a reason about 30% (by number) of all breaches are caused by BAs.


Sure, next time I find one of the forms I'll snag it for you. It was rather eye-catching because it explicitly stated "You're allowing us to share data with third parties and service providers that are not HIPPA compliant." How do I get it to you?

I wasn't claiming that MyCharts isn't HIPPA compliant: I was complaining that, as part of a MyCharts workflow, I was presented with a form that wanted me to grant someone the right to send my data to non-compliant organizations, and, as I said above, it explicitly stated so.


My email is in my profile page. And if you have truly found that the institution is sharing protected health information (e.g., even just names and date of birth) with third parties who have not signed BAAs, that is a lawsuit worth tens of millions plus government fines of $50,000 per piece of compromised data per patient. I highly suspect that there's some misunderstanding or miscommunication here.


The annual HIPAA training I was subjected to for nearly a decade on the EMR provider side of things never brought up these scenarios, but the Privacy Rule does have carve-outs that allow PHI to be transmitted to entities that would not be considered Business Associates, if the patient consents.


100% this. I just visited an urgent care center yesterday for some strep tests. I was given an electronic signature pad and told to sign for "consent to care". No documentation was given to me on what I was signing - just that I needed to sign.

I had to ask for a paper copy of the form I was signing, which was handed to me. That document said "I acknowledge receiving the privacy notice..." Was that given to me? Of course not. As for asking for it, well, let's just say I think I was the first person ever to ask for any of this documentation. I'm sure my information has been shared with 30 other entities, for a strep test. It's insane and unenforceable as a patient who just wants to get shit done.


The fact that it is even legal to ask patients to sign away their right to privacy boggles the mind.


It's mind boggling because I highly doubt it's actually true. I'm not sure where the OP is getting that info. Patients can't waive away HIPAA privacy/security rights.

I think the OP is assuming that when healthcare institutions partner with third parties, those third parties are not required to uphold HIPAA. If that's his/her belief, it's 100% false. Third parties associating with healthcare institutions have to sign business associate agreements (BAAs) that require them to uphold the same standard of privacy/security regarding patient data as the first party healthcare institution. There are severe financial penalties for violating HIPAA, and every healthcare institution I've been a part of takes this extremely seriously.


Before I start, I'm not singling you out- I am happy that you're participating in this discussion and sharing your first hand knowledge.

The thing for me is that if HIPAA truly does provide me privacy of my personal information and health care information, why are all of these privacy and consent forms required?

Whenever I am handed a form that says "privacy policy," my guard immediately goes up: what is it that they're trying to hide from me through mountains of legalese? When I don't receive one (as was the case in my doctor's visit), then I am REALLY on edge.

For example, with my health care visit, this thread prompted me to call the listed numbers on the website for the health care provider to discuss their privacy policy. The provider's number dumps you into an IVR that has zero way to reach a human - you must dial an extension, and there is no option for an operator. I ended up calling their headquarters to get a callback from a human.

If there are standard mechanisms and policies in place, then we should be able to understand the rules once and never have to sign another form again, because the rules would be clear, unambiguous, and applicable to every health care interaction. If the rules are clear about not waiving HIPAA privacy/security rights, then why have a privacy policy that's three pages of inscrutable legalese that gives a bunch of weasel room for them to "share" information?


No problem—glad to participate! There's a lot of cynicism that leads to misinformation about how healthcare works, so I'd like to clean that up. Let's attack and fix the broken parts of the system, but we should praise the working parts. I think patient privacy/security is one of the few things the US gets mostly right about healthcare.

Regarding the privacy policies: these are created by the legal department, and physicians in the department are told to distribute them and get signatures when necessary in order to do things by the book. However, your rights are inalienable and protected regardless of whether you actually receive the policy and sign the appropriate box. If you don't receive the policy, the healthcare institution is on the hook and could face a fine if reported to the DHHS. Things could absolutely be done more efficiently and more clearly for patients, but there's a fear of changing things ("if it ain't (horribly) broke, don't fix it"). Trying to improve how privacy policies are disseminated and patients informed could result in an inadvertent violation of HIPAA and large fines, so healthcare institutions are disincentivized from improving things here.

I reviewed the patient privacy policy for a few large institutions in the US, and it all seems to support what I'm saying. For example, here's NYU's policy on business associates: https://nyulangone.org/files/business-associates.pdf

NYU has additional policies here: https://nyulangone.org/policies-disclaimers/hipaa-patient-pr.... UCLA Health has similar policies here: https://www.uclahealth.org/privacy-practices. Every institution has essentially the same policies as they're all just a reflection of HIPAA.

The only ways in which patient data can be shared with others are if (1) they're involved in your treatment (e.g., your doctor at another hospital), (2) payment purposes (e.g., insurance), (3) health care operations (e.g., third party vendor software like EMRs, PACS, etc.) All are required to be HIPAA compliant if they're covered entities (i.e., healthcare institutions) or sign a BAA with a covered entity that essentially puts the same HIPAA requirements on them. A violation again results in massive fines, C-suite level firings, and expensive legal fallout.


I spoke with the compliance manager at the urgent care this afternoon and had a pleasant conversation. I shared my concerns that I was never provided a copy of the paperwork I was expected to sign - and they took that feedback to hopefully improve in the future.

I had one question in case you’re still monitoring this thread. The compliance manager mentioned a “health information exchange” which I opted out of (since it was something I can control). Do you have experience with these? It seems benign from the searches I’ve done since the conversation but I would be curious if you had any insight as a medical professional


In the US, adults are generally considered competent to make their own decisions.


There are already too many restrictive laws and rules in the US around healthcare, and here you want to add another restriction.


Ok, I’ll bite. List some concrete examples. Otherwise, I have to land on the default that regulation is good and works better than a free-for-all.


>Ok, I’ll bite. List some concrete examples.

How generous of you


Hitchens's razor: what can be asserted without evidence can also be dismissed without evidence.


> The HIPPA rules on health data are fairly strict

Depends on how you define "strict". They're pretty onerous to comply with, but they don't really provide patients with anywhere near the level of protection that most people think. It's better than nothing, but in reality, your data is being legally shared with an arbitrary number of entities, without your consent, and without any way for you to even know who has access to that data.

If that data is breached in any way, in theory the reports are supposed to trickle up the chain eventually. In practice? If it's more than one or two subcontractors deep, you'll probably never find out (unless the breached data is posted publicly and you stumble upon it that way).

Also, the cap on penalties is shockingly low: $2 million for all violations of a given provision per calendar year. And that's for willful neglect. If the cause of the violation is determined to be something lower than willful neglect, the maximum penalty is even lower.

For a very large and well-capitalized company, that might as well be a cost of business.


And what is your recourse? There's no private cause of action for HIPAA violations.


Bingo. As a new ambulance chaser, I was having some real difficulty getting the medical records for a client: no response for weeks and weeks and weeks.

Aha! I thought. HIPAA gives them 30 days (sort of). We'll sue, and surely there's an attorney-fee provision in there. Easy money. GOOGLE Wait, what? No private cause of action! All I can do is file a complaint with HHS!

That said, depending on your state, you may be able to make some sort of colorable common-law claim.


> GOOGLE

Maybe ChatGPT would have given you a more favourable answer?


Seems unlikely, since at its absolute best it's merely synthesizing a mashup of the same information that feeds Google search, and that's assuming it doesn't start seeing shit. Better to go directly to the source so you can at least make an attempt at vetting the credentials/expertise behind the information you're viewing, instead of guessing how many mommy blogs, spam mills, and reddit comments got sucked into the intake in the process of producing whatever ChatGPT just coughed up.


Going directly to the source did not provide a favourable answer. Did you somehow miss reading the thread before you replied?


OK, so given a situation where no credible information is available on the net, a system that synthesizes information taken from the net is going to produce credible information through what mechanism, exactly?


Why not just read the thread before replying?


Perhaps, but this was pre-transformers.


Finally someone uses the correct acronym


> your data is being legally shared with an arbitrary number of entities

And ILLEGALLY shared via non-conformance with federal laws and data breaches.

Just look at the Boeing fiasco and the serious normalization of deviance. You think that doesn't happen when you outsource your entire IT operation offshore to a populace that literally has zero skin in the game?


The HIPPA rules may be strict, but my most frequent breach notification for loss of personal info (averaging once every 2-3 years) is from health insurers and practice companies losing my information.


Working at the edge of cybersecurity and privacy, you've just stepped in it: there is no "Health Insurance Portability and Privacy Act". It's the "Health Insurance Portability and Accountability Act" (HIPAA).

(And I see that no one corrects you below. [edit: actually a few people do, or the comments are continuing])

Gravity is a fairly strict law too. Maybe you should review what it covers and what it doesn't. The Act greatly expands the "sloshability" of your data; whether the sanctions are appropriate or sufficient to prevent patient harm is debatable.


It's a bit unfair, since it's an easy mistake to make, but I treat HIPPA as the equivalent of seeing a resume that says someone knows SAP, Bash, JAVA, GIT, Perl. The capitalization is kind of bozo-signaling. HIPPA is bozo-signaling.

I wouldn't expect right capitalization of FedRAMP. But JAVA vs. Java and HIPPA vs HIPAA just seem like you're not truly familiar.


I'm not sure what your comment is: it's a bit unfair; it's easy to make a mistake; it's bozo-signaling; [you] wouldn't expect right [correct?] capitalization; [something other than capitalization] seem[s] like you're not truly familiar.

What's unfair? Are random mistakes unfair (that's a very good philosophical question)? Are we forbidden from learning about other people from their mistakes or from mistakes generally?

The parent says:

> The HIPPA rules on health data are fairly strict

The followons variously say:

> I can't tell you how many forms I've opted out of that wanted to explicitly export my data to third parties and partners that are not HIPPA compliant.

> I wasn't claiming that MyCharts isn't HIPPA compliant

> The HIPPA rules may be strict

I'm more convinced that these people are making claims about the heart of what they presume HIPAA to be than I am about my parent poster's intent. According to part of your comment these people are "not truly familiar", but without that surfeit of "P" all over it we wouldn't know. My comment was based on an actual conversation heard in the field while working with what is potentially HIPAA data.

As for the parent post, the thought in my mind was is it a mistake? is it a troll? is it a mistake and they thought it was funny so they didn't correct it? I'm willing to give them a tip o' th' hat for the inadvertent glimpse into the bland certitude of inaccuracy.


If anyone wants to argue or discuss hypotheticals, how about this:

"Patient X's head was caught in a drop forge, and now they need to get four CAT scans a day."

How does HIPAA apply to this statement, how would you anonymize it, and how effective would those measures be against de-anonymization given the obvious rarity of the situation? Or is something like this simply never to be discussed?
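
To make that concrete, here is a toy redaction pass (nowhere near a full Safe Harbor implementation; the identifier patterns are invented). It scrubs the explicit identifiers, and the event itself still re-identifies the patient:

    import re

    PATTERNS = [
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
        (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
        (re.compile(r"\bPatient \w+\b"), "Patient [REDACTED]"),
    ]

    def scrub(text):
        for pattern, repl in PATTERNS:
            text = pattern.sub(repl, text)
        return text

    note = "Patient X's head was caught in a drop forge; four CAT scans a day."
    print(scrub(note))
    # "Patient [REDACTED]'s head was caught in a drop forge; ..."
    # Still trivially linkable by anyone who heard about the accident.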


The law is called HIPAA, and technology providers sign a BAA (Business Associate Agreement) stating that they agree to handle and store your information in compliance with the relevant standards and laws. With tools like OpenAI Whisper and GPT, "HIPAA mode" means the tools don't retain any memory of prior interactions. Health tech is built on third-party vendors; AWS, Google, and Microsoft are huge in the space.


Well, the reality is that once they get access to the data and are allowed to use it, they will likely be happy to be grandfathered into an accelerated position if legislation later creates barriers to entry by preventing such mass data gathering without express permission; it gives them an advantage over the competition. But generally this won't be a winning tactic in the end, because trust is the most important factor, even if consumers aren't yet driven primarily by it.


The mere implication that consumers can be driven by privacy concerns flies in the face of 20 years of observed reality so I'm not sure where you're getting the idea that that is credibly possible. Objectively consumers are driven primarily by convenience, with cost coming in a close second.


Have consumers had any really good or inspiring options yet that put privacy first?

Most of the proponents of Bitcoin and similar (an evolution of the finance industrial complex) seem to claim that the reason there isn't wider adoption is that the "first killer app" hasn't been developed yet. I'd argue it's because its adoption is motivated by profit-greed, which requires a wealth transfer from new adopters to the ones passing off the bag.

Many of the core, fundamental values put forward, the ones many hope Bitcoin et al. would solve, are virtuous attempts at complex problems. But Bitcoin, from a holistic systems perspective where all consequences are integrated, doesn't fit the bill for what will become the next stable evolution of how society functions with technology. Similarly for privacy concerns: the solutions Bitcoin hodlers are aiming for, if they care about such things beyond buying low and selling high during pumps and dumps, simply haven't had a viable non-hype, non-greed-driven implementation made available yet.

This current wave is a result of industrial complexes forming to maximize their ROI at all costs; being first to market and maximizing profits lets them dominate, but for how long? Maybe a decade ago I wrote a blog post on Facebook's governance, pointing out that FB's attempt to maximize profit now would certainly increase annual revenues/profits in the short term, but: would you rather have lower profits for 20+ years or higher profits for 5+ years?

Mark is not an idea person, not a creative (everyone in tech should know his story involving the ConnectU twins who had hired him), and so he wasn't able to design and evolve a system to fully harness the potential of controlling what is essentially a free marketing platform. Instead he mostly depended on network-effect defense strategies, including buying up feature sets like WhatsApp, Instagram, etc. once they gained enough critical mass to become competitors with FB. So, no real innovation.

The VC industrial complex has been a driver in selecting for all of this, and acquisitions also suck up and eliminate any up-and-coming competition that gains enough market share and momentum to be a threat; the incumbent dating and food-delivery platforms are the most obvious examples. The captured MSM is another, less obvious version, where conglomeration through consolidation has put the power of information control in the hands of fewer and fewer people. It's why big pharma has been so successful suppressing the majority of negative sentiment about it, as one of multiple parties toeing the line and attempting to maintain control with what I call the censorship-suppression-narrative-control apparatus. Elon buying Twitter/X created a #ZeroIsASpecialNumber problem for them, in that they can no longer as easily put their hand on the scale of free speech: a blow to their authoritarian-totalitarian and industrial-complex dreams, that combination forming fascism.

Another example, I think the advertising industrial complex will collapse within the next decade.

Ads are probably tied for first place with downvoting mechanisms for how detrimental an effect they have on society (I don't have time to dive into the reasons for either right now); they do not mimic the natural patterns by which information and attention were distributed before digital.

Business is war, and there are trillions of dollars at stake. Who knows what the various parties will do: millions to billions of people who mostly, blindly follow the status quo system because they believe they'll be better off for it, who struggled to get where they are in the manufactured rat race and are holding on for dear life out of fear. In fact, tyranny and a scarcity mindset are very expensive, and the universe provides all the abundance we need; we can all thrive with proper organization.

“Those who love peace must learn to organize as effectively as those who love war.” — Martin Luther King, Jr.

Thanks for the convo! Please continue if you're motivated or inspired to!

P.S. I had 2 neck surgeries last week, so my pain level is down a lot - and so words are flowing out of me a bit easier, and apparently you inspired me to say far more than I was expecting, so thank you again.


While the US has private healthcare, Canada does not, and the data does not belong to private companies.


There's no mention of data privacy. Since the AI Scribe[0] study is run by OntarioMD, the digital technology arm of the Ontario Medical Association, it might be okay, but it would be nice if that were declared.

From the AI Scribe link:

> This project is funded by the Ontario Ministry of Health and overseen by Ontario Health. The study is currently underway, with 150 primary care providers already selected from diverse demographic groups, technical backgrounds, and geographic areas. We are no longer accepting participants for the study. Results of the study will be shared later this year.

[0] https://www.ontariomd.ca/ai-scribe


Would you care about data privacy when you're dead?


Your living relatives might, considering the hereditary nature of so many things in medicine.


I feel like you were trying to make a point here, but I have no idea what it was.


We are made of electrons. Every electron is the same. Electrons have no sense of privacy.


Electrons don't get murdered when their SO's find out they are pregnant. Although I guess that's more of an issue in the USA than Canada.


Electrons don’t eat food either.

Eating food is unnecessary.


Care about anything when you are dead?


I honestly don't love taxpayer funding being used - unless the underlying technology is open sourced and available for commercial use, so then the free market can make improvements on it, etc.

Or funding situations like "we'll pay for you if the doctor/practitioner chooses your software." Otherwise the free market is efficient, and unelected people administering taxpayer money have no real incentive not to mismanage it; especially when current governments are printing money and devaluing everyone's savings, causing a ton of externalized effects that literally harm people's health and reduce their quality of life.


> but otherwise the free market is efficient

Efficient in routing public goods to a locked box maybe


If you look at a small enough timeframe and a completely free market without setting up rules to the game to keep it an even playing field, then that's an issue, yes.

What are your thoughts on patents?


If there are rules to ensure fairness, then it isn't a free market. And if a market with rules could be called a free market, but only for arbitrary slices of time, I'd submit that also isn't a free market.

"allow lucky or unscrupulous owners to swallow less lucky or more ethical owners' businesses until there's only 1-3 players left that are too big to start to compete with and too big for the others to buy out."


I find perfectionism gets in the way in such conversations: the goal is aiming for the freest market possible, but no - you can't legally sell services to assassinate people; unless you're the state who has a monopoly on violence.

Because on the other side we have people blaming the free market and capitalism in general for the problems, when capitalism is the solution; it's crony capitalism, or what I prefer to call it, corruption, and things like regulatory capture, that are the problem.

The problem is industrial complexes funding and lobbying politicians, and placing politicians who will favour policy for them; and it appears that foreign bad actors have also helped certain politicians get elected elsewhere.

I like the Democracy Dollars and Journalism Dollars solutions proposed by Andrew Yang during his presidential run to help act as a counterweight to the power of industrial complexes.


> so then the free market can make improvements on it, etc.

Yeah, not sure I'm onboard with that. I am onboard with open sourcing it via a foundation and accepting commercial contributions a la the Linux kernel.


I'm from Ontario and see resident physicians regularly. The problem isn't that they don't have time to take notes; it's that they don't have time, period. The first thing the supervising doctor says when they're invited in is "I haven't got much time...". Most likely your diagnosis goes like this: you describe symptoms, the resident disappears to consult the supervising doctor, and returns with an action. Usually no long-term plan is given, just a medication or recommendation. It's hard to negotiate the action because you're negotiating with the resident, but the decision is authorized by the supervisor. The problem is systemic, and automatic note taking isn't going to solve it. Despite this success story for the doctor, I suspect the patients themselves won't see the same success in Ontario.


Here is a similar open source project: https://github.com/josephrmartinez/soapscribe

Just a starting point. But if you are interested in this space, fork away and build it into something useful!

My personal take is that the current tools on the market are too expensive. The cost should go way, way down. This should stay open source. Patients should have easy access to full audio recordings and transcriptions of their medical appointments. One can dream!
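
For the curious, the core of such a tool is a short pipeline: transcribe the visit audio, then reformat the transcript into a SOAP note. A minimal sketch using the OpenAI Python SDK (the model names and prompt are my assumptions, not soapscribe's actual code):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("visit.mp3", "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        )

    note = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Rewrite this clinical visit transcript as a SOAP note "
                "(Subjective, Objective, Assessment, Plan). Do not add any "
                "finding that is not stated in the transcript.")},
            {"role": "user", "content": transcript.text},
        ],
    )
    print(note.choices[0].message.content)

An open, self-hosted version of exactly this loop is what would drive the cost way down.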


Having worked in computing around medical facilities and providers, unfortunately, I believe this is just a pipe dream.

Companies that charge a lot for the software will buy up the smaller companies providing services until only a few are left. They'll bundle it up with an expensive price tag. Even more so, providers tend to have protection when they buy a piece of software used by a large number of other providers. They get to bandy about saying "industry standard". I promise you, you do not want to be in front of a medical malpractice jury saying "well we slapped these parts together that we downloaded online", it won't go well for you, even if that's what the big software service did just the same.


They can only buy up companies that are willing to sell. Principles still matter to some people.


Salesperson's Perspective

(initial introductions, "accidentally" stumbling on the owner's favorite bar/beach/...) Yeah, I'm doing my damndest to make the world a better place. There's all of these great tools for doctors, but they're struggling, getting hit by the crossfire of malpractice suits and interoperability difficulties.

(later meetings) We just landed <xyz llc>! They're world-class at helping patients with <abc>, but they didn't have the resources to survive in the current political climate. The founders were floundering pouring their own money in to try to keep their patients happy, but with our help the patients are better off and the founders can finally retire comfortably.

(much later) I see how much you care about your business and how it's really about the people you serve; that's why I got into it too. When it's finally time for you to sell, you definitely don't have to go with us, but please try to hand it off the right way to somebody who cares. Private equity vultures don't make anybody happy.


Fewer and fewer it feels like :(

The almighty dollar influences a lot of people.


On that front, there is the cool AquaVoice input method: https://withaqua.com/

Is anyone looking at doing an open-source version of that? I guess with a mini Whisper and a mini fine-tuned GPT-like model we could get 99% of the way there.
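
A rough local starting point, assuming the open-source openai-whisper package for the speech-to-text half; the cleanup pass is just a placeholder where the mini fine-tuned model would go:

    import whisper

    model = whisper.load_model("base.en")  # small enough to run on CPU
    result = model.transcribe("dictation.wav")
    raw_text = result["text"]

    # Placeholder cleanup pass: a small fine-tuned model would fix
    # punctuation, casing, and spoken artifacts ("new line", "scratch that").
    print(raw_text.strip())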


For my personal use, I've been using a minimalist script for a while: https://github.com/thiswillbeyourgithub/Quick-Whisper-Typer

Don't be afraid to contribute; it's a bit rough around the edges.


that sounds like a great open source project, I'm interested in starting on it if there's nothing out there.


Three days later: I subscribed to Aqua, and it's an astonishingly good product. I wish I could use it in any textbox I wanted, though.

Also, I wish for an app. Please, Aqua people. Keeping my phone's screen on while I transcribe sucks.


Privacy and hallucinations would concern me. It seems likely that physicians relying on these tools will come to "trust" the output and stop checking the notes for accuracy. Checking the notes will become a burden too.

Maybe the real solution is to lower the amount of administrative work required? Or hire people to do it?


I too feel a bit worried about AI making mistakes, but the thing is that a doctor rushing or an assistant can (and do) make mistakes too. These are all just cost vs quality of service tradeoffs. From my experience (having been in and out of hospitals a lot in the last year) it wouldn't surprise me if AI, even with occasional mistakes, would still be a net improvement in overall quality of care and outcomes.


I’m not a fan of, “but humans make mistakes too,” defense.

They do but we don’t consider it an improvement when they are made. The goal isn’t to have mistakes.

The goal is to maintain (or improve) outcomes without burning out clinicians. Using an LLM isn’t the only solution and might not be a good one.

If this is going to be a thing I hope that policy around it will inform patients when the physician plans to use it and allow a patient to opt out.


In my experience using AI a lot more recently, hallucinations are a non-issue if you are simply using it to transform data. AI hallucinates when it's asked to extract data from its own "memory," but not when it's given a task to perform along with all the necessary input.


Whisper hallucinates, or is just incorrect, fairly often. I have been transcribing old TV and radio programs just to get a real feeling for how inaccurate it is. For example, the TV show JAG's main character is Lieutenant Harmon Rabb, Junior. I prompt "You are transcribing a show called JAG - Judge Advocate General - whose main characters are named Lieutenant Harmon Rabb jr, known as Harm or Rabb, [...]". Maybe 1 time out of 20 it will actually put "Rabb" or "Harm".

Even better is stuff like old call in radio shows, the callers are perfectly understandable, but whisper has lots of issues.

If there's something better than Whisper, I'd love to try it out.
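
For reference, the knob described above is whisper's initial_prompt parameter; it biases the decoder toward the given vocabulary but, as noted, guarantees nothing:

    import whisper

    model = whisper.load_model("medium.en")
    result = model.transcribe(
        "jag_episode.wav",
        initial_prompt=(
            "JAG, Judge Advocate General. Characters: Lieutenant Harmon "
            "Rabb Jr., known as Harm or Rabb."
        ),
    )
    print(result["text"])  # the names may still come out wrong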


“ mimic doctor’s notes and reduce the amount of paperwork a physician would have to manually compile.”

So there's an AI making up doctors' notes? That's extremely contentious.

In Ontario, doctors are paid per patient per year and then also per visit/procedure. This doctor is just outsourcing her doctor work, which taxpayers are footing the bill for, to AI. This is wrong in so many ways. What a lazy practitioner!


Did we read different articles? Did you just fixate on that line and not read the rest? She was the one doing the examination and verbalizing her concerns. The AI took the information the doctor provided and reformatted it into bureaucratic prose, just like doctors' scribes have been doing for the last few centuries. I don't see the issue here in the slightest. None of the "doctoring" has been outsourced, just the scribe work.


In my line of work (healthcare in the UK), an AI system that would change what a human writes in any way would be considered a medical device, and would require an absolute ton of paperwork to certify. In the case of LLMs, I don't know if certification would even be possible, because you have to show that your system didn't change the intent of the human, which is impossible to do with LLMs.


Sure, if the LLM note were filed straight into the chart without review, it’d be pretty unsafe.

But these systems are meant to generate a draft note that the doctor still has to review, edit, and sign. At the end of the day, it’s still up to the doctor to ensure the note is correct.


It's mostly transcribing dictation and recordings. The doctors are still liable if it's wrong, so there's plenty of incentive to simply read the output, which is still a major productivity improvement over writing plus reading.


There is a 0% chance they aren't slurping up that data to store forever (and "improve their core offerings" or whatever the privacy policy says). Anyone want to take bets on whether the first breach is:

(1) A vindictive employee tracks down their ex's information

(2) The raw database is breached and wholesaled

(3) Somebody embeds the AI's RAG data in HTML for anyone to read

(4) The training job's Spark admin portal lives on an ngrok'd laptop for anyone to siphon

(5) The mandatory Facebook "like button" on every government website is set up to scan every query

(6) They don't "sell personal data", but if you don't opt out (and usually even if you do) a thousand adtech companies have paid to be affiliates to "assist in normal day-to-day operations", where their only "assistance" is downloading your personal data to serve you ads for hemorrhoid-friendly buttplugs or non-magnetic magnetic healing crystals that don't interfere with your pacemaker or whatever

(7) Other (please comment)


This data already exists anyway and has been there for like 50+ years ...


Maybe I'm old-fashioned, but to this day I don't take notes on the computer when I'm in the room with a patient. I might take a couple notes, but otherwise I do it between patients or after hours. It seems discourteous to me otherwise.

That said, I have a prolonged charting period after every clinic. It's not sustainable in daily practice (I don't see patients every day).


What are the economics of getting a scribe?


If you're looking to build an AI notetaker for medical use-cases (or others), https://www.recall.ai is a HIPAA compliant API to capture conversations from video conferences.

P.S. I'm the founder, so obviously biased :)


The root cause of this is that the billing system (which varies by province), by which doctors actually get paid, is a fee-for-service model. That means the system incentivizes doctors to see patients by only allowing them to get paid for actually seeing the patient: give someone a vaccine? $15. Family doctor routine checkup? $34.

The system is set up to reward the kind of doctor who sees 6 patients an hour and is on their third marriage because they're at the clinic for 12 hours a day. Those are the family docs who are making high six figures, which is incidentally the opposite of what's good for the patient. You can't have meaningful interactions with a family doctor in ten minute visits, but the billing codes (in BC and Ontario at least) are set up to financially penalize doctors who take 45 minutes with a single patient to really make sure they get to the bottom of things.
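
Back-of-envelope math with those numbers shows the incentive (figures are the examples above, not an actual fee schedule):

    FEE_PER_VISIT = 34        # routine family-practice checkup, CAD
    HOURS_PER_DAY = 8

    for visits_per_hour in (6, 4, 4 / 3):        # 10-, 15-, 45-minute visits
        daily = FEE_PER_VISIT * visits_per_hour * HOURS_PER_DAY
        print(f"{60 / visits_per_hour:>4.0f}-min visits: ~${daily:,.0f}/day")

    # 10-minute visits gross about 4.5x what 45-minute visits do
    # under the same billing code.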

Anyway, the follow-on effect is that doctors generally don't get paid to do their legally-required paperwork and a litany of other things, so we've arrived at the present where family doctors are using "AI" shitware of questionable quality made by silicon valley techbros of questionable quality, and relying on that software to ensure the accuracy (and privacy!) of patients' health data because they are so overworked and underpaid that there's a constant brain drain of good family doctors leaving family medicine for specialized sub-disciplines or leaving Canada to seek greener (pun only partially intended) pastures where the pay is better south of 49.

tl;dr the whole situation's fucked and the fact they're resorting to "AI" garbage should be a source of profound shame, not jubilation


The situation in BC just recently changed for family doctors:

https://www.cbc.ca/news/canada/british-columbia/bc-doctor-su...

"Provincial health officials announced the changes during a Monday news event, saying physicians will be able to stop participating in the current fee-for-service system in early 2023."

"The provincial government says a full-time family doctor will be paid about $385,000 a year, up from the current $250,000, under the new three-year Physician Master Agreement reached with Doctors of B.C. last week."


My limited experience with dental practice payors (e.g. the NHS in England) is that it's an ogre's choice: Pay for over-treatment, or pay for neglect.


Damn that is accurate


Many family doctors in Ontario don't use the fee for service model.

Instead, they have a roster of patients and they receive a flat fee every year for having a patient on their roster.

I'm not sure it changes any of the downsides you mentioned, though, since the yearly flat fee is quite low so doctors still need to minimize the time spent per patient.


The situation is the same in Sweden. New Public Management is a disaster all around.


Here in BC it's just a lack of political will because physicians are expensive, governments are broke after COVID, and so instead of fixing the system (which would cost money), the absolute morons in the health ministry have decided that "nurse practitioners" are 1:1 substitutes for family doctors, despite the fact that they don't go to medical school and thus don't receive any diagnostic training whatsoever.

It'll right itself eventually, but I'm not sure what the actuarial tables say about whether I'll be around to see it or not.


The best you can do is take care of yourself. Exercise, eat healthy, don’t do overly risky activities.

It really is a shame, the state we're left in here in BC. If it were more accessible I would go back to school to become a doctor, but the MD programs are few and far between and not really structured for someone with kids. I do meet the entrance criteria for the UBC med program, but I'm not going to uproot my family back to Vancouver.

The newly proposed increase in the taxable share of corporate capital gains from 1/2 to 2/3 will also negatively affect many doctors in Canada, since many of them run their own companies and invest some of the income for its tax advantages.


Don't do it. I'm married to a doctor who started in family medicine and bailed out because the work-life balance was awful and the money was mediocre, but she wasn't willing to see 6 patients an hour.


> governments are broke

That's just what they tell voters in order to get away with cutting the thing government exists to provide in the first place, all to lower taxes for some weird political-religious ideals.

If the government can't provide basics like health care, that government has no reason to exist.


Not saying your overall message is wrong about your region, but... my wife is an NP, and most of her courses and clinical time were focused on diagnostic techniques.


Sorry man, your wife isn't a replacement for a doctor. If she was, she'd have gone to medical school.


The problems you point out seem valid, but are you sure they are directly relevant to the core issue this AI is solving: that skilled data entry is critical to medical operations and very effort-intensive?

I feel like, doctor incentivization and silicon valley's predatory corporate culture aside, this problem does need to be solved if we want to avoid wasting doctors' precious work-hours.


> I feel like, doctor incentivization and silicon valley's predatory corporate culture aside, this problem does need to be solved if we want to avoid wasting doctors' precious work-hours.

Your comment presupposes that time spent on data entry is a waste. Relevant, cogent notes written by a human are infinitely better than whatever the rent-seeking intermediary between the physician and OpenAI shits out. The problem is not technical in nature; it's a misalignment of incentives.

The solution is to pay physicians for time they spend charting, and then the time is no longer "wasted".


Obviously, there are medical and privacy issues in this story that make the case more complex, but anyone on HN not looking into ways that AI can streamline their lives and jobs at this point is doing themselves a disservice.

Yes, there are limitations. Yes, there are times when it goes down the wrong path or actually makes things worse, but those are getting fewer and further between.

Things like Otter, Aider, Cursor, GPT Vision, Obsidian (Smart Connections), MumurType are all making huge impacts on my life and productivity. I'm now tapping knowledge I've been collecting since the early 2000s.


Medical data shouldn't be going to Microsoft. I'm surprised it's even legal in the context of this story.

Your argument is also kind of silly. Yes, go ahead and submit all your life notes since 2k to Big Tech. What does that have to do with the larger context of how these tools are deployed, specifically in the medical field, where people do expect some semblance of privacy and which is already regulated to some degree?


Not possible for work due to NDAs, not desirable outside work for similar reasons. AI home appliances that aren’t cloud-based seem a long way off.


Just the other day there was news about how many mistakes AI-assisted medical records have caused – wrong names, wrong diagnoses, typos (buksmärta -> kuksmärta, which is hilarious but serious).

Some of it is surely teething problems, but unless there is a robust check upon implementation it might just add another layer of inefficient new public management make-work to the system.

https://sverigesradio.se/artikel/ai-journaler-i-sjukvarden-k...


It feels to me like an autopilot problem in the making. "This thing means that you don't have to keep your eyes on the road - but please ensure you keep your eyes on the road, in case of errors"


The issue is, if you have any kind of rare condition, it already is this way. Much like the entire white side of a semi presented across the road was a rare condition for Autopilot, a huge number of 'rare' diseases present problems for humans, leading doctors and their staff to make errors by assuming the most likely condition. There's a saying, "when you hear hoofbeats, think horses, not zebras", but in hospitals zebras and even unicorns do show up, especially in cases with recurring problems.


I found it interesting that your mind went to Tesla's Autopilot. My mind went to operating airplanes. Most newer small planes have some form of GPS, but you're technically not supposed to use instrument navigation until you're certified to do so. I haven't met a single pilot who didn't do so, though.

Anyway, it creates the very problem you mentioned but just replace "road" with "outside the cockpit".


> you're technically not supposed to use instrument navigation until you're certified to do so

You can use them all you like. You just can't fly in conditions where you have to use them.


> but you're technically not supposed to use instrument navigation until you're certified to do so

What do you mean by this? Not having an IFR rating does not mean you're not allowed to use the navigation aids or the plane's autopilot.


Reading and doing minor edits is much less of a cognitive load than writing.

The article does not suggest that doctors should blindly trust the SOAP note created by the tool in question.


> The article does not suggest that doctors should blindly trust the SOAP note created by the tool in question.

But that's what will inevitably happen at some point, when they get to the point of only rarely making big dangerous mistakes.


I don’t think we’re yet in a position where we can make claims about how inevitable certain outcomes are.

It is important to remind people that technology of any sort can be error prone and that human oversight should be relied on for any automated process, LLM based or not!

I work in the legal industry and every lawyer is aware of the guy who used ChatGPT to spit out non-existent case law!


Apparently not enough to prevent it from being repeated... One of Trump's former lawyers did it months after the first case made national news.

https://text.npr.org/2023/12/30/1222273745/michael-cohen-ai-...


> But that's what will inevitably happen at some point, when they get to the point of only rarely making big dangerous mistakes.

So the same as doctors making occasional big dangerous mistakes that cost lives. Seems like it would be a win then, as it takes some mental load off of doctors so they can focus where they should: on the patients, not on note taking.


> So the same as doctors making occasional big dangerous mistakes that cost lives.

Will it be? There are already unanswered questions on who's liable if Tesla's FSD runs you into someone.


I would assume you, the person behind the wheel of the car. Much the same as the doctor or staff hitting the submit button and attesting to the validity of the records.


So Boeing should not have any liability for the MCAS crashes, because the pilots were in control?


There's not much that a pilot can do when a plane is not working correctly. They can recognize the issue but they might not be in a position to do anything about it.

If an auto-form filler is not working correctly the doctor can also recognize the issue and also be in a position to do something about it, namely, fix the error before they submit the form.

That is to say, there's a world of difference between a pilot flying a plane and a doctor filling out a form.


Isn't the vast majority of the world using computers without ECC memory already blindly trusting there are no bit flips causing silent corruption?


> …when they get to the point of only rarely making big dangerous mistakes.

Are you intentionally rubbing FUD on that, or am I misreading you? I don't think we need to wait to rely on technologies until they've achieved perfection - just until their mistakes are less frequent, less dangerous, or more predictable than human mistakes for the same task.


I'm saying getting close to perfection is truly dangerous territory, because everyone gets very complacent at that point.

As a concrete example: https://www.cbsnews.com/news/pilots-fall-asleep-mid-flight-1...


We are already there with humans. Most people take a doctor at their word and don't bother to get a second opinion.


At that point, you've at least consulted with a medically trained professional who's licensed (which they have to regularly renew), has to complete annual CME, can be disciplined by a medical board, carries medical malpractice insurance, etc.

There should be requirements for any AI tool provider in the medical space to go through something like an IRB (https://en.wikipedia.org/wiki/Institutional_review_board) given they're fundamentally conducting medical experimentation on patients, and patients should have to consent to its use.


In the context described, it's acting as a tool for a doctor. AI scribes are not conducting experiments.


The use of the AI to treat patients is a medical experiment.


any change to the practice is an experiment.


Exactly. If you have any kind of illness that is displaying atypical symptoms or is otherwise rare, your life is in your own hands. Even something that is somewhat common, like EDS, can get you killed by doctors missing the signs. Keep a printout of all your own symptoms as they evolve over time, and immediately bring up anything that conflicts with what the doctor says.


Thinking of reading and doing edits as less work than entering it yourself is exactly what will cause critical errors to be made. The article may not suggest blind trust, but just as there are Tesla drivers who are supposed to watch the road and don't, there will be users who won't check. And in a medical record that can be deadly.


> typos (buksmärta -> kuksmärta which is hilarious but serious)

To save people looking it up, that one-char difference changes "abdominal pain" into "cock pain".

Wow.


Well, they are close enough to each other, aren't they? /s

That is the main problem with the AI: it is close enough, but never there.


This seems very similar to self driving cars.

At the beginning they will be worse than humans and cause deaths that humans would have prevented while at the same time probably saving lives where a human would make a mistake.

But not far down the road they'll become much better than humans. Even if they do occasionally make a mistake and cause a death that a human wouldn't have, they'll save far more lives by not making the mistakes that humans do.


> But not far down the road they'll become much better than humans

While I think you're correct, there is no proof this will ever be achieved.

It very well may not be possible along our current path. It may take 100 years, or 1,000, to get there.

And yes it could take only 20 more. But to state this as a certainty?

No.


I think it's basically certain that computers are better at transcribing notes than humans now, and they will continue to get better. I mean, we're basically there already; I trust spell check and grammar check more than myself.

That's what we're talking about here. IMHO computers are already better at that, what possibly makes you think this won't happen?


My reply was to the parent post, which started:

> This seems very similar to self driving cars.

And continued discussing self driving cars.

Regardless, what I said stands.


ah, then I misinterpreted. My fault, appreciate you letting me know.


I wouldn't be surprised if Region Blekinge were using something much worse and much more expensive than Whisper for their transcription.

I've been transcribing A LOT of SR (Swedish Radio) shows as part of https://nyheter.sh/, and Whisper (self-hosted) has been very accurate.
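If anyone wants to reproduce this, self-hosting Whisper is only a few lines of Python. A minimal sketch using the stock openai-whisper package (the file path and model size here are placeholders, not what nyheter.sh actually runs):

    # pip install openai-whisper  (ffmpeg must be on PATH)
    import whisper

    # "medium" is a decent speed/accuracy trade-off for Swedish
    model = whisper.load_model("medium")

    # a language hint avoids misdetection on accented speech
    result = model.transcribe("episode.mp3", language="sv")
    print(result["text"])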


Tangent here: really? I've found base Whisper has concerning error rates for non-US English accents; I imagine the same is true for other languages with a large regional mode to the source dataset.

Whisper + an LLM can recover some of the gaps by filling in contextually plausible bits, but then it's not a transcript and may contain hallucinations.

There are alternatives that share Whisper internal states with an LLM to improve ASR, as well as approaches that sample N-best hypotheses from Whisper and fine-tune an LLM to distill the hypotheses into a single output. Haven't looked too much into these yet given how expensive each component is to run independently.
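Roughly, the N-best-plus-LLM idea looks like the sketch below. To be clear, this is a hand-wavy illustration of the approach, not any particular paper's method; the prompt, model choices, and file path are all my own assumptions:

    # pip install openai-whisper openai
    import whisper
    from openai import OpenAI

    model = whisper.load_model("base")

    # sample several hypotheses at different temperatures
    hypotheses = [
        model.transcribe("clip.wav", temperature=t)["text"]
        for t in (0.2, 0.5, 0.8)
    ]

    # ask an LLM to distill them into a single transcript
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "These are noisy ASR hypotheses of the same audio. "
                       "Reply with the single most plausible transcript:\n\n"
                       + "\n".join(hypotheses),
        }],
    )
    print(resp.choices[0].message.content)

The same caveat applies here: whatever the LLM picks is contextually plausible, not guaranteed to be what was actually said.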


Language detection in the presence of strong accents is, in my opinion, one of the most under-discussed biases in AI.

Traditional ASR systems struggle when English (or any language) is spoken with a heavy accent, often confusing it with another language. Whisper is also affected by this issue, as you noted.

The root of this problem lies in how language detection typically works. It relies on analyzing audio via MFCC (Mel Frequency Cepstrum Coefficient), a method inspired by human auditory perception.

MFCC comes out of psychoacoustics, the field that studies how we perceive sound. It emphasizes lower frequencies, applying a Fourier transform and mel-scale filter banks to convert audio into a compact spectral representation.

However, this approach has a limitation: it's based purely on acoustics. So, if you speak English with a strong accent, the system may not understand the content but instead judge based on your prosody (rhythm, stress, intonation).
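For the curious, MFCCs are easy to compute and inspect yourself. A minimal sketch with librosa (the file path is a placeholder):

    # pip install librosa
    import librosa

    # load mono audio at 16 kHz, a common rate for speech processing
    y, sr = librosa.load("speech.wav", sr=16000)

    # 13 coefficients per frame is the classic choice
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    print(mfcc.shape)  # (13, n_frames)

Nothing in that pipeline looks at what is being said, only at the spectral envelope, which is exactly the limitation just described.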

With the team at Gladia, we've developed a hybrid approach that combines psycho-acoustic features with content understanding for dynamic language detection.

In simple terms, our system doesn't just listen to how you speak but also understands what you're saying. This dual approach allows for efficient code-switching and doesn't let strong accents fall through the cracks. The system is based on optimized Whisper, among other models.

In the end, we managed to solve 99% of edge cases involving strong accents, despite the initial Whisper bias there. We've also worked a lot on hallucinations as a separate problem, which resulted in our proprietary model called Whisper-Zero.

If you want to give it a try, there's a free tier available. I'm happy to bounce around ideas on this topic any time; it's super fascinating to me.


Isn't the issue more that traditional ASR systems use US (General American) phonetic transcriptions so then struggle with accents that have different splits and mergers?

My understanding of Whisper is that it uses a model trained on many different accents, specifically from LibriVox. The quality would depend on the specific model selected.

The MFCC or other acoustic analysis is to detect the specific phonemes of speech. This is well understood (e.g. the first 3 formants corresponding to the vowels and their relative positions between speakers), and the inverse is used for a lot of the modern TTS engines where the MFCC is predicted and the waveform reconstructed from that (see e.g. https://pytorch.org/audio/stable/transforms.html).
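A rough sketch of that round trip with torchaudio (parameters are illustrative, and a real TTS system would use a neural vocoder rather than Griffin-Lim):

    # pip install torch torchaudio
    import torchaudio
    import torchaudio.transforms as T

    waveform, sr = torchaudio.load("speech.wav")  # placeholder path
    n_fft = 1024

    # forward: waveform -> mel spectrogram (what TTS models predict)
    mel = T.MelSpectrogram(sample_rate=sr, n_fft=n_fft, n_mels=80)(waveform)

    # inverse: mel -> linear spectrogram -> waveform (crude but workable)
    spec = T.InverseMelScale(n_stft=n_fft // 2 + 1, n_mels=80, sample_rate=sr)(mel)
    recovered = T.GriffinLim(n_fft=n_fft)(spec)
    torchaudio.save("recovered.wav", recovered, sr)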

Some words can change depending on adjacency to other words, or other speech phenomena (like H dropping) can alter the pronunciation of words. Then you have various homophones in different accents. All of these make it hard to go from the audio/phonetic representation to transcriptions.

This is in part why a relatively recent approach is to train the models on the actual spoken text and not the phonetics, so it can learn to disambiguate these issues. Note that this is not perfect, as TTS models like coqui-ai will often mispronounce words in different contexts as the result of a lack of training data or similar issues.

I'm wondering if it makes sense to train the models with the audio, phonetic transcriptions, and the text and score it on both phonetic and text accuracy. The idea being that it can learn what the different phonemes sound like and how they vary between speakers to try and stabilise the transcriptions and TTS output. The model would then be able to refer to both the audio and the phonemes when making the transcriptions, or for TTS to predict the phonemes then the phonemes as an additional input with the text to generate the audio -- i.e. it can use the text to infer things like prosody.


>Traditional ASR systems struggle when English (or any language) is spoken with a heavy accent, often confusing it with another language.

Humans also have difficulty with heavy accents, no?


True, but we can notice that that's the case, and then try to listen more carefully or ask for clarification


I've found that WhisperX with the medium model has been amazing at subtitling shows containing English dialects (British, Scottish, Australian, New Zealand-ish). It not only nails all the normal speech, but even gets the names and completely made up slang words. Interestingly you can tell it was trained from source material with dialects because it subtitles their particular spelling; so someone American will say color, and someone British will say colour.

I can't speak to how it performs outside of production quality audio, but in the hundreds of hours of subtitles that I've generated I don't think I've seen a single error.
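For anyone who wants to try it, the basic WhisperX flow is short. A sketch based on my reading of the project's README (paths, model size, and device are placeholders, and the subtitle output is simplified; real SRT wants HH:MM:SS,mmm timestamps):

    # pip install whisperx
    import whisperx

    device = "cuda"  # or "cpu"
    audio = whisperx.load_audio("episode.mkv")

    model = whisperx.load_model("medium", device)
    result = model.transcribe(audio, batch_size=16)

    # alignment pass gives much tighter timestamps for subtitles
    align_model, metadata = whisperx.load_align_model(
        language_code=result["language"], device=device)
    result = whisperx.align(result["segments"], align_model, metadata, audio, device)

    # crude subtitle dump
    for i, seg in enumerate(result["segments"], 1):
        print(i)
        print(f"{seg['start']:.2f} --> {seg['end']:.2f}")
        print(seg["text"].strip(), end="\n\n")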


IIUC, it's trained on LibriVox audio mainly, along with a few other sources. I'm not sure how it is handling spelling as the spelling will depend on the source content being read, unless the source text has been processed/edited to align with the dialect.


> There are alternatives that share Whisper internal states with an LLM to improve ASR, as well as approaches that sample N-best hypotheses from Whisper and fine-tune an LLM to distill the hypotheses into a single output. Haven't looked too much into these yet given how expensive each component is to run independently.

Any recommended ones you've looked at?


This is a situation where running within the context of the EMR and having access to the existing chart data is likely to make a lot of difference. My bigger concern is that there are a lot of different things being lumped together under "AI", and this is going to hit a bunch of different areas of machine learning.


This would be a major concern for me.

The one time I used AI meeting notes, some important details were wrong. And beyond that, the notes were just terrible. A literal transcription can be useful. A summary of the substance of the meeting can be useful. This was neither. A human would know that a tangent talking about the weather is not important, but AI notes are just as likely to fill the document with "Chris mentioned it had rained yesterday but he was hoping to cook hamburgers when the sun comes out. Alice and Bob expressed opinions about side dishes, with the consensus being that fries are more appropriate than potato salad." as it was to miss a nuanced point that a human would have recorded because they understood the purpose of the meeting. And then it'd give me an action item to buy corn.


> Some of it is surely teething problems...

Just gonna adjust the temperature of your baby here, ok, now he should grow up just fine.


Would love to know which AI company got the data - or do the developers of Scribe run their own ChatGPT?


I think OpenAI claims they don't use the data that goes through their APIs for training (in contrast to ChatGPT).


I talked to a therapist who hated doing notes. So I proposed a solution:

Use Microsoft's "seeing AI" to describe the children playing during "play therapy". Then have a camera take a picture every few minutes, and then have chatgpt come up with the story based on the pictures and transcript of the audio, transcribed with openai whisper. Business in a nutshell!


This is not really how therapists work; it's just what their work looks like from the outside.

This is a really big problem with the AI industry. People don’t know the domain well enough and assume it is a fit.

Oh look, just throw this at it and done. Simples!

Then you find that therapists, of the non-quack variety at least, spend years working on how to remove bias from their assessments and on learning subtle cues and indicators from their patients. And they have to write in a certain prose and qualify their results with peers.


Ok I was actually joking. AI (up to this point - although the new OpenAI could probably do it), is not even close to being able to handle it. This was tongue in cheek and not meant to be taken seriously.

I am married to said therapist so I always joke with her.


> This is a really big problem with the AI industry. People don’t know the domain well enough and assume it is a fit.

Yep. This is painfully obvious in the medical world right now. Too many tech-only people assuming they can understand a domain in a few weeks and then run a business in it.


It would probably be easier to teach those domain experts the basics of AI and how to leverage it than to teach developers the details of each domain. After all, it's not like asking them to train transformers from scratch, just to use what is already built. It reminds me of the joke about the movie Armageddon: teaching astronauts to drill rather than drillers to go to space.


Ah, the freedom of having no consideration for privacy or ethics. No one should record children and feed the imagery to an ominous third party.


A worrying trend indeed.


Considering how much businesses use Microsoft, that's one of the least ominous third parties to choose.


Try just doing that yourself with random pictures of you doing things in your house. See if GPT can generate a coherent note suitable for the EMR.


Just need to get the parents' permission for filming/photographing the children and feeding the footage to an AI.


Just ask for it, glitter some PR over it, done. People consent into all kind of things.


Not a problem if it's done locally. At least then the consent is only to capture/store the images within the practice, not to give them to a third party.


Microsoft? no thank you


I just hope someone is double-checking whatever notes are taken. My experience with LLMs tells me they're fantastically bad at remembering, and even worse at producing factual material that can be used without context.


No one will be double-checking. On at least three occasions now I've had to correct monumental fuck-ups in my and my ex-wife's medical records. One nearly led to a transfusion of an incompatible blood type.

I suspect this will lead to a decline in accountability with there being another party to blame rather than the medical professional.

The LLM did it, not me.


Alluding to FAFO, a computer cannot find out, so a computer shouldn’t fuck around.

I’m hoping for a lot of legal precedent showing that an AI cannot be blamed, especially in a medical context.


I would hope that would be the case but a conservative safety culture is unfortunately built on piles of dead people.


Companies and people should have liability, but mere tools like AIs should not.

How would that even work?


In the simplest way - if you allow AI to make decisions, you're responsible. Like this https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-th...

So far we're doing pretty good with that idea globally (I've not seen any case going the other way in court)


I mean how would it work, if you tried to hold the AI liable?


Liability for the company selling the AI, I'd presume.


And that's perfectly acceptable, if everyone involved agreed beforehand.


Ah, I misunderstood. That is an interesting idea to consider.


Liability should imo be placed on those that selected the tools and arranged their implementation without providing due care and procedures to ensure the validity of output data.


> No one will be double checking.

Yeah no, I can tell you from experience with a clinic that things are checked. Let's talk about the real issues and how to enforce double-checking for people who would ignore it. Hyperbole like that is not helpful.

But, I wonder if those systems should randomly insert obvious markers to help here. "This has not been read." "Doctor was asleep at the wheel." "Derp derp derp." - like the fake gun images that the airport security has to mark.


You are really naive if you think checking anything is universally done to any standard anywhere on the planet. And writing it off as hyperbole is dismissive.

I’ve literally been in a meeting with a doctor in the last week who wrote something down wrong on the damn computer in front of me. And I’m talking a specialist consultant. I’m sure if I didn’t mention it that the incorrect data would be checked over and over again to make sure it was correctly incorrect…


You're oversimplifying this. So I'm not a doctor, but close enough to this system. First, you've definitely got professions/situations where checking is done. See flight and surgery checklists. Of course there will be mistakes and we'll never reach 100% compliance, but that's a given.

But then, there are secondary effects like how much time will your doctor have and how ready they are. In practice the notes take time. If you're unlucky, you're going to be late, on a busy day, and your notes will be done many hours later from recollection. In that case, even an imperfect system can increase the overall quality if it enables faster turnaround. I know of cases where the automatic notes generation did catch issues which the doctor simply forgot about.

The individual stories are brutal, but overall they say very little - was that the only mistake that doctor made in their life, or are they making 10 a day? In general we have to accept mistakes happen and build a system that catches them or minimises the impact.


> I’ve literally been in a meeting with a doctor in the last week who wrote something down wrong on the damn computer in front of me. And I’m talking a specialist consultant. I’m sure if I didn’t mention it that the incorrect data would be checked over and over again to make sure it was correctly incorrect….

Far be it from me to suggest that doctors aren't both fallible, and subject to arrogance that makes it harder for them to catch their mistakes—what highly skilled professionals are immune?—but "doctors make mistakes" is, while doubtless completely true, a very different claim from "doctors don't check things."


Agreed, but now the doctor has to do her job and the AI's job.

Cory Doctorow wrote about it a while back. I think it was this article "Humans are not perfectly vigilant" [0]. It explains how technology is supposed to help humans be better at their jobs, but instead we're heading in a direction where AIs are doing the work but humans have to stand beside them to double check them.

[0] https://pluralistic.net/2024/04/01/human-in-the-loop/


For what it's worth, the tech will only improve over time and looking at the birth rates, humans will only become more and more overworked and less reliable as the years go by. There should be a point where it just makes sense to switch even if it still makes mistakes.


Depends on the system I guess, but I'm familiar with a local one which is very much tuned for just summarising/rewriting. It seems very good at what it's doing and since it's working from the transcripts, it's actually picking up some things the doctors were not concentrating on because it wasn't the main issue. I've never seen doctors so keen to adopt a new tech before.


Which one? And what sorts of things is it picking up on?


https://www.lyrebirdhealth.com/ and I know of at least one case where the patient mentioned something relevant, but not connected to the main issue they were talking about. The doctor missed it because it wasn't the main topic, but the transcript and the notes included it.


yeah, Lyrebird Health... been hearing some crazy-ass stuff about them - wouldn't be surprised if they were in YC soon


To be very frank, a lot of these logging and note-taking requirements have already led to many mistakes, and we already have many case studies of how the system does not care to remedy such things. I can easily see AI being adopted here and its mistakes being glossed over the same way we already glossed over the same mistakes before.


I just think medical notes are something you shouldn't legally be able to just gloss over. A simple typo in a dosage can kill someone.


https://www.getfreed.ai/ has a free trial for an ai scribe if you're interested in trying one out. Saves an avg of 30min to 2hrs each day for some clinicians.


We don't need any more recommendations of AI scribes. There are already hundreds of them. That business opportunity lasted 5 minutes before becoming fully saturated.


> Lall said it takes minutes for AI Scribe to collate the information, allowing her to move on to other patients while the SOAP note is being created in the background.

it takes minutes ...


... which, from TFA, adds up to 19 hours (ie >2 extra working days) per week, time that could be (and is) spent seeing other patients.


no argument with the benefit for doctors. I know docs who say they are squeezed because their day is engineered in a way to force them to do paperwork later

Just wondering what computer process takes minutes. Is it building the Linux kernel every time?


To be blunt, Enterprise Cloud Architecture is a really good way to make a bunch of fast servers process work slower than a several-generations-old laptop on WiFi. You got trash integrations with all kinds of compromises, queues galore, awful “low-code” solutions somewhere in the mix, cloud VMs with comically tiny network pipes, all kinds of nonsense.


Good to know, I’m going to ask my doctor if they use this and who will be liable for errors.


just make the note input look like anki


Med students don't have the time to make Anki cards


How about improving the processes and reducing the amount of forms and admin work instead?


This is another version of "use public transport instead of throwing billions on futile self driving cars" -- it's the correct, rational thing to do, but it's not the American way!


It's a Canadian-developed tool used by Canadian doctors. It's popular on the internet to offer up overly-generalized platitudes like yours but do us a favor and at least read the article first.


I don't see how it being Canadian is relevant. First off Canada is part of North America and North America is big geographically. We have similar car culture compared to USA. There are very large subsidies given to battery manufacturing sector in Canada amounting to $43.6 billion over 10 years as part of the latest budget.


Humorously, public transport, for all intents and purposes, only services cities - the very places where you don't need transit, since you already live within walking distance of everything. There is absolutely no rationality there.

Connecting every last rural area by train may have been a thing in the 1800s, but it's not really all that efficient to fire up a humongous train to carry one person. We aren't going back. "Correct and rational" doesn't exactly stand here either.


Car ownership continues to grow around the world. Maybe not everywhere, but definitely in countries where public transportation is popular, such as the Netherlands. The numbers can be found easily.


This could be the thing that saves the US healthcare system; currently we pay 2x what a normal developed country would pay for the level of service provided, because there is a thick layer of paperwork, with insurance companies going back-and-forth with medical providers to debate and negotiate billing.

Imagine if AIs on each side could run that negotiation process in seconds, at the cost of merely a few hundred thousand OpenAI API calls. Then humans can get back to doing the actual work!


LinkedIn proved that adding a tech layer that should get rid of an industry of paper shufflers is only profitable if those paper shufflers fund the platform.

Color me skeptical that AI really reduces the middle-folk fees.


Better, more efficient technology rarely reduces the sticker price for customers. If the underlying product or service is cheaper to produce or provide, the price will stay the same (or increase) and the difference will be pocketed by shareholders. This is pretty much always what happens.

Same on the employee side. As an employee, if you adopt some technology that makes you 2X more productive, you're not going to get a 2X compensation increase. That delta will be captured and pocketed by the business.


This works only if everybody negotiates in good faith, which isn't the case in the US system. Insurers deny legitimate claims; hospitals inflate prices as much as they can. You don't need AI to fix this. You just need more transparency and stable pricing: hospitals need to stop making up costs and adopt stable pricing, and insurers need to publish what they cover and what they don't.

Obviously this won't happen because there is way too much money to be made with the current state. If anything they will use AI to squeeze even more money from the patients. And employer based health insurance will keep patients captive.


> currently we pay 2x what a normal developed country would pay for the level of service provided, because there is a thick layer of paperwork, with insurance companies going back-and-forth with medical providers to debate and negotiate billing.

Oh, sweet summer child. We don't pay 2X because there's too much paperwork. There's too much paperwork because 2X spending can be used to justify a lot of paperwork. The delta in spending is due to the delta in pricing power that the US payer/provider complex has vs what is seen in other countries.


I don't think it is for the same level of service provided. I have read that there are three substantial differences between medical care in the USA and in other countries:

1. More single-patient rooms

2. More surgical procedures performed in the final year of life

3. More obesity

and that the difference in the percentage of GDP spent on healthcare can at least partially be explained by these.


No, the big difference is higher prices for the same services. All the things you listed are just distractions that the industry and its water-carriers use.


> The Ford government has been so impressed with the technology that it announced a pilot program to allow 150 family physicians to use AI Scribe as part of their practices. The health minister said the early signs were promising but stressed the government would proceed carefully.

What I find fascinating is that doctors are independent contractors, but the government appears to be buying their tools to make them more efficient.


When the state subsidizes healthcare, it is in its interest to lower the costs/improve productivity.


Spending 25% of your working hours on “paperwork” sounds about correct for just about everybody who’s had a desk job the last 100 years.

I’m not saying it’s a good thing and I’m not saying I’m happy about it. Just that it is so, not just for the medical profession.

So what’s the fuss?


> I’m not saying it’s a good thing and I’m not saying I’m happy about it.

> Just that it is so,

> So what’s the fuss?

This sort of attitude would mean the world never progresses. The fuss is that a highly trained specialist, constantly in short supply, is spending a quarter of their time doing work not entirely connected with their speciality. In an efficient system there would be a division of labour to another, lower-paid specialist, a scribe, but medical systems can't afford that for every single doctor, especially taxpayer-supported ones.

Automating that secondary role for every doctor benefits all of us if it means we give doctors more time.

Not to mention the documentation itself is almost always at least 25% excessive and duplicative, and computers+good UIs help eliminate that.


The scribe adds communication and interpretation overhead, they also can make mistakes, etc., so it's not at all clear it would improve overall outcomes per dollar spent.


I think these tools are the solution to the shortage of doctors, and will greatly improve performance by taking more parameters into account. Not sure though if a simple LLM can do that. (Shortage of doctors is a thing at least in DE and CH)

Doctors study like 10 years to follow a decision tree based on patient background and symptoms.

Especially in countries where education is harder to come by, this can be useful, I imagine.

Then we transition from doctor to med-tech, like in the sci-fi movies.


The solution to the shortage of doctors is to stop letting doctors decide the supply of doctors.

It's a special-interest problem. Doctors organize, bribe/lobby politicians, limit licenses, make extra money/get more power, repeat.

The excuse 'there aren't enough surgeries' is a chicken-and-egg problem. More doctors lower the cost, making surgery more reasonable and not necessarily a last-ditch option. Not to mention, we don't cap the number of civil engineers that graduate because 'there aren't enough bridges'. You just have more people observing around the table.


> The solution to the shortage of doctors is to stop letting doctors decide the supply of doctors.

I would certainly rather let decisions about certification of doctors be in the hands of doctors than anyone else. There's potential for a conflict of interest, but there's also expertise that isn't replicated anywhere else.


You are conflating two different ideas. Supply management and certification are not one and the same. You can have supply management without certification, and you can have certification without supply management. The parent only expressed concern over the supply management practice, not the certification practice.


Yep. That’s what we do. You hit the nail right on the head.

I saunter down to my local US senator's house and bribe him every month or so. Then our group created a fund where we bribe the president of the United States. We organized with all the other groups in the area and bribed the pope. Next we are going to find you in your house and bribe you.


You are using humor to distract from immoral behavior.

1) US Chamber of Commerce $1,882,365,680

2) National Assn of Realtors $849,607,903

3) American Hospital Assn $525,121,249

4) Pharmaceutical Research & Manufacturers of America $507,171,550

5) American Medical Assn $504,434,500

https://www.opensecrets.org/federal-lobbying/top-spenders?cy...


Why have residency spots shrunk per capita? Why do unmatched grads have to work at McDonald's while NPs and PAs can practice with more latitude than some residents?

(serious questions)


The ACR meeting (radiology) is going on in Washington, DC. Current staffing throughout the country is poor - overutilization of imaging and not enough radiologists. So we are lobbying for an increase in Medicare funding for residency positions (they are paid for by Medicare). We're also trying to get more J-1 visas, but that's kind of dubious, as we'd be taking physicians from other countries that likely also need physicians.

I have no idea why unmatched graduates are working at McDonald's, as you say. There are always primary care positions open for the scramble, last I checked. If they can't get a spot, there's likely a real issue with their education or with them personally. It's a normal distribution of a population, MD or not.

I can’t speak to what NP and PA’s do - they have their own PACs and organizations. I know they want to increase their scope of practice and keep their liability low.


You can't legally practice as a primary care physician without one year of internship, after which you get an unrestricted license. If you're unmatched, no internship. Is that not correct?


Yea that’s right. You can scramble separately into 1 year internships. Those are really easy to get - but the unfilled spots are not in happy places. They’re usually more rural and surgical.

But like you said if you have a year under your belt you can work in an urgent care or the like.


I didn't realize they had that option. In general, I still find absurdity in what an NP can do vs an unmatched doctor. That's entirely incoherent.

Also, it's laudable what rads is doing, but didn't the AMA lobby for fewer residency spots?


>So we are lobbying for an increase in Medicare funding for residency positions (they are paid by Medicare).

Hey taxpayers, can you pay for my grad degree too? Wait, it's worse: I actually profit during this period.

Don't worry that mine is the highest-paid profession upon graduation.


You think you’ll be able to recruit more people into medicine by reducing compensation, keeping education costs the same, and also keeping medicolegal liability on the physician?

Residency as “profit” is a stretch. You tread water for 5 years. If you would like those apprenticeship years to go unpaid then I’m not sure how it increases the number of people who want to go into medicine.


>You think you’ll be able to recruit more people into medicine by reducing compensation

You don't need to recruit more, there is an abundant supply of people who want it.

>Residency as “profit” is a stretch.

Physicians are funny: a fantastic wage for the lower-middle class is considered 'treading water'. And it's for education, something every other degree pays for.

>I’m not sure how it increases the number of people who want to go into medicine.

This is not an issue; there are plenty of people who want degrees that don't involve math. The issue is the number of licenses, not the number of people who are capable of doing the job and want to.


Residents should absolutely be paid. Years of oftentimes reaching 80 hours/week, treating patients, performing procedures, writing notes - all of it is valuable labor.

It is like saying we won't pay you for the first several years of your first dev job because it is primarily a ramp up / educational period.


One of these must be true:

Either the labor isn't valuable, and thus cannot be billed and needs to be funded via taxes. That would also mean that whatever people are doing in residency, they don't need residency for. Licensure bullshit.

Or the labor is valuable, and doesn't need to be funded via taxes.


My limited understanding is that residency programs are a bottleneck and they’re mostly funded by the federal government. I don’t think doctors have control over this?


Don't forget ten years of schooling, only to be bad at the formal methods needed to step outside that decision tree:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9237793/

https://blogs.cornell.edu/info2040/2014/11/12/doctors-dont-k...

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3016704/

https://www.overcomingbias.com/p/doctor-there-arhtml

And many more.

Go to r/residency or other doctor forums if you want to see the most repugnant combination of Dunning-Kruger and condescension.


If medicine is just following a decision tree why would we need LLMs to do it? Computers have been able to follow decision trees for 70 years or something.


> Ontario family doctor says new AI notetaking saved her job

Lemme rewrite this headline for a more interesting and relevant article that you might actually want to read:

"Doctor hails notepad and pen as essential diagnostic tools."


The entire point of the article is that she no longer has to spend upwards of two hours a day using the notepad and pen.



