"226 of 521 FDA-approved medical devices, or approximately 43%, lacked published clinical validation data."
The absence of *published* clinical validation studies implies neither that the AI developer performed no clinical validation nor that the FDA never saw the data. So it is unclear whether the problem is a lack of clinical validation or merely a lack of public reporting.
For some reason, the title exaggerates further still, claiming that half of FDA-approved AI devices were not "trained" on real patient data.
> AI developer performed no clinical validation nor that the FDA hasn't seen it
If the developer went through the trouble and expense of performing clinical validation, you can be sure they would publish the results __unless__ the results reflected negatively on them.
Given that the design and endpoints of clinical validation studies are priceless information for developers of similar devices (i.e., competitors) applying for FDA clearance via the 510(k) pathway, it would not surprise me at all if this information were purposely kept secret no matter how flattering it is, especially for relatively new technology like AI.
510(k) summaries are always a negotiation between the FDA (which wants as much data as possible in them) and manufacturers (who want as little data as possible in them, since that data is priceless competitive intelligence).