> drugs developed with a specific purpose, but then afterwards found to have a completely different effect to what was originally intended
I don't find that surprising. You have to do the early phases of drug development in model systems like cell cultures for practical reasons: it makes the evidence more reproducible and raises fewer ethical concerns than experimenting on humans immediately. It's like how software components usually get implemented and tested in simplified environments (e.g. unit tests use test doubles), and just as with drugs, we sometimes find that components interact in unexpected ways when combined into a whole.
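For the software side of that analogy, here's a minimal sketch of testing against a double, in Python with the standard unittest.mock (the function and client names are hypothetical):

```python
import unittest
from unittest.mock import Mock

def fetch_and_summarize(client):
    """Sum the 'value' field of records from some upstream service."""
    records = client.get_records()
    return sum(r["value"] for r in records)

class TestSummarize(unittest.TestCase):
    def test_summarize_with_double(self):
        # The double stands in for the real service, the way a cell
        # culture stands in for a whole organism: cheap and reproducible,
        # but blind to whole-system interactions.
        fake_client = Mock()
        fake_client.get_records.return_value = [{"value": 2}, {"value": 3}]
        self.assertEqual(fetch_and_summarize(fake_client), 5)

if __name__ == "__main__":
    unittest.main()
```

The double makes the test cheap and deterministic, but, as with cell cultures, it tells you nothing about how the component behaves once wired up to the real dependency.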
What seems weird about this is that the process for deciding that an approved drug is useful for some other condition (not the one it was intended for) is really informal; it seems to rely on common knowledge circulating within the guild.
For the initial approval, we have this very formal process: the FDA wants this kind of data, from trials of this size, demonstrating this level of statistical significance, and so on. All very scientific-looking.
But once it's on the market, individual doctors try it out for other conditions. If it seems to work, they tell their colleagues, tweak the dosages, share anecdotes. If a use becomes widespread, some industry club records it for the purpose of arguing with insurance companies. I am told this off-label use now constitutes the majority of drug prescriptions. All based on (as far as I can tell) no statistically rigorous testing at all, just stories.
(To be clear, I'm not at all surprised that drugs have "off-target" effects and may be very useful for things their inventors never imagined. I'm just a bit shocked at how pre-modern our system for collecting this knowledge appears to be. But I'm not an expert, and would love to know more.)
> I'm just a bit shocked how pre-modern our system for collecting this knowledge appears to be.
I'm not an expert, but I imagine any more advanced system would be extremely problematic from a data protection point of view (think HIPAA etc.). In a study, patients sign multiple pages of legalese to allow the researchers to collect all the required data. You don't have that with regular patients in a GP's office.
Yes, that's a hurdle. But even without it, it seems hard to design a good system: if most doctors treat the very sick cases (among those diagnosed with X) with the drugs Y+Z currently rumoured to work, how do you disentangle the causes of their poor outcomes?
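This is the classic "confounding by indication" problem. A toy simulation in Python, with purely made-up numbers, shows how it bites: the drug genuinely helps, but because it goes to the sickest patients, a naive comparison of outcomes makes it look harmful:

```python
import random

random.seed(0)

# Hypothetical toy model: severity drives both treatment assignment and
# outcome, so naively comparing treated vs. untreated recovery rates
# misattributes the effect of severity to the drug.
patients = []
for _ in range(10_000):
    severity = random.random()        # 0 = mild, 1 = very sick
    treated = severity > 0.7          # doctors reserve Y+Z for the sickest
    # Assume the drug actually helps a little (+0.1 to recovery chance).
    p_recover = 0.8 - 0.6 * severity + (0.1 if treated else 0.0)
    patients.append((treated, random.random() < p_recover))

def recovery_rate(group):
    return sum(ok for _, ok in group) / len(group)

treated = [p for p in patients if p[0]]
untreated = [p for p in patients if not p[0]]
print(f"treated:   {recovery_rate(treated):.2f}")    # looks worse...
print(f"untreated: {recovery_rate(untreated):.2f}")  # ...despite a real benefit
```

The treated group comes out with noticeably worse outcomes even though the drug has a real positive effect in the model. Randomized trials exist precisely to break that link between severity and treatment assignment; anecdote-sharing can't.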
More generally, how do we know it isn't mostly garbage? That seems to be the consensus about most pre-20th-century medicine, and those guys weren't idiots; they were just trying things out and sharing ideas...