> Is it considered unethical to do medical experiments on yourself without any oversight like you would find in a typical human subject trial?
Note there are different contexts at play here. When someone says "ethics" in a scientific context, it may encompass scientific integrity, avoidance of questionable research practices, reproducibility, etc., as well as medical and moral ethics. The speaker may not even be fully aware of these distinctions, since the subject is often taught with a rule-based perspective.
Experimentation on oneself is often _scientifically_ unethical (i.e., when done with the intent to make a scientific discovery) because:
1. The result is often too contaminated by experimental integrity issues to have scientific value. As another comment in this thread notes: "sample size of 1, confirmation bias, amped-up placebo effect, lack of oversight, conflict of interest when the patient is the investigator". Lack of oversight means no one is checking the validity of your work; it's not a permission thing. Every issue blamed for the so-called reproducibility crisis is amplified.
2. Due to publication pressure, abandoning the cultural prohibition against self-experimentation amounts to pressuring everyone to self-experiment: to grow their CV by a few quick N = 1 studies, or to do something risky when their career flags. Obviously, oversight ensuring that self-experimentation proceeds only in cases such as terminal disease mitigates this concern.
In practice, journal editors currently provide oversight addressing point #2, which is why work like what we're discussing here still gets published. See also Karen Wetterhahn's valuable documentation of her (accidental) dimethylmercury poisoning (https://en.wikipedia.org/wiki/Karen_Wetterhahn).
Experimentation on oneself in an attempt to cure one's own illness by any means at one's disposal, provided it does not harm others, is not _morally_ unethical IMO. It just rarely has a scientific role.
According to the original account, the pencil/pen thing wasn't about an audit trail, and both the IRB and hospital admin were equally silly.
> IRREGULARITY #3: Signatures are traditionally in pen. But we said our patients would sign in pencil. Why?
> Well, because psychiatric patients aren’t allowed to have pens in case they stab themselves with them. I don’t get why stabbing yourself with a pencil is any less of a problem, but the rules are the rules. We asked the hospital administration for a one-time exemption, to let our patients have pens just long enough to sign the consent form. Hospital administration said absolutely not, and they didn’t care if this sabotaged our entire study, it was pencil or nothing.
The usual approach is to provide grants for something else that the municipality wants/needs, but make them conditional on the municipality acting in the desired manner.
In my opinion, archive the data that was actually gathered and the code's intermediate & final outputs. Write the code clearly enough that what it did can be understood by reading it alone, since with pervasive software churn it won't be runnable as-is forever. As a bonus, this approach works even when some steps are manual processes.
Pretty much yes. Critical analysis is a necessary skill that needs practice. It's also necessary to be aware of the intricacies of work in one's own topic area, defined narrowly, to clearly communicate how one's own methods are similar/different to others' methods.
Even in group 1, when I go back to a project that I haven't worked on in years, it would be helpful to be able to query the build system to list the dependencies of a particular artifact, including data dependencies. I.e., reverse dependency lookup. Also list which files could change as a consequence of changing another artifact. And return results based on what the build actually did, not just the rules as specified. I think make can't do this because it has no ability to hash & cache results. Newer build systems like Bazel, Please, and Pants should be able to do this but I haven't used them much yet.
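The reverse-dependency query described above amounts to a graph traversal; here is a minimal Python sketch (the build graph and artifact names are hypothetical, not any particular build system's API):

```python
from collections import defaultdict

# Hypothetical build graph: artifact -> direct dependencies (including data files).
deps = {
    "app": ["libmath", "config.yaml"],
    "libmath": ["constants.csv"],
    "report": ["libmath"],
}

def rdeps(target: str) -> set[str]:
    """Reverse dependency lookup: everything that could change if `target` changes."""
    # Invert the edges once, then walk the reversed graph transitively.
    rev = defaultdict(set)
    for artifact, ds in deps.items():
        for d in ds:
            rev[d].add(artifact)
    result, stack = set(), [target]
    while stack:
        for parent in rev[stack.pop()]:
            if parent not in result:
                result.add(parent)
                stack.append(parent)
    return result

print(sorted(rdeps("constants.csv")))  # everything downstream of the data file
```

A build system that hashes and caches step results could answer the same query from what it actually executed, rather than from declared rules alone.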
Patents sort of work that way, except that even people who didn't look at your work owe you their future profits.
I think I'm hoping for a result that anyone can train any model on any content, regardless of that content's copyright status. Mostly because I want AI assistant tools to be as effective as possible, to be able to access the same information I can access. But however it turns out there will probably be some unintended consequences.
The problem with gold OA is the proliferation of low-quality spam, and the recommended remedy is to restore exclusivity (raise the barrier to entry) without reinstating excessive (monopolistic, exploitative) journal subscription fees. The article's recommendation to build up society journals has a decent chance of accomplishing this. It won't prevent spam from getting published, but the only essential aspect is to keep the spam easily identifiable (e.g., published by MDPI) so it can be safely ignored. The www seems to facilitate the spam business model, so I don't think the spam will go away entirely.
Assuming you want to get rid of the fees entirely: people will have to look at the spam and divide it into more and less spammy. Others will have to do the same with the less spammy, and so on. Eventually there is room for more accomplished moderators, until those with the greatest prestige filter down to a tiny subset. You assign weights to the experts so that with each level of review the good half rises in rank much more than in the previous round.
At first you do it by committee, but eventually the rating for each scientist can be derived from the importance of their publication(s).
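The weighting scheme sketched above could look something like this toy model (the reputation numbers and the scoring rule are my own assumptions, not a worked-out mechanism):

```python
# Toy reputation-weighted review: votes from high-reputation reviewers
# move a paper's rank more than votes from newcomers.
reviewers = {"alice": 1.0, "bob": 4.0}  # reputation weights (assumed)

def score(votes: dict[str, bool]) -> float:
    """Weighted vote: sum of reputations for 'not spam', minus those for 'spam'."""
    return sum(reviewers[r] * (1 if ok else -1) for r, ok in votes.items())

# A paper flagged by the low-reputation reviewer but endorsed by the
# high-reputation one still comes out positive.
print(score({"alice": False, "bob": True}))  # 3.0
```

In a full system the reputation weights themselves would be updated each round, e.g. from how closely a reviewer's verdicts track the eventual consensus.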
I don't think it needs to be free or even cheap. Subscriptions to the archives (and research data) could be more expensive, with the money divided in proportion to reputation points.
Free access can be earned by the sweat of your brow: publish worthy papers and write reviews that closely match other reviewers'.
Universities should be able to publish enough quality material to get paid. Have the grownups pay for juniors' education again. It was a good idea back then; it is a good idea now.
> So IMO, the first reform to conduct is to turn clinical researchers into real scientists. That won't be easy.
Mostly, there are not enough hours in a life to be both a good doctor and a good scientist. The few MD-PhDs who give equal weight to both areas are both brilliant and extremely driven. Multidisciplinary teams seem to have a better chance at success.
I guess if the scientific education were done early (undergraduate at the latest), dual training could work. Once medical training and practice start, there isn't a lot of time left over. And early / non-practicing science education, for whatever reason, doesn't seem to be very effective.
0. Install the Tree Style Tab extension (or whatever vertical tabs extension you prefer).
1. Enable userChrome.css: set toolkit.legacyUserProfileCustomizations.stylesheets=true in about:config.
2. Set browser.tabs.inTitlebar=0 in about:config so the title bar buttons (and, on some OSes, the title bar itself) remain visible.
3. Create chrome/userChrome.css in your Firefox profile folder and write the following to it:
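The original snippet is not included here; a common userChrome.css for this setup is the one below, which hides the native horizontal tab strip since a vertical-tabs extension replaces it (the exact selector is an assumption; it works in recent Firefox versions):

```css
/* Hide the native horizontal tab strip; Tree Style Tab (or another
   vertical-tabs extension) provides the tab list instead. */
#TabsToolbar {
  visibility: collapse;
}
```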