Some of my colleagues speculated that this was not the intended study. Originally, they might have tried to delete a gene, but the expected phenotype didn't show up, which would have put an end to that line of research. But instead of completely abandoning the project they figured they could sequence the mice that were still around to see what the off-target effects were, and they thought they found something surprising and important.
Of all the stupidity and hype around blockchain tech, the idea of committing your research hypothesis to an immutable record beforehand, so as to avoid p-hacking, seems to be one of the best I've heard, provided you could avoid the problem of people simply committing a tournament of alternative answers...
Even more interesting is the proposal I've seen where you submit the intro, theory, and methods sections to a journal for review before you do the experiments. Once what you're planning to do is deemed good and interesting by the reviewers, you get the go-ahead, and the journal will publish your results no matter what they are. Not only does this prevent p-hacking, it also removes the bias against negative results.
FWIW, anti-p-hacking measures are already implemented for clinical trials; they have to register expected outcomes at the beginning.
Journals should not be given the power to implicitly decide what research gets done, and conversely, they will never accept being forced to publish all results, given the vast amount of research that goes nowhere.
The whole value for a journal lies in being able to select once the results are known, because that's how you build prestige.
All you are doing, in effect, is laying out a good case for why journals should, and probably will, simply die: this level of transparency would be a great thing to have, but it is incompatible with the publishers' model.
Many of the journals currently publishing this kind of Registered Report only publish special issues, which might help with your concern about journals being given control over what work is done in a field, though it also limits how much of the literature can benefit from the new approach.
Another way of thinking about a journal's incentives: by moving the primary peer-review stage to before a study has actually been conducted, you greatly expand the number of studies a journal interacts with, which may give entities like journals a larger and more valuable role in a world where actual publishing is a trivial matter.
The "where others can collect it" part solves that: Suppose that university A announces their study method and university B makes a mirror. Then if A were to forge the announcement after the fact, B could tell. (You would need cryptographic signing, though, to see if the record mirrored by B has not been tampered with.)
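To make the mirroring idea concrete, here is a minimal sketch (hypothetical, not any real registry's API) using a plain SHA-256 digest as a stand-in for a full cryptographic signature. A publishes the digest of its study plan, B mirrors it, and anyone can later recompute the digest to detect a forged announcement; a real system would additionally sign the digest with something like Ed25519 so that B's mirror itself can be authenticated.

```python
import hashlib

def register(study_plan: str) -> str:
    # University A publishes this digest alongside the plan;
    # university B mirrors both.
    return hashlib.sha256(study_plan.encode("utf-8")).hexdigest()

def verify(study_plan: str, mirrored_digest: str) -> bool:
    # Anyone recomputes the digest from A's current copy and compares
    # it against B's mirror; a mismatch reveals after-the-fact edits.
    return register(study_plan) == mirrored_digest
```

Note that this only detects tampering; it does not prevent A from quietly registering many alternative plans, which is the "tournament" problem mentioned above.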
If you're making a notary system, the part you can get from a blockchain is a very minor percent of your code, and comes with a lot of complicated baggage. It's not convenient at all.
The problem with using the blockchain for this is that there are legitimate reasons to withdraw such a registration, ranging from mistyping (s/increase/decrease anyone?) to, depending on your kind of study, the inclusion of personally identifiable information. The solution currently being pursued is to expand pre-registration, a practice from clinical research, into other fields.
In pre-registration a trusted intermediary is used to store a read-only copy of your study design and materials, ideally including a pre-analysis plan that specifies the analysis you will run for hypothesis testing (so that we can cut P-hacking out of the picture at the same time). That intermediary can allow researchers to withdraw registrations while preserving a stub that shows everyone a registration used to be there and why it was removed.
That is how pre-registrations work on the Open Science Framework (https://osf.io), a cross-disciplinary FOSS web tool run by the Center for Open Science.
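A rough sketch of that withdraw-but-leave-a-stub mechanism (the class and method names here are illustrative, not the OSF's actual API): registrations are immutable once filed, and withdrawal erases the content while keeping a publicly visible record that a registration existed and why it was removed.

```python
from dataclasses import dataclass

@dataclass
class Registration:
    study_design: str
    analysis_plan: str
    withdrawn: bool = False
    withdrawal_reason: str = ""

class Registry:
    """Hypothetical trusted intermediary holding read-only registrations."""

    def __init__(self):
        self._entries = {}

    def register(self, reg_id: str, design: str, plan: str) -> None:
        if reg_id in self._entries:
            raise ValueError("registrations are immutable once filed")
        self._entries[reg_id] = Registration(design, plan)

    def withdraw(self, reg_id: str, reason: str) -> None:
        # The content is removed, but a stub stays visible to everyone,
        # recording that a registration used to be here and why it left.
        reg = self._entries[reg_id]
        reg.study_design = reg.analysis_plan = "[withdrawn]"
        reg.withdrawn = True
        reg.withdrawal_reason = reason

    def view(self, reg_id: str) -> Registration:
        return self._entries[reg_id]
```

The key design point is that `withdraw` never deletes the entry itself, so reviewers can still see the gap, which is exactly what a raw blockchain cannot offer when the registration contains something like personally identifiable information.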
The real problem is not p-hacking (which is bad), but the lack of replication. All problems in science can be solved with replication, but almost nobody does it, because it is near impossible to get a replication study published in a worthwhile journal.
No other problem in science is more important to solve.
Just like peer review is a necessary step for a study to be taken seriously, replication studies should be regarded as the direct next step. There needs to be incentive to reproduce the experiment.
Replication is far more important than peer review. I would rather have 1/10th the number of "advances" published if I could rely on the results being true and, even more importantly, robust.
When I ran a lab, we only considered results that had been replicated in at least two unconnected labs. Everything else was just a waste of time (on average).
Eh, I would classify it as simple exploratory/observational science. Not all science is done to prove a hypothesis. Sometimes you just go take a look at a forest / cave / stretch of the sky / animal genome, and see what you find.
Once you have enough observations, then you start forming and testing hypotheses.