As others have mentioned, the main problem is that open systems are more vulnerable to low-cost, coordinated external attacks.
This is less of an issue with systems where there is little monetary value attached (I don't know anyone whose mortgage is paid for by their Stack Overflow reputation). Now imagine that the future prospects of a national lab with a multi-million-dollar yearly budget are tied to a system that can be (relatively easily) gamed by a Chinese or Russian bot farm for a few thousand dollars.
There are already players trying hard to game the current system, and it sometimes sort of works, but not quite, precisely because of how hard it is to get into the "high reputation" club (on the other hand, once you're in, you can often publish a lot of lower-quality stuff on reputation alone, so I'm not saying this is a perfect system either).
In other words, I don't think anyone reasonable is seriously against making peer review more transparent, but for better or worse, the current system (with all of its other downsides) is relatively robust to outside interference.
So, unless we (a) make "being a scientist" much more financially accessible, or (b) untangle funding from this new "open" measure of "scientific achievement", the open system would probably not have much impact. Of course, (a) is unlikely, at least in most high-impact fields; CS was an outlier for a long time, not so much today. And (b) would mean that funding agencies would still need something else to judge your research by, which would most likely still be some closed, reputation-based system.
Edit TL;DR: Describe how the open science peer-review system should be used to distribute funding among researchers while being reasonably robust to coordinated attacks. Then we can talk :)