The underlying weakness of the Miller-Rabin test has been known for a long time, and yet "just do N rounds of Miller-Rabin" keeps being perpetuated as the folklore solution to probabilistic primality testing. The BPSW test is basically superior in every way: there are no known counterexamples (and there's currently no feasible strategy to construct counterexamples, if any exist), it's as fast as a single Miller-Rabin test for discarding composites, and if I'm not mistaken, it's about as fast as N = 3 Miller-Rabin tests for primes. In other words, doing N = 50 Miller-Rabin rounds instead of just doing a single BPSW test really makes no sense. I'm happy that the authors advocate the BPSW test as a replacement. BPSW is the default pseudoprime test used in Mathematica, Maple and Pari/GP so it's definitely battle-tested too. Perhaps this work will finally prompt widespread change.
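For anyone curious what BPSW actually is: a strong base-2 Miller-Rabin test followed by a strong Lucas probable-prime test. Here's a minimal Python sketch, leaning on sympy's is_strong_lucas_prp for the Lucas step (which uses Selfridge's parameter selection); the trial-division list and the function names are just my choices, not any library's API:

    from sympy.ntheory.primetest import is_strong_lucas_prp

    def miller_rabin(n, a):
        # Strong probable-prime test to base a, for odd n > 2.
        d, s = n - 1, 0
        while d % 2 == 0:          # write n - 1 = d * 2^s with d odd
            d //= 2
            s += 1
        x = pow(a, d, n)
        if x in (1, n - 1):
            return True
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                return True
        return False               # a witnesses that n is composite

    def bpsw(n):
        # Baillie-PSW: trial division, strong base-2 M-R, strong Lucas.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        return miller_rabin(n, 2) and is_strong_lucas_prp(n)

(If I remember right, sympy.isprime itself already uses a BPSW-style test under the hood, so in practice you'd just call that.)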
"Perhaps this work will finally prompt widespread change."
I sure hope so. I've been making this argument to various language and library maintainers for a few years, and mostly they don't care; they've heard of M-R and nothing else.
One bottleneck I ran into is the "prove that the failure probability is smaller" argument. The proven probability bound for BPSW is actually not very good, but empirically that bound is grotesquely conservative. One can always add more random-base M-R tests on top if desired (see the sketch below).
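To make that concrete: if a reviewer demands a provable bound, run BPSW first and stack k random-base M-R rounds on top, which buys the standard 4^-k guarantee without giving up BPSW's strength. A sketch reusing the functions from the earlier snippet (the name bpsw_plus_mr and the default k are made up for illustration):

    import random

    def bpsw_plus_mr(n, k=10):
        # BPSW first, then k extra Miller-Rabin rounds with random bases.
        # Each round passes a composite with probability < 1/4, so this
        # adds a provable 4^-k error bound on top of BPSW.
        if n < 5:
            return n in (2, 3)
        if not bpsw(n):
            return False
        return all(miller_rabin(n, random.randrange(2, n - 1))
                   for _ in range(k))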
There is also the problem that statistical thinking is hard for people. This comes up with fixed bases: people don't grasp that using a fixed set of bases ruins the "2^-80 probability!" (for example) claim, since that bound only applies to random inputs; against any fixed base set, an adversary can construct composites that pass every round.
If the attacker has access to your prime number generator, how does a recommendation to change anything (like the primality test) in your system help? While interesting, this seems like a thin layer of protection. OTOH, every small thing is a step in the right direction, right?