It does not have to be slow in all domains. In my field there are many modeling papers. Journals could require that all data and code be open access, or at least define a submission type where this is the case. You could even automate the process of running submitted containers / packages.
All that would prove is that the code implementing a buggy version of a model gives consistently wrong results. Actual replication requires that somebody else implements the model independently, and that both implementations are checked over a reasonably wide range of parameters for which the model should be valid.
Actual replication is much more effort and consequently slower than just a "docker run".
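To make the distinction concrete, here is a minimal, entirely hypothetical sketch of what independent replication looks like: two implementations of the same toy decay model, one closed-form and one written independently as a numerical integration, cross-checked over a grid of parameters. The model, function names, and tolerances are all invented for illustration; a real replication would involve a far richer model and parameter space.

```python
import math

def decay_reference(t, a, b):
    # "Original" implementation: closed-form solution a * exp(-b * t).
    return a * math.exp(-b * t)

def decay_reimplementation(t, a, b, steps=100_000):
    # Independent implementation: integrate dy/dt = -b * y with
    # y(0) = a using simple Euler steps, never calling exp().
    y = a
    dt = t / steps
    for _ in range(steps):
        y += -b * y * dt
    return y

def cross_check(params, times, rel_tol=1e-3):
    # Compare both implementations over the parameter range in which
    # the model is supposed to be valid; report the first mismatch.
    for a, b in params:
        for t in times:
            ref = decay_reference(t, a, b)
            new = decay_reimplementation(t, a, b)
            if not math.isclose(ref, new, rel_tol=rel_tol):
                return False, (a, b, t, ref, new)
    return True, None

ok, mismatch = cross_check(
    params=[(1.0, 0.1), (2.0, 0.5), (5.0, 1.0)],
    times=[0.5, 1.0, 2.0],
)
print(ok)
```

The point is that agreement here is evidence about the model, not just the code: a bug shared by copy-pasting the original implementation could never be caught this way, which is exactly why "docker run" reproduction proves so little.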
While I agree that this should be mandatory in many cases, it doesn't prove much on its own. I've experienced many cases where the open code worked fine with the test data provided, but failed completely when I tried it with my own real-world data.