Dynamic translation lets you do lots of nasty tricks, and also lets you detect and work around various nasty tricks at runtime (worst case by falling back to emulation). Consider that old games often used self-modifying code, for example. Reliably detecting self-modifying code statically can be extremely hard even when it isn't intentionally obfuscated. But at runtime it is "easy": write-protect all the code pages, trap the resulting page faults, and then either re-translate the modified code or fall back to emulation for that region.
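A rough sketch of that trap-and-handle trick (Linux/POSIX only; the guest_code buffer and the "invalidate translations" step are placeholders I'm inventing for illustration, not any particular translator's API):

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void  *guest_code;   /* page holding (pretend) translated guest code */
static size_t page_size;

static void segv_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* Was the faulting address inside the write-protected code page? */
    if ((char *)info->si_addr >= (char *)guest_code &&
        (char *)info->si_addr <  (char *)guest_code + page_size) {
        /* Self-modifying write detected: make the page writable again,
         * and (in a real translator) mark the translations covering it
         * stale so they get re-translated or emulated on next execution. */
        mprotect(guest_code, page_size, PROT_READ | PROT_WRITE);
        return;                      /* the faulting write is retried */
    }
    /* Not our page: restore default behaviour and crash normally. */
    signal(SIGSEGV, SIG_DFL);
    raise(SIGSEGV);
}

int main(void)
{
    page_size  = (size_t)sysconf(_SC_PAGESIZE);
    guest_code = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    memset(guest_code, 0x90, page_size);       /* stand-in "guest code" */

    struct sigaction sa = {0};
    sa.sa_sigaction = segv_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    mprotect(guest_code, page_size, PROT_READ); /* write-protect the page */

    /* This "self-modifying" write now traps, is detected in the handler,
     * and completes once the handler re-enables writes. */
    ((unsigned char *)guest_code)[0] = 0xC3;
    printf("self-modifying write detected and handled\n");
    return 0;
}
```

A real translator would of course do the invalidation and re-translation inside the handler instead of just re-opening the page, but the detection mechanism is exactly this.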
In general, I think dynamic approaches are ideal when you're dealing with a "hostile" environment: software that was written with no expectation of ever being translated, possibly (as with old games) by a programmer trying to squeeze every last bit out of the hardware. In that setting, failing to statically determine that something weird is going on matters much less, because you can detect attempts to violate your assumptions at runtime and counteract them then, which is often far simpler.
You can do hybrid approaches: make a static "best effort" and include similar traps for anything that breaks your assumptions, falling back to JIT or emulation when they fire. But there's a tipping point: once you need enough dynamic machinery anyway, it becomes easier to just do everything dynamically from the start.
The performance question is not so easy to settle. JITing code takes a bit of time, but not much compared to the total time the program will typically run afterwards. Static compilation can spend more time on optimisations, but JITs at least in theory have more information to work with: they can detect the specific processor model and use specialised instructions or alter instruction selection accordingly, and could in principle even do tricks like rearranging data for better cache behaviour based on access patterns profiled during the current run (I have no idea whether any existing JITs actually do that).
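For the "detect the specific processor and pick specialised code" part, here's roughly what the check looks like in C using GCC/Clang's __builtin_cpu_supports (x86 only; sum_avx2 and sum_scalar are made-up stand-ins for whatever specialised code a JIT would actually emit):

```c
#include <stdio.h>

static long sum_scalar(const int *v, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++) s += v[i];
    return s;
}

/* Stand-in for a hand-vectorised or JIT-emitted AVX2 version. */
static long sum_avx2(const int *v, int n) { return sum_scalar(v, n); }

int main(void)
{
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};

    __builtin_cpu_init();
    /* Pick the implementation once, based on the CPU we're actually on;
     * a JIT would instead just emit AVX2 instructions directly. */
    long (*sum)(const int *, int) =
        __builtin_cpu_supports("avx2") ? sum_avx2 : sum_scalar;

    printf("sum = %ld\n", sum(data, 8));
    return 0;
}
```

A static compiler has to either pick one target up front or ship several code paths like this; a JIT gets the answer for free at translation time.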