
In addition to the sibling comments, one simple opportunity available to a JIT but not to an AOT compiler is 100% confidence about the target hardware and its capabilities.

For example, AOT compilation often has to account for the possibility that the target machine lacks certain instructions - SSE/AVX vector ops, say - and emit both SSE and non-SSE versions of a codepath, with a runtime check to pick the appropriate one.

Whereas a JIT knows what hardware it's running on - it doesn't have to worry about any other CPUs.
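
To make that concrete, here's a rough sketch of the kind of runtime dispatch an AOT-compiled binary ends up carrying (assuming GCC/Clang on x86; the function names are made up for illustration):

    /* Minimal sketch (assuming GCC/Clang on x86) of the runtime dispatch an
       AOT-compiled binary carries when the target CPU is unknown at build time.
       The function names are illustrative, not from any real library. */
    #include <immintrin.h>
    #include <stddef.h>

    static void add_scalar(float *dst, const float *a, const float *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }

    __attribute__((target("avx")))
    static void add_avx(float *dst, const float *a, const float *b, size_t n) {
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {            /* 8 floats per 256-bit register */
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
        }
        for (; i < n; i++)                      /* scalar tail */
            dst[i] = a[i] + b[i];
    }

    void add(float *dst, const float *a, const float *b, size_t n) {
        /* Both versions live in the binary; the branch picks one at runtime. */
        if (__builtin_cpu_supports("avx"))
            add_avx(dst, a, b, n);
        else
            add_scalar(dst, a, b, n);
    }

A JIT, by contrast, can just emit the AVX version (or not) and skip the check entirely.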




One great example of this was back in the P4 era, when Intel hit higher clock speeds at the expense of much higher instruction latency. If you built a binary for just that processor, a smart compiler could use the usual tricks to hit very good performance, but that came at the expense of other processors and/or compatibility (one appeal of the AMD Athlon, and especially the Opteron, was that you could just run the same binary faster without caring about any of that[1]). A smart JIT could smooth that out considerably, but at the time the memory & time constraints were a challenge.

1. The usual caveats about benchmarking what you care about apply, of course. The mix of webish things I worked on and scientists I supported followed this pattern; YMMV.


AOT compilers support this through a technique called function multi-versioning. It's not free and only goes so far, but it isn't exclusive to JITs.

The classical reason to use FMV is for SIMD optimizations, fwiw
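
For reference, a minimal sketch of FMV using GCC/Clang's target_clones attribute (the function and the ISA list here are just illustrative); the compiler emits one clone per listed target plus a resolver that picks among them when the program loads:

    /* Sketch of function multi-versioning with GCC/Clang target_clones.
       The compiler builds an avx2, an sse4.2, and a baseline clone from this
       one definition, plus an ifunc resolver that selects one at load time. */
    #include <stddef.h>

    __attribute__((target_clones("avx2", "sse4.2", "default")))
    float dot(const float *a, const float *b, size_t n) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)   /* same source; each clone vectorizes differently */
            sum += a[i] * b[i];
        return sum;
    }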



