Hacker News

Agreed, the JVM's thresholds for method compilation are higher than the test strategy seemed to account for.

Also, quoting from the site:

    TruffleRuby eventually outperforms the other JITs, but it takes about two minutes to do so. It is also initially quite a bit slower than the CRuby interpreter, taking over 110 seconds to catch up to the interpreter’s speed. This would be problematic in a production context such as Shopify’s, because it can lead to much slower response times for some customers, which could translate into lost business.
There are always ways around that, for example pushing artificial traffic at a node as part of a deployment process, prior to exposing it to customers. I've known places that have opted for that approach, because it made the best sense for them. The initial latency hit of JIT warm-up wasn't a good fit for their needs, while every other aspect of using a JIT'd language was.
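As a sketch, that warm-up step can live in the deploy script itself, between starting the node and registering it with the load balancer. The load-generator tool, host, port, and endpoint paths here are all illustrative assumptions, not anything from the article:

```shell
#!/bin/sh
# Hypothetical deploy-time warm-up: replay representative traffic at the
# new node so the JIT compiles the hot paths before real users hit it.
# "hey" is one common HTTP load generator; paths/counts are assumptions
# and would be tuned to the app's actual hot endpoints and JIT thresholds.
hey -n 20000 -c 50 http://new-node:3000/products
hey -n 5000  -c 20 http://new-node:3000/cart

# Only after warm-up completes: add the node to the load balancer.
```

The same script doubles as the cache-warming step mentioned below, since the synthetic requests populate application caches as a side effect.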

As ever, it depends on the trade-off whether it's worth the extra work to do that. e.g. if I could see after 5-10 minutes that TruffleRuby was, say, 25% faster than YJIT, then that extra engineering effort may be the right choice.

edit: Some folks throw traffic at nodes before exposing them to customers to ensure that their caches are warm, too. It's not necessarily something limited to JIT'd languages/runtimes.



If you are able to snapshot the state of the JIT, you can do the warming on a single node. The captured JIT state can then be deployed to other machines, saving them from spending time doing the warming. This increases the utilization of your machines.

While this approach sounds like a convoluted way to do ahead-of-time compilation, I've seen it done.


IBM's JVM used to support it at one stage, not sure if it still does or if other JVMs have picked it up.


It appears to have a cool-sounding JIT server mode, allowing multiple clients to share a caching JIT compiler which does most of the heavy-lifting:

https://www.usenix.org/conference/atc22/presentation/khrabro...

https://developer.ibm.com/articles/jitserver-optimize-your-j...

It also has a "dynamic AOT compiler", so first-run stuff can be JITed and cached for future execution instead of it all starting out interpreted every time.


The shared class cache was the thing I was thinking of, I think.

https://developer.ibm.com/tutorials/j-class-sharing-openj9/
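For concreteness, OpenJ9's shared class cache is enabled with the `-Xshareclasses` option: the first run populates a named cache (class data plus AOT-compiled code), and subsequent runs on the same machine start from it instead of interpreting everything cold. The cache name, size, and jar name here are illustrative:

```shell
# First run creates and populates the cache; later runs reuse it.
# -Xscmx sets the cache size; "myapp" and myapp.jar are placeholders.
java -Xshareclasses:name=myapp -Xscmx256m -jar myapp.jar

# Inspect what ended up in the cache:
java -Xshareclasses:name=myapp,printStats
```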

Looks like there's something similar for OpenJDK & Oracle Java as of version 12 (I think it is?)

https://docs.oracle.com/en/java/javase/19/docs/specs/man/jav...
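The OpenJDK equivalent is class-data sharing (CDS/AppCDS). Worth noting it archives class metadata rather than JIT-compiled code, so it helps startup more than peak-performance warm-up. With dynamic CDS (JDK 13+) the archive is written in one step when a trial run exits; the archive and jar names below are placeholders:

```shell
# Trial run records the classes the app loads and writes an archive at exit.
java -XX:ArchiveClassesAtExit=myapp.jsa -jar myapp.jar

# Later runs map the archive for faster class loading at startup.
java -XX:SharedArchiveFile=myapp.jsa -jar myapp.jar
```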



