RedHat Mandrel Makes Java Native (infoq.com)
176 points by shortlived on July 2, 2020 | hide | past | favorite | 139 comments



After playing with Java native compilation, one thing has become clear to me: it is not a general-purpose technology where one can take an existing Java application and make it native. It will not work even for apps written while choosing freely from the vast Java library ecosystem.

Framework vendors are betting on the fact that most Java apps are HTTP/ORM/JSON plus metrics, security, etc. So if the framework provides the transitive closure of these libraries along with the configuration to compile them natively, it will work.

For my purposes it does not work, because I have many more dependencies that do not fit in the above frameworks. Also, I prefer to write my code directly against the JDK and external libraries without any of these frameworks.


Since their introduction around 2000, AOT compilers have been special purpose.

There are several features in the platform for dynamic code generation that are very hard to make work in an AOT scenario.


This is fine. However, one thing that is not clear to me: after telling us for decades that startup time and memory usage do not matter, why is the Java world promoting Graal native so much?


As someone who's spent much of the last 20 years writing J2EE, Java EE and Jakarta EE projects, the answer is definitely that with containers and container orchestration, no one believes it anymore. Our new services are written in Go and have significantly shorter start-up times and smaller memory usage and it does matter (to us at least)!


I think this touches on some of the points that GraalVM/Mandrel and Quarkus are trying to address. No one wants those large J2EE/Spring monoliths anymore especially if they're deploying to K8s. Being able to quickly spin up replicas is where that short startup time and smaller memory usage comes into play. Scaling out vs scaling up (horizontal vs. vertical) is what Java devs have needed and it's being addressed with these new technologies.


Agreed ... but as noted elsewhere in this thread, your chance of getting a Wildfly (e.g.) application running in Quarkus and compiled to native via GraalVM is pretty small unless you've got a pretty standard CRUD application. We run into all sorts of problems with connections to anything that isn't a database including jobs that process JMS messages in an async fashion.


The goal is to remain competitive with Go. Java is still going to excel in cases when the benefits of JIT and superior GC have the opportunity to show themselves.

Graal native gives up the benefits of JIT for fast startup. It might be a useful tradeoff for some but it is not a universal solution.


The funny thing is HotSpot's startup time and memory usage are drastically lower on JDK 14 vs. 8 [0].

I think the next step for Java will be more people shifting to using ZGC which automatically releases unused heap back to the OS where the current default (G1) doesn't.

[0] https://cl4es.github.io/2019/11/20/OpenJDK-Startup-Update.ht...


Because Graal Community is free beer and plenty of developers don't want to pay for their tools, but enjoy being paid.


I don't understand your point; OpenJDK is free beer too. And GraalVM is an Oracle project, as is HotSpot?


And exactly for that reason, many are also euphoric about jaotc, which makes use of Graal anyway.


Simple applications that don't use reflection are pretty easy to compile with graal native image.

You absolutely don't need to use a framework like micronaut. You do however need to understand what libraries you are using, and if/how they use reflection and dynamic class generation.
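As a rough illustration (the `Hello` class here is hypothetical, not from the article): a program with no reflection, no dynamic class loading, and no resource scanning is the kind of thing native-image handles without any extra configuration.

```java
// Hello.java: no reflection, no dynamic class loading, no resource
// scanning -- GraalVM native-image can compile this with zero config.
public class Hello {
    static String greet(String[] args) {
        return "Hello, " + (args.length > 0 ? args[0] : "world");
    }

    public static void main(String[] args) {
        System.out.println(greet(args));
    }
}
```

After `javac Hello.java`, something like `native-image Hello` should produce a standalone executable. It's once the code (or a library it pulls in) starts calling Class.forName or generating classes at runtime that the extra configuration work begins.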


I totally agree with all you said. But as there are different styles and approaches to development, I can say I've mostly worked (and still do) on teams where rarely anything we touch works without reflection. Concretely: if you work for a company that has committed and hired to be building Spring (Boot) applications, GraalVM seems like a complete non-starter.

To the point of: if you want to deploy serverless applications and GraalVM requires you to write Java code that is so fundamentally different from what you have been doing for so long, why would you even stay with Java? You've already lost much of the benefit of it being supposedly "the same language", and you might then just as well write your serverless application in JavaScript or Python instead of using a far-from-mainstream solution like GraalVM.

I'm even wondering how obvious or subtle the bugs would be if I ever accidentally did something reflective in GraalVM.

I found this video very validating for thoughts I already had about how optimizing Java for use in AWS Lambda is not how I want to be programming Java: https://www.youtube.com/watch?v=ddg1u5HLwg8 What he lays out in this video makes me think back to doing my own memory management in C++. There's surely a place for that: high-level application development just ain't that place, in my mind.


Spring Boot is setting the Java ecosystem back 10 years. But such missteps aside a lot of modern Java libraries have a more "plain old Java" attitude, and it's now reasonably idiomatic to write Java code in a style that doesn't depend heavily, or even at all, on reflection.

(Myself I'm interested to see how well this handles Scala, since avoiding reflection completely is pretty much the norm there).


It's also widening the gap between Dev and Ops. I was a developer for 6 years and started moving towards DevOps and trying to build relationships between the ops teams and the programmers. The programmers are always whining about not being able to use the latest language/tool/library/<<insert flash tech here>> while the ops team are just trying to make the pile of junk they've been handed work.

Spring, and Spring Boot most egregiously, work only to widen the gap between devs and ops. The developers have no idea what a Boot app is doing internally for 90% of its stack. The ops guys just want to stably deploy something that is well understood and supportable when it falls over. Not to mention the memory management headaches it introduces...

I moved away from JS early on in my career because it's mostly a disaster waiting to happen, but now the Java devs have the same potential to output a steaming pile of crap that they don't understand.


> The developers have no idea what a Boot app is doing internally for 90% of its stack.

This is the biggest problem by far with Spring Boot. It’s not just hidden default settings or convention or configuration, it’s entire hidden applications.


> Not to mention the memory management headaches it introduces...

Could you elaborate on this?


We have had app teams whose Spring Boot application leaks memory at a rate that meant we needed to do app restarts once a month, even once a fortnight on occasion. Investigation is then hampered by half of the conversations we have being along the lines of "well we're using Spring Boot so it can't be our problem" and the other half are "it's Spring Boot causing the leak."

The dev team refused to accept responsibility as though they hadn't fought tooth and nail to use Spring Boot in the first place.


Just curious how you're measuring the memory leak in this case? Can you explain how that's being determined?


We monitor just about every aspect of the machines and JVMs that we own. When the Heap graph climbs steadily over days/weeks at a time, i.e. each GC still results in considerably more overall memory usage.


I read that as more of a criticism of the ORM/JPA and maybe XML than Spring Boot. In practice it's easy to pull down half the database with an errant "eagerly loaded" collection or attempt to deserialize a giant XML document into an in-memory DOM, both of which can exhaust memory and hang JVMs with very little information on what happened other than an 'out of memory' exception.


Seconded. Throughout all my time working on Java, Scala and Clojure, I've considered reflection to be a code smell. It's a bit more complicated with Clojure, since it uses reflection to bootstrap the language.


There's a burgeoning ecosystem in Clojure for creating native apps using GraalVM native image.

https://github.com/borkdude/babashka is a good place to start


Babashka is a scripting tool that is meant to be compiled with GraalVM native image, and a great tool, but to compile your own Clojure app like that, a good starting point might be https://github.com/taylorwood/clj.native-image

(or https://github.com/taylorwood/lein-native-image if you are using Leiningen)


> Spring Boot is setting the Java ecosystem back 10 years.

How so?


By Spring 4.x it felt like the ecosystem had learned the value of staying close to vanilla Java - annotation-based configuration meant that there would be a (Java) reference to any class that was instantiated by Spring, constructor injection meant that there was less need for Spring-specific lifecycle methods like afterPropertiesSet() and it was more practical to write "dual-use" classes that could be used with or without Spring.

Whereas for me Spring Boot has the same feel as the bad old days of early Spring 3.x - lots of magical constructs that were impossible to understand unless you knew all the Spring specifics. A Spring Boot application will behave completely differently based on just which libraries happen to be on the classpath - the exact same code might start up a tomcat server when run one way and not when run another way - and has a whole host of new Spring-specific things that you have to learn if you're going to understand how your application is behaving.


It increases the distance between the developer and the actual, deployable executable that they output. It's like all these JS libraries that allow you to spin up an app in 20 lines. It's meaningless garbage unless you understand the libraries, but the people who are writing those apps usually just copy and paste the code from somewhere.


I am quite happy that with my hops between Java and .NET, every time I land on a Java project I have been able to avoid it.


Apparently you are not representative of average HN user :/ just kidding..


Well, it is true, especially if you follow my threads regarding games development, or paying for software. :)


Seems like some confusion here.

GraalVM is two things. One is a regular but enhanced HotSpot. All existing Java code works on it. You can also run all the Truffle/Graal languages on it fast - it basically turns the JVM into a VM for many languages with good or great performance.

Then there's "native-image"/SubstrateVM. This can also run most Java code, including all code that uses reflection. Reflection isn't missing or broken on this platform, but by default it throws out the metadata you need for it and maybe the code (because it will appear unused). So you have to tell native-image what classes you'll want to reflect over at runtime so it keeps the data. This is well known to mobile developers who used ProGuard because it's the same; you have to say "classes matching these patterns need to be reflectable so don't optimise them".

You can't really have bugs caused by not setting these configurations correctly because if you try to reflect something you didn't pin you immediately get an exception telling you about your mistake.
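For concreteness, that configuration is typically a JSON file given to the native-image build; only the listed entries keep their reflection metadata. The class name below is purely illustrative:

```json
[
  {
    "name": "com.example.MyDto",
    "allDeclaredFields": true,
    "allDeclaredMethods": true,
    "allDeclaredConstructors": true
  }
]
```

A flag along the lines of `-H:ReflectionConfigurationFiles=reflect-config.json` points the build at it, and newer GraalVM releases can also generate this file automatically by running the app under a tracing agent.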

The thing SubstrateVM cannot do, at least not until a later project that Oracle are working on is released, is run dynamically generated bytecode.

It's bytecode synthesis and plugin loading at runtime, not reflection, that causes the compatibility issues.


Because Python doesn't have the performance of any JVM implementation (not counting toy ones here), not even via PyPy, and JavaScript lacks the tooling, the performance of static typing, and good support for parallel programming.


If rampant reflection becomes a taboo because of native compilation, that would be great.


This is already happening with Kotlin driving people to use functions instead of annotations. E.g. the kofu project gets rid of most annotation-driven configuration for Spring Boot, specifically with the aim of speeding up startup and ultimately enabling native compilation via Graal.


You can use annotations without reflection; you just need another build step. Take a look at Micronaut [1]: it looks a lot like Spring Boot but does all annotation processing at compile time. Because of that, it also supports GraalVM as a first-class target.

[1] http://micronaut.io/


Really, thanks for sharing this. I just perused the docs and it's got me excited to get back into Java development again!


Thanks for sharing this, I’m new to Micronaut - it seems to solve a lot of problems inherent to Java.


+1 - I tried using GraalVM for a Clojure(script) + re-frame application recently and it didn't work; changing back to OpenJDK works fine. It's a shame because the times where Graal does work, the start-up time is orders of magnitude faster.


What is often not clear is that GraalVM in "native" mode is NOT faster than JVM mode, because the JVM has a number of tricks up its sleeve for dynamic optimization based on actual usage patterns. It definitely has faster startup times, but for a web client/server application this is not so significant.


I personally am more interested in the lower memory usage.


Memory usage is smaller mostly for small apps. Larger apps will see larger memory usage because the GC and heap layout are a lot weaker.

SVM/native-image has its place, but currently it's optimised for things like command line apps or servers that people constantly shut down and start up to try and save on huge AWS costs. Actual runtime throughput and latency will be far better on regular OpenJDKs, especially the latest ones with much better garbage collectors.


I believe that startup time is already a solved problem https://github.com/facebook/nailgun

There is also https://mail.openjdk.java.net/pipermail/discuss/2020-April/0...


Not a solved problem.

nailgun doesn't solve it generally.

Leyden is not released yet; its proposal for the future of the JDK is validation of the approaches that projects like GraalVM/Mandrel and Quarkus are taking.


Exactly. Add a little bit of jaxws, jpa, javafx stuff and it won't work.

Still today, jlink won't work for the above modules (they don't generate jmods, especially all that jakarta.xxx stuff) with the original tool; imagine with GraalVM.


JPA works fine, as proven by Quarkus, which uses Hibernate. For JavaFX you should be using Substrate from Gluon to achieve this.


Reading the article hurts me. The link from RedHat [1] is much better.

What is hard to understand is Quarkus.

>The Quarkus project was launched in 2019 and provided that evolutionary step needed for Java developers in this new world of Kubernetes and serverless.

And on its webpage

>A Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from the best of breed Java libraries and standards.

So correct me if I am wrong. Quarkus is an umbrella term for all the "selected" open source projects that work well on top of Mandrel/GraalVM and Kubernetes?

[1] https://developers.redhat.com/blog/2020/06/05/mandrel-a-comm...


GraalVM has an AoT (ahead-of-time) compiler that links against a “Substrate VM” [1] library. It [still starts as]* bytecode but an executable is generated that has fast startup times and low disk/memory footprint. These properties are important for subsecond Serverless Function spin-up and small container images starting with a minimalist Linux like Alpine.

It’s not clear to me if JBoss/Redhat/IBM add anything new to the mix or are simply embracing GraalVM and are trying to make the experience seamless for their customers.

[1] https://www.graalvm.org/docs/reference-manual/native-image/

*update: edited bytecode sentence based on thu2111's comment


The binary produced by native-image (which uses SubstrateVM) doesn't have any Java bytecode left in it. It's all AOT compiled machine code.


No, it is more than that. It's a framework that performs build-time optimization so that those open source projects boot up faster (e.g. moving configuration to build time) and compile natively (e.g. setting up reflective accesses automatically or removing them altogether at runtime). Libraries usually provide a "Quarkus Extension" to do those optimizations.


No that would just be marketing fluff, it's actually designed "to drive speed and efficiency of the Quarkus framework, with its 'supersonic subatomic Java.'"


In case you are wondering, like me, what the point is here, a quote from the linked release: "The difference for the user is minimal, but for maintainability the upstream alignment with both OpenJDK 11 and GraalVM is critical. It means that Red Hat can offer better support to customers since we have skilled engineers working within the OpenJDK and GraalVM community."

So basically Red Hat is building this so they have a stable baseline for their Enterprise releases.


> So basically Red Hat is building this so they have a stable baseline for their Enterprise releases.

Can you tell me what Red Hat is not building for their enterprise releases?


Ok so this is just a re-packaging of GraalVM by RedHat. Which is great for those who want nothing to do with Oracle. But I see this as the only plus - there really are no other differentiators. Keep in mind the relevant license here: GPLv2+CPE (classpath exception) which still applies whether you use Mandrel from RedHat or GraalVM from Oracle.

I've used GraalVM and while it's a bit fiddly to set up, it is great for containerised apps owing to lower memory usage and faster startup - this outweighs (for my use case) the slightly lower throughput vs. using the regular JVM.


> Ok so this is just a re-packaging of GraalVM by RedHat

I think the implications of this might be much deeper than just a re-spin of Graal. If Red Hat ports all their applications into a native mode, and starts offering native spinoffs of EAP or of their other software, that might be an insanely interesting offer... Hell, Keycloak is written in Java...

It now depends on how strongly RH will push internally for GraalVM adoption.


I've been out of the loop on the AoT side of Java for a while now. How do these systems handle dynamic class-loading / code generation and things like hot patching?


They don't. They explicitly refuse to compile an app that takes advantage of things that can't statically be determined to be available at compile time. There are a few exceptions, but it is best not to rely on them.


Given the reliance of most, if not all, major Java apps on reflection, factory patterns, type parameters/generics, and other dynamisms a la Spring (which had been practiced and preached for the longest time), I'd expect most apps won't benefit from AOT compilation then.


Yeah, it is not going to compile everything. Most Java apps, which were built in a world where startup time was a known disadvantage of the JVM, will probably be better off with a JITed VM anyway. This is more of a new capability, allowing JVM languages to move into use cases where they have never shined before, like CLI apps. I would expect most usage of AOT compilation will be for new software, not existing software.

That being said, generics and factory patterns aren't a problem for graal at all. Reflection and dynamic class loading are the biggest offenders.


How about GUI? Does JavaFX rely on dynamic classloading? (I presume so, given its XML-driven approach.) If so, would it be possible to refactor it away? Or perhaps introduce a preprocessing phase of some sort?


Gluon put out substrate which is supposed to do this:

https://github.com/gluonhq/substrate


Great that this is in the works.

Do you know about its workings?


I do not, other than it's based on GraalVM native image. I just know of its existence. I believe this is how gluon is pushing crossplatform for iOS since apple doesn't like the JVM JIT.


Reflection, factory patterns, and type parameters/generics are all handled by Graal. You just need to make the appropriate config file for your app.


> type parameters/generics

Those are purely compile-time constructs.


At least you'd still have AspectJ, because it can instrument bytecode before the AOT compiler sees it. Replacing Spring with Dagger would let you codegen your DI, but in 2020 you need a really good reason to blow that much eng work to improve your startup time.


One of the things that those sticking to Java 8 are missing out on is the JIT caches that are now part of all major implementations, as those features were integrated into the FOSS variants from the commercial products.


Which is why most commercial AOT compilers for Java have been mostly focused on embedded deployments first, and as code caches or fast startup path on enterprise scenarios.


But isn't most of the dynamic loading limited to the locally available jars?

Which means that the AOT generator has a finite set of classes to consider?

And similarly, all the Spring/Boot dynamic features are a finite combination?

(I have worked on huge JVM systems with huge classpath directories, but ultimately this runtime polymorphism is finite?)


I haven't done much of this at all, but I believe dynamic class loading in Graal's native-image AOT works when the string parameter in Class.forName(param) is a static string or a constant. This actually captures a huge amount of apps, because their only dynamic class loading is for JDBC, and most apps know beforehand which driver they want to use.
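A sketch of that distinction, using a JDK class so it stays self-contained (the commented-out system property is a made-up example of a name that would only be known at runtime):

```java
public class ForNameDemo {
    public static void main(String[] args) throws Exception {
        // Constant class-name literal: native-image's closed-world
        // analysis can resolve this at build time and keep the class
        // in the image.
        Class<?> known = Class.forName("java.util.ArrayList");
        System.out.println(known.getName());

        // Name computed at runtime (e.g. read from config): invisible
        // to the static analysis, so in a native image this fails
        // unless the class is registered in the reflection config.
        // Class<?> driver = Class.forName(System.getProperty("jdbc.driver"));
    }
}
```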


OpenJDK supports various kinds of AOT too, which also supports dynamic class loading. However the performance boost at startup is not as significant as with native-image/SubstrateVM/GraalVM because it stays spec compliant.


I think GraalVM can fall back to a JVM if it can’t AOT the whole thing?


Yeah, but most of the time it is used with --no-fallback. I personally don't really see the point of a fallback...I'd rather have it fail on me and know the reason why than to just become a normal JVM app as a fallback.


I assumed that, with fallback, it still did some JVM->machine code translation, maybe I'm misunderstanding how it all works. Also, it still produces a native executable, which is more convenient to distribute than a jar.


Depends on the implementation. Like others said, GraalVM doesn't support it; GCJ has (had?) a JVM interpreter in the runtime to handle non-static stuff, or you could compile dynamically loaded classes separately to native code and later load them on the fly.


Graal native image is not Java specific, works well for eg Clojure, Scala etc. Does the RH offering also support other Graal supported JVM languages or is it focused on just Java?


I think something that translates Java to a proper native language is a better goal. Something like c2rust[1] but for Java. There is a PoC[2][3] that can convert basic Java constructs to Rust. It can be taken as a base and improved.

[1] https://github.com/immunant/c2rust

[2] https://users.rust-lang.org/t/java-to-rust-converter-for-ted...

[3] https://github.com/aschoerk/converter-page


Wasn't that a feature of GCC Java?


Yep, I remember some Linux distributions even packaged some Java apps compiled to native code using GCJ.

There's even vestigial text about shipping native versions of dynamically linkable Java libs in the Debian packaging manual: https://www.debian.org/doc/packaging-manuals/java-policy/ch0...

Looks like it came around in the early '00s, judging by some bug reports in the archives.


Yeah. Red Hat even got the Eclipse Java IDE running on it (sort of; it was really buggy and a lot slower than running it on the Sun JDK of the time). After Sun made the JDK GPL it was abandoned very quickly.


Is the "native Java / native Scala" movement following the language design and implementing a native code generator for the source code/bytecode?

Or do they produce the same runtime, just with a "more native" implementation?

For example, traditional Java has class loading, hot code reload, reflection, garbage collection, and RPC support. If Mandrel or GraalVM does not provide those runtime features, should it still be called Java?


> class loader, hot reload

Difficult in an AOT environment, and IIRC Graal's native-image does a dependency analysis and builds a static binary.

There is dlopen() in C, it is possible to dynamically load libraries. The problem is it doesn't play well with modern deployment scenarios compared to static linked binaries.

> Reflection, GC

These are possible to implement in an AOT language. See Golang.


Wow! I think this might be good for Java video games.


To the best of my knowledge, GraalVM native images are probably slower at runtime, due to the hotspot vm being unavailable.


GraalVM native images are slower right now, but there's no reason that has to be the case in the future. I think we all have realized over the last 2 decades that though JIT compilers (like the JVM) should be theoretically faster than a traditional native code compiler, this is rarely (if ever) the case. I wouldn't be surprised if over the next 10 years most JVM code (including Java, Scala, etc) is compiled to native images.

It's both clear that Oracle is pushing GraalVM hard and that it's what the consumer wants. If performance was identical, I can't think of a single reason why anyone would choose the regular JDK over GraalVM. It's not like people are slinging jars over the network between linux and windows... hell, now with WSL, that's not even a good reason either.


Actually the JIT compiler often outperforms native code when hot because the latter doesn’t typically get compiled with PGO and the JIT is effectively PGO at runtime.

In addition, the JIT can perform optimisations that a native compiler can’t do, because the JIT can take different decisions if a field is null repeatedly, and thus elide the inclusion of code that supports the non null case leading to potential further optimisations due to dead code elimination and additional inlining opportunities. This is something that PGO won’t give you – the native code compiler will have to be defensive and compile everything, not just what it measured on a few runs of test cases.

In the JIT’s case, when an assumption is violated, it falls back to the interpreter and recompiles with the new knowledge of the change and thus includes more code than before.

The speed advantages of the JIT tend not to be seen on comparison sites like Debian’s language shootout because they don’t exclude start up time and rarely run the code a sufficient number of times to trigger the JIT’s compilation (10,000 method calls by default) to become hot. It’s why the JVM world has tools like JMH to do micro benchmarking.

Finally, in the Graal case, it's actually possible to use the Graal compiler as the HotSpot replacement so you can see the difference in speed and behaviour between the two. Graal has some nice wins but also some disadvantages; it's good at fixing some of Scala's generated code, which is why Twitter are big fans, but on regular Java code the advantages are not as pronounced.

The main benefit to Graal is the reduction in startup time, not execution speed at runtime. For long running servers you are better off with the JVM’s JIT for performance; for short start up times (like AWS Lambda) you could be better off with jaotc or Graal.


And if using a recent Java version (meaning not 8), that JIT code is cached between executions and can even take PGO data into account.

There are a couple of talks from the .NET team regarding the same issue, and how they are planning that for full support of all .NET features, the way forward is a mix of AOT/JIT.

The Android team also learned the hard way that AOT-only wasn't as good an idea as they thought, hence why ART now does all combinations: interpreted, JIT, and AOT with PGO and code cache.


HotSpot uses a JIT compiler that dynamically recompiles code at runtime to speed up hot code paths. GraalVM native compilation cannot do this, and given the nature of Java's non-reified generics, it's very hard to get equivalent speeds at runtime. Profile-guided optimization is the only hope to recover some of that performance gap, and it's currently only available in GraalVM EE. I'm not sure how the benchmarks compare to HotSpot.


If the above RAM figures are correct (⅕ RAM for native), and if JIT performance is 20% faster than native, then I'm pretty sure that it's cheaper to add 20% more CPU than 5x more RAM. Especially now that we are talking Gravitons and chiplets.


Why would type erasure hurt performance? Certainly expressiveness is curtailed due to Java's generics design (or lack thereof), but I wasn't aware of any performance implications.


Whenever a generic class method returns an erased type and the caller tries to use it, javac has to insert a runtime check (cast) so the bytecode is typesafe. It also forces boxing of primitive types.


But since the type of a function can't change at runtime, why is this necessary?


Because of erasure, there is only one copy of List#get(int), and it returns Object (because there's no other upper bound). The source might say

  List<String> l = ...
  String s = l.get(0);
which looks typesafe, but to pass the verifier and load, the bytecode has to accept an Object and cast it to String before using it, just like Java programmers had to write by hand before generics (this really sucked).

There's not even anything stopping you from putting non-Strings in that List instance via reflection or generic casts (if you ignore the warning) or using a really old javac. You could use a checked collection but that just adds a runtime check during writes that ensures the runtime check during reads will never fail (but the bytecode verifier still won't let you remove it!)
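A small sketch of that heap pollution (a self-contained example, not from the thread): the write through a raw-typed alias succeeds at runtime, and the compiler-inserted checkcast only fires at the read.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List raw = strings;  // raw-type alias defeats the compile-time check
        raw.add(42);         // succeeds: the erased signature is add(Object)

        try {
            // javac's hidden checkcast to String fires here, at the read
            String s = strings.get(0);
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("CCE at the read, not at the write");
        }
    }
}
```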


I should have written ArrayList#get. It's possible to write a class that implements List<String> in a typesafe way, but callers using the standard List interface receive an Object and have to downcast.


It's not very easy to compare java's performance with that of other compiled languages, because of the GC. The closest native analogue is go, which is much slower than java, although it's not quite a fair comparison since go uses a concurrent GC by default and java does not.

Java is quite fast for what it does, and I don't expect any runtime to make it significantly faster (aside the fact that java isn't slow, and its performance problems are memory usage and lack of value types, which native vs. JIT has no impact on).


Languages like Ocaml and Haskell are both around Java speed, though. And even though both languages have more primitive GCs than Java, in practice, they tend to use less memory and have lower latency.

I can only think of one situation when using Ocaml that I had to think about the GC (I was allocating huge floating-point arrays), while I can't even count the number of times I had to deal with a runaway heap or unacceptably long pauses. The last JVM version I've really used was 7, though. Perhaps things have gotten better?


This point has to be noted. There are languages other than Go with optimizing compilers, and those compiling to C. Go has focus on fast compiles and doesn't do much optimization. And no generics means many things are done using reflection / reimplemented poorly and can be slow.

OCaml, Haskell, SML with MLTon, Nim, D, Crystal etc.. have better performance than Java unless one benchmarks a tight numerical loop. Not to mention memory consumption.

JITs are oversold. JIT may be great for optimizing Python or Lua. Moreover, Java is just poorly designed from a performance standpoint: no value types, virtual by default.


Value types are coming, early-access builds are available, and virtual-by-default has long stopped being a problem thanks to Strongtalk-derived optimizations for de-virtualization across shared libraries and call caches.

That "better performance" isn't reflecting in market share.


Performance and market share are different things.


Not when adoption is at play.


What point are you trying to make?


That the beauty of "OCaml, Haskell, SML with MLton, Nim, D, C" and the alleged lack of effectiveness of JIT compilers have done very little for them to take market share from Java in any significant way.

In the case of C, on the contrary, it lost to Java the majority of the roles it owned in 90's enterprise distributed computing applications.


Yes, things have gotten a lot better. G1GC is the default now, and there are free low-latency options like ZGC and Shenandoah. Of course there are the IBM and Azul options as well.
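For reference, these collectors are selected with JVM flags; a rough sketch (exact availability varies by JDK version and vendor build, and `app.jar` is just a placeholder):

```shell
# G1 has been the default collector since JDK 9
java -XX:+UseG1GC -jar app.jar

# Low-latency collectors (on older JDKs these also need
# -XX:+UnlockExperimentalVMOptions; Shenandoah ships in some
# OpenJDK builds but not all)
java -XX:+UseZGC -jar app.jar
java -XX:+UseShenandoahGC -jar app.jar
```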

Scalar replacement got better, and with Graal, partial evaluation gives reduced heap allocation rates, which can have big impacts.

The future is even brighter, with inline (value) types heading toward even fewer allocations and better memory layouts.


You mean partial escape analysis and scalar replacement for reduced heap allocation rates. Partial evaluation is something else, also pretty useful but for other reasons.


Yes I did ;) Thanks for the helpful correction


> I think we all have realized over the last 2 decades that though JIT compilers (like the JVM) should be theoretically faster than a traditional native code compiler, this is rarely (if ever) the case.

This is a very broad claim. Do you have data to support it? It would mean comparing production-quality JIT and AOT compilers for the same language, which are hard to find.

I'm vaguely aware that LLVM's JIT ambitions never worked out because compile times were too long, which (IIRC) they tried to mitigate by running fewer optimizations in JIT mode, which meant the quality of the generated code wasn't as good as AOT. I wouldn't call this production quality, though, and even so it wouldn't allow you to make your claim as broadly as you are.


AOT compilation doesn't make good use of PGO data (you need a very good test data set), and there is zero chance of inlining across dynamic libraries.

What most developers want is not having to pay for their tools, hence why so much love for GraalVM community edition.

Some form of AOT compilation has been part of commercial JDKs since the early 2000's.

In fact, the JIT cache that is now also available for free on OpenJDK traces back to JRockit, and similarly, the JIT code caches now available on OpenJ9 go all the way back to the WebSphere Real Time JVM.

Also, this is another point: by holding on to Java 8, those JIT caches are not available to those who only want the free JDK variants.

However, it is a warm feeling that despite the criticism, here is a project they have kept alive from Sun Labs (aka Maxine VM), and not only have they brought it to production, they have a long-term roadmap to replace C2 with it (Project Metropolis).


> What most developers want is not having to pay for their tools, hence why so much love for GraalVM community edition.

Well, I agree that developers do not want to pay in general. But GraalVM CE/EE are firmly in the enterprise domain, so companies have to make a call on Graal.


GraalVM CE is free as in beer, hence the attention it is getting now.


It's not like people are slinging jars over the network between linux and windows

That happens all the time. Examples:

- Anyone using IDE plugins to IntelliJ, PyCharm, WebStorm or any other developer IDE

- Any team where developers are running a mix of operating systems but work on a shared Java codebase

- Any place developers pull in Java dependencies from Maven Central/JCenter and then use them in an Android app


Debugging support for native images would be well utilized as well.


Indeed. And when the documentation says, "Unless low memory usage is a top requirement, just use the regular JDK for better performance," it does not inspire much confidence in the native solution.


Given Java's outrageous memory usage, I think this is a pretty compelling benefit. It is not easy to provision resources for Java applications relative to just about everything else.


Does Java come with the tools you need to manually manage your memory and avoid the garbage collector? In Unity with C#, the garbage collector is a major problem that is annoying to work around.


There are alternative garbage collectors you can plug in, or write yourself.

https://openjdk.java.net/jeps/304


That's not what they are asking for. Well, there is the Epsilon GC, but rather than writing Java that doesn't allocate, I'd rather write C++ or Go.
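For context, Epsilon (JEP 318) is a no-op collector: the JVM allocates but never reclaims, so steady-state code has to avoid allocation entirely, typically by preallocating and reusing buffers. A hypothetical sketch (class and method names are just for illustration):

```java
// Run with: java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC ReuseDemo
// Under Epsilon, nothing is ever freed, so hot paths must not allocate,
// e.g. by reusing a buffer allocated once at startup:
public class ReuseDemo {
    private static final double[] buffer = new double[1024]; // allocated once, up front

    public static double sumOfSquares(int n) {
        // fill and reduce the preallocated buffer; no allocation here
        for (int i = 0; i < n; i++) {
            buffer[i] = i * (double) i;
        }
        double sum = 0;
        for (int i = 0; i < n; i++) {
            sum += buffer[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(4)); // 0 + 1 + 4 + 9 = 14.0
    }
}
```

Which illustrates the parent's point: this style is effectively manual memory management written in Java, at which point C++ or Go may indeed be a more natural fit.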

This is the problem with the Java ecosystem: horribly reinventing simpler things in complex ways, from frameworks to build systems, and selling them as the next big thing™.


Then you either have the pleasure of winning the lottery and being part of the 70% statistic, or you do tricks with 1GB memory allocations to force the GC to work. Choices.


What 70% statistic?



You still have a GC and no value types.


Value types are overrated. Lots of game engines chose C# because of value types; does any game in C# have good performance or memory usage? Unity games are memory hogs...


Somehow I think they would still be memory hogs if written in C. Better not to conflate the capabilities of the tooling with the skills of those who use them.


Any real-world application to demonstrate those skills, instead of empty words?


Yes, but it is up to you to find out; my words are backed by my CV and customer relationships.


My question is: if you can code in C++, why should you run C++ on top of GraalVM instead of going native (and faster)?


Because to code in C++ with the safety of Java, especially in distributed computing scenarios, I have to make people go through this:

https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines

https://docs.microsoft.com/en-us/cpp/code-quality/?view=vs-2...

And most of the time it is a lost battle to force people to change, so it is easier to bring in people who have a different culture toward security when plugging software into the network.

As for GraalVM's support for LLVM, that is mostly for language interoperability purposes.


Because in my case Java is the only language with libraries that implement all the rare needs I have. And because I need to create an application with the smallest memory footprint possible. In my best attempt at it, I use a Rust web server that calls a statically GraalVM-compiled Java library containing all the business code, thus eliminating the fat Java HTTP/web libraries. So I have a smaller executable footprint, which translates into a smaller memory footprint, because executables are loaded into memory at some point.


Because Java is much cheaper in terms of development time and HR.


Well, I disagree with this. You can waste a lot of time, especially in the debugging phase, due to runtime errors not caught at compile time.


Debugging Java is easier than C++ though.


"Supersonic subatomic Java on Red Hat Linux?" Did somebody open a portal to the 1990's?


If this was implemented by the JPCSP guys it would be a nice counterpart against PPSSPP.


I don't think it would be that helpful. Emulators usually struggle with speed, not with memory or slow startup. The execution speed wasn't measured in the benchmark, for good reason: normally, JIT-compiled Java performs adaptive optimizations, so execution speed can improve to better suit the program's real execution profile.

You might ask: why not run the program on real-life input, save the execution profile, and optimize the GraalVM output based on it? Indeed, that is possible. But Oracle keeps that part of GraalVM proprietary (in fact, selling such improvements is probably the most important part of their GraalVM-related business; Oracle is not one of those companies that would make open-source software just to be nice).
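For reference, the workflow being described looks roughly like this with GraalVM Enterprise's profile-guided optimization (a sketch; the `--pgo` flags are EE-only, and `app.jar` and the input file are placeholders):

```shell
# Build an instrumented image and run it on representative input;
# this writes a profile file (default.iprof by default)
native-image --pgo-instrument -jar app.jar
./app < representative-input.txt

# Rebuild, feeding the collected profile back into the compiler
native-image --pgo=default.iprof -jar app.jar
```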

Of course, the benefit of such optimizations is minimal for the containerized server applications GraalVM is mainly used for, because they are not CPU-bound, so for normal users it doesn't really matter. But I think an emulator is one of the things where it would actually matter quite a lot.


What makes this better than GCJ? Also, if any of these attempts are promising, why are we still using the JVM? Writing a native Java compiler seems like a 20+-year-old pipe dream.


I agree. If Java’s already as fast as native, as people keep claiming, why all the effort to compile to native code? This feels like an old solution to an old problem.


There is no single "native"; you have different compilers for different languages. "Native" (i.e. compiled) PHP is still slow, while some interpreted languages are insanely fast. In theory, JITed languages could be faster than AOT-compiled languages; in practice, that's rarely the case (https://wiki.c2.com/?SufficientlySmartCompiler).

HotSpot is a just-in-time compiler, i.e. code is still compiled to native; just not the whole codebase at once and prior to startup, but based on usage statistics gathered during interpreted execution (hot code paths). You can leverage similar statistics in AOT compilation with profile-guided optimization.

As it stands, native-image output starts a lot quicker, needs far less disk space, and uses a lot less RAM, which is important for CLI and containerized apps. But it's slightly slower than HotSpot, doesn't support dynamic code loading, and of course you'll need to recompile your app for different platforms and architectures.


Small size, since it does not need a JRE, and fast launch time: it's good for short-lived apps, something like pay-as-you-go cloud computing...



Hmm, I wonder what changes are in the Mandrel fork of GraalVM.


How does this compare to Jazelle?



