Even its latency is outclassed by ZGC and Shenandoah.
The GC monoculture is great for simplicity and the out-of-the-box experience, but there are very good reasons to a) want multiple different GCs tuned for different workload types and b) have competition so that the best designs can win rather than having to fight to be the only implementation.
Huh, Go also does not allocate ten times more memory the way Java does. So even if Go's GC were only 1/10th as performant as Java's (and it isn't), Go applications would still come out even.
Java's GC improvements are relentless because Java applications are relentless in memory allocation.
Having been on endless Java memory/perf prod calls, I can say Java GC and performance tuning is a cottage industry in itself. Meanwhile end users keep suffering, but at least Java devs get to tune the GC to their heart's content.
I’d love to see you elaborate on this. Anecdotally, my experience was the exact opposite. Is there some documentation making this point?
In the course of optimizing I came to know the various JVM GC algorithms (concurrent mark/sweep, parallel old, etc.) by their memory graphs alone. I never, ever had to debug similar latency issues in the Go stack.
Both of those are only picked for small heaps with few cores, probably within a container. Were these microservices?
G1 is the default for larger heaps and multiple cores, and ZGC and Shenandoah (the low-latency GCs) have to be turned on manually AFAIK.
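For reference, a minimal sketch of the flags involved (the heap size here is just an illustrative value):

    # G1 is the default collector on recent JDKs; this flag just makes it explicit
    java -XX:+UseG1GC -Xmx8g -jar app.jar

    # ZGC and Shenandoah have to be opted into
    java -XX:+UseZGC -Xmx8g -jar app.jar
    java -XX:+UseShenandoahGC -Xmx8g -jar app.jar

Note that Shenandoah isn't shipped in every JDK build, so availability depends on your vendor.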
OP said:
>Java doesn’t slow down the allocation rate; it tries to keep up with the churn.
This is incorrect. ZGC will block a thread when it cannot give that thread any memory because it can't collect and free memory at the pace needed; Google "allocation stall" for this. ZGC can achieve very low latencies akin to Go's GC; I don't know whether its throughput is higher or not. Multiple cores and a few GiB of heap are where ZGC shines.
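If you want to see this for yourself, here's a rough sketch (the class name, heap size, and iteration count are just illustrative): run an allocation-heavy loop under ZGC with a deliberately small heap and GC logging enabled, and if allocation outpaces collection the log should show the stalls.

    // AllocChurn.java -- illustrative only: churns garbage as fast as possible.
    // Run with something like:
    //   java -XX:+UseZGC -Xmx256m -Xlog:gc AllocChurn
    // If the collector can't keep up, ZGC stalls the allocating thread
    // instead of letting the allocation rate run away.
    public class AllocChurn {
        public static void main(String[] args) {
            byte[][] retained = new byte[64][]; // small live set so the heap isn't trivially empty
            for (int i = 0; i < 100_000; i++) {
                // each store makes the previously held 1 MiB array garbage
                retained[i % retained.length] = new byte[1024 * 1024];
            }
            System.out.println("done, live arrays: " + retained.length);
        }
    }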
No, Go’s GC is a toy compared to the JVM’s. It gets its somewhat lower latency by actually stopping the application threads when under high contention.
Java doesn’t slow down the allocation rate; it tries to keep up with the churn.