I'm familiar with Biscuit and Midori. The Biscuit paper estimates a 10% performance hit in syscall-intensive benchmarks due to using Golang vs. C.
That being said, ARC may fare much better than a tracing GC, particularly in latency variance, throughput variance, and mean heap usage.
Though I think we're best off moving off of hypervisors and onto 4th-generation microkernels with isolation similar to AIX LPARs / Solaris Containers. After all, a hypervisor is essentially a microkernel that's forced to use hardware traps/faults plus hardware emulation as its syscall interface (plus upcalls for performance, which use a calling convention much closer to traditional syscalls). There are stability, security, and performance advantages to throwing out all of that hardware emulation code, moving everything to upcalls, and getting rid of the second (guest OS) kernel running between the application and the hardware. If you push most of the system functionality out of kernel space, then rewriting some of the less performance-critical components in a language with ARC or tracing GC starts to make more sense.
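To make the upcall point concrete, here's a rough Swift sketch (HostUpcalls, emulateRegisterWrite, and the other names are invented for illustration, not taken from any real hypervisor): instead of the guest poking emulated device registers and relying on the hypervisor to trap, decode, and emulate the access, the guest just calls host-provided functions directly.

    // Hypothetical upcall table: the "guest" component calls host services
    // directly, as ordinary function calls.
    struct HostUpcalls {
        var writeBlock: (_ lba: Int, _ data: [UInt8]) -> Void
        var sendPacket: (_ frame: [UInt8]) -> Void
    }

    // Trap-and-emulate style: every access to a fake device register has to be
    // intercepted and decoded by the hypervisor's device-emulation code.
    func emulateRegisterWrite(offset: Int, value: UInt32) {
        switch offset {
        case 0x00: print("emulated disk: DMA address = \(value)")
        case 0x04: print("emulated disk: start transfer")
        default:   print("emulated disk: unknown register \(offset)")
        }
    }

    // Upcall style: no trap, no register decoding, just a call.
    let host = HostUpcalls(
        writeBlock: { lba, data in print("host wrote \(data.count) bytes at LBA \(lba)") },
        sendPacket: { frame in print("host sent \(frame.count)-byte frame") }
    )

    emulateRegisterWrite(offset: 0x04, value: 1)   // trap/emulation path
    host.writeBlock(42, [0x00, 0x01, 0x02])        // upcall path

The hardware-emulation path has to carry a decoder and a model of a fake device; the upcall path is just the calling convention plus whatever validation the host wants to do.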
ARC is slower than tracing GC, as all the benchmarks where Swift was used show.
Biscuit was a research project whose goal was to get the thesis done; once the thesis was done, no more effort was spent on it.
Bing was powered by Midori in Asian markets for part of its life, and Joe Duffy has stated in his RustConf keynote that the Windows dev team was a reason why Midori faced so much internal resistance, even when they were proven wrong.
Before Google was willing to put money into JavaScript, many would assert that JavaScript was worthless for anything besides form validation and DHTML.
Mainframes to this day still make use of their own systems programming languages, which are way safer than C, and you don't see people crying that their kernels are too slow.
In fact, Unisys uses this to sell ClearPath MCP to customers that value security above everything else, Fort Knox style.
Very interesting! I wasn't aware Midori had been widely deployed in production! My understanding is that Midori ran the kernel and all programs in ring 0 and relied on the class loader to enforce type safety and on the managed runtime to enforce the other security and stability constraints normally enforced by hardware, so syscalls were just ordinary method calls and there were no context switches.
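If that understanding is right, the model looks roughly like this hypothetical Swift sketch (KernelServices and InProcessKernel are made-up names, not Midori APIs): the "kernel" is just a type-safe object handed to the program, so a "syscall" is an ordinary method call, with isolation coming from the language rather than from rings and page tables.

    // The "kernel" interface is a type-safe object, so a "syscall" compiles
    // down to an ordinary method call: no ring transition, no trap, no context
    // switch. Isolation relies on verified type safety, since safe code cannot
    // forge a pointer into another process's heap.
    protocol KernelServices {
        func write(fd: Int, bytes: [UInt8]) -> Int
        func exit(code: Int)
    }

    final class InProcessKernel: KernelServices {
        func write(fd: Int, bytes: [UInt8]) -> Int {
            // Stand-in for the real I/O path; here we just dump to stdout.
            print(String(decoding: bytes, as: UTF8.self), terminator: "")
            return bytes.count
        }
        func exit(code: Int) {
            // Would tear down the software-isolated process's object graph.
        }
    }

    // A "user program" receives the kernel object at its entry point; calling
    // into it costs about as much as any other virtual method call.
    func userMain(kernel: KernelServices) {
        _ = kernel.write(fd: 1, bytes: Array("hello from ring 0\n".utf8))
    }

    userMain(kernel: InProcessKernel())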
Burroughs MCP was written in essentially an extended Algol 60 dialect. Algol 60 only had very limited heap allocation for dynamic arrays, no GC, and I haven't read any indications that Burroughs added GC to their extended dialects.
Multics was written in a PL/I dialect, without tracing GC. Likewise, IBM OS/360 and descendants are written in the PL/S dialect of PL/I, and I haven't seen any indication it has tracing GC.
With tracing GC, you have a trade-off between the peak amount of unclaimed garbage and the GC overhead. ARC should have lower variance in both latency and heap usage, which I presume is the reason Apple moved the whole Objective-C and Swift ecosystem to ARC and deprecated the Objective-C tracing GC.
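A minimal Swift sketch of that determinism argument (the Buffer type and handleRequest are illustrative, not from any real codebase): with ARC the deinit runs the moment the last strong reference goes away, so peak heap usage doesn't depend on when a collector happens to run.

    // deinit fires at a statically known point: when the refcount hits zero.
    // There is no floating garbage waiting for the next collection cycle,
    // which is what keeps heap usage and pause behavior flat.
    final class Buffer {
        let bytes: [UInt8]
        init(size: Int) {
            bytes = [UInt8](repeating: 0, count: size)
            print("allocated \(size) bytes")
        }
        deinit {
            // Runs deterministically, not whenever a collector decides to trace.
            print("freed \(bytes.count) bytes")
        }
    }

    func handleRequest() {
        let scratch = Buffer(size: 1_000_000)
        _ = scratch.bytes.count
    }   // <- scratch's last reference dies here; memory is returned immediately.

    for _ in 0..<3 {
        handleRequest()   // peak heap stays around 1 MB instead of accumulating garbage
    }

The flip side, of course, is that every retain/release is paid inline on the fast path, which is exactly the throughput cost the tracing-GC camp points to.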
I used to be a True Believer(tm) in the JVM and other managed runtimes. I was one of 5 developers of the most popular Java desktop application in the mid 2000s. Then I moved to Google and started developing web search infrastructure. I was at Google when V8 was created, and I put a lot of effort into running all of the JavaScript that the indexing system found, across the entire visible web. For things at massive scale, spending millions of dollars per year just in electricity bills, it's extremely tough to beat highly tuned C++. Yes, it's a lot of effort. Yes, I hope safer languages like Rust replace C++ and static analysis tools continue to improve.
I still kind of want to be a managed runtime true believer again, but it's tough to go back after believing for so many years that managed runtimes were going to match expertly hand-optimized C++ in latency- and throughput-critical applications "any day now".
As for the mainframe languages: yes, they don't have a GC, but they have the right defaults regarding bounds checking, implicit conversions, and explicit unsafe code.
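Swift happens to share those defaults, so it makes a convenient illustration of what "right defaults" means in practice (the snippet itself is just illustrative):

    let values: [Int8] = [1, 2, 3]

    // Bounds checking is on by default: values[3] traps at runtime instead of
    // silently reading adjacent memory.
    // let oops = values[3]          // fatal error: Index out of range

    // No implicit conversions: mixing integer widths requires an explicit cast.
    let total: Int = values.reduce(0) { $0 + Int($1) }

    // Unsafe access is opt-in and lexically scoped, not the default.
    let sum = values.withUnsafeBufferPointer { buf -> Int8 in
        var s: Int8 = 0
        for i in 0..<buf.count { s &+= buf[i] }   // explicit wrapping add over raw storage
        return s
    }
    print(total, sum)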
Regarding managed runtimes versus C++: languages like C#, D, Modula-3, and Swift have the features to write C++-like code when needed; the main problem is that many developers don't bother to learn the language features available to them.
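As a rough illustration of what "C++-like code when needed" can look like in one of those languages (Swift here, with a made-up Sample type and summarize function): value types, pre-reserved storage, and an explicitly scoped unsafe hot loop.

    // Value types, pre-sized storage, and scoped unsafe access go a long way
    // toward C++-style control without leaving the safe-by-default language.
    struct Sample {           // plain value type: no heap allocation, no refcounting
        var timestamp: Double
        var reading: Double
    }

    func summarize(count: Int) -> Double {
        var samples = ContiguousArray<Sample>()
        samples.reserveCapacity(count)   // one allocation up front, like vector::reserve
        for i in 0..<count {
            samples.append(Sample(timestamp: Double(i), reading: Double(i) * 0.5))
        }

        // Hot loop over raw storage, explicitly scoped, when profiling says it matters.
        return samples.withUnsafeBufferPointer { buf -> Double in
            var acc = 0.0
            for s in buf { acc += s.reading }
            return acc
        }
    }

    print(summarize(count: 1_000))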
At Microsoft, stories about hardcore C++ devs having to be proven wrong with C# running in front of them are relatively well known; Joe Duffy has shared a couple of them.
His experience in Singularity and Midori is also what made him bet on Go for Pulumi.