
This paper was influential with regard to this idea: https://www.usenix.org/conference/hotos-ix/crash-only-softwa...

I don't think it's that unusual, but obviously there are tradeoffs.


Check out Kleinrock's Queueing Systems.


I've worked a bunch in both, and Go really doesn't compare; you can't judge based on watching a talk. I find it much, much easier to write safe, robust code in Erlang, and operationally, you have so much more power to inspect the state of the BEAM. Another key BEAM thing is the process heap, which amounts to region allocation when used carefully. The reduction mechanism used for scheduling processes, though crude, is also still much lighter-weight than Go's approach.

On the other hand, Erlang is a weird language and a weird environment that takes a lot of time to really understand, so I can see why it's never going to "win", and Go's tooling and runtime will continue to improve. YMMV.


> The reduction mechanism used for scheduling processes, though crude, is also still much lighter-weight than Go's approach.

I'm not sure how it could be lighter weight than what Go has implemented natively at the machine level, instead of in what amounts to a bytecode interpreter. If you have any links, I would be very interested to read more.

> I've worked a bunch in both, and Go really doesn't compare

I respect your experience, so it is probably a failure of imagination on my part that I simply can't imagine how the BEAM tools could be that much better. I haven't had a problem with a rogue task (or really anything else) in production with Go where I was unable to figure out what was going on extremely quickly. The tooling has been amazing for me, unlike basically every other language I've worked seriously with. The YouTube talk certainly didn't do the BEAM tools justice if they are that much better.

With Go, I actually get really nice GUIs to look around the running Go process and analyze the situation, instead of just an interactive CLI session where I have to craft my own commands to find the top tasks and such. The Go pprof tools also work as an interactive shell (but for analysis, not for remote arbitrary code execution), but I would rather just have the flamegraph in front of me 99% of the time. I fully admit that I've never had a chance to use Erlang/Elixir/BEAM in any meaningful way, but I have tried to understand what they offer, and I haven't seen the compelling magic that some people talk about.

Now, if someone is running a Go service without the HTTP pprof server running on some port that they can access, then yes... it wouldn't even come close to comparing to what BEAM offers when you have the option to connect to a running BEAM instance.
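For reference, enabling that is only a couple of lines; here's a minimal sketch (the listen address is just an example, and in a real deployment you'd restrict access to it):

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
    )

    func main() {
        // Serve the profiling endpoints on a side port. Normally you'd bind this
        // to localhost or otherwise restrict who can reach it.
        go func() {
            log.Println(http.ListenAndServe("localhost:6060", nil))
        }()

        // ... the actual service would run here ...
        select {}
    }

Then something like "go tool pprof -http=:8081 http://localhost:6060/debug/pprof/profile" pulls a CPU profile (30 seconds by default) and opens the web UI with the flamegraph view I mentioned.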


And, note, I'm not saying Go is bad here, just that there is a lot that is underrated and misunderstood about Erlang.

On the Erlang side, check out the BEAM Book chapter on scheduling: https://blog.stenmans.org/theBeamBook/#CH-Scheduling and the core scheduling loop in the BEAM: https://github.com/erlang/otp/blob/master/erts/emulator/beam...

On the Go side, check out https://github.com/golang/go/tree/master/src/runtime/proc.go and the asm* implementations.

It's been a little while since I looked at it, but I recall that much less state has to be saved in an Erlang process switch in the usual case; as I remember it, the switch can be done in a handful of instructions much of the time. Go, of course, has to save a bunch of registers, much as you'd have to do in any native context switch.

Edited to add: it can be useful to look at that part of the BEAM disassembled in objdump or gdb, to appreciate it, since it's hard to tell how much work is happening with all those macros.


I only started with Elixir in recent months and it's the first language that has ever made me comfortable writing concurrent code. I didn't spend a lot of time with Go, but the idea of just calling a function that is now running in parallel was always disconcerting to me. Of course, I could have spent more time with it and gotten more comfortable learning the ins and outs, but Erlang/Elixir's addressable processes, each running with its own stack/heap/GC and passing messages between one another, clicked very quickly with me. It's such an incredibly simple idea. For being a "weird" language, I think there is a lot of power in the simplicity of its design, especially around learning. You just have to get over the weird syntax (which is a hot topic).

For transparency, I've never written any production code in either Elixir or Go.
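To make the contrast concrete, here's a minimal Go sketch (purely illustrative, the names are made up) of the "just call a function and it's now running in parallel" style, with a channel standing in, very loosely, for the mailbox-style message passing described above:

    package main

    import "fmt"

    // worker reads requests from one channel and sends replies on another,
    // a very rough analogue of a process receiving from its mailbox.
    func worker(requests <-chan int, replies chan<- int) {
        for n := range requests {
            replies <- n * n
        }
    }

    func main() {
        requests := make(chan int)
        replies := make(chan int)

        // "Just call a function and it's now running in parallel":
        go worker(requests, replies)

        requests <- 7
        fmt.Println(<-replies) // prints 49
        close(requests)
    }

The goroutine itself has no identity you can address or inspect from the outside, which is a big part of the difference from a BEAM process with a pid.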


The funny thing is that Erlang is the Ericsson language. Go is the Google language.

Ericsson never had the Google "street cred" or K& not R. Ericsson is boring phone interchanges and boring radio. Google was hipster latte with software freedom before it turned out to be worse than MS, IBM and Oracle combined.

One might wonder what could have been without all the smoke and mirrors.


No, it was the need for this on Windows that made Citrix all its money.


Hobbyists move first. Speaking of nostalgia, I'm sure you remember when people would mention they were moving to Linux and were laughed at — Windows, Solaris, AIX, HP-UX, and so on were the serious server OSes.


I think your statement goes too far. In my experience, it's very difficult to do lightweight threading on the JVM anywhere near the performance of the BEAM, as well as region allocation in the style used by high-performance Erlang programs. I think a comparison is unfair. The BEAM definitely sucks for compute-oriented tasks, but it's an immense amount of work to make a JVM-based program compete in the areas where it's good.


Yeah, but lightweight threading is a built-in primitive in the BEAM, so if you only measure scheduling overhead it is very low; once you do any work inside those processes, though, it runs like code on any other low-performance runtime. The isolated heaps are also OK, but modern JVM GCs would still give you lower latency even with a shared heap. The BEAM, CPython, MRI Ruby and other low-performance runtimes get the job done for whatever they're used for, and if all you're doing is IO it may not be too bad, but let's not mention those runtimes anywhere near good performance. The BEAM is certainly in the bottom half or third of the TechEmpower benchmarks.


Yeah, each individual process has a low resource allocation. There could be millions running on the same box, so that makes sense.

Luckily, computational parallelization is not a big challenge with newer libraries such as https://github.com/plataformatec/flow

However, immutability might still become a challenge in terms of resources/performance. Rust is often used to patch that via Erlang NIFs.


Don't forget Fibers - coming to a JVM near you soon!


Hard for him to forget that -- take a look at who you're responding to.


https://www.sqlite.org/malloc.html is a welcome contrast to the typical behavior you're describing, but I agree that such measures are rare. (And SQLite is not an application itself, though it provides the mechanisms for applications to follow its lead.)



Untether.ai | multiple developers | Toronto & Montreal, Canada | REMOTE | Full-Time | https://untether.ai

Our team is developing brand new hardware to do high-performance neural network and deep learning inference. We're remote-first, senior people trying to raise the bar on high-performance and low-energy AI hardware.

We have interesting problems in the domains of optimizing compilers, graph algorithms, computer architecture, and machine learning. Candidates with experience working with performance-sensitive systems preferred.

Please reach out directly at careers@untether.ai.


