The BEAM VM's preemptive multitasking may be a more efficient use of the rpi's limited CPU than the cooperative multitasking of, say, Node.js.
BEAM is a beast at managing concurrency within its own runtime, but the bytecode-interpreted language it executes is very "meh" when it comes to raw performance.
(Since people seem to take these sorts of assessments personally, full disclosure and cards on the table: I massively prefer the BEAM world to the Node world. Nevertheless, the facts are what they are. The BEAM interpreter is just not where you go for performance.)
So, pure curiosity: Node is amazingly good at IO. It's like the one thing it does really well. It's traditionally pretty bad at computationally heavy tasks, but it still beats BEAM there.
What do you use BEAM for? Is rock-solid supervision really its biggest boon?
And if that's the case, how does it compete with the proliferation of easy-to-use tech like Kubernetes that, more or less, solves the supervision problem in a simpler, easier-to-scale, and more abstractable way?
There are parts of Elixir I really quite enjoy, but I've long felt that BEAM is holding it back as much as it benefits it, similar to how the JVM is both a boon and a "modern curse" to Java. The 90s dream of VMs executing platform-independent bytecode seems dated in the face of infinitely customizable VMs on cloud hardware. And process supervision at the application level also seems dated in the face of modern DevOps practice: HTTP-level liveness checks and containerization.
Is Kubernetes actually easy to use? I'm also a little bit scared by the pace at which its development is happening. Maybe I'm old, but I'm worried that it will end up in a state like JavaScript, where I can't make heads or tails of things like promises, classes, async, etc.
Seriously. Elixir/Erlang are VERY clear about the VM (BEAM) not being particularly good at intensive computation. Use a port (or a NIF if you really need to).
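To make that concrete: the port route is just the BEAM talking to an external OS process over stdin/stdout, so a crash over there can't take the VM down. Rough Elixir sketch; "./crunch" is a made-up executable for illustration:

    # Hand the heavy computation to an external OS process via a port.
    # "./crunch" is a hypothetical binary that reads a request from stdin
    # and writes its result to stdout.
    port = Port.open({:spawn_executable, "./crunch"}, [:binary, :exit_status])

    Port.command(port, "some big payload\n")

    receive do
      {^port, {:data, result}} -> IO.puts("result: #{result}")
      {^port, {:exit_status, status}} -> IO.puts("crunch exited with #{status}")
    end

A NIF runs inside the VM instead, which is faster but also means a crash in your native code takes the whole node down with it. Hence "if you really need to."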
Is it possible to assign dirty schedulers to isolated cores? Like if you have 8 cores, can you say, "6 of these are for the regular schedulers and 2 are for the dirty schedulers"? If your CPU-intensive dirty code is running on the same cores that your normally preempted code is running on, I have to assume that performance is still going to degrade a bit.
Ok, I'm not an expert here, but you have several options available when determining your scheduler topology. What you'd want to look at if you really want to make sure your dirty schedulers are bound to particular cores is the +sbt option[1]. However, that's probably not a setting I'd tweak lightly; the OS in general and BEAM in particular are going to do a better job at that than you are under most circumstances.
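For the curious, the relevant knobs are plain erl flags: +S for normal schedulers, +SDcpu for dirty CPU schedulers, +sbt for the bind type. You can sanity-check what the VM actually ended up with from iex; rough sketch, numbers made up for an 8-core box:

    # Started with something like: iex --erl "+S 6:6 +SDcpu 2:2 +sbt db"
    # (6 normal schedulers, 2 dirty CPU schedulers, default bind type)
    :erlang.system_info(:schedulers_online)            # => 6
    :erlang.system_info(:dirty_cpu_schedulers_online)  # => 2
    :erlang.system_info(:scheduler_bind_type)          # bind strategy in effect
    :erlang.system_info(:scheduler_bindings)           # how schedulers are bound (or :unbound)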
There is definitely a cost to using the dirty scheduler, and if your NIFs don't need it, you're going to be paying the overhead for nothing. But obviously there's a plethora of uses for them when integrating with libraries that don't play nice with chunked work.