What makes Elixir/Erlang a good fit for the RPi, compared to, say, Rust or Lua? Serious question, no offense.


There are a few reasons to use Nerves:

1. It has a good system for hot-loading code onto the Pi and immediately running it, which makes deploys more efficient than with most other compiled languages.

2. Erlang uses lightweight processes, which are very resilient and a good fit for projects with constant streams of data and lots of connections, both common in IoT. Crashed jobs basically restart themselves in a robust fashion (a minimal sketch follows at the end of this comment).

3. Erlang runs on BEAM, a well-established VM with years of development behind it, making it a robust core for your project compared to some other operating systems you might load onto your RPi. I don't think this is the biggest reason, but it is one.

The hot reloading and concurrency model are my favorites, and Elixir is just fun to build in as well. I'm already using it for the web, so bringing it into IoT is natural for a mixed web/IoT project.
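Point 2 is easiest to see in code. A minimal sketch of a supervision tree (the module names and the sensor-polling worker are made up for illustration):

    # A minimal supervision tree: if the worker crashes, the supervisor
    # restarts it automatically. All names here are hypothetical.
    defmodule MyApp.Application do
      use Application

      def start(_type, _args) do
        children = [
          # Hypothetical worker that polls a sensor in a loop
          MyApp.SensorReader
        ]

        # :one_for_one restarts only the child that crashed
        Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
      end
    end

    defmodule MyApp.SensorReader do
      use GenServer

      def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)

      @impl true
      def init(_arg) do
        schedule_poll()
        {:ok, %{}}
      end

      @impl true
      def handle_info(:poll, state) do
        # The actual sensor read would go here; if it raised, this process
        # would die and the supervisor would start a fresh one.
        schedule_poll()
        {:noreply, state}
      end

      defp schedule_poll, do: Process.send_after(self(), :poll, 1_000)
    end

If the worker dies mid-read, the supervisor replaces it within microseconds and the rest of the system never notices.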


Since the parent asked for a comparison with Lua and the Pi: I'm running a digital signage service (https://info-beamer.com/hosted) that uses a custom minimal Linux (~35MB) on the Pi and a program written in C that is scriptable in Lua. One of the key features is hot reloading of code (and visual assets). Thanks to the flexibility and robustness of Lua, that works incredibly well: you can even push new code to the central website using git; it instantly deploys to connected devices and they reload their code. Done correctly, you can update the content and logic of your displays without any interruption. The C code has been pretty robust for a few years now and never crashes or leaks memory.


That's really fun! Back in another lifetime (2006) I co-founded a kiosk company. I wrote a platform with a C core and Lua on top and used it to manage some pretty big kiosk networks, as well as handle all of the "server side" logic of our apps: pathfinding, hardware integration/card readers, printing, etc. I definitely miss working with Lua and C. We used Flash for the UI on most of those machines and our shit looked _sooo_ much better than anything else on the market at the time. Funny enough, I used the bindings for SVN to programmatically handle pushing out updates; I imagine git would have been much more pleasant.

Anyway, thanks for the trip down memory lane.


This is great. How is the business side of it?


Looking good. Once customers realize that our platform is far more powerful than just throwing a browser at everything, they get really excited. You can fully automate everything with the API, build perfectly smooth 60fps content, and even play synchronized content across any number of screens. And a lot more. Coupled with the low cost and reliability of the Pi, it really looks great. Still, getting the word out is difficult, as the Pi is still considered a toy by many.


Just saw the demos, and they look great. Back when I ran this sort of thing I moved from Pis to Android boxes for SVG support in our own in-house system (https://www.flickr.com/photos/ruicarmo/albums/72157643937892...), but I often wished I'd had a simpler, more efficient setup.

How good are the Lua bindings for OpenGL ES? Are they as nice as Löve? I tried https://www.mztn.org/rpi/rpi_ljes.html once, but it was a bit fiddly.


info-beamer pi (the software that runs on the Pi and drives the output) doesn't expose OpenGL at a low level. Instead it provides a higher-level API for drawing and moving images, fonts and videos around (see https://info-beamer.com/doc/info-beamer#referencemanual for the complete API). So it's indeed similar to Löve; as a programmer you really don't have to know OpenGL to get anything done. Right now info-beamer pi doesn't have SVG support, but images and even videos get you pretty far for most effects I can think of.


Just a couple of thoughts/questions if you don't mind:

- How's the supply-chain side of things? For a while, vendors would only sell one Pi at a time, if they had stock at all. Have you had any problems with that?

- Why would your customers know or care that there's a Pi inside? It seems like an implementation detail that only a select few would ever think to ask about, unless you're telling them. "COTS ARM Cortex A53" would likely be enough to make most people's eyes glaze over before they dug deep enough to discover that there's a "hobbyist" board inside.

Edit: I looked at your website and get it :). You're selling hosting, not pre-packaged devices. That's really interesting!


Supply of the "normal" Pi has never been a problem AFAIK; only the Pi Zero has these problems. But as you noted, we don't sell prepackaged hardware, only the software and service. Shipping/warranty/return handling seems pretty complicated and not worth the hassle (at least for now). Instead we made the installation as simple as possible: users only have to unzip a single ZIP file onto an empty SD card and put that in their Pi. So far even the most non-technical users have managed to do that.


That's awesome! Congrats! That's a really cool niche, and it sounds like you've executed on it in a pretty interesting way. Although I haven't done it recently, I did a fair bit of embedding Lua in C during my M.Sc. and found it to be a really smooth way to add a ton of power to C code without needing to do a bunch of work.


That sounds like such a perfect fit for Lua. We used it to the same effect in gamedev.

Coroutines are also amazing for sequenced AI routines.


Coroutines are also useful for some of the visual code I wrote: you can have functions that run through an animation, yielding for every frame. Or you can handle loading, displaying and teardown of content in one function. Pretty handy.
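(Lua coroutines don't map one-to-one onto Elixir, but for the Nerves folks reading along, a rough analogue of "yield once per frame" is a process that blocks waiting for a tick. A sketch with invented names:)

    # A process that advances one animation step per :tick message.
    defmodule Anim do
      def start(frames), do: spawn(fn -> loop(frames) end)

      defp loop([]), do: :done

      defp loop([frame | rest]) do
        receive do
          :tick ->
            IO.inspect(frame, label: "drawing")  # stand-in for real rendering
            loop(rest)
        end
      end
    end

    # Driver: one tick per frame, roughly 60fps
    pid = Anim.start([:fade_in, :hold, :fade_out])
    for _ <- 1..3 do
      send(pid, :tick)
      Process.sleep(16)
    end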


> 3. Erlang runs on BEAM, a well-established VM with years of development behind it, making it a robust core for your project compared to some other operating systems you might load onto your RPi.

Not sure. Of course the BEAM VM is robust and good at the things it's good at, but I don't think it's much more "robust" than, say, Linux, which seems pretty good, stable and robust these days. I'm not talking about the software that runs on Linux, though coreutils etc. are pretty stable at this point (understatement). Linux is a fine platform to run your project on, and I claim it's no less robust than BEAM is.

Nothing against your other points though.


I think you might be missing the point about BEAM. It's not that BEAM itself is more robust than Linux. BEAM and OTP enable highly fault-tolerant services through the use of supervision trees, among other things, which restart failed components from a last-known-good state within microseconds. While Linux itself is certainly stable, it doesn't provide any analogous facilities for building robust services.


Linux is robust and fault tolerant by design. This comes at the cost of higher performance overhead, of course.

Besides, why are you comparing BEAM to a full-fledged OS kernel?


Linux is not a high-integrity-class kernel.

In fact, in high-integrity certified deployments, an actually robust and fault-tolerant kernel runs the Linux kernel as just another user process.

Kernels like INTEGRITY are robust and fault tolerant by design.

https://www.ghs.com/products/rtos/integrity.html


My understanding is that a typical Linux OS image is a few gigs in size and uses a lot of memory, while Nerves compiles to about 20 megabytes and is still resilient and easy to update.


Actually, a compiled Linux kernel is on the order of ~5 MB. A minimal root filesystem adds another ~50 MB to that. It starts to get bloated once you add kernel modules, drivers, etc.


That's 5 MB compressed, and it also doesn't account for the fact that you'd need an actual userspace of some description.


When I start writing applications in kernel space, I'll take "Linux" as a more serious contender for an application platform. You're the one making the comparison, so I'm pointing out that Linux provides very little in the way of application-level tools.

Edit: sorry, I see you're not the GP commenter.


> Linux is robust and fault tolerant by design.

How is Linux fault tolerant? Say a kernel driver starts overwriting kernel memory: how do the rest of the kernel subsystems isolate that fault and keep going without crashing?


To clarify, Linux is fault tolerant where it needs to be: keeping userspace faults contained.

And what happens when a core module that is part of BEAM starts overwriting critical BEAM VM data structures?

A more apt comparison would be software running on top of BEAM vs. a userspace process running on top of Linux.


> To clarify, Linux is fault tolerant where it needs to be: keeping userspace faults contained.

Agreed.

> And what happens when a core module that is part of BEAM starts overwriting critical BEAM VM data structures?

Segfaults and other terrible things.

> A more apt comparison would be software running on top of BEAM vs. a userspace process running on top of Linux.

That's a better analogy, of course. I've heard the BEAM VM described as an "OS for application code". Nobody would want to put their latest-and-greatest crown-jewel production code on a Windows 3.1 platform, where one segfault in the calculator process takes down the word processor, but that is essentially what happens when using shared memory across concurrency units (threads, goroutines, coroutines, green threads, etc.).


> but I don't think it's much more "robust" than, say, Linux

It is much more robust than Linux. If the Linux kernel segfaults, it takes everything with it when it panics. If one of a million Erlang processes, each with its own isolated heap, crashes, it can probably safely restart (along with perhaps a few others it is linked to).
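For illustration, this is easy to try in an IEx session (an unlinked process crashing leaves everything else untouched):

    # Each process has its own heap; a crash is contained to that process.
    pid = spawn(fn -> raise "boom" end)  # logs a crash report, nothing else dies
    Process.alive?(pid)                  #=> false
    Process.alive?(self())               #=> true, the shell is unaffected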


Bad analogy, but another commenter has already pointed this out. :)

But I do suppose that Linux has a higher chance of a kernel panic than BEAM has of an internal segmentation fault, simply because Linux has more code and thus a larger surface area for bugs.


Right, right. Agreed. I think the right comparison is running on Linux vs. running on the BEAM VM. Linux became popular, among other reasons, because it allowed strongly isolated processes: they can crash or stop and that's fine; other users' processes don't even notice. The Erlang VM solved the same problem for the application environment: millions of lightweight processes that run, individually crash and restart, because they are isolated.

To extend the analogy: running with threads that share memory between themselves is a bit like running on a Windows 3.1 machine, where if the calculator breaks it can crash and overwrite the memory used by the word processor.


"What makes elixir/erlang good fit for rpi?"

Honestly, I'd say it isn't a good fit for any particular reason. jbhatab isn't wrong, but there are other environments that will have other advantages over Elixir/Erlang/BEAM, depending on what you want to do.

What is the case here is that the Raspberry Pi is basically a little computer and can run pretty much anything, Erlang included. Erlang was built in a world where a Raspberry Pi's specs would have been mindblowing, so it works just fine, just like anything else first built in the 1990s.


IMO the BEAM runtime is ideal for it.

If I were writing code to drop onto a small device, I'd want the ability to run a lot of things efficiently at the same time on a small processor, to know that a single heavy unit of work wouldn't hurt the responsiveness of everything else, and to know that all of those tiny pieces were built to basically never go down.

Personally speaking, it would be difficult to imagine using anything else for that type of work.


The BEAM's fully preemptive processes make it great for programming hardware tasks and higher-level tasks in the same environment, compared to languages like Lua, Node, Python, or even Go, which all use various forms of cooperative multitasking or async IO. In those systems you have to be careful not to block other critical tasks. That's a pain, but not the end of the world.
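A quick way to see the difference: on BEAM, a process stuck in a hot loop gets preempted on reduction counts, so everything else stays responsive. A tiny sketch (assuming you run it in IEx):

    # A process that spins forever; under cooperative scheduling this
    # could starve everything else, but BEAM preempts it.
    defmodule Spin do
      def forever, do: forever()
    end

    spawn(fn -> Spin.forever() end)
    IO.puts("shell still responsive")  # prints immediately despite the busy loop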

However, for me the best feature is that I can remotely log into a running BEAM VM and interactively explore the live system [1]. Since Erlang has been used in "embedded"-type systems for a long time, there are a lot of useful libraries. For example, a tiny bit of wrapper code let me set up a secure remote Elixir REPL using standard SSH keys for our embedded devices [2]. You can also run entire "applications" much as you would a system service, but communicate with them natively from Elixir/Erlang. The support for running sub-processes as ports is really nice too.

In general it's really more like a live operating system that you can program and investigate. The main thing lacking is a capability system for running non-privileged code in a sandboxed manner.

1: https://tkowal.wordpress.com/2016/04/23/observer-in-erlangel... 2: https://github.com/elcritch/iex_ssh_shell
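For anyone who hasn't tried the remote-inspection workflow, it looks roughly like this (the node names, address and cookie are made up):

    # Start a local named node with the device's cookie:
    #   $ iex --name me@192.168.1.10 --cookie secret
    Node.connect(:"nerves@192.168.1.50")  # hypothetical device node
    :observer.start()                     # then pick the remote node in observer's Nodes menu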


Not an Erlang user, but I'm tickled by the idea of having a reliable cluster made out of unreliable little computers. Erlang has a great reputation for process supervision.


It would be fun to build a redundant hardware control system that used three cheap SBCs to do a form of triple modular redundancy [1]. It'd be straightforward to prototype with Nerves/Elixir!

1: https://en.m.wikipedia.org/wiki/Triple_modular_redundancy
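The voting part is almost a one-liner in Elixir; a sketch (the interesting work would be distributing the three readings across the SBCs):

    # Two-out-of-three majority vote over redundant readings.
    defmodule TMR do
      def vote(a, a, _), do: a                 # first two agree
      def vote(a, _, a), do: a                 # first and third agree
      def vote(_, b, b), do: b                 # last two agree
      def vote(_, _, _), do: {:error, :no_majority}
    end

    TMR.vote(42, 42, 41)  #=> 42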


The BEAM VM's preemptive multitasking may make more efficient use of the RPi's limited CPU than the cooperative multitasking of, say, NodeJS.


Broadly speaking, Node is faster than BEAM: https://benchmarksgame-team.pages.debian.net/benchmarksgame/.... Unless you're going pretty crazy with concurrency, BEAM isn't going to catch up to Node.

BEAM is a beast when it's managing concurrency in its own runtime, but the bytecode-interpreted language it implements is very "meh" when it comes to raw performance.

(Since people seem to take these sorts of assessments personally, full disclosure and cards on the table: I massively prefer the BEAM world to the Node world. Nevertheless, the facts are what they are. The interpreter for BEAM is just not where you go for performance.)


These are mostly computational tasks... Why would you use BEAM for these?


So, pure curiosity: Node is amazingly excellent at IO; it's like the one thing it does really well. It's traditionally pretty bad at computationally heavy tasks, but it still beats BEAM there.

What do you use BEAM for? Is rock solid supervision really its biggest boon?

And if that's the case, how does it compete with the proliferation of easy to use tech like Kubernetes that, more or less, solve the supervision problem in a simpler, easier to scale, and more abstractable way?

There are parts of Elixir I really quite enjoy, but I've long felt that BEAM is holding it back as much as it benefits it, similar to how the JVM is both a boon and a "modern curse" to Java. The 90s dream of having these VMs executing platform independent bytecode seems dated in the face of infinitely customizable VMs on cloud hardware. And process supervision at the application level also seems dated in the face of modern devops HTTP-level liveness and containerization.


Is Kubernetes actually easy to use? I'm also a little bit scared of the pace at which its development is happening. Maybe I'm old, but I'm worried that it will end up in a state like JavaScript, where I can't make heads or tails of things like promises and classes, async, etc.


Seriously. Elixir/Erlang are VERY clear about the VM (BEAM) not being particularly good at intensive computation. Use a port (or a NIF if you really need to).
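A port hands the heavy work to an external OS process, so the BEAM schedulers never block. A minimal sketch ("factor" is just an example external command):

    # Offload computation to an external program via a port.
    port = Port.open({:spawn, "factor 1234567890"}, [:binary, :exit_status])

    receive do
      {^port, {:data, result}} -> IO.puts("got: #{result}")
    after
      5_000 -> IO.puts("timed out")
    end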


Don't do intensive computation in a NIF! You'll screw up the scheduler. Use a port.


Since OTP 20, BEAM has had dirty schedulers for otherwise-unsafe NIFs; those are better suited for computational tasks.


Is it possible to assign dirty schedulers to isolated cores? Like, if you have 8 cores, can you say "6 of these are for the regular schedulers and 2 are for the dirty schedulers"? If your CPU-intensive dirty code runs on the same cores as your normally preempted code, I have to assume performance will still degrade a bit.


OK, I'm not an expert here, but you have several options available when determining your scheduler topology. If you really want to bind a scheduler to a particular core, what you'd look at is the +sbt option [1]. However, that's probably not a setting I'd tweak lightly; under most circumstances the OS in general, and BEAM in particular, will do a better job at it than you will.

There is definitely a cost to using the dirty scheduler, and if your NIFs don't need it, you're going to be paying the overhead for nothing. But obviously there's a plethora of uses for them when integrating with libraries that don't play nice with chunked work.

1: http://erlang.org/doc/man/erl.html#+sbt
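For reference, scheduler counts are also set via emulator flags, e.g. in a release's vm.args; the values below are purely illustrative, and I'm not sure pinning dirty schedulers to specific cores is possible at all:

    ## Illustrative vm.args: 6 normal schedulers, 2 dirty CPU
    ## schedulers, default binding of normal schedulers to cores.
    +S 6:6
    +SDcpu 2:2
    +sbt db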


Sure, but you might not care so much about throughput in a situation where latency is critical.


Erlang's bit-matching syntax and case statements over structs are incredibly elegant. I don't know about Rust or Lua; maybe they have this too.
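A small taste of the bit syntax, parsing a made-up packet header:

    # 4-bit version, 4-bit flags, 16-bit big-endian length, then payload.
    <<version::4, _flags::4, len::16-big, payload::binary>> =
      <<1::4, 0::4, 5::16, "hello">>

    case %{version: version, len: len} do
      %{version: 1} -> IO.puts("v1 packet, #{len}-byte payload: #{payload}")
      _ -> IO.puts("unsupported version")
    end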


In particular, the elixir_ale library is a very pleasant interface to various bits of hardware you might want to interact with at the GPIO/i2c/etc level.

https://github.com/fhunleth/elixir_ale
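For example, blinking an LED looks something like this (the pin number is made up, and the module names are as I remember them from the elixir_ale README):

    # Blink an LED wired to a GPIO pin via elixir_ale.
    alias ElixirALE.GPIO

    {:ok, pid} = GPIO.start_link(18, :output)  # hypothetical pin 18

    for _ <- 1..5 do
      GPIO.write(pid, 1)
      Process.sleep(500)
      GPIO.write(pid, 0)
      Process.sleep(500)
    end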


Rust doesn’t have the bit matching syntax, but we do have match generally.


It'd be interesting to explore something similar to the bitstring [1] library for OCaml. Where should one start if exploring syntax extensions for Rust? I haven't looked into Rust in a while now, so I'm not sure how far one can get with macros these days.

[1] https://github.com/xguerin/bitstring


“Macros 1.2” is what you want to look up; it’s in FCP right now!


MetaLua does!



