
Author here! I will take some time to answer any questions.


Coming from Go, which introduced me to the concept of lightweight threads, I really miss goroutines and Go's concurrency model with channels/select/cancellation/scheduling.

However, targeting WASM seems like a pretty huge compromise compared to doing this natively in Rust. Conversely, it looks like what you're trying to do wouldn't currently be possible without the WASM/WASI stack. Curious what your thoughts are about not implementing it as a Rust runtime target? Did you investigate this, and if so, why did you rule it out?


Note that the Go model falls short of the Erlang/Lunatic one. You can't externally kill a goroutine at arbitrary points (you have to cancel it with a cancel chan / context and manual checking). You can't prioritize a goroutine over others. You can't serialize a goroutine's state and send it over the network. You can't fork a goroutine. You can't sandbox a goroutine. Etc.

Using WASM is a tradeoff, but WASM is very fast, surely faster than a BEAM process.


> You can't externally kill a goroutine at arbitrary points

Why not? Isn't that just an artifact of Go's implementation, or is there fundamentally something in CSP that prevents this?


It's essentially cultural. A fundamental assumption of the Go language, implementation, libraries, and community is that goroutines share memory. You don't write concurrent Go code assuming that a goroutine could arbitrarily disappear, so breaking this assumption will break things.

Lunatic is not like that:

> Each Lunatic process gets their own heap, stack and syscalls.

Similarly, Rust code that uses threads and/or async isn't going to work in Lunatic (or at least won't get the advantages of its concurrency model) without being rewritten using Lunatic's concurrency primitives. The concurrency model is more like JavaScript workers or Dart isolates, though hopefully lighter weight.

I'm guessing that the Rust language might work well because move semantics assumes that values that aren't pinned can be moved, and that's a better fit for transmitting messages between processes. But there will probably be a lot of crates that you can't use with Lunatic. If it became popular, it would be forking the community at a pretty low level. You'd have some crates that only work with Lunatic and others that use async or threads.


I don't see it as a compromise. Using Wasm allows for some really nice properties, like compiling once and running on any operating system and CPU architecture. Lunatic JIT-compiles this code to machine code either way, so the performance hit should not be too big.

When you say "implementing it as a Rust runtime target", I assume you mean targeting x86/64. In this case it would be impossible to provide fault tolerance. The sandboxing capability of Wasm is necessary to separate one processes memory from another.


Go's concurrency model seems like a compromise:

https://play.golang.org/p/DAoobz43yud

OR

https://play.golang.org/p/m0VntGBORBw

Also, I can't think of what scheduling capabilities Go's concurrency model has.


I'm not sure I understand your qualm with the second example. That's logical, described-on-the-box behavior.


Because the behavior is merely described, not enforced. It relies on everyone obeying the two design rules: 1) only the writing side should close the channel, and 2) only one side should write.

In this sense, it's as safe as mutexes: everything works as long as everyone (including any 3rd-party library) does exactly what they're supposed to. When compared to what modern and robust concurrent programming looks like (e.g. Erlang), it seems like a compromise to me.


You can have millions of coroutines in flight in Rust using async/await without issues.

This app explicitly wants to use threads, for some reason.


Recently started my journey with Rust, having worked with Elixir in production for about two years, but I keep an ear out for Rust and Elixir/Erlang development.

My question: are you familiar with the Lumen project? https://github.com/lumen/lumen#about. Both projects appear to have some overlap. Secondly, what, if any, will be the primary differentiators here? (I've not looked at either project in too much detail.)

You're probably familiar with one of its creators, Paul Schoenfelder (Bitwalker), as he's authored a few popular libraries in Elixir. The other core developer, Luke Imhoff, is making sure that WASM takes languages like Elixir/Erlang into account, being a member of one of the relevant organizations or committees, if I recall correctly.


Lunatic seems very enticing to me. I remembered it, but not its name, when I read the intro to this blog post. My question was going to be whether you'd tried it and what you thought of it, but a couple of paragraphs further into the blog post I learned that you are the author of Lunatic :P

But instead I would like to ask about the future of Lunatic. What is your vision for it? Is it a hobby project that you are doing for fun, or is it an endeavor that you intend to power big systems in production?

Furthermore, how will you fund the development of Lunatic? And where will other contributors come from? Will they only be people scratching their own itches, or will you hire people to work on it with a common roadmap?


I started working on Lunatic around a year ago in my free time. Until now I just wanted to see if it was even technically possible to build such a system. Now that I have the foundation in place and you can build some "demo" apps, like a TCP server, I'm starting to explore options for funding. From this point on I think the progress is going to be much faster, especially if others find it useful enough to contribute to it.


Here's a possible funding idea: See if you can get one of the big cloud providers or CDNs to notice the project, so they can hire you to build something that goes head to head with Cloudflare Workers.


We're hiring.... https://careers.microsoft.com/us/en/job/915502/Senior-Softwa...

And definitely looking at doing something like this next year.


Maybe you can apply at https://prototypefund.de/en/


I think the Lunatic architecture could enable shared memory regions between processes, and between processes and the host, in addition to the Erlang-like shared-nothing approach. That would be an advantage over (non-NIF) Erlang for some use cases. Do you plan something like that? Can you easily map a data structure in Rust? (I think it is doable between WASM processes, but I'm not sure about between WASM and the native host.)

Another question: what about sending code between processes? Like sending a fun between Erlang processes.

IMHO this architecture has the potential to go beyond BEAM, good work!


Thank you! This is something I want to support and there is some WIP code for it. Currently I'm only waiting on Wasmtime (the WASM runtime used by Lunatic) to get support for shared memory.

Regarding the question about sending code, this can also be implemented. Wasm allows you to share function indexes (pointers) and dynamically call them.


> working with async Rust is not as simple as writing regular Rust code

Working with async Rust is very simple if you are not writing futures yourself. Tokio and async-std provide APIs that mirror std, except that you have to add `.await` to the end of the operation and add the `async` keyword to your function. With proc-macros, `async main` is just a matter of adding a small annotation to the top of the function. Async functions in traits are blocked due to compiler limitations, but `async-trait` makes them easy. What part of async Rust is more complicated than synchronous Rust?
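For what it's worth, here's a minimal sketch of what that looks like with Tokio (assuming the `tokio` crate with its macros, fs and time features enabled); async-std reads almost identically:

    // std-like APIs, just with `async` on the function and `.await` on the calls.
    use tokio::fs;
    use tokio::time::{sleep, Duration};

    #[tokio::main] // proc-macro annotation instead of hand-written runtime setup
    async fn main() -> std::io::Result<()> {
        // Mirrors std::fs::read_to_string, plus `.await`:
        let contents = fs::read_to_string("Cargo.toml").await?;
        println!("read {} bytes", contents.len());

        // Mirrors std::thread::sleep, plus `.await`:
        sleep(Duration::from_millis(100)).await;
        Ok(())
    }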

> and it just doesn't provide you the same features as Elixir Processes do.

Can you explain this? How is tokio's `task` different from Elixir processes? From the tokio docs:

> Another name for this general pattern is green threads. If you are familiar with Go's goroutines, Kotlin's coroutines, or Erlang's processes, you can think of Tokio's tasks as something similar.


In general, thinking about concurrent code in terms of threads is easier than thinking in terms of async code (it's a lower-level abstraction).

> Can you explain this? How is tokio's task different from Elixir processes?

Tokio's tasks, Go's goroutines, and Kotlin's coroutines are cooperatively scheduled, i.e. an infinite loop can block other tasks from running.

Erlang and Lunatic have pre-emptive schedulers (similar to an OS scheduler) that schedule processes fairly by giving them time slices.
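A minimal sketch of that difference, assuming a single-threaded Tokio runtime (on the multi-threaded runtime the busy loop only stalls one worker, but the principle is the same):

    use tokio::time::{sleep, Duration};

    // Single-threaded runtime makes the starvation easy to observe.
    #[tokio::main(flavor = "current_thread")]
    async fn main() {
        tokio::spawn(async {
            loop {
                // Busy loop with no `.await`: it never yields back to the
                // scheduler, so the "tick" loop below stops printing after
                // the first iteration.
                std::hint::spin_loop();
            }
        });

        for i in 0..5 {
            // A preemptive scheduler (Erlang, Lunatic) would keep this running;
            // a cooperative one is stuck as soon as the busy task gets polled.
            println!("tick {}", i);
            sleep(Duration::from_millis(100)).await;
        }
    }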


The BEAM scheduler is not quite preemptive as I understand it, but in practice it gets close: instead of yield points being defined manually with "await", they are inserted automatically when you return from a function, call a function, or perform some number of other fundamental operations, and the pure functional nature of the language means that functions are short and consist almost solely of other function calls.


A favorite HN comment that discusses this: https://news.ycombinator.com/item?id=13503951

Go read it all but here's a relevant quote:

""" So how does Erlang ensure that processes don't hog a core forever, given that you could theoretically just write a loop that spins forever? Well, in Erlang, you can't write a loop. Instead of loops, you have tail-calls with explicit accumulators, ala Lisp. Not because they make Erlang a better language to write in. Not at all. Instead, because they allow for the operational/architectural decision of reduction-scheduling. """


That is a fantastic comment. I also recommend the BEAM book to those who want to go deeper: https://blog.stenmans.org/theBeamBook/

I haven’t finished it yet but the chapters on scheduling are great.


There's also BEAM Wisdoms: http://beam-wisdoms.clau.se/en/latest/


That is how Haskell's scheduler works, but I was not aware that it was the same with BEAM. Makes sense.



lol, so BEAM is for scheduling like Rust is for memory allocation?


I'm not familiar with Erlang/Elixir, so I assumed that processes were similar to goroutines:

> In other languages they are also known as green threads and goroutines, but I will call them processes to stay close to Elixir's naming convention.


They're similar, but the developer ergonomics around processes are way better. It's difficult to mess up; coding on the BEAM feels like bowling with those rubber bumpers in the gutter lanes, especially around very difficult concepts like concurrency and failure domains.

Go makes it easy to mess up because the abstractions are superficially simple, but it's a pretty thin layer that you punch through.


> What part of async Rust is more complicated than synchronous Rust?

The Rust part, of course :) Seriously though, the compiler error messages alone make it a major pain - although I can't figure out if it's an issue of maturity (language and ecosystem), a fundamental tradeoff with Rust's borrow checker, or me just getting way ahead of myself.

I can rarely go more than a few days of async programming in Rust without running into some esoteric type or borrow checker issue that takes hours to solve, because I borrowed some non-Send/Sync value across an await point and the compiler decided to dump a hot mess in my lap with multi-page-long type signatures (R2D2/Diesel, looking at you).
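For anyone curious, a toy reproduction of that class of error, with `Rc` standing in for the non-Send guards crates like R2D2/Diesel hand out (assuming Tokio's multi-threaded runtime, where spawned futures must be `Send`):

    use std::rc::Rc;
    use tokio::time::{sleep, Duration};

    // This version fails with "future cannot be sent between threads safely",
    // because the Rc (stand-in for a non-Send guard) is alive across the await:
    //
    // async fn do_work() {
    //     let counter = Rc::new(0u32);
    //     sleep(Duration::from_millis(10)).await;
    //     println!("{}", counter);
    // }

    // The usual fix: make sure the non-Send value is dropped before the await.
    async fn do_work() {
        let value = {
            let counter = Rc::new(0u32);
            *counter // copy the data out; the Rc is dropped at the end of the block
        };
        sleep(Duration::from_millis(10)).await;
        println!("{}", value);
    }

    #[tokio::main]
    async fn main() {
        tokio::spawn(do_work()).await.unwrap();
    }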


Those subpar diagnostics are a bug. The underlying reasons are that

- The async/await feature is an MVP

- async/await desugars to code you can write yourself that leverages a bunch of relatively advanced features, namely trait bounds and associated types

- It also leverages a nightly only feature (generators)

- Async programming is inherently harder :)

- We've had less time to see how people misuse the features and where they land, in order to clean up those cases and make the errors as user-friendly as they can be

Put all of those together and the experience of writing async code in Rust is similar to the experience of writing regular Rust code maybe two or three years ago. When you encounter things like this, please file a ticket at https://github.com/rust-lang/rust/issues so that we can fix them.


> Working with async Rust is very simple if you are not writing futures yourself.

> Tokio and async-std provide api's that mirror std except for the fact that you have to add `.await` to the end of the operation and add the `async` keyword to your function.

There are absolute pain-in-the-ass problems with `async` currently, in large part due to async closures not being a thing, which means it's very hard to create anything other than an `FnOnce` unless you desugar the future entirely.
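A sketch of the workaround on stable, under the assumption that you control the API: take a plain closure that returns a future. The `retry` helper here is made up for illustration; the clone-per-call dance at the end is the part that hurts.

    use std::future::Future;

    // Stable-Rust stand-in for an async closure: a closure returning a future.
    // Calling it more than once requires FnMut/Fn, not just FnOnce.
    async fn retry<F, Fut>(mut op: F) -> Result<(), String>
    where
        F: FnMut() -> Fut,
        Fut: Future<Output = Result<(), String>>,
    {
        for _ in 0..3 {
            if op().await.is_ok() {
                return Ok(());
            }
        }
        Err("gave up".to_string())
    }

    #[tokio::main]
    async fn main() {
        // Fine: the closure captures nothing, so it can be called repeatedly.
        let _ = retry(|| async { Err::<(), _>("boom".to_string()) }).await;

        // The pain point: once you need `async move` to capture owned data, the
        // data would move into the *first* future, so you end up cloning on
        // every call just to keep the closure FnMut instead of FnOnce.
        let payload = String::from("hello");
        let _ = retry(move || {
            let payload = payload.clone();
            async move { if payload.is_empty() { Ok(()) } else { Err(payload) } }
        })
        .await;
    }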


Async `Fn*` traits are possible on nightly with the `unboxed_closures` and `async_closure` features.


I don't consider things enabled by nightly features to be "possible" today, only "potentially possible at some time in the future". The days of using nightly because of a single needed feature in a popular crate are (in my eyes) gone.


Yes, by "not being a thing" I meant "not being stable", I thought I'd edited before posting but apparently I only thought of doing so, sorry 'bout that.


And what do you do with functions from external crates that take callback functions but are not async themselves?

You are now limited to non-async functions, and if the operation of the crate depends on the return values of those functions, you will need some extreme measures to make it work.


Erlang processes are not green threads.

Green threads can share memory, while Erlang processes cannot; they are strictly forbidden from doing so.

Also, the Erlang scheduler is highly optimized to work as a soft real-time platform, so processes never run for an infinite amount of time, never block, and never (at least that's the goal) bring down the entire application. The worst thing that can happen is that everything slows down, but it stays functional and responsive.

I don't know about Tokio.


> Erlang processes are not green threads. Green threads can share memory, while Erlang processes cannot; they are strictly forbidden from doing so.

So message passing is the only way to communicate between processes? I guess that makes sense with Elixir being an FP language. This was not clear in the article:

> Lunatic takes the same approach as Go, Erlang and the earlier implementation of Rust based on green threads.


> So message passing is the only way to communicate between processes?

There are escape hatches. Obviously, you can do something really heavy like open a network port or a file descriptor, but it also provides you with ETS, which is basically "really fast Redis for intra-VM stuff", and you can transact over shared memory if you drop down to FFI.


Basically only message passing. As another commenter pointed out, you can use FFI calls and Erlang Term Storage, and possibly some other means to communicate, but the central feature is that each process has an ID, and then you send() a message to it.

Each process also has a receive() block where you essentially listen to everything that ends up in your mailbox and pattern match on it to take action.


Is there a reason you used threads in the Rust example instead of tasks? I think it would have been more useful to compare against proper Rust async concurrency: I ran 200k sleep tasks over 12 threads using `async-std` and each thread only used up ~30% CPU (~60% on `tokio`), and <10% with 20k tasks.
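Roughly what that looks like with Tokio, for reference (async-std's `task::spawn` / `task::sleep` are analogous):

    use tokio::time::{sleep, Duration};

    // 200k concurrent sleep tasks multiplexed over a small pool of worker
    // threads, instead of 200k OS threads.
    #[tokio::main]
    async fn main() {
        let mut handles = Vec::with_capacity(200_000);
        for _ in 0..200_000 {
            handles.push(tokio::spawn(async {
                sleep(Duration::from_secs(5)).await;
            }));
        }
        for handle in handles {
            handle.await.unwrap();
        }
    }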

What is the proper way to call `Process::sleep` in your example? I don't see it in the lunatic crate or documentation, and I can't compare results without running the same test in lunatic.

Edit: I guess async Rust is mentioned, but the post doesn't really explain in detail what Lunatic solves that async Rust doesn't provide, besides a vague "more simple" and "extra features", which the example doesn't really show.


So must the `crashing_c_function()` in your example be compiled to WASM before it can be used in Lunatic? Another comment elsewhere asked:

> Well then that is not comparable to NIFs at all. In fact it is an extremely misleading example...A lot of the time you can not compile the C code to wasm and what do you do then? How do you interface with OS libraries?

https://news.ycombinator.com/item?id=25255955


Yes, it must be compiled to Wasm to work.


This is a bit out of left field, and perhaps more to motivate others to try, but...

Have you considered making this a port driver for BEAM? Then you could call some function from Elixir to launch a Wasm actor (that happens to be written in Rust)?

Your BEAM would still be imperiled if the Lunatic layer violates conventions, of course; but it may (or may not) be simpler than reinventing the rest of OTP?


What do you see as the advantage of using these Rust + WASM running on Lunatic processes, versus developing in Elixir?


Using C bindings without fear would be a big one. If you use NIFs in Elixir you may bring down the whole VM with a segmentation fault, while Lunatic limits this failure to only the affected process. WASM was designed to run inside the browser, so the sandboxing capabilities are much stronger.

Another would be raw compute power; WASM comes close to native performance in many cases.


Other advantages over BEAM: static typing, multiple languages with a WASM target available, and live migration of processes to the browser.


One question that I have is at what point do you call something a VM?

Also, I haven't looked at your code, so maybe I'm not understanding things correctly, but with respect to targeting Wasm, could your work ever make it into the actual Rust runtime?


> into the actual rust runtime?

Rust has the same level of runtime as C, with no plans to expand it. So, regardless of how awesome this project is, this is unlikely.

(Notably, there is no "Rust runtime" for async, you have to bring your own if you want to do it.)



