Show HN: Rust Implementation of Conway's Game of Life (github.com/brundonsmith)
76 points by brundolf on April 8, 2020 | 37 comments


I wish I could find the original article I read about Hashlife[1]. So many dirty tricks just to reach as deep as possible into the evolution of the system. The hash approach is a good trick, and you can abuse it quite a bit more than just row/col. Anyway, you might poke around for a Hashlife writeup if you're into that kind of thing.

[1] https://en.wikipedia.org/wiki/Hashlife


Hashlife is also incredibly good at accelerating large-scale computational structures, such as unit cells, which simulate the Game of Life (or arbitrary other cellular automata) within the Game of Life: https://www.conwaylife.com/wiki/Unit_cell

The current state of the art is Golly: https://en.wikipedia.org/wiki/Golly_(program)




Here's a HashLife implementation that runs in the browser, with an infinitely scrolling universe:

https://github.com/raganwald/hashlife


That's pretty neat! It's one of the cool consequences of Life being a wholly deterministic and discrete system, even though it's spatial.


If you are interested in compiling down to WebAssembly, the Rust and WebAssembly book has a Game of Life tutorial:

https://rustwasm.github.io/docs/book/game-of-life/introducti...


For even more memory efficiency there's Transpose Jagged Diagonal Storage format (TJDS) https://www.sciencedirect.com/science/article/abs/pii/S00200...


There is an intriguing variant of Conway's Life I want to find some time to work on:

- At any given time, each live cell has some probability of dying, 0.01 or lower per iteration (the equivalent of "entropy")

- There are localized sources of "energy": areas where a dead cell will spontaneously come alive with high probability (0.1-0.9 or so)

- Some interesting results are immediately noticeable: blinkers die quickly, but blocks can regenerate from entropy damage.

My hypothesis is that a large enough simulation of this kind might yield a true A-life pattern.

P.S. I also want to learn Rust a bit more, so this looks like a good project to fork.
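For anyone curious, the rules above can be sketched in a few dozen lines of Rust. Everything here (the names `step_noisy` and `Lcg`, representing energy sources as per-cell ignition probabilities) is my own guess at one reasonable shape, not code from the repo, and the toy LCG just avoids pulling in the `rand` crate:

```rust
use std::collections::HashSet;

type Loc = (i64, i64);

// Toy linear congruential RNG (Knuth's MMIX constants) so the sketch
// needs no external crates; a real implementation would use `rand`.
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64 // uniform in [0, 1)
    }
}

fn neighbors((r, c): Loc) -> Vec<Loc> {
    let mut v = Vec::with_capacity(8);
    for dr in -1..=1 {
        for dc in -1..=1 {
            if dr != 0 || dc != 0 {
                v.push((r + dr, c + dc));
            }
        }
    }
    v
}

/// One generation: normal B3/S23 rules, then "entropy" (each live cell
/// dies with probability `decay`) and "energy" (each listed source cell,
/// if dead, ignites with its given probability).
fn step_noisy(
    alive: &HashSet<Loc>,
    energy: &[(Loc, f64)],
    decay: f64,
    rng: &mut Lcg,
) -> HashSet<Loc> {
    // Candidates: live cells, their neighbors, and energy-source cells.
    let mut candidates: HashSet<Loc> = alive.clone();
    for &loc in alive {
        candidates.extend(neighbors(loc));
    }
    candidates.extend(energy.iter().map(|&(src, _)| src));

    let mut next = HashSet::new();
    for loc in candidates {
        let n = neighbors(loc).iter().filter(|p| alive.contains(*p)).count();
        let mut live = if alive.contains(&loc) { n == 2 || n == 3 } else { n == 3 };
        if live && rng.next_f64() < decay {
            live = false; // "entropy": spontaneous death
        }
        if !live {
            for &(src, p) in energy {
                if src == loc && rng.next_f64() < p {
                    live = true; // "energy": spontaneous birth
                }
            }
        }
        if live {
            next.insert(loc);
        }
    }
    next
}

fn main() {
    let mut rng = Lcg(42);
    let blinker: HashSet<Loc> = [(0, -1), (0, 0), (0, 1)].iter().copied().collect();
    // With decay = 0.0 and no energy sources this is plain Life: the blinker flips.
    println!("{:?}", step_noisy(&blinker, &[], 0.0, &mut rng));
}
```

With `decay = 0.0` and an empty energy list this reduces to ordinary Life, which makes the noisy version easy to sanity-check.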


At that point I think you have gone beyond Conway's Game of Life and are now just in the broader space of cellular automata.


Let's think about simpler properties that could be added to create a more interesting dynamical system. Some ideas: each cell could be a small neural network that learns properties of its surrounding environment; each generation could produce variations of the neural network parameters, so an evolutionary algorithm could evolve them; the cells (or groups of cells) could be able to move (e.g. the evolutionary algorithm would select the ones that move toward the energy source); the cells could have sensors (e.g. sensing proximity of the energy source); cells or groups of cells could compete for resources.


I went fairly far with this, creating a simulation of creatures I called worms using an unholy marriage of neural networks, cellular automata, and classifier systems (a pattern matching approach based on genetic algorithms). I wanted to see whether a population of "worms" with neural network brains would outcompete a population with classifier system brains.

Rust makes this both fun and a challenge - immutable variables and trait objects make simulation programming a bit trickier. In the end the project fell off a cliff when I got grumpy that I couldn't create a single templatized breeding function that could apply to both types of brains - because of those trait objects. It started to look like I'd have to unwind a lot of my design to rework it (a common experience in Rust!) and then I got a new job, and you know the story.

In every other respect Rust made this project a real pleasure - the tooling is great, particularly the compiler. It's been a couple of years, perhaps I'll dust it off again!


If you do decide to use Rust for your task, and are trying to see how far into the future you can make it iterate, go into the Cargo.toml and uncomment the line "# opt-level = 3". This enables compiler optimizations, which in other projects have given me a 20x speedup. I didn't turn them on here because none of my sample configurations have any speed problems.
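For reference, the relevant Cargo.toml stanza looks something like this (I'm assuming the commented-out line sits under `[profile.dev]`; alternatively, `cargo build --release` gets you `opt-level = 3` by default without touching the file):

```toml
# Optimize even debug builds; for tight simulation loops the speedup
# over opt-level 0 can be an order of magnitude or more.
[profile.dev]
opt-level = 3
```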


Is this an established variation or something you invented?


This is something I invented many years ago. I even wrote a basic implementation in C#, but it was not suitable for large-scale simulations.


Here's John Scholes coding up the Game of Life in an APL one liner: https://www.youtube.com/watch?v=a9xAKttWgP4


A must-read for fans of Conway's Game of Life, and an approachable read for laypersons. It covers both the history of GoL and its implications for understanding the nature of life itself:

"The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge" https://www.amazon.com/gp/product/0809252023/ref=as_li_ss_il...

(Strip the affiliate code off if you like!)


There are JavaScript implementations of Conway's Game of Life in just 140 characters, here are some of them: https://www.dwitter.net/h/gameoflife Some have changed the rules to produce other population dynamics.


I'm interested as well in implementing Conway's Game of Life but I wonder, what are the applications of this system in computation? Can a Turing Machine be constructed with this system?


Not only is the answer "Yes, a Turing machine can be constructed out of GoL, demonstrating that GoL is universal," but given that GoL is itself a computation, the reverse is true: you can build GoL out of a Turing machine.

I took a stab at it a few years ago. My strategy was to start with a very basic Turing Machine, and then write a compiler that compiled a more sophisticated Turing Machine out of a simpler Turing Machine.

For example, when I wanted cells to have a value and a tag, I wrote a compiler that translated every possible combination of symbol and tag into a single flat space of symbols.

I stacked compiler on top of compiler until I had a Langton's Ant with various programming conveniences, and I wrote a 9-cell GoL using my Langton's Ant.

The whole thing compiled to a very simple TM with millions of states, but it worked. I did not make it practical enough to ever manage a GoL big enough to emulate a Turing machine, but I feel that I did enough to get some practical experience with something that is straightforward in theory.

---

Turing Machine in GoL: https://www.youtube.com/watch?v=My8AsV7bA94


You can not only build a Turing machine with GoL, but you can build GoL in GoL:

https://youtu.be/xP5-iIeKXE8


> what are the applications of this system in computation?

If you mean "applications" in the practical sense, there are basically none, but that's part of the fun. GoL is fascinating in part because it's pretty much useless, yet has been studied extensively by bored computer scientists over the decades.


Yes


funny how rust doesn't seem to let you easily work with dynamically sized array. Why pick a hashmap to represent the world instead of a vec ?


From the readme:

> Ideally the board is "infinite"; the "creatures" can progress infinitely in any direction, so it's best if they don't hit a wall

> The most obvious data structure for Life is a 2D array. But that will have walls, and while it could be re-allocated as necessary to grow indefinitely, this would get extremely memory-inefficient for, say, a glider that shoots off in one direction and leaves nothing behind.

> What I landed on was using a HashMap whose keys are locations (row/column) and whose values are booleans, indicating aliveness state. This allows the structure to be incredibly "sparse"; i.e., memory usage is tied to how many cells are alive, not where they are or where they've been before. It only needs to actually record the ones that are alive right now, and the ones that might become alive next cycle (direct neighbors of the currently-alive). These keys of the current HashMap are then iterated over as the only candidates for being "possibly alive" in the next cycle.
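A minimal sketch of the scheme the readme describes (illustrative names like `set_alive` and `World`, not the repo's actual code): inserting a live cell also pre-marks its dead neighbors as candidates, so the next generation only has to scan the map's keys:

```rust
use std::collections::HashMap;

type Loc = (i64, i64); // (row, column)
type World = HashMap<Loc, bool>;

/// Mark a cell alive, and pre-mark its dead neighbors as `false`
/// candidates so `step` can treat the map's keys as "possibly alive".
fn set_alive(world: &mut World, (r, c): Loc) {
    world.insert((r, c), true);
    for dr in -1..=1 {
        for dc in -1..=1 {
            if dr != 0 || dc != 0 {
                // Only inserts if the neighbor isn't already present,
                // so live neighbors are never overwritten.
                world.entry((r + dr, c + dc)).or_insert(false);
            }
        }
    }
}

/// Build "tomorrow" from "today": every key is a candidate; count its
/// live neighbors and apply the standard B3/S23 rule.
fn step(world: &World) -> World {
    let mut next = World::new();
    for (&(r, c), &alive) in world {
        let mut n = 0;
        for dr in -1..=1 {
            for dc in -1..=1 {
                if (dr != 0 || dc != 0)
                    && *world.get(&(r + dr, c + dc)).unwrap_or(&false)
                {
                    n += 1;
                }
            }
        }
        if n == 3 || (alive && n == 2) {
            set_alive(&mut next, (r, c));
        }
    }
    next
}

fn main() {
    let mut world = World::new();
    for &loc in &[(0, -1), (0, 0), (0, 1)] {
        set_alive(&mut world, loc);
    }
    let tomorrow = step(&world);
    let mut live: Vec<Loc> = tomorrow
        .iter()
        .filter(|&(_, &a)| a)
        .map(|(&l, _)| l)
        .collect();
    live.sort();
    println!("{:?}", live); // the horizontal blinker flips to vertical
}
```

Memory stays proportional to the live cells plus their immediate fringe, regardless of how far patterns have traveled.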


> What I landed on was using a HashMap whose keys are locations (row/column) and whose values are booleans, indicating aliveness state.

Given that they later state that they maintain two "worlds" ("today" and "tomorrow" - current and next), shouldn't it be enough to record coordinates/location of live cells?

I suppose that'd require a pass to lookup candidates from live cells (complicating the work of creating "tomorrow" from "today" a little). But it seems the boolean/list of neighbors is redundant?


> shouldn't it be enough to record coordinates/location of live cells?

It sounds like you're recommending a list (or Vec) of Locs instead of a HashMap from Locs to booleans. Yes, this would also work, but it would be quite a bit slower, because whenever it needed to check the liveness state of a given Loc (which it does very frequently), that would be an O(N) search instead of an O(1) lookup.

Edit: it did just occur to me that a HashSet could be used, which would save a small amount of memory without introducing the performance issue. Though it would also prevent (or at least complicate) the strategy below where I pre-mark the dead neighbors of alive cells.

> But it seems the boolean/list of neighbors is redundant?

Technically including the dead neighbors in the HashMap is also redundant, but it's also a (less important) optimization. By recording the dead neighbors while setting the live cells, a little bit of work is saved on the next phase because otherwise I would have to iterate over all 8 neighbors for each live cell, when determining their next state. This would mean many cells would get iterated over several times, as they might have several alive neighbors. Doing it the way I did also just simplifies the logic a bit.
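The HashSet idea can sidestep the re-iteration problem in a different way: instead of querying each candidate's 8 neighbors, tally each live cell into its neighbors' counts, so every cell's count is built up incrementally. This is a generic sparse-Life sketch under that assumption, not the repo's code:

```rust
use std::collections::{HashMap, HashSet};

type Loc = (i64, i64);

/// One generation from a bare set of live cells: each live cell adds 1
/// to all 8 of its neighbors' counts, then the B3/S23 rule is applied
/// to the tallied candidates (cells with zero live neighbors never
/// appear in `counts`, which keeps the structure sparse).
fn step(alive: &HashSet<Loc>) -> HashSet<Loc> {
    let mut counts: HashMap<Loc, u8> = HashMap::new();
    for &(r, c) in alive {
        for dr in -1..=1 {
            for dc in -1..=1 {
                if dr != 0 || dc != 0 {
                    *counts.entry((r + dr, c + dc)).or_insert(0) += 1;
                }
            }
        }
    }
    counts
        .into_iter()
        .filter(|&(loc, n)| n == 3 || (n == 2 && alive.contains(&loc)))
        .map(|(loc, _)| loc)
        .collect()
}

fn main() {
    let blinker: HashSet<Loc> = [(0, -1), (0, 0), (0, 1)].iter().copied().collect();
    println!("{:?}", step(&blinker));
}
```

The trade-off is one HashMap rebuilt per generation versus the boolean-per-key scheme's pre-marked candidates; which wins would need benchmarking.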


> it did just occur to me that a HashSet could be used, which would save a small amount of memory without introducing the performance issue.

I was indeed thinking of keeping the "occupied" coordinates in a set-like structure. Not sure how a boolean is stored - is it 8 bits, or a single bit? Either way, just storing two (e.g.) 64-bit coordinates should trivially align(-ish)?

The trick would be to quickly be able to go over all occupied cells and their neighbors in order to build "tomorrow's" board. I suppose the new board might serve as a cache/state helper of sorts, but I think the logic would have to be a bit more complicated.

Maybe it's possible to go from a 1d algorithm to a 2d one by making sure there's a list of occupied cells sorted by their place on a Hilbert curve?

My intuition tells me it's simpler to go center out, than row-by-row (if we store only occupied cells, we need to deal with the empty row "above" our first occurrence somehow - it might be easier to "push" out).

I'd probably have to try and sketch this out on paper though :)

At any rate, I just immediately thought that storing an extra dead/alive bit seemed a bit redundant, and possibly alignment-unfriendly, if one only "really" needs a list/set of occupied cells.

But perhaps the most pragmatic way of changing just the data structure, while more or less keeping the algorithm (with the caveat that I haven't actually looked at the code!), would be to split each generation in two - a list of live cells and a list of dead cells - each quickly accessible via their coordinates.


Yeah, though my goal with this particular project was to strike a balance between performance and readability, not squeeze out the absolute maximum performance. I.e., I didn't bother with any optimizations that entered the territory of "tricks", where they started to make the code less intelligible. For my use-cases (the configurations included in the repo), the current implementation is not limited by speed (and I didn't even turn on compiler optimizations, which in my experience can give a 20x speed improvement).


Oh, I don't mind that at all. All these ideas would need to be benchmarked anyway :)

But ever since implementing a naive (2D-array-backed) Game of Life, I play from time to time with the idea of trying for something much higher performance. I'm not sure if this one scales to 30fps at 4K, for example (maybe it surpasses that on modern hardware!). That's the kind of target that would be fun to aim for. Quite possibly using the frame buffer for state is more efficient anyway.


Thanks, indeed I missed that!


Sounds like you have both little experience with rust and didn't read the linked page.

Dynamically sized arrays have been in the standard library since day 1.

https://doc.rust-lang.org/stable/std/vec/struct.Vec.html


I didn't mean Rust didn't have dynamically sized arrays; I even mentioned Vec in my comment.

I am indeed a beginner in Rust, and I was surprised by the completely different syntax and types for arrays and Vec... I read about it afterwards and understand the reason for it, but that's the first time I've encountered that peculiarity in a language.


I think you were downvoted because your comment appeared to be making an incorrect assertion confidently, rather than asking a question.

> funny how rust doesn't seem to let you easily work with dynamically sized array

Could instead have been

> why did the author choose a hashmap?


A Rust Vec is very similar, if I recall correctly, to a C++ vector. It can basically be used like a List, but is implemented underneath as an automatically resizing array.
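A quick illustration of that behavior (the exact growth factor is an implementation detail, but pushes are amortized O(1)):

```rust
fn main() {
    let mut v: Vec<i32> = Vec::new();
    assert_eq!(v.capacity(), 0); // `Vec::new` doesn't allocate
    for i in 0..100 {
        v.push(i); // backing array reallocates as needed
    }
    assert!(v.capacity() >= 100); // grown to hold all elements
    assert_eq!(v[42], 42); // contiguous storage, O(1) indexing
    println!("len = {}, capacity = {}", v.len(), v.capacity());
}
```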


That's fair, many languages will only have Vecs, and will call them arrays, so it can cause confusion coming to Rust :)


This has nothing to do with Rust; vectors are perfectly good for anything not related to this particular problem.



