It is pretty hard in general to prove that calculating things takes at least O(n) time, so I wouldn't expect that one to be solved any time soon.
Personally, I am curious if there is any cellular automaton that has something like 3-dimensional rotational symmetry. If our universe can be described by a cellular automaton, it isn't obvious to me how such a symmetry could arise, but I wouldn't be surprised if someone figured it out.
For what it's worth, Stephen decided that a network was a better model for the universe than a cellular automaton, and thought that space might emerge as a property of that network rather than being built in, as it is in a cellular automaton.
(In the preceding section he discussed some constraints that a cellular automaton would put on the universe model, but didn't dismiss the idea for that reason)
I can't open this link for some reason (uBO?), but the title reminds me of a very clever article of Wolfram's where he brags (as usual) about how he derived the GTR equations from his graph model. That article had a bunch of comments, one of which pointed out that in such a graph model there is always a preferred reference frame. Wolfram didn't respond to that comment.
I think SW became too obsessed with the mathematical and computational approach to CA, while (as usual) 't Hooft pursued his physics intuitions to create a candidate CA theory:
I say "as usual" because G'tH had already formulated the holographic principle, which would allow space-time (hence GTR) to be encoded non-locally over a graph.
There need not be a preferred reference frame if the space-time events do not occur at individual nodes in the graph, but emerge at scales much larger than the graph, with some holographic or permutation symmetry that can reproduce the diffeomorphism invariance of GTR. It is also plausible that the position-momentum duality of space-time could emerge from such a theory.
Space-time would be created by the events that occur, not the other way around; hence the inverted aphorism "spooky distance at an action": the events are primary, and distance is the strange phenomenon that emerges from them.
There has been much recent progress in making space emerge from entanglement, which also implies that locality emerges from non-locality:
Note that LQG has produced a nice discretization of space, based on graphs that have observable quanta of area and volume, but not length. Directions and lengths are defined as normals to areas, which is reminiscent of Clifford Algebra. Having only derived (emergent) lengths also makes diffeomorphism invariance easier.
> It is pretty hard in general to prove that calculating things takes at least O(n) time
It's much simpler depending on the answer to the first question, right?
It's been a while since I've touched big-O, but I'm pretty sure that if the center column is periodic (period of P cells), then it follows that you just have to compute the P cells and then index into them, which makes it O(1).
That would be a refutation of the claim that the calculation takes O(n) (or more correctly Omega(n)) time. If the center column turned out to be periodic, then yeah, you're basically on the right track: you would get an algorithm that runs in time polynomial in log(n).
I'm not sure how hard it would be to prove a lower bound on time for this problem, but if a polynomial space lower bound were proven (i.e. computing the nth square of the central column requires a Turing machine with access to Omega(n^a) space for some a > 0), then you'd have a sparse language in P that's not in L, which would be a huge advance for computational complexity theory. So definitely don't expect that anytime soon.
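For concreteness, here's a minimal sketch (my own, in Python, not anything from the prize announcement) of the naive baseline everyone is implicitly comparing against: simulating the whole light cone row by row to read off the nth center cell, which costs O(n^2) time and O(n) space.

    # Naive computation of the nth cell of Rule 30's center column,
    # starting from a single 1 on a background of 0s.
    def rule30_center(n):
        width = 2 * n + 3            # wide enough that the pattern never reaches the edges
        row = [0] * width
        center = width // 2
        row[center] = 1
        for _ in range(n):
            new_row = [0] * width
            for i in range(1, width - 1):
                left, c, right = row[i - 1], row[i], row[i + 1]
                new_row[i] = left ^ (c | right)   # Rule 30: left XOR (center OR right)
            row = new_row
        return row[center]

    print([rule30_center(n) for n in range(16)])  # first few center-column values

Any positive answer to question 3 would have to beat this kind of brute-force evolution.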
>any cellular automaton that has something like 3-dimensional rotational symmetry
Kind of cheating, but a differentiable 3D field of real numbers following a local Hamiltonian evolution is a kind of cellular automaton: each cell has a lot of states (2^32 if you store a 32-bit float) and the local evolution rules are the discretized ones from differential calculus, so they encode the rotational symmetry; you can approximate it to any desired level of precision.
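Here's a rough sketch of that idea (my own illustration, nothing rigorous): a scalar field on a 3D grid evolved by a local finite-difference wave equation. Every cell only looks at its six neighbors, so it's CA-like, yet a pulse dropped in the middle spreads as an approximately spherical wave, so rotational symmetry emerges at large scales even though the lattice itself only has cubic symmetry.

    import numpy as np

    N, c, dt, dx = 64, 1.0, 0.3, 1.0    # grid size, wave speed, time step, cell size
    u      = np.zeros((N, N, N))        # field at time t
    u_prev = np.zeros((N, N, N))        # field at time t - dt
    u[N // 2, N // 2, N // 2] = 1.0     # point disturbance at the center

    def laplacian(f):
        """Six-neighbor discrete Laplacian, boundaries held at zero."""
        lap = np.zeros_like(f)
        lap[1:-1, 1:-1, 1:-1] = (
            f[2:, 1:-1, 1:-1] + f[:-2, 1:-1, 1:-1] +
            f[1:-1, 2:, 1:-1] + f[1:-1, :-2, 1:-1] +
            f[1:-1, 1:-1, 2:] + f[1:-1, 1:-1, :-2] -
            6.0 * f[1:-1, 1:-1, 1:-1]
        ) / dx ** 2
        return lap

    for _ in range(50):
        # Leapfrog update: u_next = 2u - u_prev + (c*dt)^2 * Laplacian(u)
        u_next = 2 * u - u_prev + (c * dt) ** 2 * laplacian(u)
        u_prev, u = u, u_next

    # The wavefront should sit at roughly the same radius along an axis and along
    # a body diagonal, which is the (approximate) rotational symmetry in question.

Of course the per-cell state count comes from floating point and the symmetry is only approximate, which is exactly the "kind of cheating" caveat above.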
I think the challenge is more of a quest to find a way to compute it in less than O(n). The proof for that would just be a working algorithm.
IIRC cellular automata are Turing-complete? We have proven we can implement simulations of 3D rotational symmetry using regular computers, so that one is straightforward, isn't it?
Very beautiful and artistically rendered! Those would make great fireworks and weapons in Minecraft!
From a different engineering perspective, Dave Ackley had some interesting things to say about the difficulties of going from 2D to 3D, which I quoted in an earlier discussion about visual programming:
David Ackley, who developed the two-dimensional CA-like "Moveable Feast Machine" architecture for "Robust First Computing", touched on moving from 2D to 3D in his retirement talk:
"Well 3D is the number one question. And my answer is, depending on what mood I'm in, we need to crawl before we fly."
"Or I say, I need to actually preserve one dimension to build the thing and fix it. Imagine if you had a three-dimensional computer, how you can actually fix something in the middle of it? It's going to be a bit of a challenge."
"So fundamentally, I'm just keeping the third dimension in my back pocket, to do other engineering. I think it would be relatively easy to imaging taking a 2D model like this, and having a finite number of layers of it, sort of a 2.1D model, where there would be a little local communication up and down, and then it was indefinitely scalable in two dimensions."
"And I think that might in fact be quite powerful. Beyond that you think about things like what about wrap-around torus connectivity rooowaaah, non-euclidian dwooraaah, aaah uuh, they say you can do that if you want, but you have to respect indefinite scalability. Our world is 3D, and you can make little tricks to make toruses embedded in a thing, but it has other consequences."
Here's more stuff about the Moveable Feast Machine:
What are the implications of positive/negative answers to these questions, and/or what else aside from "because it's there" motivates answering these questions?
Who knows? Answers to these questions might spur discussion on connections between cellular automata and other computational constructions. I think that's Wolfram's angle -- justify the cellular-automata-is-everything tack that he's been on the past 20-30 years.
"If one can show that a system is universal, however, then this does have implications that are closer to our rule 30 problem. In particular, if a system is universal, then there’ll be questions (like the halting problem) about its infinite-time behavior that will be undecidable, and which no guaranteed-finite-time computation can answer."
There's a thing called a "Garden of Eden" configuration that has no predecessors, which is impossible to get to from any other possible state.
For a rule like Life, there are many possible configurations that must have been created by God or somebody with a bitmap editor (or somebody who thinks he's God and uses Mathematica as a bitmap editor, like Stephen Wolfram ;), because it would have been impossible for the Life rule to evolve into those states. For example, with the "Life" rule, no possible configuration of cells could ever evolve into all cells with the value "1".
For a rule that simply sets every cell to zero, all configurations other than pure zeros are Garden of Eden states, and they all lead directly into a one-step attractor of all zeros, which always evolves back into itself, all zeros again and again (the shortest possible attractor loop, one that leads directly back to itself).
There is a way of graphically visualizing that global rule state space, which gives insight into the behavior of the rule and the texture and complexity of its state space!
Andrew Wuensche and Mike Lesser published a gorgeous coffee table book entitled "The Global Dynamics of Cellular Automata" that plots out the possible "Garden of Eden" states and the "Basins of Attraction" they lead into, for many different one-dimensional cellular automata like rule 30.
Those are not pictures of the 1-d cellular automata rule cell states on a grid themselves; they are actually graphs of the abstract global state space, showing merging and looping trajectories (but not branching, since the rules are deterministic). Time flows from the garden-of-eden leaf tips around the perimeter into (then around) the basin-of-attraction loops in the center, merging like springs (GOE) into tributaries, into rivers, into the ocean (BOA).
The rest of the book is an atlas of all possible 1-d rules of a particular rule numbering system (like rule 30, etc), and the last image is the legend.
He developed a technique of computing and plotting the topology network of all possible states a CA can get into -- tips are "garden of eden" states that no other states can lead to, and loops are "basins of attraction".
Here is the illustration of "rule 30" from page 144 (the legend explaining it is the last photo in the above album). [I am presuming it's using the same rule numbering system as Wolfram but I'm not sure -- EDIT: I visually checked the "space time pattern from a singleton seed" thumbnail against the illustration in the article, and yes it matches rule 30!]
"The Global Dynamics of Cellular Automata introduces a powerful new perspective for the study of discrete dynamical systems. After first looking at the unique trajectory of a system's future, an algoritm is presented that directly computes the multiple merging trajectories of the systems past. A given cellular automaton will "crystallize" state space into a set of basins of attraction that typically have a topology of trees rooted on attractor cycles. Portraits of these objects are made accessible through computer generated graphics. The "Atlas" presents a complete class of such objects, and is inteded , with the accompanying software, as an aid to navigation into the vast reaches of rule behaviour space. The book will appeal to students and researchers interested in cellular automata, complex systems, computational theory, artificial life, neural networks, and aspects of genetics."
"To achieve the global perspective. I devised a general method for running CA backwards in time to compute a state's predecessors with a direct reverse algorithm. So the predecessors of predecessors, and so on, can be computed, revealing the complete subtree including the "leaves," states without predecessors, the so-called “garden-of-Eden" states.
Trajectories must lead to attractors in a finite CA, so a basin of attraction is composed of merging trajectories, trees, rooted on the states making up the attractor cycle with a period of one or more. State-space is organized by the "physics" underlying the dynamic behavior into a number of these basins of attraction, making up the basin of attraction field."
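For small lattices you don't even need Wuensche's direct reverse algorithm to see this structure: a brute-force sketch (mine, in Python) is to enumerate every state of Rule 30 on a cyclic lattice of W cells, map each to its successor, and read off the Garden of Eden states (those that never appear as a successor) and the attractor cycle a given seed falls into.

    from itertools import product

    W = 8   # cyclic lattice width; 2^8 = 256 states, small enough to enumerate

    def rule30_step(state):
        """One synchronous Rule 30 update on a tuple of 0/1 cells, with wrap-around."""
        n = len(state)
        return tuple(
            state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n])
            for i in range(n)
        )

    all_states = list(product((0, 1), repeat=W))
    successor = {s: rule30_step(s) for s in all_states}

    # Garden-of-Eden states never show up as anybody's successor.
    goe = set(all_states) - set(successor.values())
    print(f"{len(goe)} of {len(all_states)} states have no predecessor")

    # Follow a single-1 seed until it revisits a state; that loop is its attractor.
    seen, s = {}, tuple(1 if i == W // 2 else 0 for i in range(W))
    while s not in seen:
        seen[s] = len(seen)
        s = successor[s]
    print(f"single-1 seed reaches an attractor cycle of length {len(seen) - seen[s]}")

This is exponential in W, which is exactly why Wuensche's direct predecessor computation matters for anything bigger than a toy lattice.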
Pencil drawing from the very early days before automatic computer drawing was perfected:
This doesn't appear to be true for arbitrarily many digits, and I'm struggling to see how the two fields may even be connected. Some properties appear similar, e.g. the Möbius function, too, appears to sum to 0.
Reading the prize guidelines, it looks like Wolfram gets the right to publish the proofs but that's it, which seems fair. Big tech companies perhaps have to pay more because they own the work product and it's presumably for some commercial reason?
Probably because it's math (intellectual) instead of work (advertisement). I think people have an innate disdain towards advertisement, which is why it's usually so covert.
Would you mind providing a little more detail of your reasoning?
For example, say that someone shows that the most efficient algorithm for computing the value (0 or 1) of the central column of the nth row of rule 30 starting with a single 1 cell (i.e., the system under consideration for the prize) takes time n^2. Wouldn’t one then say that the complexity of the calculation is O(n^2)? One couldn’t say that it’s O(n), surely?
Love this phrase: "naturally constructed" numbers, suggesting that the numbers we use every day are themselves computational conveniences, and possibly that another number construction could be useful at other scales.
I thought maybe this was a reference to constructivist mathematics, referring only to those numbers that are algorithmic and excluding the numbers we know exist in ℝ only because they can be proven to exist using the axiom of choice.
Maybe, probably, it was. But the idea that a measure of one of these rules can be interpreted as a number sparked my imagination.
I have to say also that interpreting the grid as space itself, and color of the square as the presence or absence of a particle is pretty compelling. When I think of it that way, then the proportion of each color matters a lot, as it would be the way equilibrium is expressed.
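One crude way to play with that reading (my own sketch, not one of the prize questions): treat a 1 as "particle present" and track what fraction of the light cone (the 2t+1 cells the pattern can have reached by step t) is occupied, to see whether it settles toward an equilibrium value.

    def rule30_rows(steps):
        """Yield successive rows of Rule 30 grown from a single 1."""
        width = 2 * steps + 3
        row = [0] * width
        row[width // 2] = 1
        yield row
        for _ in range(steps):
            row = [0] + [
                row[i - 1] ^ (row[i] | row[i + 1])
                for i in range(1, width - 1)
            ] + [0]
            yield row

    for t, row in enumerate(rule30_rows(500)):
        if t in (10, 50, 100, 250, 500):
            density = sum(row) / (2 * t + 1)   # occupied fraction of the light cone
            print(f"step {t:4d}: particle density = {density:.3f}")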
Has anyone seen a version of Rule 30 for a 2D array, instead of 1D? I'm doing some searching but can't find one. Somewhat tangential to my angle of attack, but would be helpful to me.
There are 2D rules that are aesthetically similar to Rule 30 (in how reactive and "fluffy" they are, and how they tend to perpetuate instead of die down), but a 2D and a 3D rule are fundamentally different.
But you can apply simple transformations on rules, like reflecting them (1D and 2D) and rotating them (2D only), or layering them (having a bunch of lower dimension rules running in parallel). Or you can perform a 1D rule along each horizontal/vertical axis (or even diagonal axis) and then combine them somehow (like XOR). But that will typically have a vastly different behavior.
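Here's one concrete version of that "run a 1D rule along each axis and combine" construction (my own sketch; as said above, expect it to behave quite differently from plain Rule 30): each generation, apply Rule 30 to every row and to every column independently, then XOR the two results cell by cell.

    import numpy as np

    def rule30_1d(line):
        """Rule 30 on a 1D uint8 array with wrap-around boundaries."""
        return np.roll(line, 1) ^ (line | np.roll(line, -1))

    def step_2d(grid):
        by_rows = np.array([rule30_1d(r) for r in grid])        # along each row
        by_cols = np.array([rule30_1d(c) for c in grid.T]).T    # along each column
        return by_rows ^ by_cols                                 # combine with XOR

    grid = np.zeros((64, 64), dtype=np.uint8)
    grid[32, 32] = 1                     # single seed in the middle
    for _ in range(20):
        grid = step_2d(grid)
    print(grid.sum(), "live cells after 20 steps")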
One of my favorite 2D rules that looks kind of like an animated version of Rule 30 is "HGLASS", named that because it looks kind of like the falling sand in an hourglass. If you run it in a torus (edges wrap around), it tends to settle into a circular dependency tree with gaps bubbling up through it like space between cars in a traffic jam.
This is a five-neighbor two-state two-dimensional cellular automaton found at random by Margolus and Toffoli. It organizes a nice sliding flow on a random screen, and it disassembles solid starting patterns in an interesting way.
What would this mean? You could have a rule that just implemented rule 30 for one adjacent row or column of the plane, but that would just give you stripes of 1d rule 30. Is there some obvious way of generalizing the 1d rules to 2d that I'm missing? What properties would you expect to be conserved?
My main attack vector is to build up a simple new math, starting with just 1 and 0, to define both the problem and the solution in, and see where it goes, hopefully creating useful side effects along the way. If I were to solve a problem, my hunch is that it will be related to impossibilities in formally defining "infinite sequence" in the problem definition in such a language.
Quote from previous link, in answer to a similar question comparing CAs (1d) to Conway's Life (2d):
"The pictures you're looking at, one of the axes is time. One row "becomes" the row below it. Each time-step iteration adds a row to the bottom, while the rows above it are immutable (because they are in the past)."
For some context, Wolfram published a book of collected papers called "Cellular Automata and Complexity" [1] where he classified Cellular Automata (CA) into four broad categories:
I. Convergent uniform final state
II. Convergent simple pattern final state
III. "Random"
IV. "Complexity"
Where "Complexity" essentially means Turing Machine Equivalent [2]. See also [3].
Later, Wolfram self-published "A New Kind of Science" (ANKoS). ANKoS is abysmal and barely readable but, from what I can understand from what little I read of it, Wolfram updated his understanding and instead basically classified CA into two categories, either 'simple' or 'complex', where, again, 'complex' means Turing Machine Equivalent.
If I remember correctly, I think Wolfram even used Rule 30 as a basis for random number generators somewhere (Mathematica?).
Considering the tenor of the problems, maybe Wolfram is still essentially classifying CA into his four initial categories as it looks like he's pushing for the idea that "Rule 30 == 'random'".
My personal take on this is that the idea that there are basically two classifications of CA, either 'simple' (non-Turing Machine equivalent) or 'complex' (Turing Machine equivalent), is correct. Though it might be the case that the more 'random' a CA looks, the harder it is to do the reduction to TME, maybe even going so far as having a diverging reduction cost.
Regardless, this is why these questions might be interesting. Is Rule 30 TME? If so, then that answers question 3 (O(n) simulation is essentially saying there's no "shortcut" and you're basically doing the full computation). Questions 1 and 2 are, in my opinion, shades of the same TME question, where non-periodicity is another way of saying there's no short-circuit computation, and calculating the average is getting at the unpredictability (read: required computation) of the cells.
If Rule 30 isn't TME but still requires O(n) cost to predict, that's also pretty interesting: it would require as much computing power to predict, yet not be as powerful as a Turing Machine. From a broader perspective, this gets at the whole "randomness vs. determinism" idea, as this is a deterministic system that, for many purposes, behaves randomly. This idea is probably old news to this crowd, but there's a close link between TME and randomness that is still not completely understood, and investigations into CA of this sort are another way to tackle it.