It loves RAM (and probably RAM speed), but not necessarily in a bad way. Jumping from 8 GB (1x8 GB) to 16 GB (2x8 GB) on the laptop I first played it on made a big difference in performance, even with a GT740.
I tried installing it on Ubuntu MATE, GPD Pocket 2, with whatever the display-scaling default is for this distro. There is some stupid bug where I can't get past the privacy-policy adhesion agreement because it is rendered mostly off screen and doesn't recognize keyboard input. Classic Linux desktop paper-cut that I'm sure I could resolve with 15-240 minutes of web research.
Other than that, I assume the Linux version is fine.
Not entirely sure I follow the question, but if I'm reading you right: yes, a lot of DLC features and "quality of life" improvements originally started as mods, which are still available.
Development needs resources. I don't see why there is so much hate towards well crafted DLC content. The base game is great as it is, and nobody forces you to buy the DLC.
DLC released every few months that adds new features or mechanics is perfectly fine. I'm not sure if it's still the case, but the Skylines team used to release DLC in two parts: a bit for free and an extra bit you had to pay for. They do it really, really well.
Generally there's a stigma around DLC that was cut out of the original game to be sold as extras on top; see anything EA / Ubisoft craps out.
It's the standard way for games developed or published by Paradox. It helps to diversify the gameplay after some time and you choose what you want to add from all the DLC.
I think this has some meaning for the abundance of life in the multiverse. If most human-created systems of sufficient complexity turn out to accidentally support computation, then maybe most laws of physics support computation.
Hey, it's not better or worse than any other way to guess.
You might be tempted to ignore this paper for being too technical, but it is technical only because these are real issues that become difficult when put under serious reflection.
If you can stomach it you will be rewarded with deeper understanding.
That's a paper about ontology, I'm talking about the actual physical reality of "the multiverse," you know, the one with the meta-landscape of all possible string theories, and all the different spidermans.
In what way can statements about a multiverse not be ontological? Are you committing to modal realism, Platonic realism, or are you just pulling my leg?
If you commit to the multiverse being physical, by extension you're making claims about ontological concerns, since all physical claims require there to be a notion of something existing, even if you want to claim that something counterfactually exists. Then that gets into modal realism etc etc.
Likewise you can't be fully confident in multiversal theories since no physicist endorses a multiversal theory without also accepting it as interpretative, as no experiment has been performed to anoint multiversal theories as being correct. So you're still postulating something worthy of criticism even if there was nothing ontological involved.
Each universe gets its own version of Spiderman, and we live in the universe with the fictional Spiderman. That's one theory that matches our observations. Since basically every theory matches our observations, including the theory that there isn't a physical multiverse, every argument is equally specious and aesthetic.
Every theory matches observations given interpretations of those theories, and it's impossible to engage in discourse without additionally engaging in interpretation. So some theories are going to be eliminated from view given one perspective or the other since interpretations must be chosen over others. I think where you're wrong is thinking it's possible to evade this and make the claim that all arguments are specious without simultaneously gesturing that you don't want to engage in the discourse anymore: it's a self-defeating claim.
Notice that none of this prevents there from being a singular subject that we can talk about, but there can still be substantial disagreement as to the nature of the thing being discussed. If we are disputing whether or not there is a multiverse, I would just ask you: what are your grounds for accepting that we occupy a multiverse, and what brought you to do so?
I'm leaning on the fact that theories are significations of events, but significations are only possible when relative to an interpretive scheme that allows those events to be connected to the theory.
A deep-math version of this would be picking whether or not you want to work with the axiom of choice when using groups to represent vectors (the groups and vectors aren't important here, but they play the role of theories and objects). Deciding whether or not you accept the axiom of choice can have a major impact on what theorems you are willing to accept as justifiable, but the decision is not purely aesthetic, because it's possible to have technical reasons for using the axiom of choice.
Resolving whether or not to adopt the axiom of choice for the purposes of your practice can be done, but it won't usually happen within the scope of the system. Often you need to introduce further information from outside the system to justify use of the axiom. And so you are building on further postulates, making more choices at the exclusion of others.
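(Aside, for anyone who wants the axiom itself spelled out: one standard formulation in LaTeX is below; the usual tie to the vector-space example is that, if I recall correctly, "every vector space has a basis" is equivalent to it.)

    % Axiom of Choice: every family of nonempty sets admits a choice function.
    \forall X \,\bigl[\, \emptyset \notin X \;\Rightarrow\;
        \exists f\colon X \to \textstyle\bigcup X \;\;
        \forall A \in X \;\; f(A) \in A \,\bigr]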
A more science-y example: this can be seen as a restatement of the Duhem-Quine thesis: you always need postulates to relate a hypothesis to its observation. These postulates might correspond to truth or might not, but you are almost always working with a set of assumptions with varying levels of validation when doing any scientific work.
https://en.wikipedia.org/wiki/Duhem%E2%80%93Quine_thesis
All of this derives from the philosophical school of American Pragmatism, which includes philosophers like Charles Peirce and Charles Morris, who pioneered semiotic theory. To a lesser extent Carnap might fit in this category, since he bought into the semantics/syntactics/pragmatics distinction that Peirce and Morris created and then set up classical logic as we know it (including model theory), although he's usually considered a positivist.
I don't own a private email account, but the authors I could recommend are anything by the aforementioned pragmatists (particularly Peirce), work by van Fraassen, and work by MacIntyre. Kuhn and Laudan are also big ones for science-focused stuff. I also brought up Floridi because he'd at least be sympathetic to the plight of computational theorists while also providing disagreement from the philosophical mainstream.
The point of all of the above is to realize that modern philosophy has made three different "turns":
1) The linguistic turn
2) The pragmatic turn
3) The discursive turn
All three point to the idea that almost all acts of knowledge are social and communicative acts, and that all claims being made between us are from perspectives we assume and inherit from others. They amount to saying that no inquiry is done in a vacuum, and all inquiry is open to criticism of some form or another. In which case we shouldn't expect to ever converge on a singular Truth, but rather be constantly negotiating a network of truths between ourselves as long as we are finite and limited beings.
These positions are reactions to middle-of-the-century philosophy that supposed that all knowledge can be treated through a uniform standard (like science, or logic, or faith), and that any knowledge which is true, is true without qualification or the need to ascribe context, including the context of our language or the context of our goals. Both of these positions can be considered "positivist", which is the philosophy that made physics productive in the 20th century but ended up being unhelpful for other fields (like economics or biology), as well as ignoring certain complexities that all forms of inquiry (including physics) share.
The primary struggle positivists have when they are exposed to these views is that they think they imply relativism. This isn't quite right, since it's possible to agree as well as disagree with people; but it does imply certain theses inconvenient to the goals of positivism, like the impossibility of a final theory spanning all modes of knowledge. It might be plausible to have a final theory with respect to a perspective or set of postulates -- like maybe an atomic theory of viruses -- but that would always be irreconcilable with other perspectives which have their own merits, leaving it to the individual to decide the value of each. The good news is that there is always room for thought.
This paper is also excellent for introducing the mechanics of semiotics in discourse, in that most discursive of disciplines, the Law. https://pdfs.semanticscholar.org/0a7f/1fd305239da9232f182fbc... Law makes a good example since people have a lot of conceptions of what having rules of law means. We are obliged to the law by convention and negotiation, rather than by some eternal Truth. Likewise, to the extent that any science is discursive, there will be a sense in which it has to be legalistic, although there are more effective ways by which we stop ourselves from spiraling into relativism here (primarily by instrumentation).
If you are willing to wait a bit I can get back to you on this once I have an email account.
That article isn't particularly relevant. It argues that reality is not fundamentally digital.
The article seems to argue that since a digital system can simulate an analog system (through a DAC), reality cannot be either digital or analog. But that doesn't matter, since as long as reality can simulate digital or analog phenomena, it can simulate the digital phenomena and create Turing machines.
The link to "A Defence of Informational Structural Realism" is broken.
I interpreted the OP as implying a strong relationship between the laws of physics and computation. It's trivial to assert that there is a relationship between the laws of physics and computation because there are computers.
But saying that the laws of physics specifically lend themselves to computers involves finding reasons that this may be so.
Yet if there is no inherent computational property in physical nature (like say that the laws actually reflect cellular automata, or are fully deterministic and are not nominal), then we ought to be skeptical that Turing Machines are somehow essential to physics or physical constraints, or that we should be surprised if Turing Machines are common.
Maybe Turing Machines are common just because the definition of a Turing Machine is syntactically weak.
I'm not sure what you mean by your second question, but even though the set of postulates required for natural selection is compact, each concept that supports it is quite rich.
More general statements about evolution (the kind that I'm guessing Universal Darwinists endorse) would be uninteresting to me because they begin to eliminate details about the system which would render the claims less trivial to talk about... although, I know that there are serious theorists of self-organization that might be better equipped to understand how to talk about fundamental truths of this kind (like Stuart Kauffman).
I'd be less combative in this thread if there were discussions about the ways in which Turing-completeness comes about in the different realizations of the machine, or if comparisons were made between these machines. But so far there hasn't been a lot to prove.
I have not read the paper in detail, but a paper on "digital ontology" that does not mention computational complexity, the quantum Church-Turing thesis, or the fact that analog computation leads to unphysical "hypercomputation" makes any other claims in the paper extremely questionable. How can one take a philosopher of information seriously if they do not address the most typical arguments in computational complexity (you know, the actual science of processing information)?
Edit: page 7 discusses some of the points I complained about, but it does not seem particularly convincing to me (it only brushes them off, without explaining them). Admittedly, the language of philosophy is frequently unconvincing to me, so maybe the problem is in me.
Well, this game is really just a layer built on top of Turing complete software so is it really all that surprising that the game itself is Turing complete? I wouldn't be surprised if most complex simulation games are Turing complete.
I wouldn't say it's unsurprising to build a Turing-complete game - building a game (or, like, any software) is an exercise in reducing the functionality of a computer: if a machine can originally do X things, then your program can do Y things where Y ⊆ X. I think programs that maintain Turing completeness sort of need to go out of their way to be that featureful.
For instance, HTML[1] is (IIRC) not Turing complete; the computational power available once you step into a layer of pure HTML prevents you from ever assembling a Turing machine - no matter how many gigs of HTML you pump out, you'd never be able to produce one[2].
[1] Pure HTML; HTML + CSS apparently is Turing complete.
[2] This disregards merely using HTML as a data definition format and using other logical components to enable the construction of a Turing machine - a Turing machine's tape is as simple as can be, so we don't really care about storage formats that can replicate the tape portion; we care about things that can replicate the full machine.
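To make the "Y ⊆ X" point concrete, here's a toy sketch of my own (nothing from the thread): a tiny arithmetic-only evaluator written in a Turing-complete host language. The little language it implements has no variables, no loops and no recursion, so every program in it halts - the host's power has been deliberately reduced, much like pure HTML reduces the browser's.

    import ast
    import operator

    # A deliberately weak language: integer arithmetic expressions only.
    # No variables, no loops, no state -- every "program" terminates.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.FloorDiv: operator.floordiv}

    def evaluate(expr: str) -> int:
        def walk(node):
            if isinstance(node, ast.Constant) and isinstance(node.value, int):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            raise ValueError("not part of this tiny language")
        return walk(ast.parse(expr, mode="eval").body)

    print(evaluate("2 + 3 * (4 - 1)"))  # 11 -- useful, but nowhere near Turing complete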
> then maybe most laws of physics support computation
Well since we know the laws of physics can be used to implement a Turing machine, we therefore know the laws of physics support computation.
If I remember my college computability course correctly, any system that can be used to implement a Turing machine is itself Turing complete. Even if by no other means than to implement a Turing machine.
Besides, you really don't need much: branching, jumps, and a way to read/write. Voila, Turing complete.
Infinite memory is helpful, but we consider computers Turing complete despite not having infinite memory. If you take infinite memory as a hard requirement then the entire universe together is not Turing complete. But that isn't useful so we often waive that part.
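A minimal sketch of that "branching plus read/write" point, in Python (the toy machine and its single program are made up here purely for illustration): the tape is just a dict that grows on demand, and the loop plus the rule lookup supply the branching and jumps.

    def run_tm(rules, tape_input, start="scan", halt="halt"):
        """rules: (state, symbol) -> (write, move, next_state). Tape grows as needed."""
        tape = dict(enumerate(tape_input))
        state, head = start, 0
        while state != halt:                  # branching...
            symbol = tape.get(head, "_")      # ...reads...
            write, move, state = rules[(state, symbol)]
            tape[head] = write                # ...writes...
            head += 1 if move == "R" else -1  # ...and jumps the head.
        return "".join(tape[i] for i in sorted(tape))

    # Toy program: walk right over a block of 1s, then append one more 1.
    rules = {
        ("scan", "1"): ("1", "R", "scan"),
        ("scan", "_"): ("1", "R", "halt"),
    }
    print(run_tm(rules, "111"))  # -> "1111"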
Did you know that it was recently shown (on HN, I think) that Magic: The Gathering can simulate a Turing machine as well? (Though the cards you need make it unlikely to pop up in an actual match.)
> There is a caveat to this conclusion: if our universe is a simulation, there is no reason that the laws of physics should apply outside it. In the words of Zohar Ringel, the lead author of the paper, “Who knows what are the computing capabilities of whatever simulates us?”
Is there some postulate relating a toolset's ability to reduce entropy to Turing completeness?
I feel like most of the "X is Turing complete" posts are essentially saying "X can decrease entropy to an arbitrary fidelity" (while also having some simulation rules that run over the altered system).
Adam's Postulate: any sufficiently complex system can probably be coaxed into reducing entropy in such a way that a set of simulation rules can act as an abstraction layer to form programmable systems.
Redstone is by design made for implementing logic, so it belongs to a slightly different category. Although I think Redstone's introduction was prompted by people building logic without it in the olden days.
I think Redstone's inspiration was people building computers in Dwarf Fortress with floodgates and pressure plates, rather than Minecraft specifically.
Dwarfputers are a wonderful tangent of their own, with a not-unlikely possibility of flooding your entire computer with fiery death in the form of magma in some of the more impressive approaches - this has been possible (but quite difficult) for some time, and more recent game versions have made non-fluid-based logic gates[1] more realistic and usable.
Someone has built an (admittedly simple) Space Invaders game[2] using DF, which is rather impressive.
Back in the day I used to build single-shot logic using sand supported by torches holding back water. Water knocks out torches, sand falls releasing more water, etc. When redstone came along everything got much simpler :-)
I miss the days before command blocks when it was still just about getting clever with good old redstone and a couple gates. The piston update was massive. I've built some ridiculous contraptions such as color displays and tape reels and programmable mob traps and variable-clock-rate machinery. And how could I forget minecarts... And IndustrialCraft took all of this to the next level. Wow, I miss the golden days of Minecraft.
"Quad Core Computer with Redstones" features things like cache, hardware stack, a gpu connected to a 15x15 display, serial input and output port and many other interesting things. Also it's available as download by the creator.
The idea of creating computers out of virtual objects has always fascinated me since I first saw someone do it in Minecraft.
It really brings up some interesting scenarios that I like to day dream about sometimes.
For instance, in a real world simulation, you could build a processor with a gazillion transistors because you don’t have to worry about the same physical limitations like size or heat. Could it take an input and compute an output faster than something in the real world?
Would you be bound by the speed of light in the virtual world? You control the physics in your virtual world, so technically nothing prevents it right? Information can travel faster than the speed of light relative to your virtual objects. Say you model the earth at 1:1 scale in the simulation and have avatars on complete opposite sides of earth. They could exchange messages faster than they could in the real world since the information wouldn’t have to physically travel across physical space. (e.g. send message directly to memory address X instead of sending light through fiber optic physics simulator).
Essentially, in a simulation of the physical world that has tweaked physics, could information be processed faster than the processor running the simulation?
Is there some sort of conservation of energy law, but for information?
The information would "appear" to be processed "faster" to the digital system merely because the digital system would have no memory of there being a moment in between its current state and the next computed state. Even supposing that it's possible for there to be a subjective experience of a computer: this goes into physical time and psychological time and whether the two bear a strict relation, as well as the relative nature of time in physical space.
On the first topic, if you can stomach having to reformat a PDF, the "No Free Lunch in Machine Learning" guy wrote a paper on thermodynamics and psychological time:
On the second one... I think it's more interesting to think about whether there's a theory of relativity for computation. Imagine if two systems were updating one another's state but were separated by physical distance. Wouldn't you have to hold the state of one so it only updates when it receives a message from the other, for the systems to "experience" instantaneous communication? That would mean there would be a third system for whom time would have to pass to transfer the message and compute it in such a way that the amount of updating required for the receiving systems is minimal.
Maybe we have to guarantee that at least one system experiences time more slowly than the others, to compute the information necessary for instantaneity to hold for the communicating systems.
As far as we know, there is no way to compress reality. The smallest full simulation of a thing is the thing itself.
That means the real-world simulation would be as big as the real world. To exchange messages from one side to another, the information has to travel from some point in the physical network to another. Although some nodes may be closer in the physical network than in the virtual world, taking less time, that can't be true for every pair of nodes.
On average, a full simulation of something will be as physically large as the thing itself, and the distance information has to travel is on average the same virtually and physically.
What if it's not a full simulation? Then you can break those rules. I can draw two galaxies and a spaceship that travels between them in seconds, thereby achieving your premise.
Can a restricted simulation ever be computationally faster than a reality/a full simulation? My intuition says no. I can't think of a source.
> could information be processed faster than the processor running the simulation?
No. Whatever you do to process the information could always be done faster if you did it directly on a computer without the overhead of the simulation itself.
> Essentially, in a simulation of the physical world that has tweaked physics, could information be processed faster than the processor running the simulation?
Nope, the simulator has to run it still - how would the simulation update anything unless being told to by the processor running the simulation?
Am I the only one who didn't enjoy Cities: Skylines? I played without any expansion (wasn't really hooked enough by the base game to go looking for them) and found the gameplay quite limited compared to the few old city building games I played as a kid.
I still play Pharaoh to this day, 20 years later, and it's still challenging and complex and there's tons of gameplay to explore. In comparison, I played Cities: Skylines for about a weekend, got over the initial difficulty with managing finances, and then it became a matter of building pretty roads and forcibly reading people's complaints on the in-game Twitter.
The game is mostly about traffic management. This is not very surprising, because the company behind it previously focused on traffic games.
I like the game a lot. But to be fair: you must download a good traffic-control mod to set up traffic lights and road settings to keep the traffic under control.
The only real difficulty with Cities is in managing the logistics once your city gets to an extreme scale. Otherwise it's much more about aesthetics and maximizing efficiency than overcoming obstacles.
Old Pharaoh-style city builders certainly have a good deal more challenge when it comes to survival, but they're quite discrete, and once you have a few templates for housing and industry nailed down I don't think they have much more difficulty to offer either.
I describe Cities:Skylines as a "city painter". You can make nice looking things in it, but there's little challenge and most of the mechanics don't have a lot of depth.
I'd suggest looking at SimCity 4 for the peak of the in-depth city simulator. The systems in it go way deeper, it isn't limited by inevitable scaling problems with agent simulations (Tropico, even C:S), and there's a large, long-running modding community for it.
There are a few mods that would be considered basically essential, namely the Network Addon Mod (which overhauls the traffic simulator, in addition to adding a vast amount of content).
I think most people enjoy it for the sake of creativity, not for 'gameplay'. Hence the popularity of mods that unlock everything and give you infinite money.
Wouldn't you also need a way of storing information (preferably an unlimited amount of information) for Skylines to be turing complete?
How would you implement the tape of the turing machine?
Yes. By convention we ignore that requirement for calling something Turing-complete, since in the absolute sense, nothing in the universe is Turing-complete.
Is there some distinction to be made between Turing machines with tapes of fixed capacity, and ones whose capacities are limited by their execution substrate, such that they’ll support as much memory as they are “given” by the environment?
I’m picturing here the difference between Turing machines in the Game of Life that take place on a fixed area of the grid, vs. ones that attempt to just index out to whatever grid position they need, whether that causes wraparound or not.
There is definitely a difference. The problem with the vast majority of "Turing-completeness in X" claims on HN is that they are just logically complete Boolean circuits that are overgeneralized (infinitely tessellated or scaled) for free and cannot handle arbitrarily large inputs. They can only handle constant-sized input lengths. To handle an arbitrary input, the circuit has to be resynthesized and rebuilt to handle that particular input length. This ends up being a massive leap in computational complexity, from constant-time to decidable.
Some of these claims are even wrong, since the posters don't bother analyzing whether their suggested generalization technique (e.g. infinite tiling) actually does let you topologically embed every gate and batch of wires of any Boolean circuit you want. This is quite common in 2D side scrollers where you have constant y-height and infinite x-length: it just is not possible to communicate all the information you need for a computation from one side of the map to the other. This was the case with Minecraft prior to bidirectional flying machines. You could actually prove that any finite-sized "machine" could not communicate an arbitrary distance away (think arbitrarily long Turing tapes) without some part of it going off into the distance forever. Hence it was not "Turing-equivalent" unless you used command blocks or granted some overly general infinite tiling capability.
Actual descriptions of Turing machines are finite in size/length. These "Turing-equivalent" candidates that keep popping up are not, when laid out. This makes it pretty easy to run into uncomputability situations when you try to feed the machine a description of itself and compute some property (like a number equal to 2x its length). A typical Turing machine would have no problem handling this, though a Boolean circuit would.
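A small Python sketch of that gap (my own, purely illustrative): a ripple-carry adder wired out of gate functions is a fixed circuit - it handles exactly 4-bit inputs, and a 5-bit input means synthesizing a new, bigger circuit - while the "machine" version uses the same gates but a loop, so one description handles any input length.

    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    def full_adder(a, b, cin):
        s = XOR(XOR(a, b), cin)
        cout = OR(AND(a, b), AND(cin, XOR(a, b)))
        return s, cout

    def add_4bit(a, b):
        """A fixed 'circuit': exactly four full adders, literally wired in.
        Adding 5-bit numbers means building a new, bigger circuit."""
        s0, c0 = full_adder(a[0], b[0], 0)
        s1, c1 = full_adder(a[1], b[1], c0)
        s2, c2 = full_adder(a[2], b[2], c1)
        s3, c3 = full_adder(a[3], b[3], c2)
        return [s0, s1, s2, s3, c3]

    def add_any(a, b):
        """The 'machine' version: same gates, but the loop grows with the input."""
        carry, out = 0, []
        for x, y in zip(a, b):
            s, carry = full_adder(x, y, carry)
            out.append(s)
        return out + [carry]

    # Least-significant bit first: 3 + 5 = 8.
    print(add_4bit([1, 1, 0, 0], [1, 0, 1, 0]))                # [0, 0, 0, 1, 0]
    print(add_any([1, 1, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0]))     # same sum, longer input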
A Turing machine has unbounded memory. There is a difference between unbounded and unlimited memory: you can extend the tape as needed, but at any given step of the computation, the used space is finite.
A Turing machine with bounded memory is called a linear bounded automaton. Although an LBA is less powerful than a Turing machine in the absolute sense, it is not trivial to come up with a problem that a Turing machine could decide but an LBA could not.
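A tiny sketch of that unbounded-vs-unlimited distinction (mine, not from the comment above): the tape below has no fixed ceiling, yet at every step only the finitely many cells that have actually been touched exist. Cap it at a fixed length instead and you are in the LBA situation.

    from collections import defaultdict

    class Tape:
        """Unbounded, not infinite: a cell exists only once it is touched."""
        def __init__(self, blank="_"):
            self.cells = defaultdict(lambda: blank)
        def read(self, i):
            return self.cells[i]
        def write(self, i, sym):
            self.cells[i] = sym
        def used(self):
            return len(self.cells)   # always finite at any given step

    t = Tape()
    t.write(0, "1")
    t.write(10_000, "1")   # the tape "grew" on demand
    print(t.used())        # 2 -- only the touched cells exist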
I'm not sure there's "a" distinction, but there's a couple I tend to think about. This isn't necessarily anything formal or official.
One is just, could this be Turing complete if we gave it infinite resources in the obvious manner? e.g., in Life, an infinite grid, for a modern computer, hypothetically infinite RAM and disk and time, etc. I think this is the one most people are using.
Another one I tend to think about is, since the system is theoretically modeled by a finite state automaton, you can ask the question "how large would the manifested finite state automaton be?" For these toy models such as Cities, or the CSS Turing machine [1], or some small chunk of a Rule 110 encoding, often the answer is still something that might fit in a real computer as a table. There's a sense in which that's so small that using Turing Machine tools to understand the system may still not be the best way to look at it.
By contrast, the FSM for something like the computer you're using to read this would be so staggeringly large that it could never fit into our universe by any encoding that doesn't somehow "cheat" and cease to be an FSM model. (That is, we can "compress" the FSM model right back down by encoding it as a Turing machine, but that defeats the purpose, no?) Even though it is theoretically a correct model of your computer, it's not useful for us humans to understand the behavior of the system. The tools of computer science for Turing-complete systems are far more applicable to understanding what it does than the FSM-based tools, even if, technically, they don't fully apply.
Often they still do de facto; while theoretically it is possible to create a program that can examine the real state of any real computer and determine whether it's going to halt [2], the result would (without proof) again be so large that it couldn't possibly be manifested in the real universe, so in reality the results of the Halting Problem and similar TM-based analyses are still generally "correct" for us in practice, even if they aren't in principle. Symantec isn't going to be selling a 10^100^100^100-byte-sized anti-virus program to us any time soon.
So there's also a sort of line you can draw based on the size difference between the TM-based model and the FSM-based model. It's fuzzy, although since the FSM-based model grows exponentially (super-exponentially?), it's less fuzzy than you might think: it doesn't take much for the FSM model to go to plaid.
[2]: I'm not 100% sure this is true, but I'm pretty sure it is; I suspect we can prove that the only way this machine can run is to simply execute the computer internally and record every single state it passes through; eventually it'll either repeat a previous state or halt. You can't even assume you can compress the past history very well since for any compression scheme there are programs that will pass through the states in an order that will pessimize your compression (I'm pretty sure), so we rapidly exceed the universe's storage capacity trying to store all the previous states even with such cleverness as we might still be able to muster.
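To put a rough number on "staggeringly large" (my own back-of-the-envelope, not from the comment): a machine with b bits of state has 2^b possible memory configurations, so even a few kilobytes of RAM already produces an FSM state count far beyond anything you could tabulate.

    import math

    def fsm_state_digits(ram_bytes: int) -> int:
        """Decimal digits in 2**(8*ram_bytes), the number of distinct memory states."""
        bits = 8 * ram_bytes
        return int(bits * math.log10(2)) + 1

    print(fsm_state_digits(4 * 1024))    # 4 KB of RAM: a number with ~9,865 digits
    print(fsm_state_digits(8 * 2**30))   # 8 GB: ~2.07e10 digits -- hopeless as a table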
Adders belong to the category of combinational logic, which by definition excludes memory. So the OP is right: they do not yet demonstrate Turing completeness. I imagine, but don't actually know, that they would be about as expressive as DFAs, which are a few steps below Turing machines.
He builds the adders from ANDs, ORs and NOTs. With these, you can also build flip-flops, which are memory.
(Actually, it's enough to have only NAND (or NOR) to build all logic circuits, including memory)
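A quick sketch of that claim in Python (mine, purely illustrative): an SR latch made of two cross-coupled NANDs, i.e. memory built out of nothing but NAND.

    def NAND(a, b):
        return 0 if (a and b) else 1

    class SRLatchNAND:
        """Two cross-coupled NAND gates; inputs are active-low (0 = assert)."""
        def __init__(self):
            self.q, self.q_bar = 1, 0
        def step(self, s_n, r_n):
            for _ in range(4):               # let the feedback loop settle
                self.q = NAND(s_n, self.q_bar)
                self.q_bar = NAND(r_n, self.q)
            return self.q

    latch = SRLatchNAND()
    print(latch.step(0, 1))  # set   -> Q = 1
    print(latch.step(1, 1))  # hold  -> Q stays 1 (this is the memory)
    print(latch.step(1, 0))  # reset -> Q = 0
    print(latch.step(1, 1))  # hold  -> Q stays 0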
Trying to understand the excitement around this. Is it uncommon for games to be Turing complete? I imagine a lot of modern games are complex enough to pass a Turing-completeness check.
It's more or less a feat of problem solving: given a specific set of limitations in the simulation, can you recreate and solve this computational problem?
For some games, like Minecraft (redstone) and Factorio (logic gates) it's easy, but for others it's difficult enough that you get to see some creativity. This is one of those latter instances.
Interestingly, Minecraft only added redstone to explicitly allow logic gates after users had found out how to create them out of flowing water and other select blocks.
Minecraft construction is weird. Minecraft's reason for existing is that people think it's fun to make simple things mind-numbingly hard by using the crudest, lowest-level building blocks, each block simulated with horridly monstrous gobs of computing power.
So it's the worst of both worlds -- an expensive simulator for tediously manipulating low-level primitives. Why not manipulate primitives efficiently, or use high-level constructs?
Then serious Minecraft constructors build mods and tools to automate the tedium back out anyway, so they design whatever nice things they want using high-level tools, but then execute and inspect the builds on inefficient computing systems with cryptocurrency levels of wastefulness.
Why not just make whatever thing you want out of good (virtual or physical) materials in the first place, and then put it under the microscope/oscilloscope/decompiler/hex editor to marvel at its complex structure?
I am always on the lookout for something like Minecraft, but geared towards construction on a large scale without the restriction of a first-person view, i.e. having an RTS-style zoomable God camera.
What's partly amusing and partly astounding is the time this person took to do it, re-do it, test it, write it up and prove it. I think that's where most of the excitement comes from. We get excited by things that are theoretically possible but that no one would waste their time doing, until someone does.
Idea for the sci-fi story: after the civilization collapse, the only computing device that has survived is an arcade machine, with a difficult game that accidentally is Turing complete. People need to program it, so they can use the results to reboot the power/transportation/medical equipment/etc.
I really enjoyed that part of the book. If I remember correctly, the soldiers were just representations of the Trisolarans as they were too alien for us to identify with. The actual Trisolarans emitted light to build their living computer.
Doesn't Turing completeness require conditional loops?
Considering the fact that he has AND and OR gates, he can represent conditionals, but he doesn't show it. And the fact that it always terminates by design also means it solves the halting problem, which by definition means it isn't Turing complete (I think?).
He built an adder, which does always complete (by design). But having the ability to build AND and NOT gates (which he demonstrates) should be enough to build any computer from just those two components[1]. I'm not familiar enough with the game to comment on the correctness of the AND + NOT implementation itself though:
If NOT is modeled by flooding a power plant and causing it to turn off, would it go back to being on after a while if we stop actively flooding it? I.e., if the input goes to 0, does the output go back to 1 or not? If this doesn't hold, then it's not a proper NOT gate to begin with (it turns 1 to 0 but doesn't turn 0 to 1) and the implementation is flawed.
The sewage plant will stop pumping out sewage if its power is gone. The sewage that floods the power plant will then drain away; it is only flooded if there is a continuous influx of sewage. So yes, it looks like this is a proper NOT gate because the power plant is flooded iff the sewage plant's power is on (modulo propagation delays).
What do you mean by "it always terminates by design?"
However, as the comments to the post say, the author should provide a proof that the "game physics" allows one to construct a functioning latch from these logic gates.
Conditional loops are not essential: the lambda calculus is TC despite lacking any loop primitives.
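A concrete illustration in Python (a sketch of my own, not anything from the thread): looping recovered from plain lambdas via a strict fixed-point (Z) combinator -- no while, no for, no named recursion.

    # Z combinator: a call-by-value fixed-point operator built from lambdas alone.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # Factorial without any loop construct or self-reference by name.
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

    print(fact(10))  # 3628800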
In theory you can build a latch using only NAND gates, so it should be possible unless the gates that are demonstrated are flawed somehow; see my sibling comment.
I could very well be dead wrong about this, but isn't any game that allows you to construct the equivalent of a transistor turing-complete with enough work?
Kind of, but the interesting part is finding out how to make the transistor in the first place. These things usually take a few years to be found despite the popularity of such games.
Essentially it means that Cities: Skylines could compute anything that it is possible to compute, given enough time. It is therefore functionally equivalent to the computer you're typing on. In reality, like all computers, it has memory constraints which limit what it can do. A true Turing machine needs infinite memory, which is impossible.