Hacker News

>The whole system is built so that everything mostly works, most of the time, usually recovers, and it doesn't matter much whether this cell dies or that mitochondria malfunctions.

What makes you think we can't design reliable software systems that way? In fact, I think it has already proven to be a remarkably good idea: http://erlang.org/




Heh, I actually cut out a segment where I started describing a system I could build that would work like that, starting with Erlang. (I actually program professionally in Erlang, though not exclusively.) It got too long and parenthetical.

It's still not the same. There's just no biological equivalent to a bit of code that is dividing by zero or referencing a file that doesn't exist, or any of several other errors I've made that have brought enormous swathes of the supervision tree down because the supervisors keep restarting crappy code. Erlang adds this sort of biology-style robustness at the top of the stack; biology itself works with it at the bottom of the stack. That changes everything. Programming with massive unreliable parallelism may indeed someday happen, but it's a long road between here and there.
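To make the failure mode concrete, here's a toy Python sketch (all names invented for illustration, loosely modeled on Erlang's restart-intensity rules): a supervisor restarts a crashing child, but when the code itself is broken, restarting never helps, and the failure escalates up the tree.

```python
import time

class SupervisorGaveUp(Exception):
    """Raised when a child exceeds its restart budget, Erlang-style."""

def supervise(child, max_restarts=3, window_seconds=5.0):
    """Restart `child` on crashes; give up (and propagate the failure)
    if it crashes more than `max_restarts` times within `window_seconds`."""
    crashes = []
    while True:
        try:
            return child()
        except Exception:
            now = time.monotonic()
            crashes = [t for t in crashes if now - t < window_seconds] + [now]
            if len(crashes) > max_restarts:
                # Restarting can't fix broken code: the failure
                # escalates up the supervision tree instead.
                raise SupervisorGaveUp("restart limit exceeded")

def buggy_child():
    return 1 / 0  # deterministic bug: restarting never helps

try:
    supervise(buggy_child)
except SupervisorGaveUp:
    print("subtree down")  # prints "subtree down"
```

A transient fault (a flaky network call, say) would be absorbed by the restarts; a deterministic bug just burns through the budget.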


Well, we can find or create equivalences.

Someone I knew did a master's thesis on the male reproductive system. He told me that males have 10-15 largely independent biological pathways for producing sperm. From nature's perspective the point is presumably that if you can still breathe, you should be able to reproduce :)

If we were to put the same level of effort into a file reading module, we would have the files replicated over 10-15 different systems with file reading code written by a dozen different people in a dozen different languages, all using different heuristics to locate a similar file (or a backup) if the original wasn't found. Add some sort of selection mechanism to pick the best result from the 10-15 return values, and you would have a very resilient file reader :)
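A toy sketch of that idea in Python (the pathway functions, the ".bak" convention, and the scoring rule are all made up for illustration): several independent "pathways" try to produce the file's contents, and a selector picks the best of whatever came back.

```python
import os

def read_exact(path):
    with open(path) as f:
        return f.read()

def read_backup(path):
    # hypothetical convention: a backup copy lives at path + ".bak"
    return read_exact(path + ".bak")

def read_case_insensitive(path):
    # heuristic: look for a similarly named file in the same directory
    d = os.path.dirname(path) or "."
    target = os.path.basename(path).lower()
    for name in os.listdir(d):
        if name.lower() == target:
            return read_exact(os.path.join(d, name))
    raise FileNotFoundError(path)

PATHWAYS = [read_exact, read_backup, read_case_insensitive]

def resilient_read(path):
    """Run every pathway, collect the survivors, and select the 'best'
    result (here: simple majority, breaking ties by length)."""
    results = []
    for pathway in PATHWAYS:
        try:
            results.append(pathway(path))
        except OSError:
            pass  # a dead pathway doesn't matter, as long as one works
    if not results:
        raise FileNotFoundError(path)
    return max(results, key=lambda r: (results.count(r), len(r)))
```

Even this three-pathway toy shows the drawbacks mentioned below: three times the code to maintain, and the "selection mechanism" is a policy decision all on its own.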

I can't imagine we will ever want to write code like that by hand - drawbacks include development cost and maintenance hassle, and the system becomes very hard to understand and debug.

But it's still an interesting approach. In computer systems I suppose we could bring it about by some variant of evolutionary programming. ( http://en.wikipedia.org/wiki/Evolutionary_programming )
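For flavor, a bare-bones version of that loop in Python (the fitness function and all parameters are invented for the example): mutation-only evolution with elitist survivor selection, roughly in the spirit of classic evolutionary programming.

```python
import random

def fitness(x):
    # toy objective: how close is x to 42? (stand-in for "reads files well")
    return -abs(x - 42.0)

def evolve(pop_size=20, generations=200, sigma=1.0):
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # classic EP: every parent produces one mutant, no crossover
        offspring = [x + random.gauss(0, sigma) for x in population]
        # survivor selection: keep the best pop_size of parents + offspring
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop_size]
    return population[0]

random.seed(0)  # fixed seed so the toy run is reproducible
best = evolve()
```

Evolving a single float is trivial, of course; evolving working file-reading code would mean defining fitness over program behavior, which is where the real difficulty lives.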


Yeah, that's pretty much how I was thinking of it. Enormous effort, even if you do get to use evolution. And consider the whole cycle of building a web page: organically retrieve a file, organically open a connection to a database (with an organic protocol, of course) to organically retrieve some organic data, organically convert it to some sort of organic representation (HTML is too rigid; we'd need some sort of probabilistic representation or something), and organically render it in a browser. The complexity is simply enormous, and the payoff? For enormously more computational power, you get a net decrease in the reliability of the whole process.

I could see how AI could use such a thing, especially since the best intelligence we know works that way. But in general? It seems less than awesome.


Not on evolutionary time scales :-)


Precisely. Another example is Dynamo-like clustered database systems. They are designed with redundant servers so that if one crashes, the system as a whole isn't affected.

There are also examples of both (Erlang and Dynamo) combined: Riak and Cloudant. These systems run multiple Erlang processes on multiple redundant nodes. Processes or nodes can die without ill effects, and often the system knows how to heal itself.
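A rough sketch of that kind of redundant read path in Python (versioning simplified to a single counter; real Dynamo-style systems use vector clocks, and the parameter names here just echo the usual N/R/W terminology): ask the replicas, succeed once R of them answer, and return the freshest value.

```python
def quorum_read(replicas, key, r=2):
    """Query replicas in turn; succeed once `r` of them answer,
    returning the value with the highest version number."""
    answers = []
    for replica in replicas:
        try:
            answers.append(replica[key])  # (version, value) pairs
        except KeyError:
            continue  # a dead or stale replica doesn't fail the read
        if len(answers) >= r:
            break
    if len(answers) < r:
        raise RuntimeError("quorum not reached")
    return max(answers)[1]  # freshest version wins

# three replicas; one is stale, one crashed (missing the key entirely)
replicas = [
    {"k": (2, "new")},
    {"k": (1, "old")},
    {},
]
print(quorum_read(replicas, "k"))  # prints "new"
```

The point is the same as with Erlang supervision: individual nodes can die without the read failing, as long as a quorum survives.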


If you graphed those systems as they were graphed in the article, they would look like the Linux kernel, not a shell. Those things don't work like cells; they add the duplication at the highest layer possible. It's not something pervasively shot through the entire architecture, all the way down to the simplest primitives, like it is in biology. Those systems run the same code on what may very well be the same basic hardware, code that still has the same basic structure of core primitives and higher layers and can still crash like a program can. They are slightly more robust against some types of errors, but are still not even remotely like a cell.


Google is built that way.





