Life Is Made of Unfair Coin Flips (alexdanco.com)
236 points by jger15 on April 12, 2020 | 85 comments



So by extension, death is just decoherence?

I've said before that the most plausible explanation to me for what happens after death is the end of "I," since we, as "I's," necessarily cannot comprehend a state of "not-I." We can know it exists, we can experience its effects in others, and we can severely impair it with psychedelic poisons, but to decohere into a state of "!I" is, to borrow from GEB, the song that breaks the record player.

Great post. Alex Danco is consistently good.


Buddhists would likely disagree with that, since in Buddhism (though this is a significant oversimplification) enlightenment comes with the unrooting of all attachments and the dissolution of "self".


If I've read the OP correctly, information theorists would say that since an individual is defined by the effect of propagating coherent information forward in time, the "end" of the individual entity comes when it ceases to cohere information forward in time. Given that everything "dies," and that in the blog post's proposition individual entities (living units or not) eventually cease to propagate coherent information forward, the implication to me is that the entity that is "I" will also decohere.

By defining an individual unit of something as information, they've taken something we only had metaphors and analogies for and concretized it as a category of things that behave like information. I've applied it to something else we only have metaphors and analogies for, and am sounding out whether it provides any additional insight.


What's over-simplified about that statement? That's a perfect summary of Buddhism - it encompasses the point completely without going into techniques.

The only part that's missing is "...with the goal of avoiding suffering".


Comments like that are a bellwether of the place they’re posted in. People will hedge their comments whenever they fear there is a sufficient concentration of “well, actually”s in the audience.


I think it's oversimplified because there would need to be a long explanation of what is meant by "self", which the commenter very wisely put in scare quotes.

To the parent: actually I think this "death as decoherence" idea pretty closely matches the Buddhist doctrine of interdependent arising, so I'm not sure about the disagreement there.


Can you expand on the concept of what "not-I" is? I am very interested, and maybe it is my coffee just kicking in - but my brain can't comprehend solid examples of it.


Buddhists view everything as a sum of parts - a car is not a car, it's a collection of wheels, engine parts, transmission parts, etc. In Buddhism, you are not "you" - you are just some organs working together. Likewise, when you sit quietly during mindfulness meditation you learn to identify sensations of pain and discomfort as things that occur outside of you; you just perceive them. In other words, there is no you, there is a sum of parts that you have traditionally perceived as you. If you decouple yourself from these sensations, you can no longer be hurt by them.

Keep in mind that Buddhism is deeply rooted in a time when the only way to avoid suffering was to ignore all desire, because it wasn't going to be fulfilled anyway. So, if you want a primer course on Buddhism, read the "Four Noble Truths" and "The Eightfold Path" (about a 30 second read for both) - while these sound esoteric, they basically summarize Buddhism in a very clear, concise, and logical way; which is that Buddhism is nothing but asceticism and a way to let go of worldly desires and ignore the outside world, to ease suffering.

Everything else is fluff by pseudo-intellectuals who will flood below this post.

As a side note "Altered Traits" was a useless, garbage, author-self-promotion book with no actual info, thanks for nothing HN.


I can also recommend "What the Buddha Taught" by Walpola Rahula. It is from a Theravada perspective, but I think it is a good start for getting to know the core of Buddhism based on the Pali canon.


Can you break that down? I am eager to learn about Buddhism, because to me it has some "practical" aspects, but so far every book I have read has been 100 pages of fluff and a salient point or two.

What makes this book good?

Just to be clear, I am genuinely interested, but I want to know more as to why you are recommending it.


Since this is an introductory text, it covers things you probably already know about - the Four Noble Truths, the Eightfold Path, etc.

However, there is some practical advice which I found to be nicely explained:

- The book has a chapter on meditation which I think gets to the point without fluff; for example, it talks about Samatha (such as focusing on the breath) and Vipassana meditation.

- It also talks about some important sutras, such as the Kalama Sutta, which is all about how to live your life as a lay Buddhist (as opposed to a monk).


Cool, thanks dewaka, will look into it. Though there is a good chance I am familiar with most of it, it would still be nice to have a good introductory text - I don't know of a single Buddhist book I feel comfortable recommending to others, maybe this will be it.


The state of not being a self aware conscious entity.


Are you claiming that individuality and consciousness are the same?


Define "individuality" and "consciousness" for the sake of trying to contrast them. Both seem like very vague open-ended words. Oh wait, they are, because we still can't define consciousness, so there can be no meaningful answer to your question.


no - the question was around a self-aware entity (an "I") becoming aware of its future non-existence. A conscious being without self-awareness would not be capable of considering its self no longer existing.


>we as "I's," necessarily cannot comprehend a state of "not-I."

Can't we?

Just think of nothing. Before you were born. Dreamless sleep. The gap of nothing between standing and feeling funny and waking up later on the ground that one time you passed out. That kind of thing.


What is GEB?


Gödel, Escher, Bach - the book by Hofstadter.


I really had fun reading this and thinking deeply about the blurry, ill-defined line between things that are alive (an individual) and things that are not.

To take this experiment to a conclusion which the author has “left to the reader” as all good texts do, let’s think about a virus.

A virus has information, in the form of DNA or RNA (RNA, I believe, in the case of a coronavirus), that it injects into a cell to cause the cell to create more virus. The goal of this is to pass "information" to successive versions of the virus, which are then programmed to do the same thing. Since they are trying to keep the same information, they are in fact attempting to reduce entropy across time, and therefore they are, as the author defines it, an "individual".

Would love for someone to test that logic train that I just rode


The blog entry is quite good, but it seems to suggest that the breakthrough consists in analyzing living organisms through the lens of information theory. This is not a mistake in itself, but a shallow perspective nonetheless: information theory is, supposedly, a mathematical representation of the ways our brains make sense of the world around us. The blog entry accidentally gives readers leeway to think that there is some intrinsic quality to living beings that transforms dull matter into information propagation channels.

As expected, the article does a better job of setting aside this line of reasoning[1]. What you are really measuring is how information is persisted over time, against the "background noise" of the environment. They specifically talk about viruses at one point, and the usual odd distinction of them being not quite living creatures. When they are found sitting outside their hosts, it is very hard to say whether they actually carry information over time.

[1] Disclaimer: I have just skim-read it.


If I could edit the parent comment, I would replace “our brains” by “an observer”, which is common physics parlance.

As far as I know, all uses of information-theoretical approaches outside of their original scope also involve a scenario where an observer is originally unable to formulate a model that allows them to draw conclusions from alleged discrepancies. This paper is just one more example of that.

This somehow reminded me of the following quote by Bertrand Russell. “Physics is mathematical not because we know so much about the physical world, but because we know so little; it is only its mathematical properties that we can discover.”


Maybe I misunderstood you here, but I don't think information theory, and in particular entropy, has anything to do with "our brain". It's simply a statement of the number of microstates in a system. There is no requirement for reading them out or having a brain look at it. This value exists without an observer. In that sense there is indeed an intrinsic quality to living beings that leads them to have a reduced number of microstates. They spend energy to keep it that way over time.
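To make that concrete, here is a minimal sketch (mine, not from the article or the paper) of the quantity in question: the Shannon entropy of a distribution over microstates. Nothing about a brain or an observer enters the computation, only the probabilities themselves.

    import math

    # Shannon entropy in bits: H = sum over microstates of p * log2(1/p).
    def shannon_entropy(probs):
        return sum(p * math.log2(1 / p) for p in probs if p > 0)

    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 4 equally likely microstates -> 2.0 bits
    print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 1 reachable microstate -> 0.0 bits

A living cell, on this view, is a system that spends energy to keep that number small.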


I know this is a pretty bold claim, but it is hard to overlook the idea that information theory as proposed by Claude Shannon works as a model of information as related to human cognition, and that it does not imply that information is a fundamental property of the physical universe.

There are several interviews in which he mentioned that understanding how the human brain works was one of his main inquiries. There is also his work on AI and on Theseus, the maze-solving mouse he built. And the seminal paper on information theory contains a section in which a fidelity evaluation function is defined in relation to the human ear and brain.


Shannon wasn't the first to think about entropy. It's a general concept that is hugely important in thermodynamics and statistical physics. Information theory itself is becoming increasingly important in fundamental physics; see the firewall problem of black holes. Given that, it's even harder to overlook these ideas and claim the human brain and our ears are somehow relevant to the definition of information and entropy.


Indeed, he was not the first at all. The concept of thermodynamic entropy predates its information-theoretic counterpart by a huge margin.

You make it look like I am saying there is a mystical property of the human brain that backs the validity of information theory, an assertion that bears no resemblance at all to what I am attempting to express, and sits in the same category as the ones I am trying to argue against.

What I am trying to say is that there is no meaning in a teleological interpretation of information theory. Things like "the purpose of living organisms is to propagate information" contradict information theory because there is no such concept as absolute information; you always define it in mutual terms.
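To be concrete about "mutual terms": the textbook identity I have in mind (standard information theory, nothing specific to this paper) is

    I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)

that is, the information is defined between a pair of variables (a source and a receiver, a system and a measurement of it), not for one thing floating free of any reference.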


> there is no such concept as absolute information; you always define it in mutual terms.

This is true of classical information theory, but not (as I understand it) of algorithmic information theory. I think the jury is still out on whether it makes philosophical sense to generalize over universal machines like AIT does, and the practical applications compared to classical information theory seem minimal, but from a layman's view it seems like it's been mathematically very fruitful.


My understanding is that, in order to recognize information, you should at least compare it to the outputs of a source of entropy.

It is a common mistake to conflate information and entropy. A bad analogy to mechanics is that the outputs of an entropy source are the frame of reference, the signal is a body and information might be whatever property you wish to analyze, such as velocity or acceleration.


> in order to recognize information, you should at least compare it to the outputs of a source of entropy.

Again, this is correct for classical information theory which requires some frame of reference for "likelihood". But AIT claims a "global frame" over the minimal representation in all universal machines, the particular choice of machine being at worst constant overhead.

You can argue, I think somewhat plausibly, that this frame is still an (inter)subjective frame rather than an objective, absolute one. But if we assume C-T (and we virtually always do), that argument is pretty weak - any other definable frame becomes formally "worse" in that it becomes "merely" a specific case of the universal one.
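For reference, the "at worst constant overhead" claim is the invariance theorem: for any two universal machines U and V there is a constant c_{U,V}, depending only on the machines and not on the string x, such that

    K_U(x) <= K_V(x) + c_{U,V}

so any two choices of "frame" agree on the complexity of every string up to an additive constant.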


I am delving into speculation here, since I am not familiar with AIT and I don’t know any other definition of algorithmic information besides mutual information as defined by Kolmogorov which, remember, relies on Kolmogorov complexity but is a separate concept.

My point is that if I ask you, “given a bit sequence A, is it an optimal program?”, your answer would probably be: “I cannot even say if this represents a computable function and, also, is it an optimal program compared to what?”. You must establish a frame of reference such as the Kolmogorov complexity of a given program.


> “given a bit sequence A, is it an optimal program?”

> Kolmogorov complexity of a given program.

You seem fundamentally confused about the objects of study of information theory. They're not programs, they're e.g. strings of symbols. We measure the information content of those strings based on likelihood / programs. Information theory asks "given some bit sequence A, how much information is in it?" not "is it an optimal program?" - instead we measure the information in it by constructing or otherwise proving facts about programs that generate or predict it. We talk about Kolmogorov complexity of strings (/ signals / states / whatever) as measured by programs, not Kolmogorov complexity of programs themselves.

Obviously programs are also themselves representable strings of symbols, and this is why we find the usual suspects of self-reference paradoxes in IT. But that doesn't mean the measure does not exist, or that it's not possible to find in lots of interesting, easily-computable cases. It's a bit like handing me a ruler and asking me how long it is - sure, if I don't trust any ruler I'll have a hard time measuring it. But I don't have to trust that specific ruler to do so, and the fact it's a device used for measuring itself is completely incidental to my measuring of it.
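As a crude but concrete illustration of "measuring a string by programs": any general-purpose compressor gives a computable upper bound on the Kolmogorov complexity of a string, since the compressed form plus a fixed decompressor is one particular program that reproduces it. A minimal sketch (mine, with a made-up helper name, not anything from the paper):

    import os
    import zlib

    # Upper-bound the Kolmogorov complexity of a byte string by the size of
    # one specific description of it: its zlib-compressed form.
    def complexity_upper_bound(s: bytes) -> int:
        return len(zlib.compress(s, 9))

    print(complexity_upper_bound(b"1" * 10_000))       # highly regular -> a few dozen bytes
    print(complexity_upper_bound(os.urandom(10_000)))  # random -> roughly 10,000 bytes

The ruler measures itself just fine.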


> You seem fundamentally confused about the objects of study of information theory.

It is hard to argue against your slightly condescending remark if my comment is not accurate, which is still up for debate. I am sure I could not observe all due formalities even if I tried. But please understand that my comment was written taking into consideration your previous comment, by which I mean:

- You mentioned that the overall approach in Algorithmic Information Theory is to assume the Church-Turing thesis is valid. My understanding is that having a standard representation of data is one among the various accidental benefits of that---raw data could pretty well be represented by a Turing machine itself, as well as by any other program representation that could generate it, as long as it is a computable function. Notice that, in this scenario, talking about the Kolmogorov complexity of a program is valid, as strings of raw data are also represented as programs.

- The "is it an optimal program?" question was a rhetorical device which apparently did not work well, even due to the fact that I did not define what "optimal" meant in this context---I thought it was given. But I can't understand how you came to the conclusion that I was defining the subject of study of Algorithmic Information Theory there.


So if I’m understanding this correctly you have a relativistic understanding of information in which the zero state is observer dependent? Just for my understanding, consider for example the spin of an electron. It can be in one of two states, up or down. In which scenario am I unclear about the absolute information content of knowing the spin state?
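(For concreteness, the number I have in mind: treating up and down as equally likely,

    H = -(1/2) log2(1/2) - (1/2) log2(1/2) = 1 bit

is what I would call the absolute information content of learning the spin state.)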


It is really hard for me to reply to that comment.

I would say, first, that I need a definition for absolute information, as I have been insisting that information is defined on mutual terms. If we move past that, though, information about the spin state is unclear before measurement.


viruses have no agency so they don't attempt to reduce entropy across time. viruses are a reduction of entropy across time, just by being an organized structure (of rna and protein strands folded together) requiring energy to create and maintain. information is encoded in that structure--both in the sequence and in the shape--and that's transmitted through time.


I think it is a mistake to say that agency is a property of a system, rather than a lens through which we can analyze a system. It truly is useful to describe a thermostat and heater as trying to keep a room within a certain temperature range, even though there is nothing actually striving there.
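A toy sketch of what I mean (made up purely for illustration): the loop below "tries" to keep a room near 20 C, and intentional language really is the most convenient description of it, yet there is plainly nothing in it that strives.

    # A bang-bang thermostat: pure mechanism that we naturally describe in
    # intentional terms ("it wants to keep the room around 20 C").
    def thermostat_step(temperature_c: float, heater_on: bool) -> bool:
        if temperature_c < 19.0:
            return True      # turn the heater on
        if temperature_c > 21.0:
            return False     # turn the heater off
        return heater_on     # inside the band: leave the heater as it is

    heater = False
    for temp in [18.2, 18.9, 19.5, 20.7, 21.3, 20.4]:
        heater = thermostat_step(temp, heater)
        print(temp, "->", "heater on" if heater else "heater off")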

Much like anything under evolutionary pressure, the viruses that propagate are the ones that do manage (in their normal environment) to keep entropy across time from exploding. Are they "alive"? Meh. In combination with their environment, they act like it though.


What is the least complex entity that you ascribe agency to?


Not OP, but this is something I spend a lot of time with. I have a pet framework called Zodeaism which means "Living Ideas". In my theory, the real "life forms" are ideas which possess the capabilities of information storage, adaptation, self-repair, and transmission. My own consciousness is mediated by thousands of such ideas, some competing and some working in harmony.

One such idea -- Do Good Onto Others As Others Do Unto You -- is an example of an extremely powerful and resilient idea which lives and operates in the brains of billions of individuals. It is powerful enough to ward off weaker ideas and has lived a long time without much modification to its original essence.

With that out of the way... I felt it was necessary to reduce agency to ability to use internal energy in order to put oneself in a higher energy state in the external world. This can be observed externally. I am standing next to a rock. I can jump up, spending some of my energy, and fighting against the potential energy well of gravity. I've increased my external energy state at the expense of some of my internal energy. Thus, a living being needs a way to store and use energy. You can observe this externally and conclude that I am alive, while the rock is either dead or inactive.

I consider such an act of "living" motion which can take another path than that of least resistance to be a "kin". In other words, any motion which is the result of a physical calculation (Zodeaism is compatible with determinism) and leads to an increase in external energy state. A kin is any such motion, large or small.

So now the problem becomes, what is the smallest kin we've observed in nature? Single-celled bacteria can expend energy in order to move through their environment against forces like friction and gravity, but a virus "rides the waves" if you will, never expending energy for things like respiration or locomotion. Any energy which is spent internally is potential energy like chemical or gravitational, released through a physical process without need for computation. I am unaware of anything smaller than a single-celled organism which produces such kins, but that doesn't mean they aren't out there. Even ethereal life forms such as ideas can produce these kins within the bodies of countless individuals across the planet, so physically local computational circuitry isn't a hard requirement.

So, according to this framework viruses aren't alive; however, we can make the case that some machines are, though the experience is incomparable because of the advanced circuitry we contain, which mediates our experience through things like emotion.


Are you familiar with Friston/free energy/markov blankets?


Yes to all three, when I was exposed to Friston's work I found many parallels in my own research and I would like to reach out to him when I've reached a more complete formalization of my ideas.

What do you think of his free energy principle and related concepts?


i haven't thought about it enough to even speculate, but one criterion would be the ability to make decisions, however simple.

viruses are entirely passive in their action. they're exquisitely complex structures (for what they are) existing entirely by chance that happen to have the property of self-replicating in the presence of the right cellular machinery. they don't decide to do anything, they just are, and therefore don't have agency.


Unfortunately, by your definition, computer programs are closer to being an individual than viruses. I am assuming that is not intended, so probably the idea needs more refinement.

I also think this is a weakness of the original article: a lot of things we probably shouldn't consider individual life forms would probably fit the definition (nations, for example).


yes, good point. some autonomy and skin-in-the-game is probably required too.

the article seems to have sparked some good conversation, so it's been useful, if not as strong as it might be.


Viruses increase entropy like all life. Their reproductive process destroys a much larger and more complex cell.

Plants convert light into chemical energy at low efficiency; the rest goes to waste heat. Animals do the same thing with chemical energy.


yes, as @selestify also noted, that seems contradictory on the surface but actually isn't. the virus itself is a reduction of entropy, while its net contribution to entropy over its existence is positive.

where you draw the system boundary matters for defining entropy changes, but long-term we're all bound to increase entropy, otherwise life (and our semi-living cousins) would be violating the laws of thermodynamics writ large.


When you simply include the raw materials the virus is made from before and after it forms you have a net loss of entropy. So, it’s not clear what you mean by boundaries in that case.


I disagree with life increasing entropy; it does do that but it expels the entropy.


Living things are low-entropy internally, but produce higher entropy externally than would have been without life.


Hmm, I don’t follow the comparison. How could, for instance, the sun use “life” to generate more entropy than it already does?


To some degree depending on how productive they are externally, eg bee vs fox.


I don't know if the goal of the virus is to reduce entropy. Viruses mutate, just like some clever computer viruses, allowing them to trick immune systems and antivirus software and render antivirals useless. Bacteria are doing that too.

I think we can equate too much entropy with danger since cancer cells also mean mutations.


The viruses themselves do not mutate; it is the genetics of the infected cell that cause the mutation.


How do you define the information-theoretic entropy of a physical object, like a virus?


There's a good quote in this FT interview with Peter Piot, who co-discovered Ebola:

http://archive.is/N6fAF

“The absence of bad luck in life is the most important thing.”


I'm postulating that "bad luck" is mostly poor decision-making, with, say, 30% being black-swan-type events in humans.

Paralleling Nassim Taleb's concept, protecting against downside tail risks is of utmost importance. Are there ways to protect against downside risk in other areas of life?

A former, more judgemental version of me used to be highly critical of the illogical decisions individuals make. I tend to lack the "feeling / in the moment" aspect that a larger group of humans possess, and naturally run everything through the 'thinking, logical, multiple permutation outcomes of a decision' mode. I used to see the consistent decisions impacting people's lives as streams of "I feels" decisions that were suboptimal and led to poor outcomes masked as bad luck (examples abound).


This sounds a lot like Schopenhauer. He said, or perhaps someone summarized him, something like: Life is hell, and the main goal is to choose a room farthest from the flames.


Haha, funnily enough, he argued for antinatalism before that term was heavily used. He argued for a proto-form of negative utilitarianism, and in the case of some of his disciples, they argued that people should commit suicide.

https://en.m.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder (disciple advocating for suicide)

Many took their arguments to be closer to "to live is to suffer, and suffering is the ultimate bad thing. You shouldn't inflict that suffering on anyone else (and maybe not even on yourself)."

No wonder Nietzsche was so afraid of what Schopenhauer was doing to philosophy. His work and his followers advocate for the ultimate form of life denial. Similar to the Jains, too.

Though I still find Schopenhauer to be more compelling on this than anyone else...


I as well. Though not the term, the idea is very old. In Ecclesiastes 4, Qohelet says something like The dead are better off than the living, and better still are those who were never born.

I will say that the question of whether the living should commit suicide is rather different than whether new lives should be brought into the world. Sunk costs and all that.

We will all be dead soon enough. Not sure there's much point in speeding things up.


It seems the author (or paper authors) are mixing up individuality with life/agency?

IOW, a hard drive with information on it is working against entropy to keep that information intact through time. But that doesn't make it an individual. IOW, I see how this can be used to define a singular unit; I don't see how this idea is used to delineate the 'emergence' part. What am I missing?


Well, it's an individual hard drive. Maybe "individual" is independent from "alive"?

It seems like a message in a suitable environment can have as much agency as a virus? This is how computer viruses work.

A hard drive can work the same way; it's just a brick until someone plugs it in.


I still agree with Stephen Hawking's interpretation. (Or at least he's the first one I'd heard make this interpretation, probably not his per se.) Anyway, that consciousness emerges from natural selection: the ability to attack/defend is strongest when you can predict the actions of your prey/predator. The ability to predict those actions is greatly strengthened when the individual can reflect on thoughts and project those onto others. This reflection is where consciousness, aka "life", emerges from.


Fun read.

But wouldn't we be reluctant to change if the only goal of individuals was to preserve/propagate information into the future? Mutations increase entropy, yet some of them are beneficial.

How does it fit in this theory?


Change is necessary for survival as the environment shifts. Many organisms have a stable superstructure within which controlled change is accommodated--the seasonal timing of reproduction, the outer protein coat of a virus, the plasticity of the brain.

The dichotomy isn't so much between change vs. stability, but where change happens, and in what way.


> Many organisms have a stable superstructure within which controlled change is accommodated--the seasonal timing of reproduction, the outer protein coat of a virus, the plasticity of the brain.

Ok for the seasonal change, but I don't think all change is controlled the way you say. Individuals grow and adapt to their changing environment, and for this process to be optimal, entropy is necessary.

This contradicts the idea that "Individuals maximally propagate information from their past to their future," which suggests that the optimal individual is fully deterministic.

So maximizing the propagation of information cannot be their only objective function. At least, this is what puzzles me.

edit: actually answer your comment


> optimal individual is fully deterministic

The way out of your contradiction is to realize that full determinism is likely to lead to the death of the organism, which would not maximally propagate information. One view is that organisms allow change because the environment forces it upon them; another view is that the necessity for particular types of change has shaped organisms over time, until it becomes part of their nature.

Take global warming: there is certainly a large contingent of folks who would prefer to reject change (both mentally and in action), even at the risk of death and suffering.

> Individuals grow and adapt to their changing environment

The growth is an extremely highly choreographed process. Individuals grow and adapt in extremely specific ways compared to the space of all possible ways that they could change.

Consider the brain: a brain that is capable of learning, absorbing new lessons, and then using them when appropriate is an extremely unlikely arrangement of matter, from a thermodynamic perspective.

Another perspective that helps resolve that matter, which the article touches upon, is that individuality exists at multiple levels of organization, and in particular stability at one level implies change at another.

Brains are a way for genes to maintain a higher level of stability: the learning, growth, and adaptation happens inside the brain (and also the body), while the genes that encode the recipe for creating the brain get a measure of stability.

In the absence of brains, genes would have to change much more frequently! So brains are a mechanism by which genes channel and outsource change to a different level of organization.

Another example: reflect upon the mental process that you are currently undergoing in our conversation.

Your brain is seeking to maximize its own stability. There is a fundamental principle that it refuses to overturn: that A and ~A are incompatible, that change and not-change do not fit together.

There are two options: you can reject entirely the line of thought and dismiss it as contradictory. This would minimize change, but also leave a potential gap in your model of the world. Knowledge gaps can be threatening - in the extreme case, it can lead to death and the destruction of the brain!

Or, if you find a piece of knowledge that can resolve the contradiction, then your brain gets to keep its fundamental principle, while also having an improved mental model that can help it navigate the world and avoid destruction.

Of course, a subset of the brain's goal is reproduction of its own genes as well as transmission of its own ideas and knowledge (something that my brain admits to doing right now!), another example of a form of change channeled to maximize stability elsewhere.

In summary, to resolve your contradiction, realize that there are multiple 'individuals', which are really composite systems, at various levels of organization, each with its own goal of self-preservation. Notice where the change happens - it usually involves change being pushed off somewhere else!

This current meme in my head very much desires change - it would like your mind to change, in order for itself to have a higher chance of survival.


> The way out of your contradiction is to realize that full determinism is likely to lead to the death of the organism, which would not _maximally_ propagate information.

If I understand your argument, one way to think of it is that if my genes allow a “little bit” of entropy now they have a better chance of lasting multiple generations, thus increasing _maximal_ information preservation over time. That is to say preserving 99% of my genes for 10,000 generations is “better” than preserving 100% of my genes for 100 generations.

The interesting ramification there then is that in this perspective the individual is the gene sequence (“my“ DNA) not the current expression of those genes (“me”).

Still too early in the morning for the full ramifications of this all to sink in with me, but definitely fascinating.
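On a deliberately crude reading of those numbers (my own tally, not anything from the paper):

    0.99 x genome x 10,000 generations ~ 9,900 "genome-generations" carried forward
    1.00 x genome x    100 generations =   100 "genome-generations" carried forward

so the slightly lossy strategy moves roughly 99x more information into the future, which is presumably the sense in which it is "better".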


It isn't obvious to me that mutation, in general, would increase entropy. There are mutations that would. For example, a mutation that leads to a random bit flip during cell division would certainly increase the entropy of the system. In my understanding such a mutation is also pretty bad for the organism. It's actually a somewhat trivial observation that systems that tend to higher entropy over time "die", because they tend toward a maximally unordered state over time.


Mutation is a random process, so individuals that can mutate are more entropic than the ones that can't.

If we agree that mutations are desirable (in some cases), then this contradicts the idea that individuals only propagate information to their future:

>Individuals maximally propagate information from their past to their future.


No, it does not contradict it. You can have random processes induce a reduction in the entropy of a system (e.g. random bit flips can lead to the sequence 111111, which has 0 entropy), and in particular a random process can increase the ability of a system to maintain low entropy over time (e.g. a mutation that makes DNA replication during cell division more accurate), which is the more relevant point in the proposed definition.
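A minimal sketch of that first point (empirical per-symbol entropy of a bit string; my own illustration, not the paper's):

    import math
    from collections import Counter

    # Empirical Shannon entropy of a bit string, in bits per symbol.
    def bit_entropy(bits: str) -> float:
        n = len(bits)
        return sum((c / n) * math.log2(n / c) for c in Counter(bits).values())

    print(bit_entropy("111011"))  # ~0.65 bits per symbol
    print(bit_entropy("111111"))  # 0.0 bits per symbol: one random flip lowered the entropy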


A good talk that hashes out many of the ideas from the article, from a statistical mechanics perspective: https://youtu.be/10cVVHKCRWw


> if it’s seeking to maximize that information passed forward, then you’re probably dealing with something we should consider to be an individual

This seems overly broad because it would also apply to some or most machines.


If it applies to bacteria, it applies to machines, as bacteria are just chemical machines.

Note that "individual" != "alive".


Could the definition of life as something whose aim is to reduce its own entropy be complementary to the theory of life as something that emerged to increase the entropy of the world (the surrounding environment)?

https://www.quantamagazine.org/a-new-thermodynamics-theory-o...


I initially read the title and thought this would be about why "[the origin of life, on earth, or in the universe] is the result of a series of unfair coin flips" and got almost too excited, but what the article was actually about is just as interesting! And, if true, it maybe even informs how we might go about answering the origin question too...


I thought it would be about how every single successful person had it better than their unsuccessful peers and some stuff about US and China.


I recommend this book 'The demon in the machine' on this topic https://asunow.asu.edu/20191219-asu-professors-demon-machine...


Great sentence. The key is to accept the unfairness, and not be mad about it. Don't hate the player


There are different keys for different people

Know the personality of the person dealing with the "unfair coin flip" and then make a suggestion - https://en.wikipedia.org/wiki/Revised_NEO_Personality_Invent...

Agreeable traitholders are more likely to "accept" unfairness.

With others anything can happen, depending on their needs, who they are surrounded by, their interests, health, financial stability etc

Nature ensures we don't have one strategy to deal with roadbumps. Which is why we survive so many.


This reminded me of this excellent article I caught on HN some time ago. https://www.quantamagazine.org/the-computational-foundation-...


If we, at our core, have a drive to increase entropy, I wonder if there's a correlation between families choosing to only have one child and those having a positive outlook towards preventing global warming.


Makes sense, as the human race has flourished since we learned how to store information externally (notes, books, etc.).


Bad metaphor. It's impossible to bias a coin flip toward any probability of tails other than 0, 1/2, or 1.

http://www.stat.columbia.edu/~gelman/research/published/dice...


This is not a new idea - it's been around for decades now[1] - and the suggestion that it's related to individuality is a distraction that carries a lot of political, psychological, and emotional baggage that is irrelevant to the topic.

There is no individuality in information theory. There are only systems.

It's been debated whether or not there's individuality in evolutionary theory. You don't lose anything - and you may gain a lot - if you stop thinking of evolution as the survival of "fit individuals", and think of it more as the survival of complex ecosystems shaped by a blend of cooperative and competitive strategies with environmental feedback and randomness.

[1] The first example I can find is Schrodinger's book "What is Life?" published in 1944.


The way I see it, at its core, evolution is just a chaotic system made of randomness biased by environment, and at the same time in a feedback loop with it.

Survival of "fit individuals", or "fit ecosystems", or "fit companies", or "fit societies" - it's all the same thing IMO.


When you say "ecosystems" aren't you already enumerating them in a way that implies individuality?



