
Shannon wasn’t the first to think about entropy. It’s a general concept that is hugely important in thermodynamics and statistical physics. Information theory itself is becoming increasingly important in fundamental physics; see the firewall problem of black holes. That makes it even harder to overlook these ideas and claim that the human brain and our ears are somehow relevant to the definition of information and entropy.



Indeed, he was not the first at all. The concept of thermodynamic entropy predates its information-theoretic counterpart by a huge margin.

You make it look like I am saying there is some mystical property of the human brain that backs the validity of information theory, an assertion that bears no resemblance at all to what I am attempting to express, and that sits in the same category as the ones I am trying to argue against.

What I am trying to say is that there is no meaning in a teleological interpretation of information theory. Things like “the purpose of living organisms is to propagate information” contradict information theory, because there is no such concept as absolute information; you always define it in mutual terms.
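To make the “mutual terms” point concrete, here is a minimal sketch (Python, with a toy joint distribution of my own choosing): mutual information is a property of a pair of variables under a joint distribution, never of one variable in isolation.

    import math

    # Hypothetical joint distribution p(x, y) over two binary variables.
    p_xy = {
        (0, 0): 0.4, (0, 1): 0.1,
        (1, 0): 0.1, (1, 1): 0.4,
    }

    def entropy(dist):
        # Shannon entropy in bits of a distribution given as {outcome: probability}.
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # Marginals p(x) and p(y).
    p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
    p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

    # I(X;Y) = H(X) + H(Y) - H(X,Y): defined on the pair, i.e. "in mutual terms".
    print(entropy(p_x) + entropy(p_y) - entropy(p_xy))  # ~0.278 bits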


> there is no such concept as absolute information; you always define it in mutual terms.

This is true of classical information theory, but not (as I understand it) of algorithmic information theory. I think the jury is still out on whether it makes philosophical sense to generalize over universal machines the way AIT does, and the practical applications seem minimal compared to classical information theory, but from a layman's view it seems to have been mathematically very fruitful.


My understanding is that, in order to recognize information, you should at least compare it to the outputs of a source of entropy.

It is a common mistake to conflate information and entropy. A rough analogy to mechanics: the outputs of an entropy source are the frame of reference, the signal is a body, and information might be whatever property you wish to analyze, such as velocity or acceleration.
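A toy sketch of what I mean by the frame of reference (Python, with made-up source models): the surprisal of the very same message differs depending on which entropy source you measure it against.

    import math

    message = "AAAB"

    # Two hypothetical source models over the same alphabet.
    source_uniform = {"A": 0.5, "B": 0.5}
    source_skewed = {"A": 0.9, "B": 0.1}

    def surprisal_bits(msg, source):
        # Total surprisal in bits, assuming i.i.d. symbols drawn from `source`.
        return -sum(math.log2(source[ch]) for ch in msg)

    print(surprisal_bits(message, source_uniform))  # 4.0 bits
    print(surprisal_bits(message, source_skewed))   # ~3.78 bits: same signal, different frame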


> in order to recognize information, you should at least compare it to the outputs of a source of entropy.

Again, this is correct for classical information theory, which requires some frame of reference for "likelihood". But AIT claims a "global frame" over the minimal representation in all universal machines, the particular choice of machine being at worst a constant overhead.

You can argue, I think somewhat plausibly, that this frame is still an (inter)subjective frame rather than an objective, absolute one. But if we assume C-T (and we virtually always do), that argument is pretty weak - any other definable frame becomes formally "worse" in that it becomes "merely" a specific case of the universal one.
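For reference, the "at worst constant overhead" point is the invariance theorem, which (roughly stated, glossing over machine details) says that for any two universal machines U and V

    K_U(x) <= K_V(x) + c_{U,V}    for every string x,

where the constant c_{U,V} depends only on the pair of machines, not on x. That is what licenses talk of a (nearly) machine-independent measure.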


I am delving into speculation here, since I am not familiar with AIT, and I don’t know any other definition of algorithmic information besides mutual information as defined by Kolmogorov, which, remember, relies on Kolmogorov complexity but is a separate concept.

My point is that if I ask you, “given a bit sequence A, is it an optimal program?”, your answer would probably be: “I cannot even say if this represents a computable function and, also, is it an optimal program compared to what?”. You must establish a frame of reference, such as the Kolmogorov complexity of a given program.


> “given a bit sequence A, is it an optimal program?”

> Kolmogorov complexity of a given program.

You seem fundamentally confused about the objects of study of information theory. They're not programs; they're, e.g., strings of symbols. We measure the information content of those strings based on likelihood / programs. Information theory asks "given some bit sequence A, how much information is in it?", not "is it an optimal program?" - instead we measure the information in it by constructing, or otherwise proving facts about, programs that generate or predict it. We talk about the Kolmogorov complexity of strings (/ signals / states / whatever) as measured by programs, not the Kolmogorov complexity of programs themselves.

Obviously programs are also themselves representable as strings of symbols, and this is why we find the usual suspects of self-reference paradoxes in IT. But that doesn't mean the measure does not exist, or that it's not possible to find in lots of interesting, easily-computable cases. It's a bit like handing me a ruler and asking me how long it is - sure, if I don't trust any ruler I'll have a hard time measuring it. But I don't have to trust that specific ruler to do so, and the fact that it is itself a measuring device is completely incidental to my measuring of it.
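As a rough illustration of "measuring strings by programs" (my own sketch, using an off-the-shelf compressor as a crude stand-in for a fixed universal description method): the true Kolmogorov complexity is uncomputable in general, but any compressed size gives an upper bound on it, up to a constant.

    import os
    import zlib

    structured = b"ab" * 500       # highly regular string of 1000 bytes
    random_ish = os.urandom(1000)  # incompressible with overwhelming probability

    print(len(zlib.compress(structured)))  # small: a short description suffices
    print(len(zlib.compress(random_ish)))  # near (or above) 1000: no shorter description found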


> You seem fundamentally confused about the objects of study of information theory.

It is hard to argue against your slightly condescending remark if my comment is not accurate, which is still up for debate. I am sure I could not observe all due formalities even if I tried. But please understand that my comment was written with your previous comment in mind, by which I mean:

- You mentioned that the overall approach in Algorithmic Information Theory is to assume the Church-Turing thesis as valid. My understanding is that having a standard representation of data is one of the various accidental benefits of that: raw data could just as well be represented by a Turing machine itself, or by any other program representation that could generate it, as long as it is a computable function (see the sketch after this list). Notice that, in this scenario, talking about the Kolmogorov complexity of a program is valid, since strings of raw data are also represented as programs.

- The "is it an optimal program?" question was a rhetorical device which apparently did not work well, even due to the fact that I did not define what "optimal" meant in this context---I thought it was given. But I can't understand how you came to the conclusion that I was defining the subject of study of Algorithmic Information Theory there.


So if I’m understanding this correctly you have a relativistic understanding of information in which the zero state is observer dependent? Just for my understanding, consider for example the spin of an electron. It can be in one of two states, up or down. In which scenario am I unclear about the absolute information content of knowing the spin state?


It is really hard for me to reply to that comment.

I would say, first, that I need a definition of absolute information, as I have been insisting that information is defined in mutual terms. If we move past that, though, the information about the spin state is unclear before measurement.
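To sketch why I say it is unclear (toy Python, numbers of my own choosing): the information you gain from the measurement is the entropy of your prior over up/down, and that is 1 bit only when the prior is uniform.

    import math

    def binary_entropy_bits(p_up):
        # Entropy in bits of a two-outcome (up/down) distribution.
        return -sum(p * math.log2(p) for p in (p_up, 1 - p_up) if p > 0)

    print(binary_entropy_bits(0.5))   # 1.0 bit
    print(binary_entropy_bits(0.99))  # ~0.08 bits: a strongly biased prior leaves little to learn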



