How can mindless mathematical laws give rise to aims and intention? (fqxi.org)
48 points by mathgenius on Dec 14, 2016 | 54 comments



The theme seems weirdly phrased. It's like asking "How can a tree give rise to the color green?" Their attempt to clarify didn't really clarify. They're looking for a causal relationship between physical law and thought process (because "aims and intention" are thoughts).

Perhaps causality itself is the wrong tool to apply here. Thanks to various readings over the past year, I'm starting to believe causality as an approach to understanding is overrated. There's a bit of Golden Hammer to the way we use the tools of scientific inquiry handed down from Descartes and Newton.


> various readings over the past year

Yes, what were these?

I tend to agree with you about causality. The attempts to understand how the classical world arises from quantum laws seem particularly boneheaded to me. I guess people (scientists) don't want to let go of causality because who knows what chaos and mayhem will befall the world if we give up on this. And the philosophers have a term, "causal closure", which seems to be axiomatic in a lot of their discussions. But they just don't want to talk about any kind of mystical stuff that might seep in through the gaps.


Two books in particular have me thinking in this direction... Drift into Failure, by Sidney Dekker, which is about understanding failure in highly complex systems (aircraft, nuclear reactors, etc), and The World Beyond Your Head, by Matthew B. Crawford, which is really a philosophy book, but has substantial criticism of Enlightenment thinking (which includes reductionism, from Descartes/Newton, the basis of the scientific method).

Note that neither of these authors argues that the scientific method is wrong (as some new-agey types do), but rather that it is incomplete, and a poor tool for understanding the real world in many situations.


> how the classical world arises from quantum laws seem particularly boneheaded

What classical world do you refer to? Newtonian physics? That's well understood, the general area of study is called thermal physics.

Are you talking about how the brain works? The biologists understand the meso-scale structures and the microscale structures, and the wiring. What they lack is a certified, detailed diagram, which is admittedly hard, but it's hard like understanding the wiring diagram of an ARM chip is hard.


> That's well understood, the general area of study is called thermal physics.

I don't agree at all. Thermal (classical) physics may be consistent with quantum physics but this does not at all show how a classical world arises from the quantum.


Not thermodynamics, thermal physics. You very literally start from quantum states and work your way up to large collections, about which you make estimates that are highly consistent with the real world. Perhaps "statistical mechanics" is a better term.
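
For anyone wanting the concrete bridge, a minimal textbook sketch of "start from quantum states and work your way up" (standard statistical mechanics, not tied to any particular source in this thread):

    Z = \sum_i e^{-E_i / k_B T}, \qquad F = -k_B T \ln Z, \qquad \langle E \rangle = -\partial_\beta \ln Z

where the E_i are the quantum energy eigenvalues and \beta = 1/(k_B T): the thermodynamic quantities on the right are computed directly from a sum over quantum states on the left.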


I suspect that a detailed diagram of a brain would not be understandable at this point, though it would probably be a big help in getting there. While all of biology is nominally reducible to physics, that is not how to understand it.


> The attempts to understand how the classical world arises from quantum laws seem particularly boneheaded to me.

Why? They have been completely successful. The reduction from QM to newtonian mechanics is very easy; you just turn the action way up so that you can't see the quantization gaps between actions anymore.
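
One standard way to make the "turn the action way up" trick precise is the stationary-phase argument in the path-integral picture (textbook material, stated here only as a sketch):

    \langle x_f | e^{-iHt/\hbar} | x_i \rangle = \int \mathcal{D}x \; e^{iS[x]/\hbar}, \qquad S \gg \hbar \ \Rightarrow\ \text{only paths with } \delta S[x_{\mathrm{cl}}] = 0 \text{ survive}

i.e. when the action is huge compared with hbar, the rapidly oscillating phases cancel everywhere except near the stationary path, which is exactly the classical trajectory.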

QM is not acausal. Even in QFT where we have interactions that look time-reversed, they still aren't acausal.


> The reduction from QM to newtonian mechanics is very easy

This is just not true. For example, when entanglement is involved all of this goes out the window. Indeed, the next few decades of quantum computing are going to be completely non-reducible to newtonian mechanics.

> QM is not acausal.

I disagree entirely. QM is exactly acausal. Also known as the measurement problem.


> For example, when entanglement is involved all of this goes out the window.

Fair enough, this is sort of a counter example to the "increase the action" trick, but it does work for any local effects.

> Also known as the measurement problem.

There is no need to violate causality to solve the measurement problem. Even superluminal solutions don't allow for the superluminal transmission of information, so causality is preserved. There are huge families of solutions that don't require superluminality at all (einselection, many worlds, etc.).


> violate causality

I'm not suggesting to violate causality, I'm saying there just is no cause.

> to solve the measurement problem

I don't think there is any solution. The physical laws of the universe are incomplete, they do not determine what "happens." There are constraints in the form of probabilities, so it's not like anything can happen at any time. But what actually happens, it's spontaneous, un-caused.

> huge families of solutions

This is a vast area of research that has spanned decades and yielded very little in the way of solutions. Hence my use of the term "boneheaded" above.


> There are constraints in the form of probabilities, so it's not like anything can happen at any time. But what actually happens, it's spontaneous, un-caused.

Ah, your problem is that you only seem to be aware of the Copenhagen interpretation. There are other interpretations available that work just as well, but don't have anything as inconvenient as nondeterminism.

> This is a vast area of research that has spanned decades and yielded very little in the way of solutions.

No, you are wrong here.

https://en.wikipedia.org/wiki/Copenhagen_interpretation
https://en.wikipedia.org/wiki/Einselection
https://en.wikipedia.org/wiki/Many-worlds_interpretation
https://en.wikipedia.org/wiki/Objective_collapse_theory
https://en.wikipedia.org/wiki/Transactional_interpretation


It seems like the essays they want would be similar in spirit to Douglas Hofstadter's books: "Gödel, Escher, Bach" and "I Am a Strange Loop"

https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop


Just for kicks, I'd suggest that anyone with an understanding of software and an interest in how thought (or AI) works should set aside six months or so to read GEB carefully, and absorb the ideas in it. It's a lot to wrap your head around, but it will change how you view the world.


GEB is a textbook on AI that's just trying to come at it sideways.

The problem when you're trying to attack a problem like AI - where we know practically nothing about what the actual solution is - is that anything you write about what you think the solution is will predispose people not just to work down that avenue, but to think about the problem in abstractions that will ultimately turn out to be wrong.

GEB is an incredibly good book in that it picks out problems that are absolutely at the heart of solving AI (ambiguity in natural language is a huge one) and elucidates them incredibly well _without_ predisposing the reader as to what the solution might be.


> GEB is a textbook on AI that's just trying to come at it sideways.

That's because it's not a book on AI at all [0]. It talks about it a lot. And sets up a lot of fundamental ideas for it. But it's as much a book about AI as Moby Dick is a book about whaling.

It's great reading if you're interested in AI (edit: If you're on HN, you'll probably love the book /edit), but if you pick it up expecting an AI textbook, or a book about AI, you may be disappointed. Or at the very least, won't get the full message.

Another book I'd recommend is "The Mind's I" by Douglas Hofstadter and Daniel C. Dennett. It's a collection of essays about AI and philosophy, relating to the idea of the "self", with several pages of reflection after each essay by Hofstadter and Dennett. I'm about halfway through so far, but it was worth it alone just for chapter 5, "The Turing Test: A Coffeehouse Conversation". Also the Ant Fugue from GEB is one of the featured essays.

[0] Though you can't really fault anyone for believing that. Hofstadter complains in the intro of the 20th anniversary edition (and several interviews) that everyone thinks the point of GEB is something different. And it's never what he intended the point of GEB to be.

Specifically, he says "GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter. What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle?"


I'm aware. I've read most of what Hofstadter has written - I also didn't say that it was only a textbook on AI or even that it was explicitly intended that way.

But I do maintain that the book is about precisely the topics that you need a deep understanding of if you want to think productively about AI. In that sense, it's a damn good textbook on AI, whether it was intended that way or not.


Well, yeah. It's about AI because it's about cognition and the basis of self. Bit of a wider scope :p


From what I've heard, Hofstadter wrote "I am a Strange Loop" because too many people were describing GEB as a "collection of essays" or musings on unrelated topics, when he wanted it to be a single exploration of the concept of intelligence. "Strange Loop" covers pretty much the exact same ground, just from a much more direct angle; they're both worth reading, depending on your mood.


I agree. When I read GEB, it was clear to me after a while that it wasn't about G, E, or B; it was really about how intelligence emerges out of self-reference. One of the best books I've read (though challenging). Not often that books in this genre win the Pulitzer.


If their point is 'behavior is plausibly mindless but experience implies that there's something more to consciousness', then they're asking the hard problem of consciousness but doing a bad job of it (like everyone else).

It's also possible that they don't understand computation at all -- but once again, a lot of people approach the hard problem of consciousness without separating it from computation.


> The goals of the Foundational Questions Institute's Essay Contest (the "Contest") are to: Encourage and support rigorous, innovative, and influential thinking about foundational questions in physics and cosmology;

> Identify and reward top thinkers in foundational questions; and,

> Provide an arena for discussion and exchange of ideas regarding foundational questions.

Cool McCool!


In short:

Mindless mathematical laws encode all kinds of patterns.

Our brains also encode patterns in them. These patterns mirror the outside world.

Once your brain models "the state of the world" it can also model the configuration space of the world: the ways things could be.

An aim or intention is an ordering in that configuration space. "I could have many things for breakfast; what I'd prefer is X" is an ordering on your internal representation of breakfast-space.
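
A toy illustration of that idea (my own sketch in Python, not from the essay; the option names and scores are invented): an "aim" here is just a ranking over a represented space of possibilities.

    # Hypothetical "configuration space" of possible breakfasts.
    breakfast_space = ["toast", "oatmeal", "eggs", "nothing"]

    # The agent's internal preference: a score over its representation of each option.
    preference = {"eggs": 3, "oatmeal": 2, "toast": 1, "nothing": 0}

    # "What I'd prefer is X" is just the maximum of that ordering.
    ranked = sorted(breakfast_space, key=lambda option: preference[option], reverse=True)
    print(ranked)     # ['eggs', 'oatmeal', 'toast', 'nothing']
    print(ranked[0])  # the "aim": eggs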


This is a model. At best it answers the "how". It doesn't answer the deeper "why", which I think is what the essay should try to address.


Just read Bert Kappen on delayed reward. There is optimal behavior between being driven by (stochastic) environmental input and executing control. Optimal in Bellman's sense.

Apart from that: why optimal behaviour arises should not be a question. :-)

Why there are goals at all is probably also governed by entropic forces on the flow of information. Read up on empowerment by Polani et al., or study the robots from Ralf Der.
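
For reference, "optimal in Bellman's sense" refers to the Bellman optimality equation (standard control/RL textbook material):

    V^*(s) = \max_a \Big[ R(s,a) + \gamma \sum_{s'} P(s' \mid s,a) \, V^*(s') \Big]

i.e. the value of a state under the optimal policy is the best immediate reward plus the discounted expected value of wherever that action leads.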


This is basically a re-phrasing of the Hard Problem of Consciousness - explaining how and why we, as material beings, have subjective, phenomenal experiences.

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


Oh, I think I have some material in my Google Docs on this. Might as well package it up and enter it for a chance at the prize.


Is this contest even worth entering as an autodidact? Looking at past contest winners, the majority seem to hold a PhD.


If you're interested in the question, it's probably worth writing the essay even if you think you have no chance of winning.


A quick read of the high scoring answers from past years shows that there were many crackpots. So I would say it can't hurt to apply (source: I have a PhD in quantum physics, and many top essays made no sense)


Top prize is ten grand (!) so I expect the essays submitted will be higher quality than your average online contest. I wonder how well I could budget my time over the next few months... this is roughly what I majored in, but it might be bad for my health to plunge back into a student's paper-writing schedule.


While those with something to say may be motivated by the prize to put extra effort in, the prize will probably attract many with nothing to say.


> How can mindless mathematical laws give rise to aims and intention?

By the wonders of self-replicating adaptive systems, tending to their existence through intelligent behavior and sexual reproduction. Meaning arises from the context of optimizing their own lives (based on rewards and RL).

We, humans, have 3 levels of self-reproduction - the cellular level, the organism level, and the mental (cultural/technological/AI) level. Each of them operates its own game of survival and adaptation. There is a fundamental difference between "mindless mathematical laws" and self-replicators - the latter have agency, are embodied, and participate in a larger world.

All of these conclusions are immediately visible from the paradigm of reinforcement learning, but are not so clear from mainstream consciousness philosophy. They're still fighting zombies and musing about bats in Chinese rooms (armchair philosophy at its finest) while the RL guys beat the best human at Go.
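
To make the RL framing concrete, here's a minimal tabular Q-learning loop (my own toy sketch; the 2-state "world" and reward numbers are invented, nothing to do with Go): the "aim" the agent ends up with is just whatever the reward signal has carved into its value table.

    import random

    def step(state, action):
        # Hypothetical 2-state world: taking action 1 in state 0 "finds food".
        if state == 0 and action == 1:
            return 1, 1.0   # next_state, reward
        return 0, 0.0       # everything else: back to state 0, no reward

    Q = [[0.0, 0.0], [0.0, 0.0]]       # value table: Q[state][action]
    alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

    state = 0
    for _ in range(5000):
        # epsilon-greedy: mostly exploit current "preferences", sometimes explore
        if random.random() < eps:
            action = random.randrange(2)
        else:
            action = max(range(2), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Bellman update: nudge the estimate toward reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

    print(Q)  # in state 0, action 1 ends up with the highest value: the learned "aim"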


That's like saying "How do computers compute?" "Well, because we built these giant fabs and factories to build them".

You're answering the wrong question.


No, there is no difference between human survival/replication and purpose. That's the recurrence, the reentrant loop that bootstraps humans.

Of course individual humans might make their own values and act on them, but they only exist as a byproduct of the survival/reproduction process, and are formed by this process.


You're the one talking about purpose. The rest of us are talking about mechanism.

You're handwaving away the problem, but that doesn't mean there still isn't a problem there we don't understand yet.


what is the problem?


It's The Hard Problem.

Suppose we eventually know everything about the body and mind, that some day science can simulate and predict the motion of every atom in your body. Thus, all your behavior is deterministic. That still doesn't explain why there's an "ego" there to experience it all.


or a consciousness to observe the ego.


The question is why did survival and replication arise? Rocks don't try to survive or replicate.


"They can't. Please send my prize money to..."


I just wrote something on this subject on facebook... suppose this would be a good place for it.

TL;DR though - we aren't ready to try to answer such questions directly yet.

---------

We are self programmable Turing machines: the core problems (and frustrations!) of both AI and understanding the human brain are encoded in that statement. Frustrations, because it's an infuriatingly simple statement that's obviously true but we're nowhere near actually understanding.

On the one hand, it's almost a tautology if you take the Church-Turing thesis seriously. That is: there are no hypercomputers, there are only computers - Turing machines, and they're all fundamentally equivalent except in the details of capacity or performance. We can compute, thus we are also Turing machines.

But we are self programmable Turing machines: we're able to create new programs for other Turing machines (proof by example: any software engineer), and we're able to modify the existing programming for our own Turing machines (i.e. learn new things: the set of new things we can learn, or ways we can change our own behavior, has no fixed bound).

But Turing machines are deterministic: their output is completely determined by their inputs and the code they run. Completely deterministic: if a Turing machine appears to have random or nondeterministic outputs, either it's running a pseudorandom number generator (thus deterministic), or it had random input (e.g. a hardware random number generator), and if you fixed the input the output would also be fixed.
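
(A trivial illustration of that point, using nothing but Python's standard library: fix the seed - the "input" - and the apparently random output is fixed too.

    import random

    # Two PRNGs given the same seed produce identical output streams:
    a = random.Random(42)
    b = random.Random(42)
    print([a.randint(0, 9) for _ in range(10)])
    print([b.randint(0, 9) for _ in range(10)])  # same list: the "randomness" is deterministic

The only way to break this is to feed in genuinely random input from outside.)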

So, we have this rather baffling conundrum - on multiple levels! - of how in the hell to reconcile all this. We have a proof by example that it is possible to construct a Turing machine with outputs that are not only effectively nondeterministic, but unbounded in complexity and potentially more complex than the Turing machine that created them: that this should be possible at all runs counter to the intuition of every programmer who's ever written a line of code.

For the first baffling thing: regular Turing machines have deterministic outputs, and while we are subject to the same rules and math that applies to any other Turing machine - so this has to be true for us too, in the strictest sense - it is clearly also true that in a very deep way our outputs are not deterministic, and not restricted in complexity.

By not restricted in complexity, I mean that it would be uninteresting if you had a Turing machine that produced nondeterministic output but it was all just white noise, or a string of random numbers. If that was all, there wouldn't be much to fuss over - all you need is a cryptographic PRNG and in practice you can get a seed anywhere. We, on the other hand, can produce things with actual semantics and structure - there's a big difference between spitting out some random numbers that seem to come from nowhere, and pulling a proof for the infinitude of the primes out of thin air.

That is, we're non deterministic in a very deep way: with a regular computer program, you can predict what will happen when you give it inputs and model the results, and if your model is correct you'll never be surprised - imagine sending packets to a network server, you always know more or less what you're going to get back.

(Digression: fundamentally, the model that tells you what response you'll get is a program that is in some way equivalent to the original program - it must be a Turing complete program, else it would not be able to capture all the possible outputs of the original Turing complete program. You can learn a lot about how natural languages work, and also about how intelligence must work, by applying this insight: first, note that natural language is (inherently) ambiguous, and it is also (like programming languages) Turing complete. When we construct a sentence that we're about to say to someone else, fundamentally (there are shortcuts most of the time, but this is really going on) we are resolving this ambiguity by constructing a (turing complete!) model of how the other person thinks, and then finding a sentence that, when run as a program on that model, will produce the desired output - that is, the sentence that will convey to them what we want them to understand. And when we're reading or listening to other people, we are treating those sentences as programs to be executed and run - that we run in our own models that we construct in order to figure out what the other person means.

This is fundamentally different from how the computer programs we construct communicate: they send packets back and forth with fixed, predefined meaning. When humans communicate, in programmer speak we are running untrusted code from other people inside our own brains!).

But the fact that we're non deterministic is, I think, the easiest part to swallow: it's pretty clear that we make use of and rely on non determinism, so where we get the initial seed is uninteresting (it's a safe bet that evolution has provided us with the equivalent of hardware random number generators).

Programs constructing other programs though - and not in a deterministic way like a compiler, but genuinely new programs - that is baffling, though.

Modern machine learning - deep learning - does appear to actually be starting to scratch the surface of this: we're starting to figure out how to construct programs that can discover, on their own, the structure - the grammar - of things, to a degree that's actually starting to look promising. This is certainly a prerequisite for any "self programmable Turing machine", but there's still a hell of a lot more we don't understand...


You take "determinism" too hard. In fact brains are stochastic and local noise can and do influence it all the time. It's meaningless to think about deterministic brains. Even if the brain were physically deterministic, it still includes too much noise in it's internal processing (as a regularization process) to be easy to predict.

I think meaning comes from survival, treated as a game theory problem - agent, world, actions, rewards. Determinism or the lack of it is a false lead. What does it matter, when we are embedded in the universe, which is so interconnected (both by interaction and quantum entanglement). You'd have to simulate the whole universe to predict any piece of it. And where would the computer running the universe simulation sit?

Better to think in reinforcement learning concepts. Our values are survival and reproduction (another kind of survival). The first implies the ability to move about and act in the world, socialization, learning, cooperation, and even conflict with aggressors. Everything we do is in the service of self-survival and the survival of our genes. Our values come from them - thus, the intentionality problem is solved.


> You take "determinism" too hard.

Isn't he really just claiming that there's no room for free will? Is there ever a state where the next immediate thing to happen is not determined by that state?


No, I'm not really claiming anything about free will.

The problem of free will is mostly just how to define it in a way that makes any sense at all. But to the degree that it does make sense I would say we clearly have it.

What else does it mean for something to have the ability to decide?


> What else does it mean for something to have the ability to decide?

Is there any evidence for free will? Or maybe actual free will vs perceived free will is meaningless.


I explicitly talked about that.

Basically, of course we use stochastic processes but that doesn't change the fundamental problem.

We are still fundamentally computers.


Connected to many other highly random or highly complex behaving agents.


There is no hypercomputer. (The strong form of the Church-Turing thesis).

That doesn't change the problem one bit. Everything interesting that's going on in our brains is fundamentally just computation.


It seems the essence of your argument is: "We are Turing machines. Turing machines are completely deterministic. Our outputs are not deterministic. This is a baffling conundrum."

If we are Turing machines, we are Turing machines with noise and errors. (Proof: do large mathematical calculations by hand.) This explains the lack of determinism.

The remainder of your essay seems to argue that our nondeterminism is "deep", but I don't understand the distinction. You state there's a big difference between random numbers from a cryptographic PRNG (which is in fact deterministic) and generating a proof that primes are infinite. But I don't see why computers can't generate "deep" nondeterminism, whatever that is.


Man, there's still way too much Dunning-Kruger, even on Hacker News.


Why did you post this comment in response to your own post?


When I commented, my original post had been downvoted to -1 without any replies. Irritating.


it's an essay competition.



