NP-Complete Problems and Physical Reality (2005) (arxiv.org)
88 points by andyjohnson0 on July 14, 2015 | 33 comments



If you enjoyed this, I highly recommend both his book, "Quantum Computing Since Democritus", and his blog (http://www.scottaaronson.com/blog/). The tagline of the blog is: "If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once."


Wholeheartedly recommend this book as well. He gives a brief overview of classical complexity, how the quantum formalism works, then how quantum complexity works. It's really accessible to those with some complexity background.


Detailed notes and problem sets for a series of twenty-one(!) lectures based on the book are at [1].

[1] http://www.scottaaronson.com/democritus/


On the point of anthropic computing: it is important to note that you have not accomplished any _improvement_ to your condition by killing yourself; before as well as after, you had found a correct solution only with probability 2^-n. Anthropic computing is just "fancy guessing".

What is interesting about it to me is that the "true computational nature of reality" must be such that sufficient computation is available for you to run the _validation_ of your solution in logarithmic time an exponential number of times ("in parallel"), because otherwise you could not find yourself _at all_ in a universe where you knew you had found the correct computation. Which does make it strange that quantum physics does not seem to give us any way to "get at" all this computation that must be going on behind the scenes.

(Unless the "true nature" of reality is such that a subset of the wavefunction is being computed "for cheap". Collapse exonerated?)


By running the validation "in parallel", do you mean running it in each of the possible quantum "many worlds" in which validation takes place?

If so, aren't you implicitly assuming that these other parallel worlds somehow draw on a shared pool of computing resource? This might be true [1], but equally there may not be a "behind the scenes" of reality at all.

[1] How would we know? Run a timing attack on the underlying compute infrastructure?


I'm leaning toward the parallel universe hypothesis as well, except I'm thinking it's more like there are part-time processes that run in parallel, do their work, then exit after they come to some type of consensus of state. It would end up looking something like a bunch of containers running a game on your computer: processing all the other players' moves, arriving at consensus, sending that consensus to a central 'compute' spot where a global consensus takes place, waiting for a reply, then taking what the global central thing said was a valid state of things. Rinse and repeat. Stick it all in a blockchain in case there's a disagreement.

I keep pointing out to people that I can't make their beers (glass included) disappear while we're talking at a table. That this universe doesn't allow me to hack it (and presumably keeps others from hacking it) is a nice feature.


If the multiverse / physical reality has a "global central thing" then it has a scaling bottleneck, and we have a problem.


We wouldn't really notice if the simulation keeps getting slower though.


Not if that exists on another level of abstraction. Limits here would be good; there, not so much can be assumed.


How is killing yourself a correct solution to anything, and what is "n" in your equation? Humans are experience integrators with the rather interesting (and unique) opportunity to "emit" the useful integrations that make it easier (or possible) for others to perform a similar integration. We do it on a personal level with children and friends; and on an impersonal level with books and papers.


I'm not sure if you read the paper. Aaronson writes: "given a formula ϕ, guess a random assignment x, then kill yourself if x does not satisfy ϕ." In other words, the only universes in which you find yourself existing (alive) are ones in which you are in possession of a solution to ϕ, which you obtained easily by guessing.

It's a not-entirely-serious riff on the Anthropic Principle [1] and maybe the "Quantum Suicide" thought experiment [2]. It's clear to me that the meaning in the article is not that killing yourself solves life problems.

[1] https://en.wikipedia.org/wiki/Anthropic_principle

[2] https://en.wikipedia.org/wiki/Quantum_suicide_and_immortalit...
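The guess-then-die scheme can be simulated classically by postselection: guess a random assignment, and simply throw away every "branch" where the guess fails. A minimal sketch in Python — the formula PHI and all names here are made up for illustration, not taken from the paper:

```python
import random

# A toy CNF formula: each clause is a tuple of literals, where literal k
# means variable |k| is True if k > 0, False if k < 0.
# (Hypothetical satisfiable instance, chosen for illustration.)
PHI = [(1, 2, -3), (-1, 3, 4), (2, -4, 1), (-2, -3, -4)]
N = 4  # number of variables

def satisfies(assignment, formula):
    """Polynomial-time check of an assignment (dict var -> bool)."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def anthropic_guess(formula, n, rng):
    """Guess a random assignment; 'survive' only if it satisfies the
    formula. Returns the assignment on survival, None on 'death'."""
    guess = {v: rng.random() < 0.5 for v in range(1, n + 1)}
    return guess if satisfies(guess, formula) else None

rng = random.Random(0)
survivors = [a for a in (anthropic_guess(PHI, N, rng) for _ in range(1000)) if a]
# Every surviving branch holds a correct solution -- but survival happens
# only with probability (#solutions) / 2^n, so no work was actually saved.
assert all(satisfies(a, PHI) for a in survivors)
```

The postselection does exactly what the thread says: conditioned on survival you always hold a solution, yet the unconditional success probability is unchanged.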


The problem (well, one of them) is that there is no guaranteed way of killing yourself. For any sufficiently complex problem you will end up being more likely to near fatally maim yourself in the attempt to kill yourself for an incorrect guess than you will be to make a correct guess.


I started watching Robert Harper's Type Theory presentations on YouTube. In the first section he mentions that suffering is part of calculation, or something similar. If you kill yourself, you are basically exiting the suffering you bear here. The question then becomes whether suffering is present in all places you go. If it is, it just makes more sense to bear the suffering here and learn from it. Maybe you can learn to reduce it slightly, which will help out in your next lifetime (assumption).


You squeeze your probability mass into universes where the problem is solved, and therefore you are more likely to find yourself existing in those universes. What this means for your actual experience, is that as soon as you turn the quantum suicide machine on, you should expect to see it output a correct answer and not kill you. You can never observe the machine output an incorrect answer, nor can you observe your own death.

>Which does make it strange that quantum physics does not seem to give us any way to "get at" all this computation that must be going on behind the scenes.

Why should it?

Also if parallel worlds can interact with each other and aren't entirely separate, then we probably wouldn't exist. The first parallel world to develop a way of taking over other parallel universes would win. The optimal self replicator that can spread through all universes would take over everything.


"But another reason we believe P≠NP is that otherwise mathematical creativity could be automated!"

I have a problem with this. Just because the thought is unpleasant does not make it true or false. Besides, perhaps this is the very difference between automation and intelligence, perhaps the point at which you can ask this of a computer is the point at which you should no longer consider it a computer.


There is zero reason to believe that human-level mathematical creativity is computationally significantly more expensive than normal day-to-day thinking.


In terms of the computations performed in the brain by the mathematician, at the time the insight is achieved, it's not.

But appropriately accounted for, I think it is.

The relevant metric would be: "What do I have to increase in order to get more [interesting, new] theorems proved?" Is it as easy as having mathematicians work more hours? As having more people start working on them?

Intuitively, it is not so easy -- each insight needs exponentially more work as time progresses. An exponentially small fraction of humans is capable of producing novel results, and so on.

This, I think, is what Aaronson is getting at: if P = NP, if proving is as easy as verifying, then coming up with hard mathematical insights should be as easy as following a cookbook, or any other routine, mundane mental task.

But that doesn't seem to describe the world we live in, where getting mathematical insights has rapidly diminishing returns per unit resource invested.


Sure, it is certainly possible that mathematics can be automated, but mathematics is also over 2000 years old, and we are not much closer to automating it than the Greeks were. P=NP would not only upend complexity theory (a relatively new invention); it would also have profound consequences for all of human reasoning.


We already have commoditised math solvers for any number of (relatively) trivial problems.

What we haven't automated is mathematical creativity, which is a completely different class of problem.


The whole difficulty of the hard complexity questions such as "Is P=NP?" hides in the non-trivial, "corner" cases. Sure, we have SAT solvers that are by many practical standards really good. P=NP would imply that we can relatively easily find polynomially long proofs to many hard propositions, such as the P=NP problem itself! Or many other open mathematical problems.

Creativity is an anthropomorphic, abstract concept, a way of describing human ways to find solutions. What really matters is the proofs.
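The asymmetry this comment leans on — checking a proof (certificate) is cheap, finding one is not — shows up in any NP-complete problem. A small sketch using subset-sum; the instance and helper names are invented for illustration:

```python
from itertools import combinations

# Subset-sum is NP-complete: a claimed solution (a certificate) can be
# verified in linear time, while the obvious search is exponential.
# (Hypothetical instance, chosen for illustration.)
NUMBERS = [3, 34, 4, 12, 5, 2]
TARGET = 9

def verify(certificate, numbers, target):
    """Polynomial-time check of a claimed subset (list of indices)."""
    return sum(numbers[i] for i in certificate) == target

def search(numbers, target):
    """Brute-force search: up to 2^n subsets in the worst case."""
    for r in range(len(numbers) + 1):
        for idx in combinations(range(len(numbers)), r):
            if verify(idx, numbers, target):
                return idx
    return None

cert = search(NUMBERS, TARGET)
assert cert is not None and verify(cert, NUMBERS, TARGET)
```

If P = NP, the gap between `verify` and `search` collapses up to a polynomial factor — which is exactly why it would imply short proofs of hard propositions are easy to find.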


Or you should no longer consider yourself something else than a computer.


Right, it is automated. We are the machine that automates it.


It's just that the time complexity sucks.


It's not an appeal to consequences ("it would be horrifying if this were true") but to absurdity ("believing this would require us to believe all these implausible things").


A point that has become more obvious over the years is this: there are "soft" instances of NP-complete problems where you can find good approximations in reasonable time, and there are "crunchy" instances of the same class of problem where you don't find a good approximation.

Commercial ILP solvers (e.g. Gurobi, CPLEX) profit from the fact that quite often it is possible to formulate NP-complete problems and find good solutions in acceptable time. Similarly, many of the "physical" ways of solving NP-complete problems work OK for easy instances and get unwieldy fast when confronted with difficult instances.
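The soft/crunchy distinction is easy to see empirically in random 3-SAT, where hardness concentrates near a clauses-per-variable ratio of roughly 4.27. A small sketch (brute force on tiny instances; generator and names are my own, not from the paper):

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables, random signs."""
    clauses = []
    for _ in range(n_clauses):
        vs = rng.sample(range(1, n_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
    return clauses

def satisfiable(formula, n_vars):
    """Brute force over all 2^n assignments (fine for tiny n)."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {v: bits[v - 1] for v in range(1, n_vars + 1)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in formula):
            return True
    return False

rng = random.Random(1)
n = 10
frac = {}
for ratio in (2.0, 8.0):  # clauses-per-variable ratio
    hits = sum(satisfiable(random_3sat(n, int(ratio * n), rng), n)
               for _ in range(20))
    frac[ratio] = hits / 20
# Well below the ~4.27 threshold almost every instance is satisfiable
# ("soft"); well above it almost none are ("crunchy"), and near the
# threshold is where solvers struggle most.
assert frac[2.0] > frac[8.0]
```

Real solvers exploit exactly this: typical instances from applications tend to live far from the hard region, even though worst-case instances remain intractable.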


I remember a TED talk on the failed attempts to find cases in which nature solves an NP-complete problem. The paper must be an interesting read.



I posted this link because (a) it seemed like a good survey of the subject, and (b) I felt like I almost understood most of it. Does anyone have any suggestions for accessible further reading at the intersection of computer science and physics?


His Philosophy-Complexity paper is also fantastic, in case you have not read it already: http://www.scottaaronson.com/papers/philos.pdf

It has also been discussed most recently here: https://news.ycombinator.com/item?id=9061744


Thanks. I'll definitely be reading those.

Aaronson's talk on Computational Complexity and the Anthropic Principle [1] also looks interesting.

[1] http://www.scottaaronson.com/talks/anthropic.html


"If some physicist wants to continue the tradition of naming quantum gravity theories using monosyllabic words for elongated objects that mean something completely different in computer science, then I propose the most revolutionary advance yet: thread theory."


I'm looking forward to his take on the recent thing about memristor machines being able to solve NP-complete problems... it was on HN a couple of days ago.


From 2005



