Hacker News
Every odd number greater than five is the sum of three primes (plus.google.com)
381 points by ColinWright on May 14, 2013 | 174 comments



One of the amazing things about mathematics is that you set out to work in one branch and wind up working in another. E.g., who knew before Riemann that number theory would often turn out to be an exercise in the calculus of one complex variable?

The example perhaps best known to non-mathematicians is that there were two formulations of quantum mechanics, namely in group theory and in partial differential equations, and they turned out to be pretty much the same thing.

--------------------------------

Come to think of it, that's been a great lesson for me in business life. I come in supposedly to consult about one kind of problem, and really wind up addressing a different subject entirely.


I find it more amazing that some seemingly completely unrelated things are somehow deeply (and unintuitively) connected, and in reality are two facets of the same object.


It seems like the constant flux that new discoveries provoke bears fruit in better and better solutions. This is utter joy, the quest for more knowledge. Evolutionary and peer-reviewed improvements :)


It is absolutely mind-blowing how much time and passion humans have invested in understanding the consequences of nine simple statements... and how hard those statements resist unveiling their secrets.

  0∈ℕ
  ∀x∈ℕ:S(x)∈ℕ
  ∀x∈ℕ:S(x)≠0
  ∀x,y∈ℕ:S(x)=S(y) ⇒ x=y
  (0∈X ∧ ∀x∈ℕ:(x∈X ⇒ S(x)∈X)) ⇒ ℕ⊆X
  x + 0 := x
  x + S(y) := S(x + y)
  x · 0 := 0
  x · S(y) := x · y + x

These are the Peano axioms [1] defining the natural numbers with addition and multiplication. And for at least 3000 years humans have been trying to figure out the properties of these natural numbers, all consequences of these definitions, and the end is not in sight.

[1] https://en.wikipedia.org/wiki/Peano_axioms
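The recursive clauses for addition and multiplication translate directly into code. A minimal sketch (the tuple representation and helper names are mine, purely illustrative):

```python
# Peano naturals: zero is None, S(x) wraps x one level deeper.
ZERO = None

def S(x):
    """Successor: S(x)."""
    return ("S", x)

def add(x, y):
    # x + 0 := x ;  x + S(y) := S(x + y)
    return x if y is ZERO else S(add(x, y[1]))

def mul(x, y):
    # x * 0 := 0 ;  x * S(y) := x * y + x
    return ZERO if y is ZERO else add(mul(x, y[1]), x)

def from_int(n):
    x = ZERO
    for _ in range(n):
        x = S(x)
    return x

def to_int(x):
    n = 0
    while x is not ZERO:
        n, x = n + 1, x[1]
    return n

print(to_int(add(from_int(2), from_int(3))))  # 5
print(to_int(mul(from_int(2), from_int(3))))  # 6
```

Everything else about the naturals is, in principle, a consequence of those few lines.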


To be fair, there are a lot of possible systems from which you could derive infinite complexity. There are an infinite number of true statements that can be discovered, after all. Most are not particularly interesting or useful. You could come up with proofs for more and more specific behaviors that occur in Turing machines under specific conditions. Or cellular automata. Or any number of arbitrary systems. But most wouldn't be relevant to the real world. The cool thing about numbers is that there are a lot of possible applications to the real world. But there are also an infinite number of trails you could follow with mathematics that lead nowhere useful at all.

The Automated Mathematician (Lenat's AM program) tried to explore mathematics and find interesting concepts the same way mathematicians do, using similar heuristics. It got fairly far, discovering numbers, primes, and Goldbach's conjecture. But eventually it just started discovering useless concepts which had no apparent relevance to anything and were pointlessly complex or specific.


It's not so surprising. This could also be summarized as "God created the integers."

But really the process is reversed: given arithmetic, what is the smallest axiom set that gives us the power to prove our theorems?

It's like, given all code, what is the smallest language we can use to express it. Which is sequences of 0s and 1s.


> It's like, given all code, what is the smallest language we can use to express it. Which is sequences of 0s and 1s.

This should be of interest: https://en.wikipedia.org/wiki/Binary_lambda_calculus


I wouldn't say they define the natural numbers. A more apt phrasing would be "describe the natural numbers." They are more like the laws of physics for the natural numbers.

Also, there are easily understandable theorems about the natural numbers not provable from the Peano axioms.

http://en.wikipedia.org/wiki/Goodsteins_theorem


That comes down to the (philosophical) question of whether natural numbers existed before the first human started thinking about them. Depending on this, the Peano axioms either just describe a part of the universe or they introduce something new. (I don't think it matters that natural numbers were known before the Peano axioms were formulated - before that we just defined natural numbers in a less formal way.)


How then are the naturals defined? By the Von Neumann construction? (0, {0}, {0, {0}}, ...)


I would say that the Von Neumann construction and PA try to capture the natural numbers. It is a philosophical stance which can be called platonism. What you say is called the formalist stance.


What I mean is that since Goodstein's theorem is provably true for the naturals, but is not a consequence of the Peano axioms, then the definition of the naturals used to demonstrate Goodstein's must be strictly stronger than the Peano axioms themselves. I was wondering what this definition might be.

I'm familiar with the distinction between formalism and Platonism, although I still haven't made my mind up yet :)


It is a consequence of Peano arithmetic (the Peano axioms plus the definitions of addition and multiplication); it is just not provable in this system.

Gödel's first incompleteness theorem [1] states this fact: no theory above a certain expressiveness (read: one that can express the natural numbers with addition and multiplication) can be both consistent and complete. Assuming Peano arithmetic is consistent, it cannot be complete, where complete means you can prove all true facts expressible in the system within the system itself.

The (standard) proof of Goodstein's theorem uses ordinal numbers [2] which are outside of Peano arithmetic and the Kirby–Paris theorem proves that there is no proof inside Peano arithmetic [3].

[1] http://en.wikipedia.org/wiki/G%C3%B6dels_incompleteness_theo...

[2] http://en.wikipedia.org/wiki/Ordinal_number

[3] http://en.wikipedia.org/wiki/Goodsteins_theorem#Proof_of_Goo...



Thanks!


Just imagine the day that one of Peano's axioms is discovered to not hold true under all circumstances. Perhaps in a parallel dimension or something.


I'm pretty sure these math efforts existed long before the Peano axioms were formulated.


Here's the research paper discussing and proving the conjecture: http://arxiv.org/pdf/1305.2897v1.pdf It's a 130-page research paper (excluding references), mostly containing proofs and formulas.


Can a mathematician explain the significance of this result please?


Briefly ...

To a large extent the primes appear to be distributed in a manner indistinguishable from random. Underneath, there seems to be no reason to believe that results like this should hold. If the primes were random then there might be numbers that cannot be expressed as the sum of 3 primes, but somehow every number ends up so expressible.

Why should that be true? Indeed, is it true? This result says it is true, but we really don't see why.

Understanding how the primes are distributed may have far-reaching implications for cryptography, and possibly even for solving things like the TSP. We just don't know, just as we initially never suspected that public-key cryptosystems were possible, and would involve primes.

Random matrix theory is another area where a lot of research was done just because people found it an interesting problem, and now there appear to be deep connections with practical physics that may allow us to further bend the world to our will.

Sometimes it's the chase that's exciting, never knowing what may turn out to have world-changing implications.


If I'm understanding correctly, you suggest that this is an example of the prime numbers not behaving randomly. I disagree.

Here's (more or less) the simplest possible random model of the primes. (It's called the Cramer model.) Take P to be a random subset of the natural numbers that includes each n>1 independently with probability 1/log(n). Then P is kinda like the prime numbers, though it lacks some structure it should have (e.g., it doesn't have the property that almost all elements are odd).

If you do this, then the expected number of ways to write a large n as the sum of two elements of P is on the order of n/log(n)^2, which is far enough from 0 that I bet (though I haven't checked) that Pr(every number >= 4 is the sum of two elements of P) is positive, and in fact probably quite big. This is kinda analogous to the Goldbach conjecture; note that it isn't only about even n, since in this model there can be lots of even "prime numbers".

I did a quick computer experiment, and it looks as if the probability is about 20%. In other words, if the primes were chosen at random according to the Cramer model, then about 1/5 of the time the appropriately mangled Goldbach conjecture would be true. (And the "odd Goldbach conjecture", which is what Helfgott has just proved, would be true substantially more often than that.)

Now, of course the Cramer model is way too simple. In particular, it doesn't know that almost all prime numbers are odd. So let's make it just a little smarter by declaring that 2 is always in P, the other elements of P are always odd, and that any odd n>1 is in P with probability 2/log(n). And now we should switch back to the original form of the Goldbach conjecture, looking only at even n. Well, according to my computer experiment the probability that the Goldbach conjecture holds if we use P instead of the primes is about 96%!

[EDITED to add: and if we force P to get 2 and 3 "right" instead of just 2, the probability goes up to about 99.5%. I fixed a few other things in this edit too.]

In other words: if the primes are basically random, then we should expect the Goldbach conjecture to be true.
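A sketch of how such an experiment can be reproduced (the limit, trial count, and helper names are my choices, not the parent's actual code, so the exact percentage will differ):

```python
import math
import random

def cramer_set(limit, rng):
    # Cramer model: each n > 1 is a "prime" independently with
    # probability 1/log(n). Since 1/log(2) > 1, 2 is always included.
    return {n for n in range(2, limit) if rng.random() < 1.0 / math.log(n)}

def all_sums_of_two(P, limit):
    # Is every n in [4, limit) a sum of two elements of P?
    return all(any((n - p) in P for p in P if p <= n - 2)
               for n in range(4, limit))

rng = random.Random(42)
trials, limit = 100, 2000
wins = sum(all_sums_of_two(cramer_set(limit, rng), limit) for _ in range(trials))
print(f"Goldbach-like property held in {wins}/{trials} trials")
```

With a small limit like this the estimate is rough; the parent's figures presumably came from a different setup.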


Of course, your computer simulation completely disregards the possibility of some very large (but rare) numbers (possibly of density zero) not being expressible as such a sum. You merely verified it for low finite integers. I am very unconvinced your percentages are accurate. In fact, they may be completely wrong, as this issue is the entire crux of the Goldbach problem!


My simulation disregards it, but I don't. In this sort of random model, the probability of any (specific) large number not being the sum of two "primes" is extremely small (and decreasing rapidly as the number increases), which means that the total probability of any (at all) large number not being the sum of two "primes" is also extremely small.

In other words: in this model, with probability very close to 1 all large enough even numbers are sums of two "primes".

Let's put a little flesh on those bones. I'll stick with the unmodified Cramer model for simplicity; the tweaked versions are fiddlier but not fundamentally different. The probability that n=a+b is a "good" decomposition is 1/log(a)log(b); this is smallest, for fixed n, when a=b; so the probability that any given decomposition "works" is at least 1/log(n/2)^2. There are n/2-2 of these, and they're all independent, so the probability that none of them works is at most (1-1/log(n/2)^2)^(n/2-2), which is approximately exp(-n/(2 log(n/2)^2)). So the expected number of n beyond, say, 1000 for which that happens is the sum of this for n from 1000 up. Unfortunately neither the sum nor the obvious approximation as an integral has a closed form, but we can compute it numerically; it's about 0.000279. That's the expected number of failures above n=1000, and of course that's an upper bound on the probability of at least one such failure.

So, the probability (in this model) of any Goldbach failures above n=1000 is at most about 0.0003, and therefore the probability of any Goldbach failures at all is at most that much bigger than the probability of any Goldbach failures up to n=1000.

Note that although this is a statement with probabilities in, the probabilities aren't fractions of the integers for which Goldbach fails.

The situation is analogous to the following. Suppose you flip a coin 10 times, then 20 times, then 40 times, then 80 times, etc., and for each group of flips you "win" if you get at least one head. Then, with probability about 99.9%, you always win: the probability of failure in successive groups drops so fast that the contribution to the overall failure probability from all groups other than the first is tiny.

Similarly, with the Cramer-model analogue of Goldbach's conjecture, the probability that the conjecture fails "at n" decreases so rapidly with n that almost all the probability of there being any failure at all comes from the possibility of failures for small n.

And that's why you can get good estimates of the failure probability from computer simulations that look only at finitely many n.
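That tail estimate is easy to reproduce numerically (a sketch; the truncation point is mine, and since the terms decay so fast it barely matters):

```python
import math

def failure_bound(n):
    # P(no decomposition at n) <= (1 - 1/log(n/2)^2)^(n/2 - 2)
    #                          ~= exp(-n / (2 * log(n/2)^2))
    return math.exp(-n / (2 * math.log(n / 2) ** 2))

# Expected number of failures above n = 1000, which upper-bounds the
# probability of any failure at all in that range.
tail = sum(failure_bound(n) for n in range(1000, 100_000))
print(f"expected failures above n=1000: {tail:.6f}")
```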


Taking the Cramer model, the probability that a number N is not expressible as the sum of two "primes" is

product(x=2...N-2) (1 - 1/(log(x)·log(N-x)))

< product(x=2...N-2) (1 - 1/log(N)^2)

= (1 - 1/log(N)^2) ^ (N-3)

Which is eventually below

(1 - 1/log(N)^2) ^ (log(N)^2 · sqrt(N))

Which converges to

(1/e) ^ sqrt(N)

And so is eventually below

(1/2) ^ sqrt(N)

The -log of the probability that a number is expressible as the sum of two Cramer numbers is thus at most

-log(1 - (1/2) ^ sqrt(N))

Which is eventually below

2 * (1/2) ^ sqrt(N)

You can show that this is a convergent series, and if the sum of the -logs converges then the probability that ALL integers (minus perhaps the first few) have this property must converge to something above zero. Thus, there is a nonzero chance that a set of Cramer numbers will have the property that all but a finite number of positive integers are expressible as the sum of two of them. Thus, prime numbers do not need any magical properties for some version of the Goldbach conjecture to be correct on them - the magical properties are only necessary to actually prove it.
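The product bound in a chain like this can be sanity-checked numerically (a sketch; the product is taken over 2 <= x <= N-2 so that log(x) is never zero):

```python
import math

def failure_product(N):
    # Cramer-model probability that N has no decomposition
    # N = x + (N - x) into two "primes", with 2 <= x <= N - 2.
    p = 1.0
    for x in range(2, N - 1):
        p *= 1.0 - 1.0 / (math.log(x) * math.log(N - x))
    return p

for N in (100, 1000, 10000):
    print(N, failure_product(N), 0.5 ** math.sqrt(N))
```

For moderately large N the exact product is already far below the (1/2)^sqrt(N) bound.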


You've argued that if the primes were basically random, we would expect Goldbach to hold, but as I understand it, we would expect not to be able to prove that it holds.

The fact that we can prove the ternary Goldbach conjecture is (necessarily) at least as surprising as it being true: in this case it seems to be significantly more surprising. Is the significance of this result in fact the techniques used, rather than the result itself?


This is really interesting. Have any theorems been proved or at least conjectured based on a probabilistic model for primes? Do you have any suggested resources for learning more about the Cramer model?


> We don't really see why.

Huh? If the proof holds, that's precisely what it does. The proof might be delicate enough not to apply in other settings, so our wider intuition about prime numbers might not be enriched (at least not immediately). Still, a correct proof is very much a formal explanation?

And what's the connection between the TSP and prime numbers? I thought the current consensus in complexity is that factoring is likely not NP-hard, but not in P either.


No, the proof will show us that it's true, but that's not the same as understanding why it's true. There's a difference that most mathematicians grok, but can be hard to see from the outside.

And I don't know if there's a connection between primes and the TSP, but people never suspected a connection between modular forms and elliptic curves, either. And this result may have nothing to do with factoring, but the techniques may give better approximation algorithms for things that are NP-hard.

That's the point: we really don't yet know what this result, and more particularly these proof techniques, will yield.


I'm a mathematician, and I don't grok the difference. Unless you're using the Axiom of Choice, which enables non-constructive proofs, a proof should tell you exactly why something happens, albeit sometimes it's a bit hard to understand, since it's too abstract.

Unless, of course, you're referring to the underlying philosophical reasons why it's true, to which my usual answer would be: math is a game with certain simple rules that play well together and have long-reaching consequences. It's a game we choose to play.


My logic professor once said to me that even though there are many proofs for a given theorem, there's generally only one that feels like the "natural" explanation of why a thing is how it is, the one that really explains why. I think many mathematicians believe this is so, and it's good to reach these proofs, and not just any proof, when we are trying to understand something.

For example, let's look at Euclid's proof of the Pythagorean theorem. It's true that it shows that the theorem holds, and therefore it shows why it holds. But it just feels awfully convoluted and roundabout. It's talking about triangles, but it goes through seemingly unrelated constructions to do so. The proof by similar triangles is, to me at least, much more intuitive, and after reading it I feel I understand, and not just know, why the theorem is true.


The four-colour theorem is a good example of a simple result which doesn't have a "natural" explanation - the only known proof is a computerized check of hundreds of cases. I remember Imre Leader saying some people think it's just an "accident of nature", that doesn't "mean" anything (insofar as any mathematics has meaning).

Oddly enough that doesn't bother me; in fact, it seems natural that some parts of mathematics are just "like that" with no underlying reason. It makes the world of mathematics seem all the richer if not all things are simple and logical consequences of other things.


Exactly so. For the record, Imre is one of my PhD siblings, and it's perfectly reasonable that we came to this point of view as a by-product of working with our supervisor (and "grandparents", Erdős and Adams)


Colin, on days like this, I'm very glad you have not given up on HN entirely.


To me, that feels more like a limit on human cognition than a fundamental quality of the proof or the program. After all, you can summarize--or, perhaps compress--the proof into a program which checks the cases. It's simply a level removed from how mathematicians normally work.

On the other hand, if a proof could not be reduced at all--and, given my very limited understanding of information theory, this is possible--then I would certainly agree that it's inherently more complex.

Put another way, I think that using a computer to check cases like this is morally similar to using induction. It still exposes and exploits a certain simplicity in the domain.


There was no induction; the proof was essentially: a) all planar graphs can be reduced to these several hundred cases (this part is ordinary mathematics); b) observe that it's possible to four-colour each of these several hundred cases (a computer program was used to verify it). To my mind the proof is as complex as that set of cases; the fact that we can write a relatively simple program to check them all doesn't really reduce the complexity or tell us anything about why they're all 4-colourable, in the same way that using a computer to check the Riemann hypothesis up to some limit tells us nothing about why it holds.


I know it was not induction. My point is that a computer program to consider all the cases is just another mathematical tool morally similar to induction.

The program does tell us something, namely that the problem can be reduced to a bunch of similar cases all of which are colorable. The difference is that the insight is perhaps a level removed from a normal proof.

Essentially, I think that it's the program and not its output that plays the philosophical role of the proof in this case.

Of course, my perception is significantly colored by the fact that I'm mainly interested in programming languages rather than traditional math :P.


For problems like that, I can't help but wonder if some day we'll discover that we are just not searching for the solution in the proper "language". Maybe some day someone will find a problem isomorphic to the four-colour problem in a different field, and that problem will have a simple and natural explanation. Probably in a field we don't know today.


I think that's a good notion, as long as your prof. didn't demand that you feel the same way s/he did about everything.

Also, proofs of older facts using newer techniques can illuminate a new theory rather than the fact at hand. For instance, the topological proof that there are infinitely many primes is nice because it makes you think about what separability means.


Have you never followed a proof all the way through, got to the end, and said to yourself - "OK, it's true, and I believe it, and I understand the proof, but I really don't get why it works."

If not, then you think about math differently from how I, my supervisor (from 30 years ago) and most of my PhD siblings do.

That's OK, of course, it's just interesting.


Sure, that happens :) but it doesn't mean that the proof is non-constructive, it just means that it's too abstract for me to understand. Sometimes, however, rereading it does wonders (e.g. I can't really visualize the Inverse Function Theorem (for functions R^2 -> R^2), but if I try to squeeze it into 3D space, I can almost visualize it, and the proof becomes logical).


There is no finite field of size 6. It's very easy to prove this constructively just by showing exhaustively that any attempt to define multiplication on the unique abelian group of size 6 will fail some of the axioms.

However, such a proof will not be a good answer to the question _why_ this is true. A good answer why it's true that there's no finite field of size 6 is that a finite field is a vector space over its prime subfield, and so must be of size p^n, where n is the dimension of the vector space and p, a prime, is the size of the prime subfield.


I'm not speaking (only) of non-constructive proofs, I'm just saying that sometimes a proof will leave you with no real sense of why a result is true (or "inevitable"). That sense sometimes comes much later, after further results are known.


There's a notorious proof of Heron's formula (computing the area of a triangle from the lengths of the three sides, without trig) that proceeds by proving three things that seem to have nothing to do with one another or with the goal, and then through a flourish of algebra the result falls out. There are much more straightforward ways to prove the theorem that do not abuse the reader, and modern mathematics frowns on such indirectness unless it is totally necessary. Unfortunately it was the style to write gimmicky proofs like that for a while in, IIRC, the 18th and 19th centuries, and sometimes they persist.


> Unless, of course, you're referring to the underlying philosophical reasons why it's true, to which my usual answer would be: math is a game with certain simple rules that play well together and have long-reaching consequences. It's a game we choose to play.

Isn't it interesting that this game we choose to play is so effective at modelling the world around us? I suppose you could argue that we wouldn't have put so much effort into this formulation of the game if it wasn't.


Is a proof by contradiction considered constructive? I just know in my linear algebra class, we used proof by contradiction constantly, but the axiom of choice was only mentioned as necessary once.


Proof by contradiction is not constructive, since it uses the Law of the Excluded Middle.


There are some subtleties there. Andrej Bauer covers them well: http://math.andrej.com/2010/03/29/proof-of-negation-and-proo...


Some popular methods of proofs are non-constructive. For example you can take a look at the probabilistic method used by Paul Erdős and others.


Coming back to this, consider the following.

[Warning: reads like math ...]

Let p be a prime of the form 4k+1. It's easy to show that there exists u such that u^2=-1 (mod p). Indeed, simply using (2k)! works.

Consider all numbers of the form a+b.u (mod p) where a and b are in the range 0 <= a,b < sqrt(p). There are more than p of these, so by the pigeonhole principle we have distinct pairs (a0,b0) and (a1,b1) such that a0+b0.u = a1+b1.u (mod p).

Rearrange to give a0-a1 = (b1-b0).u (mod p) and square both sides. Remembering that u^2=-1 (mod p) we get

(a0-a1)^2 + (b1-b0)^2 = 0 (mod p)

A little checking shows that 0 < (a0-a1)^2 + (b1-b0)^2 < 2p, so that means

(a0-a1)^2 + (b1-b0)^2 = p

Thus we have shown that p is the sum of two squares.

I can reproduce this proof at will. I see how the proof works, I am convinced it's correct and valid. I still don't really understand why every prime of the form 4k+1 is the sum of two squares, especially since every argument someone tries to give to show somehow it's inevitable also works for 4k+3, where it's not true.
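Both the key lemma and the conclusion can be checked mechanically (a hedged sketch; the function names are mine):

```python
import math

def sqrt_of_minus_one(p):
    # For prime p = 4k + 1, u = (2k)! satisfies u^2 = -1 (mod p),
    # as the parent notes (a consequence of Wilson's theorem).
    k = (p - 1) // 4
    u = 1
    for i in range(2, 2 * k + 1):
        u = u * i % p
    return u

def two_squares(p):
    # Brute-force the conclusion: find a, b with a^2 + b^2 = p.
    for a in range(1, math.isqrt(p) + 1):
        b = math.isqrt(p - a * a)
        if a * a + b * b == p:
            return a, b
    return None

for p in (5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97):
    u = sqrt_of_minus_one(p)
    assert u * u % p == p - 1  # i.e. u^2 = -1 (mod p)
    a, b = two_squares(p)
    print(f"{p} = {a}^2 + {b}^2")
```

Running the check of course does nothing to answer the "why" question; it only confirms the "what".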


Here is a challenge for you.

It is easy to prove that the 2-player version of the game of hex on an nxn board is a win for the first player. The proof involves a finite game, so no axiom of choice involved.

I invite you to find and master the proof, then when you think you understand it demonstrate your knowledge by playing me a game of hex. We can play on wargear, my user account there is http://www.wargear.net/players/info/btilly and you can challenge me to a 19x19 pure or 2nd player choose on the board http://www.wargear.net/boards/view/Hex.

If I win, then I submit to you that your comprehension of the proof did not truly extend to understanding why it was true.


Well, what if the proof is just an exhaustive search of the space? Say "all even numbers between 4 and 2^10 can be written as the sum of two primes." Say the proof just lists out these numbers and the primes that compose them. The proof tells you it is true. But it doesn't tell you anything about a natural underlying truth that causes this. Do you not find something wanting in such a proof?



Many proofs are in some sense proofs by exhaustion. For example, to prove that no square of an integer ends in a digit '3', the typical proof I) makes the search space smaller by working modulo 10, and II) exhausts all possibilities.

In the end, it all boils down to aesthetics: going from infinitely many cases to a small finite number, as given above, is acceptable because the number is low, and because mathematicians are convinced the remainder is about as simple as it gets.

Going to about 2000, as in the four color theorem, is not, because 2000 is a lot, and, I think more so, because mathematicians aren't convinced that it is necessary to handle each of these cases individually.

And of course, mathematicians would agree that using 5 instead of 10 in step I above leads to a nicer proof. It requires less tedious work in step II and leads to a stronger result (squares never end in 3 or 8).
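The two-step shape of that proof fits in a couple of lines (a tiny illustrative sketch):

```python
# Step I: shrink the search space by working modulo 10 (or modulo 5);
# step II: exhaust the residues.
print(sorted({(d * d) % 10 for d in range(10)}))  # [0, 1, 4, 5, 6, 9]
print(sorted({(d * d) % 5 for d in range(5)}))    # [0, 1, 4]
```

Neither list contains 3, so no square ends in 3; the mod-5 version rules out final digits 2, 3, 7, and 8 with half the case work.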


> It's a game we choose to play.

As I recall, Paul Cohen would disagree. I don't know what to say, as I am not a mathematician, but doesn't saying such a thing seem awfully depressing? Like, doesn't it kinda trivialize what you do to write it off as a game?


What? A brute force computer simulation can show you that some finite conjecture holds...but an axiomatic proof surely is the only real concept of "why" we have. Because if it is not true, then 1+1 != 2, or some such result.


Not for me, and not for many of the mathematicians I have worked with. Sometimes with a theorem there is a sense of knowing why something is true, and in those cases the proof emerges fairly naturally from that. Sometimes, on the other hand, you have a proof, you know it's true[0], and yet there is no enlightenment, no sense of what's happening, no real understanding of why it's true.

Some mathematicians don't have this point of view, but I always felt that sometimes, even when I really knew the proof, I still hadn't grokked a sense of the why. It just all felt like unmotivated symbol manipulation.

[0] For some value of "true".


"Why it works" is a very subjective notion. A very good example of the difference between proving something is true and showing why it's true comes in the undergraduate Real Analysis textbooks by Walter Rudin and Charles Pugh.

If you look at reviews for Rudin's books, the word "magic" is used quite frequently. Rudin will cover the minimum amount of theory necessary and prove each theorem in a highly specific way. He tends to achieve the minimum number of words necessary for a proof.

In contrast, Pugh focuses on building up the surrounding constructs and techniques to the point where you can apply the same tactics to a wide variety of problems. To me, this gives more of a sense of "why" each theorem is true.


Colin,

At the risk of sounding very stupid I've never understood what was actually meant by statements of the type "the primes appear to be distributed in a manner indistinguishable from randomly". I say this because I imagine that the definition of what makes a prime a prime determines whether any particular number is prime or not. So what don't I get when people make statements like this? :(


There's a classic experiment where you first get people to generate a length-50 "random" string of heads and tails by making it up. Then they genuinely flip a coin 50 times and write down the result.

You can almost always tell the difference. There are statistical tests that you can apply that almost invariably identify which is which.

So there are statistical tests that let us ask whether a particular collection is "random" (whatever that means). When we have the right model (the Cramer model is one) we can produce sequences of numbers which are genuinely created at random, but for which the statistical tests cannot decide which sequences are the random ones and which is the primes.

You can read a little here: http://www.solipsys.co.uk/new/RandomEratosthenes.html?HN1
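One of the classic statistical tells is run length: genuine flips almost always contain longer runs of the same face than people dare to invent. A sketch (the threshold of 5 and the counts are illustrative choices of mine):

```python
import random

def longest_run(flips):
    # Length of the longest run of identical outcomes.
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

rng = random.Random(0)
trials = 2000
hits = sum(longest_run([rng.random() < 0.5 for _ in range(50)]) >= 5
           for _ in range(trials))
print(f"{hits}/{trials} genuine 50-flip sequences had a run of 5 or more")
```

Most genuine sequences pass this test; sequences people invent rarely do, because long runs "feel" non-random.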


> To a large extent the primes appear to be distributed in a manner indistinguishable from random. Underneath, there seems to be no reason to believe that results like this should hold. If the primes were random then there might be numbers that cannot be expressed as the sum of 3 primes, but somehow every number ends up so expressible.

I see this as backwards. Based on the prime number distribution we can actually predict the probability that a given number will be a counterexample. And the sum over all positive integers of those probabilities is a minuscule number, so we're quite certain that the statement is true. We just don't know why.

The primes are distributed to a first approximation randomly with independent probabilities 1/log(n) for n being prime. You can refine this with obvious divisibility criteria. (For example odd numbers appear randomly prime with probability 2/log(n). And so on.)

If you first refine these statements for all the small primes that you care to, then make any prediction that you like, that prediction tends to hold up very well.

For example, consider the even form of Goldbach's conjecture, which says that every even number greater than 2 is the sum of 2 primes. Well, 4 is a special case, so really it is that every even number greater than 4 is the sum of 2 odd primes. Does this seem unlikely?

If n is even, it is the sum of 2 smaller odd numbers in n/4 + O(1) ways. On average each such pair has independent probability around 4/log(n/2)^2 of consisting of two primes. Therefore n tends to be the sum of 2 primes in on the order of n/log(n)^2 ways, with the exact distribution being close to a Poisson distribution around the projected number, which for large n is well approximated by a normal distribution. (There are various minor quibbles and corrections that you can easily use to refine this basic prediction.) From this the probability of being a sum of two odd primes in 0 ways can be estimated, and it turns out to go to 0 fast enough that we expect only a finite number of counterexamples. (2 and 4 are indeed counterexamples to the stronger statement, but we know of no others.)

This basic projection can be verified for small n by computer analysis, and indeed works out to be remarkably accurate. Therefore we have great confidence that Goldbach's conjecture is likely to be true, but absolutely no idea why.

The hope, some day, is to be able to close this gap. To not just make assertions based on known statistical facts about the distribution of primes, but to be able to prove them.

The best known example of a statement which should be true but that we have not yet proven is the Riemann Hypothesis. It should be noted that Louis de Branges does claim to have proven it, but nobody has investigated his claims in depth, and he was right about the Bieberbach conjecture.

My favorite example of a statement which looks reasonable on the numerical evidence, but cannot be supported by this kind of argument, and which later proved false, is the Mertens conjecture.


The reason why this problem is difficult is that primes have everything to do with multiplication (every number is the product of primes), and summing numbers has everything to do with... well, addition. Giving additive properties of numbers in terms of primes is like fitting a square peg into a round hole.

The reason why this problem is important is that, in the grand scheme of things, it's more about the math and less about the applications. People have already assumed it to be true, and still not applied it to anything practical like cryptography, as far as I know. Please correct me if not.


> every number is the product of primes

Except prime numbers.

> People have already assumed it to be true, and still not applied it to anything practical like cryptography

How can you use something that you didn't prove?


Easily: act as if you proved it. We are generally very hesitant to do this (with good reason), but sometimes practicality takes over.

For example, we have not proven that our current construction of mathematics contains no contradictions (indeed, we have proved that such a proof is impossible within the system itself), yet we continue to act as if it does.

We have also not proved that factoring numbers is hard, or that P ≠ NP, but we still act as if these are true.


Can you explain your question? It seems obvious that you can take something that is not proven as an axiom and try to use it in some practical applications. Maybe to disastrous results if it ends up being wrong, but no worse than what is already a risk with bugs in implementation.


> Maybe to disastrous results if it ends up being wrong

This is what I meant. Bugs are never introduced intentionally in this manner. Though I suppose we can say that most encryption is used based on an unproven assumption, so I see your and dmvaldman's point!


Also, Goldbach's conjecture has been verified computationally for all numbers from 5 up to 4,000,000,000,000,000,000 (4×10^18).

But in general, it's common for mathematicians to assume a conjecture is true and go on from there, because if they arrive at a contradiction, then they can conclude the conjecture false (or that another interesting fact may be logically equivalent to the conjecture).

Pure mathematics consists entirely of such assertions as that, if such and such a proposition is true of anything, then such and such another proposition is true of that thing. It is essential not to discuss whether the first proposition is really true…. Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true. - Bertrand Russell


You assume it is true, and write a program that relies on it.


Yeah, but every prime is the product of numbers!

All is number!

/s


This sort of terrifies me. I know there's no known connection to factoring, but still.


Really? I thought it was fascinating and spent a few minutes running through some examples... Quite a sensawunda moment.


> A sense of wonder (sometimes jocularly or cynically abbreviated to 'sensawunda') is an emotional reaction to the reader suddenly confronting, understanding, or seeing a concept anew in the context of new information. [1]

I was unfamiliar with this concept! I can surely relate to the feeling though, especially in the context of seeing the beauty of a solution to a problem, maths or otherwise.

[1]: http://en.wikipedia.org/wiki/Sense_of_wonder


My only real interest in number theory is applied number theory; i.e. the basis of many schemes of asymmetric cryptography.


Well… the weak Goldbach conjecture has been known for a couple hundred years and my understanding is it's one of those things we've suspected was true but just never found a solid way to prove that it was so.

This is a neat result, but no one's surprised, per se.


Being able to prove can also bring new pieces of mathematics.


I mostly just meant "any results in the space at all" might change things, and really can only change things for the worse (if you have deployed cryptosystems).


There are 2 mostly distinct branches of number theory: additive and multiplicative. Integer factorization is related to multiplicative number theory, but not really additive (as far as is known). This is a result in additive number theory, so you shouldn't be any more concerned than, say, if a major problem in graph theory or differential equations was solved.

The fact that the security of a cryptosystem can only degrade over time is what I find so interesting about them. You can't just plug in a cryptosystem and forget about it. If you want real security you will regularly have to make changes.


Also interesting work on twin primes this week:

http://www.nature.com/news/first-proof-that-infinitely-many-...


In my country social security numbers consist of two numbers, and mine consists of two twin primes. Probably the only one in the country.

This is something I can only brag about on HN :-)


Finding mixmax's country of residence, social security number, name and address is left as an exercise for the reader :)


should definitely be doable - but using it to defraud me would be hard. Here social security numbers are used as an identifier, you can't really get to anything interesting with it. For that you'll need my passport, drivers license, digital ID or something similar. You can probably go to the doctor and claim to be me, but that's about it.

As a matter of fact a popular singer once published an album where the title was his social security number to show that you can't do much with it.


By my math, if you're talking about Denmark, I don't think you're as unique as you think you are. There are a total of about 18 million valid Danish ID numbers (approximately 36.5k valid DDMMYY combinations * approximately 500 valid serials that meet the checksum), of which 204,099 are two twin primes. So I'd expect about 1% of the country to have two twin primes for their ID number.

So not only are you probably not the only one in the country to have two twin primes for an ID number, you're probably not the only Dane on Hacker News for whom that's true!


As long as we are on the subject of primes, my parents telephone number is prime (and in Denmark telephone numbers are 8 digits).

WTF are the odds of that?


If my math is to be believed, 5096876/90000000 or ~0.056632; roughly 1 in 20.

Edit: Here's my program: http://pastebin.com/sBsXjPt5

Usage: gcc primes.c -O2 -o primes; ./primes 8 | wc -l

Edit 2: Nevermind; I'm dumb. First, there are 10^8 unique 8 digit combos, not 10^8-10^7. If all combinations were legal we'd have a probability of 0.05096876. But I assume all combinations are not legal, so that answer too is wrong. Maybe later if I'm bored and have more time I'll research the Danish telephone system and get an accurate number.
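For what it's worth, the Prime Number Theorem alone gets close to that figure without enumerating a single prime. A back-of-the-envelope sketch (pi(x) ≈ x/ln x, which slightly underestimates at this size):

```python
import math

def pi_approx(x):
    """Prime Number Theorem estimate: pi(x) ~ x / ln(x)."""
    return x / math.log(x)

# Estimated count of 8-digit primes (10^7 .. 10^8 - 1) over all 9*10^7 such numbers.
density = (pi_approx(10**8) - pi_approx(10**7)) / (9 * 10**7)
print(density)  # ~0.053, same ballpark as the exact 5096876/90000000 ~ 0.0566
```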


Alright - Denmark made it easy on me with the spreadsheet here: http://www.erhvervsstyrelsen.dk/numbering_lists

The probability that a valid, non-reserved, 8-digit danish phone number is prime is 4208056/73399100, or ~0.0573312. This probability does not account for number blocks which are valid but not yet assigned/purchased.

Code uses a copy of the number spreadsheet without header row saved as 'danish_numberlist.csv'

Code here: http://pastebin.com/kWUsHsL4


Terence Tao's post https://plus.google.com/114134834346472219368/posts/XESxA9bL...

However, in absence of a preprint, it's IMO too soon to make announcements.


True, especially given we don't know the largest prime closest to infinity, but whilst not a formal mathematician (autistic) it is true. It just needs a simpler explanation, as anything more than a page is probably over-detailed in my opinion. But I can understand why it is as long.

Still all 133 pages already exist somewhere in Pi's infinite digits, and how many pages would it take to prove that one :-).


We know that there are an infinite number of primes (Euclid's theorem). We don't yet know whether or not every possible sequence of data occurs in the digits of pi, but it's highly suspected (this would make pi a "normal number" if true).


Isn't a normal number ever so subtly different from that? In that not only does every possible sequence of data occur in the digits of a normal number, but each sequence occurs infinitely often, with asymptotic frequency determined by its length.

Infinities always mix up my intuitions, though, so they might be equivalent.


You are correct, normality is a stronger criterion. See http://en.wikipedia.org/wiki/Disjunctive_sequence


True, it was one of those 'gravity' moments or feelings about it, perhaps: you know it is there but still haven't proven how it fully works. That's more a physics aspect, but I do like to think of the two hand in hand. Perhaps another biased perceptual thought process; sorry, my bad.


"the largest prime closest to infinity" What do you even mean by that?


Sorry, probably not the best way to point out that we do not know what the largest prime is, as we constantly keep finding larger known primes, and in that sense they are infinite, with infinity expressed as the largest possible number.

Hope that explains what I meant by that term you quoted now.


There is no largest prime. In fact the classic proof that there are infinitely many primes starts with the assumption that there is a largest prime and proceeds to use this number to construct a larger number which is also prime. This is a contradiction: "this is the largest prime" and "this is not the largest prime." So the original assumption must be false and "there is no largest prime" or equivalently "there are infinitely many primes". Recently it has been shown that this proof is a result of misunderstanding the one given in Euclid's Elements. Nevertheless it's still a fine proof.


The largest known prime I was referring to was what is known now - http://en.wikipedia.org/wiki/Largest_known_prime_number

So given that, three times that value would be the only maximum we could know with certainty, until a new known prime is found or somebody invents a formula to drop in N and get the Nth prime out without waiting more than a few seconds. Currently we can not do that at all, let alone that dream of many; maybe one day in our lifetimes.

I agree, it is a fine proof. I hope to appreciate it more over the following days. Lots to go over.


It's been a while since I studied this stuff and I screwed up a crucial part of the argument. The larger number isn't guaranteed to be prime. It's either prime or composite. In either case you end up with a prime not in the finite list of primes that follow from the assumption of a largest prime. Here is a link to a good explanation that includes the composite case:

http://www.jimloy.com/number/primes.htm
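The composite case is easy to exhibit concretely. A small sketch using the classic example 2·3·5·7·11·13 + 1 = 30031: the constructed number is composite, but none of the assumed primes divide it, so its prime factors (here 59 and 509) must be new primes outside the list.

```python
import math

def is_prime(n):
    """Trial division; fine at this size."""
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

primes = [2, 3, 5, 7, 11, 13]     # pretend this were the complete list of primes
n = math.prod(primes) + 1         # 30031
print(n, is_prime(n))             # 30031 False -- it factors as 59 * 509
# Yet no prime on the list divides n (each leaves remainder 1),
# so any prime factor of n is a prime missing from the list.
print(all(n % p == 1 for p in primes))  # True
```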


Curiosity. With this 130 page research paper, I'm really curious what's its application in the near future. I'm sure there will be, I'm seeing this in the branch of cryptography.

I'm imagining it to be implemented like a public-private key. Maybe the odd number can be the private key. And three prime numbers can be the public keys. The hash can't be decoded by just the single public key. It needs all three of the public key to decode the hash. Well, with just abrupt thinking, I think it can be more secure? (Not really sure about it though. Just a guess.)


I don't see such an application; given a number which is known to be a sum of primes it's very easy to reverse this and extract the primes (unlike multiplication, which is how RSA works). And for practical cryptography processes it's always been perfectly reasonable to assume the goldbach conjecture - even though there's no formal proof yet, it's true in such an overwhelming majority of cases that you really wouldn't worry about hitting a counterexample, the odds are far smaller than e.g. a hash collision.


If you add three large primes I don't see how you could quickly reverse that process. It may be possible that several sets of 3 primes also sum to that number, so you would need a process for finding all of them.


Factorisation is unique, the addition of 3 primes is not.

e.g. 29 can be written 5 + 11 + 13 or 3 + 3 + 23

So even if it were a difficult operation to reverse addition of 3 numbers, it would be made easier by collisions.
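A brute-force sketch makes the non-uniqueness concrete, enumerating every multiset of three primes summing to 29:

```python
import math
from itertools import combinations_with_replacement

def is_prime(n):
    """Trial division; adequate for numbers this small."""
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def three_prime_sums(n):
    """Every multiset {p, q, r} of primes with p + q + r == n."""
    primes = [p for p in range(2, n) if is_prime(p)]
    return [t for t in combinations_with_replacement(primes, 3) if sum(t) == n]

print(three_prime_sums(29))
# Includes both decompositions above, (5, 11, 13) and (3, 3, 23), among others.
```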


>Factorisation is unique //

Um, no it isn't.

6, 4 are factors of 24; 2, 12 are factors of 24.

Probably you meant prime decomposition.


I think he means that prime factorization is unique (which is true).

http://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmet...


Isn't that what I said?


I could have sworn that last sentence wasn't there (my bad).


Thinking about it more this is basically a special case of the knapsack problem. Which is not obviously impossible, and indeed was used in an early attempt at public-key cryptography, but has since been broken: http://en.wikipedia.org/wiki/Merkle%E2%80%93Hellman_knapsack... .


It was cracked because it depended on a super-increasing sequence; the knapsack problem itself is still NP-hard. However, finding a given sequence of primes that sum to the number is not really the knapsack problem.


If I recall correctly, the private key was super increasing. The knapsack encryption algorithm attempted to reduce the super-increasing sequence to a regular sequence. What was broken was that the reduced sequence, while no longer super-increasing, still turned out to be a special case which was easy to solve.


The known prime number - (3 + 5) ?

This is extremely amenable to a rainbow attack - just start here, http://primes.utm.edu/lists/small/1000.txt.


This is less amenable to a rainbow attack than cracking RSA. When cracking RSA you have to factorise a number N into P.Q, so you only have to check up to sqrt(N). With this, you have to get 3 numbers, and check up to N/3.


Can't wait to use this to prove that are infinite primes. :P


Very amusing. Here's a proof by contradiction, for the record. Assume that there are only a finite number, n, of prime numbers. There are at most n^3 distinct odd numbers that can be formed as a sum of three of those primes. But n^3 is finite, and so the number of odd numbers must be finite, which is, of course, false, and so our original assumption must have been false. QED

There are some hilarious variations on this theme (using difficult theorems to prove simple facts) here: http://mathoverflow.net/questions/42512/awfully-sophisticate...


Assume there are a finite number of primes. Let p be the largest prime. Then 3p+2 is an odd number > 5, so it must be expressible as the sum of 3 primes. But since p is the largest prime, the sum of 3 primes can never be larger than 3p, contradiction.


From Wikipedia:

Goldbach's conjecture is one of the oldest and best-known unsolved problems in number theory and in all of mathematics.

If this proof is correct then it's a bigger deal than Fermat's Last Theorem.


This is a proof of Goldbach's weak conjecture (http://en.wikipedia.org/wiki/Odd_Goldbach_conjecture), not Goldbach's (strong) conjecture.


So we now can take the 2 greatest known prime numbers, subtract them from some HUGE odd number (bigger than (both these prime numbers) * 2 + 1), and we have a new greatest known prime number?

Or am I missing something?

EDIT: ah, I get it now, it's the sum of SOME 3 prime numbers, but not necessarily the 2 that I've chosen plus another one; it may be 3 completely different primes


Not every sum is of three primes.

15 = 5 + 5 + 5

which are three primes, but

15 = 2 + 3 + 10

which are not. The theorem says that there is such a sum, not that all sums are of that form.


I don't believe that would work. Because it doesn't specify that it's the sum of ANY three primes. For example, you can't take 21, subtract 7, subtract 5 and say that you have a prime. It has to be two specific primes.


Such papers MUST include source code available for further research activities.


Interesting that it involved a significant computational effort in verifying the GRH up to a large finite constant.


And also that, among others, the four-colour theorem's only known proof is a computerized check of hundreds of cases. Will this increasingly be the case? Is this the way mathematics is going? Will any sufficiently difficult problem from now on require a computational step? Does that mean that mathematicians until now were using the non-computational subset of mathematics?


Why greater than 5? Don't 3 and 5 work as well?

3 = 1 + 1 + 1

5 = 3 + 1 + 1

Or do they mean different primes?

EDIT: Oops, I forgot about 1 not being prime.


Not to be a "pile on", but Numberphile did a great video on why 1 is not prime. http://www.youtube.com/watch?v=IQofiPqhJ_s


I understand "we got tired of saying 'excluding 1'", but I still don't grasp how the product of zero primes is 1 and not 0.


1 is the multiplicative identity. If you were to write a for loop calculating the product of a list of ints, what would be the initial value for the accumulator?
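In code the convention falls straight out of starting the accumulator at the multiplicative identity; a minimal sketch:

```python
from functools import reduce

def product(xs):
    # Accumulator starts at 1, the multiplicative identity;
    # for an empty list the fold never runs and 1 comes back unchanged.
    return reduce(lambda acc, x: acc * x, xs, 1)

print(product([]))         # 1 -- the empty product
print(product([2, 3, 5]))  # 30
```

Starting it at 0 would make every product 0, which is why 0 is the identity for sums but not for products.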


Multiplying 0 times is the "same" as raising something to the 0 power.


Oh, of course. x exp. 0 == 1. I got that he was using powers but my brain didn't equate 2 exp. 0 with how he explained it for some reason.


1 is not considered prime.



I think it depends on what school of thought you are coming from, but generally 1 is not considered a prime.


I haven't read the paper, but in other 'fun with primes' exercises, 1 is often excluded.


At this point it's not about practical applications. I find the idea that numbers (number theory, primes, etc.) could be Platonic very interesting, meaning their existence could be outside ourselves and not an archetype. In other words, a mathematical structure doesn't describe a universe, it is a universe.

Ya I know, It sounds very "hippy".


As someone who doesn't study this subject, intuitively it seems so unlikely. I would guess that 130 pages of proof is probably a little too much to tackle to try to get a handle on it :)

Very curious result though.


As I understand it this is largely a technical refinement of the argument from Tao's earlier paper http://arxiv.org/abs/1201.6656 , which is a bit shorter and very readable by the standards of such papers (though admittedly not exactly elementary).


What are the other practical applications of number theory, primes, and the results in this paper, outside cryptography?


Oh my god this is so much math

http://arxiv.org/pdf/1305.2897v1.pdf


[deleted]


1) Some people enjoy researching it. Therefore it adds to their lives.

2) Some people enjoy knowing about it. The researchers therefore add to these people's lives too.

3) Pure research frequently turns out to have an application later - numerous branches of pure mathematics later turn out to have applications. New knowledge is added to the toolkit we can use to understand the world.


entirely in agreement with 3)

I know you were answering the question as written, but IMO 1) and 2) are out of place in a discussion about 'value', with the implied context of 'and why are we spending public money on this useless rubbish?'.

you may as well replace 'learning about number theory' with 'windsurfing'. sure, it adds to their lives, but it's only the public good that has an impact on the discussion.


Why do we build concert halls and stadiums? There's this thing called "culture"... It's hard to define, but you know when it's missing.


(Note, the original question was deleted, so I can't see context.)

The public good most certainly includes the good of the people doing it, though. The notion that those who act on behalf of the public good must commit a sacrifice of their own, individual good is simply wrong.


The most famous quote for this is by G. H. Hardy, a famous number theorist. He said:

"No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems unlikely that anyone will do so for many years."

Yet here we are a few decades later with the fundamentals of modern cryptography based on number theory results. Specifically, the difficulty of factoring the product of two primes into the original prime factors (RSA).

So while this specific result might seem abstract and without purpose, it often goes in mathematics that we'll find the purpose for it later.


In Sid Meier's Civilization you can only build catapults after you research mathematics.


http://hwhelp.s3.amazonaws.com/hwhdocs/The%20Math%20Behind%2... [PDF]

Pythagoras: why we throw giant rocks at walls.


And those fancy JDAMs and other GPS-guided toys wouldn't be nearly as accurate if GPS did not account for the effects of both the special and the general relativity.


That's physics, not mathematics (let alone number theory).


Mathematics purists would disagree http://xkcd.com/435/


I referred to the fact that Prof. Hardy specifically mentioned relativity in the quote.


But sometimes we discover the math that supports the physics only after we discover the physics. Knowing the physics alone is less useful (but not useless) than knowing the underlying mathematical relationship of the physics.


One of my favorite short science fiction stories is about inter-dimensional warfare in which numbers are the weapons.

Dark Integers by Greg Egan

http://www.asimovs.com/_issue_0710/Dark.shtml


The material on which RSA is based was once thought completely useless, studied for interest alone. You never know when things like this may turn out to be useful.


I'd recommend A Mathematician's Apology for a defense of number theory by someone who was quite prominent in the field. It's slightly dated ("No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems unlikely that anyone will do so for many years."), but a lot of it is still relevant, or at very least, interesting.


Whether or not it's relevant, I still second the recommendation to read this book - it's a great read :)


First of all, you have to understand that when something is not solved, it's in human nature to try to find a way to solve it. That said, it is also hard to know the implications of a new discovery/demonstration.

Beyond that, prime numbers are a very important and discussed topic in math; they also have high implications for computer systems, since prime numbers are at the core of many secure systems' implementations.

So when we know something new about prime numbers, new doors may open concerning the generation or detection of primes. Basically it's just that: a small step in knowledge. Let's see where it takes us.


of itself, probably nothing.

asking this question is somewhat like driving to your mother's house, looking out the window on the way and asking, "Why the hell am I on a freeway onramp? I didn't want to be here, I want to be at my mother's house!"

academic discovery is a journey, building on top of the progress made by others along the way. not all points along that route have intrinsic value.

public-key crypto has incredibly high real-world value. most of the things learnt by number theory researchers building up to RSA's discovery - not so much.


It would be nice to take a common gadget, say an iPhone or an Android phone, and split it up into small bits and pieces (including software) and then list all the pieces which are based upon a theory or research which decades ago was considered irrelevant and useless. I.e. all the findings which make up an iPhone, but where decades ago people would have said 'what does this add to our life'. What I can think of, right now, is

- Cryptography (iCloud, iMessage, Online Banking): Is all based on prime numbers. For hundreds of years, prime number theory in math was considered a purely academic exercise, something with no real world usage at all. And suddenly it is the basis of many things we consider granted. [1]

[1] http://www.claymath.org/posters/primes/


What does this electron circling an atomic nucleus add to our lives? It's purely theoretical.

What does this adding of numbers that can only be zero or one add to our lives? It's purely theoretical.

What do these imaginary numbers and square roots of negative numbers add to our lives? It's purely theoretical.


One example of the applicability of number theory is cryptography.


Pure mathematics has a habit of trickling down to real world applications.

Sometimes that influence is direct, as seen with cryptography. In fact much of the world's security is based on prime numbers, e.g. HTTPS (SSL/TLS) uses RSA.

Sometimes that influence is indirect. Such as math theorems that are applied to physics. Which in turn lead to real world products (eg faster processors, GPS, etc).


You ask: 1. Is X good? How is it good? 2. Is it entirely separate from reality?

While you might think of it as a simple question, you asked it in the general sense, without any context. In the general sense, these are two central questions in 2 separate domains of philosophy: 1. Ethics/Moral Philosophy 2. Dualism (philosophy of mind). Neither has a simple answer, although you could start from Wikipedia or the Stanford Encyclopedia of Philosophy.

Furthermore, the idea and enterprise of science and math is complex. It's not familiar to the layman. Instead, try approaching the question with more familiar ideas:

Think of a person who doesn't use Facebook at all. 1. What does Facebook add to his life? 2. As far as he can see, Facebook is a purely virtual social network, isn't it? 3. Why is it worth everyone's time to check/use Facebook?

Think about it from the perspective of a non-Facebook user. Perhaps one of the tribesmen in Africa should you know any.


Because any new discovery could open up possibilities that we don't yet understand?


Because it's there :)


In addition to what other people said, some of the methods used to approach the problem may be reused on other problems.


This question prompts the discussion - which can be a good thing, please don't just downvote because you disagree.


On the other hand, it's a discussion I've seen over and over and over again. And it's not actually specific to this particular link at all, so it would be reasonable to downvote if you feel the discussion is out of place.

And you'll note that currently, ~90% of the comments in this thread are in response to the question -- it's something of an attractive hazard.


It's a proof. That doesn't say it's true.


Will this be useful for PKI encryption?


Well, duh, of course it's every odd number greater than Five....

Because Seven eight Nine.

I'll be here all week folks. Try the fish.


I thought this was already known; I've been using it in a pet project for a few years, though I don't use numbers near infinity, and with that you have to give the chap credit as it is due. Shame it is not a prime number of pages as well, and simpler in explanation.

Maybe for his next paper he could do:

"Every even number greater than five is the sum of three primes and one" Though one is a funny prime, and with that I did not say four primes, to keep the peace. He can reference his previous paper and with that get two papers for less than 134 pages, compared to only one proof for 133 pages.

No I can't say what my pet project is, not everybody does lottery tickets ;).


> I thought this was already know, been using in a pet project for a few years

You were merely assuming that the conjecture is true. Actually proving it is a non-trivial task.

Oh and I look forward to hearing your "simpler explanation", given that you have been using it "for a few years" ;)


though one is a funny prime, and with that I did not say four primes, to keep the peace.

One is not prime. A prime number is a natural number p > 1 (a positive integer) that has only the two divisors 1 and p [1].

[1]http://www.encyclopediaofmath.org/index.php/Prime_number
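That definition translates directly into code; a minimal trial-division sketch in which the p > 1 condition is exactly what excludes 1:

```python
import math

def is_prime(p):
    """A prime is a natural number p > 1 whose only divisors are 1 and p."""
    if p <= 1:  # the p > 1 clause of the definition rules out 1
        return False
    return all(p % d != 0 for d in range(2, math.isqrt(p) + 1))

print([n for n in range(1, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```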


It is not unreasonable to construct the prime numbers to include 1. It is just that, over time, the mathematics community came to the consensus that the set of primes that does not include 1 is more useful.

If primes were constructed to include 1, math would not be seriously different; we would simply have to say 'prime greater than 1' when we want to exclude 1, the same way that we often have to say x ≠ 0 when we divide by x.


Counterexample: 6.


Okay off-by-one, greater than 7.

But more fun is 'every even number 10 and up is the sum of at most four primes, at least one of them being 3'.


Hmm, this is also very very true. Curse you, the number TWO, for being a prime, and not poor number ONE. The madness :0.

So maybe (5x2)+1 onwards, though I wonder whether any two primes +2 will always mess that one up.

Still, all that said nothing about the said even number not also being expressible as three primes without adding +1 ;). But still wondering now how many clashes there are, and with that it does get down to the case of:

  Any two primes + 2 would be three prime numbers and also an even number;
  we all know that to be true and fact, even if those primes are also the value 2.

Numbers are such fun, more so when you question them, learn them and respect them, or indeed change them. They are still fun.


Yes, 6 is already the sum of three primes: 2+2+2.

Well pointed out.

So from 7 onwards then :blush:-)


"numbers near infinity" is such a neat thing to say, because you're always going to be wrong. Or are you? ;)



