There are some interesting statements that are equivalent to the Riemann Hypothesis. What "equivalent" means is that if the statement is true then RH must be true, and if RH is true then the statement must be true.
Here's one I find particularly nice.
Let s(n) = the sum of the divisors of n, for a positive integer n. For example s(12) = 1 + 2 + 3 + 4 + 6 + 12 = 28.
Let H(n) = 1 + 1/2 + 1/3 + ... + 1/n.
The RH is equivalent to the claim that for every integer n >= 1:

s(n) <= H(n) + exp(H(n)) log(H(n))

This is due to Jeffrey C. Lagarias. Here's his paper showing the equivalence: https://arxiv.org/abs/math/0008177
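To make this concrete, here is a minimal stdlib-only Python sketch (the function names are my own) that checks Lagarias's inequality s(n) <= H(n) + exp(H(n)) log(H(n)) for small n. The inequality has been verified unconditionally far beyond this range, so the assertions should pass regardless of RH:

```python
import math

def sigma(n):
    """Sum of the divisors of n; e.g. sigma(12) = 1+2+3+4+6+12 = 28."""
    total = 0
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:  # avoid double-counting when n is a perfect square
                total += n // d
    return total

def harmonic(n):
    """H(n) = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# Lagarias: RH is equivalent to this holding for every n >= 1
# (with equality only at n = 1).
for n in range(1, 2000):
    h = harmonic(n)
    assert sigma(n) <= h + math.exp(h) * math.log(h)
```

The bound is surprisingly tight at highly composite numbers: at n = 12, for instance, s(12) = 28 against a bound of about 28.3.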
Here's another one. You'll notice that typically there are slightly more numbers that factor into a product of an odd number of primes (i.e. 2, 3, 5, 7, 8, 11, 12, 13, ...) than into an even number of primes (4, 6, 9, 10, 14, 15, ...). If this is always the case then RH is true. You can make this into a more precise question about the partial sums of the Liouville function and turn it into an equivalence.
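For reference, here is a small Python sketch (naming is mine) of the Liouville function and its partial sums. Note that the simple pattern of non-positive partial sums (Pólya's conjecture) is known to fail eventually, with the first violation at n = 906,150,257, which is why the genuine equivalence with RH is phrased in terms of growth bounds on these sums rather than their sign:

```python
def big_omega(n):
    """Omega(n): number of prime factors of n counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:  # whatever remains is a prime factor
        count += 1
    return count

def liouville(n):
    """lambda(n) = (-1)^Omega(n): -1 for an odd number of prime factors."""
    return -1 if big_omega(n) % 2 else 1

# Partial sums L(N) = sum of lambda(n) for n <= N.
# "Slightly more odd than even" shows up as L(N) <= 0.
partial = 0
L = []
for n in range(1, 101):
    partial += liouville(n)
    L.append(partial)
# L[0] is L(1) = 1; every later value up to N = 100 is <= 0.
```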
One of the things about the Riemann Hypothesis is that the search space for a proof is wider than it is deep. For any given idea or approach it takes relatively little time to get to the forefront of what is known, and an expert can often tell you right from the beginning that the whole class of approaches may not work due to some known phenomenon, or that they would have to involve certain complications to avoid picking up on various almost-counterexamples. Furthermore, in these 150 years not only has the right path/approach not been found, but there isn't a truly compelling reason to believe that RH is true, aside from numerical evidence and a belief in beauty.
I think it'd be fascinating to put together an online resource to organize the possible approaches, list the knowledge prerequisites, show the potential counterexamples and stumbling blocks to each approach. This would help anyone interested in the problem, and once it is sufficiently developed it would allow non-specialists to contribute productively. This could be LaTeX on github, it could be more of a traditional wiki, but now I really want to get this going.
There was actually an entire conference in 1996 pretty much dedicated to talks about "How not to prove the Riemann Hypothesis" to prevent people from wasting time on approaches that were known not to work:
Looks like a great conference, regrettably before my time (and unfortunately no videos of course). They look like fairly standard topics about the zeta function, where does "how not to prove RH" come in?
"there isn't a truly compelling reason to believe that RH is true, aside from numerical evidence and a belief in beauty."
You wouldn't consider the analogue of the Riemann hypothesis for the zeta function of a smooth projective variety over a finite field, proven by Deligne, to be a compelling reason to believe the RH is true?
No, because that's a polynomial. The proofs that involve computing moments already have an analogue for the Riemann zeta function, but for zeta(s) they only yield zero-density results, while in the Deligne case they give you the full-on Riemann Hypothesis.
I'm having some difficulty finding it, but Terry Tao had a nice exposition in one of his posts, I believe the crux was computing/bounding traces of powers and then being able to conclude the result. My work is almost purely on the analytic side (hence perhaps the RH skepticism), so I don't know the right reference off-hand.
It only became known, through Siegel's work on Riemann's unpublished notes, that Riemann formulated his hypothesis after computing the first few zeros and seeing they were on the critical line. In many ways the complexity of the zeta function is controlled by log log log T, and some heuristics say we shouldn't expect to see a counterexample until height around e^e^e^3. Here's one example. The function S(T) measures the difference between the number of zeros up to height T and the number asymptotically expected, so in particular it jumps by 1 whenever there is a zero. If there were a zero off the critical line, there would be two zeros symmetric about the critical line, so S(T) would jump by 2. The biggest value of S(T) ever observed is around 1.6, so in a very real sense there isn't "room" for a counterexample just yet.
This is a great introduction to analytic number theory.
I was expecting the standard dumbed down piece with no mathematical insight and far-fetched analogies. Instead, the post is mathematically rigorous and deep, and yet it manages to be completely self-contained and clear. Amazing work.
This version of Euclid's proof does not look like the one on Wikipedia, which is not a proof by contradiction. The article[0] mentions that:
"Euclid is often erroneously reported to have proved this result by contradiction, beginning with the assumption that the finite set initially considered contains all prime numbers, or that it contains precisely the n smallest primes, rather than any arbitrary finite set of primes."
Then Euclid's proof does not show that there are infinitely many primes (as is commonly reported).
It is trivial to derive that there are infinitely many primes from it, though, via proof by contradiction as follows:
1. Assume there are finitely many primes.
2. Write a list of all the primes. (Writing out a list of finitely many elements is possible.)
3. There exists a prime not in that list (By Euclid's proof)
4. All primes are in the list.
3 and 4 contradict and therefore the single assumption is incorrect. So there are infinitely many primes.
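The construction behind step 3 can be made computational. Here is a small Python sketch (names are mine): given any finite list of primes, it produces a prime not on the list, exactly as Euclid's argument promises.

```python
def smallest_prime_factor(n):
    """Smallest prime dividing n (returns n itself when n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def prime_outside(primes):
    """Euclid's construction: any prime factor of (product of the list) + 1
    is not in the list, since dividing it by a listed prime leaves remainder 1."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

# The product-plus-one itself need not be prime: [2, 7] gives 15 = 3 * 5,
# and the construction returns 3, which is indeed not in [2, 7].
```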
As a sidenote, it seems that proving a set has infinitely many elements always requires a proof by contradiction. Is there any proof of that?
You made an error: p1p2...pn + 1 is not necessarily prime, but none of its prime factors are in {p1, ..., pn}. For example, 15 = 2 * 7 + 1 is not prime, but neither 3 nor 5 are in {2, 7}. I agree with your overall point though.
I said it was nitpicking because it is only remotely relevant to the article, but the distinction does matter in mathematics. See for instance this post[0], where this very proof is discussed in the comments.
This is most obvious in the naming of the J function near the bottom, which I have not seen anywhere else. Riemann originally called that function a generic f.
At any rate, Edwards' book is great because it develops the theory from a historical viewpoint. It begins with a very well-annotated exposition of Riemann's original paper and the rest of the book goes on to explain other mathematicians' successive results in filling in all of the gaps that Riemann left in his paper. I recommend this book to any serious student of zeta.
I'm pretty awful at math and I found this explanation to be wonderful despite many parts of it going over my head. My brain naturally despises numbers (dyslexia), but your textual descriptions of formulas seemed to help me bridge the gap better than most texts.
Math is super interesting. The reason why most people don't like it and/or think it's hard is because they did not have a great teacher. Once you understand the notation and grasp the basics it's awesome.
So, it's more the language of math that makes it hard to understand than it is about the math itself.
I went to a talk on the Riemann hypothesis with a physicist friend. The chair of our math department gave the talk. At one point he stated, "Do you know why the Riemann hypothesis is important? It's because it's the Riemann hypothesis." Suffice it to say the entire lecture was waaaay over our heads.
Wikipedia states that "The gamma function is defined for all complex numbers except the non-positive integers. For complex numbers with a positive real part, it is defined via a convergent improper integral ..."
So maybe that is what the OP means: the gamma function Γ(z) can be defined via a convergent improper integral for all complex numbers z with real part greater than zero?
Wikipedia then points out that:
"This integral function is extended by analytic continuation to all complex numbers except the non-positive integers (where the function has simple poles), yielding the meromorphic function we call the gamma function."
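As a sanity check that the integral definition agrees with the extended function where both are defined, here is a rough stdlib-only numerical comparison for a few real arguments. The cutoff and step count are arbitrary choices of mine, and this only works where the integral converges, i.e. Re(z) > 0 (for real z, that means z > 0):

```python
import math

def gamma_integral(x, upper=60.0, steps=200_000):
    """Approximate Gamma(x) = integral over (0, inf) of t^(x-1) * e^(-t) dt
    for real x > 0 by a plain Riemann sum on a truncated interval."""
    h = upper / steps
    total = 0.0
    for i in range(1, steps + 1):  # start at t = h: the integrand can blow up at t = 0
        t = i * h
        total += t ** (x - 1) * math.exp(-t)
    return total * h

# Compare against the library gamma function (the analytic continuation).
for x in (1.0, 2.5, 5.0):
    assert abs(gamma_integral(x) - math.gamma(x)) < 1e-3
```

For non-positive integers the library call raises an error, matching the simple poles Wikipedia mentions; the crude integral above, by contrast, simply stops making sense once x <= 0.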
For the curious, much of this is covered in depth in a first graduate course on analytic number theory.
As fascinating as the Riemann Hypothesis is, I find the work of Ramanujan almost as fascinating: to this day his work is studied in earnest, and it remains among the most famous in number theory.
Good catch! I read that paragraph the same way and I believe it is a typo. I've left a comment on the medium article asking the author to clarify that section.
If you don't mind me asking, what is so off-putting about these types of mistakes? I'd like to do more personal and professional technical writing similar to this but I find it a bit daunting, especially when I consider that my audience could know much more about a subject than I do. Do you have any suggestions on how I could or should write articles that may contain errors so I don't upset readers?
I came across that mistake (and it was pretty obviously a mistake to me, but that's because I already know quite a bit about the RH), and later was stonewalled by a different mistake -- the "Plot of the real and imaginary part of the Riemann zeta function ζ(s) in the interval -5 < Re < 2, 0 < Im < 60" plot is mislabeled; it's actually a contour graph of the condition "ζ(s) = 0" on the complex plane with real and imaginary parts plotted separately.
The problem for me is that my way of reading math is to assume that whatever is said is true and fit it into a growing mental model as I read. Many other students I know do the same. Once you get into the flow of it, this can get pretty fast without losing comprehension. When you come across a mistake it's a speed bump, and you have to stop and re-collect your thoughts. This only happens to me with math and physics, not other kinds of technical writing, but it happens.
This doesn't turn me off an article, it's just super annoying.
Typos and stuff are fine, it's the "plausibly true" stuff that gets you. I suggest spending more time proofreading but not worrying more about it.
> what is so off-putting about these types of mistakes?
Basically, when I see something like this I involuntarily start thinking: "Is it really a typo? Or does the writer not understand what he is writing about, since he cannot explain it to me in a clear and precise way?" Maybe this "BS filter" of mine is a bad thing :(
About writing articles: I think one shouldn't worry about the audience knowing more about a subject. It's inevitable, and it's what discussions in the comments are for. So errors due to not knowing something are OK, I think. Other errors can easily be caught by proofreading (by a friend, or by oneself after a day). That's what I would do if I actually decided to start a blog :)
It's a platform that the founder, as he himself knows, has absolutely no clue how to sustain. After 5 years, and over a hundred million dollars of investment.
end meta
This is a good post, and it should be put somewhere that will keep it.
That said, "RH" stuff now:
Fuck Riemann. An entire school of music theory that has almost nothing to do with him is named after him. It's a ginormously idiotic way to understand music.
The people who think of him as a great thinker as it applies to music are completely out of their minds.
If you think this guy is a dick you should see a couple of other mathematicians who have a bazillion things named after them (Euler, Gauss, Cauchy, etc)