What you want for asymmetric cryptography is "always hard to solve, easy to verify". Only a very small number of problems are believed to fall in this category, including factoring and discrete logarithm.
It's far more common to have problems that are "sometimes easy to solve, sometimes hard to solve, easy to verify". That describes this problem, as well as most NP-hard problems that have good heuristic solutions. The problem is that those "sometimes easy to solve" cases break your cryptography!
Elliptic curves are uniformly hard. If you could solve the elliptic curve discrete log problem (ECDLP) on any non-negligible fraction of inputs, then you'd break a lot of cryptography. In fact, the use of Pedersen commitments is based on nobody knowing the logarithm log(H) of a specific curve point H with respect to the standard base G.
An efficient algorithm that solves ECDLP on a non-negligible fraction of inputs could fail on the input H itself, but it would eventually succeed on some randomized input r * H for a randomly picked scalar r, giving log(H) as log(r * H) / r (modulo the group order).
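The re-randomization trick described above can be sketched in a few lines of Python. This is a toy, not real cryptography: it uses a small multiplicative group mod a prime instead of an elliptic curve (same group structure, easier to write down), and `picky_oracle` is a made-up solver that arbitrarily refuses half of all inputs.

```python
# Toy demo of random self-reducibility of discrete log.
# Multiplicative group mod a small prime stands in for an elliptic curve.
from math import gcd
import random

p = 467          # small prime; the group (Z/p)* has order p - 1
g = 2            # 2 is a primitive root mod 467 (checked for this toy)
order = p - 1

def dlog_brute(h):
    """Brute-force discrete log: stands in for an 'efficient' solver."""
    acc = 1
    for x in range(order):
        if acc == h:
            return x
        acc = acc * g % p
    raise ValueError("no log found")

def picky_oracle(h):
    """Hypothetical solver that only works on some fraction of inputs."""
    if h % 2 == 0:          # arbitrary 'hard' subset for the demo
        return None
    return dlog_brute(h)

def dlog_via_randomization(h):
    # Re-randomize: log(h^r) = r * log(h) mod order, so
    # log(h) = log(h^r) * r^{-1} mod order, for r invertible mod order.
    while True:
        r = random.randrange(1, order)
        if gcd(r, order) != 1:
            continue
        y = picky_oracle(pow(h, r, p))
        if y is not None:
            return y * pow(r, -1, order) % order

x = 123
h = pow(g, x, p)
print(dlog_via_randomization(h))   # recovers x = 123
```

Even though the oracle fails on h itself whenever h is even, blinding with a random r eventually lands on an input it accepts, and the true log is recovered by dividing out r.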
Not necessarily: the term "discrete logarithm" also refers to calculating them in other contexts, such as modulo p for p prime. It's not always on elliptic curves, and when I started in this field it was never on ECs.
But your comment is largely true; this is a minor point... I make it only for completeness, and for people who find this comment later.
While all NP-hard problems (which, to be pedantic, is different from merely being in NP) have that property, for crypto you want things to be uniformly hard (i.e. a randomly generated instance is hard). That's a much rarer property.
This comment was dead and I couldn't understand why it was flagged, as it didn't seem to violate any rules. Moreover, in my experience I tend to agree that most NP problems do seem to have this property. Can someone who downvoted/flagged, or is otherwise knowledgeable about the subject, chime in and correct/teach me something new?
Fwiw, NP includes P, so it's not true that problems in NP are hard to solve, easy to verify. You want to limit the statement to NP-complete problems. Even then, it's only true that solving the problems in full generality is (conjectured) hard. Individual instances can be easy, and in some applications, it may be the case that all/most problems that occur in practice are easy. Even SAT can be easy in practice, and modern SAT solvers solve instances with millions of variables every day.
Excuse me for being a bit of a moron here, but why do we spend so much time, energy, and money trying to solve these weird mathematical questions? What's the practical application of any of this that can justify the cost?
It's a fair question, mostly because of how effective math has been at solving practical problems, but honestly working on problems like these is usually due to a genuine appreciation and enjoyment of mathematics. I'd even say the majority of mathematicians study math strictly out of a sense of wonder and curiosity rather than toward a commercial application, and that the vast majority of math research is not at all practical.
The cost is justified by a common shared interest in the subject: many other people are interested in math and pay people to work on it. It's not all that different from paying someone for music, or a painting, or to write a work of fiction. People get a profound sense of enjoyment and happiness from this work and are happy to pay the cost for the continued advancement of the field.
> k = x³ + y³ + z³ is what number theorists call a Diophantine equation — a kind of algebraic structure whose properties have fascinated mathematicians for millennia. “They’re rich enough to encode [other mathematical] statements that have nothing to do with Diophantine equations,” said Browning. “They’re rich enough to simulate computers.”
More specifically, equations of this form come up in a proof of Gödel's first incompleteness theorem. That being said, yeah, this effort is mostly a mathematical curiosity, mapping out the domain of mathematical knowledge ever more completely. Sometimes an application comes along where either the solution or the novel techniques used to arrive at it become useful, but pure math research isn't typically driven by existing applications so much as a "because it's there" attitude.
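For anyone who hasn't played with the equation quoted above, a naive search makes it concrete why some values of k are so stubborn (the function and bound here are my own toy, not the method the researchers used):

```python
# Naive search for x^3 + y^3 + z^3 == k over a small range of integers.
from itertools import product

def three_cubes(k, bound=20):
    """Return (x, y, z) with x**3 + y**3 + z**3 == k and |x|,|y|,|z| <= bound,
    or None if no such triple exists in that range."""
    for x, y, z in product(range(-bound, bound + 1), repeat=3):
        if x**3 + y**3 + z**3 == k:
            return (x, y, z)
    return None

print(three_cubes(29))   # finds a small solution quickly (29 = 3^3 + 1^3 + 1^3)
print(three_cubes(42))   # None at this bound: the known solution has 17-digit terms
```

Most k yield to tiny searches like this; 42 held out until the terms were on the order of 10^16, which is why the real search needed serious compute and number theory rather than brute force.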
Sometimes these weird questions lead to more practical applications down the road: once we know how to do something, it gets added to the toolbox and eventually used. Euler's totient, for example, was a cute mathematical curiosity until it became one of the important steps in RSA key generation, almost two centuries after it was discovered. It is also the case that sometimes knowing where the 'holes' are in these sorts of stepwise math series can itself reveal something deeper or more important. Like the fabled exceptions that prove the rule, knowing where a general rule does not work is sometimes an incredibly useful discovery.
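The totient's role in RSA is easy to see in a toy key generation (the primes and message here are textbook-sized examples, hopelessly small for real use):

```python
# Toy RSA key generation, showing where Euler's totient comes in.
from math import gcd

p, q = 61, 53                 # toy primes -- far too small for real security
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: e*d == 1 (mod phi)

m = 65                        # a "message" as a number less than n
c = pow(m, e, n)              # encrypt: c = m^e mod n
assert pow(c, d, n) == m      # decrypt: c^d mod n recovers m, by Euler's theorem
```

The whole scheme hinges on the fact that m^(e*d) ≡ m (mod n) whenever e*d ≡ 1 (mod φ(n)), which is exactly the theorem Euler proved long before anyone had a use for it.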
Often we don't know the "practical" benefits that will come out of some specific line of investigation. But some people find it interesting to think about, and I guess it's better than spending time playing on Facebook.
One of the terms in the solution is a bit bigger than 2^51, so its square is around 2^102. Assuming the methods used need to actually compute these numbers, you'll need something bigger than a 64-bit int. I'm not sure how good GPUs are at bignum-ish stuff, but it's not the most obvious fit.
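A quick sanity check of those sizes (the 12345 offset is made up; any term a bit over 2^51 behaves the same way):

```python
# Back-of-the-envelope check of the bit widths mentioned above.
t = 2**51 + 12345        # a term a bit bigger than 2^51 (offset is made up)
sq = t * t               # its square, around 2^102
print(t.bit_length())    # 52 bits -- fits in a 64-bit int
print(sq.bit_length())   # 103 bits -- too wide for any 64-bit int
assert sq >= 2**64       # would wrap around even an unsigned 64-bit type
```

So any implementation would need 128-bit arithmetic (e.g. two 64-bit limbs) or a bignum library for the squares, which is indeed awkward on GPUs whose native integer units are 32- or 64-bit.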
"""Answer to the Ultimate Question of Life, The Universe, and Everything from the supercomputer Deep Thought, specially built for this purpose. It takes Deep Thought 7 1⁄2 million years to compute and check the answer, which turns out to be 42.""" . Any co-incidence 42 is still unsolved?
Wasn’t the earth the supercomputer that was trying to come up with the question to the answer of 42? If they just found it, does that mean that they’re done with us?
It's been a while... but I'm pretty sure the opening of the book explained that there was a theory that should both the question and the answer exist in the same universe, it would instantly collapse on itself and begin anew.
There was also a theory that this had already happened.
Related: https://news.ycombinator.com/item?id=20895776