Regarding importance, it provides an efficient alternative to RSA.
[edit: this paragraph, based on my old notes on the subject, seems to be incorrect, see sdevlin's comment below]
More specifically, because of the RSA dependency on prime numbers, the RSA effective key space is very sparse (which is why going from 2048-bit RSA to 4096-bit RSA only increases the effective key space by ~16%). With elliptic curves, the key space is very dense, which reduces the key size for an "equivalent" encryption strength. Elliptic curve solutions also tend to be more computationally efficient, both in terms of key generation as well as encryption operations; this performance delta increases rapidly as "equivalent" key sizes grow.
Regarding "trending":
-- They're under active fundamental research, which is fun (RSA is well established, whereas new EC proposals are still under active debate)
-- They've been the subject of some drama, which is also "fun" (conspiracy theories related to several EC proposals/recommendations, debates regarding a primary EC researcher, etc)
> More specifically, because of the RSA dependency on prime numbers, the RSA effective key space is very sparse (which is why going from 2048-bit RSA to 4096-bit RSA only increases the effective key space by ~16%). With elliptic curves, the key space is very dense, which reduces the key size for an "equivalent" encryption strength.
This isn't correct.
The reason RSA (and classic DH) keys are gigantic is because index calculus techniques yield efficient attacks on systems based on finite fields. We need big keys to make them impractical.
EC systems are not susceptible to these attacks because the points on a curve comprise only a group and not a field. Counterintuitively (to me, at least!), they're safer because they have less structure.
On the question of whether it's intuitive or not: NP-complete problems tend to have fairly little structure. A problem like boolean satisfiability has basically no structure: all you have is a soup of clauses. The "tricky" problems that have solutions in P get their solutions from clever exploitation of structure.
At least, that's how it becomes more intuitive to me.
The structure of a problem specifies it; an unstructured problem is less well-specified, and that makes it intuitively more difficult to approach.
E.g. summation of a column of decimal numbers is a highly structured problem that's very easy to solve. Parsing the speech of yelling drunkards is not so structured, and much harder to solve.
Very informative, thanks for the detailed response. To be clear, was my statement incorrect in terms of the "16%" quotation (from [0], which I now realize I misquoted in my private notes), or in the claim that the RSA key space density is low due to the dependency on prime numbers?
My private notes include a claim (original source uncited, sadly) that because the prime density from 1..n decreases (approximately) as `1/ln(n)`, then the corresponding effective key space density for RSA decreases at a related rate, which was what led to the corresponding reduction in the "equivalent" key size. If my notes on this are complete bunk I'd love to hear it. (Even a simple 'yes' would suffice and I'll revisit the subject.) Ta!
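For what it's worth, the `1/ln(n)` density itself is easy to check numerically. A quick sketch (not from the thread; the function name is my own) comparing the actual prime density up to n against the prime number theorem's estimate:

```python
import math

def prime_count(n):
    """Count primes <= n with a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return sum(sieve)

# The prime number theorem says pi(n) ~ n / ln(n), i.e. the density of
# primes up to n falls off roughly like 1 / ln(n).
for n in (10**3, 10**4, 10**5, 10**6):
    actual = prime_count(n) / n
    estimate = 1 / math.log(n)
    print(f"n={n:>8}  density={actual:.4f}  1/ln(n)={estimate:.4f}")
```

Whether that density translates into the claimed key-space reduction is a separate question, per the discussion above.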
It's true that the space of valid RSA keys is sparse relative to size, but this isn't why we need big keys. As a counterexample, classic DH keys are also big (or they can be), even though the space of valid keys is dense.
We need big keys (or rather big fields) due to index calculus. This is a family of algorithms used to factor integers and compute discrete logarithms in finite fields. The fact that index calculus is (thus far) inapplicable to elliptic curve groups is the primary motivation for ECC.
Many algorithms for crypto and similar can be phrased elegantly and abstractly not in terms of the actual numbers, but in terms of what are called "Groups"[0].
A group is a set of things, and a binary operation that satisfies certain rules.
At first glance a group appears to be pointless abstract nonsense, but most of the properties of numbers that we use in, say, RSA, or Diffie-Hellman-Merkle-Williamson, or in factoring via Pollard P-1, rely on the fact that the numbers we are using form a group.
The groups being used are usually:
* For DHMW, the integers modulo a large prime, or
* For RSA, the integers that are co-prime to the product of two large primes.
In each case the operation is multiplication modulo something.
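To make "multiplication modulo something" concrete, here's a toy DHMW exchange in the integers modulo a prime. This is a sketch with illustrative parameters of my own choosing; real deployments use much larger, carefully vetted primes and generators:

```python
import secrets

# Toy Diffie-Hellman-Merkle-Williamson exchange in the multiplicative
# group of integers modulo a prime. Parameters are illustrative only;
# real deployments use primes of 2048+ bits.
p = 2**127 - 1          # a Mersenne prime, tiny by crypto standards
g = 3                   # small public base (illustrative)

a = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2   # Bob's secret exponent

A = pow(g, a, p)        # Alice sends A = g^a mod p
B = pow(g, b, p)        # Bob sends B = g^b mod p

# Both sides derive the same shared secret g^(a*b) mod p; an observer
# sees only p, g, A, B and must solve a discrete log to recover it.
assert pow(B, a, p) == pow(A, b, p)
```

The group operation is ordinary multiplication mod p; exponentiation is just repeated application of it.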
So then we can ask if the same algorithms work if we use a different group instead, and whether the result will have better or worse characteristics. The answers to that are (1) yes, the algorithms work in other groups, and (2) it depends on the particular group or groups used.
So given an elliptic curve, it turns out that the points can form a group if we define a particular operation[1]. Then it turns out that the rational points form a group. Then we can convert that to work modulo a prime, and we end up with a finite group.
And that's exactly what we need to use some of our algorithms.
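As a sketch of what that group operation looks like over a prime field (the curve, prime, and point here are toy values of my own choosing, not a cryptographic curve):

```python
# Point addition on y^2 = x^3 + a*x + b over GF(p), with None playing
# the role of the identity (the "point at infinity").
p, a, b = 97, 2, 3

def add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = identity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

# Check the point lies on the curve, then exercise the group law.
P = (3, 6)   # 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
assert (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % p == 0
print(add(P, P), add(P, add(P, P)))
```

The chord-and-tangent slopes are exactly the "particular operation" on the curve's points, reduced mod p.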
The question of whether this group is better depends, and is too long to fit in a single HN comment, but the main point is that there are many possible elliptic curves to choose from, and many possible primes to use, and so we have more choice. That alone makes it worth considering.
But the answer turns out to be yes, some of the algorithms have better characteristics on these new groups. For example, using elliptic curves we can use smaller keys for RSA or DHMW, and the elliptic curve version of Pollard P-1 is now the third fastest known factoring algorithm, and probably fastest over a certain range of sizes.
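The elliptic curve version of Pollard P-1 is Lenstra's ECM. A toy sketch of the core idea (function name and parameters are my own, and real implementations are far more sophisticated): walk a large multiple of a random point on a random curve mod n, and when a slope denominator fails to be invertible, its gcd with n reveals a factor.

```python
import math
import random

def ecm_attempt(n, bound=100, seed=0):
    """One toy attempt at Lenstra's elliptic-curve factorization of n.
    Replaces the multiplicative group of Pollard P-1 with a random
    elliptic curve mod n; a failed modular inversion exposes a factor."""
    rng = random.Random(seed)
    a = rng.randrange(n)
    P = (rng.randrange(n), rng.randrange(n))   # random point; b is implied

    def add(P, Q):
        # Curve addition mod n, with None as the identity. When a slope
        # denominator shares a factor with n, we (usually) win.
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % n == 0:
            return None
        num, den = (3 * x1 * x1 + a, 2 * y1) if P == Q else (y2 - y1, x2 - x1)
        g = math.gcd(den % n, n)
        if g > 1:
            raise ValueError(g)                # non-invertible denominator
        s = num * pow(den, -1, n) % n
        x3 = (s * s - x1 - x2) % n
        return (x3, (s * (x1 - x3) - y1) % n)

    try:
        for k in range(2, bound):              # accumulate P := k*P, k = 2, 3, ...
            Q = P
            for _ in range(k - 1):             # k*P by repeated addition (slow, toy)
                Q = add(Q, P)
            P = Q
    except ValueError as e:
        g = e.args[0]
        if g < n:
            return g                           # nontrivial factor of n
    return None                                # this curve failed; try another

print(ecm_attempt(91))
```

If one random curve fails, you just pick another; that freedom of choice is exactly the advantage over Pollard P-1, which is stuck with the single group mod each prime factor.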
I would be happy to answer any questions, either here or by email.
With recent(ish) leaks about what the NSA is doing in terms of breaking widely-available crypto, the question has arisen of what weaknesses might exist in current classical techniques. RSA and DHMW have been around for a long time, and much is known about specific weaknesses. Some primes need to be avoided; for example, in DHMW one should avoid primes P where (P-1)/2 has lots of small factors.
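That advice is why safe primes (P = 2q + 1 with q also prime) are popular for DHMW. A sketch of a check using a standard Miller-Rabin test (all names here are my own):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is composite
    return True

def is_safe_prime(p):
    # A safe prime is p = 2q + 1 with q also prime, so (p-1)/2 has no
    # small factors for a Pohlig-Hellman-style attack to exploit.
    return is_probable_prime(p) and is_probable_prime((p - 1) // 2)

print(is_safe_prime(23), is_safe_prime(29))   # 23 = 2*11 + 1; (29-1)/2 = 14
```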
But all the elliptic curve cryptography is comparatively new, and weaknesses are still being found. It's plausible that there are simple things to avoid when choosing an elliptic curve, and so perhaps we should just use the elliptic curves recommended to us by security experts.
But after Snowden, etc., people have become wary of trusting experts, so they want to know more about the implications of their choices, and what options they might have. This is an ongoing issue, and now that people are starting to understand the mechanics of implementing systems that use elliptic curves instead of just Z_p, articles are being written aimed at people outside the security community.
And so articles appear that are readable and relevant.
I'm going to push back a little on "new" and "weaknesses still being found". The underlying theory of curves and their hardness has been pretty stable for a while now, since well before 2000, I think. More progress has been made against conventional multiplicative-group Diffie-Hellman than against curves.
The complicating factor isn't the curve problems themselves, but rather implementation details, some of them particular to specific curves.
That's a reasonable point and I agree with you, but I think you've read into my comment something that's not there. I said:
>> But all the elliptic curve cryptography is comparatively new, and weaknesses are still being found.
It's the elliptic curve cryptography that's comparatively new, and the weaknesses are being found in the full crypto package. That includes, and in many cases is primarily in, the implementation.
So actually I think you're not pushing back, I think you're clarifying exactly what I said.
Of course, I may yet have misunderstood you, so feel free to add more. You certainly know more about this than I do, and I'm happy to learn (or have it clarified further).
The whole field of misuse-resistant cryptography is very new, relative to the field as a whole. We didn't even have a sound usage model of cryptography until the late 1990s, when the connection was made between authentication and indistinguishability. It's only in the last few years that we've begun to prioritize constructions that make implementation bugs harder to blunder into.
Which is a long way of saying, that's true, but also still an issue relevant to RSA and DH and DSA.
I think the primary reason we read a lot about elliptic curves today is that the field has, at least to the extent that it's not directly promoting post-quantum algorithms, pretty much coalesced around curves as the best modern way to implement asymmetric cryptography.
A smidgen too late to edit the comment to add this ...
One advantage of ECC (Elliptic Curve Crypto) is that the keys are smaller, and hence it's easier to implement on resource-constrained devices such as smart cards and IoT devices.
Yes, elliptic curves are huge in cryptography, but they are also mathematically significant. They are the degree-three nonsingular algebraic plane curves with at least one rational point, which makes them essentially "one step up" from the conic sections. The conic sections, of course, are well understood, including the parametrization of their rational points. Elliptic curves are much more subtle! We do not even have a proven algorithm for determining the structure of an arbitrary elliptic curve's set of rational points, and in particular the algebraic rank of that set. The Birch and Swinnerton-Dyer conjecture, one of the Millennium Prize problems, relates elliptic curves' algebraic ranks to their analytic ranks.

Elliptic curves are also closely related to modular forms, and this connection is part of the theory that allowed Andrew Wiles to prove Fermat's Last Theorem. In the study of rational solutions to integer polynomial equations, i.e. Diophantine analysis, elliptic curves are one of the next stepping stones that must be more thoroughly understood before we can have a more complete understanding of polynomials in general. Their applications in cryptography and integer factorization are huge in applied mathematics, but they are also incredibly important objects in fields like algebraic geometry, Diophantine analysis, and of course, number theory.
Elliptic Curves are one of the core mathematical constructs behind most of the commonly used non-RSA asymmetric cryptographic algorithms (curve25519, ed25519, ECDH, ECDSA).
to predict the future, some look to the past for ideas.
others monitor the present e.g. github commits for popular projects - reject anything that has no recent activity.
the context is per packet encryption, the space and time needed to do it.
this is not the approach taken by tls, where one compromised packet can compromise the entire "encrypted stream". nor is it the approach taken by dnssec, where instead of encrypting, some third party gives their blessing to (signs) the data being communicated.
elliptic curve crypto is not new. but encrypting each and every packet on the internet separately is "new" (or at least "different" from current practice).
As a side note, the author of the linked notes on elliptic curves (at MIT) co-founded a successful tech startup company in the 1990s involving high speed networking, then only much later went back to grad school and became a computational number theorist.
From a math point of view they are interesting because, being degree-three equations, any line in projective space will intersect them in three points.
That is the key feature: it means you can define a binary operator that, given two points, returns a third point, namely the remaining intersection of the line through them with the curve. This opens up a Pandora's box.
the profusion of posted slides on the internet is a detriment to researchers everywhere. so many times i've googled something technical (CS, math, engineering) and what percolates to the top are lecture slides, which are almost useless for actual in depth learning.
edit: spoke too soon - only the first link is slides.
When I was doing research projects for my undergraduate degree, we used the web a little differently.
0. First thing we would search is the Arxiv front end http://front.math.ucdavis.edu/ . Usually we would have a researcher in mind. If we didn't then we would look at Wikipedia.
1. We would use wikipedia to get general knowledge and information about the topic. Sometimes lecture notes and presentations would be looked at too but we would eventually get to papers.
2. If the topic was relevant after getting the gist we would look at the citations on wikipedia and track down authors.
3. Books we would obtain electronically. Papers would be tracked down on Arxiv.
4. Rinse and repeat if you didn't understand a topic that was cited in any of the other steps.
I believe mathematicians will continue to favor boards. Sitting in the audience I greatly prefer this; however, such lectures have less chance of being posted at all.
For this topic we are lucky to have Silverman's book [1], which everyone seems to like.
It is explained in a relatively elementary way in Frances Kirwan's "Complex Algebraic Curves"[1]. The book is pretty self contained, and much more entry level than, say, Silverman.
The explanation is in lecture 15. It requires knowledge of some advanced mathematics. Abelian varieties over the complex numbers are analytically isomorphic to a torus.
I would expect him to spell this out in lectures 15 and 16. It does take work. What is surprising is that despite their all looking the same -- certainly they are the same as real manifolds -- there are tons of elliptic curves. The difference is in the complex analytic structure.
They (the space of complex points on any elliptic curve over C) are the same (homeomorphic) as topological spaces. However, they are not the same as complex analytic manifolds, which is a stronger condition (requiring an analytic isomorphism, not just a topological one).
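For reference, the analytic isomorphism being described is the classical uniformization; a sketch of the standard statement (not from the lecture notes themselves):

```latex
% For a lattice \Lambda = \mathbb{Z}\omega_1 + \mathbb{Z}\omega_2,
% the Weierstrass \wp-function gives an analytic isomorphism
\mathbb{C}/\Lambda \;\xrightarrow{\ \sim\ }\; E(\mathbb{C}),
\qquad z \mapsto \bigl(\wp(z),\, \wp'(z)\bigr).
% Topologically every such quotient is the torus S^1 \times S^1, but
% \mathbb{C}/\Lambda and \mathbb{C}/\Lambda' are isomorphic as complex
% manifolds only when \Lambda' = c\,\Lambda for some c \in \mathbb{C}^\times,
% which is why there are "tons" of analytically distinct elliptic curves.
```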
Can someone give me some context?