You can be sure that our numerate friends at the NSA have spent some quality time trying to find an efficient factoring algorithm over the past few decades.
As they are still so keen on controlling encryption, we can assume with good confidence that so far they have failed. And given the stakes, it's really not likely that factoring is the under-tested problem this author speculates it might be.
> As they are still so keen on controlling encryption we can assume with good confidence that so far they have failed.
After the Enigma was cracked, the government was terrified that the Axis would find out. If they knew they were cracked, they could change the cipher and set back the Allies significantly.
So they spent a ton of effort obscuring their own intelligence.
Whenever a decoded Enigma message told them where a U-boat would be, they sent a "spotter" boat whose job was to be seen seeing the U-boat before it was attacked. That way there was a plausible source of information for the U-boat's location. (These spotters were so "effective" that the Germans suspected there must be many more of them in the area than there actually were.)
One time they weren't able to get a spotter there in time and Churchill himself made the call to attack anyway. To cover for that and avoid arousing suspicion, they sent a radio dispatch to a fake spy thanking him for the intel, so that the message could be intercepted by the Germans.
That is just so fascinating... do you know where I could read more about this sort of thing? I don't care about time frames. Vietnam, I suspect, may have some interesting crypto treasures...
Cryptonomicon by Neal Stephenson is fiction but has a lot of this kind of thing in it. Not sure how accurate. The Code Book by Simon Singh. "The Man Who Never Was" is a book I have never read but should, because it's a great WW2 espionage story. I'm sure others will kick in some suggestions.
No! It does not at all follow from NSA/USG's policy interests that NSA doesn't have secret factoring improvements. As the last few years of crypto research have shown, there's a big difference between having a break and being able to operationalize the break. It's plausible that NSA could have something that is tractable against an individual target, but not against massed targets. In fact: if they have a secret factoring improvement, this is the most likely shape of the problem for them.
In 1996 I asked Robert Morris Sr if he was worried that so much crypto was dependent upon factorization. "Worried? I'm not worried, but I can tell you that if US Military lives are at stake, we don't use algorithms that depend on factorization."
From that I would infer and speculate that the NSA found a pragmatic solution to factorization long ago.
> From that I would infer and speculate that the NSA found a pragmatic solution to factorization long ago.
Or they judge that a pragmatic solution is within the realm of possibility, and they don't want to take the chance of being caught with their pants down. They probably have the resources to take on many extra operational costs to avoid theoretical risks. One of the quotes on his Wikipedia page is: "Never underestimate the attention, risk, money and time that an opponent will put into reading traffic."
>As they are still so keen on controlling encryption we can assume with good confidence that so far they have failed.
"assume with good confidence" - this is very naive.
They're not going to advertise their super secret breakthrough by acting like they no longer care about encryption. Such a secret is going to be compartmentalized in the organization etc.
Fortunately, those slides leaked. The lower levels talked about cryptanalysis and supercomputers. The TS/SCI/ECI stuff said they were backdooring crypto at the company level, using the FBI locally and CIA/ISA foreign. Also note their own Type 1 systems allow these ciphers with the most sensitive stuff, and EKMS uses a variation of Photuris called Firefly. Safe bet that good implementations of the stuff still work.
That's a lot of detail, which I'm not familiar with as a casual observer, but it's irrelevant.
The slides show the capabilities of one layer of the onion, but it's surely naive to assume that's everything?
A total mathematical factoring break is a holy grail.
I mean, they made a Hollywood movie about this! In the 90s!
Sneakers: "There isn't a government on this planet that wouldn't kill us all for that thing." Obviously, that's a movie, not real life, but the sentiment is valid.
It's possible. Yet, someone has to be able to use that stuff. The Sentry Eagle leaks showed that, as I predicted, they developed most of this stuff in Special Access Programs that compartmentalize things away from even TS clearance. Then, as capabilities are developed, people with TS/SCI clearances are briefed on one or more based on their need to know. Some are so high up they get briefed on all of them. That was Sentry Eagle:
As you can see, the descriptions represent their most guarded secrets that can do the gravest damage. There could be an extra layer that basically has that one fact in it. I doubt it given how many different angles of attack & severity these docs cover. We also saw that predicted damage adding up after the releases of such leaks. It's more believable given it's the exact kinds of things I'd expect a post-9/11 agency to be doing if they couldn't break RSA, etc. They'd do it anyway for deniability but these are so secret they'd rather let targets go to preserve them. The secrecy level was already tight enough to cover preserving RSA so long as insiders on highest levels kept quiet.
My thing with this argument is that not only would you have to keep your entire organization in the dark (possible, see response to Snowden) but also have to successfully recruit every academic in that space, nationally and internationally, in order to prevent them from also coming to the same conclusion. Yes it's possible, and the tinfoil hatter in me wants to think it's true, but there's too many moving parts for me to think it likely.
I think employees of the NSA will be used to the idea of compartmentalization.
> successfully recruit every academic
Why that ridiculously high bar?
The original article estimates 100 mathematicians have examined factoring in detail. Googling says the NSA has 600 on staff. Maybe they just did a lot of work on the problem. Also, a large dedicated group may outperform the scattered efforts distributed throughout the research community.
> there's too many moving parts for me to think it likely.
Guess you're right, it'd be like as if they [insert implausibly complex project which the Snowden leaks showed they actually do.]
You're conflating the NSA with the rest of the government, namely the FBI. I believe some spokesperson from the NSA has even gone on record lately against the FBI's desires to criminalize [applied math]. The NSA is an organization that tackles the world as it is.
Compromised standards, TAOed equipment, global passive adversary, exploiting OPSEC fails. Even if the highest compartments of the NSA were able to instantly factor any RSA key, these alternative methods would still be useful for plausible deniability as well as general use by less-privileged compartments.
The FBI is lucky to receive any of the NSA's scraps (and if they did, it would be some vague tip to start looking in the right place), hence being left to push for criminalization to facilitate their actual investigations.
And that required about 357 years of mathematics to be able to prove it, as well. With the recentish GI announcement, I'm hopeful factoring and discrete log will fall within the next decade or two, but it's likely going to involve a bunch of subtle perspective shifts on several different fronts combined with new machinery in math to show.
>As they are still so keen on controlling encryption we can assume with good confidence that so far they have failed.
I think the real issue with relating their position on legislation to their ability to break any given algorithm (or the problem on which it is based) is this: legislation will survive a new, more secure algorithm. Breaking one algorithm is subject to being patched out or the algorithm being entirely replaced. A broken system is a short-term investment; legislation is long-term (with the assumption that the government enforcing the legislation sticks around).
In addition to which, of those 100, some of the ones with the strongest incentive to crack factoring work for places with a strong disincentive to publish their results.
EDIT: Missed the sentence in the article where this is hinted at.
"Of course, I have no real evidence for my views; if I had any good ideas for factoring, I'd be working on them instead of writing this. On the other hand, the people who talk about the great difficulty of factoring have equally little evidence. The net result is that it's reasonable to hope that factoring is difficult, but you should regard that as a provisional hypothesis rather than a certainty."
I see the article kindly submitted here is the equivalent of an amateur blog post, just a note posted to one personal webpage that happens to be served up by the MIT server. This is a case where the Hacker News protocol for displaying a domain for the submitted article was misleading rather than helpful. The author is gainfully employed and highly educated in related disciplines.
This article makes it seem so easy to leap in and take a crack at it, but it seems like Step One is to either not value your own autonomy or to not rely on a wage.
I'm not a professional mathematician (recreational learner, at best) and part of the reason I didn't pursue a career of interesting problems like this is because I don't want to do so as a cog in academia or a spook at the NSA.
Am I wrong? Are there people who've had experiences that contradict my cynical perceptions?
Does that NPR job survey count people who tried their hand at being mathematicians, and hated it so much that they quit, or were unable to find jobs as mathematicians?
He makes the good point that factoring is already below exponential time, and there's been progress over the years in speeding it up. Even if someone doesn't develop a polynomial time algorithm, there still could be substantial progress.
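The "already below exponential time" point can be made concrete. The general number field sieve has heuristic cost L_n[1/3, (64/9)^(1/3)], subexponential in the bit length; the small sketch below (my own illustration, not from the article) just evaluates that standard formula for a few key sizes:

```python
# Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)] = exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)):
# subexponential in the bit length of n, which is why factoring has kept
# getting cheaper over the decades even without a polynomial-time breakthrough.
from math import exp, log

def gnfs_cost(bits):
    ln_n = bits * log(2)  # natural log of an n-bit modulus, roughly
    c = (64 / 9) ** (1 / 3)
    return exp(c * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))

for bits in (512, 1024, 2048):
    print(bits, f"{gnfs_cost(bits):.3e}")
```

The absolute numbers are only heuristic order-of-magnitude estimates, but the growth rate is the point: far slower than 2^bits.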
There's also the possibility of new algorithms which are fast for some products of two primes, but not all. There are lots of problems, such as linear programming, where the worst case is exponential but the average case is far faster. Even something that allowed easy factoring of only 1% of products of two primes would be useful to an attacker.
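Such special-case weakness already exists in a mild form. Fermat's method, sketched below as an illustration (not something from the article), factors n = p*q almost instantly when p and q happen to be close together, while being useless in the general case:

```python
# Fermat's method: writes n = a^2 - b^2 = (a - b)(a + b). It terminates
# quickly when the two prime factors are close together -- one concrete
# case of "easy for some products of two primes, but not all".
from math import isqrt

def fermat_factor(n):
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:          # b2 is a perfect square: done
            return a - b, a + b
        a += 1

# 101 and 103 are close, so this succeeds after a couple of iterations:
print(fermat_factor(101 * 103))  # (101, 103)
```

This is exactly why RSA key generation avoids primes that are too close together.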
I'm not too terribly worried. There are enough efforts to look for post-quantum crypto algorithms that I think by the time we come up with something that actually threatens factoring, we'll have alternatives.
If quantum stuff comes first, then we can go to lattice-based systems (which are the leading candidate as far as I know - please correct me otherwise; I am familiar with lattice-based systems but have not done research for a better base).
If factoring is solved first, we can stick with elliptic curves.
>I'm not too terribly worried. There are enough efforts to look for post-quantum crypto algorithms that I think by the time we come up with something that actually threatens factoring, we'll have alternatives.
That's hardly a consolation, since everything we've encrypted and shared until then will be trivial to break for anyone who has retained it in encrypted form (which is very easy to do).
That is sort of inevitable, I think. Best case, it is never broken and we never get quantum computers, but I hope that doesn't come to pass.
So the next best case is that most of the details recovered by such breakage are irrelevant to those still alive. I think that's far more likely anyway.
The argument laid out by Cohn also works against elliptic curves: the discrete log is not NP-hard either, and if anything even fewer people have tried to attack it than factorization (according to Cohn, subexponential factorization only happened in the 1970s, after centuries of trying; elliptic curves are not that old). Some lattice problems, on the other hand, are known to be NP-hard. What does this tell us about their worthiness as cryptographic problems? Not a lot.
Which do you think is going to happen first: progress against RSA that plausibly threatens 2048 (or 3072) bit keys, or progress against ECC that plausibly threatens Curve25519?
Quantum computing confuses every thread about RSA and ECC.
Shor threatens RSA and ECC. Neither RSA nor ECC are considered post-quantum schemes. If quantum computing is really your threat model, you want to be doing what Google did: run both a pre-quantum and a post-quantum key exchange and mix the results with a KDF.
Which post-quantum approach you choose, I don't care. (I'm a quantum computing skeptic).
What you do not want to do is build a cryptosystem using solely a post-quantum key exchange algorithm, or, even worse, try to build a cryptosystem without any asymmetric key exchange at all. In both cases, implementation errors --- some of which, in the latter case, are probably inevitable --- will doom your system immediately.
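The "mix the results with a KDF" step mentioned above can be sketched roughly like this (my own illustration: the secrets and labels are hypothetical placeholders, and a real deployment would use an audited HKDF implementation rather than this hand-rolled one):

```python
# Sketch of a hybrid key-exchange combiner: derive the session key from
# BOTH a pre-quantum and a post-quantum shared secret, so an attacker
# must break both exchanges. HKDF (RFC 5869) built from HMAC-SHA256.
import hmac, hashlib

def hkdf(salt, ikm, info, length=32):
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, t, i = b"", b"", 1
    while len(okm) < length:                            # expand step
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

# Hypothetical shared secrets from the two exchanges (e.g. X25519 and a
# lattice KEM); in practice each comes out of its own protocol run.
classical_secret = b"\x01" * 32
post_quantum_secret = b"\x02" * 32

session_key = hkdf(b"\x00" * 32,
                   classical_secret + post_quantum_secret,
                   b"hybrid-kex-demo")
print(session_key.hex())
```

The design point is that the derived key is secure as long as either input secret stays secret.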
I think progress against RSA is more likely, unless Shor happens first and kills everyone. I also hope to see an O(n^2) matrix multiplication algorithm within my lifetime. My point was not to defend RSA, but to point out that Cohn's argument could be reused against most of our current primitives.
I feel like the idea that you'd use ECC out of concern for advances in factoring or conventional discrete log isn't my own, but I'm not careful enough with this stuff to know the best thing to cite.
As always, I comment on crypto stuff principally to see if I can goad you into correcting me. :)
Well, I suppose you could cite Miller and Koblitz on that (mostly Miller). This was the mid-80s, when progress in index-calculus algorithms was in full force. So they introduce elliptic curves, and basically say 'look, this problem seems to be immune to these classes of algorithms, which seem to be getting better all the time, so let's use it instead'. With some important caveats discovered in the meantime, they have been right so far.
Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy. [1]
Even the NSA is wary of elliptic curve public key crypto; isn't ECC significantly more vulnerable to Shor's algorithm?
Symmetric encryption is weakened; namely, a quantum computer can search through a space of size 2^n in time 2^(n/2). This means that a 128-bit AES key would be demoted back to the strength of a 64-bit key ...
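The Grover bound quoted there works out exactly, as this small check (my own illustration) shows:

```python
# Grover's algorithm needs about sqrt(N) oracle queries to search N items,
# so brute-forcing a keyspace of 2^n keys takes roughly 2^(n/2) steps:
# the effective strength of an n-bit symmetric key is halved.
from math import isqrt

def grover_queries(key_bits):
    return isqrt(2 ** key_bits)  # exact here, since 2^128 is a perfect square

print(grover_queries(128) == 2 ** 64)  # a 128-bit key drops to 64-bit work
```

Which is also why "just double the symmetric key length" (e.g. AES-256) is the standard hedge against Grover.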
I'm really not interested in how well PSK AES schemes fare post-quantum, because PSK isn't a realistic solution to any of the problems we use cryptography to address.
I suggested, upthread, not using RSA (which depends on the hardness of factoring) and using ECC instead. How AES does post-quantum is not responsive to that advice.
You suggested ECC over RSA, implying that ECC offers greater security than RSA in the context of easy prime factorization. ECC's security relies on the hardness of the discrete logarithm. Yet prime factorization and the discrete logarithm are both special cases of the hidden subgroup problem over an Abelian group. In short, cracking RSA means cracking ECC, too.
> I believe you to be wrong about the idea that tenably factoring 1024 or 2048 bit moduli implies a solution to the elliptic curve discrete modulus problem
Probably. This mathy stuff is way over my head. I'm just repeating what I read in Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer [1] published by Peter W. Shor on behalf of AT&T Research in 1995. Supposedly his proof is valid, predicated on the existence of quantum computers. Quantum computers actually exist [insert epistemological joke here].
If I understand your argument properly, then I believe you to be wrong about the idea that tenably factoring 1024 or 2048 bit moduli implies a solution to the elliptic curve discrete modulus problem, yes.
Number theory is a fascinating subject, but it's more or less like taking a text in English and trying to make sense of it by reading columns instead of lines.
Most factorization algorithms require you to find either a loop or a quadratic congruence (mod n). What changes is how they go about finding these.
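Pollard's rho is the textbook example of the "find a loop" family; a minimal sketch (my own illustration, not from the comment above):

```python
# Pollard's rho: iterate x -> x^2 + c (mod n). The sequence eventually
# loops modulo an unknown prime factor p of n before it loops mod n
# itself; Floyd's tortoise-and-hare cycle detection exposes that hidden
# loop through gcd(x - y, n), yielding a nontrivial factor.
from math import gcd

def pollard_rho(n, c=1):
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else None     # None: retry with a different c

print(pollard_rho(8051))  # 8051 = 83 * 97, so this prints a factor: 97
```

The quadratic-sieve/NFS family is the "quadratic congruence" branch: it hunts for x^2 ≡ y^2 (mod n) with x ≢ ±y, so that gcd(x - y, n) splits n.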
Was D-Wave ever confirmed to give an actual speedup over classical computers? I just remember reading Scott Aaronson's ongoing blog posts about how they never managed/wanted to release any real proof. I haven't heard of anyone claiming his $100k bounty, and the last positive thing I read was "D-Wave does have something today that’s more computationally-useful than a roast-beef sandwich; the question is “merely” whether it’s ever more useful than your laptop." (http://www.scottaaronson.com/blog/?p=954)
I did find the google part summarised (http://www.scottaaronson.com/blog/?p=2555) "<skip writeup of known facts> Thus, while there’s been genuine, interesting progress, it remains uncertain whether D-Wave’s approach will lead to speedups over the best known classical algorithms, let alone to speedups over the best known classical algorithms that are also asymptotic or also of practical importance."
From their website: "D-Wave’s flagship product, the 1000-qubit D-Wave 2X quantum computer, is the most advanced quantum computer in the world. It is based on a novel type of superconducting processor that uses quantum mechanics to massively accelerate computation."
How would you understand that, considering the current computation happens in silicon?
They know what the rest of us know: advances in quantum computing could hurt national security. So they're using their default method of countering future advances: restricting the sharing of ideas or tech. I'm not sure how effective it will be, as it depends on how much funding comes from how many sources. Emanation attacks and TEMPEST security are still restricted to a handful of countries decades after their discovery. So there's some precedent for the approach working.
There are crypto functions that are not amenable to quick solutions with quantum computers, right? Is there any reason why we wouldn't start migrating to those functions today? It seems foolish to wait until technology matures and present-day crypto is completely invalidated before we react defensively. Is anyone working on this?
I truly understand neither a) why this piece of puff is on HN nor b) why it is so highly voted.
As others have noted, the author admits "I have no real evidence for my views." To me this is a total WTF. The author is ignorant of the subject and, like so many experts outside their fields, supposes that the field that has suddenly piqued their interest cannot, must not, be all that complex.
Factoring some numbers is very, very easy. Factoring other numbers is computationally hard. The consensus in the field is that there are no shortcuts. See, e.g., [1] and [2].