It's possible that NSA can recover keys from 1024-bit RSA/DH/DSA handshakes, but it's extraordinarily unlikely that they can do so scalably; that would be an ability that currently qualifies as space-alien technology.
RC4 is terribly broken and should be disabled. But the practical attack on RC4 requires many repetitions of the same plaintext --- "many" in the millions. That's a real threat to HTTPS/TLS, but not so much to SSH. Even hypothetical improvements to Fluhrer/McGrew would still require lots of repetitions. Disable RC4, but if you're playing along at home, this probably isn't it.
No serious cryptographer seems to believe that the NIST curves are backdoored. Avoid them if you can; they suck for other reasons.
It is conceivable, with the right tradeoffs, that computing individual discrete logs over a few popular 1024-bit moduli could be made cheap. However, none of the available evidence suggests this is actually being done.
There is a lot of inconsistent thinking behind the advice given in the article:
- Hard-to-implement NIST curves suck, whereas GCM and Poly1305 are recommended.
- NIST apparently sucks, but NSA-designed SHA-2 is recommended.
- MACs need 256-bit tags, so UMAC and not-NSA-designed RIPEMD160 is apparently not fine, but GCM/Poly1305's 128-bit tags are recommended. On this note, 256-bit tags are pointless when the rest of the crypto infrastructure is sitting on 128-bit security.
- 3DES is not recommended because DES is 'broken', not realizing that this break is due to the small key length of the original DES; 3DES itself is deemed to be quite secure (but slow).
- 64-bit block ciphers are ruled out, but for no good reason: SSH's 32-bit sequence number, along with counter mode, renders block-size worries moot.
3DES has become fairly weak, and it seems like most cryptographers wouldn't recommend it, especially if you are trying to defeat the NSA. Originally it had 168 key bits, but with known attacks it's reckoned to have around 80 bits of security left, which, given that DES was considered insecure at 56 bits, makes it sound iffy at best.
Well...there's "insecure"...and then there's really insecure.
DES (and by extension 3DES) isn't, per se, "insecure" at 56 bits, except that technology has progressed since the mid-1970s such that an exhaustive search of the keyspace (i.e. brute force) is now practical in reasonable time. DES is resistant to differential cryptanalysis, and even the more modern techniques that could seriously reduce DES security are theoretical exercises at best (like those requiring terabytes of known plaintext to derive a key, or those only applicable to reduced-round implementations).
Yes, I'm aware that to a cryptographer "theoretical attack possible" == "OMFG insecure cipher", and that attitude is a good thing. If DES were an AES candidate we'd never pick it. But from a practical standpoint, barring implementation mistakes or operational missteps, no one using known 2015 tech and techniques is cracking 3DES before the heat death of the universe (or at least before we're long since turned to dust).
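For a rough sense of scale, a quick Python back-of-envelope (the key-search rate is an arbitrary but generous assumption, and the 2^112 figure is the classic meet-in-the-middle time bound, ignoring its absurd memory requirements):

RATE = 10**12                      # keys tried per second (assumed, generous)
YEAR = 3600 * 24 * 365             # seconds per year

des_hours  = 2**56 / RATE / 3600   # single DES, exhaustive key search
tdes_years = 2**112 / RATE / YEAR  # 3DES, meet-in-the-middle time bound

print("DES exhaustive search: ~%.0f hours" % des_hours)    # ~20 hours
print("3DES (2^112 work):     ~%.1e years" % tdes_years)   # ~1.6e14 years

That's roughly twenty hours for single DES versus about ten thousand times the current age of the universe for 3DES.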
That said, can you provide a reference to a practical attack (that is, not the linear cryptanalysis stuff from Davies) which reduces 3DES to 80 bits of effective security? If it's there, I missed it, and it would invalidate what I've said.
> Hard-to-implement NIST curves suck, whereas GCM and Poly1305 are recommended.
I've always wondered this about DJB - he preaches the gospel of ease-of-implementation with Curve25519 and Salsa/Chacha20, but then for a MAC he has... Poly1305. I guess speed trumps everything?
Sure. I'm not saying Poly1305 is a problem for DJB, just for anyone else trying to implement it, which is a concern DJB has with his other crypto, but not here.
GCM is harder for everyone else to implement than Poly1305. It's harder in the "literally trickier to implement" sense, and in the "needs hardware support to be performant and secure at the same time" sense.
> "needs hardware support to be performant and secure at the same time"
So does Poly1305; it just so happens that most popular processors have strong hardware support. Here's an exercise: implement both GHASH and Poly1305 for MSP430.
I think you're calling fast multipliers "hardware support", which is fair, but the hardware support needed by GHASH is idiosyncratic to things like GHASH. CLMUL is only a few years old and GCM is its primary use case.
Implementing a truly constant-time GCM in software without CLMUL is sufficiently hard that no one has managed to create a remotely competitive implementation. They're all either an order of magnitude slower or vulnerable to cache-timing attacks.
Poly1305 isn't a walk in the park, but it doesn't need special hardware support for a fast constant-time implementation. Though I will agree something like HMAC is much simpler.
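For a sense of why: the core of Poly1305 is just big-integer arithmetic mod 2^130 - 5, where r and s are the two 128-bit halves of the one-time key. A toy Python sketch (not constant-time, no key derivation or AEAD framing; purely to show the shape of the computation):

P1305 = (1 << 130) - 5

def poly1305_tag(msg, r, s):
    r &= 0x0ffffffc0ffffffc0ffffffc0fffffff   # clamp r as the spec requires
    acc = 0
    for i in range(0, len(msg), 16):
        chunk = msg[i:i+16]
        # each block is read little-endian with a 0x01 byte appended
        block = int.from_bytes(chunk, "little") + (1 << (8 * len(chunk)))
        acc = (acc + block) * r % P1305
    # add s and truncate to 128 bits, little-endian
    return ((acc + s) % (1 << 128)).to_bytes(16, "little")

A constant-time version of that only needs a constant-time multiplier, which most CPUs already have; GHASH's GF(2^128) arithmetic is what needs either CLMUL or leaky table lookups.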
I think the reason why many still think SHA2 is safe is mainly because of Bitcoin. There's a lot of money at stake and many crypto experts working with Bitcoin. If there was a hole, it's possible someone would've found it. AES is also a NIST standard and is considered safe.
That said, I think it's better to avoid them anyway just to give another hit to NIST/NSA. Plus, ChaCha20 and BLAKE2 have much better performance in software than AES and SHA2/SHA3 anyway, so I would like to see those adopted as default options instead.
I know I've bothered you before about this, but can you explain [1] for the layman? It seems to be saying that non-rigid curves may have secret attacks. Then provides a table where, for some reason, just the NIST curves are listed as "manipulatable".
It seems, from reading [2], that the NIST curves went out of their way to claim "verifiably random" generation... using unexplained seeds. The page says it's conceivable that the NIST curves have weaknesses that "were introduced deliberately by NSA."
I don't understand the math so it's likely I'm totally misunderstanding. But reading those pages, they seem to hint that the NIST curves might have some intentional flaws, and that it's suspicious that they generated curves that are susceptible to known problems.
space-alien technology speculation aside, i've been aware of openssh's less-than-reassuring default selection order for Ciphers, HostKeyAlgorithms, KexAlgorithms and MACs for a few years. for most modern computers and cpus, using these stronger algos amounts to, at most, a 10% speed loss when scp'ing and a 10% increase in cpu usage. even machines with weaker cpus will barely show any signs of fatigue with these stronger algos. despite this, at least 2 of the openssh devs have rebuked my suggestion to change the default algorithm selection order.
it's not exactly clear (to me) why anyone who runs a project that so many ppl depend on for security would stick to such old and crufty algos. since openssh and openbsd are intertwined, it does make me wonder if this is being done so that openssh can run on the latest vax, etc (omg! but it will take a week for it to generate the right sized keys!).
EDIT: openssh in 2nd paragraph changed from openbsd, a typo.
I feel like most of this is just because OpenSSH has been under development for 15 years rather than any conspiracy. Cryptography moves forward and new algorithms get added, yet the defaults don't get changed.
I have no idea what you're referring to. The OP took issue with the default algorithm(s) selection order over the course of two paragraphs. I noted they were recently changed in the latest OpenSSH release to exclude weak ciphers.
Using -C with scp will result in drastic speedups most of the time anyway; it's possible any slowdown due to a nicer cipher suite can be counteracted with that.
That's orthogonal and compression won't help much if you're copying things that are already well compressed like archives, videos, photos... And compression only helps if you're I/O bound.
If using a stronger cipher is enough to slow you down, chances are that you're CPU bound anyway, and adding compression on top is actually going to make your transfer slower.
Making it slower isn't just a theoretical problem: I routinely saw it working with fast network hardware (1G, later 10G) on hosts that were loaded with user tasks (a computational lab), and I've still seen it in recent years on hosts running AIX/Solaris/etc., where it's apparently routine for vendors to ship OpenSSH without any compiler optimizations enabled.
Depends on where the bottleneck is. A lot of the time scp is hampered by latency, and moving files that are not already compressed will usually get some level of speedup on high-latency connections. Compression is also asymmetric: a fast remote host paired with a slow one that's bottlenecked by decryption might still see a speedup.
In theory, compression before encryption should make encryption faster on a multicore system, since there's less data to encrypt and the two steps can overlap. However, I'm pretty sure all ssh clients are single-threaded, so that wouldn't apply.
They're in a format that requires implementations to be especially careful validating input parameters to avoid leaking information; for instance, an attacker-submitted faulty coordinate can trick an implementation into performing a calculation with its secret key on the wrong curve.
Here's the most egregious case: Bitcoin's secp256k1. This curve is defined by the equation y^2 = x^3 + 7 with arithmetic modulo the prime p = 2^256 - 2^32 - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1.
Suppose you have some way to send a point P to a non-point-verifying victim, and get the scalar multiplication Q = s . P back, where s is the secret key. If we send a point on the curve y^2 = x^3 + 0 over the same prime---which is technically not an elliptic curve---the arithmetic will still make sense and we will get a meaningful result. However, discrete logarithms on this second curve are very easy to compute: s = (Q_x P_y) / (P_x Q_y) mod p. Without point verification, stealing the secret key is a simple matter.
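A toy Python sketch of exactly that (the scalar multiplication below stands in for a hypothetical non-validating victim, not any real library; the essential point is that the usual secp256k1 formulas never use the constant 7; Python 3.8+ for the modular inverses):

p = 2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1   # secp256k1 prime

def add(P, Q):
    # textbook affine addition/doubling for y^2 = x^3 + a*x + b with a = 0;
    # b never appears, so the same code "works" on y^2 = x^3
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                  # P + (-P) = infinity
    if P == Q:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul(s, P):
    # plain double-and-add, no point validation: the flaw being exploited
    R = None
    while s:
        if s & 1:
            R = add(R, P)
        P = add(P, P)
        s >>= 1
    return R

secret = 0xC0FFEE1234567890DEADBEEF     # the victim's key, unknown to the attacker
P = (4, 8)                              # 8^2 = 4^3 mod p: on y^2 = x^3, NOT on secp256k1
Q = scalar_mul(secret, P)               # what the non-validating victim hands back

# smooth points of y^2 = x^3 map to the additive group of F_p via (x, y) -> x/y,
# so s = (Q_x / Q_y) / (P_x / P_y) mod p
recovered = Q[0] * pow(Q[1], -1, p) * P[1] * pow(P[0], -1, p) % p
assert recovered == secret

A victim that simply checks y^2 == x^3 + 7 mod p before doing anything else rejects P immediately.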
This example is slightly artificial, but real examples are just as deadly; they usually recover the secret key a few bits at a time and are a little more complicated.
This kind of invalid curve attack exists against all elliptic curves, so it's a bit difficult to argue that they're a reason to prefer one curve type over another.
The situation is a bit different in the presence of point compression, in which case you're typically concerned with twist security, but the security of the NIST P-256 quadratic twist is pretty decent, so again this isn't a strong argument against it.
The two good reasons to choose something like Curve25519 over NIST P-256 are 1/ speed and 2/ the fact that it's somewhat simpler to obtain side-channel protected implementations. For SSH key exchange, it's pretty much a wash for most realistic settings (only a server that spends significant CPU time simply establishing SSH connections would care about the performance difference here).
I chose the singular example for its simplicity; invalid curve attacks are, of course, much more general (smooth-order curve + CRT). That said: how would you mount an invalid curve attack on a curve in Edwards form? It is obvious if the adversary is using Hisil's d-less formulas, but it does not seem obvious otherwise.
It certainly depends on the precise arithmetic being used, but for the usual complete addition law, for example, I'm pretty sure I can recover k from the computation of [k]P where P is of the form (0,y) (or (0:Y:1) in the projective case) and not on the curve. That's a cute idea for a paper that I'll probably write up, by the way; thanks!
And sadly there's a patent covering point-verifying, so the workaround for this includes paying licence fees to Certicom if you're in the US.
As far as I'm aware there's still no real progress on getting better curves (e.g. curve25519) into TLS, despite a lot of noise on the ML, which is a real shame.
Better curves would also be faster - so it's not "just" a security thing.
Actually, CFRG's doing a consensus call to adopt a rough draft from agl containing Curve25519 as an RG document right now - and I feel fairly comfy saying we seem to have good consensus and running code for X25519 (the Montgomery-x key exchange over the curve known as Curve25519, introduced in that paper). Implementers are already pushing ahead. I wouldn't think the TLS group or anyone else needs to delay that work any longer - it's been quite long enough already in my opinion.
Not quite so sure about signatures, but that's more a PKIX WG problem with more (CA-style) inertia behind it, so that won't move very quickly no matter what. The chairs want to resolve signatures after the curve and key exchange algorithm, which the TLS WG participants seem to want sorted out first.
> As far as I'm aware there's still no real progress on getting better curves (e.g. curve25519) into TLS, despite a lot of noise on the ML, which is a real shame.
I thought Google was pushing for it to be adopted in TLS 1.3. Did everyone else reject that idea or what happened?
Re: curve25519, just do what vendors always do: add it to your implementation as a proprietary extension and wait for everyone to adopt it as de facto standard.
The problem is Apple and Microsoft. If Curve25519 isn't standardized, Google and Firefox will implement it, and Unix servers will get support through OpenSSL, and so a big chunk of the web will get to use Curve25519, which is a bit of a win.
However, Apple probably won't support Curve25519 without a standard, and Microsoft definitely won't: they have a competing proposal. Which will leave the NIST curves widely used across the web as well, because IE and Safari support is critically important.
How are they a problem? Firefox/Chrome/OpenSSL can still pick up Curve25519, and Microsoft can still hold out for their own proposal; neither cancels the other out and we're already stuck with NIST curves (rfc4492). In fact, TLS specifically advertises what curves are available so that an upgraded server/client can use stronger curves in the future - optionally.
I know MS will be dicks and try to force their own version of everything, but I'll bet Apple will implement anything that there's a half-decent reference implementation of. That just leaves Microsoft, and the easiest way to defeat their proposal is to get their customers to demand they support the thing everyone else already implements, which would be Curve25519.
Let's imagine a scenario that could result from this:
1. DJB criticizes NIST and other standardization institutes and their curve selection choices
2. CFRG fails to recommend a curve (or a suite of curves) for TLS WG
3. Microsoft refuses to adopt Curve25519
4. Everyone else does
5. Interop problems
6. ?????
7. Everyone who has to clean up after this is aligned strongly towards standardization processes.
That's how they are a problem.
(For the record: This isn't a conspiracy theory, I don't think anyone wants this to happen and is actively trying to manipulate things to make it happen, it's merely a hypothesis on what could happen.)
DJB said in his recent talk that Microsoft is going to adopt "26 new curves", and he seemed pretty happy about it. But I'm not sure whether that means Microsoft will support Curve25519 or not. If they will indeed support 26 new curves, but they won't support Curve25519, that would be pretty silly of them.
If you're expecting a curve with 2^250 ish possible resultant values, and you perform a calculation on a curve with only 2^13 ish possible values, you're going to leak some information about the number you gave it.
The ECC Hacks talk by Dan Bernstein and Tanja Lange explains it better than I can.
The point that is transmitted during the key exchange isn't guaranteed to be on the curve, but it isn't checked (and it'd be expensive to do so). The problem does not exist with carefully chosen curves. TBH, it's highly unlikely I got that right, so you might want to watch the example in the first half of the talk.
What percentage of ssh keys are encrypted, do you estimate or wager? I can't imagine that they haven't built factoring hardware, but I agree that scale is a problem. I also can't imagine that they need to use factoring hardware if, say, 80% (I'm just guessing, could be much higher) of the ssh private keys are just chillin' on disks unencrypted; there have to be tons of other exploits to get those.
Server keys? Probably closer to 99.999% are just chillin' on disk.
Client keys? How many people use github, but don't want to enter a password on every push and aren't hardcore about setting up agents (esp. on Windows)?
It's basically a smartcard in the form factor of a nano-USB-stick. You can generate a pair of public/private SSH keys, with the private key remaining forever on the token.
Then setup gpg-agent in ssh-agent emulation mode (it's three lines in a file) and voila! you have hardware-backed authentication.
I'll probably write a HOWTO soon, but until then have a look at this:
It's unnecessarily complicated as described in the top post, but read the comments below, too. The real setup is dead simple - just a few lines in gpg-agent.conf and one of the .*profile files.
I'd love the writeup if you get the chance, I've looked at various places on how to do it (my brother bought me a Yubikey for Christmas) but like you said, it seems more complex than it should be. I'm interested to see how it'll go with OS X.
Generate public/private key pair on the NEO smartcard:
Run 'gpg --card-edit'
In the menu, choose 'admin'. Then choose 'generate'. Then 'quit'.
That's it. The private SSH key will remain on the smartcard forever; it will never leave it, not even during authentication. It cannot be extracted (well, maybe the NSA can, who knows).
To extract the public SSH key from the card, run 'ssh-add -L > my-public-key.pub'
You may want to edit the name (the third field) at the end of the key.
I'm 99% sure ssh-add -L works on any Unix system, you don't need anything preconfigured, just plug the token into it and run the command. This way you can easily get your public key no matter where you are.
The smartcard has a user PIN and an admin PIN. The default user PIN is '123456'. The default admin PIN is '12345678'. It is recommended to change them.
After 3 mistakes entering the user PIN, the card locks up and you'll need to unlock it with the admin PIN.
After 3 mistakes entering the admin PIN, the card is dead forever. Be careful with the PINs.
Read "man gpg", options --card-edit, --card-status, and --change-pin.
You will have to enter the user PIN when you authenticate SSH. It's cached for a while (see below).
#########################
Configure Linux or OS X to use ssh key authentication with the NEO:
Install gnupg, either from Homebrew or from GPG Tools (on OS X), or via repos on Linux.
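Then put a few lines in ~/.gnupg/gpg-agent.conf. Something along these lines (the pinentry path is only an example for a GPG Tools install; see the notes below about when you need that line at all):

# ~/.gnupg/gpg-agent.conf
# make gpg-agent also act as an ssh-agent
enable-ssh-support
# write connection info to ~/.gpg-agent-info (sourced from .bash_profile below)
write-env-file
# GUI pinentry from GPG Tools on OS X; adjust or drop per the notes below
pinentry-program /usr/local/MacGPG2/libexec/pinentry-mac.app/Contents/MacOS/pinentry-mac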
On Linux, I think you don't need the pinentry-program line, so remove it (not sure). Or experiment with various pinentry utilities, see what works for you; there should be a pinentry somewhere on your system after you install gnupg, and usually it's text-mode.
The value shown above is for OS X with GPG Tools, which is a GUI mode pinentry. If you install gnupg via Homebrew, read what I said above about Linux. Or google for the GUI mode pinentry for OS X - it's a separate download, made from an older GPG Tools version, that you can install along with Homebrew gnupg.
$ tail -n 7 .bash_profile
GPG_TTY=$(tty)
export GPG_TTY
if [ -f "${HOME}/.gpg-agent-info" ]; then
    . "${HOME}/.gpg-agent-info"
    export GPG_AGENT_INFO
    export SSH_AUTH_SOCK
fi
The GPG_TTY is not needed with the GUI pinentry that comes with GPG Tools on OS X, but might be needed for the simpler text-mode pinentries that come with other gnupg distros.
On Linux, or on Mac with gnupg installed from Homebrew, you need to launch gpg-agent upon logging in (GPG Tools will do that automatically for you). One way that seems to work well (checked with Homebrew gnupg on OS X, and with the Linux gnupg) is to add this to .bash_profile:
eval $(gpg-agent --daemon)
To use it, put your public key on a server, plug the NEO into USB, and run 'ssh user@host'. pinentry will ask you for the user PIN. And that's it.
##########################
WARNING:
PCSC is broken on OS X 10.10. If you're on 10.9, stay there if you plan to use the NEO (or any smartcard for that matter). More details here:
Does using the Yubikey in this manner protect your keys even if your machine has malware on it? I read through this tutorial [1], and they say you need read/write access to the device, so it seems to me like malware could access your keys while the smartcard is plugged in. If this is the case, I'm not sure how this setup is any more secure than simply keeping your keys on a USB flash drive and plugging it in whenever you need to use ssh.
The private SSH key never leaves the smartcard, not even during authentication. It is not exposed to the OS or any process, at all. You can't extract it at all (maybe the NSA can, who knows). The actual authentication takes place on the token, not in a process on your Unix system.
The only thing that the malware can do is issue an authentication request while the token is plugged in. That's all. If the PIN is not cached, you'll be prompted to enter it, and you'll be like "why is it asking me to enter the PIN?"
Maybe they could attach a debugger to gpg-agent and spy on it, but again, this would not give them your private key.
> Client keys? How many people use github, but don't want to enter a password on every push and aren't hardcore about setting up agents (esp. on Windows)?
I encourage everyone to use encrypted keys on all platforms. You can set up the regular ssh-agent in git bash, and Atlassian's Source Tree can also use encrypted keys.
I don't see the point of encrypted keys: if my computer is compromised, it is a trivial matter for an attacker to log input and get the password. If the computer is stolen, the disk encryption should be enough.
There are many ways to compromise a computer without installing something and having the user later provide input. Easiest example is a lost or stolen laptop - if it's not encrypted, you can get the contents of the disk, but the user isn't going to be around to provide more input.
Client keys: chk out Userify :) (shameless plug follows) You keep your private key private client-side. (use an agent or not). Userify deploys user/sudo/key to your project servers. </plug>
Most key "management" systems like to hold onto your private key for you and provision your connection for you. That's insane and defeats the whole point (as you point out)!!
As far as server-side, automation keys are often 'server' side (where the server is itself a client). Userify can manage and deploy those keys as well, but it's not super easy (yet) -- currently, you still have to 'invite' a (fake) user, create a new user account (company_backup_account or whatever), and then choose all the servers that you want that public key deployed to. That part could definitely be easier.. and soon will be.
Unless all they do is capture the flow of every handshake in a huge database. Then they can use that for targeted decryption or other forms of prioritized targeting. The NSA program is called Longhaul. Watch the talk Appelbaum gave at CCC very recently.
MD5 is survivable in the HMAC construction.