
Last time I tried using gpg on a VM it failed to work (literally would not do anything) because it blocked on /dev/random. Would you say that the gpg people should be outsourcing their work to someone else?


Things have improved a little on this front. It turns out GnuPG was being a little gluttonous when it came to entropy:

"Daniel Kahn Gillmor observed that GnuPG reads 300 bytes from /dev/random when it generates a long-term key, which, he observed, is a lot given /dev/random's limited entropy . Werner explained that GnuPG has always done this. In particular, GnuPG maintains a 600-byte persistent seed file and every time a key is generated it stirs in an additional 300 bytes. Daniel pointed out an interesting blog post by DJB explaining that a proper CSPRNG should never need more than about 32 bytes of entropy. Peter Gutmann chimed in and noted that a 2048-bit RSA key needs about about 103 bits of entropy and a 4096-bit RSA key needs about 142 bits, but, in practice, 128-bits is enough. Based on this, Werner proposed a patch for Libgcrypt that reduces the amount of seeding to just 128-bits."[^1]

On a related note, why are you generating keys on a remote VM? It's probably not fair to say that gpg "failed to work (literally would not do anything)." It was doing something: GnuPG was waiting for more entropy. Needing immediate access to cryptographic keys you just generated on a freshly spun-up remote VM is kind of a strange use case.

[^1]: https://www.gnupg.org/blog/20150607-gnupg-in-may.html
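To make DJB's point concrete, here's a rough Python sketch (using SHAKE-256 as a stand-in for a real CSPRNG; this is not how GnuPG or the kernel actually do it) of how ~32 bytes of seed can be stretched into as much output as you like:

    import hashlib
    import os

    # A ~32-byte seed is all a modern CSPRNG construction needs.
    seed = os.urandom(32)

    # SHAKE-256 is an extendable-output function: feed it the seed once,
    # then squeeze out as many pseudorandom bytes as you like.
    xof = hashlib.shake_256(seed)
    stream = xof.digest(4096)  # 4 KiB of output derived from a 32-byte seed

    print(len(stream), "bytes derived from a", len(seed), "byte seed")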


Thanks very much for the updates on entropy requirements.

Re: "Why are you generating keys on a remote VM" - prior to this, it hadn't occured to me I couldn't generate a gpg key on linode/digital ocean, VMs. I realize now that keys should be generated on local laptops (or such), and copied up.

Re: "Fair to say failed to work" - It just sat their for 90+ minutes - I spent a couple hours researching, and found a bug (highly voted on) that other people had run into the same issue. But, honestly - don't you think that gpg just hanging for 90+ minutes for something like generating a 2048 bit RSA key should be considered, "failing to work?" - I realize under the covers (now) what was happening - but 99% of the naive gpg using population would just give up in the same scenario instead of trying to debug it.


Yeah, the bug was really how it handled the case of waiting forever without telling you why. In GPG's defense, before it actually starts reading from /dev/random, it does give you all kinds of warnings that it needs sources of entropy before it can make any progress.

Hard to get that kind of thing right, but fundamentally it did stop you from making exactly the kind of terrible mistake that I was talking about. ;-)


In environments with limited initial entropy (such as VMs), just use haveged [1].

[1] - https://wiki.archlinux.org/index.php/Haveged
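Before reaching for haveged, it's worth checking whether entropy really is the bottleneck. A quick Linux-only sketch (the number is just the kernel's estimate):

    # Rough Linux-only check of the kernel's entropy estimate.
    # On older kernels, a value near zero is what makes /dev/random block.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print("entropy_avail:", f.read().strip())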


Whoa. I didn't realize it was reading 300 bytes. That is excessive.


No, they should outsource their UX work to someone else.

They know their crypto requires good entropy. GPG should time out, or give a warning and an option to shoot yourself in the foot if you really want to. But that's UX, not crypto.


I thought the entire purpose of this thread was pointing out that /dev/urandom is plenty fine for security purposes, and all blocking on /dev/random does is, well, block your program for no particularly good reason.


/dev/urandom is fine for a lot of contexts. If you are doing key generation (which I believe is when gpg reads from /dev/random), particularly inside a VM, that might be the case where you need a small seed that can't be controlled by the VM environment. You don't want two instances of the VM picking exactly the same persistent keys.

As the article says, it's a screw-up not to have each VM instance seeded separately; you need that to avoid mucking the whole thing up. When generating keys for permanent use, you want to fail in that situation, at least until some entropy shows up, as it did.


If you have two VMs with identical /dev/urandom states, that's a grievous vulnerability that will impact far more than GPG. This doesn't seem like a good reason to use /dev/random, but rather a reason to fix whatever distro bug is preventing your (u)random from being seeded properly.

You do not need /dev/random for key generation.
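For instance, in Python the usual advice is to pull key material straight from the urandom-backed CSPRNG. A minimal sketch (not what gpg does internally, just the general point):

    import secrets

    # secrets is backed by os.urandom() / getrandom(), i.e. the urandom CSPRNG.
    aes_key = secrets.token_bytes(32)   # 256-bit symmetric key
    print(aes_key.hex())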


In addition, if two VMs have the same urandom state, that means they must have the same random state. So now you are hoping that somehow those states diverge? But given that they started the same, I'd be wary of making that assumption. There's no guarantee that whatever "entropy" unblocks random on one machine won't be identical on the second machine.


You don't need it, but if you don't have enough entropy for key generation, something is wrong.


If that's snark, you've missed my point. If urandom on your system is insecure, all sorts of other software on that system is also insecure.


Yes, so the question is what the preferred failure behaviour is when there isn't enough entropy in the system to select a single unpredictable key with any confidence of uniqueness within an unbounded time window. For that to be true, the system must effectively have no source of entropy at all.

I think blocking and not returning until you have entropy is a reasonable failure behaviour for gpg in the key generation process.

It'd be nice to report something and maybe hint to the user that you are waiting for just a modicum of entropy to show up, but at least it isn't handing you a key that is entirely predictable to anyone with access to the original VM image (and worse still, the SAME as in any other instance of that image!!!).

The bug the article is referring to is that a lot of security systems will block reading from /dev/random when in fact /dev/urandom would provide a securely unpredictable sequence of data with no statistical likelihood of another system producing the same sequence. It's particularly bad where timeliness is an important part of the protocol (which is largely a given for anything around, say, a TCP connection). That's a silly design flaw.
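For what it's worth, a program that wants to warn rather than hang silently can ask the kernel non-blockingly whether its pool has been initialized. A rough Python/Linux sketch of that idea (not what gpg actually does):

    import os

    def pool_ready() -> bool:
        """Return True if the kernel CSPRNG has been seeded."""
        try:
            # GRND_NONBLOCK makes getrandom(2) fail with EAGAIN instead of
            # waiting for the pool to be initialized.
            os.getrandom(1, os.GRND_NONBLOCK)
            return True
        except BlockingIOError:
            return False

    if not pool_ready():
        print("Waiting for the system entropy pool to initialize...")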


I think I see what you are saying - that gpg blocking, and failing to create a key on a VM, is actually desired behavior, and that the only real problem is that gpg doesn't time out more quickly and say something like, "System not providing sufficient entropy for key generation".

But, if that's the case, then the entire thesis behind "Use /dev/urandom" is incorrect. We can't rely on /dev/urandom, because it might not generate sufficiently random data. /dev/random may block, but at least it won't provide insecure sequences of data.

This is kind of annoying, because I was hoping that just using /dev/urandom was sufficient, but apparently there are times when /dev/random is the correct choice, right?


/dev/urandom will generate secure random data. That's what it does.

That was the point of the blog post: if you are using /dev/random as input into a cryptographic protocol that will stretch that randomness over many more bytes, WTF do you think /dev/urandom is already doing?

What /dev/urandom might fail to do, and this primarily applies to your specific case of a VM just as it is first launching and setting things up, is generate unpredictable data; worse still, it might generate duplicate data in certain specific use cases where that would just be awful.

I would agree that you got the gist right though: /dev/urandom is usually the right choice, but when it is not, /dev/random is indeed needed. Most people misunderstand the dichotomy as "random" versus "guaranteed random", which leads to very, very unfortunate outcomes. Other people misunderstand what they are doing cryptographically: they somehow think a cryptographic algorithm that uses a small key to encrypt vast amounts of data has no concerns about insecure, non-random cyclic behaviour, yet cast a jaundiced eye on /dev/urandom. It basically amounts to "I think you can securely generate a random data stream from a small source of entropy... as long as that belief isn't embodied in something called urandom".

Again, if you don't know the right choice, you should pass the baton to someone who does, because even if you make the right choice, odds favour you doing something terrible that compromises security.


I think the key point you made was this: "What /dev/urandom might fail to do...is generate unpredictable data."

That was not something that I was aware of, thanks.


Argh. No.


So, contrary to the article, you are claiming that there are good reasons to use /dev/random some times?


There are very specific use cases where the concern is not about entropy in the output series (because, duh!), but about having a small seed value that is unpredictable (which is not quite the same thing as cryptographically secure), even in a controlled setup like the startup of a VM image.

I would agree with the article that the right way to fix the problem is to seed each instance of the image on startup, and that also avoids the problem of it blocking.

That's a special case though, where you aren't in the middle of a cryptographic handshake, you don't have real-time constraints, and the fix for the real problem also makes the blocking problem go away. Don't use /dev/random as a network service's source of randomness.
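For completeness, "seed each instance on startup" can be as simple as a first-boot hook that mixes host-provided bytes into the guest's pool. A hedged sketch (the seed path below is hypothetical; in practice you'd use virtio-rng or systemd's random-seed machinery rather than rolling your own):

    # Hypothetical first-boot hook: mix host-provided seed material into the
    # guest's pool. Writing to /dev/urandom stirs the bytes in, though it does
    # not credit the entropy estimate (that needs the RNDADDENTROPY ioctl).
    SEED_FILE = "/var/lib/cloud/seed/random_seed"  # hypothetical path

    with open(SEED_FILE, "rb") as src, open("/dev/urandom", "wb") as pool:
        pool.write(src.read(512))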



