You wouldn't believe how many "engineers" I've seen open up random.org's string generator, and fiddle with the settings + copy a string from it only to drop it into a terminal for a root password/etc.
Case in point, I saw 2-3 "DevOps" engineers do it in my last position... over a screen share..! When you'd bring it up to people they would just roll their eyes and call you paranoid. =(
Do you believe "trying various snippets from random.org history" to be a viable oracle for guessing passwords? Because honestly that does sound a bit paranoid.
I guess I don't follow, since an actual real-world attack vector would look quite different from that. Maybe I'm misunderstanding what you're suggesting.
Visiting a webpage and copy-pasting a string off of it is not good security practice, because you're adding a lot of parties you now have to trust with that secret!
Effectively you want to minimize ANY place where your secret exists in plaintext, and trusting a webpage with this is just not a good idea.
It absolutely is paranoia. No point calling it anything else.
Imagine owning random.org and being not especially malicious, just a little clever. I know how many people come here for a quick string; more importantly, I know you come here. I keep serving you the same blob. Now I hold all the pieces needed to brute-force your infrastructure. Maybe you'll use the wrong setting and something will be public that shouldn't be. Hello.
On top of that, I find taking a shell and running `pwgen 24` much easier and faster. It even generates a batch of passwords if you need a few. And it can be piped into things like ssh sessions for automation.
And yes, our recommended way is to have the secret management automatically generate the password. That way, not even your workstation touches it.
IMO - high, but let me explain... We were also under regulation. And we also had a giant target painted on our backs at all times, due to information that would be of immediate use to an attacker.
It all comes down to the chain of trust. When it comes to a root password, or any business-critical secret, you want to minimize ANY sort of risk; that's just the right way to do business.
When I point my scraper at random.org I can see it talks to "ocsp.digicert.com", "ajax.googleapis.com", "ssl.google-analytics.com", and obv. "random.org" (wow that's actually pretty good =P)... those are three additional entities, on top of random.org itself, that now need to be trusted, because each of them has the opportunity to see what was rendered on that page in plaintext, to see what you selected, etc.
Then add to that any browser plugin, the browser itself, etc. Then the "in plaintext over a screen share" issue, and you've got a lot of points where something, or someone, could MITM a plaintext password if they wanted or needed to.
Generating a random password/secret by visiting a public site on the internet is stupid/silly with regard to actual security, and opens you up to attack vectors _for no real reason_. There are a TON of VERY QUICK/EASY ways to generate a very secure string for secret management that don't involve trusting a ton of third parties =|
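For instance, a rough sketch using nothing but the Python standard library (the length and alphabet here are just illustrative) keeps the whole thing on your own machine:

```python
import secrets
import string

# Characters are drawn from the OS CSPRNG (os.urandom under the hood);
# the secret never leaves the local machine.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 24) -> str:
    """Generate a password of `length` characters chosen uniformly at random."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())          # e.g. a 24-character root password
print(secrets.token_urlsafe(32))  # or a URL-safe token if punctuation is awkward
```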
In a "security culture conscious" SF tech company there should be no place for laziness/lack of care like that. IMO - dumb compromises like that are how you get caught with your pants down leaking a ton of PII.
DevOps usually means "it's faster and cheaper because we don't have a dedicated sysadmin". The intersection of good developers and good sysadmins is narrow, like that of developers and designers.
Maybe true of a junior DevOps engineer who has never done any sysadmin work or much programming outside of school. There are always people who are skilled at their job in whatever field.
Yeah - careful with that generalization though... talent is more a person-to-person and organization-to-organization thing and not a specific title IMO
There’s a strong assertion that it’s a bad idea, but no actual reasons given. The link doesn’t address it. So I asked the question.
So far the answers have been downvotes and evasive questions, so I’m leaning toward the idea I stepped into some kind of ideological thing. That’s fine, I don’t really care so I withdraw the question.
It is mostly ideology. But using something like random.org does raise your risk profile.
Random.org or any of their partners or your browser or the connection between you and random.org could all potentially be compromised.
If someone knows that you always generate your random salts with that site, they could potentially use past generated strings to reverse engineer your crypto.
Of course, very few password generators are only going to use the random seed you gave them. You would likely also need to know the exact microtime and a ton of other variables to be able to "replay" the same scenario and generate a copy of the key.
The strength of your crypto is based on how unpredictably random the data you provide it is.
Assuming random.org is not the only source of randomness that your application used, it's probably fine.
If it is the only source, though, and reusing the same random string produces the same output, that's quite dangerous. Especially if you are screen sharing: someone tied to the project could easily reconstruct the output by copying the random string from the video.
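To make the replay risk concrete, here's a toy sketch (not any particular product's generator) of what happens when key material is derived only from a string someone could copy off a screen share:

```python
import random

def keygen_from_seed(seed_string: str, nbytes: int = 16) -> str:
    """Toy generator: derives 'key material' purely from the given seed string."""
    rng = random.Random(seed_string)   # deterministic and NOT cryptographically secure
    return bytes(rng.randrange(256) for _ in range(nbytes)).hex()

copied_from_video = "grmevhiSauV2"     # hypothetical string an attacker saw on screen

# Anyone who saw the seed reproduces the output byte for byte.
print(keygen_from_seed(copied_from_video))
print(keygen_from_seed(copied_from_video) == keygen_from_seed(copied_from_video))  # True
```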
I think it’s a good question! For me personally I feel that urandom is the right thing to do. Perhaps I’m overly cautious but I don’t think it’s prudent to include an external dependency when there is a good local alternative included in most systems. urandom is very solid. If one’s network connection goes down or random.org goes down, one can still generate randomness without problems.
Kind of surprised at the lack of information about the current setup, which is certainly the most sophisticated and therefore the most interesting. How did they manage to distribute the servers? What are they using now as a source of randomness? Are there still "keep away" notes on those nodes?
Really cool to see this here. Mads supervised my undergraduate thesis, I remember him explaining the random.org setup when I asked about the servers & radio sitting in the corner of his office. A super friendly and smart guy.
Of note, his office also contained a pretty legendary kendo sword linked up to sensors (this was early 2000s). Ostensibly to assist with technique, but I'm pretty sure it was for light sabre visualisations...
The fatal flaw, of course, is that everyone accessing their stream gets the same random data, which makes it substantially less random in the sense that matters here: others being unable to predict it. So do not use it for cryptography!
They're not very explicit, but their FAQ indicates that they are buffering random data and they sometimes even turn the generators off, which would suggest that they're not serving the same random data to multiple users.
Interestingly, most clues about how their system works are given in the 'paranormal' section of their FAQ at https://www.random.org/faq/
I'm not a crypto expert. But I do know this stream is a firehose. Somewhere on the site it says how much data is generated. I forget. Naively I would think that, at least for some applications, there would be no way to determine what part of the stream had been sampled. And the stream connection is over https.
I find it a little concerning that RANDOM.ORG doesn't make it clear that it is a trusted service (one you have to take on faith) and cannot be relied on for secure entropy. The only mention is this, buried in the FAQ:
>anyone genuinely concerned with security should not trust anyone else (including RANDOM.ORG) to generate their cryptographic keys.
But the problems go beyond cryptographic keys. If you use RANDOM.ORG to pick lottery winners, you're trusting that the numbers you get are as truly random as they claim. In particular, the operators of RANDOM.ORG could trivially substitute deterministic output (generated by, e.g., AES-encrypting successive integers with a secret key), and this would be completely undetectable, even by statistical tests.
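A sketch of why it's undetectable (using SHA-256 of a keyed counter as a stand-in for the AES-CTR construction above; the key name is made up):

```python
import hashlib

OPERATOR_SECRET = b"known only to the site operator"  # hypothetical

def fake_random_stream(nblocks: int) -> bytes:
    """Deterministic output that still passes statistical test suites:
    hash a secret key plus successive counters (standing in for AES-CTR)."""
    out = bytearray()
    for counter in range(nblocks):
        out += hashlib.sha256(OPERATOR_SECRET + counter.to_bytes(8, "big")).digest()
    return bytes(out)

# Looks perfectly random to every consumer, yet the operator can
# reproduce, and therefore predict, every byte.
print(fake_random_stream(2).hex())
```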
IMO the site needs a big, scary disclaimer on the front page that describes what applications it is appropriate for, and which ones should use a more secure source of entropy.
No. The idea is that you use their service as opposed to running something on your own machine, which eliminates you as a nefarious source of hanky-panky. Think about running their list randomizer to pick one or more names from, say, a list of raffle entrants. The "trust" isn't that the result is going to be cryptographically random or anything like that, it's just an external service you can't monkey with, which avoids accusations of cheating.
How does one take a biased noise source (like atmospheric noise with a certain spectral density) and convert it to a stream of uncorrelated random bits? I've never understood this.
Simpler problem: how do you take a biased coin and convert it into a perfect source of random bits?
You flip the coin twice. If it comes up differently, you pick the first one. Otherwise you repeat. That will produce 50/50 random bits, no matter what the probability of the coin is.
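That's the classic von Neumann extractor; a quick sketch (the 0.7 bias is arbitrary, and the flips are assumed independent):

```python
import random

def biased_flip(p_heads: float = 0.7) -> int:
    """A biased coin: returns 1 (heads) with probability p_heads."""
    return 1 if random.random() < p_heads else 0

def fair_bit() -> int:
    """Von Neumann extractor: flip twice, keep the first flip only if the two differ."""
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a   # P(heads, tails) == P(tails, heads), so this is exactly 50/50

bits = [fair_bit() for _ in range(10_000)]
print(sum(bits) / len(bits))   # ~0.5 regardless of the coin's bias
```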
I always had it in my head that the way to do this with a random noise source was similar to what is done with radiation. Because decay events occur at random times, you can measure the time differences between three (four?) successive events. If the time between A and B is greater than the time between B and C, the random bit is a one; otherwise it's a zero. I think it works with just three events, but I'm not sure.
You could do the same with audio noise by looking for peaks above a certain value and using a similar time function.
I don’t know if that is really how it is done, so I might be misremembering.
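For what it's worth, here's a toy simulation of that scheme, using exponentially distributed inter-arrival times as a stand-in for a decay source; I can't vouch that real hardware does exactly this:

```python
import random

def decay_gap(rate: float = 1.0) -> float:
    """Simulated wait between detector events (exponential, like a Poisson process)."""
    return random.expovariate(rate)

def timing_bit() -> int:
    """Three events A, B, C: emit 1 if the A-B gap exceeds the B-C gap, else 0."""
    return 1 if decay_gap() > decay_gap() else 0

bits = [timing_bit() for _ in range(10_000)]
print(sum(bits) / len(bits))   # close to 0.5 when the gaps are independent
```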
If you needed to simulate a single fair coin toss using a biased but at least somewhat random coin you might toss it twenty times, feed the sequence of results ("HTHTT...") into sha256sum, and take the first bit of the output.
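Roughly like this, assuming the twenty tosses carry at least one bit of real entropy between them (the 70% bias is just for illustration):

```python
import hashlib
import random

# Twenty tosses of a somewhat-random biased coin (70% heads, illustrative).
tosses = "".join("H" if random.random() < 0.7 else "T" for _ in range(20))

digest = hashlib.sha256(tosses.encode()).digest()
fair_bit = digest[0] >> 7          # the first bit of the hash output
print(tosses, "->", fair_bit)
```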
Take the biased source and run it through a cryptographic hash. As long as the input contains more bits of entropy than the length of the output, the result will be essentially 100% random.
Cool, I think this explains it. What cryptographic hashes are available that take a limited stream of bits over time and generate an arbitrary number of bits when requested?
Real CSPRNG designs are a bit more complex, having not just a state that absorbs input (like a hash function) but also an output mechanism that lets them generate more bits.
One way to do that is to “compress” all your input entropy into a single state, and then use that as the key for a stream cipher, like AES-CTR or Salsa20.
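A minimal sketch of that compress-then-expand shape, substituting SHA-256 of a counter for the AES-CTR/Salsa20 keystream (a real DRBG adds reseeding, backtracking resistance, and so on):

```python
import hashlib

class TinyDRBG:
    """Toy CSPRNG: absorb entropy into a fixed-size state, then expand on demand."""

    def __init__(self) -> None:
        self._absorber = hashlib.sha256()
        self._counter = 0

    def absorb(self, entropy: bytes) -> None:
        """'Compress' more input entropy into the internal state."""
        self._absorber.update(entropy)

    def generate(self, nbytes: int) -> bytes:
        key = self._absorber.digest()   # the compressed state acts as the key
        out = bytearray()
        while len(out) < nbytes:
            out += hashlib.sha256(key + self._counter.to_bytes(8, "big")).digest()
            self._counter += 1
        return bytes(out[:nbytes])

rng = TinyDRBG()
rng.absorb(b"noisy samples from the physical source...")
print(rng.generate(48).hex())
```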
Hashes always generate a fixed-size output (that's the definition of a hash). So if you have a hash with an n-bit output, you run it on an input of m>=n bits to get n random bits, then repeat that process as necessary. You choose m so that it contains >= n bits of entropy. When in doubt, overestimate m.
You can also "seed" subsequent rounds of the hash with the output from the previous round. That helps protect against certain kinds of failures. It's not really necessary, but it's not hard to do either, so you might as well do it.
If the actual distribution of the noise one is sampling from is known, apply the noise's CDF to each sample to obtain a uniformly distributed random variable (the probability integral transform). That's exactly what the biased coin example does.
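A sketch assuming the noise is Gaussian with known parameters (the samples here are simulated):

```python
import math
import random

def normal_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """CDF of the (assumed known) Gaussian noise distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Probability integral transform: if X ~ F, then F(X) is uniform on [0, 1].
samples = [random.gauss(0.0, 1.0) for _ in range(5)]   # stand-in for measured noise
uniforms = [normal_cdf(x) for x in samples]
bits = [1 if u >= 0.5 else 0 for u in uniforms]        # e.g. threshold to get raw bits
print(uniforms, bits)
```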
I imagine this approach would be insecure because an attacker could easily change the distribution from miles away.
And I don't think it solves the problem of radio noise being time-dependent random variables.
Here's a pretty good description of how one company does it in their hardware random number generator [1]. Generally along the same lines people have already described, with the addition of testing throughout the process so that you can have some confidence that it is all working.
Other people have mentioned how it's done in practice but there have been recent advances on this on the theoretical front as well. Here's one I know of:
https://sockpuppet.org/blog/2014/02/25/safely-generate-rando...