There is virtually no chance that none of the root CAs is compromised, influenced, or at least influenceable by a criminal organisation or group.
I also doubt that every governmental organisation with the legal power to influence CAs has full control over all of its employees, or that those employees are invulnerable to criminals.
I also doubt that the governmental organisations of the usual suspect countries engaged in state-funded industrial espionage are unable to influence CAs in order to target information in other countries.
Actual lawful interception is of course possible - but arguably a good thing, so it can be ignored. Actual political espionage can also be ignored: it is unlikely to happen to ordinary people and companies, and nearly impossible to avoid without considerable resources.
But the first three scenarios are probable, can cause serious economic harm, and can't be fixed without replacing the whole current infrastructure.
I think the point the parent was making is that it is likely that one or more of the most prevalent certificate authorities' private keys have been compromised - whether voluntarily provided to government, or obtained by criminals via dubious methods.
Once the private keys are compromised, all bets are off. No database - whether maintained by Google or anyone else - puts a dent in such a problem. It's one thing to track inauthentic certificates; it's quite another to discover someone silently decrypting traffic with a copy of the private key.
Edit: My response brings up an interesting question I hadn't considered before. With the certificate chain, is it required to have the private keys for the entire chain in order to be able to decrypt the stream? If someone has the certificate authority's top-level private key, can they decrypt a domain's traffic without having that domain's private key or the intermediates' keys? If not, then it'd be true that the primary concern with a top-level private key being compromised is illegitimate certificates being signed, in which case Google's attempt to combat that with Certificate Transparency isn't half bad.
Your edit is correct - it is the public certificate that is signed, so a cert authority never sees the private key and hence cannot decrypt downstream traffic. The problem is issuing fake certificates.
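To make that concrete, here's a toy "textbook RSA" sketch of what a CA actually does: it signs (a hash of) the site's public key. The numbers are tiny and insecure, purely for illustration - the point is that the site's private key never appears anywhere in the signing step.

```python
# Toy textbook-RSA illustration: the CA signs a hash of the site's PUBLIC
# key, so the CA never needs (or sees) the site's private key.
# Tiny insecure parameters for illustration only - not real crypto.

# CA key pair: n = 61 * 53, public exponent e, private exponent d
ca_n, ca_e, ca_d = 3233, 17, 2753

site_public_key = b"site-public-key-bytes"   # what goes in the certificate
h = sum(site_public_key) % ca_n              # stand-in for a real hash

signature = pow(h, ca_d, ca_n)               # CA signs with its PRIVATE exponent
assert pow(signature, ca_e, ca_n) == h       # anyone verifies with the CA's PUBLIC key
```

A stolen CA key therefore lets an attacker mint signatures over arbitrary public keys (fake certs), but gives them nothing with which to decrypt traffic protected by the site's own private key.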
As far as I know, you can't trivially decrypt traffic as a passive observer, even if you have the private key of the website or the CA. The sides still perform key exchange using something like Diffie-Hellman. The asymmetric keys are used to verify that you're performing the key exchange with the real server and not with a MITM.
If you have the CA's private key, you must sign your own fake certificate and MITM the connection with it. In this case, as you pointed out, Certificate Transparency does help, because the fake certificate won't be present in the certificate logs.
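The ephemeral key exchange above can be sketched with toy finite-field Diffie-Hellman. The group parameters here are illustrative (a Mersenne prime, not a real TLS group), but they show why a passive wiretapper gains nothing from a stolen certificate key: the session key depends only on per-connection secrets that never cross the wire.

```python
# Toy finite-field Diffie-Hellman: why a PASSIVE observer can't decrypt,
# even holding the server's long-term (certificate) private key.
# Toy group for illustration only - real TLS uses standardized groups/curves.
import secrets

p = 2**127 - 1   # a Mersenne prime
g = 3

# Each side picks a fresh ephemeral secret for this one connection.
a = secrets.randbelow(p - 2) + 1   # client ephemeral (never sent)
b = secrets.randbelow(p - 2) + 1   # server ephemeral (never sent)

A = pow(g, a, p)   # goes over the wire
B = pow(g, b, p)   # goes over the wire; in TLS, signed with the cert key

client_key = pow(B, a, p)
server_key = pow(A, b, p)
assert client_key == server_key    # both sides derive the same session key

# A wiretapper sees only p, g, A, B. Without a or b (i.e. without solving
# discrete log), the session key is out of reach. The certificate's private
# key merely SIGNS B - so a stolen CA or server key enables an *active*
# MITM with a forged identity, not passive decryption of recorded traffic.
```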
The 'all or nothing' take on security is a very childish or naive point of view.
No security system is invulnerable, and even the slightest inconvenience will prevent a certain percent of behavior.
In light of those two facts, you must speak about specific threats and attacks if you want to say anything about security.
For example: Compromised root CAs have not been found to be used by criminal gangs in bank phishing schemes. Therefore, they seem to provide about the right level of security for that threat.
Compromised CAs have been strongly implicated in government attacks, so they don't provide an adequate level of security in that context.
This doesn't mean they're beyond repair. They work for certain tasks in certain contexts, like your front door, or a safe, or a password, or a security guard.
> The 'all or nothing' take on security is a very childish or naive point of view.
I disagree; indeed, I think the idea that cryptography and security can be flawed-but-useful is itself naïve. If something is breakable, it will be broken; that's why we need to be using the best possible security protocols and standards.
> For example: Compromised root CAs have not been found to be used by criminal gangs in bank phishing schemes.
They've not been found, which means neither that they've never existed nor that they never will exist.
Weaknesses in a system are vulnerabilities. Exploits are the methods used to attack those vulnerabilities.
A flawed TLS implementation might only be exploitable by nation states, or only be practical against high value targets. Or perhaps it's known humans are corruptible, but a transparent, auditable root CA is still better than name-your-Chinese-state-CA.
Security is a spectrum. It is not all or nothing. Sure, crypto algorithms should strive for perfect security, but a system with humans will be fallible and must be usable.
Whether it will be broken during the duration of your use by an attacker that you can realistically thwart is the question you must be asking, and that has different answers.
The ability to issue a malicious certificate gives you the ability to instantly MITM any HTTPS connection.
So it's about as bad as it gets. Worse, actually, because the user is under the impression there is at least some security on the connection, which changes their behavior - for example, most users would not enter financial details on a non-HTTPS connection.
Does anyone know if Windows 10 keeps a log of which root authorities were used by different domains?
I wasn't talking about this particular concern (which I agree with as being severe). Mine is with the mindset behind "perfect security" leading to worse security, that's all.
Again, the root problem, which is universal, is the concentration of power. One company or group or place holds so much power over everything in the world, and regulating them has always been a central problem of human society.
But alas, we always need the concentration so badly for the sake of efficiency.
Microsoft has gotten really bad about describing what it is doing and planning on doing with the Windows 10 generation. They've been called out on it before, but I haven't seen much from them in a formal way over the last year. Individual teams in the company may do an excellent job, but overall the bar is low.
Were they really any better prior to Windows 10, though? I remember trying to read the Windows 7 update notes, but they always seemed very... light on material.
There at least existed KB articles with patch details ahead of a patch being dropped. I've routinely tried to dig up a newly deployed Windows bundle at home on a machine not running the Insider Program and hit 404s pulling up the KB number.
I've been getting a recommended driver update for the Broadcom Bluetooth driver used by a little ASUS USB Bluetooth dongle I use (a very common BT chipset), yet I cannot find any information about driver updates on ASUS's or Broadcom's websites. And the "More information" link in the Windows Update dialog just points to a page that only says "Driver Information: Coming Soon <br> Thank you for using Windows Update. The More Information feature is not available yet. We apologize blah blah yadda yadda" (last sentence paraphrased). Why the hell would I install a driver about which I can find no information? That's nuts. This isn't the first time I've run into this situation with Windows Update and driver "update" recommendations, either.
For the Bluetooth driver specifically, you may be able to pull it down via the Windows Update Catalog and inspect it that way. It may just be an OEM version of Bluesoleil or similar.
I recently got an update for my Microsoft LifeCam Studio. There is zilch to find online about the update. No KB article, the LifeCam Studio website still hasn't heard anything about a windows version after 7, nothing.
In a surprising development today, Microsoft was accused of moving too fast and bre^H^H^H failing to document things. HN commenters expressed nostalgia for the MS patch days of yore.
"At least we had MSFT" was the sentiment before Win10. For many, they were one of the few remaining companies that weren't "moving too fast".
It was a great time, when people could actually use more of their time on computers for doing other things besides chasing the update/upgrade treadmill.
It still takes them a long time to make patches; there's plenty of time to write notes. Releasing on a different schedule does not make it harder to document. You're implying it's a tradeoff, and it's not. They just dropped the ball.
IMO it's a security problem. People run diffs on the updates and use the detailed bug-fix notes to uncover exploits faster than most can patch. It could also be MS ending the era of admin-approved updates. Google started it, and now MS competes with Google and even Apple in the school market, so it makes sense in a way.
It's clear they are boiling the frog, prepping for Windows as a service. And on a SaaS model you have to take that control away to keep costs down. Obfuscating the full extent of the changes is just part of "making sure you have the latest and greatest" - and why many apps, for example, have stopped including actual release notes in app stores.
Scary release notes and the opportunity for users to say no are bad for business.
To continue to trade access to corporate and private computers the world over for political and economic advantage to the intelligence and treasury communities.
Honestly I don't know whether this particular CA addition is a good example of the sorts of backdoors that Microsoft is known to insert on behalf of intelligence and police organizations. I think the parent was perhaps being a bit flippant?
Why do you trust Debian? (I say this as a happy Debian user/sysadmin/occasional package maintainer).
In particular, it's a volunteer-run organization in which it's not at all unusual to volunteer to maintain a package because your day job uses it, and where a large amount of discretion is given to the individual package maintainer until they choose to hand maintenance to someone else. This is perfect for an organization that wants to push security configuration weaknesses. Even if you can't get a back door in, you can certainly default to code in which you have privately found vulnerabilities, compile with or without certain options, add third-party patches that are pretty questionable, or add CAs that are particularly easy to coerce. None of these actions looks weird at all; they just look like someone putting work into their package and caring about doing Debian-specific work to make it work well. In particular, until relatively recently, the Debian ca-certificates package included the CAcert root cert, which was in very few other root stores, and the SPI one, which was in no other stores.
It's also the case that Debian accepts binary packages built on the developer's personal machine (and this used to be required until very recently), so it's very easy to straight-up upload a backdoor that isn't in the source. (This might have changed recently, but I believe this was true at least as recently as the last stable release.)
I trust people doing it largely for themselves and the community reputation _more_ than I trust people who are expected to deliver more returns every year in a stagnating market.
But you don't know if someone is doing it for themselves and community reputation, or if they're a fake persona created by someone who wants to break into servers. All it takes is one stereotypically-stubborn open source maintainer who gets grumpy about switching old reliable cryptographic defaults for kids-these-days defaults - which is a thing that real-world stubborn open source maintainers, on whom the stereotypes are based, do: https://sourceware.org/bugzilla/show_bug.cgi?id=13286
Do you know if the version of OpenSSL in your Debian has any patches to its cipher suite selection algorithm, compared to upstream? (Genuine question; I haven't checked.) If it did, and you saw someone being grumpy on a Debian bug and refusing to remove the patch, would you suspect that they were actually evil? Or just grumpy?
Remember, also, that Debian is the distro that patched their OpenSSL to ludicrously weaken the random-number generator, and the Snowden leaks confirmed that the NSA backdoored a random-number algorithm. I am not at all saying that the NSA was behind the patch (it looked genuinely like an oversight), but if the NSA wanted to be behind a similar patch, no one would think it abnormal.
That's totally fair, but I hope that each of those organizations has a vested interest in exposing the others; at the very least it's in the NSA's charter to protect American businesses against attacks - though I have no idea if they feel this is an effective way. So yes, it's risky.
But Microsoft has all those disadvantages too: even though it's harder to get moles inside, it's easier for their stuff to go undetected (and in the case of the NSA it might even be done with full cooperation). Plus, their market share makes them a bigger target.
Probable hypotheticals aside, what we know for certain is that Microsoft is attempting to monetize their new Windows on the back of users' data.
While I mentioned the NSA, really the bigger threat is a guy (or hacker group) who wants to pull off a million dollar heist. The NSA can get into (practically) anything and everything.
To get a job at MS, you have to have a real life reputation. Once you get in, there will be others analyzing your code, and your bug may not make it to release.
To insert a bug into Debian, become a packager and you're done: access to one of the most popular server OSs on the web (Debian, Ubuntu) - and the important stuff lives on servers.
You're busted? Create another account and start over.
If the NSA were trying to protect American businesses against attacks, they would responsibly disclose vulnerabilities they discover. But for the most part they hoard them.
Debian had some fairly dubious CAs included for almost 10 years, so they're not without fault. Looks like they've cleaned up - and at least their processes are fairly transparent.
Another nice thing about Debian's certificate package is that debconf will prompt you to accept each new certificate if you have debconf set to show low priority questions. I have not seen anything similar in other distributions or OSX.
Most notably for me, the list doesn't show any of the foreign roots, Agencia Notarial de Certificación, Autoridad Certificadora Raíz Nacional de Uruguay, etc. Are they kept anywhere else?
EDIT: According to the author, Manage Computer Certificates will only show you that a trusted root exists after you've used it. Trusted roots that you've never used are invisible: http://hexatomium.github.io/2015/08/29/why-is-windows/
The test he provides still works: "OpenTrust Root CA G3" won't show up in Manage Computer Certificates until you visit https://www.opentrust.com/
So, Manage Computer Certificates is useless for untrusting roots Microsoft trusts for you.
One additional note: most applications on Windows follow this OS-level list of trusted certificates by proxy. Notable exceptions are Java and Firefox, which maintain their own CA repositories in their installations.
This is the reason I hate the JVM from a sysadmin/devops perspective. It tries to manage things that should be left to the OS (CAs, fonts, time, etc.). It's not too big a problem for end-user applications with a package-manager-maintained JVM, but if you want to use the JVM for any kind of daemon (application server, database, messaging system), your sysadmin has to become a jvmadmin too.
It would not be such a bad thing, if you really wanted per-application management, were there a "fall back to OS" option. But then again, moderately sized applications have been run in their own virtual machines for quite some time, and these days containers are a pretty prevalent solution for self-containing even small applications.
On OS X, open the app Keychain Access and look at the System Roots keychain -- you can see all the root certs there.
For Windows: Have never done this myself, but the link mentions a tool called RCC that lists root certs and highlights potentially suspect ones I guess? No clue if this is legit, use at own risk: https://www.wilderssecurity.com/threads/rcc-check-your-syste...
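If you'd rather not trust a third-party tool, a rough cross-platform starting point is Python's `ssl` module, which can report the roots it loads from the platform trust store. This is only a sketch: on Windows it won't show roots that the OS fetches on demand (the behavior discussed above), and the Windows-only `ssl.enum_certificates("ROOT")` can be used to enumerate the OS "ROOT" store directly.

```python
# Sketch: list the root CAs Python's ssl module trusts on this machine.
# On Linux/macOS this reads the system bundle; on Windows it reads the
# OS store as currently populated (on-demand roots won't appear).
import ssl

ctx = ssl.create_default_context()   # loads the platform's default trust store
roots = ctx.get_ca_certs()           # list of dicts describing loaded CA certs
print(f"{len(roots)} trusted roots loaded")

for cert in roots[:5]:               # show a few subjects as a sanity check
    subject = dict(field[0] for field in cert["subject"])
    print(subject.get("commonName", subject))
```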
Well, I didn't say that it's good or bad, I just said that they are "familiar"?
Not sure why they even added more CAs since they already have some, a few with way more than 8 years left. They didn't have ECC certificates, though.