Reminder that open source projects are not provably more secure, nor is it easy (or even possible in many cases) to assert the source you see made the binary in question.
Yubikey has been around a long time and has made every effort to be a transparent company with support for open source. Truth is, that is sometimes hard to do.
I've seen that article and it's a heap of crap. There's no reason they couldn't make the firmware read-only so you could verify it, then publish the source to audit and verify against.
>Reminder that open source projects are not provably more secure, nor is it easy (or even possible in many cases) to assert the source you see made the binary in question.
I can (and do) read the code for security-related software, and I can at least check for obvious backdoors and flaws myself. With reproducible builds it is possible to assert the source you see made the binary in question (and security-related software must support reproducible builds for this reason).
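Concretely, verifying a reproducible build is a bit-for-bit comparison. A minimal sketch in Python, assuming the vendor publishes the firmware image and you've rebuilt it yourself from the published source with the pinned toolchain (file names here are made up):

    import hashlib

    def sha256(path):
        # Hash in chunks so large firmware images don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical file names: the binary the vendor ships vs. the one you
    # built yourself from the audited source.
    published = sha256("vendor-firmware.bin")
    rebuilt = sha256("firmware-built-from-source.bin")

    print("match" if published == rebuilt
          else "MISMATCH: published binary was not built from this source")

If the digests match, the published binary is the source you audited; if they don't, the vendor has some explaining to do.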
If you want to convince yourself the product is secure, that's up to you, but it's not.
You obviously didn't read the article. There is no way for you to actually do that. And the secure platforms themselves have NDAs around their specs and software tooling.
So yeah, there is a reason they didn't do that. The hardware they're using specifically makes it difficult to do the verification you want to do. Which is directly related to foiling the kind of attacks they want to foil.
> If you want to convince yourself the product is secure, that's up to you, but it's not
I think we have the same goal, but you have a conviction that open source stops "obvious back doors." It would not help at all in this case. The hardware is configured before it is shipped, then locked in a way designed to prevent rewriting or inspection. You have no rational basis for the belief that the source code on a website and the binary a malicious and deceptive actor would deploy to the hardware are the same thing.
Being open source only affects the way security auditing can be done. It doesn't guarantee better quality.
I have read the article, several times, thank you very much. Don't take the easy way out by dismissing the opposition as ignorant.
>So yeah, there is a reason they didn't do that. The hardware they're using specifically makes it difficult to do the verification you want to do. Which is directly related to foiling the kind of attacks they want to foil.
Then they've chosen the wrong hardware. This doesn't make it more secure, it just explains why their product is insecure.
>I think we have the same goal, but you have a conviction that open source stops "obvious back doors." It would not help at all in this case. The hardware is configured before it is shipped, then locked in a way designed to prevent rewriting or inspection. You have no rational basis for the belief that the source code on a website and the binary a malicious and deceptive actor would deploy to the hardware are the same thing.
I already addressed this - reproducible builds. I don't have to take anyone's word for it.
> Then they've chosen the wrong hardware. This doesn't make it more secure, it just explains why their product is insecure.
If the hardware is more resistant to hardware and software attacks, it seems odd to then deem it less secure just because you don't get source code that isn't guaranteed to correspond to a given binary.
> reproducible builds
There's so much literature on how this methodology fails, some of it quite famous. There is no assurance that your device conforms to the build you can reproduce, unless you can arbitrarily inspect the state of the entire device at each step. Being able to do that would defeat the purpose of these devices.
>If the hardware is more resistant to hardware and software attacks, it seems odd to then deem it less secure just because you don't get source code that isn't guaranteed to correspond to a given binary.
It may be, but there's no guarantee it behaves the way it claims to. There's no guarantee it's not backdoored. There are powerful actors involved in these areas.
>There's so much literature on how this methodology fails, some of it quite famous. There is no assurance that your device conforms to the build you can reproduce, unless you can arbitrarily inspect the state of the entire device at each step. Being able to do that would defeat the purpose of these devices.
> It may be, but there's no guarantee it behaves the way it claims to. There's no guarantee it's not backdoored. There are powerful actors involved in these areas.
It renders your point about source code moot though, doesn't it. Security is ultimately the art of trust propagation.
Trust chains are their weakest link, and people often put a lot of trust in compilers without really asking what it is doing. Not unlike crypto, we're told not to roll our own.
People have proposed ways around this, but they're not very good (http://imgur.com/a/BWbnU#0). The moral of the story is that at some point, you extend trust to someone. Security is never absolute.
>It renders your point about source code moot though, doesn't it. Security is ultimately the art of trust propagation.
I don't see how that follows. If I can audit the source code and confirm that the same code is running on the device, the weak link is reduced to my ability to audit it (combined with everyone else who's auditing it as well and might publish their findings).
>The most famous discourse here is the "untrustworthy compiler problem."
I thought this might be what you're talking about, but this is ridiculous. Do you really think that the Yubikey folks have backdoored my copy of gcc? Dude.
> the weak link is reduced to my ability to audit it (combined with everyone else who's auditing it as well and might publish their findings).
And if the hardware itself has microcode that overrides your code?
> but this is ridiculous. Do you really think that the Yubikey folks have backdoored my copy of gcc?
Actually, I think the first and foremost threat would be, "Could someone insert a yubikey into a malicious device that changed its behavior such that it now leaks information and does not provide actual security."
Because those kinds of attacks actually exist. Ultimately, what you're arguing for is the pleasure and moral superiority of being able to do that audit. Not only does that audit not give you many guarantees, but giving you the ability to do that audit opens you up to much more sinister attacks.
>And if the hardware itself has microcode that overrides your code?
Hard to defend against this, but it can be helped by using well understood architectures and letting us confirm that the microcode being run is the same microcode that the upstream CPU vendors are publishing.
>Actually, I think the first and foremost threat would be, "Could someone insert a yubikey into a malicious device that changed its behavior such that it now leaks information and does not provide actual security."
I'm not going to keep entertaining this discussion if you keep disregarding everything I've already said. I've already said I'm only asking for read-only access. In any case, defending against physical compromise is close to impossible anyway.
You are taking what you personally consider to be a guarantee and applying it to mean what everyone else considers to be a guarantee.
Unless you physically inspect each and every device (since they could easily run multiple lots) your faith is in the fabrication of all of the ICs used in the design. Not to mention that the computer you stick these into suffers from the same problem. On the theoretical side, all of the math which this is based upon is probably taken by most people on a faith basis. There is a lot of faith to go around. It just depends where you draw the line.
Well, with that logic, open-source security-related products are a complete joke, too.
- The microprocessor can look at the binary, recognize the patterns ("oh, this is OpenSSH trying to generate a key... let's give them an easily breakable one") and do whatever it wants with it.
Remediation: build your own compilers, build your own processors from your own schematics, in your own foundry (cost: $billions), built by yourself.
Just use a TOTP app, at the moment. Note that the lack of U2F alternatives means you shouldn't use U2F at all - not that you should settle for an insecure device.
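For what it's worth, a TOTP app isn't doing anything exotic - it's HMAC over a time counter (RFC 6238). A minimal sketch in Python, using a made-up base32 secret of the kind a provisioning QR code encodes:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        # RFC 6238: HMAC-SHA1 over the count of 30-second periods since the epoch.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # made-up demo secret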
There are U2F alternatives, several of which are mentioned in this thread. Also, U2F is immune to phishing while TOTP isn't. Your advice is actively harmful.
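The phishing immunity comes from origin binding: the browser, not the user, folds the origin it actually connected to into what the key signs. A conceptual sketch (real U2F uses per-site public-key signatures and a registration step; the symmetric HMAC here is just a stand-in to show the binding):

    import hashlib, hmac, secrets

    device_key = secrets.token_bytes(32)  # stand-in for the key's per-site secret

    def device_sign(challenge, origin):
        # The browser supplies the origin, so a user on a lookalike domain
        # can't be tricked into producing a response valid for the real site.
        msg = hashlib.sha256(origin.encode() + challenge).digest()
        return hmac.new(device_key, msg, hashlib.sha256).digest()

    challenge = secrets.token_bytes(16)  # issued by https://example.com
    phished = device_sign(challenge, "https://examp1e.com")  # lookalike domain
    genuine = device_sign(challenge, "https://example.com")
    print(hmac.compare_digest(phished, genuine))  # False: phished response is useless

A TOTP code, by contrast, is six digits that work wherever they're typed, including into the phisher's page.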
The available data suggests there are no groups of people who are good at not being phished.
The audience here is unlikely to send a check to the Nigerian prince looking to smuggle his money to America, but if you're arguing that we shouldn't trust yubikeys against APT backdoors, we're talking about a much higher quality of phishing.
I'll take my odds with Yubikey's firmware rather than try to vet every site I enter a TOTP code into.
You should be vetting those sites anyway, especially since you probably were also asked for a password. And it's not exactly hard - just glance up at the address bar.
A good phish relies on triggering instinctive behaviour, e.g. scaring the crap out of you and not following best practices because you're having an adrenaline rush. That's how careful people get hit. SwiftOnSecurity sometimes posts really well done phishing attempts: https://twitter.com/search/live?q=phish+from%3Aswiftonsecuri...