There seems to be one consistent moral in all these kinds of stories: if you want security, do not buy what your government buys.
This applies to everything. When you need something, look at what people who actually need it are buying, not what people who just want to cover their ass are doing.
"For the Minuteman ICBM force, the US Air Force's Strategic Air Command worried that in times of need the codes would not be available, so they quietly decided to set them to 00000000; checking this combination was even present on the launch checklists. This was not changed until 1977.[7]"
Not so true. When the government REALLY wants to secure something, like, say, nukes, then no matter what the technical security measures are, there are always guys with guns.
I wonder, though, if that's their reasoning, or if they'd rather have another complicated layer in place instead. It might just be mimicry - people doing illegal things have hired guns, and it seems to work reasonably well for them - without actually understanding why it works when expensive techniques (which they don't fully comprehend) fail.
I think that it has less to do with why the technical obstacles can be overcome and more to do with the fact that they can be overcome. Unless a technical obstacle can be 100% secure, having an additional layer of security in the form of armed gunmen is useful.
The layer of armed gunmen is obviously not 100% reliable either, but it requires an entirely separate domain of skills/knowledge/resources to overcome than technical obstacles do.
I've served in the military. We had an asset we needed to secure - not nukes, but fairly important. We did the risk analysis and wound up with this layered approach: big thick blast-, TEMPEST-, and EMP-resistant door, retinal scan identification system, and an armed guard (enlisted, not contracted) 24/7. There was other stuff too. I don't remember it all - it was 1996 fer cryin out loud.
Complicated systems fail in unpredictable ways, and we understood that. We absolutely did _not_ want to depend on technical means only.
Maybe some organizations behave the way you suggest, but IMHO it is far more rare than you think.
Some people, when confronted with a problem, think
“I know, I'll use regular expressions.” Now they have two problems.
In the case of these locks, the problem with increased complexity is that now you have to manage that complexity and still deal with low-tech physical intrusion methods.
I like how they say they haven't even disclosed the worst of the flaws, because they want the company to have a chance to fix this stuff.
And how the company is trying to tell us that these flaws can only be exploited in the lab. I can just imagine the security bulletins banning rubber mallets from the facility.
It's kind of frightening to imagine what exploits these locks have that are quieter and less detectable than sticking them with a paperclip.
It's also funny to imagine the company's security experts who just can't seem to reproduce the stick-it-with-a-paperclip trick outside of a laboratory environment. I'm surprised guys that sharp let the hammer trick slip through!
It does not have to be less detectable to be worse.
Imagine if there were a way to remotely disable all such locks in a building, keeping them locked, or to remotely make them burst into flames (or both).
A company I worked for once installed a super secure magnetic locking system on a server room door. One day I tripped and fell, knocked into the door and it popped right open. Must have been a pretty weak magnet.
The article says that certain other techniques weren't demonstrated because they were "too sensitive to show to the Defcon audience before giving Kaba a chance to fix the problems." What is worse than a whack on the top opening it?
I'd assume ways to fake the access logs. It's bad to allow unauthorized access, but it's really bad to allow unauthorized access that appears to be authorized (a great vector for framing people).
What do the logs look like when the door is opened with a mallet though? If the access isn't logged because there was no card swipe, then the last person to access the door could get blamed/framed.
Ways to bypass the lock and leave no evidence that the lock was bypassed are much worse.
When an amateur picks a lock it is very easy to inspect the pins and see if they have been manipulated by something other than the key. I imagine that the rubber mallet technique leaves evidence of malicious manipulation behind.
If I were going to bootstrap a lock company, I'd start by presenting my designs at hacker conferences and offering bounties for exploits, just like an open-source project.
Manufacturers really should embrace this kind of testing.
It's just possible this is a title which deserves to be editorialized to "Defcon Lockpickers Open Card-And-Code Government Locks In Seconds With a Hammer." Edit: Make that "With a Rubber Mallet."
There were three different security exploits. The first was rapping with a mallet to compress the springs and release the pins (similar concept to bumping).
But don't forget:
"In another bypass, they insert a wire into a silicon cover for an LED light that blinks red when the user enters an invalid code. That wire can ground a contact on the circuit board behind the light that triggers a function intended to allow the door to be opened with a remote button, bypassing all its security measures."
and
"A third attack allows an insider to open the back side of the lock and insert a wire that flips a microswitch intended as an override for power failures. That trick resets the lock’s software, tampering with its audit trail and allowing it to be reprogrammed with different codes. Bluzmanis demonstrated in a video that the more elaborate microswitch attack could be performed in under a minute."
Yeah, that's what I'm saying— all these exploits are absurd. The security guys are experts, of course, but I just think implying this is "lockpicking" is giving the locks a little too much credit.
Well, it's not 'lockpicking' in the traditional sense, but they still demonstrated that you can open it just by stabbing a bit of wire into the light. It's absurd in the sense that it actually works.
The sample is mounted on a small demo cutout of a door, which has a lot of give/spring. I would like to see the rapping flaw demo'd on a lock mounted on a full-size door, which wouldn't have the same resonant springiness to help drop the lock.
>"He argues that Kaba’s locks claim only to be “access control devices, not high security locks,” and says less than 500 have been sold to government customers."
Haha, he justified the vulnerabilities by stating that few have been sold.
It would be interesting to know where the ~500 locks have been deployed, or, rather, what has supposedly been protected with them.
I think these guys are taking the best possible approach to working with Kaba on these vulns, but the typical security PR from Kaba is as laughable as HBGary's.
Cool, but I don't understand $1,300 locks. If somebody can breach a perimeter and actually physically get to the door, I think maybe you should allocate resources elsewhere.