Why not keep the keys in a TPM and pull them into "real" registers whenever the kernel context switches into a specially flagged AES decode thread, and zero them when context switching away?
If you can't trust the TPM, can you trust Intel's debug registers to be secure?
Long term, I suspect this kind of thing will use Intel's Software Guard Extensions (SGX), which creates a trusted enclave of code and data that neither the kernel nor the hypervisor can access.
The second article is quite alarming. Don't run any code in an enclave which you didn't compile yourself! One may not even be able to use a virtual machine to peer inside an enclave by emulating SGX: the software could demand a valid public key stored uniquely in every Intel chip and signed by Intel's private key, which a hypervisor would not have.
TRESOR still leaves all the code and data exposed in memory. You can modify the code that's being executed in order to divulge the contents of the debug registers.
We're working on solving the malicious device and cold boot problem at PrivateCore. To do so, we're encrypting all of main memory and keeping plaintext state in the L3 cache.
Even with cooling they weren't able to recover the keys. A warm reboot, however, would still work. I'm speculating, but some laptops might have a way to force a reboot, the way some machines have a reset button. That would at least restart the system without cutting power, although a proper BIOS configuration should prevent that from being useful.
Hi. PrivateCore has implemented encrypted RAM as part of a secure hypervisor. Our product is currently in a private beta, but you can check out our website at: http://www.privatecore.com.
OpenBSD does this by default. It also now boots cryptodisks directly, eliminating the need to create a /boot partition and carry it around if you're concerned about evil maid attacks, though I'd imagine a camera or a hardware keylogger on the keyboard would defeat that pretty easily.
If you stream the data in from disk (e.g. read 1kB, encrypt 1kB, read 1kB, encrypt 1kB) then a cold boot attack only recovers one small chunk of data. Whereas if your keys are in RAM then a cold boot attack can recover the keys which can then be used to decrypt your entire hard drive.
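A toy sketch of that streaming pattern (all names are hypothetical, and the SHA-256-counter keystream is purely illustrative, not a vetted cipher): only one chunk of plaintext is ever resident in RAM, so a memory image captures at most ~1 kB of it.

```python
import hashlib
import io

def _keystream(key: bytes, block_no: int) -> bytes:
    # Toy keystream for illustration: SHA-256(key || counter).
    # A real implementation would use a vetted cipher (e.g. AES in XTS/CTR).
    return hashlib.sha256(key + block_no.to_bytes(8, "big")).digest()

def crypt_stream(key: bytes, src, dst, chunk_size: int = 1024) -> None:
    """Read, transform, and write one chunk at a time (XOR is its own
    inverse, so the same function encrypts and decrypts). At most one
    chunk of plaintext is held in memory at once."""
    block_no = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        out = bytearray()
        for i in range(0, len(chunk), 32):
            ks = _keystream(key, block_no)
            block_no += 1
            out += bytes(a ^ b for a, b in zip(chunk[i:i + 32], ks))
        dst.write(bytes(out))
```

Of course, as the parent thread points out, this only helps if the key itself lives somewhere a RAM image can't reach; with the key in RAM the attacker just decrypts everything offline.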
Debug registers are only accessible in Ring 0. I haven't looked at the implementation described, but I'd be surprised if it isn't a kernel module or something equivalent.
Are they available to a JTAG probe? I've not looked at a modern motherboard in ages, but I wouldn't be too surprised to find unpopulated JTAG pads near the CPU.
Why not? Even if it doesn't use JTAG for debug, it almost certainly would need it for boundary scan. I just checked volume 1 of the Intel i7 processor family datasheet, and it does show a JTAG port.
I'm not very knowledgeable about x86, but why do they have to use the debug registers if they're slower? I know in GCC you can mark registers as "do not touch" when you write assembly in C. Why not use more general purpose registers?
Because the idea is to run it in kernel mode with arbitrary other software running on the computer. Asking other software to pretty please not touch these specific registers isn't going to work, because there's no way to enforce it. The debug registers are perfect because they can only be accessed from kernel mode, not userland.
Not really... I've seen a demonstration where a TrueCrypt (or something similar) encrypted laptop is stolen. The machine is locked but still powered on (a very reasonable scenario: you locked your machine and left your desk). This means the AES key is sitting in RAM.
The attacker/thief cools down the RAM with liquid nitrogen (to slow the discharge), inserts a bootable CD with a special tool to dump physical RAM on bootup, and quickly cycles the laptop battery (causing a cold boot).
Since the RAM is not zeroed on bootup, the "old" bits stay pretty much as they were (thanks to the cooling).
IIRC this allowed an attacker recovery of the key with a very high probability of success.
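To put rough numbers on that: if each bit independently decays with probability p, a 128-bit key comes out fully intact with probability (1-p)^128, which is why cooling (driving p toward zero) helps so much, and why the published attacks also use error-correcting reconstruction of the AES key schedule rather than relying on a perfect image. The p values below are illustrative round numbers, not measured decay rates:

```python
def key_survival(p_bit_decay: float, key_bits: int = 128) -> float:
    """Probability that every bit of a key survives unflipped,
    assuming independent per-bit decay with probability p."""
    return (1.0 - p_bit_decay) ** key_bits

for p in (0.001, 0.01, 0.1):
    print(f"per-bit decay {p:>5}: intact-key probability {key_survival(p):.2e}")
```

Even a 0.1% per-bit decay still leaves the whole key intact almost 9 times out of 10, while 10% decay makes a perfect image hopeless without error correction.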
While everything you say is true, it would seem paranoid in many cases. I don't know how many neighborhood electronics fences have liquid nitrogen lying around, or the computer savvy to pull this kind of thing off.
If someone is after me who is using this methodology then I have seriously pissed off the wrong people. While you could argue that it's only paranoia if the object of preparation is not possible, I would say that you're at least one leg over the fence into paranoia-land if you're prepping against a cold-boot and don't have any nation state or corporate espionage level enemies.
Corporate espionage isn't exactly rare, and it can affect any employee (usually the least important employees, since they tend to be the least careful).
One of the nice things about OSX and Filevault2 is that you can force the key to be destroyed on suspend:
    destroyfvkeyonstandby - Destroy FileVault key when going to standby
    mode. By default FileVault keys are retained even when the system
    goes to standby. If the keys are destroyed, the user will be
    prompted to enter the password when coming out of standby mode.
    (value: 1 - Destroy, 0 - Retain)
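For reference, that setting is flipped with pmset itself (on a reasonably recent OS X):

```shell
# require the FileVault key to be destroyed on standby
# (you'll be prompted for the password on wake); 0 restores the default
sudo pmset -a destroyfvkeyonstandby 1

# list current power management settings to verify
pmset -g
```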
One of the other nice things about OSX is that the feds may already have your key[0], so if you manage to get your computer back after they confiscate it, at least it won't have cracks in it from the extreme cold.
SMCs are not the problem. The problem is code in OSX could put your key there in a way that someone could dump it.
Of course, since FileVault is not open source, we have no way of knowing if it does this. Is this paranoid? Perhaps, but if you are worried about cold boot attacks you should be worried about this as well.
You might also be worried about some strange design decisions in FileVault, such as the fact that it uses public key cryptography[0] for what ought to be plain symmetric disk encryption. While not a red flag, it is a bit strange.
It doesn't actually require LN2. In fact, for many cases all you really need is a can of compressed air and a USB drive.
The idea that your full disk encryption is only safe so long as nobody manages to have such outlandishly difficult to acquire materials is rather disheartening.
Moreover, disk encryption systems aren't just designed for overly paranoid individuals who probably don't have anything more interesting on their drive than embarrassing porn, it's also targeted for people who have data they seriously want to keep from being divulged. A perfect example being a running, but locked, corporate laptop being stolen from an office building or even a public space (e.g. a coffee shop). Someone desiring to commit industrial espionage would have no difficulties whatsoever in pulling off a cold boot attack on a vulnerable system, even if liquid nitrogen was required (it's quite easy to obtain and fairly cheap).
While the materials required are quite simple to get, I'm more worried about someone jacking my MacBook Pro, wiping the drive, and reselling it than about someone doing a cold boot on it. The materials are cheap and the execution of the attack isn't terrifically complicated for the folks on here, but the likelihood of being a victim of this attack in the wild is exceedingly low. That's what I was getting at, more so than the difficulty of the attack.
You don't actually even need to use liquid nitrogen. Much more common is chemical duster (hold the can upside-down and the boiling fluorocarbons will rapidly cool anything), or even nothing at all if you do it quickly. There are entire USB distributions dedicated to this sort of thing, and it has been done plenty of times outside of academic or professional settings.
For example:
http://revision3.com/hak5/coldbootattack
Reminds me of old home computers, where the RAM was slow enough to discharge that you didn't need to bother to cool down anything: Flicking the power quickly enough on a C64 would often leave most of the memory intact.
Elcomsoft Forensic Disk Decryptor [0] can find the decryption keys in RAM dumps or hibernation files.
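The usual trick for finding keys in a dump, from the original cold boot work, is to look for AES key schedules: 16 bytes followed by exactly the 160 bytes their expansion would produce are almost certainly a live round-key array. A self-contained toy version (exact-match only; real tools like aeskeyfind also tolerate bit errors from decay):

```python
import os

# Build the AES S-box from first principles (GF(2^8) inverse + affine
# map) so the sketch needs no external tables or dependencies.
def _gmul(a: int, b: int) -> int:
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B  # AES irreducible polynomial x^8+x^4+x^3+x+1
        b >>= 1
    return p

SBOX = []
for x in range(256):
    inv = next(y for y in range(256) if _gmul(x, y) == 1) if x else 0
    s, r = inv, inv
    for _ in range(4):
        r = ((r << 1) | (r >> 7)) & 0xFF  # rotate left by 1
        s ^= r
    SBOX.append(s ^ 0x63)

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key: bytes) -> bytes:
    """AES-128 key schedule: 16-byte key -> 176 bytes of round keys."""
    w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = t[1:] + t[:1]              # RotWord
            t = [SBOX[b] for b in t]       # SubWord
            t[0] ^= RCON[i // 4 - 1]
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return bytes(b for word in w for b in word)

def find_aes128_keys(dump: bytes):
    """Slide over the dump; any 16 bytes whose full key schedule
    appears verbatim starting at that offset is almost certainly
    an in-use AES key."""
    hits = []
    for off in range(len(dump) - 176 + 1):
        cand = dump[off:off + 16]
        if expand_key(cand) == dump[off:off + 176]:
            hits.append((off, cand))
    return hits
```

This naive version re-expands a candidate at every offset; production scanners check just one round of the schedule first and fall back to error-tolerant matching, which is what makes recovery work even on a partially decayed image.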
The RAM can be accessed by DMA malware running on the GPU, NIC, Intel ME, or other devices that have a (micro)processor and can use DMA [1, just submitted].
This isn't just about surreptitiously installing a compromised PCI card into a desktop PC. It affects every machine with a FireWire, Thunderbolt, or PCMCIA/ExpressCard port, i.e. most stolen corporate laptops.
I've heard that it could address it in theory but hasn't actually been used for that yet. Do you have a source saying it does this by default, or failing that, how to set it up to do that?
Copy the plaintext of what? The password? The encryption programs usually don't keep the password in RAM longer than required, but the key must always be there.
I just wanted to say that keeping the decryption keys out of RAM and Disks is not that paranoid because there are techniques which allow extraction of data from the RAM: cold boot, ordinary malware/rootkits, DMA malware.
The plaintext of whatever you're encrypting. Presumably you're going to actually use these debug register AES keys to encrypt or decrypt something more interesting.
The sneaky thing is to do what TreVisor (and, I believe, the commercial company PrivateCore) does: encrypt all memory outside the CPU die by pinning the hypervisor and encryption routines to the on-die caches (L1/L2, maybe L3?) and encrypting everything that leaves, presumably with some integrity protection as well. HN user sweis works for PrivateCore; I've talked to them a few times and they seem really interesting, although I think a more conventional HSM makes more sense for some applications, and Intel SGX is going to make the whole thing a lot more interesting in 2-3 years.
The website encountered an unexpected error. Please try again later.
Error message
PDOException: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) in lock_may_be_available() (line 167 of /var/www/i1/includes/lock.inc).
It looks like Drupal isn't suited to withstand an HN DDoS of 70 points (roughly 7000 visitors, by my estimate) in 3 hours.