TRESOR Runs Encryption Securely Outside RAM (uni-erlangen.de)
112 points by jgrahamc on Sept 27, 2013 | 65 comments


This keeps the AES keys in some x86 debug registers so they never appear in RAM and can't be accessed after a reboot.
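
For a rough idea of the mechanism, here is a minimal sketch (not the actual TRESOR code), assuming a 64-bit kernel context where MOV to a debug register is a legal, privileged instruction:

    /* Sketch only: stash a 256-bit key across the four x86-64 debug
     * registers DR0-DR3, 64 bits each. MOV to/from a debug register
     * is a ring-0 instruction, so this has to run in the kernel. */
    #include <stdint.h>

    static inline void dr_store_key(const uint64_t k[4])
    {
        asm volatile("mov %0, %%dr0" :: "r"(k[0]));
        asm volatile("mov %0, %%dr1" :: "r"(k[1]));
        asm volatile("mov %0, %%dr2" :: "r"(k[2]));
        asm volatile("mov %0, %%dr3" :: "r"(k[3]));
    }

As I understand the paper, the actual patch then pulls the key straight from DR0-DR3 into working registers with interrupts disabled while it runs the AES rounds, and it blocks ptrace and hardware breakpoints so nothing else can read or repurpose those registers; the key never appears as a memory operand.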


Why not keep the keys in a TPM and pull them into "real" registers whenever the kernel context switches into a specially flagged AES decode thread, and zero them when context switching away?


Here is an NSA slide in which the agency discusses exploiting Trusted Computing Platforms for intelligence[1].

The German government believes that the TPM is backdoored and a danger to security[2].

1: http://www.nytimes.com/interactive/2013/09/05/us/documents-r...

2: http://www.techweekeurope.co.uk/news/microsoft-seeks-calm-on...


If you can't trust the TPM, can you trust Intel's debug registers to be secure?

Long term I suspect that this kind of thing will use Intel's Software Guard Extensions (SGX), which creates a trusted enclave of code and data that neither the kernel nor the hypervisor can access.



The second article is quite alarming. Don't run any code in an enclave which you didn't compile yourself! One may not even be able to use a virtual machine to peer inside an enclave by emulating SGX: the software could demand a valid public key stored uniquely in every Intel chip and signed by Intel's private key, which a hypervisor would not have.


Had to delete my comment :(. Thanks for clarifying that.


TRESOR still leaves all the code and data exposed in memory. You can modify the code that's being executed in order to divulge the contents of the debug registers.

We're working on solving the malicious device and cold boot problem at PrivateCore. To do so, we're encrypting all of main memory and keeping plaintext state in the L3 cache.

Here's a resource page with links to the TRESOR paper and other resources: http://privatecore.com/resources-overview/physical-memory-at...


Where's the source?

Or are you expecting us to trust your proprietary nonfree software without ever seeing it?

Good luck with that.


Didn't DDR3 RAM basically make cold boot attacks obsolete? It doesn't hold its contents after power-down long enough for you to even freeze the chips.


Interesting, that sounds correct:

http://www1.cs.fau.de/filepool/projects/coldboot/fares_coldb...

Even with cooling they weren't able to recover the data. However, a warm reboot would still work. Speculating, but some laptops might have a way to force a reboot, like the reset button on some machines. That would at least restart the system, although a proper BIOS configuration should prevent that from being useful.


Hm, it looks to me like they just cover the sorts of temperatures you see in refrigerators.

Any idea if work has been done at cryogenic temperatures?


Shouldn't this also be a really fast implementation, since there isn't even the delay required to move items into/out of RAM?

Also what happens if the scheduler moves the application out of running state?


Debug register access is not necessarily as optimized as access to (e.g.) L1 cache.


There is no application, it is a kernel patch.


Even if all of the AES state is kept in cache, the data to be encrypted would still have to be copied from RAM or disk, right?


Yeah, but you can have an encrypted disk; encrypted RAM, not so much.


Hi. PrivateCore has implemented encrypted RAM as part of a secure hypervisor. Our product is currently in a private beta, but you can check out our website at: http://www.privatecore.com.

We gave a talk on some of the vulnerabilities and mitigations at CanSecWest this year: http://cansecwest.com/slides/2013/PrivateCore%20CSW%202013.p...


OpenBSD will get you halfway there; it at least encrypts virtual memory (if you switch it on).


OpenBSD does this by default. It also now boots crypto disks directly, eliminating the need to create a /boot partition and carry it around if you're concerned about evil maid attacks, though I would imagine a camera or a hardware keyboard keylogger would defeat that pretty easily.


The boot loader is still on the disk, unencrypted.


That's what removable media is for


derp, /root correction


As does OSX. Probably Windows too? There has to be a registry setting to tweak, somewhere.


If you stream the data in from disk (e.g. read 1kB, encrypt 1kB, read 1kB, encrypt 1kB) then a cold boot attack only recovers one small chunk of data. Whereas if your keys are in RAM then a cold boot attack can recover the keys which can then be used to decrypt your entire hard drive.
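
In code, the streaming idea looks roughly like this (a sketch; xor_chunk is a toy stand-in for whatever cipher you actually use):

    /* Sketch: at most one 1 kB chunk of plaintext sits in RAM at any
     * moment, and it is scrubbed as soon as it has been encrypted. */
    #include <stdio.h>
    #include <string.h>

    static void xor_chunk(unsigned char *buf, size_t n, unsigned char key)
    {
        for (size_t i = 0; i < n; i++)
            buf[i] ^= key;              /* placeholder, NOT real crypto */
    }

    static void stream_encrypt(FILE *in, FILE *out)
    {
        unsigned char buf[1024];
        size_t n;

        while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
            xor_chunk(buf, n, 0xAA);    /* encrypt the chunk in place */
            fwrite(buf, 1, n, out);
            memset(buf, 0, sizeof buf); /* don't leave plaintext behind */
        }
    }

(In real code you'd want something like explicit_bzero rather than a plain memset for the scrubbing, since the compiler is allowed to drop a memset of a buffer it considers dead.)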


Hm. Maybe Linux should fold this into the kernel.


Debug registers are only accessible in Ring 0. I haven't looked at the implementation described, but I'd be surprised if it isn't a kernel module or something equivalent.


Are they available to a JTAG probe? I've not looked at a modern motherboard in ages, but I wouldn't be too surprised to find unpopulated JTAG pads near the CPU.


Only boundary scan on Intel.


I don't think x86 uses JTAG.


Why not? Even if it doesn't use JTAG for debug, it almost certainly would need it for boundary scan. I just checked volume 1 of the Intel i7 processor family datasheet, and it does show a JTAG port.


You are correct - it looks like it is distributed as a kernel patch.



I'm not very knowledgeable about x86, but why do they have to use the debug registers if they're slower? I know that in GCC you can mark registers as "do not touch" when you write assembly in C. Why not use general-purpose registers instead?
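
Something like a global register variable, I mean (a sketch, not tested):

    /* GCC global register variable: reserve r12 within this
     * compilation and keep a value in it. */
    register unsigned long secret asm("r12");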


Because the idea is to run it in kernel mode with arbitrary other software running on the computer. Asking other software to pretty please not touch these specific registers isn't going to work, because there's no way to enforce it. The debug registers are perfect because they can only be accessed from kernel mode, not from userland.


Registers you ask the compiler not to touch are still flushed to RAM on context switches, which defeats the purpose of this system.


And to avoid keyloggers being installed while the machine is powered off, see STARK[1] from the same university.

[1] http://www1.informatik.uni-erlangen.de/stark


Now that's paranoid ;)


Not really... I've seen a demonstration where a TrueCrypt (or something similar) encrypted laptop is stolen. The machine is locked, but still powered on (very reasonable: you locked your machine and left your desk). This basically means that the AES key is in RAM.

The attacker/thief cools down the RAM with liquid nitrogen (to slow the discharge), inserts a bootable CD with a special tool to dump physical RAM on bootup, and quickly cycles the laptop battery (causing a cold boot).

Since the RAM is not zeroed on bootup, the "old" bits stay pretty much as they were (thanks to the cooling).

IIRC this allowed the attacker to recover the key with a very high probability of success.


If this is a concern, the solution is easy. Buy an SSD, turn computer off when you walk away, turn computer on when you come back.

The intersection of people vulnerable to cold boot attacks and people likely to be victims of cold boot attacks is hopefully the empty set.


While everything you say is true, it would seem paranoid in many cases. I don't know how many neighborhood electronics fencers have liquid nitrogen lying around, or the computer savvy to pull off this kind of thing.

If someone who uses this methodology is after me, then I have seriously pissed off the wrong people. While you could argue that it's only paranoia if the thing you're preparing for isn't actually possible, I would say that you're at least one leg over the fence into paranoia-land if you're prepping against a cold boot attack and don't have any nation-state or corporate-espionage-level enemies.


Corporate espionage isn't exactly rare, and it can affect any employee (usually the least important employees, since they tend to be the least careful).


LN2 is cheap. If undergrads can make LN2 ice cream, they can use it for cracking too.


One of the nice things about OSX and Filevault2 is that you can force the key to be destroyed on suspend:

     destroyfvkeyonstandby - Destroy File Vault Key when going
     to standby mode. By default File vault keys are retained
     even when system goes to standby. If the keys are
     destroyed, user will be prompted to enter the password
     while coming out of standby mode. (value: 1 - Destroy, 0 -
     Retain)
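
If I'm remembering the syntax correctly, that's a pmset setting, so turning it on should look something like:

     sudo pmset -a destroyfvkeyonstandby 1

(As far as I remember it only applies to standby, not ordinary sleep, so it's usually paired with a hibernate mode that powers the RAM down.)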


One of the other nice things about OSX is that the feds may already have your key[0], so if you manage to get your computer back after they confiscate it, it won't have cracks in it from the extreme cold.

[0]http://www.nosuchcon.org/talks/D1_02_Alex_Ninjas_and_Harry_P...


SMCs are present in nearly all Intel systems... they could very well store your TrueCrypt keys too.


SMCs are not the problem. The problem is that code in OSX could put your key there in a way that someone could dump it.

Of course, since FileVault is not open source, we have no way of knowing if it does this. Is this paranoid? Perhaps, but if you are worried about cold boot attacks you should be worried about this as well.

You might also be worried about some strange design decisions in FileVault, such as the fact that it uses public key cryptography[0] for what ought to just be symmetric disk encryption. While not a red flag, it is a bit strange.

[0]http://deimos3.apple.com/WebObjects/Core.woa/FeedEnclosure/u...


The trust you have in proprietary software is charming.


It doesn't actually require LN2. In fact, in many cases all you really need is a can of compressed air and a USB drive.

The idea that your full disk encryption is only safe so long as nobody manages to have such outlandishly difficult to acquire materials is rather disheartening.

Moreover, disk encryption systems aren't just designed for overly paranoid individuals who probably don't have anything more interesting on their drives than embarrassing porn; they're also targeted at people who have data they seriously want to keep from being divulged. A perfect example is a running but locked corporate laptop being stolen from an office building or even a public space (e.g. a coffee shop). Someone intent on industrial espionage would have no difficulty whatsoever pulling off a cold boot attack on a vulnerable system, even if liquid nitrogen were required (it's quite easy to obtain and fairly cheap).


While the materials required are quite simple to get, I am more worried about someone jacking my MacBook Pro, wiping the drive and reselling it than about someone doing a cold boot attack on it. The materials are cheap and the execution of the attack isn't terrifically complicated for the folks on here, but the likelihood of being a victim of this attack in the wild is exceedingly low; that's what I was getting at, more so than the difficulty of the attack.


Obviously it is not neighborhood electronics fencers we're addressing here.


You don't actually even need liquid nitrogen. Much more common is a chemical duster (hold the can upside down and the boiling fluorocarbons will rapidly cool anything), or even nothing at all if you do it quickly. There are entire USB distributions dedicated to this sort of thing, and it has been done plenty of times outside of academic or professional settings. For example: http://revision3.com/hak5/coldbootattack


Reminds me of old home computers, where the RAM discharged slowly enough that you didn't need to bother cooling anything down: flicking the power quickly enough on a C64 would often leave most of the memory intact.


That was probably because of a capacitor somewhere in the power supply though, right?


Elcomsoft Forensic Disk Decryptor [0] can find the decryption keys in RAM dumps or hibernation files.

The RAM can be accessed using DMA malware running from the GPU/NIC/Intel ME or other devices which have a (micro-)processor and can use DMA [1, just submitted].

[0] - http://www.elcomsoft.com/efdd.html

[1] - https://news.ycombinator.com/item?id=6457069


Your backdoored DMA capable NIC can also just copy the plaintext out of RAM without futzing around with the key.


This isn't just about surreptitiously installing a compromised PCI card into a desktop PC. It affects every machine with a FireWire, Thunderbolt, or PCMCIA/ExpressCard port, i.e. most stolen corporate laptops.

https://en.wikipedia.org/wiki/DMA_attack


VT-d addresses this. It has been standard on most Intel Core processors for a couple of years.


I've heard that it could address it in theory but hasn't actually been used for that yet. Do you have a source saying it does this by default, or failing that, how to set it up to do that?


IOMMU with VT-d is present, but not utilized by some major commercial operating systems and hypervisors.

There are also some implementation vulnerabilities in specific OSes which are not yet publicly disclosed.


Copy the plaintext of what? The password? Encryption programs usually don't keep the password in RAM longer than required, but the key must always be there.

I just wanted to say that keeping the decryption keys out of RAM and off disk is not that paranoid, because there are techniques which allow extraction of data from RAM: cold boot, ordinary malware/rootkits, DMA malware.


The plaintext of whatever you're encrypting. Presumably you're going to actually use these debug register AES keys to encrypt or decrypt something more interesting.


The sneaky thing is to do what TreVisor, and I believe the commercial company PrivateCore, do: encrypt all memory outside the CPU die by pinning the hypervisor and encryption routines into the on-die cache (L1/L2, maybe L3?) and encrypting everything that leaves it (and presumably doing some integrity protection as well). HN user sweis works for PrivateCore; I've talked to them a few times and they seem really interesting, although I think a more conventional HSM makes more sense for some applications, and Intel SGX is going to make this whole area a lot more interesting in 2-3 years.


Here's a video of a malicious PCIe device extracting both plaintext and encrypted main memory: http://www.youtube.com/watch?v=chvJpEmXvDk


/me sings, "Another one bites the Dust"

    Error

    The website encountered an unexpected error. Please try again later.

    Error message

    PDOException: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) in lock_may_be_available() (line 167 of /var/www/i1/includes/lock.inc).

It looks as if Drupal is not suited to withstand an HN DDoS of 70 points (an estimated 7000 visitors) in 3 hours.



