A read-only memory device for firmware would be helpful. Not flash, not something erasable, but hard ROM, in a socketed device. All the firmware would go in that ROM, and you could take it out if necessary and compare it with other ROMs. There are simple hard-wired ROM comparison devices.
This is the approach mandated by Nevada Gaming Commission slot machine regulations.[1]

[1] http://gaming.nv.gov/modules/showdocument.aspx?documentid=33...
Once upon a time, most/many PCs had physical BIOS protection in the form of a jumper on the motherboard that would put the BIOS into a read-only state. However, for many years now such control cannot be manually asserted by the end user, and the flash just sits there writable (there are chipset-level firmware write protections, but various hacks, like Dark Jedi, have found ways around them). Plus, apparently even when you pull down the WP pin on some flash chips, the hardware setting can still be overridden by software commands. The paper suggests, particularly with the more recent versions of Intel ME, that the PC architecture has now evolved to expect, and perhaps require, access to a writable BIOS (in part because the flash stores not only firmware, but also things like configuration settings and data for the new ME-implemented TPM).
Thus, we may not be able to simply go back to a ROM with today's architectures. However, we can give today's systems something that behaves like a writable flash chip but is readily (and automatically) reset to a clean/factory state.
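On the point above about software overriding the WP pin: on many common SPI NOR parts (Winbond W25Q-style; exact bit names vary by vendor), the /WP pin only locks the protection bits when the status register's SRP bit has already been set, so with SRP clear a software command can undo the protection even with /WP held low. A minimal sketch of that logic (illustrative, not taken from any particular datasheet):

    # Hedged sketch of typical SPI NOR status-register protection (W25Q-style).
    # The /WP pin matters only when the SRP (status register protect) bit is set;
    # otherwise a Write Status Register command can clear the block-protect bits.
    def status_register_writable(srp_bit: int, wp_pin_low: bool) -> bool:
        if srp_bit == 1 and wp_pin_low:
            return False   # hardware-protected: BP bits are locked
        return True        # software can still rewrite the protection bits

    # With SRP=0, pulling /WP low does not help:
    assert status_register_writable(srp_bit=0, wp_pin_low=True) is True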
Some Chromebooks have a "flash write-protect screw"[1] that basically makes the firmware read-only. It acts as a physical protection against reflashing from software (e.g. via flashrom), though I am not sure if there are ways to work around it.
Reprogrammable ROM with jumpers for write-protect is what I did in mine way back. Even game consoles had removable ROM on cartridges. ;) Another scheme, independently designed before mine, used two memories: a ROM as the root of trust for trusted boot, and flash holding most of the firmware for update purposes.
Far as Joanna's idea, this has been done before twice. I posted a business and legal analysis of it here:
There's a discussion in r/netsec right now about the Evil Maid problem. Looking at the datasheet for a common EEPROM (the 24C256, http://www.techdesign.be/projects/datasheet/24LC256.pdf), the write-protect pin is active high and has an internal pull-down to GND. For a motherboard that has a physical write-protect or something like that, it'd be relatively straightforward for the Evil Maid to cut the WP pin trace to the EEPROM to re-enable writes to it.
That makes me sad. If it were the other way around (pull to VCC to enable writes), you'd at least need to add a pretty obvious bodge wire to the WP pin to enable writes.
I can see that for a slot machine, but a laptop probably needs to do something more than that. Don't you want to run a browser? In this case you're locked into a specific browser version, and if someone figures out an exploit that allows an attacker to hijack your browser you have to replace the ROM.
Or do you only run SSH/VNC/RDP and connect to a desktop someplace else?
You're right. In the blog post, the author refers to problems on the x86 platform and I didn't realize she was talking about firmware. It's clear in the paper.
This is interesting to me, because this is the same sort of thinking that led to the current iteration of Chrome OS devices. While not entirely stateless as the paper suggests, they clearly follow the same train of thought. Without the developer switch flipped, verified boot should ensure that Google (and anyone with its private key) is the sole originator of the code on the device.
I wouldn't suggest using a Chrome OS device for anything where opsec matters, but I find the similarities, at least in thought, striking. Chrome OS's early security benefits were that the device could be trusted if the dev switch wasn't flipped -- and it could be fully wiped and restored on demand if need be. The trusted stick described in the PDF would likely share similar characteristics as far as disposability goes.
The issue with ChromeOS is that you know that Google controls everything that’s on it – which is not very useful for someone who wants actual security. A state actor could still force Google to install spyware on the devices.
In fact, for chromebooks for education, Google even allows the schools to MitM the traffic of the pupils with no way for the pupil to know it.
Which gets risky when the pupils take the device home, and the school still controls it.
How is that any different from an employer forcing the laptops they provide to employees through a VPN (where the user is subject to inspection/blacklists/whitelists/etc.) all the time? In both situations you have a machine provided to someone for a specific purpose (business/education) that likely comes with extremely particular acceptable-use terms. MITMing traffic on whatever network seems like a valid way to enforce that. Do your banking on your own machine.
These forced MitMs are all very similar, though I don't see how that gets us anywhere. It's not a valid way of enforcing any policy no matter how twisted, because people will root their devices and circumvent the lock-down.
The reason I call it twisted is because I'm wondering why that lock-down is necessary in the first place: you don't want them to "abuse" the device, causing potential damage? This might be a valid concern with company cars due to the life-threatening aspect of driving, but extending this concept to laptops seems abusive. Either way, the risk posed to the school or company's assets can be effectively taken into account by some kind of insurance policy, not by attempting to lock down the device.
"So how did the hackers get into the corporate network?"
"One of our execs downloaded a game from the Internet that contained malware and then he connected his laptop to the corporate network..."
I’m just arguing that Secure Boot or similar solutions don’t mean that you can always know what’s on the system; they mean that the person with the keys can. In this case, that’s an employee at Google.
As far as I can tell, this proposal makes it possible to "factory reset" a system in a way that removes all malware, and also allows "dual booting" OSes without any of the OSes being able to compromise the other (by flashing malicious firmware which then exploits the other OS on a subsequent boot).
It also prevents some more exotic attacks, like replacing the BIOS (but not any hardware) with a malicious one while the laptop is being delivered, letting it be used without network access (with network access this isn't prevented), and then stealing the laptop and trying to read unencrypted user data leaked by the malicious BIOS.
It is not effective against undetected arbitrary physical attacks (insert a keylogger between keyboard and motherboard) or against persistent software attacks against a single vulnerable OS (persist via the OS autostart mechanism and exploit the OS on each boot).
Having an external stick also mitigates detectable physical attacks (e.g. theft of laptop, or manipulation detected by a broken tamper-proof seal) where the attacker has already stolen the encryption password, since they still won't get the stick and thus won't be able to get the data anyway.
The stick being external doesn't seem to provide much advantage otherwise, since if the laptop hardware is malicious it doesn't help, and if it is not malicious then an internal trusted stick equivalent works just as well.
> The stick being external doesn't seem to provide much advantage otherwise, since if the laptop hardware is malicious it doesn't help, and if it is not malicious then an internal trusted stick equivalent works just as well.
You can take out the external USB stick and keep it in your pocket when you go someplace you wouldn't want to carry a laptop (e.g. a public bathroom). Whether this is necessary depends on how paranoid you want to be.
I think your observations are pretty much spot-on, except for your last point:
> The stick being external doesn't seem to provide much advantage otherwise, since if the laptop hardware is malicious it doesn't help, and if it is not malicious then an internal trusted stick equivalent works just as well.
I think it provides a security-conscious user an added level of comfort/faith over a built-in solution. If you move the flash memory out to this external unit, with a simple three-wire type of interface that gives the system no way to permanently alter the flash contents, that is a fairly solid and tangible promise. To some degree, you get to assert a new level of control over the "root of trust," at least the point at which it begins in firmware.
That doesn't mean there is no room for motherboard vendors to improve things, but we would have to have faith in them having done things correctly. I am not even talking about a hostile motherboard vendor - there are plenty of good-faith or half-baked efforts that end up being circumventable.
It's an interesting idea, but probably hopeless: all this does is move the goalposts. You're still fundamentally trusting the hardware to read and obey the instructions on your weird "external SPI" device, and if you're willing to trust the hardware then you might as well just put it on the motherboard like today's devices do. If you don't trust your hardware (and I challenge anybody to prove that a motherboard full of chips hasn't had a chunk of flash memory added to it without your knowledge) then you have no reason to think that the code it loaded was the code stored on your "trusted stick".
I don't believe that it is possible to build a secure system which isn't based on trusting the device that you hold in your hands. At some level, you need to have a device which is capable of both UI and computation functions to a sufficient extent to validate whatever transaction you are attempting to sign. You could push that onto a smaller device than your laptop (we already know that phone-sized devices are viable), but you still have to end up at the thing you interact with for signing purposes being a device that you trust.
Moving the bar from "this machine can be subverted by anybody who can access the SPI bus" to "this machine can be subverted by anybody who can solder on additional hardware that intercepts and modifies bus activity" is still significant.
I don't think soldering is really required. If I were attacking the proposed system, I'd just jam a variation on those nifty miniature USB keyloggers into the path of the USB port itself. If somebody was building devices like this, then somebody else would quickly create such a device that could be quickly slipped into place (possibly even one you could just jam into the USB port without opening the case, like the fake card slots that ATM attacks use).
What would that get you? The input devices are almost certainly going to be internal and PS/2 or I2C, and if you're the sort of person doing this you'd be using encrypted storage.
I mean tapping the external SPI connection at that point. You can't encrypt the first stage that gets loaded into the CPU, so you would simply replace that with your rootkit and then continue as normal until the user types the decryption key into the now-compromised device.
I'm fascinated by the idea of having a TPM without any on-board storage for it to use. How do you propose that would work?
If you're willing to accept stateful storage for the TPM then I agree this is straightforward, but then I don't think the "stateless device" has been achieved. If you're willing to trust the TPM's storage then you could have just used that to establish trust for everything (which is the status quo on chromebooks).
As described in the article, PTT includes a TPM running on the ME. The CPU loads the ME firmware (which is validated against a key on the ME), then starts executing the rest of the firmware (including copying measurements to the TPM).
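To make the "copying measurements to the TPM" step concrete, here is a minimal sketch of how measured boot chains hashes into a PCR (assuming the SHA-256 bank; the stage names are only illustrative):

    import hashlib

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        """TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(measurement))."""
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    # Each stage is measured before control is handed to it, so the final PCR
    # value summarizes the whole chain and cannot be "unwound" by later code.
    pcr0 = bytes(32)  # PCRs reset to all zeros
    for stage in (b"boot block", b"main firmware", b"bootloader"):
        pcr0 = pcr_extend(pcr0, stage)
    print(pcr0.hex())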
So it just boils down to TPM-protected encrypted storage? That obviously works (because it's how a bunch of devices work today), but it's a lot less exciting... if you can set up a full TPM stack for sealed storage (which we don't have on consumer linux today :( ) then I don't see what attacks this "stateless laptop" defends you against that the TPM doesn't already handle.
I think the main point is that if you trust the main board once, you can trust it (more) in the future. If it doesn't have any changeable memory, modifying it to become hostile is harder and requires physical tampering.
As Animats said, solving that problem just requires making the firmware read-only at the chip level, which is cheap. The current problem exists so devices can be easily updated and can contain more features; users demand both, and both benefit the hardware market. A ROM + flash setup solves the problem but adds a little to cost. Most companies won't do it, but it's another viable solution.
It is going to be pretty hard to take an off-the-shelf notebook and make the proposed changes, especially when you are talking about implementing hardware kill switches for certain components on a 12 layer PCB. The strongest incarnation of this idea seems to involve cooperation with, and some degree of trust in, the motherboard manufacturer (whether that is Purism or someone else). Is there any solution to nation-state hardware attacks involving intercepting your notebook while it is being delivered to you? I think that has always been considered "game over" as far as maintaining security.
However, if you have that, you do get a significant benefit. You are no longer vulnerable to someone sticking a rootkit in your BIOS. That is where a lot of the up-and-coming SMM-level rootkits like to be installed. You can move a fair chunk of your root of trust (the firmware, etc) into a device separate from the notebook, and feel pretty comfortable that device is going to provide certain guarantees about flash memory contents that we do not have with today's systems.
IMO, the ability to run multiple truly independent operating systems on the same hardware (if nothing else) makes it worthwhile.
I am not convinced that forcing attacks to be more likely to be detected is really much of a deterrent in a world where Lenovo can pre-install a rootkit in the bios and not suffer all that much from it. I still think there are overall benefits to aiming for a stateless system.
This is pretty much what the Raspberry Pi does, so we should all switch to that :-)
There is ROM (as far as we know) in the GPU which instructs it to read the CPU firmware from the SD card, and then boots up the machine. Unfortunately neither firmware is open source or well documented, so you still can't really trust it.
This circumvention of surveillance could work for current processors that assume they can trust the SPI chip. But it would be easy for Intel to package the SPI chip together with the CPU in future offerings, circumventing the circumvention.
I don't see any assurance for anyone that doesn't control the foundry itself.
Yes, Intel might do that, but "circumventing the circumvention" practically describes Intel making the change as some kind of malicious/hostile actor that wants to facilitate you being the victim of a BIOS hack. I don't think that is what is happening, but the current architecture does have undesirable consequences. Rutkowska's proposal doesn't really "circumvent" anything Intel is doing - it mainly allows the end user to assert a clean BIOS state.
I really wish there was a good laptop that runs Qubes OS and has no BLOBS/closed code/firmware at all.
I do some research on this every now and then and always come up short. The Librem 15 was the last one I looked into but it had closed components. Something like the Novena can't run Qubes but would otherwise be ideal (I'd gladly give up some battery life, looks and whatnot for pure freedom)
If anyone happens to know a suitable candidate, let me know. I wouldn't mind a bit of hardware replacement if the lone closed component could be swapped out.
I do not believe there are any Intel chips that give you VT-d (IOMMU) and do not require a firmware blob. Blame Intel for that situation.
I think AMD has open sourced most of their BIOS, and a lot more of their hardware supports IOMMU anyway. Maybe that is a more fruitful direction to consider.
I'm probably missing something, but I don't see how this is feasible. Moving all firmware to a device that lives on an external bus means that you must either create a 'trustworthy' distribution channel for all supported firmware (including all system components and peripheral devices), or support only a select few devices and forbid adding any new peripherals. It also means peripheral devices must either be capable of bootstrapping themselves from this chip, or the system must provide a mechanism that 'sends' this firmware to all peripherals before booting the system. I'm inclined to think the complex interdependency of this boot process is impractical.
Also, I have to disagree that FPGAs are ideal for the architecture proposed by this paper. Performance and state issues of an FPGA aside, they're field programmable, which seems more vulnerable than 'microcode updates'. Of course, you could just disable field programming, but why even use an FPGA in the first place?
Disclaimer: I believe Joanna is much smarter than I am, so I wouldn't be surprised if my comments are based on a fundamental misunderstanding.
> or the system must provide a mechanism that 'sends' this firmware to all peripherals before booting the system.
That's pretty much what they note in "The SPI flash chip" section and what's replaced / muxed from the trusted stick in "Putting it all together" (page 15). SPI is that firmware provisioning component here.
Computers already have the type of bus she's talking about. A lot of the low-level/boot level components use SPI/I2C/LPC/etc busses because they're so dead simple to implement. For peripheral cards, PCI-e already supports I2C (well, SMBus technically, but they're close enough), so extension there would also not be terribly difficult.
Finally, regarding FPGAs, they're as programmable as you want them to be. There are a number of applications where they do indeed become write-once chips that just handle what needs handling. Additionally, depending on what you're doing, FPGAs can be more than fast enough -- there are a number of them that support more interesting busses and interconnects, like built in 10-gigabit ethernet. So basically, you end up using the FPGA as a chip fine-tuned and protected based on your needs, not generic needs.
My point is not that it's difficult to design a peripheral device that loads its firmware over SPI (or whatever bus). My point is that firmware for all supported devices must be maintained on the proposed external device.
What happens when you add a new peripheral device to your laptop that didn't exist when your read-only SPI-connected firmware repository was created? How do you solve this with less risk than what we have now? Eliminate hardware upgrades and peripheral devices in favor of disposable computers and e-waste?
FPGA:
I'm afraid the FPGA argument still doesn't make sense. Sure, the community could create a "trusted" processor or SoC, but why use an FPGA over a custom designed processor?
If the FPGA is reprogrammed at every reboot, we now have to ensure this process can't be exploited. If it's never reprogrammed, why use an FPGA in place of a CPU in the first place?
I appreciate the input and perspectives, but I still don't see how the "laptop" described in the paper is advantageous. There are many promising paths that move us much closer to secure computing, but simply moving firmware around doesn't seem to move us forward.
> I'm probably missing something, but I don't see how this is feasible. Moving all firmware to a device that lives on an external bus means that you must either create a 'trustworthy' distribution channel for all supported firmware (including all system components and peripheral devices), or support only a select few devices and forbid adding any new peripherals.
In general, most of the firmware needed for system components is streamed from the main SPI chip to the various components as they are configured by the main system BIOS. Thus, there is mainly a single chip we are concerned about. However, the author identified a second flash for the embedded controller (EC) that is also usually present, so we end up being concerned about two flash modules. The author addresses other firmware - the main example being discrete GPUs - and suggests having a system that does not include them.
> Also, I have to disagree that FPGAs are ideal for the architecture proposed by this paper. Performance and state issues of an FPGA aside, they're field programmable, which seems more vulnerable than 'microcode updates'. Of course, you could just disable field programming, but why even use an FPGA in the first place?
If your only interface between the computer and the FPGA is a three wire interface that emulates an SPI chip, that does not provide any vector for reconfiguring the FPGA. The bitstream for configuring the FPGA is provided via a completely separate set of hardware pins.
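As a rough illustration (behavioral Python, not actual FPGA logic) of what "emulating an SPI flash chip read-only" could look like: the device serves standard JEDEC read opcodes from a fixed image and simply ignores write-enable/program/erase opcodes, so the host side has no command that changes the firmware. The JEDEC ID below is just a placeholder:

    READ, PAGE_PROGRAM, WRITE_ENABLE, SECTOR_ERASE, READ_ID = 0x03, 0x02, 0x06, 0x20, 0x9F

    class ReadOnlySpiFlash:
        """Behavioral model of a flash device that answers reads but never writes."""
        def __init__(self, image: bytes, jedec_id: bytes = b"\xEF\x40\x18"):
            self.image = image        # clean/factory firmware image
            self.jedec_id = jedec_id  # placeholder; a real device would report the chip it emulates

        def transact(self, mosi: bytes) -> bytes:
            opcode = mosi[0]
            if opcode == READ:
                addr = int.from_bytes(mosi[1:4], "big")  # 24-bit address
                n = len(mosi) - 4                        # data clocked out for the rest of the transfer
                return self.image[addr:addr + n]
            if opcode == READ_ID:
                return self.jedec_id
            # WRITE_ENABLE, PAGE_PROGRAM, SECTOR_ERASE and anything else: no effect on the image
            return b"\xFF" * max(0, len(mosi) - 1)

    flash = ReadOnlySpiFlash(image=b"\x55" * 1024)
    assert flash.transact(bytes([PAGE_PROGRAM, 0, 0, 0]) + b"\x00") == b"\xFF" * 4  # write ignored
    assert flash.transact(bytes([READ, 0, 0, 0]) + b"\x00" * 4)[0] == 0x55          # read served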
Thanks for the info! I still don't see how an FPGA (with or without anti-fuse) is any better than fabricating a processor. The paper seems to state that FPGAs are a better option than x86 or ARM implementations, etc. I'm starting to think I misinterpreted that section, or the limitations and trade-offs of FPGAs weren't fully considered.
Note that the tooling that makes this possible, especially design synthesis, can run to $1+ million per user per year. Some are merely upper 5-digits to lower 6-digits. Really inexpensive. Mask costs have come down for older nodes in recent years because the fab equipment is finally paid off. Yet you're still talking millions for a full SoC with modern features. And it will be slow as hell, because if it's a CPU it's on old process tech.
That the market demands more speed, more functions, less power, etc. is why they keep dropping to smaller node sizes. Each one adds new effects that try to break the chip. The electrons even tend to leak out of the transistors. Can't even assume they'll stay in them, haha. Actually, from what I've read, it appears chips are broken all over on the latest nodes, with lots of logic there just to correct for that. Here's an example of the crap they have to do at 28nm, which isn't cutting-edge anymore:
So, you need specialists that make big $$$, $1+ million in EDA tools, mask costs at millions a set per trial, and other stuff like boards (regularly 6 digits on Kickstarter). That's for an ASIC. An FPGA's design flow ends at the RTL simulation part, has no mask costs, free-to-cheap EDA, and often has pre-made boards you can use. The price you pay is lower-than-ASIC performance, higher watts, and a very high per-unit price. Still a better deal on lots of systems, plus it can be converted to a hybrid later (see eASIC Nextreme).
Hope that clears up why one would choose an FPGA over an ASIC. All that said, the difficulties they're facing in this case are largely due to the choice to stay on Intel, Xen, and other difficult-to-secure crap. If one forgoes that software, then one can use Cobham Gaisler's SPARC SoCs, since they're designed for easy modification and are already at quad-core. Academics made many secure CPUs out of his stuff. Just gotta license it, modify it, and run it through the later parts of the ASIC flow. FPGA is still cheaper, but you can FPGA it too. :)
It is the form factor of a portable computer that you could own completely, unlike anything more mobile where at least some of the chips are configured and run by someone else.
Also, you would take your I/O with you, because it makes no sense for a device you barely trust even when it never leaves your side to interface with some static hardware that any third party could have modified. Current monitors have more processing power than early mainframe computers and more than enough room to hide RF equipment for remote snooping.
One thing I would love to have, and that could be a solution to the worst-case early boot encryption issues, is an external crypto processor that generates and stores long-term keys, implements authentication, and only releases temporary keys to the main system (to which it connects only via a simple serial connection). It has enough of a screen and input to receive a password and query the user before performing various actions. That is, something like a Bitcoin Trezor, but with slightly richer input and for more general crypto use. Ideally, such a device could even physically store the trusted stick (or several), although that trusted stick shouldn't interact with the rest of the system differently than any other device, for maximum reliability.

This way the most sensitive crypto is not performed on a general-purpose system, and the user could authenticate to the device once, after which the device can authenticate the user and provide keys to multiple independent systems without hassle. It is an additional expense, so hopefully it wouldn't be necessary, but it would be one way to solve the early boot encryption problem (if needed and less expensive solutions do not work) in a not completely special-purpose way.
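A rough sketch of how such a device might hand the main system only short-lived keys derived from a long-term secret it never releases, assuming an HKDF-style derivation over the serial link; all names and the confirmation step here are illustrative, not part of the comment's actual design:

    import os, hmac, hashlib

    def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
        """Plain HKDF (extract-then-expand) with SHA-256."""
        prk = hmac.new(salt, ikm, hashlib.sha256).digest()
        okm, t, counter = b"", b"", 1
        while len(okm) < length:
            t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
            okm += t
            counter += 1
        return okm[:length]

    LONG_TERM_KEY = os.urandom(32)   # generated and kept inside the device, never exported

    def release_session_key(purpose: bytes, user_confirmed: bool) -> tuple:
        # The device would show `purpose` on its own screen and wait for a button press.
        if not user_confirmed:
            raise PermissionError("user must confirm the request on the device")
        nonce = os.urandom(16)
        return nonce, hkdf_sha256(LONG_TERM_KEY, nonce, purpose)

    # The host gets only (nonce, temporary key); compromising the host later
    # does not reveal the long-term key.
    nonce, disk_key = release_session_key(b"unlock-root-volume", user_confirmed=True)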
SD cards have SPI pins, so I recommend SD for Trusted Stick rather than repurposed USB.
I've bought a 512GB SDXC card for the purpose of backing up my laptop, and often wonder whether to use it as a boot device. It's much less vulnerable to theft when it's safe in my pockets, compared to in a bag.
I'd make one small change. Rather than aim for a laptop first, mod a WiFi SD card or other pocket-sized device. The KeyAsic platform (PQI Air Card/Transcend) has been extensively hacked, and Ubuntu can run on it. Client devices (laptop, phone, etc) could connect over WiFi and run VNC through a web browser. It's still vulnerable to keystroke logging on the client, but it would be possible to switch clients halfway through typing important messages. In my opinion the most secure client device would be an iPod running Rockbox, and connecting to the PQI Air Card over serial. My "WiPod" seems like the closest thing we have to a practical pocket-sized open source device, and it lets me share photos from an SD card to my phone :).
While an OS that doesn't preserve state is an important component of Rutkowska's proposal, and your OS might be one basis for that component, I don't think this is "mostly there" in terms of everything that the paper discusses. Much of what's new in the paper is about hardware issues, especially because it's concerned with firmware attacks that are already being used by attackers like NSA, and that other people clearly understand how to develop in principle.
With these firmware attacks, compromising a device at one point in time may allow the compromise to persist even if the user reinstalls the OS or replaces it with a different one.
The paper is proposing more details of a safer future platform.
Right now, someone who can briefly get kernel-level control on a machine intended to run your OS might be able to reprogram the hard drive firmware. At that point you have a serious authenticity challenge when booting your OS, because the hard drive can alter the contents of particular binaries at the moment they're read from disk. There are some powerful software-only defenses against this, but if an attacker knows which ones you use, they can probably design an attack that evades those.
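One family of software-only defences alluded to above is to check everything read from the untrusted disk against hashes anchored somewhere the drive firmware cannot rewrite (dm-verity works roughly along these lines). A minimal sketch with illustrative names:

    import hashlib

    def build_hash_list(blocks):
        """Computed once from a known-good image and stored on trusted, read-only media."""
        return [hashlib.sha256(b).digest() for b in blocks]

    def verified_read(read_block, hashes, index):
        """Read a block from the untrusted drive and refuse to use it if it was altered."""
        data = read_block(index)
        if hashlib.sha256(data).digest() != hashes[index]:
            raise IOError(f"block {index} failed verification")
        return data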
"The fundamental design flaw of all of these compromised password managers, keychains, etc. is that they keep state in a file. That causes all sorts of problems (syncing among devices, file corruption, unauthorized access, tampering, backups, etc.)."
Short on time, but I'll say she's got the right idea in general. This idea has actually been done before. Removable firmware happened in older machines, too.
Far as I know, I came up with it first, with a proposal on Schneier's blog etc. to put both the CPU and trusted state on a stick or card you inserted into a machine containing only peripherals and maybe RAM. Research CPUs at the time had RAM encryption/integrity so the RAM could be left untrusted. I was thinking PC Card rather than stick due to EMSEC, storage, and cost issues. I'll try to find the link later today.
It was actually inspired by foreign and airport security compromising people's stuff. People asked me to develop a convenient solution. So, the real problem was physical access to the trusted components. That access couldn't be allowed to happen, but we can't keep all our gear with us or away from inspection. A simple chip or PC Card they carried on would be better. The chassis, from laptop to whatever, they could acquire in country or ship separately with inspection. I further imagined a whole market popping up supplying both secure sticks/cards and the stuff you plug them into. Inspiration for that was the iPod and its accessories like docks. One more part was that each user could determine how much protection, from tamper-evidence to EMSEC, to apply to their trusted device.
As it sometimes happens, another company showed up with government backing IIRC and R&D on security devices. Their proposed portfolio was very similar. They undoubtedly started patenting all of it. This created a second risk for anyone attempting what I or now Joanna is attempting: a greedy, defence-connected, third party legally controlling pieces of your core business. They usually just rob people but I predicted on Schneier's blog & later here in a heated debate that they could attempt to change or get rid of the product using their patents. Especially true if a proxy for an intelligence agency. We might have just seen that happen with Apple over iMessage but I can't be sure. Anyway, do know there's both prior art and probably patents on these concepts in defense industry.
So, it was a cool concept. It was one of those I was proudest of given it collapsed problems with all kinds of devices to design and protection of one component. That's basic Orange Book-era thinking I try to remember. Unfortunately, after much debate with marketing types, we determined there was a chicken and the egg problem with these [at the time]. The NRE cost would be high to the point you'd want to be sure there was a demand for thousands of them plus people willing to pay high unit prices. Custom laptops were often closer to $10,000 than $3,000 if low volume. My greater market idea was chicken-and-the-egg times a million. That plus risk of 3rd party patents made me back off the idea as nice but not practical.
Since then, what's changed is dramatically lower cost for homebrew hardware or industrial prototyping. Projects like Novena show it can probably be done for lower NRE than before. However, this is security-critical design that needs strong expertise in both hardware (esp analog/RF) and Intel x86. That will up the NRE and odds of them screwing up. ARM or MIPS ("cheaper ARM") might be easier to do but still need HW expert and significant NRE.
So, there's my take. It's a good idea that two of us in the security industry already fleshed out, with removable firmware being proven in ancient mainframes. Serious marketing obstacles to getting this done and done securely. A high-level design for the technology, as I did, is pretty straightforward and will teach one many lessons. It was a good learning experience if nothing else.
> what's changed is dramatically lower cost for homebrew hardware or industrial prototyping
Also changed is industry awareness of x86 platform (in)security, an increased role for the Intel ME in 2016 laptops, and the existence of a software-hardware partnership (Qubes-Purism) that could advance the proposed architecture.
Any open-hardware implementation of these ideas has the potential to influence mainstream x86 OEMs, as OLPC inspired the netbook category. The more people who design and build open hardware prototypes, the faster the industry can converge on a disaggregated firmware/hardware TCB.
Having the CPU on an external card makes things significantly more difficult - your connector now needs to break out the entire bus and also be capable of delivering ~100W, and you need a cooling solution that can handle that without the benefit of the greater surface area of the laptop chassis. Joanna's approach is much more attractive in terms of being something that involves very little modification of existing platforms.
You might be looking at it a bit differently than me, here. What I was looking at is the central components for CPU & storage are on the card. Power, memory, and peripherals are in the chassis. The card's connectors plug directly into that. So, if anything, I'm re-creating the old situation in towers of pluggable CPU's except it's externally pluggable and the tower is now a laptop with integrated electronics.
Regardless, it was very important to move the CPU out given it's a high chance of targeting or subversion. It is literally the root of trust for computation. I protect it because I assume attackers will be smarter than me and use it against me somehow. Far as cooling, I admit I didn't think much of it for the high-end: just decided on efficient CPU's where that wasn't so much a problem. Think along the lines of the card computers that need no cooling but have good performance.
"Joanna's approach is much more attractive in terms of being something that involves very little modification of existing platforms."
Convenience vs security. Always a tradeoff. I promise you that in physical security you'll find the more convenient versions will usually get you screwed. Especially if EMSEC or subversion matters to you. I'm holding off reviewing the specifics of her work until she finishes it. No promises that I will, but I'd rather wait for the finished thing given the nature of this topic. I'm writing on the general concept, which predates it on paper and partly in real products.
In the past a CPU was attached to a relatively low speed bus, and the peripheral interconnects all came off some external chip. These days you've got PCIe coming off the CPU package and memory clocks in the GHz range, so the mechanical aspects of this become massively more inconvenient. Even ignoring that, once you've got storage and CPU on the card, you've basically got a card that's a significant proportion of the size and weight of a laptop. At which point you could just carry the laptop instead.
> Think along the lines of the card computers that need no cooling but have good performance.
The attempts on that side (such as the Motorola phones that had laptop-style docks available) have been complete failures.
> I promise you that in physical security you'll find the more convenient versions will usually get you screwed
And a solution that's excessively inconvenient will just be ignored.
So, to be clear, you're saying modern processors can no longer be physically plugged into a motherboard? That the processor and BIOS chip are physically too big to be isolated into a card-sized container that plugs into such a slot on a laptop? Everything else, including cooling, could be built into laptop part. But this critical part is impossible with today's technology and they all have to be hardwired at manufacturing?
It's strange because my friend's desktop CPU fit into my hand and plugged into place. That was a year or two ago. If that's no longer possible, though, then the CPU can't be extracted into its own device and my scheme can't apply.
> you're saying modern processors can no longer be physically plugged into a motherboard
In a literal sense, yes - laptop parts are designed for SMT only.
> Everything else, including cooling, could be built into laptop part
The point of this design is to allow users to take their state with them when they leave a hotel room without having to worry about the rest of the system being tampered with. You need the removable device to be packaged such that it's trivially removable, fits in a pocket, and is sufficiently hard-wearing that it won't be damaged. Your approach would require it to have a several hundred-pin connector and some means to bind into the cooling design, and that's an incredibly non-trivial engineering problem.
Appreciate your elaboration. Seems mine is a no go for mobile, then, if it's Intel chips and such. Embedded-style computers for trusted part are still a possibility. I've seen a card computer put into a laptop for a coprocessor. Hardwire in a KVM-style switch so the coprocessor can be the main processor when necessary. It's naturally removable. This lets key stuff be done on trusted component, safe storage, and even checking of untrusted stuff with what techniques are available.
Just gotta have something that does computation & storage that will not lie to its user.
"Considered harmful" is a great format, because the title makes it clear that the author is intentionally taking one side and is then free to concentrate on that side of the story.
I am not sure how you would protect and distribute resources without a (democratic) state. I am a big fan of democracy, and no state (or similar structure) seems like an idea that goes against democracy. And without democracy, I don't see how society can be considered fair and protect its weaker members from harm.
I am not convinced that democracy protects weaker members of society (let two wolves and one sheep elect what the dinner will be...). Rule of Law does.
Democracy gives you a say in politics. Rule of Law gives you a safe life (at least from governmental actions).
I concede that the two may not be unrelated. But this is not necessarily the case.
> let two wolves and one sheep elect what the dinner will be
First of all, one important rule of democracy is that you cannot vote to strip someone's rights away (or alternatively, democratic decisions have to be reversible), otherwise you will get Russell-style paradoxes. The existence (or not) of democracy itself cannot be decided democratically; this is an often-neglected but nevertheless important rule.
Second, your example is unrealistic. There are far fewer wolves than sheep in the real world (literally!). In the real world, this is not really a big concern, because you wouldn't want to live in a society with more wolves than sheep anyway. Almost every social technology (including evolved human cooperation) is predicated on this being false.
Of course, there are other things that are important for society and orthogonal to democracy - as you mention, for example, rule of law. But I disagree that with rule of law you're safe from government actions - the state is typically the actor who enforces the law. So I very much disagree with the sentence "Rule of Law gives you a safe life (at least from governmental actions)." I would like to see an example of a society where this is true without it being democratic, since democracy effectively gives you control over government actions.
I think this might meet all the criteria you are looking for...
The United States is not a Democracy, it is a Representative Republic. It happens to use _some_ bits from the democracy toolbox.
As to any back-and-forth regarding people's rights being voted away and courts of law re-establishing them, there is the issue of slavery, and most recently, gay marriage.
This is very obviously false. It may not have been established with that intent, but today (especially after the progressive movement) it's one of the most advanced democracies in the world. Of course there is still a long way to go.
For example, there is a paper from the Cato Institute showing (across different U.S. states) that more democracy leads to better management of the state budget.
> people's rights being voted away
This is a misrepresentation - people's rights are not being voted away; what happens in these cases is that social progress is not as fast as some liberals would want. (In other words, there is a huge difference between regress and slow progress.) Democracies are conservative, as most normal people are (and this is arguably good engineering practice). And slavery was actually supported by the representative system (you're contradicting yourself a bit here: if you want to blame democracy for slavery, then you shouldn't say the U.S. isn't a democracy).
They're not really "states," but autonomous voluntary regions. After all, if everyone can opt out of the terms of a central monopolistic state, then they cease to be states in the normal sense of the term, since no monopoly on legitimate initiation of force is held and free travel between "borders" is permitted.
This is the so-called "panarchy" arrangement and emerges organically from most forms of anarchism.
Every community is responsible for its own defense, which it can hire out or assemble itself.
Well, you call those "voluntary," but how is that going to work in practice? Being open to taking new members into the community is at odds with people knowing each other well (and being able to rely on one another). If people can voluntarily enter/exit any region/community they want, what prevents criminals from just leaving without facing justice? And if something does prevent it, that goes against the idea of being a voluntary member of the community.
I think there needs to be some balance of these two opposing forces. At minimum, I would love to see these anarchist ideas backed up by some computer model.
Well, in theory anarcho-capitalism should also ensure a fair distribution of resources. It's one of several currents of the Libertarian philosophy. I found an interesting write-up about some real-life Libertarian societies here: [1].
The most prominent problem with this view of resources is that the prevailing mindset about nature is that it's a free resource. Which is fatally wrong! Again, in theory anarcho-capitalism should fix this too, but personally I don't trust individuals to overcome their greed for a greater good.
I don't buy the idea that there are no free resources in the world. It seems very obviously false, it seems isomorphic to the labor theory of value, and it leads to logical contradictions (such that value is not monotonic in effort). At minimum, there are other actors in the world that produce value; take for instance a hen producing an egg. The egg is valuable and is produced by the effort of the hen (scavenging for food). Yet we cannot assign this value to any person; it just comes for free from nature.
I was thinking more about resources like clean air and water. Humans used to pollute rivers and air because they treated them as free resources. However, the cost of air pollution is health problems.
I had a system nearly like what the author describes after the internal SSD drive on my first generation Acer Aspire One died. There didn't seem to be an easy way to replace the drive with a generic SATA SSD (at least none that I was aware of), and a replacement drive was like $80. This is for a $200 netbook, and it wasn't a very good SSD to begin with. So I put Knoppix on a USB stick and basically used that. Since all my stuff is either on my personal server or in the cloud anyway, it was a workable solution. In my case it worked reasonably well, and if I went though the trouble of identifying or putting together a bootable Linux distro with a desktop I really liked I could probably live with something like that as a permanent solution.
I'm not nearly as privacy conscious or paranoid as the author, so I'm satisfied with the convenience of a stateful laptop. I don't even have a screen lock when it wakes from sleep. If you want to use a stateless machine like the author describes, you're going to need a personal server or a cloud provider you really trust to keep your stuff.
Edit: ge0rg had already posted a link to the non-PDF version.
I'm not sure you read the paper? The concern centered on all the flash/firmware components, such as SPI flash and WiFi/BT firmware, not just the OS or hard drive.
The idea proposed was to have the SPI and all device firmware on a secure USB stick that you always keep with you.
You're right. I read the blog post, then skimmed the actual paper. You can already do what she proposes at the OS level, but her concerns go much deeper.
It seems like a pretty valid concern. Part of the next generation of rootkits seems to be SMM-level rootkits (termed "ring -1" by some) that are installed in the BIOS. They are practically undetectable once installed, and can punch through hypervisor protections too.
I think that is also part of the author's concern with Intel ME being present on all systems. It is a separate microcontroller in the chipset that has power on the level of "ring -3" (I believe it is used to implement much of the new SGX instruction set, for example).