
So, I read a few of the papers. I'm hoping the Ethos OS folks are on this forum, since I have a couple questions and concerns.

First, my current understanding of the OS:

Networking TL;DR: Their protocol is called MinimaLT, and it effectively replaces TCP/IP. Traffic sent through a MinimaLT channel is encrypted and MAC'ed by the OS. Each user has a public key, which gets submitted to the remote host for each MinimaLT channel. The public key identifies processes belonging to that user on both endpoints. The user can generate a new public/private key as often as once per connection to allow for anonymity. The OS maintains one channel per host-to-host tunnel, and multiplexes it across applications.
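To make the multiplexing concrete, here's a toy sketch of the "one tunnel per host pair, shared by applications" idea (all names are mine, and HMAC-SHA256 stands in for the real Curve25519/Salsa20-Poly1305 construction; this is not the actual MinimaLT implementation):

```python
import hmac, hashlib, os

class Tunnel:
    """One encrypted, MAC'ed channel per remote host. HMAC-SHA256 is a
    toy stand-in for MinimaLT's real authenticated encryption."""
    def __init__(self, host):
        self.host = host
        self.key = os.urandom(32)   # ephemeral symmetric key for this tunnel
        self.next_conn_id = 0

    def open_connection(self):
        """Applications share the tunnel; each gets its own connection ID."""
        cid = self.next_conn_id
        self.next_conn_id += 1
        return cid

    def seal(self, conn_id, payload):
        """Tag (conn_id, payload) so the receiver can demultiplex and verify."""
        msg = conn_id.to_bytes(4, "big") + payload
        tag = hmac.new(self.key, msg, hashlib.sha256).digest()
        return msg + tag

    def open(self, packet):
        """Verify the MAC, then split the packet back into (conn_id, payload)."""
        msg, tag = packet[:-32], packet[-32:]
        expected = hmac.new(self.key, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            raise ValueError("MAC check failed")
        return int.from_bytes(msg[:4], "big"), msg[4:]

tunnels = {}
def get_tunnel(host):
    """The OS keeps one tunnel per remote host and reuses it across apps."""
    if host not in tunnels:
        tunnels[host] = Tunnel(host)
    return tunnels[host]
```

The point is the demultiplexing key: the per-packet connection ID inside one shared, authenticated channel, rather than one TCP connection per application.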

Key management TL;DR: There exists an organization-wide key directory service and ephemeral key upload service. Servers register and re-distribute their ephemeral keys to their local key upload service, which synchronizes them with the directory service. Clients connect to their local directory service to get the ephemeral keys for other servers. The system scales up by piggybacking on DNSSEC--the directory service delivers its local servers' ephemeral keys to other directory services outside the organization by embedding them in short-lived DNS records (which then get cached).
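A minimal sketch of the caching behavior that "short-lived DNS records" implies, under my own naming (the real record format is defined in the papers): a directory service publishes each server's ephemeral key with a TTL, and a remote directory serves it only until the TTL expires.

```python
import time

class DirectoryCache:
    """Toy model of the ephemeral-key distribution path: keys ride in
    short-TTL DNS-style records and are dropped once stale, forcing a
    fresh DNSSEC-backed fetch."""
    def __init__(self):
        self.cache = {}   # hostname -> (ephemeral_key, expires_at)

    def publish(self, hostname, ephemeral_key, ttl_seconds):
        self.cache[hostname] = (ephemeral_key, time.time() + ttl_seconds)

    def lookup(self, hostname):
        entry = self.cache.get(hostname)
        if entry is None:
            return None
        key, expires_at = entry
        if time.time() >= expires_at:   # TTL expired: must re-fetch
            del self.cache[hostname]
            return None
        return key
```

The short TTL is what keeps ephemeral keys ephemeral even when they're cached across organizations.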

Questions:

* It's not clear to me how a server comes to trust a user's public key. Is it trust-on-first-use? If so, how does a user revoke the public key? For example, if Mal stole Alice's key, now Bob's server thinks Mal's actions are from Alice. What do Bob and Alice do then?

* It's not clear to me how the directory service and key upload service come to trust a local server. It looks like this is something the local admin has to do manually?

Concerns:

* I'm not sure if this is more secure; in fact, I think it's less secure than SSL. The authenticity and integrity of ephemeral server public keys are backed by DNSSEC's security. So, this scheme effectively replaces a bunch of (presumably) independent TLS CAs with just one: the DNS root. You can bet the NSA has the private key.

* I don't like how the network architecture couples key distribution to name resolution, which I view as orthogonal concerns. This design puts both under the control of the same administrative entity, which makes it easy for that administrative entity to trick clients into communicating with the wrong servers.

* MinimaLT isn't amenable to content caching. How does a CDN know that two ciphertexts are really e.g. the same image file? If it can tell, then the CDN can break your end-to-end encryption. If it can't tell, then the origin server can't scale.




Borando is correct. MinimaLT (our TLS replacement) will use some sort of PKI, and eventually SayI. SayI is completely distributed, and enables the relying party (the entity doing the authentication) to choose which parts of the PKI to trust. And it will scale to the Internet; efficiency has been a problem with choose-who-you-trust PKIs.

We are working to release a Research Prototype for MinimaLT, which can be used for open source prototyping while simultaneously hardening our implementation.

There are a number of projects, open source and academic, that are looking at MinimaLT. Some are implementing or providing interfaces for other languages (e.g., JavaScript/Erlang). Others are analyzing the security.

Most of all, we are focused on a small, tight codebase. Everything needed for security but not one iota of extra code. This is one of the ways we are engineering MinimaLT to avoid the problems that plague TLS.


The system scales up by piggybacking on DNSSEC

You're missing the major point: MinimaLT will initially use X.509 (since it's already deployed). A future protocol upgrade will support, if I'm not mistaken, sayI.

DNS Security (e.g. DNSCurve, DNSCrypt, or even DNSSEC) adds a second layer of security: keys are transmitted in DNS records, and server auth is done via X.509.

This means an attacker would have to break both X.509 _and_ DNS.

I'm not sure if this is more secure; in fact, I think it's less secure than SSL

I believe the above point addresses your concern. In addition, MinimaLT's Curve25519 + Salsa20-Poly1305 is superior to any ciphersuite found in TLS.


Okay, that was not immediately clear. Thanks! :)


As discussed in many, many other OpenSSL related posts here in the last few days, it's becoming pretty clear that any "open-to-public" PKI system will practically be required to be built on a third-party trust system (as also discussed, the PGP WOT system also reduces to a third-party trust system when it comes under attack). @jude- How else would you solve this problem if not through one or the other third-party trust based PKI?


I've had an idea rattling around in my head for a while now about how this might be achieved.

* CAVEATS *

(Not directed at anyone in particular, but I always state these when I talk about security in detail).

"Security" and "trust" are fundamentally economic and social problems that sometimes (but not always) have feasible technical solutions. You can't reason about either without first considering the (human) adversaries you face, as well as the society in which your users and adversaries live. In my ideal world, for example, all systems are secure and trustworthy by default, since the people building them and interacting with them are all responsible and ethical (we sadly do not live in this world).

Paradoxically, the "easy" part of making a system secure and trustworthy is implementing the cryptographic primitives, since their correctness can be formally proven. The "hard" part is key management, since "correct" key management depends on your threat model. If you mess key management up, then it doesn't matter how well your crypto is implemented, since now your adversaries have your keys.

* THREAT MODEL *

Now that that's out of the way, let's consider threats to key distribution in the post-Snowden world. We're up against a large state-level adversary (e.g. the Mafia, the NSA, botnets) that has lots and lots of CPU cycles. The adversary can read anything on any network and store it indefinitely, and they can alter any data in-transit. They can coerce a large number of users to make all of their data readable, and they can coerce them to change their data arbitrarily. However, I'll assume that the rest of the users beyond the adversary have more computing power combined, and that the adversary cannot coerce the majority of users to reveal or alter their data. I think that both of these constraints on the adversary's power are reasonable, since external factors (like social push-back against the large-scale power abuse required to overcome these constraints) will limit the power of the adversary in practice.

Now, the problem is: given this adversary, how do two users Alice and Bob exchange and revoke public keys over the Internet? With PKI strategies, Alice and Bob both trust Charlie, who certifies Alice's and Bob's public keys. If Alice revokes her public key, she gets Charlie to vouch for a revocation notice to Bob, which Bob accepts since Charlie vouched for it.

The problem with PKI implementations today isn't Charlie per se. After all, Alice and Bob have to have some way of verifying that each other's public keys are authentic, and since it's not feasible for them to meet in person to do this, they have to "meet in the middle" with either trust-on-first-use semantics or with one or more trusted intermediaries (CAs, web-of-trust). The problem with PKI implementations today is that our adversary can get Charlie (i.e. a TLS CA, a key server, the ISP's network, etc.) to vouch for or deliver the wrong keys and revocation notices. What's needed is a way to make it so the adversary can't coerce Charlie without receiving a HUGE public backlash each time.

To do so, we'll use a well-known blockchain as a notary for Alice and Bob's public keys. Under our threat model, it's reasonable to assume that even a state-level adversary does not have enough power (computing or coercive) to execute a 51% attack on the blockchain. To see why this is a reasonable assumption, consider what would happen if the NSA were to attack Bitcoin. If they did, everyone who invested in Bitcoin--a lot of people, some with powerful friends and lobbying groups--would be out for blood, and the NSA's power would likely be reduced by Congress or the President as part of the backlash (we might also see resignations from the NSA ranks). Moreover, the blockchain could be recovered and the offending hosts identified (the fork where the 51% attack started would be evident), making the pay-off to the NSA very small compared to its potential losses.

* KEY MANAGEMENT *

The blockchain gives me an idea for a protocol based on the Sovereign Key proposal from the EFF [1], but with two key differences (no pun intended). First, unlike traditional PKI systems, Alice generates key pairs and CSRs in advance, and publishes all of the public keys at once to one or more public locations. She uploads the URLs to the public key bundle, its cryptographic hash, and one or more user-specific identifiers to the blockchain (using a one-time-use blockchain key pair). Then, anyone can fetch her public key bundle and verify its integrity by using the blockchain as a notary, and we don't have to worry about the adversary covertly modifying the keys or identifying information after the fact.
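Here's a sketch of that publish-then-verify flow, under my own field names and a simplified serialization (anything not in the proposal above is my assumption):

```python
import hashlib, json

def bundle_hash(public_keys):
    """Canonical hash of Alice's key bundle (simplified serialization)."""
    blob = json.dumps(sorted(public_keys)).encode()
    return hashlib.sha256(blob).hexdigest()

def make_record(urls, public_keys, identifiers):
    """The record Alice uploads to the blockchain: bundle URLs, the
    bundle's hash, and her user-specific identifiers."""
    return {"urls": urls, "hash": bundle_hash(public_keys), "ids": identifiers}

def verify_bundle(record, fetched_keys):
    """Bob fetched the bundle from one of the listed URLs; the blockchain
    record acts as the notary proving it wasn't modified after publication."""
    return bundle_hash(fetched_keys) == record["hash"]
```

The bundle itself can live anywhere mutable or untrusted; only the hash needs the blockchain's immutability.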

The second key difference is that Alice manages her private keys in a way to force the adversary to use human intervention to compromise them (greatly driving up the economic and social costs of doing so). To bootstrap key distribution, Alice thinks for a while, and deduces that she is likely to encounter N private key compromises over her lifetime. She gets a trusted computer and generates N key pairs and a CSR for each one. She signs each CSR with every private key to prove their authenticity (since throughout her life, only she is expected to know them all). She puts these keys in the order she will use them over the course of her lifetime, and gives them each a sequence number (PK_1, PK_2, PK_3, etc.).

She takes the private keys and their matching CSRs and stores them in various places that are hard for the adversary to find without noticeable human involvement. Some she keeps on her devices, some she stores in an offline USB stick, some she prints out and puts in a bank vault, some she prints out and buries in a safe in her back yard, etc. She only needs to have the keypair she's currently using installed on her devices. This leverages the assumption that it's not feasible for the adversary to compromise all of her keys without forcing her to help (and thus bringing their snooping to her attention, which she'll complain about on the Internet and get the media involved). Even a state-level adversary won't be covertly digging up everyone's back yards in search of buried private keys anytime soon :)

Alice uses her keypairs in order, revoking them as they get compromised or expire (if she specifies an expiration date). When she revokes a public key, she goes and gets her offline CSR and publishes it to the blockchain.

Alice always uses the uncompromised private key with the smallest sequence number to communicate securely. Bob can easily figure out which one this is by (1) fetching Alice's key list (i.e. from one of the URLs in the blockchain), (2) verifying its integrity with the blockchain's copy of its hash, and (3) scanning the blockchain for subsequent valid CSRs.
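Step (3) reduces to a simple scan once Bob has the bundle and the set of revoked sequence numbers (a sketch under my naming; the revocation set would come from the blockchain's published CSRs):

```python
def current_key(key_bundle, revoked_seqs):
    """Alice's active key: the uncompromised key with the smallest sequence
    number. key_bundle maps sequence number -> public key; revoked_seqs is
    the set of sequence numbers with valid revocation CSRs on the blockchain."""
    for seq in sorted(key_bundle):
        if seq not in revoked_seqs:
            return seq, key_bundle[seq]
    return None   # every key revoked: the identity must be rebuilt
```

The None case is exactly the "total compromise" scenario discussed further down: when all N keys are burned, there's no key left to authenticate a successor with.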

The remaining challenge is for Bob to discover which blockchain record belongs to Alice (so he can get the right key bundles). To do so, Alice includes as many user-specific identifiers in her blockchain record as possible, to bind her strongly to her key bundle.

The set of user-specific identifiers is arbitrary (I got the idea from onename.io [2]). Examples:

* a (hash, [URL list]) pair that identifies a picture of Alice holding a hand-written copy of her key bundle hash.

* Alice's TLS-secured and DNSSEC-secured domain name

* Alice's bitcoin wallet, which has a transaction including her key bundle hash

* Alice's twitter feed, which contains her key bundle hash

* A signature of the key bundle using a private key from a separate PKI (which Bob has the public key for)

* her email address, so you can ask her for her key bundle hash

* a link to a press conference that features a picture of her doing something she's well known for

* cryptographic signatures from other people or CAs

* usernames on other services, along with content that includes the hash

* an OpenID identity URL, which contains her key bundle hash
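Most of the identifiers above reduce to one automatic check a client could run: does the content behind the identifier contain the key bundle hash? A toy sketch (function names and the threshold idea are mine):

```python
import hashlib

def identifier_checks_out(bundle_hash_hex, fetched_content):
    """Does the content at an identifier URL (a tweet, a wallet
    transaction, an OpenID page, ...) contain Alice's bundle hash?
    A match binds that identifier to the bundle."""
    return bundle_hash_hex in fetched_content

def score_identifiers(bundle_hash_hex, fetched_pages):
    """The more independent identifiers that check out, the more expensive
    the bundle is to forge; a client could require some threshold
    before trusting it automatically."""
    return sum(identifier_checks_out(bundle_hash_hex, page) for page in fetched_pages)
```

Manual identifiers (the photo, the press conference) don't fit this check and would surface in a UI for the human to judge instead.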

There is space here for TOFU, CA, and WOT semantics, allowing Alice and Bob multiple ways to authenticate (some automatic, some manual). Alice can include her TLS certificate to authenticate to Bob via the existing TLS CA infrastructure. If Alice and Bob prefer WOT, they can authenticate through a chain of individuals between them. If Alice and Bob want to manually authenticate with TOFU (i.e. through their web browsers), the browser will load a dossier of the other principal using the blockchain record.

Once Bob has decided to accept Alice's key bundle hash based on the identifying information, then Bob can communicate with Alice securely. I intentionally avoid mandating an authentication mechanism, because this is an application-specific choice based on what the user will trust the application with and how secure the application needs to be.

[1] https://www.eff.org/deeplinks/2011/11/sovereign-keys-proposa...

[2] https://onename.io


Your plan relies on two false assumptions.

1) The government can be trusted to mitigate its own attacks because bitcoin users get angry ("the NSA's power would likely be reduced by Congress or the President"). This is patently ridiculous; government security agencies do not become less powerful over time.

2) Alice can predict the future. ("Alice thinks for a while, and deduces that she is likely to encounter N private key compromises over her lifetime") Similarly, this is... unreliable at best, utterly ridiculous as phrased.

The final, and fatal, problem with this plan is that Alice's bindings to 'her' blockchain are subject to false flag operations / account compromises (OpenID, email, etc), or else they are relying on other PKI schemes (DNSSEC, TLS), which just pushes the problem back to the starting gate.


Thank you for your reply.

1) First, it doesn't particularly matter as long as they are not powerful enough to execute a 51% attack. Second, government security agencies (or rather, the laws granting them their powers) do get rolled back in practice once they're found to be unconstitutional or supremely unpopular.

A few examples off the top of my head:

* The US doesn't intern Japanese-Americans anymore, but did for "national security" reasons during WWII.

* The US has suspended and later restored habeas corpus during the Civil War, Reconstruction, colonization of the Philippines, and WWII (again, due to "national security").

* The US passed and later repealed (or allowed the expiration of) the Alien and Sedition Acts (also "national security").

* The SCOTUS has limited the powers granted by the Espionage Act of 1917 over the course of the 20th century (particularly w.r.t. The Pentagon Papers).

I personally think if the NSA started to attack its own citizens en masse via a 51% attack, the citizens and their representatives would soon look to curtail its power to do so (especially since there's money involved). Maybe I'm too optimistic, but time will tell.

2) Alice's goal is to make it so that it's not cost-effective for her adversary to compromise all of her private keys. She must estimate how much time and money her adversary is willing to spend, and then devise a way of storing her offline keys so that it's not worth their while to compromise them. Effectively, she is insuring her keys against theft, just as she might insure another easily-stolen valuable asset. This means Alice can bring to bear the whole field of actuarial science ("predicting the future" as you call it) to find the best way to keep her keys safe.

The final, and fatal, problem with this plan is that Alice's bindings to 'her' blockchain are subject to false flag operations / account compromises (OpenID, email, etc), or else they are relying on other PKI schemes (DNSSEC, TLS), which just pushes the problem back to the starting gate.

Regarding account compromises, the public key bundle can only be written once, and Alice has (presumably) insured her private keys. This limits the effectiveness of false flag operations--once Alice (or whatever username she claims) publishes a key bundle, no one else can do so without being caught and rejected.

Before addressing full compromises, consider this thought experiment: if you were to one day meet me in person, how would you know it was me? Sure, I could say "I am jude- from Hacker News" and show you my GitHub page, my government proofs of identity, my fingerprints, etc. But, that could all be forged, and pretty easily by the likes of the NSA. Proving that I am who I say that I am will ultimately require a leap of faith on your part--there has to be some information I can show you that will convince you that I am who I say that I am (i.e. you believe the information I present would be infeasible for an imposter to forge--you can apply actuarial science to estimate the costs and payoffs to an impostor, as well as likelihoods and conditional probabilities that an impostor would want to spoof you, and so on).

Now, maybe the actuaries got it wrong, and maybe it became feasible for Alice's adversaries to compromise all of her keys, and maybe that happens. In general, there are three possible responses:

* Alice uses a trusted 3rd party to vouch for her over the impostor (undesirable as you say, since this is DNSSEC and TLS all over again).

* Alice posts all of the CSRs she generated earlier, thereby revoking the impostor's identity, and then races and beats the impostor to re-generating the identity/keybundle. Not always possible, since Alice may have lost the CSRs, and/or Alice might lose the race. This leads to option 3:

* Alice abandons the old identity and builds a new one.

Granted, option 3 is messy and hard--a lot of people would be in a world of hurt if they lost their Facebook accounts, for example. However, this is why you have lots of offline keys stored in lots of different places in the first place--to avoid this situation altogether. Since we want to avoid centralized 3rd parties, I don't see any other way than to do our best to insure against total key compromises.

Perhaps one extension to the protocol would be to devise a secure way for Alice to generate more public keys, if she still retains enough private keys...




