
Unless you're shipping the customer a persistent component that doesn't automatically update itself every time they connect to the service, you might as well just store the keys, because (as Lavabit apparently discovered) if a court decides you need to cough up your users' info, they'll probably just get you to install whatever code is needed to make that happen.



If I understand correctly, the decryption code was on the server, so the user was providing the password for the private key, and code from the server was doing the encryption/decryption (whether by server-side or client-side execution, I don't know). This scheme could be backdoored by altering that server-side code. So why not keep the code on the client side (like a signed, verified browser add-on or a native application)? Then there would be no way to get the private key's password from the user (except by installing malware via some 0-day browser vuln).


Note that Chrome extensions can suffer from essentially the same problem, since they can autoupdate.

The reason nobody does this, though, is that end-users don't want to install software on their machines. Unfortunately, there is a fundamental conflict between the desire to run software off of other people's machines rather than your own and the desire for cryptographic security.


The client would have to be open source in order for that to work. Otherwise they can just force the provider to put a backdoor in the client code.

Probably open source with no downloadable binaries would be best. Maybe even better would be distributing only through a source control system, so there's an easily accessible audit trail. The idea being not necessarily to prevent an evil adversary, but to convince a judge that there's no way to covertly be an evil adversary.
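The audit-trail idea can be sketched very simply: before running downloaded client code, verify it against a checksum pinned out-of-band (e.g. recorded in public source-control history). This is just an illustration; the names (verify_client, the pinned hash) are hypothetical, not from any real release process.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_client(code: bytes, pinned_hash: str) -> bool:
    """Refuse to run client code whose hash doesn't match the pin."""
    return sha256_hex(code) == pinned_hash

# The pin would be published in the public audit trail, so any
# tampered update fails verification on every user's machine.
client_code = b"print('hello')"
pin = sha256_hex(client_code)

assert verify_client(client_code, pin)
assert not verify_client(b"print('backdoored')", pin)
```

The point isn't that a hash check is hard to write; it's that a public, append-only record of the pins makes a coerced backdoor detectable after the fact.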


I don't think you've addressed my point. I don't hand my private key to anyone, including the service operator, so there's nothing they can cough up to make anything happen.

Go back to the original padlock encryption model: I leave my padlocks out in public and tell people to send me messages at some dropoff spot (alt.wesley.crusher.die.die.die, for example), then collect my encrypted (padlocked) messages whenever I see fit and unlock them with the key I have never given out.
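The padlock asymmetry can be shown with textbook RSA and tiny primes. This is wildly insecure by construction (no padding, a 12-bit modulus); it only demonstrates that the published "padlock" (n, e) lets anyone lock a message that only the never-shared private exponent d can open.

```python
# Toy textbook RSA -- illustration only, never use for real crypto.
p, q = 61, 53            # secret primes
n = p * q                # 3233, published along with e as the "padlock"
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # 3120
d = pow(e, -1, phi)      # private exponent, never given out (Python 3.8+)

def lock(m: int) -> int:
    """Anyone can snap the padlock shut."""
    return pow(m, e, n)

def unlock(c: int) -> int:
    """Only the key holder can open it."""
    return pow(c, d, n)

c = lock(42)
assert c != 42            # ciphertext differs from the message
assert unlock(c) == 42    # round-trips back to the original
```

A real system would use a vetted library and hybrid encryption, but the trust model is the same: the service only ever sees padlocks and padlocked boxes.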

This could obviously be dressed up like email if necessary. E.g. set up a mail server that bounces all unencrypted incoming messages to me with a "try again using this public key".
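A minimal sketch of that bounce behavior, assuming mail bodies arrive as strings (the function names and PUBLIC_KEY placeholder here are illustrative, not any real mail server's API):

```python
# Hypothetical public key; a real server would publish its actual key.
PUBLIC_KEY = ("-----BEGIN PGP PUBLIC KEY BLOCK-----\n"
              "(recipient's public key goes here)\n"
              "-----END PGP PUBLIC KEY BLOCK-----")

def looks_encrypted(body: str) -> bool:
    """Crude check: does the body carry a PGP ASCII-armored message?"""
    return ("-----BEGIN PGP MESSAGE-----" in body
            and "-----END PGP MESSAGE-----" in body)

def handle_incoming(body: str):
    """Deliver armored mail; bounce anything else with instructions."""
    if looks_encrypted(body):
        return ("deliver", body)
    bounce = ("Your message was not encrypted and was discarded.\n"
              "Try again using this public key:\n\n" + PUBLIC_KEY)
    return ("bounce", bounce)
```

Note the check can't tell real ciphertext from plaintext wrapped in armor lines, which is exactly the point made downthread: the server never needs to know what's inside.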

Yes, the authorities can track my attempts to access the server (but guess what, they already can). And because my inbox requires no password to access, they can't tell me apart from any random person or bot trying to access my mail.

The authorities (and anyone else) can also deluge me with encrypted spam (including spam I can't decrypt :-)). It's not perfect.


That is not how the POP or IMAP protocols work. To make the system work with existing clients, the server needs to have the plaintext when it is sending to the client. Surely this is also how they complied with valid warrants (wait until the user logs in, then execute the warrant).

Asymmetric encryption on the server means that an intruder can't read the content (except possibly for accounts where the private key passphrase is still in memory), and nobody can search old mail until a user logs back in.


Um, huh? What the heck are you talking about? IMAP and POP can't tell whether the body text of a message is plaintext or not, beyond the poop left by encoding/encrypting engines. Certainly my proposed system would allow plaintext messages disguised as encrypted messages to be sent to the end user, but so what?

I am not talking about how Lavabit worked; I am asking why not design a system where the service provider holds no private keys and thus has no way to comply with requests for them. Yes, they can help the government track users and they can try to install malware on your machine, but fundamentally they don't know whether you're reading your email on a Mac, PC, Raspberry Pi, or microwave oven.

Here's a PGP-encrypted message. Why can't I send it by conventional email? (Or simply post it on Usenet, as in my earlier post.) "They" can try to track every person who inadvertently downloads messages left for me.

-----BEGIN PGP MESSAGE----- <-- poop
Version: BCPG C# v1.6.1.0

hQEMAz/dtuqQ9lvGAQf/Rqb+/hNYGhdTli66144SlhBIDineb9uY0tc7p5kDOEm1
DmwqoQNoyX8LshRe1YlpCIiS7nW6Mmzhs86U65yA2/W4Rfs0gsfBx8R//01bBr54
qgRAMsoW426hIVc16XjlIVy+o7/FrynHkY3Vf0E7Ft7qbHL2OcKjIMxDtl0mK2dj
W2c5/rvTiZeq6j1iKTn22DaD94PFjHVcE7H4IRGRKRnp5TxgZq0OAzGD00aSqWMM
4xZdiqFNr7J9o9Akoz8qYotSBjLXFoep+pDyD8EU9I6oA4Eqea3Ka2YXQ9m6/QwS
9VS6cPYccfqjms4X0V/E+fWRnkpyXomVETSamar2IMktO4BiRY6/qCjhpUywcag8
bJ+rOFrwVsSS+xy3XpXvRtlYRPGk8dA/BYH4b3Wz
=D8kd
-----END PGP MESSAGE-----


the "persistent component that doesn't automatically update itself" that tptacek was talking about is the software used to generate keys, do encryption, etc.

the safe alternative is that each user has to go find and install reliable third-party software themselves. this is already possible with gpg et al, and it is not used.

so instead someone needs to package the crypto code. and as soon as you do that, if there's any kind of update process, whoever packages it can (apparently) be forced to modify the code to leak information.

so sure, you can do this securely. it's already possible, but it's not popular. and anything easy enough to be popular appears unreliable.



