It's not worth bothering with filesystem encryption on a VPS. Keep in mind that the hypervisor has full, unrestricted and essentially undetectable access to the contents of the virtual machine's RAM, so it can read decrypted data, private keys, and cleartext passwords.
If you want even a hint of privacy, you'll have to rent some rackspace and put a tamper-resistant box of your own in there - and stop trusting it the first time it goes down unexpectedly.
... or make sure that your cloud-based resources, whatever they are, only see ciphertext, and access them only through heavily secured machines that you hold locally. Which requires the attacker to compromise your laptop, or phone, or whatever. (Which is almost certainly still possible for an attacker with the resources of a major government that decides you're of sufficient interest, but it raises the bar a bit.)
Use full end-to-end encryption. The "known compromised" machine doesn't do the encryption or decryption in the first place; that's done at the physically secure (you hope!) endpoints.
By cloud resource, did you mean storage? In which case, sure. But if you meant to do work "in the cloud" on the ciphertext's content, the machine under the bad actor's control is one of the "ends".
I do not doubt the claim, and believe it as much as you do. However, I have always wondered: how long does it take a skilled technician to do this? Is it safe to assume, whether we're talking about LUKS/dm-crypt for a filesystem or something per-file like GnuPG, that the passphrase, being small, is just stored contiguously in the same block of memory? Is it hard to find, ASLR or not?
You don't need the pass-phrase when you can get the encryption key itself. And there are several techniques to identify encryption keys and vastly reduce the search space for these, making it rather fast to extract.
There's a spectrum of concerns... Same as there's a whole range of options between "I live in a compound guarded by genetic clones of me, conditioned to do my bidding" and "my house has no doors, I trust everybody"
As such, an encrypted file system even on a VPS provides slightly more protection than an unencrypted one. If it's worth the effort depends on your threat model.
So, if you're really the person who needs a tamper-resistant box that's untrusted after unexpected downtime, even renting rackspace is a bad idea. You probably want it under physical control.
If you're however an average user, a VPS with encrypted FS provides decent protection against most casual attackers. For many of us, that's enough.
Encrypting disks can be reasonable, but for a different reason: It reduces the chance that a third party gains access to your data upon recycling of the hard-drives.
I had been a paying Google Apps customer for personal and corporate use since the service was in beta. Until several weeks ago, that is. I was about to set up another Google Apps account for a new project when I stopped to consider what I would be funding with my USD $50 per user per year.
This is exactly how I feel about Google Apps. I used to recommend it to customers, now I hesitate to recommend it, even though the alternatives aren't as easy or fully-featured without a lot more work. Thanks to the author for setting this lot up, I'll definitely be looking at it, and possibly using a few of the pieces.
Ansible is great though; even if you're not setting up your own private cloud it's worth taking a look at for deployment. AnsibleWorks really should set up an extensive library of playbooks like this, each isolated so that they can easily be mixed and matched. Their examples were a bit limited and specific last time I looked.
Does anyone have any tales of using Dovecot (good or bad), as I'm considering installing it?
I ran a Postfix/Dovecot setup for years and they were both fast, reliable, lightweight, and very flexible. The #dovecot and #postfix channels on Freenode are also tremendously helpful if you are in a pickle. (Full disclosure, I don't run my own mail server anymore except for a few virtual aliases; a good webmail client was the weak link in the chain and Fastmail provides a reliable service with a phenomenal web client at a fair price).
I was using Roundcube, but it was not powerful enough for my needs. The greatest strength of the Fastmail UI is its built-in keybindings. There's one for almost every action; indeed, I find it friendlier than the best desktop clients I've used, let alone ones like Gmail or Roundcube.
I've been using it on a server at home to basically archive my email (since the account I use to receive new email gets fussy if it gets too full) for a number of years now; it's worked great for me. I particularly like the fact that as of version 2.0, it supports sieve filtering out of the box, including a ManageSieve server, so I can easily set up complex filtering rules.
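To illustrate, a sieve rule of the kind Dovecot 2.0's built-in support makes easy (the list address and folder name here are hypothetical):

```sieve
require ["fileinto"];

# File mailing-list traffic into its own folder
if header :contains "List-Id" "example-list.example.org" {
    fileinto "Lists/Example";
}
```

With ManageSieve enabled, rules like this can also be edited remotely from clients that speak the protocol, rather than by shelling into the server.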
Thanks. Filters sound interesting too. Probably spam filtering would be my biggest concern moving from google apps, I don't really care about the webmail.
People seem to think that if you move your domain off GApps you're suddenly going to receive so much spam that your inbox will blow up. This is not so at all.
A decent Postfix configuration + up2date SpamAssassin can do wonders. One of the "secrets" is to train Spamassassin.
What I do for example is to move all the spam I receive in my inbox to a special folder that I later run "sa-learn --spam" on from a cronjob. You don't even need a webmail/interface for this.
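As a sketch, the cronjob can be as small as this one crontab line (the Maildir path and schedule are illustrative, not a recommendation):

```
# Every night at 3am: learn from the hand-sorted spam folder, then empty it
0 3 * * * sa-learn --spam ~/Maildir/.Spam-Training/cur/ && rm -f ~/Maildir/.Spam-Training/cur/*
```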
I agree with you entirely that hosting your own mail doesn't mean drowning in spam. After many years hosting my own mail, I find the most effective anti-spam measures are (in order):
1. RBL: zen.spamhaus.org. This kills 95% of incoming spam.
2. Greylisting. This catches the 4.99% of stuff that gets through zen.spamhaus.org
3. SpamAssassin: Everything else. The last time spam got this far appears to have been over a year ago.
The RBL is pretty trivial to set up in postfix, and there is an effective greylist implementation included by default nowadays. Between these two alone, I get less spam in my own mail than I do in gmail..
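For reference, a hedged sketch of the relevant main.cf fragment, assuming postgrey as the greylisting policy service on its Debian default port (10023); the option names are worth checking against your Postfix version:

```
# /etc/postfix/main.cf
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org,
    check_policy_service inet:127.0.0.1:10023
```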
1 - Try not to use greylisting (it can be quite annoying at times when you are waiting for an important email).
2 - RBLs should be used from within SpamAssassin in such a way that it raises the spam score, not directly in the MTA. There are loads of "good" addresses listed in RBLs due to carelessness and/or incompetence of the operators.
You can set up procmail to auto-run a program when you add/remove messages on IMAP folders. I use it to auto-train my crm114 spam catcher (by dragging emails mostly from "Maybe Spam" to "Absolutely Spam" but it works the other way too: when you drag an email from Maybe Spam to Inbox, it trains that message as Not Spam).
Mail Avenger will cut down a good amount of the spam by itself, and waste the spammers' resources too. And honestly having some remaining spam is useful for detached gauging of pop culture. The subjects aren't actually too different from mainstream marketing pitches.
Fully agreed. I've been running my own mail server for a decade. There's a bunch of accounts on it, and the domains are also well known.
I generally get little spam. I use exactly the same SpamAssassin training: if I ever get a spam, I put it in my inbox/not-detected dir, which gets auto-indexed, and that kind of spam never comes again.
Also, it requires next to zero maintenance. (Of course I know how to set up Dovecot and whatnot, but once it's set up, I have basically nothing to do, ever.)
> Intrusion prevention via fail2ban and rootkit detection via rkhunter.
Semantics, semantics, but rkhunter is intrusion detection, not prevention. I don't know what rkhunter would do to stop an intrusion, and fail2ban stopping a brute-force on your SSH login is hardly the likely intrusion vector for a server running this many services.
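For what it's worth, the fail2ban side amounts to a few lines of jail.local (the jail name is sshd on recent versions; older Debian packages call it ssh, so check your local defaults):

```
# /etc/fail2ban/jail.local -- ban after 5 failed SSH logins for an hour
[sshd]
enabled  = true
maxretry = 5
bantime  = 3600
```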
These tools still require a huge amount of systems administration work before it really counts as a "personal cloud". rkhunter looks for some basic rootkits but will not really protect you from emerging threats, other than to tell you you have a file integrity mismatch on a common file such as /usr/bin/login.
Since this is installing everything, it seems wise to add better host-based intrusion detection/file integrity checking across all services and configurations, via AIDE[1] or Samhain[2], which you could do with this type of automated setup. Both can then use the local MTA to alert you directly to your mail client if something is compromised, plus you gain the security of your configuration files for these services not having been tampered with.
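As a sketch of the AIDE workflow (database paths vary by distro, and Debian wraps the first step in its own aideinit helper):

```
# One-time: build the baseline database and move it into place
aide --init
mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db

# crontab: nightly integrity check, delivered through the local MTA
0 4 * * * root /usr/bin/aide --check | mail -s "AIDE report" root
```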
What about running unattended-upgrades[3] for security patches to things like Apache et al? Given the adversaries expected here, I assume that we aren't worried about false packages, etc. as a risk.
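On Debian/Ubuntu that's roughly the following, though the exact origin syntax differs between releases (older ones use Allowed-Origins rather than Origins-Pattern, so check the shipped template):

```
# /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Origins-Pattern {
    "origin=Debian,codename=${distro_codename},label=Debian-Security";
};
```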
I was surprised to read that this configuration uses DSPAM rather than SpamAssassin. Can anyone here compare these two, or point to a recent comparison?
Docker seems like it could be a nice alternative to Ansible here. I'd love to see something as easy to use as an "app store" for "personal cloud" servers. One click to install servers for email, contacts, calendar, dropbox, backups, etc.
They're complementary, not overlapping. It could be interesting to have some of the services described there in their own containers, but a lot of data would need to be shared between most of them.
I am working on something like what you are suggesting. I currently have only the email container done (Postfix/Dovecot) and am working on a small command-line tool that takes care of bootstrapping the containers by asking a few simple questions. The point is to make the whole setup as frictionless as possible, by providing a sensible set of defaults and asking only for the bare minimum of configuration parameters. In the future I plan to extend the tool with a web interface so you can install and uninstall applications from a central repository or a git repository.
I will publish what I have soon; I just need to build a few more application containers (think RSS reader, ownCloud, git repo).
It seems like a useful thing to have and I am curious if there is a demand for such a tool.
The project is based on Docker. The nice part is that it's really convenient to interact with the Docker daemon via its HTTP API with a minimal amount of work. Although all the underlying tech that Docker is built upon was already widely known, Docker's usability is the thing that sets it apart.
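For example, with a recent enough curl you can talk to the daemon straight over its unix socket (the endpoint paths depend on the remote API version you're running against):

```
# List running containers via the Docker remote API
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```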
Has anyone got any examples of setting up a set of services on a Docker host with Ansible? It would also seem feasible to use Dockerfiles on top of a minimal image that uses Ansible with a local connection to do the setup... is anyone doing this?
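A hedged sketch of what such a play could look like; note that the module name varies across Ansible versions (an early "docker" module vs. the later docker_container), and the image name here is hypothetical:

```yaml
# playbook.yml -- run a containerized mail stack from Ansible
- hosts: docker_host
  tasks:
    - name: run the mail container
      docker_container:
        name: mail
        image: example/postfix-dovecot   # hypothetical image
        ports:
          - "25:25"
          - "993:993"
```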
I like the use of Dovecot, but I got tired of OwnCloud's resource consumption, despite it being a very, very cool project (and Android not speaking CardDAV or CalDAV natively, sigh; the ONE thing that makes me like iOS).
In any event, I have been looking into Cyrus IMAP because a) it is used by Fastmail, who even host their forked version of the code [0], and b) there is an alpha/beta feature right now for a CardDAV and CalDAV server baked in. [1]
Maybe I can try and work on my own sovereign-like setup with this.
Are there any new and modern open source HTML5 webmail clients available?
It's been several years since I last set up self-hosted email. At the time Roundcube was the best out there, but it wasn't in the same league as Gmail.
As a side note, the postscreen(8) man page [1] says that "this service was introduced with Postfix version 2.8". Also, RHEL 6 comes with postfix-2.6.6-2.2.el6_1.i686.
Instead of going through all this hassle with private clouds just to avoid using a service like Dropbox, I have some shell scripts that use rsync and curl to grab almost everything I want to back up from my hard drive and the internet (rss subscriptions, bookmarks), and generate a tar.7z.gpg file. The 7z archive is AES256 encrypted as well.
I feel relatively safe that I could drop this file anywhere I want and it would be useless to anybody without access to my brain or keystrokes, so I just put it in unsecure cloud storage. If I'm wrong about that, then I'm screwed, because all of my finances and identity would be fully compromised. Each backup is pretty small since it's mostly text files and a few binaries and images. I have a minimal amount of media files, so I don't bother to back up larger things that are mostly replaceable.
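A minimal sketch of the tar+gpg step (the rsync collection and the 7z layer are omitted; the paths and the inline passphrase are illustrative, not hardened practice):

```shell
#!/bin/sh
set -e

SRC=$(mktemp -d)                        # stand-in for the collected files
echo "bookmarks" > "$SRC/bookmarks.txt"

# Tar the tree and pipe it straight into symmetric AES256 encryption
BACKUP=/tmp/backup.tar.gpg
tar -C "$SRC" -cf - . | gpg --batch --yes --pinentry-mode loopback \
    --symmetric --cipher-algo AES256 \
    --passphrase "correct horse" -o "$BACKUP"

# Round-trip check: decrypt and list the archive contents
gpg --batch --yes --pinentry-mode loopback --decrypt \
    --passphrase "correct horse" "$BACKUP" | tar -tf -
```

The resulting file is opaque to the storage provider; the obvious caveat is that an inline passphrase ends up in shell history and process listings, so in practice you'd read it from a protected file or an agent.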
I have been wanting to do this, hadn't gotten around to detailing all the pieces I'd need to put together, and also I've been wanting to check out ansible. Thanks for this writeup!
The problem I seem to run into when hosting my own email these days is that my IP (currently with OVH) keeps getting marked by SBLs because of some other bad citizen in the same block. I've been looking at SendGrid and AuthSMTP as potential ways to avoid this, but would love to hear what others are doing... those of you who host your own mail: how do you prevent getting blacklisted?
Thank you. Not only a good real world example of server provisioning with Ansible, something I've been meaning to try, but a pretty well chosen set of services to install and configure. I'm already using Postfix and Dovecot over SSL but will surely learn from the rest of the setup.
I don't see a reason to buy overseas hosting. So long as your body is still in the US, the NSA could get your data if they really wanted to.
But the real goal here is not to stop a targeted government attack. It's to avoid cost-effective, bulk surveillance. For that, a server in your basement is quite good.