
Enjoy your free service, but I would avoid paying for Digital Ocean for any serious project.

No custom kernel support

1. Digital Ocean does not allow you to run your own kernel natively.

2. Digital Ocean droplet kernels are infrequently updated.

3. These kernels often contain relevant security vulnerabilities.

4. This has been a known issue since 2013.

5. You can kexec a kernel, but this is an annoying workaround.

https://www.digitalocean.com/community/questions/how-can-i-b...

Private networking is a joke

1. Your private IP addresses are accessible by everyone in the same datacenter.

https://www.digitalocean.com/community/tutorials/how-to-isol...

IPv6 support

1. Took forever to implement, and the timetable broke promises to customers.

2. Inferior. Digital Ocean still won't give you a /64 per standards.



There seems to be a bit of confusion about how we handle kernels at DO. Hopefully I can clear things up a bit. On newer distro versions (e.g. Ubuntu >= 15.04, Debian 8, Fedora, CoreOS, FreeBSD) we no longer use "external" kernels. You are free to compile and use custom kernels. We're happy with what we've seen, and with Ubuntu 16.04 around the corner our default distribution will have support for this as well. After that point, we'll be backporting the change to older releases.

For Droplets still running with external kernels, we import new ones on a regular basis as they are released. If you happen to need one that hasn't been imported yet, just open a support ticket and the team will do so.


>>You are free to compile and use custom kernels.

What exactly does this mean? Could you please post a link to the documentation page that explains how?


It's like on any Linux or BSD machine. You can build and install the kernel normally, and it just works.

Only some of the older images still use the old method of selecting the kernel from the Control Panel. The rest that Andrew mentioned use the bootloader and kernel from the droplet's image itself.


Are you planning to offer storage options? I find it ridiculous to have to upgrade to a higher CPU/RAM plan (or add a new node) just because we're running out of disk space. We don't need detachable volumes or anything fancy, just the ability to increase the amount of disk storage.


Storage is also an area we're working on heavily right now, and we should have some good news soon. We definitely recognize that there's a lot of demand for more storage without the need for the corresponding upgrade in compute power. Check out the update from our product team on this UserVoice request, and vote/subscribe to get updates:

http://digitalocean.uservoice.com/forums/136585-digitalocean...


Thanks!


Product manager for Storage at DO here. I generally avoid commenting on these kinds of things, but given the public statements already out there, it's safe to say we are launching storage reasonably soon. If you are truly interested in participating in the beta program, email me at tfrietas@digitalocean.com


"IPv6 support 1. Took forever to implement, and the timetable broke promises to customers. 2. Inferior. Digital Ocean still won't give you a /64 per standards."

Far more egregious is that they silently drop port 25 on IPv6. This means that enabling IPv6 will cause mail problems for some destinations (destinations that support IPv6, like Google). When asked they say it's because a /64 is too much address space in the hands of potential (ab)users. This fails to understand that an IPv6 /64 is conceptually similar to an IPv4 /32. (In fact, there are pretty reasonable arguments for assigning IPv6 /56s or /48s with the same semantics as how IPv4 /32s are assigned.)
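One quick way to check for a silent port-25 drop is to compare outbound reachability over IPv4 and IPv6 to an MX that supports both. A minimal sketch (the helper name and the idea of probing Google's MX are mine, not from the thread):

```python
import socket

def tcp_reachable(host, port, timeout=5.0):
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this succeeds when the name resolves to IPv4 but times out over
# IPv6, your provider is likely filtering port 25 on IPv6:
# tcp_reachable("gmail-smtp-in.l.google.com", 25)
```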


I can't talk about port 25, but we have 15 mailservers hosted at $5 a pop with DigitalOcean, and with properly set up rDNS and SPF including our IPv6 address, we don't see any problems with deliverability to gmail via subscription port.

That is on circa 600,000 emails per month sent to gmail.


You send server-to-server messages on port 587? That is... non-standard. But an interesting approach.

You are using your users' gmail credentials? Or are you simply delivering over 587 the same as you would over 25?


It's very "non-standard" -- that's not what the submission port (587/TCP) -- not "subscription port" -- is for.


It sounds like you're talking about sending mail, not receiving it. Port 25 is only an issue if you're trying to receive email.


If you want to send mail as a peer (i.e. without using a smart host / relay) you need to be able to connect outbound on port 25, which some providers block.


Let's add to this the fact that if you're under a DoS attack with DigitalOcean, they disconnect your machine from the internet (making it impossible for you to log in and do log analysis, etc.), send you an email saying "figure out what's happening and stop it", and then reconnect your machine several hours later only to repeat the process practically as soon as it's back online (assuming the DoS continues). I wouldn't trust a side project on DO, let alone my business.


I was about to post this. My project received a DDoS, and I could not access my droplet for over 12 hours.


Just to clarify, DigitalOcean does not offer any form of DoS mitigation services, so they blackhole during a DoS. It's for 3 hours, a lot less than other providers.

If you've got a DoS issue, you definitely need a 3rd-party DoS protection service. Cloudflare's free tier works pretty well.


It's not "a lot less than other providers". You know what happens when someone starts DDOSing one of my AWS servers? I get a CloudWatch alert saying "high inbound traffic" and that's it. They don't black hole the thing and cut off all traffic. Then, I can log in, see what's happening, and take my time diagnosing the problem. Even under a fairly heavy DDOS I never feared losing access. With DO, the black hole happens before the email alert. It's a terrible policy and I can't use or recommend DO until it's changed. I can't stick CloudFlare in front of every server I own.


> 1. Your private IP addresses are accessible by everyone in the same datacenter.

This is pretty bad compared to an AWS VPC -- you basically have to manage and sync your own iptables between all your nodes.


I would suggest that instead of trying to maintain shared state (which IP addresses are blessed) across all of your nodes, you look into using ipsec. Those internal interfaces aren't for security, they're to segment cheap/fast network traffic internal to the dc from expensive/slow/metered traffic that hits the Internet.

https://en.wikipedia.org/wiki/Opportunistic_encryption
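If you do filter on the private interface yourself, the shared state you have to keep in sync across nodes is just an allowlist of peer IPs. A minimal sketch of generating the corresponding rules (the interface name and the helper are hypothetical, not a DO-specific API):

```python
def private_iface_rules(allowed_peers, iface="eth1"):
    """Render iptables rules that accept traffic on the private
    interface only from known peers and drop everything else."""
    rules = [f"-A INPUT -i {iface} -s {peer} -j ACCEPT"
             for peer in allowed_peers]
    rules.append(f"-A INPUT -i {iface} -j DROP")
    return rules

# Feed each line to `iptables` on every node, e.g.:
# for rule in private_iface_rules(["10.128.0.5", "10.128.0.6"]):
#     print(f"iptables {rule}")
```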


Perhaps, but at the same time, you shouldn't be using IP addresses as a security mechanism. Assume the connection between your hosts is compromised, and code accordingly, with encrypted/authenticated connections between hosts.


Not that I want to wade into the "don't use D.O." part of this argument, but, in practice, nobody does this. Virtually every deployment environment I've ever seen with more than 4 hosts in it would be fatally compromised by an attacker who could reach any IP address in that environment.


True. I haven't heard folks other than Google explicitly talking about this as a best practice.


A VPC is analogous to a physical network, not a subnet. Nobody uses them that way because it's not easy to grok, but you can treat a VPC as a physical network complete with your own numbering and ACL policies.

If you're doing that defense in depth on a physical network, I'm impressed by your dedication but would avoid your work for wasting resources.


It's analogous to a VLAN, and it's not that much work to maintain ACLs if the VLANs aren't supposed to talk to each other, which they're not; that's the whole point.


We have a lot of interesting stuff coming up later this year around networking. Some of it will be behind the scenes, but it is going to open up a lot of new possibilities for user-facing features. We're looking to give users a lot more control over their network while maintaining our focus on UX simplicity.

Might be a good time to mention we're hiring network engineers as well:

https://www.digitalocean.com/company/careers#software-engine...


That link is for "a software engineer on the Network team", not network engineer. Did you post the wrong link or just use bad wording? I'm a network engineer but I'm not a "developer" by any means -- and there's a helluva difference!


Hey! Guess I mistyped a bit, and it's too late to fix. We have two separate teams, a Networking team and a SWE-Networking team. It seems we don't have a proper "network engineer" posting up right now, but if you know someone interested they should still get in touch (http://do.co/1mf6HgB).


> No custom kernel support

How many people run custom kernels?

I have used a kernel with realtime extensions enabled, but that was a very special case, and I wouldn't run that in a VM anyway.


Very few people run actual custom kernels, but most people want to run the distro-supplied kernel for their distro of choice (including security updates to it) and most decent providers configure their VMs to allow custom kernels so that their users can do this.


This is especially big for RHEL, OEL, and friends. Yeah, CentOS is cool for your startup but a big portion of the valley wants support contracts so they can stop doing OS grunt work, and if your provider doesn't roll RHEL you get to deploy your own, and AFAIK that is not possible on DO (and requires quite a bit of work on Linode, its closest competitor in the space; DO is not AMZN). Deploying RHEL in a supported way requires using their kernels.

It's your virtual machine. You should be able to pick a kernel. This isn't for running Andrew Morton patches, as some of the comments imply.


Yeah, I often get annoyed when DigitalOcean doesn't keep up with Ubuntu kernels. One of my DigitalOcean servers is running Ubuntu 15.04 and kernel 3.19.0-21. The newest kernel from Ubuntu is -49, which DigitalOcean does not have. I also have -26 in my /lib/modules, but they don't have that either. So now I have to explicitly install -30, the latest they support, or remember to update later.

Is there a good reason they can’t automatically add all new kernels from the major distributions?
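The version drift described above is easy to detect mechanically, since Ubuntu kernel releases differ in the ABI number after the first dash (-21 vs. -49). A small sketch, with helper names of my own invention:

```python
def abi_number(release):
    """Extract the Ubuntu ABI number from a release string
    like '3.19.0-21' or '3.19.0-49-generic'."""
    return int(release.split("-")[1])

def kernel_behind(running, newest):
    """True if the running kernel's ABI is older than the newest one."""
    return abi_number(running) < abi_number(newest)

# Compare the output of `uname -r` against the newest installed
# linux-image package to flag a droplet stuck on an old kernel.
```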


> people want to run the distro-supplied kernel

I read "custom kernel" as in "replace distro-supplied kernel and compile your own with some custom flags and patches".


Well, you can do that too, but the point is that you have control over what kernel your VMs are running and not the hosting provider. From a technical perspective it makes little difference where that kernel comes from; either you control what kernel you're running or you don't.


I was at least thinking that once you pick a distro, they'd be first in line updating the security patches for it (or at least as fast as say your own ops team would). But I guess that is not the case.


> How many people run custom kernels?

How many people want to run an updated kernel without known security vulnerabilities? Or with fixes to relevant issues?


> How many people want to run an updated kernel without known security vulnerabilities?

How many people call that "a custom kernel"? Haven't heard anybody call an upstream distro kernel update a "custom kernel".

Is this a typical conversation people have?

"Hey Jim, did you update the servers to get the latest OpenSSL security fixes? - Yap, I compiled and installed a custom kernel".

Maybe they need to run a better distro with faster security update response?


It's specific to VM images. "Custom" here means "not baked into Xen," since your filesystem is not considered when spawning a domU kernel except in limited circumstances. In the Xen world, your kernel is provided by your hosting provider. You can apt-get all day and nothing will happen.

That is what custom means in this context. "Not yours." Read accordingly; you've made the same flawed point at several spots in this thread.


Ah, I understand now, thanks for explaining.

I thought they used KVM for some reason... But I guess if they use Xen then, yeah, they're stuck with whatever kernel they get.


I've got both Xen & KVM systems under my care.

It's no longer true that Xen needs to mean managing the kernel outside the VM. PVGRUB can be specified as the 'kernel' to boot, which will chainload a grub which can be managed inside the VM, which lets you run any kernel you wish and manage the boot process as you would on a non-virtualized system.

Amazon uses Xen for their EC2 product, and as I understand they too now set people up with pvgrub.


It's a slightly similar story under KVM in these scenarios. Customer kernels are trickier.


It depends on what level of virtualization they opted for; I've run Windows on Linux under KVM. They are probably doing the "boot a kernel" mode instead of fully virtualized hardware mode to save on resources.


Just to confirm, we use KVM across our fleet at DO.


DO uses KVM.


> 3. These kernels often contain relevant security vulnerabilities.

This is really the only part of the list that I think has merit. I love Digital Ocean in general, but my own gripe is with their API and its lack of proper automatic key management:

https://digitalocean.uservoice.com/forums/136585-digitalocea...


Custom kernel support is actually available for certain distributions on DigitalOcean, though it's not widely marketed.

Right now, Debian 8, Ubuntu 15.04, Ubuntu 15.10, Fedora 23 and FreeBSD all allow you to control your kernel version. The next LTS release of Ubuntu in April, 16.04, will also allow you to customize your kernel.


User-managed kernels are currently supported on new distro releases, and will be supported for all distributions that are released moving forward.


> 2. Inferior. Digital Ocean still won't give you a /64 per standards.

Huh? I made a droplet just a week ago and it was assigned a whole /64. Maybe this is only a recent change, but you should know that it's here now.


Yep, but you are allowed to use only 16 of them. Check the configurable address range.
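For perspective on how small a slice that is, the /64 arithmetic can be checked with Python's ipaddress module (the 2001:db8:: documentation prefix stands in for a real droplet prefix):

```python
import ipaddress

# A full /64, as the standards intend for a single end site.
net = ipaddress.IPv6Network("2001:db8::/64")
print(net.num_addresses)      # 18446744073709551616, i.e. 2**64

# 16 usable addresses is effectively a /124 carved out of it.
sliver = ipaddress.IPv6Network("2001:db8::/124")
print(sliver.num_addresses)   # 16
```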


> Private networking is a joke

> 1. Your private IP addresses are accessible by everyone in the same datacenter.

This is also the case at Linode. Besides AWS VPC, are there VPS providers that give your hosts their own private VLAN?


Pretty sure Google Compute Engine puts each project in a private network.


yup, GCE gives you a "VPC" by default


Ok, no serious project would run their own kernel. What are you hosting that you want to run your own kernel?


If you want to maintain your server well and keep the kernel updated against security vulnerabilities, that's a pretty big deal. It would suck to be the victim of a root escalation and have no way of preventing it.


I bet this deal drives improvements on all of those points (or at least the ones that matter).


If you want a DO competitor check out vultr.com -- I've used them for years and been quite impressed. They seem at least as good if not better and they have some very interesting features like turn-key BGP and AnyCast hosting.

I have no affiliation with them.


For the most part, I think Vultr has a better offering than Digital Ocean. However, I had an infuriating experience with Vultr recently (not that I have reason to believe DO would have done any better). Vultr has poorly configured DDoS mitigation equipment. If that equipment believes your IP is under DDoS, it will automatically blackhole your IP from their upstreams.

With near-zero traffic on my interface, and for an internally facing VM whose IP address has never been published in any DNS, their DDoS 'protection' system decided I was under DDoS attack and blackholed my IP with their upstreams. I investigated, and my conclusion is that their system most likely misidentified the overlay networking software I was using (which communicates over UDP) as DDoS traffic.

I raised this issue with support, who did not manage to help. Amongst other things, they told me that they didn't retain logs long enough to be helpful. I agreed that I would raise the issue with them again should it re-occur.

The second time they blackholed my IP, they didn't even bother telling me. And when I filed a support ticket, they took somewhere around an hour to even respond - during the time they've taken my system offline, I expect them to make themselves available.

When I tried calling their parent company on the phone, they were downright rude with me. I understand they don't usually do phone support, and that's fine, but if you've told me that (A) you can only help me resolve a problem during an incident because you don't keep logs, and (B) don't make yourself available to me through your regular support channels, don't get pissy with me for phoning you. And especially don't completely refuse to do anything remotely helpful.

I ended up terminating the VM and moving that workload elsewhere. I don't have time to be fixing other people's networks, especially when they can't be bothered to participate in the process.

It's a shame, because their BGP & AnyCast stuff looked really interesting, and I'd love to explore their offerings more. But I don't currently have confidence I can deploy anything with them in production, because when something goes wrong with their service, they don't appear competent at making themselves available to fix things.

(For the record, in some respects I consider one of my companies to be a competitor to both Digital Ocean & Vultr. But I like to be familiar with the competition, so I use them for some things. It also provides us a way to put workloads in locations we wouldn't otherwise be able to justify.)


Most people don't consider a custom kernel a requirement for serious projects. I don't see why this is a joke.


It's less about a customised kernel and more about an up-to-date one. You may want to patch on your own schedule (faster or slower than DO). Any serious project will take updates (especially security updates) seriously.


If you want to run your own LXC or Docker you might need a custom kernel to enable aufs.
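Whether the running kernel already supports a given filesystem can be checked in /proc/filesystems before going to the trouble of building anything. A quick Linux-only sketch (the helper is mine):

```python
def fs_supported(name, proc_path="/proc/filesystems"):
    """Return True if the running kernel lists the filesystem
    (built in, or via a loaded module) in /proc/filesystems."""
    with open(proc_path) as f:
        return any(line.split()[-1] == name
                   for line in f if line.strip())

# fs_supported("aufs") is False on most stock kernels; it becomes
# True only after booting a kernel built with aufs support.
```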

