> I still don’t understand this industry’s obsession with predefined fixed limits on unrelated resources.
1. It keeps their billing simpler. Otherwise they would have to charge different rates for different resources (or make it up elsewhere), which makes pricing relatively confusing + increases support costs.
2. Much easier to forecast resources. If you know that you can fit X instances of type Y on a box, or W instances of type Z, it's easier to understand when/where you will need more hardware.
It's not perfect, I agree, but if an ad-hoc VPS product was profitable I'm sure we'd have seen it by now.
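The forecasting argument in point 2 can be sketched in a few lines. A minimal illustration with made-up instance sizes and a hypothetical host (the numbers are illustrative, not any provider's actual specs):

```python
# Hypothetical fixed instance types: resources bundled per instance.
INSTANCE_TYPES = {
    "Y": {"ram_gb": 4, "disk_gb": 100},
    "Z": {"ram_gb": 8, "disk_gb": 200},
}

# A hypothetical host machine.
HOST = {"ram_gb": 128, "disk_gb": 3200}

def instances_per_host(type_name):
    """How many instances of one fixed type fit on a host.

    With fixed bundles the answer is a single number per type,
    which is what makes hardware forecasting easy: capacity
    planning reduces to counting slots.
    """
    t = INSTANCE_TYPES[type_name]
    return min(HOST["ram_gb"] // t["ram_gb"],
               HOST["disk_gb"] // t["disk_gb"])

print(instances_per_host("Y"))  # 32
print(instances_per_host("Z"))  # 16
```

With ad-hoc resource mixes, this becomes a bin-packing problem instead of a division, which is part of why providers avoid it.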
It's more of a capability thing. If you're running, say, Piston Cloud, you're using Ceph over Ethernet to back your disks, so you can easily decouple disk usage and RAM usage. If you're stuck using local disks (i.e. Rackspace/Joyent/Linode/Amazon to a point/etc.), then it's a lot harder to provide that sort of product.
That being said there are providers out there that sell it, and have been for years.
That’s the thing - they DO exist, it’s just that the “big” players don’t offer them.
I almost get why big companies (Rackspace, MT, etc.) don’t offer it - if you can make a schmuck pay $X for hundreds and hundreds of GB of disk he will never use, just because he needs 4GB of RAM or a lot of transfer, in theory you can over-provision the hardware.
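The over-provisioning arithmetic above can be sketched with hypothetical numbers (plan sizes, host specs, and the 10% utilization figure are all assumptions for illustration; real ratios vary by provider):

```python
# Hypothetical plan: 4 GB RAM bundled with 200 GB disk.
# Hypothetical host: 128 GB RAM, 4 TB disk.
PLAN = {"ram_gb": 4, "disk_gb": 200}
HOST = {"ram_gb": 128, "disk_gb": 4000}

# Hosts fill up on RAM first, so RAM sets the plan count.
plans_per_host = HOST["ram_gb"] // PLAN["ram_gb"]   # 32 plans
disk_promised = plans_per_host * PLAN["disk_gb"]    # 6400 GB promised

# Assume customers actually touch only ~10% of their bundled disk.
disk_expected = disk_promised * 0.10                # 640 GB in practice

# The host "sells" 6400 GB against 4000 GB of physical disk,
# and expects only 640 GB to ever be written.
print(plans_per_host, disk_promised, disk_expected)
```

The gap between promised and expected disk is the margin the bundling model banks on.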
What I really don’t get, though, is the technically savvy people who think they’re somehow getting a reasonable service.