
The linear IOP density model seems clever and logical, but it's a huge source of headaches because it bakes in a patently false assumption: that IOPS scale in proportion to object size. Performance quotas should be assigned at the object level (block device, file system, bucket) regardless of size.
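To make the contrast concrete, here's a minimal sketch of the two policies (the function names and numbers are illustrative only, not any provider's actual formula):

    # Illustrative comparison of two quota policies (numbers are made up).
    def linear_density_quota(size_gib: float, iops_per_gib: float = 50.0) -> float:
        # Linear model: performance scales with stored bytes. A small-but-hot
        # dataset gets almost nothing unless you pad it with blank objects.
        return size_gib * iops_per_gib

    def fixed_object_quota(provisioned_iops: float) -> float:
        # Fixed model: the quota is attached to the object (volume, file
        # system, bucket) and is independent of how much is stored.
        return provisioned_iops

    # A 10 GiB working set that needs 25,000 IOPS:
    print(linear_density_quota(10))    # 500.0 -- far short of what's needed
    print(fixed_object_quota(25_000))  # 25000 -- regardless of size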



Isn't it because the underlying storage is a 1 TiB disk, so you're getting a fixed allocation that's shared among many other 50 MB/s clients? (i.e. assuming flash and overall transfer rates of ~2 GB/s, that lets them co-locate roughly 40 customers on one machine without oversubscribing)

Isn't that kind of pricing model about the only one that would be feasible to implement to make this cost effective? How are you thinking it should work? Fixed cost per block transferred?
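As a back-of-the-envelope check on the packing math above (the ~2 GB/s device throughput and 50 MB/s per-client quota are the assumptions from this comment, not published figures):

    # Oversubscription check using the figures assumed above
    # (not vendor-published numbers).
    device_throughput_mb_s = 2_000   # ~2 GB/s flash device (assumed)
    per_client_quota_mb_s = 50       # fixed per-client allocation (assumed)

    clients_per_device = device_throughput_mb_s // per_client_quota_mb_s
    print(clients_per_device)        # 40 clients fit without oversubscribing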


The pedantic answer would be to allow customers to specify both minimum storage space and minimum IOPS. Then charge them for whichever is the larger portion of a drive and provide the space and IOPS that they are then paying for.

This way, if you need 10 MB but also 25,000 IOPS, then you'll pay for ~10% of a drive. You'll get your minimum required 25,000 IOPS... and also 200 GB or whatever share of the drive is required to get you those IOPS.
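A sketch of that "pay for the larger fraction" pricing, with hypothetical drive specs (2 TB and 250,000 IOPS, chosen so the numbers in the example work out):

    # "Charge for whichever fraction of a drive is larger."
    # Drive specs are hypothetical, picked so the example above
    # (10 MB + 25,000 IOPS -> ~10% of a drive, 200 GB) works out.
    DRIVE_CAPACITY_GB = 2_000
    DRIVE_IOPS = 250_000

    def drive_fraction(space_gb: float, iops: float) -> float:
        return max(space_gb / DRIVE_CAPACITY_GB, iops / DRIVE_IOPS)

    frac = drive_fraction(space_gb=0.01, iops=25_000)
    print(f"{frac:.0%} of a drive")              # 10% of a drive
    print(f"{frac * DRIVE_CAPACITY_GB:.0f} GB")  # 200 GB comes with it
    print(f"{frac * DRIVE_IOPS:.0f} IOPS")       # 25000 IOPS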

At the end of the day, I'm not entirely sure it matters whether the cloud provider breaks it out like this as long as it at least has a little gray UI element under the specified storage space slider that reads out the IOPS to you.

It would be exactly the same as the current situation where customers do this manually. So probably not particularly necessary - why make customers fill in additional fields they might not need to?

I do think cloud providers should make it clear during requisition and read back how many IOPS you're getting, just for clarity.

Having temporarily high quotas and then throttling back does seem to break the developer experience. No good deed goes unpunished - though in this case there's a solid underlying reason why it has detrimental effects.


For Amazon EBS you have some kind of IOPS slider. This is possible even on normal EBS (gp3) volumes, not just on those special high performance EBS volumes (io2).

> General Purpose SSD (gp3) - IOPS 3,000 IOPS free and $0.006/provisioned IOPS-month over 3,000

> General Purpose SSD (gp3) - Throughput 125 MB/s free and $0.0476/provisioned MB/s-month over 125

https://aws.amazon.com/ebs/pricing/
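For a sense of scale, a quick sketch of the gp3 overage math using the rates quoted above (the 10,000 IOPS / 500 MB/s provisioning is just an example, and this ignores the per-GB storage charge):

    # Monthly gp3 performance overage, using the rates quoted above.
    FREE_IOPS, IOPS_RATE = 3_000, 0.006    # $/provisioned IOPS-month over 3,000
    FREE_MBPS, MBPS_RATE = 125, 0.0476     # $/provisioned MB/s-month over 125

    def gp3_performance_cost(iops: int, mbps: int) -> float:
        iops_cost = max(iops - FREE_IOPS, 0) * IOPS_RATE
        mbps_cost = max(mbps - FREE_MBPS, 0) * MBPS_RATE
        return iops_cost + mbps_cost

    print(f"${gp3_performance_cost(10_000, 500):.2f}/month")  # $59.85/month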


It would make more sense to me to charge for the size of the provisioned share and performance profile, like EBS, instead of this weird performance tiering. It basically makes EFS useless for the bulk of use-cases I can personally imagine.


They do that as well - see "provisioned throughput"

See the section "Specifying Throughput with Provisioned Mode" here: https://docs.aws.amazon.com/efs/latest/ug/performance.html


Fixed cost per IOPS allocated. Essentially the same thing as before, but without you having to store large blank objects.


How would scheduling of concurrent I/O workloads work? If I'm paying some price per 1k IOPS and my service gets a spike, won't I greedily take out the other services running on the same machine rather than getting throttled? Doesn't this also penalize workloads that do lots of small I/Os rather than a few big ones, even if the amount transferred is the same?


1) By ensuring capacity for max allocated concurrent IOPS (see the sketch after point 3)

2) Wrong relationship; you're paying for IOPS to the block store, so you'd be trampling on other accesses to the same block store.

3) This penalizes them less - in that workloads that do lots of small IO on small files will actually be able to request the IOPS they need, instead of IOPS being (wrongly) dynamically allocated.
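A minimal admission-control sketch of what point 1 amounts to: a device only accepts a new provisioned allocation if it can still honor every tenant's maximum concurrent IOPS (the class and numbers are illustrative, not any provider's implementation):

    # Only accept a new allocation if the device can still honor every
    # tenant's maximum concurrent IOPS. Illustrative only.
    class BlockDevice:
        def __init__(self, iops_capacity: int):
            self.iops_capacity = iops_capacity
            self.allocations: dict[str, int] = {}

        def allocate(self, tenant: str, iops: int) -> bool:
            committed = sum(self.allocations.values())
            if committed + iops > self.iops_capacity:
                return False      # place this tenant on another device
            self.allocations[tenant] = iops
            return True

    dev = BlockDevice(iops_capacity=100_000)
    print(dev.allocate("a", 60_000))  # True
    print(dev.allocate("b", 50_000))  # False: would oversubscribe, goes elsewhere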


Can't this be addressed by provisioned throughput on EFS?



