I don't get the "it's hard to measure throughput" line. I'm using RDS at work. At some point we had 20 TB of data, with daily 500 GB (batch) writes into indexed tables. Same order of magnitude of cost, sure. But the combination of the RDS instance monitor, Performance Insights, and the pgAdmin dashboard means you have: visual query plans with optional profiling (pgAdmin), live tracking of SQL invocations with invocations per second, average number of rows per invocation, and sampling-based bottleneck analysis (disk reads, locks, CPU, throttling, network reads, sending data to the client, etc.), plus per-disk read/write throughput (MB/s), IOPS in use, network throughput, and so on. Most of the time, what I felt was lacking was the ability to understand why PG was using so much CPU/disk throughput (e.g. inserts into indexed tables), but the disk throughput the instance was under was always very visible.
The article also doesn't mention anything about using provisioned-IOPS instances, nor which architectures have the highest PIOPS ceiling.
IOPS times blocksize is bandwidth in my experience (on modern storage).
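To make that relationship concrete, here's a minimal sketch (the 16,000 IOPS figure and the 8 KiB block size are illustrative assumptions, not from the comments above; 8 KiB happens to be Postgres's default page size):

```python
# Bandwidth implied by an IOPS figure at a given block size:
# bandwidth = IOPS * block size, per the rule of thumb above.
def bandwidth_mib_per_s(iops: int, block_size_bytes: int) -> float:
    """Convert an IOPS number into MiB/s at a fixed block size."""
    return iops * block_size_bytes / (1024 ** 2)

# Hypothetical example: 16,000 IOPS at an 8 KiB block size.
print(bandwidth_mib_per_s(16_000, 8 * 1024))  # 125.0 (MiB/s)
```

So the same IOPS budget buys very different bandwidth depending on I/O size, which is why large sequential writes and small random ones stress a volume so differently.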
I've built block devices using the highest IOPS available (fulfilling all the necessary requirements) as well as extremely large block devices (64 TB) using EBS. When maxed out and tuned to the gills, it's fast and big.