I knew it was bad, but I didn’t realize just how bad the pricing spread can be until I started dealing with GPU instances (8x A100 or H100 pods). Last I checked, on-demand pricing was $40/hr and 1-year reserved instances were $25/hr. That’s over $200k/yr for reserved, so within two years I’d spend enough to buy my own 8x H100 pod (at LambdaLabs pricing) plus enough to pay an engineer to babysit five pods at a time. It’s insane.
With on-demand pricing the pod would pay for itself (and the cost to manage it) within a year.
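Rough break-even math, taking the $40/$25 hourly figures above at face value. The pod purchase price here is an assumed placeholder, not an actual LambdaLabs quote:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

on_demand = 40.0 * HOURS_PER_YEAR  # $/yr at $40/hr
reserved = 25.0 * HOURS_PER_YEAR   # $/yr at $25/hr

pod_price = 300_000  # assumed 8x H100 pod price (placeholder, not a quote)

print(f"on-demand: ${on_demand:,.0f}/yr")  # $350,400/yr
print(f"reserved:  ${reserved:,.0f}/yr")   # $219,000/yr
print(f"break-even vs on-demand: {pod_price / on_demand:.1f} yr")  # 0.9 yr
print(f"break-even vs reserved:  {pod_price / reserved:.1f} yr")   # 1.4 yr
```

So at on-demand rates the hardware pays for itself in under a year, and even at reserved rates in well under two.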
It's actually not that bad for GPUs, considering their useful life is much shorter than regular compute. DC-grade CPU servers cost 12-24 months of typical public cloud prices, but you can run them for 6-8 years.
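To make that amortization argument concrete: a quick sketch comparing lifetime rent vs purchase price for both cases, using midpoints of the figures above plus two assumptions (the ~$300k pod price and a 3-year GPU useful life are illustrative, not sourced):

```python
# CPU server: costs ~18 months of cloud rent (midpoint of 12-24),
# lasts ~7 years (midpoint of 6-8).
cpu_cost_months = 18
cpu_life_months = 7 * 12
cpu_ratio = cpu_life_months / cpu_cost_months
print(f"CPU: lifetime cloud rent ≈ {cpu_ratio:.1f}x purchase price")  # 4.7x

# GPU pod: reserved at $25/hr (~730 hr/month), assumed ~$300k price
# and an assumed 3-year useful life (both placeholders).
gpu_cost_months = 300_000 / (25 * 730)  # months of rent to equal the price
gpu_life_months = 3 * 12
gpu_ratio = gpu_life_months / gpu_cost_months
print(f"GPU: lifetime cloud rent ≈ {gpu_ratio:.1f}x purchase price")  # 2.2x
```

Under those assumptions, renting a GPU pod for its (shorter) lifetime costs roughly 2x its purchase price, vs roughly 5x for a long-lived CPU server, which is the commenter's point: the spread looks less extreme once you account for useful life.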
> GPUs considering their useful life is much shorter than regular compute
This looks like FUD. People have taken GPUs that were used for mining and then run them just fine for years. Nothing breaks; the hardware is fine. Obviously newer generations of GPUs are more efficient, but CPU hardware improves a lot too.