
I knew it was bad, but I didn’t realize just how bad the pricing spread can be until I started dealing with the GPU instances (8x A100 or H100 pods). Last I checked, the on-demand pricing was $40/hr and the 1-year reserved instances were $25/hr. That’s over $200k/yr for the reserved instances, so within two years I’d spend enough to buy my own 8x H100 pod (based on LambdaLabs pricing), plus enough to pay an engineer who babysits five pods at a time. It’s insane.

With on-demand pricing the pod would pay for itself (and the cost to manage it) within a year.
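
A rough back-of-envelope of that comparison (a sketch in Python; the hourly rates and the five-pods-per-engineer ratio come from the comment above, while the pod price and engineer cost are illustrative assumptions, not quoted figures):

    # Hourly rates are from the comment above; pod price and engineer
    # cost are illustrative assumptions, not quoted figures.
    HOURS_PER_YEAR = 24 * 365  # 8760

    on_demand_hr = 40.0  # $/hr, 8x H100 on-demand
    reserved_hr = 25.0   # $/hr, 1-year reserved

    pod_price = 300_000.0      # assumed purchase price of an 8x H100 pod
    engineer_cost = 250_000.0  # assumed fully loaded annual cost of one engineer
    pods_per_engineer = 5      # one engineer babysits five pods (per the comment)

    own_yr = engineer_cost / pods_per_engineer  # ~$50k/yr ongoing per pod

    def breakeven_months(cloud_hr: float) -> float:
        # Months until cumulative cloud spend exceeds pod price plus ops cost.
        cloud_yr = cloud_hr * HOURS_PER_YEAR
        return 12 * pod_price / (cloud_yr - own_yr)

    print(f"on-demand: ~{breakeven_months(on_demand_hr):.0f} months to break even")
    print(f"reserved:  ~{breakeven_months(reserved_hr):.0f} months to break even")

With these assumptions the pod breaks even in about 12 months against on-demand pricing and about 21 months against reserved, which lines up with the one- and two-year figures above.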





That's just hardware. If you need to build and maintain your own devops tooling it can balloon in complexity and cost real quick.

It would still likely be much cheaper to do everything in house, but you would be assuming a lot of risk and locking yourself in, losing flexibility.

There is a reason people go with AWS over many competing cheaper cloud providers.


> There is a reason people go with AWS over many competing cheaper cloud providers.

Opportunity cost.

The evolutionary fitness landscape hasn't yet provided an escape hatch for paying this premium, but in time it will.


It's actually not that bad for GPUs, considering their useful life is much shorter than that of regular compute. DC-grade CPU servers cost 12-24 months of typical public cloud prices, but you can run them for 6-8 years.
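
A quick sanity check on that amortization claim (a sketch; the midpoint figures and the ~3-year GPU life are my assumptions, not numbers from the thread):

    # If a DC-grade CPU server costs 18 months of cloud rent (midpoint
    # of 12-24mo) and runs for 7 years (midpoint of 6-8), owning it
    # costs roughly 18/84 of the equivalent cloud spend.
    server_cost_in_cloud_months = 18
    useful_life_months = 7 * 12

    print(f"CPU server: ~{server_cost_in_cloud_months / useful_life_months:.0%} of cloud cost")

    # A GPU amortized over an assumed ~3-year useful life has a much
    # shorter window to recoup its purchase price, so the cloud premium
    # is proportionally smaller.
    gpu_life_months = 3 * 12
    print(f"GPU at the same price ratio: ~{server_cost_in_cloud_months / gpu_life_months:.0%} of cloud cost")

Under those assumptions, owning a CPU server costs ~21% of the equivalent cloud spend over its life, while a GPU at the same purchase-to-rent ratio costs ~50%, which is why the cloud premium stings less for GPUs.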

> GPUs, considering their useful life is much shorter than that of regular compute

This looks like FUD. People have taken GPUs that had been used for mining and gone on to use them just fine for years. Nothing breaks; the hardware is just fine. Obviously newer generations of GPUs are more efficient, but CPU hardware improves a lot too.


It’s not about them physically breaking but more about them becoming obsolete, although we may be seeing the end of that trend soon.


