You are assuming the data is compacted such that the 90-day data is grouped by expiration date. That's not how production deletion works at any scale where media cost actually matters.
In reality, the data is all mixed up: data gets backed up around the time it's written, and gets deleted in a completely different pattern. The art of compaction matters a lot here, and low-IO media like optical and tape make sloppy compaction extremely expensive, because every rewrite eats into bandwidth you barely have.
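To make the fragmentation concrete, here's a toy sketch (my own illustration, not anyone's production system): records land in append-only segments in write order, but carry mixed retention periods, so on any given day most segments are neither fully live nor fully dead, and reclaiming their media means rewriting the survivors.

```python
import random

random.seed(0)

SEGMENT_SIZE = 100          # records per append-only segment (tape block, etc.)
RETENTIONS = [30, 60, 90]   # mixed retention policies, in days

# Records are laid out in write order; expiration is unrelated to layout.
records = [(day, day + random.choice(RETENTIONS))
           for day in range(365) for _ in range(10)]

segments = [records[i:i + SEGMENT_SIZE]
            for i in range(0, len(records), SEGMENT_SIZE)]

today = 200
live_fracs = [sum(1 for (_, exp) in seg if exp > today) / len(seg)
              for seg in segments]

# Fully dead segments can be reclaimed for free; partially live ones
# must have their survivors rewritten (compacted) before the media
# can be reused -- that rewrite is the expensive part on tape/optical.
fully_dead = sum(1 for f in live_fracs if f == 0)
partial = sum(1 for f in live_fracs if 0 < f < 1)
print(f"dead segments: {fully_dead}, partially live (need rewrite): {partial}")
```

The point of the toy: even with tidy daily write batches, the expiration pattern leaves a band of half-dead segments, and on low-IO media each one costs a full read-plus-rewrite to reclaim.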
And even if we pretend you do have perfect compaction, you still don't have anywhere near the IO that HDD would provide. Perfect compaction also means the smallest possible live data set, so the HDD premium shrinks even further, in exchange for wildly better throughput in a disaster.