I was disappointed too; this article was very light on details about the subject matter. I wasn't expecting a blueprint, but what was presented was all very hand-wavy.
In large systems (albeit smaller than S3) the way this works is that you slurp performance metrics out of the storage system to identify your hot spots, then feed that into a service that actively moves stuff around (below the filesystem's namespace, though the details are fs-dependent). You have some higher-performance disk pools at your disposal, and obviously today that would be NVMe storage.
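The "identify hot spots" half is roughly this shape. Rough sketch only, in Python; every name here is made up by me, not from the article or any real product:

    # Hypothetical: rank extents by recent IOPS and promote the hottest
    # ones that still fit in the fast pool's free space.
    def pick_promotions(extent_iops: dict[str, float], fast_pool_free: int,
                        extent_size: int, min_iops: float = 500.0) -> list[str]:
        hot = sorted((e for e, iops in extent_iops.items() if iops >= min_iops),
                     key=lambda e: extent_iops[e], reverse=True)
        budget = fast_pool_free // extent_size  # how many extents we can take
        return hot[:budget]

    # pick_promotions({"ext-1": 1200.0, "ext-2": 80.0, "ext-3": 900.0},
    #                 fast_pool_free=2 << 30, extent_size=1 << 30)
    # -> ["ext-1", "ext-3"]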
So in practice, it's likely proprietary vendor code chewing through performance data from a proprietary storage controller and telling a worker job on a mounted filesystem client to move the hot data onto the high-performance disk pool. It's constantly rebalancing, moving data back out of the fast pool once it cools off. Obviously for S3 this is happening at an object level, though, using their own in-house code.
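The cool-off side is the mirror image. Again just my guess at the logic, with invented names and thresholds: demote anything whose last few sampling windows all came in below a "cold" rate, so you don't thrash on a single quiet interval:

    # Hypothetical: evict extents from the fast pool only after several
    # consecutive low-IOPS samples, to avoid ping-ponging.
    def pick_demotions(fast_pool_samples: dict[str, list[float]],
                       cold_iops: float = 50.0, windows: int = 3) -> list[str]:
        return [e for e, samples in fast_pool_samples.items()
                if len(samples) >= windows
                and all(s < cold_iops for s in samples[-windows:])]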