I'm sorry, but this mail is totally stupid.
It assumes it takes one year to complete the exploit.
I don't think there is anything preventing someone from doing such an exploit in 1 month, or even less. With the various CaaS providers, the total cost even stays the same!
The cost model also assumes non-criminal behavior. But in most cases where somebody is motivated to find collisions in order to break a cryptographic scheme, they're unlikely to be restrained by the normal boundaries of the law.
IOW, for a criminal organization (or just a single criminal) with a huge botnet the direct cost might be closer to $0. Of course there's opportunity cost--botnets are often rented like a cloud service--but that's the case for everything.
Also, this is why I avoid bikeshedding over technology like Argon2. If your password database is stolen, a 10,000- or 100,000-strong botnet isn't going to have much trouble cracking a substantial fraction of the user passwords in that database. Memory-hard algorithms like scrypt and Argon2 are designed to thwart specialized FPGA- and ASIC-based crackers. But cloud services and botnets use general-purpose hardware, and while specialized hardware will always be more efficient, the scale you can achieve with general-purpose hardware is mind-boggling.
If people spent half the effort they spend bikeshedding password authentication and instead worked to support hardware security tokens--both client- and server-side (e.g. with an HSM hashing client passwords using a secret key)--we'd all be in a better place.
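For what it's worth, here's a minimal sketch of the server-side shape I mean, using Python's stdlib; the names (SERVER_SECRET, hash_password) are purely illustrative, and in a real deployment the keyed step would live inside an HSM rather than in application code:

    import hashlib, hmac, os

    # Illustrative only: in practice this key sits in an HSM and the HMAC is
    # computed there, so a stolen password database never ships with the key.
    SERVER_SECRET = os.urandom(32)

    def hash_password(password: str, salt: bytes = b""):
        """Pepper with a secret-keyed HMAC, then run a memory-hard KDF."""
        salt = salt or os.urandom(16)
        peppered = hmac.new(SERVER_SECRET, password.encode(), hashlib.sha256).digest()
        # scrypt with n=2**14, r=8 forces ~16 MiB of memory per guess, which is
        # what makes cheap massively-parallel guessing on GPUs/botnets harder.
        digest = hashlib.scrypt(peppered, salt=salt, n=2**14, r=8, p=1, dklen=32)
        return salt, digest

    def verify(password: str, salt: bytes, expected: bytes) -> bool:
        return hmac.compare_digest(hash_password(password, salt)[1], expected)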
I remember when I first heard of git, I wondered why it didn't use a member of the SHA-2 family (which had been out for several years by then). Even in 2005, I think it was fast enough.
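If you want to see the gap (or lack of one) on your own machine, here's a quick-and-dirty throughput check--numbers obviously vary with the CPU and the OpenSSL build behind hashlib:

    import hashlib, os, time

    data = os.urandom(64 * 1024 * 1024)  # 64 MiB of random input

    for name in ("sha1", "sha256"):
        start = time.perf_counter()
        hashlib.new(name, data).hexdigest()
        elapsed = time.perf_counter() - start
        print(f"{name}: {len(data) / elapsed / 1e6:.0f} MB/s")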
Or, with sufficient budget--not even completely unreasonable for a nation-state that could presumably use a very large cluster for other purposes--generate a collision in an hour or two. That would be an interesting exercise: how much hardware/kWh would it take to generate a SHA-1 collision in 60 minutes?
The numbers are available: 65,000 CPU years & 110 GPU years.
Assuming the massive number of cores necessary doesn't cause any extra difficulties (seems like a stretch), we get 68,328,000,000 CPU cores to do the CPU portion in 30 minutes, and 115,632,000 GPUs to do the GPU part in 30 minutes.
Assuming 1U servers each with 2 64-core CPUs, that's at least 533,812,500 physical servers, which is at least 12,709,822 42U server racks. That would take up ~85,000,000 sqft, or 3.04 square miles.
That's just for the CPUs. You could probably fit 1 GPU per server, but I assume that isn't the typical config for cloud GPUs. In any case, it could probably be done in, let's call it, 4 square miles of server floor space.
Note: these are pretty conservative estimates at every step. They don't take into account any practicalities, like having server aisles wide enough for golf carts (because, really, 3 miles on a side is a bit of a hike on foot). Or how power is delivered. Or how all of that parallel work communicates. Or the colossal maintenance staff that would be required. Or keeping that many servers cool enough to run continuously... etc.
After you figure all that out, now you can get to the small task of figuring out how to continuously deliver 120-ish[1] gigawatts of power to those servers.
[1] 90 Watts per 64-core CPU (2 of those per server), and 180 Watts per GPU (lifted from fryguy's sibling comment). Turns out the CPU part is a lot more expensive in just about every way than the GPU part. Accordingly, you would probably adjust this so that you spend 55 minutes on the CPU and 5 on the GPU portion, or something like that.
[Edit: added the footnote, and refined my estimate of the power requirements, also clarity edits]
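A rough sanity check on that power figure, just plugging the server/GPU counts and wattages above into a few lines of Python (all very approximate):

    servers = 533_812_500   # 1U servers, 2 x 64-core CPUs each (from above)
    gpus = 115_632_000      # GPUs for the GPU phase (from above)
    watts_per_cpu = 90      # per 64-core CPU
    watts_per_gpu = 180     # GTX 1080-ish figure from the sibling comment

    cpu_gw = servers * 2 * watts_per_cpu / 1e9   # ~96 GW
    gpu_gw = gpus * watts_per_gpu / 1e9          # ~21 GW
    print(f"total: {cpu_gw + gpu_gw:.0f} GW")    # ~117 GW, i.e. 120-ish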
I get different numbers. First, it was 6,500 CPU core-years, on a core comparable to a relatively recent Xeon that uses about 10.5 watts per core. So: 57 million cores, 600 megawatts, and about 11,000 racks (if you can fit 128 * 42 equivalent cores per rack). Wholesale electricity is about $50/MWh, so about $30,000 for the CPU part.
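Back-of-envelope, in case anyone wants to poke at the assumptions (core count, per-core wattage, window length, and $/MWh are all rough):

    core_years = 6_500            # CPU phase of the attack
    watts_per_core = 10.5         # recent-ish Xeon
    window_hours = 1              # "collision in an hour"
    hours_per_year = 365.25 * 24

    cores = core_years * hours_per_year / window_hours   # ~57 million
    power_mw = cores * watts_per_core / 1e6              # ~600 MW
    racks = cores / (128 * 42)                           # ~11,000
    cost = power_mw * window_hours * 50                  # $50/MWh -> ~$30k
    print(f"{cores:,.0f} cores, {power_mw:,.0f} MW, {racks:,.0f} racks, ${cost:,.0f}")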
The numbers they give for the GPU part are more confusing, and despite the time difference, they say it was the more expensive phase of the attack. However, it appears to involve considerably less physical hardware.
Still pretty impractical for what's probably limited value, but not out of reach.
Eh, what's an order of magnitude here or there--or right at the beginning of your chain of huge multipliers? Heh... yeah, no wonder the result was so enormous.
Well, if you use the Bitcoin network as a metric, there's roughly 3 billion GH/s of capacity (where each hash is really two chained SHA-256 computations in hardware), and realtimebitcoin.info claims this is ~2000 MW. If you compare that to the 9 billion GH that the SHAttered article claims are needed, a network equivalent in size to the Bitcoin network would take ~3 seconds and ~1,600 kWh. There's no indication of how "lucky" a 9-billion-GH collision is, so perhaps it would be longer or shorter based on the statistics.
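The arithmetic behind that, with the hashrate, power, and work figures as quoted (all approximate):

    bitcoin_hashrate = 3e9 * 1e9   # ~3 billion GH/s
    bitcoin_power_kw = 2_000_000   # ~2000 MW per realtimebitcoin.info
    work = 9e9 * 1e9               # ~9 billion GH of SHA-1 per SHAttered

    seconds = work / bitcoin_hashrate                 # ~3 s
    energy_kwh = bitcoin_power_kw * seconds / 3600    # ~1,600-1,700 kWh
    print(f"{seconds:.0f} s, {energy_kwh:,.0f} kWh")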
Looking at it from the other direction, they claim 110 GPU-years. A GeForce GTX 1080 is claimed to draw 180 W. That's 175,000 kWh. If you assume dedicated ASICs are 100x more power-efficient than the card I cited, that lands within a similar order of magnitude of the estimate above. To do it in an hour would take about a million graphics cards and ~200 MW.
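And the GPU-side version of the same check:

    gpu_years = 110
    watts_per_gpu = 180            # GTX 1080-ish
    hours_per_year = 365.25 * 24

    energy_kwh = gpu_years * hours_per_year * watts_per_gpu / 1000   # ~175,000 kWh
    gpus_for_one_hour = gpu_years * hours_per_year                   # ~a million cards
    power_mw = gpus_for_one_hour * watts_per_gpu / 1e6               # ~175 MW, call it 200
    print(f"{energy_kwh:,.0f} kWh, {gpus_for_one_hour:,.0f} GPUs, {power_mw:.0f} MW")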