Very interesting topic, but rather low on detail --- I really wanted to see the 60 lines of Asm that allegedly show a faulty CPU instruction, and I'm also surprised that it wasn't intermittent; in my experience, CPU problems usually are intermittent and heavily dependent on prior state, and manually stepping through with a debugger has never shown the "1+1=3" type of situation they claim. That said, I wonder if LINPACK'ing would've found it, since that is known to be a very powerful stress test that divides the overclocking community: some, including me, claim that a system can never be considered stable if it fails LINPACK, since a failure is essentially intermittent "1+1=3" behaviour, while others are fine with "occasional" discrepancies in its output as long as the system otherwise appears to be stable.
Prime95 is my gold standard for CPU and memory testing. Everything from desktops to HPC and clustered filesystems gets a 24-hour “blend” of tests. If that passes without any instability or bit flips, then we’re ready for production.
In my experience, LINPACK (at least the Intel MKL on GenuineIntel combination) is both quicker and more thorough in finding setups that are not actually stable/reliable.
ECC isn’t free, and ECC can’t detect every statistically plausible error. Additionally, error correction in hardware is frequently defined by standards, some of which have backward-compatibility requirements that go back decades. This is why, for example, reliable software often uses (quasi-)cryptographic checksums at all I/O boundaries. There is error correction in the hardware, but in some parts of the silicon that error correction is weak enough that it is likely to eventually deliver a false negative in large-scale systems.
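To make that I/O-boundary idea concrete, here's a minimal sketch of the pattern in Java; the class and method names are made up for illustration, not taken from any particular system.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Illustrative sketch of checksumming at an I/O boundary; names are hypothetical.
public final class ChecksummedIo {

    record Framed(byte[] payload, byte[] checksum) {}

    static byte[] sha256(byte[] payload) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256").digest(payload);
    }

    // Seal the buffer just before it leaves this process (write, RPC, etc.).
    static Framed seal(byte[] payload) throws NoSuchAlgorithmException {
        return new Framed(payload.clone(), sha256(payload));
    }

    // Verify just after it comes back in; a mismatch means something in the
    // path (CPU, memory, bus, disk, network) silently altered the bytes.
    static byte[] open(Framed framed) throws NoSuchAlgorithmException {
        if (!Arrays.equals(sha256(framed.payload()), framed.checksum())) {
            throw new IllegalStateException("checksum mismatch: silent corruption detected");
        }
        return framed.payload();
    }
}
```

The point is simply that the checksum is computed on one side of the boundary and verified on the other, so corruption anywhere in between is detected rather than silently propagated.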
None of this is free, and there are both hardware and software solutions for mitigating various categories of risk. It is explicitly modeled as an economics problem, i.e. how does the cost of not mitigating a risk, should it materialize, compare to the cost of minimizing or eliminating it. In many cases the optimal solution is unintuitive, such as computing everything twice or thrice and comparing the results rather than using error correction.
Actually, it's not uncommon for ECC to be used within components as a way to guard against stuff like this. I don't think it's practical to ever have complete coverage without going to a full-blown dual/triple-redundant CPU, but for stuff like SSD controllers there is ECC coverage internally on the data path.
But if it's a consistent fault, like the silent data corruption covered in the linked paper, redoing the computation still leaves you with no way to identify which core is faulty. If it's an intermittent fault, then even for hard real-time you can accomplish that with one core: just compute 3x and go with the majority result.
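For what it's worth, a minimal sketch of that single-core compute-3x-and-vote approach (the helper name is hypothetical); note it only helps against intermittent faults, since a consistently wrong core will agree with itself all three times.

```java
import java.util.function.Supplier;

// Rough sketch: run the same computation three times and take the majority.
public final class TripleCompute {

    static <T> T majorityOfThree(Supplier<T> computation) {
        T a = computation.get();
        T b = computation.get();
        T c = computation.get();
        if (a.equals(b) || a.equals(c)) return a; // a agrees with at least one other run
        if (b.equals(c)) return b;                // a was the outlier
        throw new IllegalStateException("all three runs disagree; result untrustworthy");
    }

    public static void main(String[] args) {
        double result = majorityOfThree(() -> Math.pow(2.0, 52.0));
        System.out.println(result); // 4.503599627370496E15 on healthy hardware
    }
}
```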
Yup, exactly. The only way independent hardware helps is if the fault is state-dependent on the hardware side (e.g. differences in behavior due to thermal load, or some corrupted internal state), in which case repeating the computation may not help unless the runs are sufficiently decoupled in time to get rid of that state. The other thing with independent hardware is that you don’t pay a 3x performance penalty (instead a 3x cost penalty). That being said, none of these fault modes is really what is being discussed in the paper.
The other one that freaks me out is miscompilation by the compiler and JITs in the data path of an application. Like we’re using these machines to process hundreds of millions of transactions and trillions of dollars - how much are these silent mistakes costing us?
I think that, strictly in terms of money-related operations, stuff can still be managed/double-checked externally, i.e. by the real world. Whatever mistakes/inconsistencies might show up, there's still a "hard reality" out there that will start screaming "hey! this money figure is not correct!", because people tend to notice big money discrepancies, and the "mistakes" are, generally speaking, reversible when it comes to money.
What's worrying is when systems like these get used in real-time life-and-death situations, where there's basically no reversibility, because that would imply dead people returning to life. Take the code used for outer space exploration: sure, right now we can add lots and lots of redundancies and checks to the software used in that domain, because the money is there to be spent and we still don't have that many people up in space. But what will happen when we think about hosting hundreds, even thousands of people inside a big orbital station? How will we be able to make sure that all the safety-related code for that very big structure (certainly much bigger than anything we have in space now) doesn't cause the whole thing to go kaboom because of an unknown-unknown software error?
And leaving aside scenarios that aren't here yet: right now we've started using software more and more when it comes to warfare (for example, for battle simulations on which real-life decisions are based), so what will happen to the lives of soldiers whose conduct in war has been led by faulty software?
The financial impact was just to frame the scope in terms of a single, calculable, easy-to-understand number. Also, most transactions are automated and rarely validated manually, so I’m not sure how many inconsistencies we’re actually catching. Look at the UK Post Office scandal: that was basic distributed-systems bugs in the auditing software, and the system was granted precedence over manual review (sure, there’s lots wrong with that scandal, but it is illustrative of how much deference we give to automated systems, since that tends to be the right tradeoff to make).
The recent Ukraine war shows that soldiers' lives are cheap, at least according to commanders.
So many soldiers on both sides have died because of really dumb command decisions, missing kit, and political needs that worrying about CPU errors is truly way, way down the list.
At the tactical level, of course what you're saying is true, but last year's big Ukrainian counter-offensive was preceded by lots and lots of allusions to "war game simulations" set up by Ukraine's Western allies (mostly the US and the UK), and it is my understanding that those war games were heavily relied on as a basis for the decision to launch that counter-offensive. I'm not saying the code behind those simulations was faulty, I'm just saying that software is already used at (at least) the operational level when it comes to war.
As for sources, here's one from The Economist [1], from September 2023, just as it had become obvious that the counter-offensive had fizzled out:
> American and British officials worked closely with Ukraine in the months before it launched its counter-offensive in June. They gave intelligence and advice, conducted detailed war games to simulate how different attacks might play out
And another one from earlier on [2], in July 2023, when things were still undecided:
> Ukraine’s allies had spent months conducting wargames and simulations to predict how an assault might unfold.
> wouldn't that classify as broken hardware requiring device change?
Yes but you need to catch it first to know what to take out of production.
> That might be difficult if CPU is broken. How are you sure you actually computed 3 times if you can't trust the logic.
That's kind of my point. Either it's a heisen-bug and you never see those results again when you repeat the original program or it's permanently broken and you need to swap out the sketchy CPU. If you only care about the first case then you only need one core. If you care about the second case then you need 3 if you want to come up with an accurate result instead of just determining that one of them is faulty. It's like that old adage about clocks on ships. Either take one clock or take three, never two.
You don't need to know which one of the two was bad; it's not worth the extra overhead to avoid scrapping two in the rare case you catch a persistent glitch; sudden hardware death (blown VRM or such, for example) will dominate either way, so you might as well build your "servers" to have two parts that check each other and force-reset when they don't agree.
If it reboot-loops you take it out of the fleet.
Right, but the comment I was replying to was in response to this:
> 2 will tell you if they diverge, but you lose both if they do. 3 lets you retain 2 in operation if one does diverge.
If you care about resilience then you either need to settle for one core and accept that you can't catch the class of errors that are persistent, or go with three if you actually need resilience to those failures as well. If you don't need that kind of resilience, the way an aerospace application would, then you're probably better off catching this at a higher layer in the overall distributed-systems design. Rather than trying to make a resilient and perfectly accurate server, design your service to be resilient to hardware faults, and stack checksums on checksums so you can catch errors (whether hardware or software) where some invariant is violated.

Meta also has a paper on their "Tectonic filesystem" where there's a checksum of every 4K chunk fragment, a checksum of the whole chunk, and a checksum of the erasure-encoded block constructed out of the chunks. Once you add yet another layer of replication on top, then even when some machine computes corrupt checksums, or both the checksum and the data are corrupt, you can still catch the problem at another layer, and you have a separate copy to avoid data loss.
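A rough sketch of that layered-checksum idea, assuming 4K fragments and CRC32 purely for illustration (the actual algorithms and layout in Tectonic may well differ):

```java
import java.util.zip.CRC32;

// Two of the layers described above: one checksum per fixed-size fragment to
// localize corruption, plus one over the whole chunk to catch the case where a
// fragment checksum itself was computed or stored incorrectly.
public final class LayeredChecksums {
    static final int FRAGMENT_SIZE = 4096;

    static long crc(byte[] data, int off, int len) {
        CRC32 c = new CRC32();
        c.update(data, off, len);
        return c.getValue();
    }

    // Per-fragment checksums: tell you which 4K region went bad.
    static long[] fragmentChecksums(byte[] chunk) {
        int n = (chunk.length + FRAGMENT_SIZE - 1) / FRAGMENT_SIZE;
        long[] sums = new long[n];
        for (int i = 0; i < n; i++) {
            int off = i * FRAGMENT_SIZE;
            sums[i] = crc(chunk, off, Math.min(FRAGMENT_SIZE, chunk.length - off));
        }
        return sums;
    }

    // Whole-chunk checksum: a second, independent chance to notice corruption.
    static long chunkChecksum(byte[] chunk) {
        return crc(chunk, 0, chunk.length);
    }
}
```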
In those cases, the CPU makes a false calculation independently of what's done in RAM. It can be solved by having FLOP redundancy as on System z, but nobody at Google or Meta would be considering big iron.
From my point of view, this technology problem may be interesting academically (and good for pretending to be important in the hierarchy at those companies) but a non-issue at scale business-wise in modern data centers.
Have a blade that once in a while acts funny? Trash and replace. Who cares what particular hiccup the CPU had.
> a non-issue at scale business-wise in modern data centers.
I've worked on similar stuff in the past at Google and you couldn't be more wrong. For example, if your CPU screwed up an AES calculation involved in wrapping an encryption key, you might end up with fairly large amounts of data that can't be decrypted anymore. Sometimes the failures are symmetric enough that the same machine might be able to decrypt the data it corrupted, which means a single machine might not be able to easily detect such problems.
We used to run extensive crypto self-testing as part of the initialization of our KMS service for that reason.
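As an illustration of that kind of startup self-test (a hedged sketch, not the actual KMS code): encrypt a known plaintext, decrypt it, and refuse to serve if the round trip isn't bit-identical.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

// Illustrative AES round-trip self-test run before a node enters service.
public final class CryptoSelfTest {
    public static void main(String[] args) throws Exception {
        byte[] known = "kms-self-test-known-answer-block".getBytes(StandardCharsets.UTF_8);

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        GCMParameterSpec gcm = new GCMParameterSpec(128, iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, gcm);
        byte[] wrapped = enc.doFinal(known);

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, gcm);
        byte[] roundTrip = dec.doFinal(wrapped);

        // A mismatch here is exactly the failure mode described above:
        // better to refuse to start than to wrap keys with a broken core.
        if (!Arrays.equals(known, roundTrip)) {
            throw new IllegalStateException("AES round-trip self-test failed; not entering service");
        }
        System.out.println("crypto self-test passed");
    }
}
```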
Sure. It’s a cool issue to work on, and maybe actually relevant at Google scale. But I’ve asked your colleagues multiple times whether the business side actually cared about the issue, and they never confirmed that it did.
Again, cool to work on at Google. Not sure anybody else cares. If you care (finance), you fix it in hardware (System z).
Why would the business side ever care about technical details? It's like asking the business what days the dumpsters get emptied. Nobody gives a fuck; they just care that it gets done and gets done quickly, correctly, and safely.
If a CFO knows which days the dumpster is emptied, you have a strange CFO. The metaphor is meant to point out that there are a lot of technical details that aren’t tracked (refactoring, for instance, usually isn’t tracked independently) and shouldn’t be tracked, because they are a normal part of the technical job. A CFO can’t measure them even if they wanted to, because nobody measures crazy things like how fast you walk to the bathroom, or any other minutiae that are simply part of doing your job.
Interesting. The corruption was in a math.pow() calculation, representing a compressed filesize prior to a file decompression step.
Compressing data, with the increased information density and the greater number of CPU instructions involved, seems like it would obviously increase the exposure to corruption/bitflips.
What I did wonder is why the filesize was compressed as an exponent in the first place. One would imagine that representing it as a floating-point exponent would take lots of cycles, need pretty much as many bits, and have nasty precision inaccuracies at larger sizes.
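To illustrate that precision worry, here's a quick sketch assuming the size really was stored as a base-2 exponent and recovered with Math.pow; that encoding is a guess on my part, not something stated in the paper.

```java
// Round-trip a file size through a floating-point exponent representation.
public final class ExponentRoundTrip {
    public static void main(String[] args) {
        long size = 1_234_567_890_123_456L;               // ~1.1 PiB, arbitrary example value
        double exponent = Math.log(size) / Math.log(2.0); // encode as a base-2 exponent
        long recovered = (long) Math.pow(2.0, exponent);  // decode
        System.out.println(size + " -> 2^" + exponent + " -> " + recovered);
        // The rounding error in log/pow means the recovered value may be off
        // by a few bytes at this magnitude, and the error grows with the size.
    }
}
```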
Interesting paper, but it has some technical errors. First, they keep mentioning SRAM+ECC instead of DRAM+ECC. Second, you cannot use gcj to inspect the assembly code generated for a Java method, as it will be completely different from the code generated by HotSpot. Third, you do not need all that acrobatics to get a disassembly of the method; you could just add an infinite loop to the code, attach gdb to the JVM process, and inspect the code or dump the core.
Disclaimer: I work at Meta and I know a couple of the authors of the paper but my work is completely unrelated to the subject of the paper.
That's not a technical error; they mean SRAM in the CPU itself. You're right about gcj, but that's kind of a moot point when investigating a reproducible CPU bug like this. The paper mentions all the acrobatics they went through when trying to find the root cause, but if gcj had been practical, it would have been immediately clear whether the gcj output reproduced the error or not. If it didn't reproduce, no big deal, try another approach. You might be right about it being easier to root-cause with gdb directly, but I'm not so sure. Starting out, you have no idea which instructions under what state are triggering the issue, so you'd be looking for a needle in a haystack. A crash dump or gdb doesn't let you bisect that haystack, so good luck finding your needle.
GCJ's implementation could be so vastly different from HotSpot that you might as well rewrite it in C and check whether it fails or not. ChatGPT would generate a test case within a minute.
It all depends on how good you are with x64 assembly. If you are good enough, you can easily deduce what the instructions at that location do, and potentially just copy-paste them into an asm file, compile it, and check the result. That would be much faster for me.
Bluntly speaking, people who are not familiar with low-level debugging made an honest and successful attempt to investigate a low-level issue. A seasoned kernel developer or reverse engineer would have just used gdb straight away.
I think you should take another look at the author list. Chris Mason counts as a seasoned kernel developer in my book. Either way, I think you're missing the point. Yes, gcj would be different, but there's a decent chance it could hand you a binary that reproduces the issue, and you can bisect to the root cause from there. It's one thing to run it through gcj and see if it reproduces; rewriting it in C is a ton of work compared to gcj for something that might not pan out.
I am not missing the point, as I do not put stock in authority or in someone else's evaluation of another person's skill level. Rewriting a simple exponentiation in C would not be "lots of work", and pinpointing the culprit, the exponentiation, does not require any gdb debugging or disassembling. In fact, just knowing that the exponentiation caused it suggests faulty hardware, and no further investigation is required.
You should probably invite these people themselves to the discussion instead of speaking on their behalf. Not productive.