That's also a clearly flawed analysis, because the numbers mostly don't change between recomputations of the spreadsheet's cell values!
E.g., adding a row doesn't invalidate the calculations for previous rows in typical spreadsheet usage. And the bug is deterministic, so repeating an already-successful calculation over and over with the same numbers will never trigger it (see the sketch below).
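To make the determinism point concrete, here's the classic FDIV check as a minimal C sketch; the operand pair 4195835 / 3145727 is one of the published trigger values, everything else is just illustration. Run it as many times as you like: a correct FPU gets the same right answer on every iteration, and a flawed Pentium gets the same wrong answer on every iteration.

    #include <stdio.h>

    int main(void) {
        /* One of the published FDIV trigger pairs: on a flawed
           Pentium, x / y comes out near 1.33373906 instead of the
           correct 1.33382044..., so the residual below is roughly
           256 instead of (roughly) 0. */
        volatile double x = 4195835.0;
        volatile double y = 3145727.0;

        /* Deterministic: identical operands give identical results
           on every iteration, buggy FPU or not. */
        for (int i = 0; i < 3; i++)
            printf("quotient = %.17g, residual = %g\n",
                   x / y, x - (x / y) * y);

        return 0;
    }

Recomputing a spreadsheet is this loop writ large: unless a cell's inputs change to a trigger pair, recalculation adds zero new exposure.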
Yes, the book "Inside Intel" makes the same argument about spreadsheets (p364). My opinion is that Intel's analysis is mostly objective, while IBM's analysis is kind of a scam.
IBM's result is "correct" only if you read "one user experiences the problem every few days" as really meaning "one in a million users experiences the problem 5000 times a second, for 15 minutes every day, because their spreadsheet happens to contain trigger values, while the other 999,999 never hit it at all". It's an average that makes no sense.
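Spelling out the arithmetic with those hypothetical rates:

    5000 errors/s × 900 s/day = 4.5 million errors/day, all in that one user's session
    4.5 million errors/day ÷ 1 million users = 4.5 "errors per user per day"

The averaged figure describes nobody: 999,999 users see zero errors, and the one affected user sees millions.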