I have a way simpler explanation. An IEEE 754 double can only represent integers exactly up to 2^53, so if you naively average two numbers greater than 2^52 (add them, then divide by two), the intermediate sum exceeds 2^53 and the result can be wrong.
It just so happens that 2^52 nanoseconds is a little bit over 52 days.
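As a quick sanity check, here's what plain number arithmetic gives in Node or a browser console (the specific timestamps are made up for illustration):

  // 2^52 nanoseconds is just over 52 days
  console.log(2 ** 52 / 1e9 / 86400);          // ~52.1 days

  // two exact integer nanosecond timestamps past that point, one odd, one even
  const a = 2 ** 52 + 1;
  const b = 2 ** 52 + 2;

  // their sum exceeds 2^53 and is rounded to the nearest representable value
  console.log(a + b);                          // 9007199254740996, true sum ends in ...995
  console.log((a + b) / 2);                    // 4503599627370498, true average is ...497.5

  // the same average done in BigInt stays exact (truncated division)
  console.log((BigInt(a) + BigInt(b)) / 2n);   // 4503599627370497n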
I've seen the same thing with AMD CPUs, where they hang after ~1042 days, which is 2^53 ten-nanosecond intervals.
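(2^53 × 10 ns ≈ 9.0 × 10^7 seconds, which works out to roughly 1042.5 days.)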
The parent comment said IEEE 754 doubles can represent integers exactly up to 2^53, but I missed the "double" or assumed single-precision floats. Floats cannot do that (they only go to 2^24), and it would be disastrous to assume they could. For that matter, doubles also lose precision once you start doing arithmetic on non-integer values, but as long as you are purely doing integer operations within that range, it “should” be fine. A practical example with non-integers: 35 + -34.99.
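For instance, in any engine using IEEE 754 doubles (Node or a browser console; the exact printed digits may vary):

  // 34.99 has no exact binary representation, so the result is not exactly 0.01
  console.log(35 + -34.99);                    // roughly 0.009999999999998

  // pure integer arithmetic stays exact as long as everything stays below 2^53
  console.log(2 ** 52 + 7 - 7 === 2 ** 52);    // true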
Having done exactly this math for GStreamer bindings in JavaScript (where the built-in numeric type is double or nothing), this would also be my prime suspect.
More interesting is a root-cause analysis: https://news.ycombinator.com/item?id=33239443 and https://ioactive.com/reverse-engineers-perspective-on-the-bo...
The 47-bit timestamp at 32 MHz would explain the duration (though not why it isn't 33 MHz?).
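(2^47 cycles / 32 MHz ≈ 4.4 × 10^6 seconds ≈ 50.9 days.)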