Fun fact: because Google's data centres have _basically_ monotonic CLOCK_REALTIME due to smearing leap seconds over 24h, the open-source abseil.io relies on it.
So you can't use absl if you need your time logic to be correct and also work in non-smearing environments, or in environments where the RTC isn't guaranteed to always be well-synced.
It took a while for monotonic clocks to become widely available. QNX had a monotonic clock decades ago, since, in the real-time world, that's usually what you want.
I once rented a brand-new Linux server in a data center, and MySQL came up before the first time the clock was set. This confused things. But an ordinary reboot fixed it.
> ... backwards-going wall time happens every time we have a leap second. Granted, we're in a long dry spell at the moment, but it'll probably happen again in our lifetimes.
Well, maybe not:
> In November 2022 at the 27th General Conference on Weights and Measures, it was decided to abandon the leap second by or before 2035. From then the difference between atomic and astronomical time will be allowed to grow to a larger value yet to be determined.
I just hope that 2035 is early enough to prevent a negative leap second, which hasn't happened before. It would be quite a brave experiment to have one of those just before abandoning the whole concept.
On the "smash head here" moment: This distinction is called a steady clock, which is stricter than a monotonic clock in the way that it should never stop. On the other hand a monotonic clock can be made from a non-monotonic clock by detecting clamping any timestamp to the biggest observed value; this will be less useful if, for example, the clock source is ever decreasing.
In reality there is no absolute way to guarantee that a particular clock source is steady; virtualization can easily wreck it, so external verification is the only definitive solution. You can say that macOS `CLOCK_MONOTONIC` is steadier than Linux `CLOCK_MONOTONIC`, but you can never assume either is actually steady all the time; they are only steady enough not to cause issues in typical cases.
BTW, on Linux the equivalent is CLOCK_BOOTTIME, and the general point stands that you need to find something "steady" enough for your use case. There is no absolutely monotonic clock in the universe ;-), that we know of, is there?
"not increasing" is a permissible interpretation of monotonic though. The clock should never go backward, but not increasing for a brief period is acceptable. It does mean you can't use it for 100% high-precision work, but at least you'll not be surprised by a negative result.
But a clock that stops progressing can break wait- and timeout-based designs, and you can spend a lot of time chasing that strange phenomenon across a large fleet.
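Roughly that failure mode, sketched with a hypothetical `wait_with_timeout` helper (names are illustrative, not from any real codebase): the timeout only works if the monotonic clock keeps advancing.

```c
/* Sketch of the failure mode: a timeout computed against a clock that can
 * stall. If the clock stops advancing, the deadline is never reached and
 * the loop waits forever. */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static int64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Poll `done` until it returns true or `timeout_ms` elapses. */
bool wait_with_timeout(bool (*done)(void), int64_t timeout_ms)
{
    int64_t deadline = now_ms() + timeout_ms;
    while (!done()) {
        if (now_ms() >= deadline)   /* never true if the clock stops */
            return false;
    }
    return true;
}
```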
That PR is about buggy or niche systems: VMs lying about their clocks, CPUs with faulty firmware revisions, kernel bugs. Rust is just going the extra mile to uphold its guarantee even when the OS fails to do its job. It would have been legitimate to say that in the face of broken systems you get broken behavior, GIGO, but that's difficult to convey, and most users aren't even aware their systems are broken, so papering over the bug gave a better user experience than surfacing it.
On most systems CLOCK_MONOTONIC actually is monotonic. It's not necessarily steady or providing SI seconds, but it's monotonic.
You'll need stuff like GPS if you want a monotonic, steady clock synchronized with other clocks.
A hardware timer is a monotonic clock, and at least one is built into any computer. With the advent of GPS spoofing, you can't really rely on GPS, either.
Can't wait for the miniaturisation of atomic clocks. They're already quite small, but once they're "so cheap they're on every SoC", a lot of the ills facing "cheap" (as in not GC or AWS or Facebook) real-time distributed-systems builders might go away.
The difference in CLOCK_MONOTONIC behavior between Linux and Darwin (Apple) systems was an issue when they added Clock to the Swift standard library. It led to the two built-in clock implementations being named SuspendingClock and ContinuousClock, rather than UptimeClock and MonotonicClock.
One would hope this is common knowledge by now, but it certainly doesn’t hurt to repeat that.
An issue with monotonic clocks I had in the past is that they usually had lower resolution than the default clock. I'd like to think this isn't a problem anymore?
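One way to check on the machine you actually care about, rather than guessing, is clock_getres(2); a quick sketch, assuming Linux/POSIX:

```c
/* Compare the reported granularity of the wall clock and the monotonic
 * clock. On most modern Linux systems both report 1 ns. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec real_res, mono_res;
    clock_getres(CLOCK_REALTIME, &real_res);
    clock_getres(CLOCK_MONOTONIC, &mono_res);
    printf("CLOCK_REALTIME  resolution: %ld ns\n", real_res.tv_nsec);
    printf("CLOCK_MONOTONIC resolution: %ld ns\n", mono_res.tv_nsec);
    return 0;
}
```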
On Linux it can be argued that CLOCK_MONOTONIC_RAW should be used for durations. From the Linux man page:
CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
       Similar to CLOCK_MONOTONIC, but provides access to a raw
       hardware-based time that is not subject to NTP adjustments or the
       incremental adjustments performed by adjtime(3). This clock does
       not count time that the system is suspended.
There is also CLOCK_BOOTTIME to keep counting time while suspended:
CLOCK_BOOTTIME (since Linux 2.6.39; Linux-specific)
       A nonsettable system-wide clock that is identical to
       CLOCK_MONOTONIC, except that it also includes any time that the
       system is suspended. This allows applications to get a
       suspend-aware monotonic clock without having to deal with the
       complications of CLOCK_REALTIME, which may have discontinuities if
       the time is changed using settimeofday(2) or similar.
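For what it's worth, a rough sketch of timing an interval with the clocks quoted above; which one is appropriate just depends on whether NTP rate corrections and time spent suspended should count, and swapping the clock id is the only change needed:

```c
/* Time an interval with CLOCK_MONOTONIC_RAW; swap in CLOCK_BOOTTIME if
 * time spent suspended should count. sleep(1) stands in for the work
 * being measured. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static uint64_t to_ns(struct timespec ts)
{
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC_RAW, &start);
    sleep(1);
    clock_gettime(CLOCK_MONOTONIC_RAW, &end);
    printf("elapsed: %llu ns\n",
           (unsigned long long)(to_ns(end) - to_ns(start)));
    return 0;
}
```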
Beware: while CLOCK_MONOTONIC_RAW has its uses (avoiding clock skew due to NTP when you're synchronising to an external clock), it wasn't vDSO'd for a long time and was slow as hell, costing a syscall per read. It's been optimized in later kernels, but I was on some RH or LTS branch and was badly bitten by this.