In many high-level dynamic languages, the integer type is an "arbitrary-precision integer" in the sense that it's a machine integer when it can be, gets implicitly auto-promoted into a bignum ADT when necessary to keep operations lossless, and gets auto-demoted back to a machine integer when possible to make operations more efficient. This is mostly hidden from the program - all part of one continuous "integer" abstraction - unless you use reflection facilities to ask specifically which concrete type/kind/class of "integer" you're currently holding.
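For concreteness, here's a minimal Python sketch of those semantics. `AutoInt` and its `kind` property are hypothetical, not any real runtime's API (CPython 3 actually uses a single arbitrary-precision int internally, while Ruby pre-2.4 exposed the Fixnum/Bignum split directly through the class system):

```python
INT64_MAX = 2**63 - 1
INT64_MIN = -(2**63)

class AutoInt:
    """Hypothetical wrapper sketching the fixnum/bignum split: the
    stored value is always exact; only its *representation* (machine
    word vs. bignum) differs, and that is recomputed from magnitude."""

    def __init__(self, value):
        self.value = value  # Python ints are already lossless

    @property
    def kind(self):
        # The "reflection facility": ask which concrete representation
        # the abstract integer currently has.
        return "fixnum" if INT64_MIN <= self.value <= INT64_MAX else "bignum"

    def __add__(self, other):
        # Arithmetic is lossless; promotion/demotion happens implicitly.
        return AutoInt(self.value + getattr(other, "value", other))

a = AutoInt(2**62)
print(a.kind)                         # fixnum
b = a + a + a                         # exceeds 64 bits -> promoted
print(b.kind)                         # bignum
print((b + AutoInt(-(2**63))).kind)   # shrinks again -> fixnum (demoted)
```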
I'd say that's almost exactly analogous to fp16 values swapping from one representation to another as they move between a GPU and a CPU that does or doesn't have native hardware support for them.
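You can see the representation swap from Python, assuming NumPy is installed: on a CPU without native fp16 arithmetic (most x86 chips), NumPy emulates float16 ops in software, typically by round-tripping each element through float32, so the "same" operation runs noticeably slower in the half-precision representation; on hardware with native fp16 support, that gap shrinks or inverts:

```python
import timeit
import numpy as np

x16 = np.random.rand(1_000_000).astype(np.float16)
x32 = x16.astype(np.float32)

# Same abstract multiply, two concrete representations; the fp16 path
# is software-emulated on CPUs lacking native half-precision arithmetic.
print("fp16:", timeit.timeit(lambda: x16 * x16, number=100))
print("fp32:", timeit.timeit(lambda: x32 * x32, number=100))
```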
Another good analogy might be to x86 real-mode segmented “far” pointers, vs. protected-mode “flat” pointers, in cases where a given memory address is expressible under both representations.
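The aliasing is easy to show with a toy address calculation (real-mode linear address = segment * 16 + offset, masked to the 8086's 20-bit bus):

```python
def far_to_linear(segment, offset):
    # Real-mode x86: linear address = (segment << 4) + offset,
    # wrapped to the 20-bit address space of the 8086.
    return ((segment << 4) + offset) & 0xFFFFF

# Many distinct segment:offset "far" pointers name one flat address:
assert far_to_linear(0x1000, 0x0010) == far_to_linear(0x1001, 0x0000) == 0x10010
```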
The issue here isn't that FP16 changes speed during runtime (although it can), but that its performance characteristics change across compile targets, so you can write a program that has effectively never been tested on the other targets.
Bignums are straightforwardly O(N) in operand size - which is different from O(1), and can definitely lead to performance and security bugs - but that cost is consistent.
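A quick way to see that linear cost with CPython's ints (exact timings are machine-dependent; the trend is not):

```python
import timeit

# Doubling the operand size roughly doubles the time for bignum
# addition - O(N) in the number of digits, unlike an O(1)
# machine-word add.
for bits in (100_000, 200_000, 400_000, 800_000):
    a = (1 << bits) - 1
    t = timeit.timeit(lambda: a + a, number=10_000)
    print(f"{bits:>7} bits: {t:.4f}s")
```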
You mean like bignums?