Aha! This explains a 20-year-old mystery bug! In undergrad, one of my side projects was a procedural 3D world from scratch (kind of Myst meets Halo), including physics, graphics (OpenGL), and the raw synthesized sound of ocean waves and plasma grenade explosions from sine waves via granular synthesis. After about 10 minutes exploring the world (or running around throwing tiny spheres down hills and scattering them with plasma grenades) my relaxing synthesized ocean wave sounds started to sound garbled and distorted so I always had to restart the program — now I know why :)
I've been playing The Witness recently. I've enjoyed it a lot, and had a lot of "aha moments", but I wouldn't say I altogether understand the game. It has a very elusive nature. I'm working through (what I believe to be) the "ending" at the moment, but there is still a lot of the pre-ending stuff that I haven't figured out yet.
At first glance The Witness is a puzzle game, made annoying by the fact that you have to walk everywhere instead of clicking "Next Puzzle". But then you occasionally see things in the environment that are just really pleasing. An example that springs to mind is a bunch of broken metal in a window[0], and some branches on a tree, that line up with the sun (which doesn't move) to cast a shadow of a woman sitting underneath a tree[1].
And there's loads of this stuff. And noticing it doesn't contribute anything at all to the apparent objective of the game (except where it does!), but it adds so much that you just wouldn't get if it were the simple puzzle game that it initially appears to be.
I think one of the key flourishes of the game has to do with intrinsic vs extrinsic motivation. The game never presents a locked door where there is some key elsewhere, and the key goes into your inventory, and now you can open the door. At all times, all doors are openable. But you may not know the rules to the puzzle on the door yet. Once you learn them, the 'key' is knowledge and the 'inventory' is your mind.
If you're attuned to this, you'll find it surprising how few games work this way.
This is one of my favorite things about that game. You could technically beat Outer Wilds on your very first time loop, but in practice you need to explore just about everything in order to understand how to do it.
That game has a moment when you realize it's not quite about what you thought it was about. I hope this isn't a spoiler, but there are black obelisks around the world. If you know what they are for, then you've found this secondary goal.
I think it's the game I've played that comes closest to feeling like a genuine mystery. Even though I think I've finished it, there are many aspects I still don't "get". It may just be that it's good at simulating this experience (sort of like a Lynch movie) but I don't think it matters. It's a great puzzle game wrapped in an enigma. And a piece of art too.
There's some fun image out there I can't exactly recall, but it's basically of an interview with some game devs, and one of them saying something like "We wanted to have a pot with flowers on the table here, and when you come back to this room later, the color of the flowers changes. No gameplay consequences. But we cut it for time. Why did we want it? Because we thought it'd be cool."
But in a lot of games there's tons of stuff like that which doesn't get cut. Gamedev is full of those "because it'd be cool" or other vague artistic reasons (as opposed to business reasons or researched game testing reasons) for something to be there, or something to be polished well beyond reason.
Yes, the Minecraft Farlands are caused by floating point precision issues. The game becomes jittery as positions snap to fewer and fewer precise digits. This is most obvious at first in the selection box around blocks, but the terrain also becomes unstable after a certain point.
Note that one YouTuber, KurtJMac, has been walking to the Farlands since 2011. He's been raising money for charity as part of those videos/streams. He's roughly 40% of the way there.
Note these are separate issues, caused by the same underlying float precision problem.
Some game engines solve the jittery rendering by moving the world relative to the player/camera, rather than the other way around (even though that's the more intuitive arrangement). This way all your shaders work with small, accurate floats relative to the camera at the origin, no matter where the player is in the world.
But implementing worldgen with relative coordinates would be much more complex.
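For anyone curious, a minimal sketch of that camera-relative trick (types and names are my own, purely illustrative), assuming positions are kept in doubles on the CPU:

```rust
// Hypothetical sketch: keep world positions in f64 on the CPU, and subtract
// the camera position *before* narrowing to f32, so the GPU only ever sees
// small, precise offsets near the origin.
struct Vec3d { x: f64, y: f64, z: f64 }

fn to_render_space(world: &Vec3d, camera: &Vec3d) -> [f32; 3] {
    [
        (world.x - camera.x) as f32,
        (world.y - camera.y) as f32,
        (world.z - camera.z) as f32,
    ]
}
```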
Another YouTuber beat him to the punch about 2 years ago actually! Here's the moment from his stream where he finally reached the Farlands [volume warning]: https://youtu.be/VAvQ_kT73W4?t=25696 (at the 7:08:17 mark)
I've encountered that, too, in my Excel 97 easter egg reproduction. My terrain loops in the shader, but camera position is governed by the JavaScript, so it can get weird far from origin if I don't sanitize.
Here's how the terrain normally looks, zoomed out with the grid on:
There are two weird phenomena happening there: the lack of camera position precision causes the terrain to shift left and right, and a little further out you can see that all the terrain quads collapse in one dimension for some reason.
Someone also made a game out of this, called Floating Point Leviathan:
I believe that every game has this problem if you are far enough away from the origin.
3kliksphilip did a great video on this subject: https://www.youtube.com/watch?v=eK7eNgiQfhk
I first encountered this in Active Worlds around 2000 or so, as a teenager. I managed to get the head developer to check it out, and he seemed surprised by it.
I've used a similar trick for decades: for example, I put it in the code for my Hypnocube (www.hypnocube.com) product in 2005. I used it in games and digital art projects before (and after) that.
For embedded or low-resource computing, sin/cos may be expensive, so I use a table-based fixed-point version. I pick the table size to be a power of 2, which makes lots of things easier. Then to make time wrap, I use a large power of 2, which is exactly the same as this trick with base 10 replaced by base 2 (and using fixed-point math).
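A sketch of what I mean, with made-up sizes (this is the shape of the idea, not the actual Hypnocube code):

```rust
// Power-of-2 sine table with fixed-point samples. Masking the phase makes
// wraparound exact: the base-2 analogue of the reset-at-a-power-of-10 trick.
const TABLE_BITS: u32 = 8;
const TABLE_SIZE: usize = 1 << TABLE_BITS; // 256 entries = one full period

fn build_table() -> [i16; TABLE_SIZE] {
    let mut t = [0i16; TABLE_SIZE];
    for i in 0..TABLE_SIZE {
        let angle = std::f64::consts::TAU * i as f64 / TABLE_SIZE as f64;
        t[i] = (angle.sin() * 16384.0) as i16; // Q1.14 fixed point
    }
    t
}

// `phase` is a free-running integer counter; its overflow is seamless by
// construction, since the mask only ever keeps the low bits anyway.
fn fixed_sin(table: &[i16; TABLE_SIZE], phase: u32) -> i16 {
    table[(phase as usize) & (TABLE_SIZE - 1)]
}
```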
You also hit problems where delta times can go negative, so those also need to be wrap-aware. In short, I always make a timing module: it tracks time (and stretches it as needed) and doles out a few things used everywhere: a delta frame time, a large time (say 64 bits of nanoseconds, for a 584-year wraparound), and a capped time (say 16 or 24 bits) to use in places where you know the wrap amount, with space left over so computations don't overflow.
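For concreteness, the interface might look something like this (names invented, not from any real codebase):

```rust
// Hypothetical sketch of such a timing module.
struct GameClock {
    now_ns: u64, // large time: 64 bits of nanoseconds, wraps after ~584 years
}

impl GameClock {
    /// Per-frame delta in seconds, clamped so it can never go negative
    /// even if the underlying counter wraps.
    fn delta_sec(&self, prev_ns: u64) -> f32 {
        self.now_ns.saturating_sub(prev_ns) as f32 * 1e-9
    }

    /// A capped time that wraps at 2^24 ms, leaving headroom so later
    /// 32-bit arithmetic on it can't overflow.
    fn capped_ms(&self) -> u32 {
        ((self.now_ns / 1_000_000) as u32) & ((1 << 24) - 1)
    }
}
```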
As far as I know the Hypnocube never repeats nor does anything flicker at any time due to bad wraparounds. But that took work to ensure.
I remember seeing Hypnocube references and your site probably around 2005 or 2006, but it was so far beyond me at the time I just wrote it off as "cool OK". YouTube wasn't quite what it is today so I never did get to see what it looks like in motion. Today I assume similar things can be done extraordinarily cheap and easily but I still have no idea how. I'm not motivated enough right now to ask, but I just wanted to thank you for giving younger-me's friends something exciting to talk about for a short while.
I think that instead of this, my solution would be to have each function that depends on dt accumulate the dt itself, and reset it at its individual period.
A Rust trait with an associated const would help with this; a hypothetical sketch:
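```rust
// Each animation owns its clock and wraps it at its own period,
// declared as an associated const on the trait.
trait PeriodicAnimation {
    /// Loop period in seconds; wrapping here keeps the local time small.
    const PERIOD: f32;

    /// Mutable access to this animation's private accumulated time.
    fn local_time(&mut self) -> &mut f32;

    /// Accumulate the frame delta and wrap, so precision never degrades.
    fn advance(&mut self, dt: f32) -> f32 {
        let t = self.local_time();
        *t = (*t + dt) % Self::PERIOD;
        *t
    }
}

struct Waves { t: f32 }

impl PeriodicAnimation for Waves {
    const PERIOD: f32 = 8.0; // illustrative: this effect loops every 8 seconds
    fn local_time(&mut self) -> &mut f32 { &mut self.t }
}
```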
But then you would need a local dt for each time-dependent function, and shaders don't really have persistent, packaged-up member state in structs the way you might expect coming from Rust. That means you would have to maintain a massive list of dt's, one for each time-dependent function your shader calls, and then change this list every time you introduce or remove a time-dependent function in the shader code. It's a solution that only works in a language that can hide the complexity from the programmer, like Rust.
this is about shaders, which have no persistent memory even pixel-to-pixel on the same frame. there is no way to accumulate over time. and calculating outside of the shader would miss the point.
so you're stuck doing it from scratch every pixel. that's fine, shaders are fast.
at most it might make sense to calculate a truncated time globally per frame and provide that as a uniform.
Would it be useful to have a 'fraction' type that represents values in [0,1] with uniform precision?
Then instead of time deltas, we could work with fractions of the animation period.
When switching from float to fraction, we can remove code that handles float's edge cases: +inf, -inf, NaN, -0.0, non-canonical encodings, and loss of precision.
We have various integer types, and e.g. a 0.32 or 0.16 fixed-point representation is trivial on top of integer types; it's also the only efficient uniform representation on [0, 1) (which is probably more valuable than [0, 1]).
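A minimal sketch of such a type, as a 0.32 fixed-point value backed by a u32 (my own construction, just to illustrate):

```rust
/// Hypothetical "fraction" type: 0.32 fixed point on [0, 1), uniform precision.
#[derive(Clone, Copy)]
struct Frac(u32);

impl Frac {
    fn from_f64(x: f64) -> Self {
        Frac((x.fract() * 4294967296.0) as u32) // scale by 2^32
    }
    fn to_f32(self) -> f32 {
        self.0 as f32 / 4294967296.0
    }
    /// Wrapping addition is exact arithmetic modulo 2^32, i.e. modulo 1.0:
    /// no NaNs, no infinities, no precision loss over time.
    fn advance(self, delta: Frac) -> Frac {
        Frac(self.0.wrapping_add(delta.0))
    }
}
```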
This is a brand-new post. I'm surprised he's still blogging on the site for The Witness, given that it came out six years ago and I don't think it's seen any activity in terms of ports or content updates since the iOS port in 2017. Is it even still getting patches?
The blog suggests that you need to use integers to describe time because floats have problems. Granted.
Why limit oneself to absolutely having to describe it in a single integer value? Why not some wrapper around a handful of integer values that can handle a much bigger max?
It doesn't suggest using integers. It suggests resetting your time uniform every 1000, 10000, 100000... etc.; that is, at ten raised to some integer power. The sole reason for picking a number like that is that programmers tend to write frequency constants with a limited number of decimal digits, like 32.768, rather than arbitrary fractional values. If you use 32.768 and reset your time uniform at 1000, the functions using 32.768 will still loop seamlessly, because 32.768 × 1000 = 32768 is a whole number of cycles.
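A toy version of that relationship (constants chosen by me for illustration):

```rust
// Reset the time uniform at a power of ten; any frequency constant with at
// most 3 decimal digits then completes a whole number of cycles per wrap,
// so the periodic function is continuous across the reset.
const WRAP: f64 = 1000.0;

fn wrapped_time(now_sec: f64) -> f32 {
    (now_sec % WRAP) as f32 // stays small, so f32 keeps fine precision
}

fn shimmer(t: f32) -> f32 {
    // 32.768 * 1000 = 32768 exactly, i.e. an integer number of cycles
    (t * 32.768 * std::f32::consts::TAU).sin()
}
```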
Double precision operations are much slower on GPUs. This can get very bad indeed for certain optimizations, like LUTs. 32 bit integers can accumulate much more time delta than floats without precision errors, but have similar problems.
You can pass in an integer and convert it to a float, but that doesn't really solve any problems. The accumulated time is being used in functions that noticeably change over a dozen milliseconds. The total accumulation is simply too large for floats to represent with that precision; you would also need to convert most of the math surrounding the time uniform.
It is a much better solution to limit the time uniform. The periodic functions depending on it are sensitive to millisecond changes and loop every 100-10000 milliseconds; there's no reason for time to ever be much larger than 10000.
Not only is it not trivial, it also requires a 4D texture. You may have a nice effect going on that uses a texture with fewer dimensions, and looping it will be even harder... or ugly.
You could simply use a single 64 bit integer in, say, nanoseconds. That would give you 584 years of range.
Just convert the integer into a float before passing it into the shader. For periodic effects, apply the appropriate modulo. Fog doesn't change very rapidly, so if wrapping is a pain, you could just accept the loss in precision. You could round the value as part of the conversion so that the precision doesn't change over the range. With 1 second precision, you should be good for a few months with a 32 bit float.
> Just convert the integer into a float before passing it into the shader.
That solves no problems at all. The number needs to stay an integer until after it is fed to a periodic function, which will restore it to a small enough number to be precisely represented as a float.
> Fog doesn't change very rapidly, so if wrapping is a pain, you could just accept the loss in precision. You could round the value as part of the conversion so that the precision doesn't change over the range.
That doesn't help either. Unless you round the time uniform CPU-side (sending a counter that is incremented once per second to the fog shader, and a different counter incremented once per millisecond to the shimmer shader), you're still sending a giant number to a function that is periodic over a 1,000,000x smaller window. Precision errors will still cause wildly varying outputs.
Sending separate uniforms only solves the problem for very, very slowly varying functions.
> With 1 second precision, you should be good for a few months with a 32 bit float.
Human vision is exceptionally well-tuned for noticing sudden changes, even relatively subtle ones. A gradual change over 1 second can be hundreds of times larger than a sudden change before it's noticeable.
That is what I'm proposing: use an integer representation on the CPU as the source of truth, then convert it to various floats as needed by the shaders, passed in as arguments to the shader on each frame. I haven't touched GPU code in a decade, but that's a reasonable thing to do on each frame, isn't it?
```
// now_ns: 64-bit integer master clock in nanoseconds (CPU side).
// Take the modulo while still an integer, so the float stays small and precise:
float periodic_animation_sec = (now_ns % periodic_animation_period_ns) * 1e-9f;
// Integer division first: whole seconds stay exactly representable for months.
float fog_sec = now_ns / 1000000000; // or use a power of 2
```
Regarding the acceptable level of approximation for human perception, I think my point still stands. My assumption is that the frequency content of fog is low enough that pixel colors won't change appreciably over the course of a second. Want 30Hz? That will give you a few days before the precision degrades. 10Hz? About a week or so. Or, find a different solution, like using the CPU to reset the state of the shader every so often. Figure out how to do that seamlessly, or just make sure there's an in-game sunny day every few hours.
You can't fit a 64 bit integer into a 32 or even 64 bit floating point number, since even a 64 bit float has only 53 bits of mantissa.
I'm specifically suggesting transforming the integer in an intelligent way to map well to the mantissa of the floating point representation, based on the intended usage. High frequency animations can chop off the most significant bits, while low frequency animations can chop off the bottom bits.
If you aren't able to constrain the design or make assumptions about frequencies, then it seems like you have to instead parameterize time as some sort of tuple so that you have more bits, such as the sum of a coarse absolute and a fine offset, and write the shader to cope with it. That's how time APIs in many operating systems work, where there's integer seconds and integer micro- or nanoseconds.
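Something like this, say (field names are mine, mirroring the seconds + nanoseconds split):

```rust
// Hypothetical coarse + fine split, like OS time APIs.
struct SplitTime {
    coarse_sec: u32, // whole seconds; exact in f32 up to 2^24 (~194 days)
    fine_ns: u32,    // offset within the current second
}

fn split(now_ns: u64) -> SplitTime {
    SplitTime {
        coarse_sec: (now_ns / 1_000_000_000) as u32,
        fine_ns: (now_ns % 1_000_000_000) as u32,
    }
}

// Each half converts to a small, precise f32 uniform on its own:
fn uniforms(t: &SplitTime) -> (f32, f32) {
    (t.coarse_sec as f32, t.fine_ns as f32 * 1e-9)
}
```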
you'd still end up either losing precision or using a truncated (cycling) part when you wanted to use it in a float calculation. it's effectively the same.
1. Integers would make the situation easier, but still overflow after ~50 days. You can't just increment delta every frame; frame times vary significantly, and if you aren't precise to the millisecond things will jump around.
2. It's not exactly cheap, and I don't think compilers put much effort into making sure you actually retain that precision. I have only really used floats in shaders though, and don't know what would happen.
3. I'm pretty sure that in practice you'd lose out on a lot of hardware-accelerated functions, doing trig and interpolations with multiple messy conversions. It's also possible you'd fuck up some compiler optimizations.
> if you aren't precise to the millisecond things will jump around
It is correct that precision is very important here, but, a millisecond is way too coarse: at 120fps, a millisecond is 1/8 of the frame time, and you'd get horrible jitter.
Ehh. The framerate doesn't actually matter that much, because in the end the actual pixel changes color at the same rate regardless of FPS. No shader is periodic at 120 Hz; they're very rarely periodic at even 10 Hz.
10 Hz is already a slow strobe light; 1 ms deltas mean that each flash is within ~1% of the correct color.
If you're doing something like raymarching in the pixel shader, then you might want sub-millisecond resolution. In 99% of normal shaders, I don't think so. That kind of precision comes into play more with moving objects, where a tiny time delta can mean the difference between a pixel being completely lit or completely dark. Even then though, bad time resolution is just as likely to manifest as motion blur or something.
I think everyone who responded to you makes a point that just doesn't matter, because it's in a shadow of a much more significant problem:
If your formula involves π, multiplying it by an integer will produce a float, with its precision problems.
Though you could define pi as the integer 31415... But if Hwillis is right that an integer would originally overflow after ~50 days, then this one, being 10000x larger, would overflow after ~7 minutes.
And finally, if you use sine or cosine, which take radians as input, any whole number passed to them (which probably has been converted to a float by that point, but let's assume it hasn't) will step in multiples of 57.2957795° when expressed in degrees. Almost a sixth of a full rotation is way too big a step for any smooth transition.
Out of curiosity I decided to check if multiplying 1 radian could result with a very big, but visually (due to wrapping around 360°) only a little step:
```
from math import pi
print(min((180/pi*i % 360, i) for i in range(1, 100000)))
```
Apparently 19 radians is ~1088.62° (mod 360° =~ 8.62°)
44 radians is ~2521.01° (mod 360° =~ 1.01°)
377 radians is ~21600.509° (mod 360° =~ 0.509°)
710 radians is ~40680.00345° (mod 360° =~ 0.00345°)
The last result is surprisingly good, but isn't it a spoonful of honey in a barrel of tar? :)
BTW, changing min to max in the Python script will also give useful results (close to 360 rather than close to 0). Worse results for low multipliers but better results near the end of the range.
int % float -> float is not a native operation I'm aware of on any CPU or GPU. Even C doesn't have this operation -- the % operator only applies to integers, and the float variant is a function, fmodf(float, float); shading languages are similar. Also note that GPUs don't have any sort of integer division instruction.
I'm confused why you need a global total game-time anyways, or if you have one why it has to be precise. Could you use your precise time in float, and then increment a larger but less-precise total gametime value every 10 minutes?
It doesn't have to be game time; as the article says, it could be time since some level or other trigger. It's basically about any time-based counter you need for a time-based effect, such as a predefined cyclical wind effect on trees, or waves in water. The article also explains why it has to be precise: looping effects will behave oddly if the loop doesn't occur precisely. Splitting the number into a large part and a precise part doesn't actually solve anything; it just moves the problem from "how do I make arbitrary effects precisely loop based on the time" to "how do I make arbitrary effects precisely loop based on the two parts of the time".
It doesn't move it to 100 days; it'd move the problem to 10 minutes. Shaders can't just take a large number as a large piece and a small piece unmodified; solving for this in the logic is the same as solving the original cycle-matching problem.
When I need something like that, I usually pass a [0..1] float phase to the shaders, and when updating the number on the CPU I wrap the value into that range after the increment.
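Roughly like this (a sketch of the CPU-side update, names mine):

```rust
// Advance a per-effect phase in [0, 1) and wrap it on the CPU, then upload
// `phase` as the shader uniform instead of raw time. Precision never
// degrades because the value never grows.
fn update_phase(phase: f32, dt_sec: f32, period_sec: f32) -> f32 {
    let p = phase + dt_sec / period_sec;
    p - p.floor() // wrap into [0, 1)
}
```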
i have been doing this for a while playing around with shadertoy. i don't think i learned it from anywhere, i just wanted some things to match up and repeat over hours and it was obvious that sin/cos are periodic, and cycling the time value would improve the imprecision at the seam.
seeing it delivered as a clever trick in a blog post makes me wonder if i'm more competent than i thought, or if everyone else is generally less competent than i thought.
The trick isn't so much about looping the sine/cosine. It's about synchronizing the maximum time value passed to a shader with the maximum precision of an arbitrary multiplicand used in the shader.
> How do we ensure that, easily, in a way that people don't have to think about?
The relation is that the time should wrap from 10^x to 0, and in the shader the number of digits after the decimal point in any constant should be no more than x. (E.g. with x = 3: time wraps at 1000, and a constant like 32.768, with 3 digits after the point, completes exactly 32768 cycles per wrap.)
Every now and then I have to remind myself that things that are obvious to me aren't obvious to everyone else. And thus, sometimes its worth explaining "obvious" things.