
The approach is straightforward. "Every shot in Piper is composed of millions of grains of sand, each one of them around 5000 polygons." With enough compute power, you don't have to fake as much.



Unless you're looking at it very, very closely, does 5k-polygon sand look any different from 500-polygon sand, or even 50-polygon sand? Even the very close-ups in the article look like they'd be much the same with a lot fewer polygons.


For reference, this is a 5040-triangle sphere (flat-shaded so you can see the faces):

https://imgur.com/a/IZLxQyZ

and this is 520 triangles:

https://imgur.com/a/EGxKsV5


No, it does not. 5000 polygons condensed into a fraction of a pixel become a matter of statistical distributions. Effects like displacement and bump mapping look different once you filter them down to one value per pixel: the illumination changes because you get one averaged normal instead of the distribution of normals that would occur naturally.

Having this much detail is a brute-force, overkill way to get the distribution of samples that you really want at the sub-pixel level, but sometimes these things are done in film because, with a lot of pain, you can make it work.

In this case, if they actually did that, it was through instancing, which in a sense can be thought of as a lookup with polygons to ultimately get that distribution. A huge number of samples is usually needed to deal with aliasing as well.
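
To make the filtering point concrete, here's a toy sketch (plain Python, made-up numbers, nothing from Pixar's actual pipeline). Shading a single averaged normal is not the same as averaging the shading over the distribution of normals it replaced:

    import math, random

    def spec(n, l, shininess=200.0):
        # toy Blinn-Phong-style specular lobe against light direction l
        d = max(0.0, n[0]*l[0] + n[1]*l[1] + n[2]*l[2])
        return d ** shininess

    random.seed(1)
    light = (0.0, 0.0, 1.0)

    # a sub-pixel "bumpy" patch: many slightly perturbed unit normals
    normals = []
    for _ in range(10000):
        x, y = random.gauss(0, 0.15), random.gauss(0, 0.15)
        z = math.sqrt(max(0.0, 1.0 - x*x - y*y))
        normals.append((x, y, z))

    # (a) average the shading over the distribution of normals
    distributed = sum(spec(n, light) for n in normals) / len(normals)

    # (b) shade the single averaged normal (what naive filtering gives you)
    ax, ay, az = (sum(n[i] for n in normals) / len(normals) for i in range(3))
    m = math.sqrt(ax*ax + ay*ay + az*az)
    filtered = spec((ax/m, ay/m, az/m), light)

    print(distributed, filtered)  # (a) comes out far dimmer than (b)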


Why bother with Cook-Torrance shading (a statistical model of the micro-geometry) when you can just use the real microscopic geometry ...
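
For anyone unfamiliar, the "statistical model" part is cheap precisely because it collapses all that micro-geometry into one closed-form density. A minimal sketch, assuming the common GGX/Trowbridge-Reitz distribution (my choice for illustration, not necessarily what Pixar uses):

    import math

    def ggx_ndf(cos_theta_h, alpha):
        # microfacet normal distribution function D(h): the density of
        # microscopic facet normals around the half-vector h, i.e. the
        # statistical stand-in for modeling the actual bumps
        a2 = alpha * alpha
        denom = cos_theta_h**2 * (a2 - 1.0) + 1.0
        return a2 / (math.pi * denom * denom)

    # rough surface vs. near-mirror, evaluated at the lobe's peak
    print(ggx_ndf(1.0, 0.5), ggx_ndf(1.0, 0.05))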


That was my first question. The article mentions that the high number is for close-ups.


Also says:

> Every shot in Piper is composed of millions of grains of sand, each one of them around 5000 polygons.

I'm guessing the article is just confused. The important point, I think, is about using real geometry for sand particles rather than the beach being a surface with a displacement map.


But with instancing. So they could have a few dozen grain types at 5K tris each, with the total count coming from instances; no need to load 5 billion triangles into memory.
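
Back-of-the-envelope, with my own guesses for the counts (the article only says "millions" of grains):

    grain_types = 50           # a few dozen unique grain meshes
    tris_each   = 5_000
    instances   = 1_000_000    # one million grains on screen

    unique_tris = grain_types * tris_each  # 250,000 tris actually stored
    naive_tris  = instances * tris_each    # 5,000,000,000 without instancing
    inst_bytes  = instances * 64           # ~64 B each: a transform + mesh id
    print(unique_tris, naive_tris, inst_bytes / 2**20, "MiB")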


What does that look like from a data structure perspective? Each grain has a bounding box and you only look at the geometry for that grain if a ray crosses the box? (I'm way, way out of my wheelhouse here, if it's not already obvious.)


Reduced to the absurdly simple case, I think that is pretty much how it works. There is an acceleration structure that is traversed for each ray to reduce the search space for ray-geometry intersections (probably a bounding volume hierarchy of some sort), and with instancing only a very small subset of all the grains of sand actually needs to be stored and processed. I imagine the sand is modeled using some particle system where each particle is actually a small scene of sand grain models in itself, with its own BVH etc., and the raytracer somehow reuses whatever happens inside them for every other particle that has the same lighting conditions.
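
Structurally, something like this (all toy data and hypothetical simplifications on my part, not Pixar's actual renderer):

    def ray_hits_box(orig, direction, box_min, box_max):
        # standard slab test; assumes no zero direction components, for brevity
        t0, t1 = 0.0, float("inf")
        for a in range(3):
            ta = (box_min[a] - orig[a]) / direction[a]
            tb = (box_max[a] - orig[a]) / direction[a]
            t0, t1 = max(t0, min(ta, tb)), min(t1, max(ta, tb))
        return t0 <= t1

    # one grain *type* = triangles stored once; one grain *instance* = just
    # a bounding box, a mesh id, and a transform into the shared mesh
    grain_meshes = {0: "...imagine ~5000 triangles here..."}  # loaded once
    instances = [
        {"box": ((0, 0, 0), (1, 1, 1)), "mesh": 0, "offset": (0, 0, 0)},
        {"box": ((5, 0, 0), (6, 1, 1)), "mesh": 0, "offset": (5, 0, 0)},
    ]

    def trace(orig, direction):
        for inst in instances:  # a real renderer walks a BVH here, not a list
            if ray_hits_box(orig, direction, *inst["box"]):
                # move the ray into the instance's local frame, then test the
                # shared mesh -- the geometry itself is never duplicated
                local = tuple(o - t for o, t in zip(orig, inst["offset"]))
                print("would intersect mesh", inst["mesh"], "local origin", local)

    trace((0.5, 0.5, -1.0), (0.01, 0.01, 1.0))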

It’s probably way way more complicated than that though. Extremely interesting stuff.


They have some close-ups of the sand, and it affects light transport.


Presumably the difference is in light reflections on the sand? I don't know.


I believe the simulation in Houdini works better with smoother meshes. Plus, shaders look better on higher-polygon geometry when the object is meant to be smooth.



