Attempting to escape traditional rendering pipelines with a pure compute not-rasterizer/not-quite-raytracer. It can't use any of the rasterizer or raytracing hardware on modern GPUs, but in return it gets a lot of flexibility.
Some early benefits:
1. Being free of linear transforms for projection allows it to use other projections for free, like stereographic fisheye. (You can also design the projection to map onto a VR headset's view without needing a warp shader, giving a better sampling distribution.) A ray-generation sketch follows this list.
2. Global acceleration structures with fast traversals and flexible intersection routines can make full-resolution, noise-free soft shadows cheap (see the occlusion sketch below).
3. All input and scene state is uploaded asynchronously; the GPU samples it right before rendering, so full input latency can be as low as one monitor refresh interval (a snapshot-ring sketch follows this list): https://twitter.com/RossNordby/status/1335351074069368832
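To make the projection point concrete, here's a minimal CPU-side sketch of per-pixel ray generation under an inverse stereographic mapping. The function name and the fovScale focal-length convention are my own illustration, not the project's code; the point is that swapping projections is just swapping this one function, with no projection matrix anywhere.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Map a pixel to a world-space ray direction with an inverse
// stereographic ("fisheye") projection instead of a perspective
// matrix. fovScale plays the role of a focal length (illustrative).
Vec3 stereographicRay(int px, int py, int width, int height, float fovScale)
{
    // Normalized image coordinates in roughly [-1, 1], aspect-corrected.
    float aspect = float(width) / float(height);
    float x = (2.0f * (px + 0.5f) / width - 1.0f) * aspect;
    float y = 1.0f - 2.0f * (py + 0.5f) / height;

    // Stereographic mapping: image radius r corresponds to polar angle
    // theta = 2 * atan(r / (2 * fovScale)) off the view axis.
    float r = std::sqrt(x * x + y * y);
    float theta = 2.0f * std::atan(r / (2.0f * fovScale));

    // The xy part has length sin(theta), so the direction is unit length.
    float s = (r > 0.0f) ? std::sin(theta) / r : 0.0f;
    return { x * s, y * s, std::cos(theta) };
}

int main()
{
    // The center pixel should map to (approximately) the forward axis.
    Vec3 d = stereographicRay(640, 360, 1280, 720, 1.0f);
    std::printf("%f %f %f\n", d.x, d.y, d.z);
}
```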
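For the soft shadows, the project's actual traversal and intersection routines aren't described here, so this is only a sketch of the underlying idea: accumulate fractional occlusion of the light's solid angle analytically rather than firing stochastic shadow rays, which is why the result is noise-free at full resolution. A flat sphere list stands in for the real acceleration structure, and the linear overlap ramp is a crude assumption of mine, not the project's intersection math.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; float radius; };

// Approximate fractional visibility of a disc light with angular radius
// lightAngle (radians, assumed nonzero), lightDist away from p along the
// unit direction toLight. Each occluder dims the light by a linear ramp
// on the angular overlap between it and the light disc: one deterministic
// evaluation per occluder, no shadow-ray sampling, hence no noise.
float softShadow(Vec3 p, Vec3 toLight, float lightDist, float lightAngle,
                 const std::vector<Sphere>& occluders)
{
    float visibility = 1.0f;
    for (const Sphere& s : occluders)
    {
        Vec3 toCenter = sub(s.center, p);
        float t = dot(toCenter, toLight);   // closest approach along the ray
        if (t <= 0.0f || t >= lightDist)
            continue;                       // behind the point or past the light
        Vec3 closest = { toLight.x * t, toLight.y * t, toLight.z * t };
        Vec3 offset = sub(toCenter, closest);
        float miss = std::sqrt(dot(offset, offset)); // perpendicular miss distance
        float occAngle = s.radius / t;               // occluder's angular radius
        float separation = miss / t;                 // angular offset from the ray
        // 0 when the discs don't overlap, 1 when the light is fully covered.
        float occ = std::clamp(
            (lightAngle + occAngle - separation) / (2.0f * lightAngle), 0.0f, 1.0f);
        visibility *= 1.0f - occ;
    }
    return visibility;
}

int main()
{
    // A small sphere directly between the point and the light: fully shadowed.
    std::vector<Sphere> occluders = { { { 0.0f, 1.0f, 0.0f }, 0.25f } };
    float v = softShadow({ 0, 0, 0 }, { 0, 1, 0 }, 10.0f, 0.05f, occluders);
    std::printf("visibility = %f\n", v);
}
```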
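For the input latency point, the post doesn't spell out the mechanism, but one common way to get "the GPU samples input right before rendering" semantics is a small ring of snapshots that the input thread publishes into at its own rate, with the renderer always reading the freshest completed slot. Here's a CPU-side sketch of that pattern; the type names are illustrative, and in the real thing the consumer would be the GPU reading a persistently mapped buffer.

```cpp
#include <atomic>
#include <cstdint>
#include <cstdio>

struct InputSnapshot
{
    float mouseX = 0, mouseY = 0;
    uint32_t buttons = 0;
};

// Single-producer ring of input snapshots. The input thread publishes
// whenever it likes; the consumer grabs the most recently completed
// slot just before rendering, so no frames queue up behind stale input.
// Assumes the consumer copies a snapshot out faster than the producer
// can wrap all the way around the ring.
class InputRing
{
    static constexpr uint32_t N = 4;
    InputSnapshot slots[N];
    std::atomic<uint32_t> latest{ 0 };
public:
    void publish(const InputSnapshot& s)
    {
        uint32_t next = (latest.load(std::memory_order_relaxed) + 1) % N;
        slots[next] = s;                               // fill the slot...
        latest.store(next, std::memory_order_release); // ...then publish it
    }
    InputSnapshot sample() const
    {
        // Acquire pairs with the release above: the slot contents are
        // guaranteed visible once the index is observed.
        return slots[latest.load(std::memory_order_acquire)];
    }
};

int main()
{
    InputRing ring;
    ring.publish({ 10.0f, 20.0f, 1u });
    InputSnapshot s = ring.sample();
    std::printf("%f %f %u\n", s.mouseX, s.mouseY, s.buttons);
}
```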
I'm still trying out some permutations for the traversals (mostly different ways of sharing traversal work), but all the prototypes are looking pretty promising.
In the long run, the plan is to push beyond hardware rasterizer limitations with high geometric density and to avoid the zoo of problems that comes with screenspace techniques. That means things like analytic approximations for antialiasing, transparency without endless pain, and eventually fully decoupling shading from screenspace, moving it into other dynamically prefilterable spaces to open up some forms of supercheap global illumination. The end goal is high framerates, extremely high geometric density, extremely low latency, and very high image clarity (no screenspace temporal antialiasing!).
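As a toy instance of the analytic-antialiasing idea: for a straight edge and a box filter, pixel coverage has a closed form, so no supersampling or temporal accumulation is needed. This 1D version is my own minimal illustration, not the project's method.

```cpp
#include <algorithm>
#include <cstdio>

// Analytic antialiasing of a straight edge: integrate the edge against
// a one-pixel box filter instead of supersampling. signedDist is the
// distance in pixels from the pixel center to the edge, positive on
// the covered side; the result is the exact covered fraction.
float edgeCoverage(float signedDist)
{
    return std::clamp(signedDist + 0.5f, 0.0f, 1.0f);
}

int main()
{
    // An edge passing through the pixel center covers exactly half of it.
    std::printf("%f\n", edgeCoverage(0.0f)); // 0.5
}
```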
Going to be an on-and-off project for at least a couple more years, but so far so good.