Ray Tracing Essentials, Part 1: Basics of Ray Tracing (nvidia.com)
231 points by mariuz on Feb 18, 2020 | 40 comments



On the topic of Ray tracing books:

I'm currently at chapter 9 of The Ray Tracer Challenge and I have to say it's a wonderful book; the stellar reviews are well deserved.

It holds your hand all the way, and that is such a breath of fresh air for someone like me who's not a math guy and doesn't instantly 'get' modern graphics APIs, since there is a lot of prerequisite knowledge encoded in those APIs.

I also like to understand things from first principles, and it helps when those principles are explained in a straightforward, even somewhat humorous way and I get to see results right away.

Learning practical things would be so much easier if the other books on my shelf had the same approach.

It's already in the top 3 of my favorite books, along with The Nature of Code.


+1 to Ray Tracer Challenge.

I really think the key thing for it clicking with me was building up the math foundation via TDD. Sure, I could have used C#'s Matrix4x4 and Vector classes and just browsed through the beginning of the book, but actually implementing my own tuples, vectors, points, and matrices really helped me understand "here's what I'm trying to achieve" and then "here's the math to implement it".


I had a really fun time going through the Ray Tracer Challenge recently! It really is an amazing introduction to ray tracing.

I've since moved on to the pbr book[0] and I'm finding myself getting much more lost in the math. I'm doing my best to brush up and/or learn all of it, but it's a bit daunting.

It's really telling how reassuring the tests are in the RTC. Right now I can re-implement something from the pbr-book, but I can't really say if I got it right other than by doing a render (and even then, it can be hard to tell).

My plan is to at least go through and write my own tests by doing the math by hand and trying to verify my implementations.

I really wish there were some intermediary book between the two.

[0] http://www.pbr-book.org/


I wholeheartedly agree with you. The tests do help a lot.

Funnily enough, I own PBR too (it's in my reading queue), and, having skimmed through it, it seems math-heavy. I think I'll leave it for last. My current queue is:

- RTC (for fun),

- 3D Math Primer for Graphics and Game Development, by Dunn (to solidify the math part)

- Foundations of Game Engine Dev - Mathematics, by Lengyel (because when it comes to math, overkill is underrated)

- CGPP (to get the basics down)

... not sure about the order for my other books, but then...

- OpenGL SuperBible (second time around, sadly)

- Real-Time Rendering

- PBR

For the math books, I was thinking to do the same thing as you: do the math by hand, and then translate them into tests.


> - OpenGL SuperBible (second time around, sadly)

I tried using the SuperBible to learn OpenGL a few years back, but it always seemed a bit too dense for my taste. Since OpenGL itself is an API specification, and you would usually learn the basics of 3D graphics before delving into it, I recommend the fantastic Learn OpenGL site (https://learnopengl.com). It goes through the basics of OpenGL from the ground up and touches on more advanced techniques, such as shadow mapping and deferred shading. It is a fantastic site and a great resource for learning the API.


Thanks for the advice, I'll give it a try!


Well, there is the book "Ray Tracing from the Ground Up" [1], which, although a bit dated (it's from 2007), gives in-depth discussions of many of the topics. The author discusses possible pitfalls along the way as well. A few chapters are still math-heavy (the ones covering the principles of stochastic ray tracing, a.k.a. Monte Carlo ray tracing).

A helpful resource for me, personally, was the educational ray tracer "Nori" [2], which came with 5 assignments covering the fundamentals of a ray-tracing system (intersection acceleration, Monte Carlo sampling, basic and advanced integrators, BRDFs, even microfacet material models). The assignments gave hints on how to integrate those features into the ray tracer, plus they provided ways to validate ray-traced results. Currently, the assignments are removed from the Nori website, but one can find them using the Wayback Machine.

[1] "Ray Tracing from the Ground Up", by Kevin Suffern, September 2007, http://www.raytracegroundup.com/

[2] Nori: an educational ray tracer, Dr. Wenzel Jakob, https://wjakob.github.io/nori/# (using Wayback machine to access assignments https://web.archive.org/web/20200110040505/https://wjakob.gi...)


Note that, as of the third edition, Wenzel Jakob has joined Pharr and Humphreys as an author of Physically Based Rendering.


Peter Shirley's Ray Tracing in One Weekend series (https://raytracing.github.io/) is a great way to learn enough about ray tracing to implement it yourself.


It is a great project for learning a new language. I used it to practice Rust.

You get to implement vectors with basic operations on them, which gives you a chance to practice some abstractions. It's also good to create some unit tests to ensure your vector operations are correct. There's also good reason to parallelize your code and run benchmarks. Abstractions, unit tests, parallelism, benchmarks: you have an excuse to try them all.
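For instance, here's a minimal sketch of the kind of vector type you end up building, with one operator and one unit test (my own names, not from the book):

    use std::ops::Add;

    #[derive(Debug, Clone, Copy, PartialEq)]
    struct Vec3 {
        x: f64,
        y: f64,
        z: f64,
    }

    impl Add for Vec3 {
        type Output = Vec3;
        fn add(self, o: Vec3) -> Vec3 {
            Vec3 { x: self.x + o.x, y: self.y + o.y, z: self.z + o.z }
        }
    }

    impl Vec3 {
        // Dot product: the workhorse of shading math.
        fn dot(self, o: Vec3) -> f64 {
            self.x * o.x + self.y * o.y + self.z * o.z
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn perpendicular_axes_have_zero_dot_product() {
            let x = Vec3 { x: 1.0, y: 0.0, z: 0.0 };
            let y = Vec3 { x: 0.0, y: 1.0, z: 0.0 };
            assert_eq!(x.dot(y), 0.0);
        }
    }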


Oh, that’s a great idea! The main reason I still haven’t learned Rust is I didn’t have a project to use with it, but this tutorial is something else I’ve wanted to do, so it’s a perfect match.


I love seeing this book listed. I picked it up 4-ish years ago while learning Rust. I was converting the code to Rust and found a small bug[1] because I could not convert the code as it was. Peter was amazingly responsive and encouraging. I highly recommend this and the second book.

[1] https://github.com/RayTracing/raytracing.github.io/blob/7e2a...


Oh cool, congrats on finding the bug!


Less well known is "Rasterization in One Weekend"

https://tayfunkayhan.wordpress.com/2018/11/24/rasterization-...


For rasterization I would recommend the Tiny Renderer mini-course (https://github.com/ssloy/tinyrenderer/wiki). It aims to teach you how OpenGL works by having you implement your own OpenGL-like software rasterizer. It is a fantastic resource and one that I enjoyed thoroughly :)


I was reading through and noticed he does not (so far) normalize his vectors for things like ray direction or surface normals for lighting. He does give warnings, but I'm curious what kinds of bugs or rendering issues will manifest from this decision?


Incorrect results will manifest. If you take the dot product of a surface normal and another vector and they aren't unit length, the result will be scaled by the lengths of the vectors.

Ray direction and ray length might be combined into one vector that just stretches from the origin for ray tracing, but using that direction with a surface normal for dot products, reflection vectors, etc. without normalizing is going to give artifacts.
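To make the distortion concrete: n · l = |n||l|cos(theta), so any extra length scales the shading term directly. A tiny self-contained sketch (not from the book):

    // n . l = |n| |l| cos(theta): if either vector isn't unit length,
    // the cosine term gets scaled by the lengths.
    fn dot(a: [f64; 3], b: [f64; 3]) -> f64 {
        a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    }

    fn main() {
        let n = [0.0, 1.0, 0.0]; // unit surface normal
        let l = [0.0, 2.0, 0.0]; // light direction, length 2
        println!("{}", dot(n, l)); // 2.0: twice the true cosine, too bright

        let len = dot(l, l).sqrt();
        let ln = [l[0] / len, l[1] / len, l[2] / len]; // normalized
        println!("{}", dot(n, ln)); // 1.0: correct
    }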


What you say is true; however, not automatically normalizing ray directions after transformation can be useful in some cases, e.g. to avoid introducing floating point error when calculating points from t values.

See http://www.pbr-book.org/3ed-2018/Shapes/Spheres.html#Surface..., the paragraph beginning "A natural question to ask..."


I literally gave using the non-normalized ray for ray tracing as an example.

(Also, in practice floating point inaccuracy doesn't become a huge problem, since you have to design around floats not being exact in the first place. Spheres can also wind up being more finicky with precision, but are rarely used as primitives to trace against in production renderers. There isn't a single right way to do the tracing, but the shading does need normalized vectors for a lot of common operations.)


> Spheres can also wind up being more finicky with precision, but are rarely used as primitives to trace against in production renderers.

I've never heard this before! Interesting. Why is this?


It's because tracing a sphere is a quadratic equation solve. The guy in the video, Eric Haines, published an article about how to improve sphere-tracing precision. It's in the freely available Ray Tracing Gems book.

https://link.springer.com/content/pdf/10.1007%2F978-1-4842-4...


Spheres are the "hello world" shape of ray tracing, but are generally not used in production renderers for the following reasons: spheres are not that interesting to render, because hardly anything in the real world is a perfect sphere; and determining a ray-sphere intersection point requires solving at least one quadratic equation, which requires a square root, which is slow.

Triangle meshes are a better choice on both counts: they can be used to model arbitrarily complex shapes, and ray-triangle intersections are faster to compute.
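For reference, here's the textbook form of the ray-sphere test, showing where the quadratic and the square root come in (a sketch, not taken from any particular renderer):

    // Ray p(t) = o + t*d against sphere |p - c|^2 = r^2.
    // Substituting p(t) yields a quadratic a*t^2 + b*t + c = 0.
    fn hit_sphere(o: [f64; 3], d: [f64; 3], center: [f64; 3], r: f64) -> Option<f64> {
        let oc = [o[0] - center[0], o[1] - center[1], o[2] - center[2]];
        let a = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
        let b = 2.0 * (oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2]);
        let c = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - r * r;
        let disc = b * b - 4.0 * a * c;
        if disc < 0.0 {
            None // ray misses the sphere
        } else {
            // Nearest hit; the sqrt is the cost, and when -b and
            // sqrt(disc) are close in magnitude the subtraction loses
            // precision (the issue mentioned upthread).
            Some((-b - disc.sqrt()) / (2.0 * a))
        }
    }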


From the example in this video I'm not quite sure how you get anti-aliasing for free with sub-pixel sampling. I understand it in general, but in this example the light is being reflected off a diffuse surface. Presumably where the diffused ray goes has a large random component, so sub-pixel sampling provides very little, if any, benefit. Is my take on this correct?


Consider a scenario where almost half of a pixel (but not the center of it) is covered by some diagonal black object that's in front of a white object.

If you ray trace and shoot one ray in the center of the pixel then it will hit the white object, and no matter how many rays you spawn from that intersection point the part of the black object that's covering part of the pixel won't actually contribute to the final image. So with this technique the final pixel will be white.

With path tracing you shoot many initial rays at random sub-pixel positions, most miss the black object and hit the white one, but some hit the black one instead. So with this technique the final pixel will be gray, a mixture of the two objects' colors and therefore anti-aliased.
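A sketch of what that looks like in code (trace_ray is a hypothetical stub standing in for the real tracer; the rand crate supplies the jitter):

    use rand::Rng; // assumes the rand crate

    // Stub standing in for a real tracer: returns the color seen
    // along the ray through viewport position (u, v).
    fn trace_ray(_u: f64, _v: f64) -> [f64; 3] {
        [0.0, 0.0, 0.0]
    }

    // Average many rays jittered across the pixel's area. Rays near
    // an object edge land on different surfaces, so the average is a
    // mix of their colors: anti-aliasing as a side effect.
    fn pixel_color(px: u32, py: u32, samples: u32) -> [f64; 3] {
        let mut rng = rand::thread_rng();
        let mut sum = [0.0, 0.0, 0.0];
        for _ in 0..samples {
            let u = px as f64 + rng.gen::<f64>(); // random offset in [0, 1)
            let v = py as f64 + rng.gen::<f64>();
            let c = trace_ray(u, v);
            sum = [sum[0] + c[0], sum[1] + c[1], sum[2] + c[2]];
        }
        let inv = 1.0 / samples as f64;
        [sum[0] * inv, sum[1] * inv, sum[2] * inv]
    }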


While sub-pixel sampling provides little benefit for diffuse surfaces, sampling the same pixel multiple times does provide better smoothness across the surface of the object.

Since it's not much more computationally expensive to do sub-pixel sampling than to repeatedly sample the same point within the pixel, you might as well just use sub-pixel sampling.

Additionally, a pixel has no knowledge of its contents until you cast a ray. And even if the ray did hit a diffuse surface, you would have to do more math to determine whether the edge of that surface lies within the boundaries of that pixel. Might as well just use sub-pixel sampling.


When you shoot a ray from camera perspective, you've already chosen which pixel it's going to contribute to. "Platonic" pixels are infinitesimally small.

In reality, pixels on the screen and camera sensor have an area. If you choose a random position on that area, you get anti-aliasing "for free" because there's going to be variation in the direction of rays that contribute to the same pixel.


You actually don’t get anti-aliasing for free on camera sensors. Many cameras put a low-pass filter in front of the sensor specifically to avoid the problem of aliasing. More recently, high-end cameras have been omitting the filter in order to improve resolution. This occasionally does result in aliasing (called moiré by photographers) when taking pictures of fine-structured repeating patterns, however.


Video is really well done. Will be looking for the next one.


Slightly tangential question: does someone know of resources I might be able to use for plotting mesh surfaces, e.g. z = f(x, y)? All online resources seem to point to how to use surface plots in either matplotlib or gnuplot or some variant thereof. Thank you kindly!


Interesting video. I didn't get how the ray casting process forms the final picture in the eye.


There's a point in space that represents the lens of the observer's eye, and a rectangle in space that represents the viewport. This rectangle is divided into pixel-equivalent square areas. For each area, a sampling of one or more rays is drawn from the lens point through the bounds of the area, until it encounters a surface of the scene. At that point, the material rules of the surface might generate another ray for specular reflection, a cone for diffuse reflection, another ray for refraction, and also add the emissive light value from that material. If the specular or diffuse reflections encounter a light source or ambient light, they add some of that light to the pixel-equivalent.

The diffuse cones send out a sampling of rays and attenuate the light from the light source, based on how many of those rays hit it, instead of some other object.

Instead of drawing light onto the scene and calculating how much passes through the viewport to the lens, ray-tracing cheats by working backwards, because photons traveling backward in time follow exactly the same rules as those traveling forward in time. Every photon that can travel backwards in time from the eye to hit a light source must have emanated from a light source with exactly the right direction and polarization to enter the eye. So the only photons calculated are the ones that contribute to the scene as viewed by the eye.
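In loose Rust-flavored pseudocode, just to show the shape of the recursion (all the types and methods here are hypothetical placeholders, not any particular renderer's API):

    // One backward-traced sample: follow the ray from the eye into
    // the scene, let the material spawn the next bounce, and add up
    // the emitted light along the way.
    fn radiance(ray: Ray, depth: u32, scene: &Scene) -> Color {
        if depth == 0 {
            return Color::BLACK; // stop bouncing
        }
        match scene.nearest_hit(&ray) {
            None => scene.ambient(&ray), // escaped: ambient/background light
            Some(hit) => {
                let mut color = hit.material.emitted();
                // Specular reflection, a diffuse cone sample, or refraction,
                // depending on the material rules.
                if let Some(bounce) = hit.material.scatter(&ray, &hit) {
                    color = color
                        + hit.material.attenuation() * radiance(bounce, depth - 1, scene);
                }
                color
            }
        }
    }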


Thanks. I think I got most of what you described.

If I understand it correctly:

1) a point / pixel in the scene (as viewed by the eye) sends out a cone of rays, and the final color of this pixel is a combination of what those rays hit. This is the ray casting process, the reverse of light traveling.

2) the overall picture of the scene is the combination of pixels each calculated by the above ray casting process.

Am I right?


Yes. The problem with working backwards is that some optical calculations have probability elements. A photon that hits a half-silvered mirror has a 50% chance of (specular) reflecting and a 50% chance of transmitting.

So for ray-tracing, you calculate along both paths and give 50% weight to each. Every time a ray hits a triangle in the scene, the material properties determine how the various components sum up to determine the color of the pixel.
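As a toy example of that weighting (the two input colors are assumed to have been computed along each path):

    // Half-silvered mirror: instead of flipping a coin per photon,
    // follow both paths and blend them by their probabilities.
    fn half_mirror(reflected: [f64; 3], transmitted: [f64; 3]) -> [f64; 3] {
        [
            0.5 * reflected[0] + 0.5 * transmitted[0],
            0.5 * reflected[1] + 0.5 * transmitted[1],
            0.5 * reflected[2] + 0.5 * transmitted[2],
        ]
    }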


Dear NVidia: while I and many others appreciate the quality of your video, in an age where people are distributing ray-tracing code on business cards, there's not a ton of value in producing even more "Basics" or "Essentials" of ray tracing educational material.

What would provide real value to those of us interested in ray tracing is expanding on the territory covered by the PBRT book, making the material it covers more accessible, and covering the changes to the state of the art in the decade since it was written (particularly the elements that RTX hardware enables: real-time mixing of ray tracing and rasterization rendering).

Thanks!


Something like the Ray Tracing Gems book they published last year? Or maybe the CFP for the Ray Tracing Gems II book they have open right now?


To save others looking for it, a PDF version (CC BY-NC-ND 4.0) is available here (https://www.realtimerendering.com/raytracinggems/).


Sweet. Didn’t see these come out. Thanks for the heads up.


There are previous GPU gems available for free as well: https://developer.nvidia.com/gpugems/gpugems/contributors


And the very first one of the NVidia series, "The Cg Tutorial", is also available.

https://developer.download.nvidia.com/CgTutorial/cg_tutorial...

Granted, Cg is now just a curiosity in the context of shading languages, but it is nevertheless interesting to read.


Workin' on it. :-)

Note that the third edition came out roughly 5 years ago now, not 10.



