The amount of effort this must have taken is obscene. The creator made a physically accurate pin-hole camera and film inside Blender. I hadn't realised this was even possible with existing off-the-shelf renderers, let alone open source software. What a blast to watch, too.
That is, the fact that the whole package (Blender + the author's work) emulates a physical camera inside an existing renderer is amazing to me.
What I've personally always wondered is whether you could make a liquid crystal or electrochromic diaphragm to get perfectly circular bokeh (or maybe even smooth-edged STF bokeh) on demand. It would probably kill the T-stop value when fully open due to less-than-perfect transmissivity, though...
I also wonder what would happen if you could substitute an angle-limited piece of glass (kind of like a privacy screen - visible head-on but opaque from an angle) instead of the diaphragm.
Brilliant, it's like running a VM inside a VM. I love how the images have a character of their own due to the lens and simulation characteristics. It should be possible to train a diffusion model that can generate these and make it obsolete, though.
The only thing I knew was that when light goes through a pinhole in a box just right, for some reason it creates a large upside down projection on the opposite side haha
Everything in this video is just a flurry of mad genius and gobbledygook jargon. Loved every second of it
To me, the best part is that most of it is just an emergent property of raytracing. Once you implement the core physics needed for any raytracer, things like lenses and aperture blades just work.
I can't wait for full-scene realtime raytracing to hit the consumer market.
In a similar vein, when raycasting was being added to second life I built a rudimentary approximation of a camera. It took some time to put together an image because of the limits of SL scripting and the rate limits on raycasting, but the first successful image had incredibly obvious vignetting. The solution was changing the size of my 'aperture' equivalent.
It's always very cool when simulation behavior starts to resemble real behavior.
This is where I’m losing track of what’s going on. I generally understand the principles from at least the first half of the video, the pinhole camera and lenses, and can imagine how they work in real life, but this is Blender. It’s not real life. How does this work in BLENDER? Is it because the underlying physics models in Blender are so accurate that one can recreate even these increasingly convoluted real-life setups?
Well, that's the best part of it: the underlying physics model is really easy!
A pinhole camera requires zero additional physics, just a simulation of light rays passing through a small hole. A lens just requires Snell's Law - which is pretty much trivial. Adding dispersion is just a matter of using a formula instead of a constant for the refractive index of the lens, and doing the raytracing using random wavelengths instead of a single "composite" one.
Raytracers capable of doing this are implemented in a few hundred lines of code as a toy project. The difficult part is getting it to render quickly - and of course modeling the scene properly.
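To give a sense of how little code that core actually is, here is a minimal sketch in Python (illustrative only, not the video's code or Cycles'): Snell's law in a few lines, with dispersion falling out of a wavelength-dependent refractive index. The Cauchy coefficients below are my own rough values, in the ballpark of BK7 glass.

```python
import math, random

def cauchy_ior(wavelength_nm, A=1.5046, B=4200.0):
    # Cauchy's equation: n(wavelength) = A + B / wavelength^2.
    # A and B are illustrative, roughly in the range of BK7 glass.
    return A + B / (wavelength_nm ** 2)

def refract(direction, normal, n1, n2):
    # Snell's law for unit vectors; returns None on total internal reflection.
    cos_i = -sum(d * n for d, n in zip(direction, normal))
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return [eta * d + (eta * cos_i - math.sqrt(k)) * n
            for d, n in zip(direction, normal)]

# Dispersion comes "for free": sample a random wavelength per ray and
# look up its refractive index instead of using one constant.
theta = math.radians(30.0)                       # angle of incidence
ray = [math.sin(theta), 0.0, -math.cos(theta)]   # unit incoming ray
wavelength = random.uniform(380.0, 700.0)        # nm, visible range
bent = refract(ray, [0.0, 0.0, 1.0], 1.0, cauchy_ior(wavelength))
```

Average enough of those per-wavelength rays together and you get chromatic aberration without ever writing code "for" chromatic aberration.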
The physics of light is relatively simple. Blender wants to look photorealistic, so Cycles (a path tracer, commonly known as a ray tracer) models how light works almost perfectly. That means these emergent properties just work.
It wouldn't work under a rasteriser (Blender has one, called Eevee), as those operate using a complex series of approximations of reality that are harder to create but run far faster. But because Cycles (slower, but more realistic) works by mimicking how light behaves, this kind of thing is entirely possible.
Also keep in mind that Cycles and most renderers use a simplified version of the physics: rather than path-trace every wavelength, they use a simplified RGB model, which means certain optical properties behave differently. For instance, you don’t get chromatic aberration with the simulated lens.
Raytracing-based rendering moved to physically based rendering a while ago: the core idea is to approximate the physics of light as accurately as can reasonably be made fast. It's not solving a wave equation directly; it's using statistical sampling of a model of how light interacts with the interfaces between different mediums. This is basically the easiest way to get photorealistic renders: the human brain is very good at noticing when your scene has non-physical lighting (not in an 'aha, the specular highlights violate conservation of energy' way, but in a 'this looks fake' way).
(this approach was started by some researchers who basically said 'well, let's make a simple scene for real and measure everything and make our render look identical' https://en.wikipedia.org/wiki/Cornell_box )
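To make "statistical sampling" concrete, here is a toy sketch (my own, in Python, not Cycles' actual code) of the kind of step a path tracer repeats millions of times per image: choosing a random bounce direction off a matte surface, cosine-weighted so more samples go where more light energy actually flows.

```python
import math, random

def random_unit_vector():
    # Rejection-sample a point inside the unit sphere, then normalise it.
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        length_sq = sum(c * c for c in v)
        if 0.0 < length_sq <= 1.0:
            length = math.sqrt(length_sq)
            return [c / length for c in v]

def diffuse_bounce(normal):
    # One Lambertian bounce: normal + random unit vector yields a
    # cosine-weighted direction over the hemisphere around the normal.
    d = [n + r for n, r in zip(normal, random_unit_vector())]
    length = math.sqrt(sum(c * c for c in d))
    if length < 1e-8:          # degenerate case: the two cancelled out
        return list(normal)
    return [c / length for c in d]
```

Everything else in a path tracer is bookkeeping around steps like this: trace, hit, sample a new direction, accumulate, repeat.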
I just keep remembering the original Half-Life 'can your computer handle this' test and thinking, "This is the absolute height of tech, footage from within a game displayed in-game, live."
And now... I mean my god. This video made me feel both incredibly excited, and terribly old.
This simulated camera projects the image onto an internal film plane. But Blender cannot read an image off a plane directly, so you need to point a Blender camera at the film plane to capture the film to the Blender output.
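For the curious, here is a hedged sketch of how one might set up that "scanner" camera from Blender's Python console. The object name "FilmPlane", the 0.1 m offset, and the orthographic framing are my assumptions; the author's actual rig may well differ.

```python
import bpy
from mathutils import Vector

scene = bpy.context.scene
film = bpy.data.objects["FilmPlane"]          # assumed name of the film plane

# Orthographic camera sized to the film plane, so it adds no perspective of its own.
cam_data = bpy.data.cameras.new("FilmScanner")
cam_data.type = 'ORTHO'
cam_data.ortho_scale = max(film.dimensions.x, film.dimensions.y)

cam = bpy.data.objects.new("FilmScanner", cam_data)
bpy.context.collection.objects.link(cam)

# Sit a short distance out along the film plane's normal, looking straight back at it.
normal = film.matrix_world.to_quaternion() @ Vector((0.0, 0.0, 1.0))
cam.location = film.matrix_world.translation + normal * 0.1
cam.rotation_euler = film.matrix_world.to_euler()

scene.camera = cam
bpy.ops.render.render(write_still=True)        # writes to the scene's output path
```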
the best thing about this is the presentation format
one thing that's great about this content world is that people learn how to get to the point very quickly and efficiently!
this could have easily been 1:40:03 of watching some nerdy guys on a recorded Google Meet drone on and on about their accomplishment, and I’m glad it wasn't!
The magic of digital goods is that you only need to produce them once. This guy put in all the work and now can just copy paste it wherever and have it always ready
This was super interesting and I think the results are beautiful. I'm imagining how cool it would be for a game with a photo mode to have a film simulation mode.
I used to wonder what Rendezvous With Rama's landscape would really look like if I were standing on its inner surface. Same with The Stone from Greg Bear's Eon. The renders I've seen so far have all looked too small, or everything is too clear.
Maybe simulating something closer to the eye, or a 35mm camera, would give me what I've dreamed about. What do these places look like with clear skies, and then with clouds? 2 metres "up" from the inner surface, what does that really look like?
None of the elements here are individually surprising, but I had never thought of this, and it most certainly looks like it took ages' worth of research and effort to get working. The most surprising part was the pinhole camera, but now that I've had some time to think about it, it makes sense why it should just work.
Very sophisticated, very cool.
Actually, the most surprising was that this was a fun video and not just a long form blog post as I first assumed it must be.
True, but film is more than just three color planes. Different types of film capture and display colors differently. Grain is very different depending on the film sensitivity. And b/w uses different chemistry than color (silver vs dye), which also affects how an image is captured. Not to mention different emulsions.
I'd like to see his virtual camera incorporate the ability to specify different types of film, not to mention film balanced for different lighting (daylight vs. indoor).
Theoretically you could even zoom in far enough on his virtual images to see the actual (simulated) grain, once individual grains span more than a few pixels each.
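As a toy illustration of that idea (my own sketch, not anything the video does), grain can be modelled as scattered point "crystals" that each develop with a probability tied to the local exposure; zoom in far enough and you see the individual points rather than smooth tone.

```python
import random

def develop_grains(exposure, width, height, grains_per_pixel=20):
    # Toy grain model: scatter grains at random positions; each develops
    # with probability equal to the local exposure, so tone is carried by
    # grain density rather than smooth pixel values.
    developed = []
    for _ in range(width * height * grains_per_pixel):
        x, y = random.uniform(0, width), random.uniform(0, height)
        if random.random() < exposure(x, y):   # exposure(x, y) in [0, 1]
            developed.append((x, y))
    return developed

# Example: a simple left-to-right exposure gradient on a 64x64 "frame".
grains = develop_grains(lambda x, y: x / 64.0, 64, 64)
```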
I'd also (re-)add: film is just one part of a transmission process.
Film has to be developed into something, and that's a chemical process, which is non-linear. Developer, the bath you put film in to turn the exposed-but-still-blank reel's grains into an actual "developed" photo, is a complex and local analog process. Developer is expended while developing the film and becomes less effective as it works, creating much stronger local contrast across a picture in a natural chemical way.
There's a pretty complex Shannon information theory system going on here, which I'm not certain how to model. There's maybe an information->transmit->medium->receive->information model between the scene and the film. Then an entirely separate information->transmit->medium->receive->information model between the undeveloped film and what actually shows up when you "develop" it.
As you say, there are quite a variety of film types with different behaviors. https://github.com/t3mujin/t3mujinpack is a set of Darktable presets to emulate various types of film. But the behavior of the film is still only half of the process. As I said in my previous post, developing the film is a complex chemical process, with lots of local effects on different parts of the image. There's enormous power here. https://filmulator.org/ is an epic project that, in my view, is incredibly applicable to almost all modern digital photography and could help us move beyond raw data and appreciate scenes more naturally. It's not "correct", but my personal view is that the aesthetic is much better, and it somewhat represents what the human eye does anyway, with its incredible ability to comprehend and handle dynamic range.
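The developer-depletion idea is simple enough to sketch, by the way. This is a crude toy of my own (definitely not Filmulator's actual algorithm): each cell builds density in proportion to the developer left in it, the developer is consumed as it works, and it only slowly diffuses back in from neighbouring cells, which is where the extra local contrast and highlight roll-off come from.

```python
def develop(exposure, steps=50, rate=0.1, diffusion=0.2):
    # exposure: 2D list of values in [0, 1]; returns the developed density.
    h, w = len(exposure), len(exposure[0])
    developer = [[1.0] * w for _ in range(h)]   # developer remaining per cell
    density = [[0.0] * w for _ in range(h)]     # developed image density
    for _ in range(steps):
        # Develop: each cell converts exposure into density, consuming
        # developer as it goes -- bright areas exhaust theirs first.
        for y in range(h):
            for x in range(w):
                d = rate * exposure[y][x] * developer[y][x]
                density[y][x] += d
                developer[y][x] -= d
        # Diffuse: a little fresh developer seeps in from neighbouring cells,
        # which is what produces the natural local-contrast effect.
        fresh = [row[:] for row in developer]
        for y in range(h):
            for x in range(w):
                neigh = [developer[ny][nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w]
                fresh[y][x] += diffusion * (sum(neigh) / len(neigh) - developer[y][x])
        developer = fresh
    return density
```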
> True, but film is more than just three color planes.
Yes, which is why I said it was the part about having three planes, to gracefully point you towards what you missed. Could you watch the content we are trying to discuss here? I know it's HN tradition to comment without reading, but it doesn't really help.
Yes, I know you pointed out that there is an explanation of tweaking the color planes individually, but I think you're missing my point. My comment was not intended as criticism of what you said; it was intended to expand on your thought by giving specific examples (e.g. film grain).
He could have gotten proper glass dispersion effects, which ended up being dropped, if he had used LuxCore instead of Cycles. But that might be considered cheating :)
I think this is reasonable. If nothing else, it is the aspect that's hardest to imitate. If included, the best imitation would be cheap strobing. Anything better than that would require synthesis, which is something even rendering engines can only produce with significant sacrifices.