Make Your Renders Unnecessarily Complicated by Modeling a Film Camera in Blender [video] (youtube.com)
275 points by CharlesW on July 2, 2023 | 51 comments


The amount of effort this must have taken is obscene. The creator made a physically accurate pin-hole camera and film inside Blender. I hadn't realised this was even possible with existing off-the-shelf renderers, let alone open source software. What a blast to watch, too.

That is, the fact that the whole package (Blender + the author's work) emulates a physical camera inside an existing renderer is amazing to me.


Not just a pin-hole camera. Later in the video he builds a multi-element lens too!


If anyone wants to play with a (probably much simpler) blender model of a lens, here's a link: https://studio.blender.org/blog/camera-lenses-with-caustics/

What I've personally always wondered is whether you could make a liquid crystal or electrochromic diaphragm to get perfectly circular bokeh (or maybe even smooth-edged STF bokeh) on demand. It would probably kill the T-stop value when wide open due to less-than-perfect transmissivity, though.

I also wonder what would happen if you could substitute an angle-limited piece of glass (kind of like a privacy screen - visible head-on but opaque from an angle) instead of the diaphragm.


You can get smooth-edged bokeh in a physical lens by inserting an apodisation filter inside:

http://www.4photos.de/camera-diy/Apodization-Filter.html

This is an aperture with fuzzy soft edges so that the bokeh fades out smoothly around the edges instead of being formed of hard circles.

It makes the images look like they have Photoshop- or cellphone-faked bokeh, because you end up with effectively a Gaussian blur in the background.
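
For intuition, here's a rough sketch (mine, not from the linked page or the video) of how an apodized aperture could be modelled in a thin-lens depth-of-field sampler: instead of picking aperture points uniformly over the disc, accept them with a probability equal to a Gaussian transmission profile, so rays near the rim contribute less and bokeh edges fade out. The radius and sigma values below are made-up placeholders.

    import math
    import random

    def sample_apodized_aperture(radius, sigma):
        # Uniform disc sampling, thinned by a Gaussian transmission profile.
        while True:
            r = radius * math.sqrt(random.random())      # uniform over the disc
            theta = 2.0 * math.pi * random.random()
            transmission = math.exp(-0.5 * (r / sigma) ** 2)
            if random.random() < transmission:            # rejection sampling
                return r * math.cos(theta), r * math.sin(theta)

    # e.g. jitter a camera ray's origin across a soft-edged aperture
    dx, dy = sample_apodized_aperture(radius=0.025, sigma=0.012)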


Reminds me of "how to animate cube in Houdini": https://youtu.be/NLiL0GLSvIw


This is so delightfully ridiculous. I wonder how many folks come across it looking for a nice beginner tutorial.


Brilliant, it's like running a VM inside a VM. I love how the images have a character of their own due to the lens and simulation characteristics. It should be possible to train a diffusion model that can generate these and make it obsolete, though.


Very interesting but wow does this remind me of how little I know. I understood exactly zero of this.


The only thing I knew was that when light goes through a pinhole in a box just right, for some reason it creates a large upside down projection on the opposite side haha

Everything in this video is just a flurry of mad genius and gobbledygook jargon. Loved every second of it


The nice thing about pinhole cameras is that you can explain it all with simple geometry: https://www.alternativephotography.com/how-a-pinhole-camera-...


Amazing to be able to recreate a functioning camera on first principles. So much work but the result is totally worth it!


To me, the best part is that most of it is just an emergent property of raytracing. Once you implement the core physics needed for any raytracer, things like lenses and aperture blades just work.

I can't wait for full-scene realtime raytracing to hit the consumer market.


In a similar vein, when raycasting was being added to Second Life I built a rudimentary approximation of a camera. It took some time to put together an image because of the limits of SL scripting and the rate limits on raycasting, but the first successful image had incredibly obvious vignetting. The solution was changing the size of my 'aperture' equivalent.

It's always very cool when simulation behavior starts to resemble real behavior.


This is where I'm losing track of what's going on here. I generally understand the principles from at least the first half of the video, the pinhole camera and the lenses, and can imagine how they work in real life, but this is Blender. It's not real life. How does this work in BLENDER? Is it because the underlying physics model in Blender is so accurate that one can recreate even these increasingly convoluted real-life things inside it?


Well, that's the best part of it: the underlying physics model is really easy!

A pinhole camera requires zero additional physics, just a simulation of light rays passing through a small hole. A lens just requires Snell's Law - which is pretty much trivial. Adding dispersion is just a matter of using a formula instead of a constant for the refractive index of the lens, and doing the raytracing using random wavelengths instead of a single "composite" one.

Raytracers capable of doing this can be implemented in a few hundred lines of code as a toy project. The difficult part is getting them to render quickly - and of course modeling the scene properly.
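
As a rough illustration (not the video's code), the optical core really is just a couple of small functions: Snell's law for the bend at each glass surface, plus a wavelength-dependent refractive index for dispersion. The Cauchy coefficients below are ballpark values for a BK7-like glass, not anything taken from the video.

    import math
    import random

    def refract(direction, normal, n1, n2):
        # Snell's law in vector form; 'direction' and 'normal' are unit 2D
        # vectors, with the normal pointing back toward the incoming ray.
        cos_i = -(direction[0] * normal[0] + direction[1] * normal[1])
        ratio = n1 / n2
        sin_t2 = ratio * ratio * (1.0 - cos_i * cos_i)
        if sin_t2 > 1.0:
            return None  # total internal reflection
        cos_t = math.sqrt(1.0 - sin_t2)
        return (ratio * direction[0] + (ratio * cos_i - cos_t) * normal[0],
                ratio * direction[1] + (ratio * cos_i - cos_t) * normal[1])

    def glass_ior(wavelength_nm, a=1.5046, b=4200.0):
        # Cauchy's equation: the index falls off with wavelength, which is
        # all that's needed for dispersion to emerge from the raytracer.
        return a + b / (wavelength_nm * wavelength_nm)

    # Trace each sample at a random wavelength instead of "composite" white light.
    wavelength = random.uniform(380.0, 740.0)
    incoming = (math.sin(math.radians(30)), -math.cos(math.radians(30)))
    bent = refract(incoming, (0.0, 1.0), 1.0, glass_ior(wavelength))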


The physics of light is relatively simple. Blender wants to look photorealistic, so Cycles (a path tracer, commonly known as a ray tracer) almost perfectly models how light works. That means these emergent properties just work.

It wouldn't work under a rasteriser (Blender has one, called Eevee), as those operate using a complex series of approximations of reality that are more difficult to create but run far faster. But because Cycles (slower, but more realistic) operates by mimicking how light works, this kind of thing is entirely possible.


Also keep in mind that Cycles and most renderers use a simplified version of physics: rather than path-tracing all wavelengths, they use a simplified RGB model, which means certain optical properties behave differently. For instance, you don't get chromatic aberration from the simulated lens.


Raytraced rendering moved to physically based rendering a while ago: the core idea is to approximate the physics of light as accurately as can reasonably be made fast. It's not solving a wave equation directly; it's using statistical sampling of a model of how light interacts with different interfaces between media. This is basically the easiest way to get photorealistic renders: the human brain is very good at noticing when your scene has non-physical lighting (not in an 'aha, the specular highlights violate conservation of energy' way, but in a 'this looks fake' way).
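
To make "statistical sampling" concrete, here's a tiny made-up example in the same spirit: estimating how much light lands on an upward-facing point by averaging random directions over the hemisphere, weighted by the cosine term from the rendering equation. The sky function is just a stand-in for whatever the next ray would hit in a real path tracer.

    import math
    import random

    def sky_radiance(direction):
        # Toy environment light: brighter overhead, dimmer toward the horizon.
        return 0.2 + 0.8 * max(direction[2], 0.0)

    def estimate_irradiance(samples=10000):
        total = 0.0
        for _ in range(samples):
            # Uniform sampling of the upper hemisphere: z = cos(theta) is uniform.
            z = random.random()
            phi = 2.0 * math.pi * random.random()
            r = math.sqrt(max(0.0, 1.0 - z * z))
            direction = (r * math.cos(phi), r * math.sin(phi), z)
            total += sky_radiance(direction) * z   # cosine weighting
        return (total / samples) * 2.0 * math.pi   # divide by the pdf, 1/(2*pi)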

(This approach was started by researchers who basically said 'well, let's build a simple scene for real, measure everything, and make our render look identical': https://en.wikipedia.org/wiki/Cornell_box)


I just keep remembering the original Half-Life 'can your computer handle this' test and thinking "This is the absolute height of tech, footage from within a game displayed in-game, live."

And now... I mean my god. This video made me feel both incredibly excited, and terribly old.


I misunderstood title as "Modeling a Film Camera inside a Blender that is Plugged In and Running."

Somehow the actual video was more interesting than that. :)

Still, I need a UML diagram to understand what is meant by "fake camera inside the real fake camera."


This simulated camera projects the image onto an internal film plane. But Blender cannot read an image off a plane directly, so you need to point a Blender camera at the film plane to capture the film to the Blender output.
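
In Blender's own scripting terms, the readout end of the rig might look something like this minimal sketch (the object names, sizes, and distances are invented here; the video's actual setup is far more involved):

    import bpy

    # The "film": a small plane that the modelled lens projects an image onto.
    bpy.ops.mesh.primitive_plane_add(size=0.036, location=(0.0, 0.0, 0.0))
    film_plane = bpy.context.active_object
    film_plane.name = "FilmPlane"

    # A normal Blender camera parked just in front of the film plane, looking
    # straight at it, so the final render is simply a picture of the "film".
    bpy.ops.object.camera_add(location=(0.0, 0.0, 0.05), rotation=(0.0, 0.0, 0.0))
    readout_camera = bpy.context.active_object
    readout_camera.name = "FilmReadout"
    bpy.context.scene.camera = readout_camera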


I think it was "the [Blender concept of 'camera'] inside the [camera I modeled in Blender]".


the best thing about this is the presentation format

one thing that's great about this content world is that people learn how to get to the point very quickly and efficiently!

this could have easily been 1:40:03 of watching some nerdy guys on a recorded Google Meet drone on and on about their accomplishment, and I'm glad it wasn't!


In that case you could just watch the video at 10x speed.


The magic of digital goods is that you only need to produce them once. This guy put in all the work and now can just copy-paste it wherever and have it always ready.


This was super interesting and I think the results are beautiful. I'm imagining how cool it would be for a game with a photo mode to have a film simulation mode.


I used to wonder what Rendezvous With Rama's landscape would really look like if I were standing on its inner surface. Same with The Stone from Greg Bear's Eon. The renders I've seen so far have all looked too small, or everything is too clear.

Maybe simulating something closer to the eye, or a 35mm camera, would give me what I've dreamed about. What do these places look like with clear skies, and then with clouds? 2 metres "up" from the inner surface, what does that really look like?


Renders looking too small reminds me of this article: https://aaronhertzmann.com/2022/02/28/how-does-perspective-w...

Maybe you're looking for something like "Natural Perspective"?


Wow!

THANK YOU!!

That's a really good summary of what I'm referring to.


None of the elements here are individually surprising, but I had never thought of this, and it most certainly looks like it took ages' worth of research and work to get working. The most surprising part was the pinhole camera, but now that I've had some time to think about it, it makes sense why it should just work.

Very sophisticated, very cool.

Actually, the most surprising part was that this was a fun video and not just a long-form blog post, as I first assumed it must be.


This video is one of the greatest works of art that I've seen in such a long time. Not to even mention all the engineering that went into it.


Why stop there?

There was a great project that I keep forgetting the name of / link to... It simulated the chemical process of developing film.

I keep wanting to dig it up & play with it. Such a neat effort.



Another film simulation project, filmbox, got discussed a few months ago: https://news.ycombinator.com/item?id=34713202


I knew it! Video games are how simulation universes come to be.


This project is simulating film. That's the part about having three planes. There's even an explanation of tweaking each plane's colour sensitivity.


True, but film is more than just three color planes. Different types of film capture and display colors differently. Grain is very different depending on the film sensitivity. And b/w uses different chemistry than color (silver vs dye), which also affects how an image is captured. Not to mention different emulsions.

I'd like to see his virtual camera incorporate the ability to specify different types of films, not to mention lighting (daylight vs. indoor).

Theoretically you could even zoom in far enough on his virtual images to see the actual (simulated) grain, once individual grains covered more than a few pixels each.
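
A crude, hypothetical sketch of the grain idea (nothing like the video's method, and all the constants are invented): add noise whose amplitude grows with the film speed and peaks in the mid-tones, then look at a 1:1 crop to "see the grain".

    import numpy as np

    def add_grain(image, iso=400, base_iso=100, strength=0.04, seed=0):
        # 'image' is a float array in [0, 1]. Faster film -> bigger amplitude;
        # the parabola makes grain strongest in mid-tones and weakest in deep
        # shadows and blown highlights.
        rng = np.random.default_rng(seed)
        amplitude = strength * np.sqrt(iso / base_iso)
        midtone_weight = 4.0 * image * (1.0 - image)
        noise = rng.normal(0.0, 1.0, image.shape)
        return np.clip(image + amplitude * midtone_weight * noise, 0.0, 1.0)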


I'd also (re-)add: film is just one part of a transmission process.

Film has to be developed into something, and that's a chemical process, which is non-linear. Developer, the bath you put film in to activate the still-blank but exposed reel and turn the grains into an actual "developed" photo, is a complex and local analog process. Developer is expended while developing film and becomes less effective as it works, creating much stronger local contrast across pictures in a natural chemical way.

There's a pretty complex Shannon information theory system going on here, which I'm not certain how to model. There's maybe an information->transmit->medium->receive->information model between the scene and the film. Then an entirely separate information->transmit->medium->receive->information model between the undeveloped film and what actually shows up when you "develop" it.

As you say, there are quite a variety of film types with different behaviors. https://github.com/t3mujin/t3mujinpack is a set of Darktable presets that emulate various types of film. But the behavior of the film is still only half of the process. As I said in my previous post, developing the film is a complex chemical process, with lots of local effects in different parts of the image. There's enormous power here.

https://filmulator.org/ is an epic project that, in my view, is incredibly applicable to almost all modern digital photography; it could help us so much to move beyond raw data and appreciate scenes more naturally. It's not "correct", but my personal view is that the aesthetic is much better, and it somewhat represents what the human eye does anyway, with its incredible ability to comprehend and view dynamic range.
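
The local-depletion idea is simple enough to sketch in a few lines. This is my own toy model, not Filmulator's algorithm, and the constants are invented: density grows wherever there is both exposure and remaining developer, the local developer pool gets used up, and a blur step stands in for diffusion through the bath. Heavily exposed areas starve their neighbourhood of developer, which is where the extra edge contrast comes from.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def develop(exposure, steps=50, rate=0.1, diffusion_size=3):
        # 'exposure' is a float array in [0, 1] representing the latent image.
        density = np.zeros_like(exposure)
        developer = np.ones_like(exposure)
        for _ in range(steps):
            growth = rate * exposure * developer   # light + developer -> density
            density += growth
            developer -= growth                    # local depletion
            developer = uniform_filter(developer, size=diffusion_size)  # diffusion
        return density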


> True, but film is more than just three color planes.

Yes, which is why I said it was the part about having three planes, to gracefully point you towards what you'd missed. Could you watch the content we are trying to discuss here? I know it's HN tradition to comment without reading, but it doesn't really help.


Yes, I know you pointed out that there is an explanation of tweaking the color planes individually, but I think you're missing my point. My comment was not intended as criticism of what you said; it was intended to expand on your thought by giving specific examples (e.g. film grain).


He could have gotten proper glass dispersion effects, which got dropped, if he had used LuxCore instead of Cycles. But that might be considered cheating :)


I hate how good this is.


It’s… beautiful.


I believe they did this for WALL-E, too.


Yeah, but what film stock do you use?


Brilliant video


Yes!!! The best part about this is that it significantly reduces the urge I've had for years to do it myself.


It amplified mine and made me start imagining a world where cinematic CGI production integrates a process like this.


Were you also left with the worry that motion blur was being neglected??


I think this is reasonable. If nothing else, it is the aspect least possible to imitate. If it were included, the best imitation would be cheap strobing. Anything better than that would require synthesis, which even rendering engines make significant sacrifices to produce.


More power to you!



