I remember when this one came out, and letting my computer spend insane amounts of time rendering images. There is something about POV-Ray; its limitations and interface (or lack of one) seem to make it excel at this sort of half-real, impressionist style while never entering the uncanny valley. Or maybe POV-Ray just attracts people who work towards that style. I miss the POV-Ray community, and spending weeks coding an image and then 14 hours rendering it just to find out you made one tiny stupid error and have to do it all over. Last I used it I had a 1.8 GHz single-core processor; I wonder what it is like on a modern computer.
I think the author of "The wet bird", Gilles Tran, is one of the very few artists who were adept with both the more graphical, modeler-based workflow and the fully programmable, code-based interface of the POV-Ray renderer, very successfully combining the best of both worlds (see e.g. [1][2][3]). Unlike most POV users, Tran liberally used commercial tools like Poser and Terragen, as well as third-party models.
Programmatically composing mathematical primitives only gets you so far if your goal is to create art relatable to humans. Plain POV-Ray is great for abstracts, surrealism, and "mathematical art", as well as for many natural forms exhibiting some degree of fractalness, but for animals, humans, and many manmade things a mesh-based modeler – or at least the use of third-party meshes – is essentially a requirement.
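(For the unfamiliar: a POV-Ray "scene" is literally a program. A toy sketch of my own, nothing from Tran's files, just to show what composing primitives in code looks like:)

    // Toy POV-Ray scene: CSG composition of analytic primitives.
    camera { location <0, 2, -5> look_at <0, 1, 0> }
    light_source { <10, 20, -10> color rgb 1 }
    plane { y, 0 pigment { checker color rgb 1, color rgb 0.8 } }

    // A crude "mushroom" built from a sphere, a box, and a cone:
    union {
      difference {
        sphere { <0, 1.5, 0>, 1 }                 // cap
        box { <-1.1, 0, -1.1>, <1.1, 1.5, 1.1> }  // cut away its lower half
      }
      cone { <0, 0, 0>, 0.3, <0, 1.5, 0>, 0.15 }  // stem
      pigment { color rgb <0.9, 0.3, 0.2> }
    }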
---
Due to the end of Moore's Law, POV-Ray these days is not tremendously faster than what it was on late-00s hardware. It does fully support multicore, so there's that (and raytracing is famously an "embarrassingly parallel" problem). Modern SIMD could also bring a 2x to 4x performance boost; I'm not sure how vectorized/autovectorizable the current POV-Ray code is. If the shader language could be JITted, that might bring a nice benefit as well.
> [...] but for animals, humans, and many manmade things a mesh-based modeler – or at least the use of third-party meshes – is essentially a requirement.
That was/is not true. It could even be a CAD modeler. In fact, in the '90s, before the advent of subdivision surfaces (SDS) in VFX production, everything organic you saw in blockbuster movies was NURBS patches, e.g. the dinosaurs in "Jurassic Park". Usually modeled in Alias PowerAnimator and then animated in Softimage.
Nowadays you can choose between polygon meshes, SDS, and T-splines, as well as signed-distance-based modeling, for organic stuff. In fact, even then you could do organic modeling from SDF primitives using the Organica[1] modeler.
There were also many cheap or even free modelers that supported Bézier patch modeling at the time. Bézier/B-spline patch modelers were accessible to everyone on Windows and Mac OS, and there were many renderers that could render those without artifacts, i.e. REYES[2]-based ones.
You may have to mesh the SDF for further work (and then maybe render a Loop SDS if you want a higher-order surface at render time). So a mesh-based modeler being the only choice was not true when "The wet bird" was produced, nor is it now.
I ran a small CGI shop (five people) a few years before this image was produced.
We used Real3D/Alias PowerAnimator with a custom pipeline to do organic modeling with B-spline patches and rendering with PhotoRealistic RenderMan for Windows. No polygons were ever created (or harmed) producing any of the images we were paid for.
So yeah, the humans in that image were polygon meshes from Poser, exported to POV, but that was just what the artist knew/chose.
Pardon the slight inaccuracy. By "mesh-based modeler" I simply meant a GUI modeler, no matter what primitives are used. (I would consider a model made of many X-spline patches still a "mesh", for the record.) But my point was that some type of modeler is a requirement for creating many photorealistic shapes; nobody is going to make a plausible human programmatically by composing geometry out of primitives like spheres, cones, and boxes. Or even blobs/metaballs or isosurfaces.
> [...] nobody is going to make a plausible human programmatically by composing geometry out of primitives like spheres, cones, and boxes. Or even blobs/metaballs or isosurfaces.
Actually, that is exactly what people did in the aforementioned Organica modeler in the '90s. Yes, using a GUI modeling app, but that is not a requirement here.
Some vids that show what is possible with a text editor and not much effort (if you have a lib of ready-made SDF components you string together in code):
So yeah, I'd disagree with the 'nobody'. I wrote shaders for blockbuster VFX as part of my job for over a decade. We regularly did some very complex SDF stuff there simply because we had to.
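To give a flavor (a toy sketch of my own, not anything from production): once you have a couple of ready-made SDF components #declared, stringing them together in plain POV-Ray isosurface code is a one-liner per shape:

    // Reusable SDF components...
    #declare SphSDF = function(x, y, z, cx, cy, cz, r)
      { sqrt((x-cx)*(x-cx) + (y-cy)*(y-cy) + (z-cz)*(z-cz)) - r }
    // ...and a smooth-union combinator (exponential smooth minimum):
    #declare SMin = function(a, b, k) { -ln(exp(-k*a) + exp(-k*b)) / k }

    // Two spheres blended into one organic blob:
    isosurface {
      function { SMin(SphSDF(x,y,z, -0.5, 1, 0, 0.7),
                      SphSDF(x,y,z,  0.5, 1, 0, 0.7), 8) }
      contained_by { box { <-2, -1, -2>, <2, 3, 2> } }
      max_gradient 2
      pigment { color rgb <0.7, 0.7, 0.9> }
    }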
I think Gilles Tran especially would be the kind of person who'd do that sort of stuff[1] if the object in question was central enough to the image.
Which the motion-blurred human figure in "The wet bird" was not.
Your insistence on using the word "nobody" there is an absolute claim without evidence. You underestimate the visualization capabilities of savants. For my 3D game prototype, I created all of the meshes programmatically and wrote *and debugged* a mesh-slicing algorithm using only visualization in my own mind. Mostly I did indeed use primitives like spheres, cones, and boxes. Granted, huge caveats: I never finished a game from the prototype, and the meshes involved weren't that complicated; I was modeling imaginary robots, not people.
It's painful for me to use GUI or CAD tools because they're so much slower and less capable than the Dall-E image generator/CAD modeler/Star Trek Holodeck I have inside my own brain. This is in some ways limiting, analogous to a mathematician refusing to learn to use a computer. Nevertheless, back to your point: I'm fairly certain I could model a plausible human programmatically from geometry primitives, but I won't claim to until I try. What I am certain of, however, is that somebody can.
I am sorry for the confusion, but I did think it was pretty clear that the important part of "mesh modeler" in my comment was the word "modeler", because in context it was used as the opposite of building scenes by writing code. I did genuinely forget that there exist modelers that work on splines rather than triangle meshes, but I don't really see the distinction as very important in this specific context.
> Programmatically composing mathematical primitives only gets you so far if your goal is to create art relatable to humans.
Of course, that is one of those weak areas you have to work around, and it ultimately helps define the style of POV-Ray: that incredible pathos of all those images devoid of people beyond the implied. This thread got me to install POV-Ray and start playing with it again. There is a fairly large speed increase: I rendered the diffuse-back sample scene and it took 8 minutes; it took hours on my old T42.
Same with me. I'd spend the late afternoon creating the file, then start rendering in the morning and watch the result after I got back from the university. I still have the images of the animation and they are like 250x250 pixels or something like that (they're on an offline drive).
I can't even remember how I turned them into an animation, or if I even got to do that.
From what I understand the average GPU is optimized for meshes, which is not the POV-Ray way, so GPUs tend not to offer much improvement. A quick search turned up a more informed explanation:
> The main problem here is that POV-Ray is primitive based, NOT mesh based. GPU rendering is heavily mesh optimized. They can process in parallel, but only if all threads need to use the same base code. With POV-Ray, there is no universal geometry handling code: each and every primitive requires different code, and the GPU just can't cope with that requirement.
I understand POV-Ray is optimised using SSE or AVX instructions. You can do arithmetic operations on vectors much more efficiently on GPUs; that's why crypto and AI use GPUs. If you can divide your work into simple parallel arithmetic operations, the GPU will be happy to do it on a massive scale. You have things like CUDA to use, but even a simple thing like a compute shader will help.
For a modern GPU a mesh is an abstraction. You can think of a GPU as a collection of very basic CPU cores which only know how to do a few basic operations on a massive collection of data. This is an oversimplification but a useful mental model.
If I understand you correctly, you are just elaborating on the bit I excerpted from the linked thread? This is out of my depth, but it is interesting. It seems POV-Ray would have to drastically change how it works to really benefit from GPUs, since in POV-Ray a sphere is an object related to pi, not a series of triangles arranged into the shape of a sphere, where each triangle can be easily parallelized on a GPU because they are all handled the same way?
Or did I not understand, and you are saying POV-Ray's performance could actually be improved by modern GPUs without reworking how it deals with its primitives?
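To illustrate the distinction I mean (a rough sketch in scene code, not how POV-Ray or a GPU actually represents things internally):

    // 1. POV-Ray's native sphere: an exact analytic surface.
    //    A ray hit is found by solving a quadratic; no triangles anywhere.
    sphere { <0, 1, 0>, 1 pigment { color rgb 1 } }

    // 2. The GPU-friendly version: triangles approximating the same sphere
    //    (a silly 4-triangle stand-in; real tessellations use thousands).
    mesh2 {
      vertex_vectors { 5, <0,2,0>, <1,1,0>, <-1,1,0>, <0,1,1>, <0,1,-1> }
      face_indices { 4, <0,1,3>, <0,3,2>, <0,2,4>, <0,4,1> }
      pigment { color rgb 1 }
    }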
When this originally came out, I kept it bookmarked for 20 years before finally buying the poster and putting it on the wall :D https://imgur.com/ffS4pds
"Published: Dec 29, 2007", so it's not that old. I remember seeing it back then and looking at it for a long time, and while it felt like "a good amount of time ago", it already feels like the more modern times to me.
The development is very, very slow these days, unfortunately. Nothing seems to have happened at least on GitHub after 3.8.0 beta 2 was released in 2021. I was a big fan back in the early 2000s; there are few, if any, comparable products out there, combining a high-quality, featureful renderer with an interface based on a Turing-complete scene description language rather than a graphical modeler.
It's a shame (although understandable in 1991) that the original developers chose a custom non-free "source available" license. POV-Ray was therefore late to the "open-source party" and the GitHub revolution, owing to the difficulty of contacting all of the various authors and asking for re-licensing under a more popular, more permissive license, or of rewriting those parts that could not be re-licensed. Back in the day (~15 years ago) the eventual goal was to release a 4.0 version which would have been a major rewrite, but I doubt that's ever going to happen.
Keeping in mind you can still use POV, and many would say it doesn't really need a replacement if it still works...
Art of Illusion is a good replacement in various ways, with continuous development. These days the community is mainly centered around the project's SourceForge forums.
In years past, I created paid professional illustration work with AoI, and IIRC some of my art's still in the software's splash screens (good nostalgia feel!). Even back around 2006-2007 I created a 21 megapixel image with it, for use on an event poster for a city-managed event where I live, which was pretty darn credible for FOSS.
It's very capable software if it fits your needs and you are aware of the various ways to do things. The manual is exceptional, an exemplar of FOSS help docs IMO.
Just the other day I used it for some home design visualizations as well, and the current version seems to work great. Especially if you use the Scripts & Plugins manager and try out the many plugins available for things like N-gon mesh modeling (PolyMesh Editor), various UI changes, etc.
You may also want to consider Bforartists, a Blender fork aimed at artists, including its own keymap, UI changes, a different set of defaults, and various other changes.
True. Composing SDF primitives in a shader language is quite reminiscent of writing POV-Ray scenes. Of course, the "renderer" is much less powerful than a full-fledged raytracer, but on the other hand realtime animation is entirely feasible and SDFs allow for much more creative freedom than rigid spheres and boxes.
Back in the day it was really common to speed up renders by building custom direct-lighting rigs for illustrations like this. So it may be the result of a workaround to optimize render times, or of moving that workaround around the scene and forgetting to move one of the lights in the rig, etc.
Or changing the cloud materials, or recreating one of the cloud materials in one location and forgetting to turn off "accept shadows", for example, which is a common toggle to see in 3D rendering software.
It's kind of fun to remember all the different ways you could hack the lighting alone, to say nothing of using procedural textures to trick the eye, various modeling tricks, and so on...
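In POV-Ray terms, a minimal version of such a rig might look like this (my own sketch, not the actual scene's lights):

    // Key light: casts shadows as normal.
    light_source { <500, 800, -500> color rgb 1.0 }
    // Fill light: brightens the dark side without adding a second set of
    // shadows. Forgetting one of these when you move the rig around is
    // exactly the kind of mistake that leaves artifacts behind.
    light_source { <-300, 200, -400> color rgb 0.3 shadowless }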
Whatever object he used for the sky is set at the height of the Chrysler Building. If it had been moved up higher, the shadow would either not be seen or there would be a gap between the shadow and the top of the building. Generally (if memory serves) the sky in POV-Ray was a sphere: you would model your scene inside the sphere and map the sky texture on the inside of it. But it does not look like he did this; that shadow looks very flat, and I cannot see any telltale signs of what he did use.
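For reference, the sky-sphere trick I mean looks roughly like this (from memory, with a placeholder "sky.jpg"; this is not from the actual scene file):

    // Huge hollow sphere enclosing the whole scene, sky mapped on its inside.
    sphere {
      <0, 0, 0>, 100000
      hollow
      texture {
        pigment { image_map { jpeg "sky.jpg" map_type 1 } } // spherical mapping
        finish { ambient 1 diffuse 0 } // self-lit, so it never shows shadows
      }
    }

POV-Ray also has a built-in sky_sphere { }, but that one is infinitely distant and can't catch shadows at all, which is another hint that something else was used here.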
This is spectacular. I remember when I first came across POV-Ray: on the home page there was a book with glasses in the middle, with excellent caustics. I tried to have a go, but as I was doing it with a text file, I found it impossible.
I went into the world of Blender soon after (this was around 2001), but never really worked with POV-Ray, even though there was a plugin.
It's wild that someone can make something so nice with just a text file.