Hacker News

Nah, I agree with GP, who didn't suggest making 3D scenes by hand but the opposite: create those 3D scenes using a generative method, then use ray tracing or the like to render the image. Maybe add another pass through a model to apply touch-ups, making it grittier and less artificial. This way things stay consistent and sane, avoiding all those flaws that are so easy to spot today.
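To make the proposed pipeline concrete, here's a minimal sketch of its three stages. Every function and type here is a hypothetical stand-in (there's no real `generate_scene` or `touch_up` API implied by the comment), just an illustration of where the explicit scene representation sits between the two model passes:

```python
from dataclasses import dataclass

# Hypothetical sketch of the pipeline described above. All names are
# illustrative stand-ins, not a real library.

@dataclass
class Scene:
    objects: list   # meshes/materials emitted by the generative stage
    lights: list
    camera: dict

def generate_scene(prompt: str) -> Scene:
    # Stand-in for a text-to-3D generative model: it emits an explicit,
    # inspectable scene description rather than pixels.
    return Scene(objects=[], lights=[], camera={"fov": 60})

def raytrace(scene: Scene, width: int, height: int) -> list:
    # Stand-in for a conventional physically based renderer;
    # returns a flat pixel buffer of width * height values.
    return [0] * (width * height)

def touch_up(frame: list) -> list:
    # Stand-in for an image-to-image model adding grit/realism.
    return frame

frame = touch_up(raytrace(generate_scene("rainy alley at night"), 320, 240))
```

Because the scene graph is explicit, consistency (shadows, object permanence across frames) comes from the renderer for free, and only the final cosmetic pass is left to a generative model.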



I know exactly what OP suggested, but why are you both glorifying the idea of building a 3D scene graph in the middle and then running slower rendering at the end, when the tech can just go straight from the prompt to a better finished result?


Because it just can't. And it won't. It can't even reliably produce consistent shadows in a still image, so when we talk video with a moving camera, all bets are off. Creating flawless movie simulations of a dynamic, rich 3D world requires the ability to internally represent that scene with a level of accuracy beyond what we can hope generative models to achieve, even with the gargantuan GPU power behind ChatGPT, for example. ChatGPT, may I remind you, can't even reliably perform large-ish multiplications. I think you may need to slightly recalibrate your expectations for generative tech here.



