
Looks pretty cool. Can anyone comment on how to hack together the opposite? That is, going from a 2D object image to a 3D rendering with an in-painted background? Or is that not possible right now?



Do you mean transforming a sketch into a 3D-looking image (i.e. not a 3D mesh model)? If so, Stable Diffusion with ControlNet can do that using a good prompt and the SDXL model.
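Untested, but with the diffusers library that route looks roughly like this. The checkpoint IDs, prompt, and file name are just example assumptions; for a canny-conditioned ControlNet you'd feed it an edge map rather than the raw sketch:

    import torch
    from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
    from diffusers.utils import load_image

    # Canny-conditioned ControlNet for SDXL (example checkpoint, swap in your own)
    controlnet = ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
    )
    pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    sketch = load_image("my_sketch.png")  # line drawing / edge map of the object
    image = pipe(
        prompt="photorealistic 3D render of the object, studio lighting",
        image=sketch,
        controlnet_conditioning_scale=0.7,  # how strictly to follow the sketch
    ).images[0]
    image.save("render.png")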

Or do you mean you have an existing photo of something and would like to add a realistic setting? There are a lot of ways to do this, but probably the easiest right now is the Generative Fill feature in Adobe Photoshop (beta).
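If you'd rather script it than use Photoshop, a Stable Diffusion inpainting pipeline is one of the open-source routes. Rough sketch only: the checkpoint, prompt, and file names are placeholder assumptions, and the mask should be white wherever the new background should be generated:

    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    photo = load_image("object_photo.png").resize((512, 512))
    mask = load_image("background_mask.png").resize((512, 512))  # white = repaint

    result = pipe(
        prompt="product photo on a wooden table, soft natural light",
        image=photo,
        mask_image=mask,
    ).images[0]
    result.save("object_in_scene.png")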


Not sure if they are planning on releasing this, but you can mix an image-to-NeRF model (threestudio is a good repo for this) for the object's 3D model with an image-to-depth model like ZoeDepth to generate the 3D background.
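For the ZoeDepth half, a hedged sketch of what "generate the 3D background" could look like: estimate metric depth for the background image and back-project it into a point cloud you can place behind the object model. File names and the camera intrinsics here are made-up assumptions:

    import numpy as np
    import torch
    from PIL import Image

    # ZoeDepth via torch.hub, as documented in the isl-org/ZoeDepth repo
    model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True).eval()

    img = Image.open("background.jpg").convert("RGB")
    depth = model.infer_pil(img)  # HxW metric depth in meters (numpy array)

    # Back-project to a 3D point cloud assuming a pinhole camera
    h, w = depth.shape
    fx = fy = 0.8 * w                    # guessed focal length
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = np.asarray(img).reshape(-1, 3) / 255.0

    np.save("background_points.npy", points)  # feed to your renderer of choice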


2D to 3D networks are pretty amazing right now.

High-poly meshes and low-poly textures are possible.




