Learning dynamic 3D geometry and texture for video face swapping (disneyresearch.com)
33 points by guiambros on Oct 14, 2022 | 11 comments



Cool. In the future, movie studios will be more and more like video-game makers. Movies will be made from virtual assets and played on a variety of media, including VR and AR. I'm looking forward to seeing great virtual actors playing great characters in realistic immersive worlds!


I think this perspective makes sense in terms of both technology and business models. The "hard" part will be less about creating for a particular medium and more about crafting a compelling narrative.


Here's a London-based startup working on similar tech:

www.lumirithmic.com


This is fantastic. Having an underlying 3d representation should make this viable for videogames. This is not super dissimilar to what Meta showed at Connect a few days ago, although I noticed Meta's version of this had some sort of normal map generated as well?


Generating normals is not particularly difficult once you have triangles. I believe the method is to take one corner of the triangle, get the two vectors pointing toward the two other corners, and then take their cross product. This gives you a new vector perpendicular to the triangle's face, i.e. the normal (normalize it to unit length before using it for shading).
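
A minimal NumPy sketch of that construction (function and variable names are mine, not from any particular engine):

  import numpy as np

  def face_normal(a, b, c):
      # Edge vectors from corner a to the two other corners.
      ab = b - a
      ac = c - a
      # The cross product is perpendicular to the triangle's plane.
      n = np.cross(ab, ac)
      # Normalize, since the cross product's length scales with the
      # triangle's area.
      return n / np.linalg.norm(n)

  # A triangle lying in the XY plane gets the +Z normal.
  print(face_normal(np.array([0.0, 0.0, 0.0]),
                    np.array([1.0, 0.0, 0.0]),
                    np.array([0.0, 1.0, 0.0])))  # -> [0. 0. 1.]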


Normals are not the same as normal maps.


You're right. One's a storage method for normals.


You are confusing per-vertex information with per-texel information.


No, I just thought he was implying normals were hard to generate.

In your case you need a couple more steps: generate UVs and render the normals to a texture.


There's little point in rendering normals computed from a triangle mesh into a normal map. Textures are worth using precisely because they enable greater detail without having to add many vertices. With the right normal map, you can make a flat triangle look curved or bumpy.
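
Concretely, at shading time each fragment decodes a texel into a tangent-space normal and rotates it into world space. Here's a rough NumPy stand-in for that shader math, assuming a per-fragment TBN basis (all names here are hypothetical):

  import numpy as np

  def perturbed_normal(rgb_texel, tangent, bitangent, normal):
      # Decode 8-bit RGB in [0, 255] to a tangent-space normal in [-1, 1].
      n_ts = np.asarray(rgb_texel, dtype=float) / 255.0 * 2.0 - 1.0
      # The TBN matrix rotates the texel normal into world space, so a
      # flat triangle can shade as if its surface were curved or bumpy.
      tbn = np.column_stack([tangent, bitangent, normal])
      n_ws = tbn @ n_ts
      return n_ws / np.linalg.norm(n_ws)

  # The "flat" texel (128, 128, 255) leaves the geometric normal
  # (approximately) unchanged.
  print(perturbed_normal([128, 128, 255],
                         np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 1.0, 0.0]),
                         np.array([0.0, 0.0, 1.0])))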

Computing a normal map from video that still looks good if you change the lighting is a bit more difficult.


> There's little point in rendering normals computed from a triangle mesh to a normal map.

I disagree. It's a common workflow that I've used many times. The idea is you bake the normals from a high-poly model with millions of verts and then apply the texture to a low-poly model with matching UVs.
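
A toy sketch of that baking loop, with a sphere standing in for both meshes (purely illustrative; a real baker raycasts from the low-poly surface into the dense high-poly mesh and handles UV islands):

  import numpy as np

  def texel_to_surface(u, v):
      # Toy UV mapping: wrap the texture around a unit sphere
      # (stand-in for the low-poly model's real UV unwrap).
      theta, phi = u * 2.0 * np.pi, v * np.pi
      return np.array([np.sin(phi) * np.cos(theta),
                       np.sin(phi) * np.sin(theta),
                       np.cos(phi)])

  def highpoly_normal(p):
      # Toy "high-poly" detail: ridges perturbing the sphere normal
      # (a real baker looks up the dense mesh's surface here).
      bump = 0.1 * np.sin(8.0 * np.arctan2(p[1], p[0]))
      n = p + np.array([0.0, 0.0, bump])
      return n / np.linalg.norm(n)

  def bake_normal_map(resolution=64):
      img = np.zeros((resolution, resolution, 3), dtype=np.uint8)
      for v in range(resolution):
          for u in range(resolution):
              p = texel_to_surface(u / resolution, v / resolution)
              n = highpoly_normal(p)
              # Encode [-1, 1] normal components as [0, 255] RGB
              # (object-space encoding for simplicity; game assets
              # usually store tangent space).
              img[v, u] = ((n * 0.5 + 0.5) * 255).astype(np.uint8)
      return img

  normal_map = bake_normal_map()  # ready to save as a texture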

> Computing a normal map from video that still looks good if you change the lighting is a bit more difficult.

Yes, I agree. But it's also difficult to generate a clean mesh, and we can see that they've succeeded in doing that. The point I tried (and failed) to make is that if they can generate a mesh like that from a video, then normals shouldn't be too much of a stretch.



