Disney has a (poor) version of this idea. They project an animation onto wedding cakes at their resorts. The animations are pre-rendered, though, rather than dynamic like this. It'd be fascinating to see what uses they could come up with for this stuff.
Disney does a lot of projection mapping in the parks right now, but it's all (AFAIK) the traditional, pre-rendered type, with animations projected onto large static surfaces like the Magic Kingdom's castle, or onto animatronics with pre-planned motion paths.
I'm having a little trouble visualizing how this non-rigid projection mapping could get applied in a practical way in the parks today, mostly because it seems like it has a fairly small "active" area (determined by your projector and sensor resolution, essentially). I could imagine this being used in a parade or stage show, for example, but this system seems like it would be pretty restrictive as to where the performers could move and remain in the projection space.
Those projections onto the castle are quite stunning. The technology for producing an enormous, apparently seamless image on that uneven canvas is amazing (even if it's static). And a lot of creativity went into building an animation that used that very specific venue so well.
What if the projection space were mobile? One of the applications is to put projectors on moving objects and project onto static (or moving) surfaces, assuming the system gets small & portable enough. How about one or more projectors mounted on each car of the Haunted Mansion ride, or projectors mounted on flying drones aimed at Spaceship Earth (the geodesic Epcot globe) during the night light show? Combine multiple projectors and a position tracking system, maybe even viewer head/eye tracking too, and I think there might be some amazing possibilities...
I think you could apply the same algorithms to AR projections, though. That would have massive applications in parks: guests wearing AR goggles while touring the park, allowing for a hugely immersive experience.
I could imagine doing something for a haunted house, like projecting ghosts onto the curtains over a doorway, or onto sofas/couches after you stand up (having your shadow follow you and do creepy things, for instance).
Of course, I'm no expert but throw a few million dollars into this and you can probably come up with some neat stuff even if the projection space is small-ish for now.
Disney uses something similar in a lot of their stage shows at their parks. They use a water fountain to create a screen and project onto the water sheet. It works surprisingly well, but I would assume that it is pre-rendered.
The water screens are actually a pretty old technique that you can do with any traditional light projector (film or digital). The hard part is making the water sheet.
>What do you think the word 'poor' means when referring to quality?
poor (adjective): of a low or inferior standard or quality.
>Might be kinder to say 'less advanced' or 'simpler' or 'earlier'.
Kinder towards what? Is a multinational like Disney sensitive or is the technology sensitive to the choice of words criticising it? Or will the researchers take offense to their technology, which is objectively inferior, being described as "poor"?
We're stretching this too thin, inventing issues where there are none...
First of all, I'm not calling it poor, the grandparent did.
I'm saying it's nothing special to call it poor.
We say things 10x harsher every day on HN about frameworks, languages, etc. Heck, check any thread about Apple products. Don't real people work on those?
Plus, ever read art criticism, or restaurant criticism, or political critiques even in the most respected newspapers? "Poor" is the least harsh of the terms they use. And those are also real people they level those things at...
The connotative meaning of “poor” isn’t simply “anything that isn’t the absolute best.” Words have meaning beyond the dictionary definition. “Poor” in this case means you think the researchers did a bad job, not that their work was a step on the path to something better. “Earlier”, or even “dated”, would have been a much more charitable description. And plenty of people are deriving joy from the application of that technology.
I’m really not. And I don’t get offended by very much. Maybe you’re not a native speaker? “Poor” as it refers to quality is a negative qualifier. And it’s typically used in a subjective context. That’s the connotative meaning.
Even the definition used here (taken from Google) indicates so, if it weren’t truncated:
poor (adjective) worse than is usual, expected, or desirable; of a low or inferior standard or quality.
The list of synonyms is even more telling about its real meaning: shoddy, bad, deficient, defective, lamentable, deplorable, awful, etc.
These are not words I’d use to describe the technology, having seen it first-hand. The technology is not worse than usual and I really don’t see how it’s worse than expected or is otherwise undesirable. It’s out there and being enjoyed by people.
Hope it doesn't easily scale out to larger spaces and crowds, or the current tech industry would soon have public spaces filled with ads projected onto people's belongings.
The second example doesn't, IMO, demonstrate the kind of deformation tracking the example in the OP does. As far as I can tell, they are able to get a depth representation, segment objects from a video, get an approximation of surface normals and reflectivity for those objects, and project a shaded surface onto those objects.
What they do not seem to be able to do without IR markers is project a diffuse texture onto an object so that it sticks properly. See the example with a non-uniform texture, where the fingers of a hand are fanned out: the texture warps noticeably.
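For reference, the shaded-surface part of that pipeline is the cheap bit: given a depth image registered to the projector, per-pixel normals fall out of a couple of gradients. A minimal numpy sketch, with placeholder focal lengths (nothing here is from the paper):

    import numpy as np

    def normals_from_depth(depth, fx=500.0, fy=500.0):
        # Estimate per-pixel unit surface normals from a depth image
        # (meters). fx/fy are placeholder focal lengths in pixels --
        # substitute the real sensor intrinsics.
        dz_dv, dz_du = np.gradient(depth)      # rows (v), cols (u)
        dz_dx = dz_du * fx / depth             # metric slope in x
        dz_dy = dz_dv * fy / depth             # metric slope in y
        n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
        return n / np.linalg.norm(n, axis=2, keepdims=True)

    # Toy check: a plane receding to the right gives a steady leftward tilt.
    _, u = np.mgrid[0:240, 0:320]
    depth = 1.0 + 0.001 * u
    print(normals_from_depth(depth)[120, 160])

Relighting is then just a dot product of those normals against a virtual light direction; the hard part this system solves is keeping up with deformation at 1000fps.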
Very cool. I do live projection work[0] and latency is always the killer with immersion. Anything higher than about 90ms of latency breaks integration at normal dancer movement speeds. 1000fps seems like overkill, but it allows for very fast movement.
It is, but I cheat! The dancer has a small Android device with a custom gyroscope app mounted in the middle of her back. I can get her general orientation accurately this way (more accurately, and faster, than state-of-the-art pose estimation).
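The receiving end of that cheat is tiny. A sketch of roughly what it looks like, assuming the app streams yaw/pitch/roll as three little-endian floats over UDP (the port and packet layout here are illustrative, not the actual protocol):

    import socket
    import struct

    PORT = 9000  # arbitrary; must match whatever the phone app targets

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))

    while True:
        packet, _ = sock.recvfrom(64)
        if len(packet) < 12:
            continue
        # Assumed layout: three little-endian float32s, in radians.
        yaw, pitch, roll = struct.unpack("<3f", packet[:12])
        # Feed the orientation into the projection pipeline here.
        print(f"yaw={yaw:+.2f} pitch={pitch:+.2f} roll={roll:+.2f}")

UDP over a local Wi-Fi network is usually only a few milliseconds, which leaves most of the latency budget for rendering.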
I am in the process of bringing together a community around art making like this. Let me know if this is something that interests you.
It is of interest! I am in the process of learning TouchDesigner, looking to integrate it with Ableton Live. I doubt I’ll have enough GPU power for real-time 3D renders based on changing sound or visual input though…
Currently I am considering pre-rendering scenes to given BPMs (where applicable) and doing only limited realtime alterations with TD nodes.
This seems like an extremely advanced version of those sand tables at science centers that kids play with, where digging a trench in sand with your hand modifies the projection to affect virtual water flow.
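The water half of those tables is a surprisingly small amount of code; the clever part is the live heightmap from the depth camera. A toy relaxation sketch (real exhibits use proper shallow-water solvers, and every number here is a placeholder):

    import numpy as np

    def flow_step(height, water, rate=0.25):
        # One relaxation step: water moves toward lower-surface
        # neighbors, a damped fraction per step. Edges wrap, which
        # is fine for a toy.
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            surface = height + water
            neighbor = np.roll(surface, (dy, dx), axis=(0, 1))
            # Send at most half the difference, capped by available water.
            transfer = np.clip((surface - neighbor) / 2, 0, water) * rate
            water = water - transfer + np.roll(transfer, (-dy, -dx), axis=(0, 1))
        return water

    # Dig a "trench" into flat terrain and watch a thin sheet of rain pool.
    height = np.ones((64, 64))
    height[30:34, 10:50] = 0.2       # the trench dug by hand
    water = np.full((64, 64), 0.05)  # uniform rain
    for _ in range(200):
        water = flow_step(height, water)

Project the cells where water has pooled in blue over the sand and you have the core of the exhibit.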
The youtube video[0] from that page is especially interesting.
You want a high-bandwidth interface to computing? Combine this with Dynamicland technology (https://dynamicland.org/). This makes way more sense than creepy neural interfaces. Best of all, it would allow people to freely collaborate on things in real life.
In 1899, H.G. Wells wrote "A Story of the Days to Come", which was later adapted into the 1936 film "Things to Come". In the original story, the main characters mention being irritated by the advertisements projected onto the backs of the people they walk behind. Old idea, only now possible without image distortion.
Another use of a high-speed projector would be to create a real 3D display anywhere in a volume swept out by a moving surface. Objects would be translucent, but otherwise it's real 3D with wide viewing angles.
I first heard of this in the 1980s. TI used a laser to project points onto a rotating surface. Then someone used a flexible mirror in front of a speaker. The mirror would flex convex/concave, changing the apparent distance to a vector display screen.
Just saying a modern 1000fps display could do this much better.
That’s an amazing idea. A naive approach would be to slice the volume into N surfaces (say, ~30 layers to stay above 30fps) but there is probably a much more efficient organic pattern or interlacing that would give good volume and angle coverage at much higher resolution - think of crumpled cloth blowing in the wind at speed.
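To put rough numbers on the naive slicing: a 1000fps projector with a surface sweeping the volume at 30Hz yields ~33 depth layers per sweep. A back-of-the-envelope sketch, all values placeholders:

    import numpy as np

    FRAME_RATE = 1000                    # projector frames per second
    SWEEP_RATE = 30                      # surface sweeps per second
    SLICES = FRAME_RATE // SWEEP_RATE    # ~33 depth layers per sweep

    # Toy volume: an (X, Y, Z) occupancy grid of the object to display.
    volume = np.zeros((128, 128, SLICES), dtype=np.uint8)
    volume[40:88, 40:88, 10:20] = 255    # a floating box

    def frame_for_time(t):
        # Pick the slice matching the surface's depth at time t.
        # Assumes a linear sweep; a real rig would read an encoder,
        # and a flexing mirror would need its actual motion profile.
        phase = (t * SWEEP_RATE) % 1.0   # 0..1 within one sweep
        z = int(phase * SLICES)          # surface depth -> slice index
        return volume[:, :, z]           # image to project this frame

    # One sweep's worth of frames:
    frames = [frame_for_time(i / FRAME_RATE) for i in range(SLICES)]

The crumpled-cloth idea would replace the fixed slice-per-frame mapping with a per-pixel depth lookup from the tracked surface, which is exactly what this system's deformation tracking provides.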
This is just too much. My mind is completely blown. And it makes me think of how many things I assume are impossible but actually are, or soon will be, possible.
Similar feelings. The video doesn't go easy on the tech either. He is shaking that paper violently and I can't see any faults. The part where he has two bits of paper, as well as when he stretches his shirt, is mind-blowing.
One downside with this approach is that you need something to bounce the light off of (e.g., a surface), so adding virtual objects to AR is difficult if the objects aren't positioned at the surface of real world objects. That's an effect you often want to achieve in AR applications.
Having just watched a bunch of videos on Deep Fakes after the revelation of the Deep Fake video of Nixon's Moon Landing Disaster speech on here this morning, I can't help but feel like this is something else that will make fakes more and more difficult to distinguish from genuine.
I think it would be a very interesting idea for a music show. Tell everyone to wear white t-shirts and create this kind of projection from multiple beacons spread around the venue. Not sure if that would be possible, but it would be very cool to see IRL and talk about.
Back in 2012, my wife and I attended a stage show called "The Animals and Children Took To The Streets" [0]. It was done with "dumb" projectors, with choreographed movement of different screens on the stage, but created a highly dynamic show.
Remembering that show, and seeing these videos, it makes me giddy to think what could be done with the latest tech today.
That is pretty impressive. I tried something similar with a DLP projector stolen from Texas Instruments (not stolen, but they tend to be picky about selling them, probably because these are awesome devices).
It was slow as hell, since I used a raster of multiple images to measure the topology. The resulting heightmap was awesome, but even with a synchronized camera and projector, I needed a pretty long illumination time per image. So it would be interesting to know what camera(s) they used, too. I doubt you would need multiple projectors, because the available ones are extremely fast.
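For anyone who wants to try the slow multi-image route: the classic trick is Gray-code stripes, where each camera pixel decodes which projector column lit it. A stripped-down sketch, assuming the captures are already thresholded to 0/1 (real rigs also project the inverted patterns and threshold on the difference):

    import numpy as np

    def gray_patterns(width, n_bits):
        # Gray-code stripe patterns for a projector of the given width.
        # Each is a (1, width) row of 0/1, stretched vertically into
        # full-height stripes when projected.
        cols = np.arange(width)
        gray = cols ^ (cols >> 1)                  # binary -> Gray code
        return [((gray >> b) & 1).reshape(1, -1) for b in range(n_bits)]

    def decode(captures):
        # Recover the projector column seen by each camera pixel.
        # captures: n_bits binarized camera images (H, W), one per
        # pattern, in the same bit order as gray_patterns().
        gray = np.zeros(captures[0].shape, dtype=np.int64)
        for b, img in enumerate(captures):
            gray |= img.astype(np.int64) << b
        col = gray.copy()                          # Gray -> binary:
        shift = 1
        while shift < 64:                          # XOR of all right-shifts
            col ^= col >> shift
            shift <<= 1
        return col

    patterns = gray_patterns(1024, 10)  # 10 bits cover 1024 columns

With calibrated camera/projector geometry, each (camera pixel, projector column) pair triangulates to a 3D point, which is where the heightmap comes from. Ten-ish exposures per frame is exactly why it was slow.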
First saw this on the Prosthetic Knowledge Tumblr account years ago. Really miss that account. Whoever was behind it did a phenomenal job of curating incredibly interesting technology developments.
This is so cool. Now I want to take this (and so many more things like it that have come up recently) and show the students at the art school I attended that projection mapping can be so much more than lining up all the parallel lines.
The demo video answered my questions; it's short, and the tech is impressive. Not entirely sure of the business model, though. The non-rigid tracking might be more useful than the projection -- perhaps a defense application?
Skimming the description, they used a structured light approach (active) for geometry deformation tracking. This is still probably useful for certain defense applications, but an ideal goal is often to use passive tracking systems. It's impressive either way.
Lots of theater, performing, and visual art uses come to mind.
Some retailers (especially Japanese ones) have a fascination with the idea of allowing customers to "try on" different colors/styles via an on-screen avatar that's based on a scan of the customer. This would seem like a logical next step.
I'm unsure how much customers would actually want it though, at least after the novelty wears off.
This would go great with the teamLab Borderless exhibit, which currently seems primitive in comparison (it's an art exhibit where they project images everywhere).
I was thinking the same thing. Stryker already has a system with a tracking device for surgeons that shows pre-op CT scans around the area and shifts slices around depending on the position (it also performs a 3D rendering).
Projecting this directly on the surface may be useful but you'd have to be careful not to skew information surgeons may find useful. Seems great for training on cadavers and stuff though.
And they can now do the tracking without the infrared paint: http://www.k2.t.u-tokyo.ac.jp/vision/MIDAS/index-e.html