Didn't care much for the article, but the linked Nature Communications paper "Holographic acoustic elements for manipulation of levitated objects" is quite fun:
This is great work, but it's another article trying desperately to give one person credit for something that an entire community of researchers has been working on. They go as far as to claim ultrasonic haptics was "abandoned" in the 70s and that they're the only ones working on "in the air" haptics, yet a few minutes of searching turns up similar work from the University of Tokyo and Disney Research:
That said, translating this stuff to a product line is the real challenge, and they seem to be doing impressive stuff there. But priority claims really take away from that.
All of the examples shown in the video demo look so painfully unrealistic and impractical: every situation seems to beg for an actual, you know, button.
Using two fingers to air-swipe a virtual knob to turn up the heat when you're cooking? Really? How does that make anything easier?
Even worse: trying to do so while driving. Good luck with that!
The technology seems promising, but I just don't see it happening for any of the demoed use cases.
Not everything is supposed to be immediately applicable. I'm guessing it will be a while before we start seeing stuff like this in consumer products, but watching the first steps is still pretty cool.
Those demos were painful to watch. This technology has so many potential applications, and some of them might be a big deal. A button and a knob are not among them.
Good point, although I didn't use it long enough to say for sure. I imagine you could apply the same rule of thumb as with other gestural controllers.
Another aspect of fatigue would be: does the user become fatigued/insensitive to the haptic interaction over time?
How does this compare with the RF gesture recognition Google[x] has been developing? In the videos they're using a Leap Motion, while Google[x] is "suggesting" using your own fingers as support.
This makes me think of the description in Genesis of the Universe being created through speech: the Universe as the sustained epiphenomenon of 10 utterances.
Seems to be great for texture already. I wonder how you can integrate it into the typical VR setting. I'm envisioning some sort of bubble around the person. Or at least I guess you'd need to be surrounded by the ultrasound speakers in some way.
Stopping movement is of course also very tricky. Picking up a mug of coffee would be the killer app; if that ever works with the right feedback, the future is here.
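On the "surrounded by ultrasound speakers" point: a minimal sketch below, not the paper's holographic optimization, just the textbook single-focus phased-array rule, showing why the focal point has to sit in front of an array within range. The 40 kHz frequency, 16x16 grid, and 10 mm pitch are assumed ballpark values for a desktop-style device, not taken from the article.

```python
# Sketch (assumptions, not the paper's algorithm): a flat grid of 40 kHz
# transducers can place an acoustic focal point anywhere in front of it
# by phasing each element so all wavefronts arrive in phase, i.e.
#   phase_i = (2*pi*f/c) * |p_focus - p_i|   (mod 2*pi)
import numpy as np

FREQ = 40_000.0          # Hz, common ultrasonic-haptics frequency
SPEED_OF_SOUND = 343.0   # m/s in air
WAVENUMBER = 2 * np.pi * FREQ / SPEED_OF_SOUND

# Hypothetical 16x16 array with 10 mm pitch, lying in the z = 0 plane.
pitch = 0.01
xs = (np.arange(16) - 7.5) * pitch
gx, gy = np.meshgrid(xs, xs)
elements = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])

def focus_phases(focal_point):
    """Per-element phase delays that make all emissions arrive in phase at the focus."""
    distances = np.linalg.norm(elements - np.asarray(focal_point), axis=1)
    return (WAVENUMBER * distances) % (2 * np.pi)

# Focal point 20 cm above the centre of the array -- roughly where a hand
# hovering over a desktop device would be.
phases = focus_phases([0.0, 0.0, 0.2])
print(phases.shape, phases.min(), phases.max())
```

The point for VR: each flat panel only covers the volume in front of it, so full-room feedback really would mean tiling transducers around the user, which is the "bubble" the comment above imagines.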
Gesture input is ultimately pointless without force feedback: you don't feel the interaction, and the lack of intuitive feedback makes you want to go back to comfortable interfaces.
I remember being really excited for the Wii and swinging my sword for the first time in Legend of Zelda. My sword was blocked; my hand kept moving. Immersion gone.
I don't believe that's true; it may depend on the action.
I'm very comfortable pointing a person or animal where to go, and the lack of force feedback doesn't make me want to go up and push them to where they ought to be. A bit different from wielding a sword.
What you have done is give an example where force feedback would be an improvement. That's a common but fallacious way to try to disprove something.
It does not mean that this is the general case. Actually, it doesn't matter whether it is: if there are a dozen uses without feedback and three dozen uses with feedback, it's still a big win to get the first dozen.
It's still a big loss if the lack of convenient feedback means the application never gets used, even when force feedback isn't strictly necessary for it to function.
Yes, it's such a big loss if you have nothing now and add a solution that works for some people. But since it can't meet the needs of everyone ... </sarcasm> You want it all or nothing, which seems unreasonable.
As Tim Cook said to me, if you make something that doesn't change behavior, it's a gimmick, and it won't last.
If motion sensors have an application - perhaps for people with disabilities - by all means go for it. But innovation for its own sake can be a waste of time.
Do you mean like speech recognition before it was 100% ready? It's pretty limited now, and I've noticed that Siri is easily confused and people seem to have to repeat themselves quite often.
Obviously, motion sensors have a lot of use without force feedback. Feel free to wait until that point. Telling the rest of us that we don't need it seems pointless.
I feel like we'll get to the point of hijacking brain signals first: all the senses are fed in by the computer, and all the voluntary muscle-movement signals are intercepted before they reach the actual muscles.
Awesome. But one thing is still missing: temperature, and especially temperature response gradients - like, when you touch a simulated piece of aluminium foil it adapts to your hand temperature, while a "solid steel block" feels colder than your hand for a longer time.
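The foil-vs-steel contrast comes down to thermal effusivity, which is probably the quantity a temperature display would have to reproduce. A minimal sketch, not from the paper, assuming textbook ballpark material constants and the standard semi-infinite contact-temperature formula:

```python
# Rough sketch: "steel feels colder than foil" follows from thermal
# effusivity e = sqrt(k * rho * cp). For two semi-infinite bodies in
# contact, the interface temperature is
#   T_contact = (e1*T1 + e2*T2) / (e1 + e2)
# Material constants below are assumed textbook ballpark figures.
from math import sqrt

def effusivity(k, rho, cp):
    """Thermal effusivity in W*s^0.5/(m^2*K)."""
    return sqrt(k * rho * cp)

E_SKIN = effusivity(k=0.37, rho=1000, cp=3500)   # human skin, approximate

MATERIALS = {
    "steel block": effusivity(k=45,   rho=7800, cp=490),
    "aluminium":   effusivity(k=237,  rho=2700, cp=900),
    "wood":        effusivity(k=0.15, rho=600,  cp=1700),
}

T_SKIN = 33.0  # degrees C, typical fingertip
T_ROOM = 21.0  # degrees C, object at room temperature

def contact_temperature(e_obj, t_skin=T_SKIN, t_obj=T_ROOM, e_skin=E_SKIN):
    """Interface temperature of two semi-infinite bodies brought into contact."""
    return (e_skin * t_skin + e_obj * t_obj) / (e_skin + e_obj)

for name, e in MATERIALS.items():
    print(f"{name:12s} -> skin-side contact temp ~{contact_temperature(e):.1f} C")

# Thin aluminium foil breaks the semi-infinite assumption: its total heat
# capacity is tiny, so the interface climbs back toward skin temperature
# within a fraction of a second. That time course is the "temperature
# response gradient" a convincing simulation would have to render.
```

Running it, steel and bulk aluminium both pull the contact point down to roughly room temperature while wood sits close to skin temperature, which matches the intuition in the comment above.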
http://www.nature.com/ncomms/2015/151027/ncomms9661/full/nco...
Short 3-minute summary video from the supplementary materials, with a fair amount of eye-candy: http://www.nature.com/ncomms/2015/151027/ncomms9661/extref/n...