Ultrasound technology lets you touch and manipulate virtual objects (bloomberg.com)
92 points by akrymski on Feb 3, 2016 | hide | past | favorite | 31 comments



Didn't care much for the article, but the linked Nature Communications paper "Holographic acoustic elements for manipulation of levitated objects" is quite fun:

http://www.nature.com/ncomms/2015/151027/ncomms9661/full/nco...

Short 3 minute summary video from the supplementary materials, which has a fair amount of eye-candy: http://www.nature.com/ncomms/2015/151027/ncomms9661/extref/n...
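Both the haptics device and the levitation paper rely on the same phased-array idea: drive each transducer with a phase offset so the 40 kHz wavefronts all arrive in phase at a chosen focal point. A minimal sketch of that focusing rule (my own illustration; the element layout and focal point are made up, not taken from the paper):

```python
# Phased-array focusing sketch: delay each element so its wavefront
# arrives at the focal point in phase with the others.
# Phase for element i: phi_i = -k * d_i (mod 2*pi), d_i = distance to focus.
import math

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40_000.0     # Hz, typical for these ultrasonic arrays

def focus_phases(elements, focal_point):
    """Return the drive phase (radians) for each transducer element."""
    k = 2 * math.pi * FREQUENCY / SPEED_OF_SOUND   # wavenumber
    phases = []
    for pos in elements:
        d = math.dist(pos, focal_point)            # path length to focus
        phases.append((-k * d) % (2 * math.pi))    # cancel the travel delay
    return phases

# 8x8 grid of elements at 1 cm pitch in the z=0 plane (hypothetical layout)
elements = [(0.01 * i, 0.01 * j, 0.0) for i in range(8) for j in range(8)]
phases = focus_phases(elements, (0.035, 0.035, 0.15))  # focus 15 cm above
```

Steering the focal point around (or time-multiplexing several of them) is what produces the tactile points and the levitation "traps" in the video.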


This is great work, but this is another article trying desperately to give one person credit for something an entire community of researchers has been working on. They go so far as to claim that ultrasonic haptics was "abandoned" in the 70s and that they're the only ones working on "in the air" haptics, and yet a few minutes of searching turns up similar work from the University of Tokyo and Disney Research:

http://news.bbc.co.uk/2/hi/technology/7593444.stm

https://www.disneyresearch.com/project/aireal/

That said, translating this stuff to a product line is the real challenge, and they seem to be doing impressive stuff there. But priority claims really take away from that.


All of the examples shown in the video demo look so painfully unrealistic and impractical: every situation seems to beg for an actual, you know, button.

Using two fingers to air swipe a virtual knob to bring up the heat when you're cooking? Really? How does that make anything any easier?

Even worse: trying to do so while driving. Good luck with that!

The technology seems promising, but I just don't see it happening for any of the demoed use cases.


Not everything is supposed to be immediately applicable. I'm guessing it will be a while before we start seeing stuff like this in consumer products, but seeing the first steps is still pretty cool.


Those demos were painful to watch. This technology has so many potential applications, and some of them might be a big deal, but a button and a knob are not among them.


I've played with one of their kits, it's definitely fun. The level of definition for different textures was surprisingly good as well.

Worth noting that you have to orient your hands horizontally to the device, as with the Leap Motion, to get a consistent result.


From your account, and looking at the pictures, you need to hold your arm/hand/finger in the air to make this work.

How long can you do that before it becomes uncomfortable (aka "gorilla arm")?


Good point, although I wasn't using it long enough to know exactly. I imagine you could apply the same rule of thumb as with other gestural controllers.

Another aspect of fatigue would be: does the user become fatigued/insensitive to the haptic interaction over time?


How does this compare with what Google[x] has been developing in RF gesture recognition? In the videos they are using a Leap Motion, while Google is "suggesting" using your fingers as support.


I haven't tried Soli, if that's what you're referring to, but I don't see how they compare, since Google isn't doing mid-air haptics.


This makes me think of the description in Genesis of the Universe being created through speech, in which the Universe is the sustained epiphenomenon of 10 utterances:

http://www.chabad.org/multimedia/media_cdo/aid/677029/jewish...

. . . also, this is like the coolest thing ever . . .


Seems to be great for texture already. I wonder how you can integrate it into the typical VR setting. I'm envisioning some sort of bubble around the person. Or at least I guess you'd need to be surrounded by the ultrasound speakers in some way.

Stopping movement is of course also very tricky. Picking up a mug of coffee would be the killer app; if that ever works with the right feedback, the future is here.


Gesture-based computing is almost here in some form. Apple, Google, Intel and Microsoft are all working on it:

Apple: http://9to5mac.com/2016/02/02/apple-proximity-sensors-patent...

Intel Real Sense - http://www.intel.com/content/www/us/en/architecture-and-tech...

Google's Soli chip - http://www.youtube.com/watch?v=0QNiZfSsPc0

Microsoft shipped the Kinect 6 years ago.

The need for hand tracking with VR headsets should give it another boost.

By the 20th anniversary of Minority Report?

http://youtu.be/7SFeCgoep1c


Gesture input is ultimately pointless without force feedback: you don't feel the interaction, and the lack of intuitive feedback makes you want to go back to comfortable interfaces.

I remember being really excited for the Wii and swinging my sword for the first time in Legend of Zelda. My sword was blocked; my hand kept moving. Immersion gone.


I don't believe that's true; it may depend on the action.

I'm very comfortable pointing a person or animal where to go, and the lack of force feedback doesn't make me want to go up and push them to where they ought to be. A bit different from wielding a sword.


What you have done is give an example where force feedback would be an improvement. That's a common but fallacious way to try to disprove something.

It does not mean that this is the general case. Actually, it doesn't matter if it is: if there are a dozen uses without feedback and three dozen with it, it's still a big win to get the first dozen.


It's still a big loss if the lack of convenient feedback means the application never gets used, even when force feedback isn't strictly necessary for it to function.


Yes, it's a big loss if you have nothing now and add a solution for some people. But since it can't meet the needs of everyone ... </sarcasm> You want it all or nothing, which seems unreasonable.


As Tim Cook said to me, if you make something that doesn't change behavior, it's a gimmick, and it won't last.

If motion sensors have an application - perhaps for people with disabilities - by all means go for it. But innovation for its own sake can be a waste of time.


Do you mean like speech recognition before it's 100% ready? It's pretty limited now and I've noticed that Siri is easily confused and people seem to have to repeat themselves quite often.

Obviously, motion sensors have a lot of use without force feedback. Feel free to wait until that point. Telling the rest of us that we don't need it seems pointless.


I like that quote. Thanks.


The imaginary eWii would trigger electroshocks on collisions, causing short muscle spasms.


I just want someone to figure out how to immobilize muscles so that you can't "push through" the feedback. Then, I can get my full body holodeck.


That sounds scary.

(And here we go again, my brain is throwing misuse scenarios at me..)


Yup.

The problem is not designing solid haptics.

The problem is designing solid haptics that can't easily kill or maim you.


I feel like we'll get to the point of hijacking brain signals first: all sensory input is supplied by the computer, and all voluntary muscle movement signals are intercepted before they reach the actual muscles.


Controlled magnetic fields!


You could virtually punch someone with this tech. Transduce this!

Blind people can feel their way around a user interface, or perhaps a projected face from a 3D door camera.


I was thinking about this just yesterday! Really cool that you got something working.


Awesome. But one thing is still missing: temperature, and especially temperature response gradients. When you touch a simulated piece of aluminium foil, it should adapt to your hand temperature, while a "solid steel block" should feel colder than your hand for a longer time.


Ok, when can we expect holodecks, and even better, Quark's holo-suites?





