
6dof input from hand gestures is a killer feature, but it has to be rock solid. So far only controllers can do it, though hand tracking is getting much better every year.

Haptic feedback, discrete buttons, and precise analog input from controllers are also very important. The downside of controllers is that your hands are full, which just isn't feasible for an all-day wearable.

Hopefully someone figures out a good compromise, be it rings, gloves, or whatever.




My killer feature would be a keyboard (and mouse) in a wristband or two, which brings 6dof along with it, or nearly so (also worthy, but not a personal grail).

Electromyography is an awesome technology, among other reasons because it can (or will) detect neural signals below the activation (movement) threshold, meaning you should be able to train yourself to type without moving your fingers. It's a viable route to thought control without the invasive aspects of other approaches.
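
To make the sub-threshold idea concrete, the detection could be as simple as a two-level envelope detector. A toy sketch in Python, with the signal simulated and every number made up:

    import numpy as np

    def emg_envelope(signal, fs=1000, win_ms=100):
        # Rectify and moving-average raw EMG into an amplitude envelope.
        win = max(1, int(fs * win_ms / 1000))
        return np.convolve(np.abs(signal), np.ones(win) / win, mode="same")

    # Hypothetical levels: neural drive shows up well below visible motion.
    INTENT = 0.05    # sub-movement activation -> register a "keystroke"
    MOVEMENT = 0.30  # actual finger motion

    def classify(env_value):
        if env_value >= MOVEMENT:
            return "movement"
        if env_value >= INTENT:
            return "intent"  # typing without moving a finger
        return "rest"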

Back in the nineties, I said the computer user fifty years from then would look like a hippy: headband (neural interface), sunglasses (I thought monitor, but AR is cooler), and a crystal around their neck (optical computer; maybe a miss, we'll see what the next decade brings, and a slab in a pocket will do for now). Given my zero trust of end-stage capitalism near my noggin, wristbands are an excellent transitional step, as long as they're local (or can be made so; happy jailbreaking).


Unless EMG signal processing has had some breakthrough in the past 10 years, it is not a very precise interaction mode. I worked in a lab developing it for quadriplegics to use with the muscle on their temple (we tested above the thumb as well). You can get rough 2-axis control with some practice, but that's with an adhesive EMG pad. Can a wristband get a clean signal?

For typing, I'd expect you need to combine with eye tracking. So you're back to the Vision Pro UI.

On its own, EMG makes a good button, I'd expect. Maybe 1-axis control.
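
If anyone's curious, the standard shape of proportional EMG control is roughly this (not our exact pipeline; channel layout and numbers invented):

    def axis_velocity(env_pos, env_neg, gain=5.0, deadband=0.02):
        # Proportional control from the envelopes of two opposing muscle
        # sites; the deadband keeps baseline noise from drifting the cursor.
        v = env_pos - env_neg
        return gain * v if abs(v) > deadband else 0.0

    # Rough 2-axis control is two of these, one pair of sites per axis:
    # vx = axis_velocity(env_right, env_left)
    # vy = axis_velocity(env_up, env_down)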


Thanks for the reality check. Wait some more, use voice for now, is what I hear... Although a decade is a long time in signal processing and Meta has dumped a boatload of cash into this.

No 6dof either?

Sorry for using you as the 'say something wrong, get corrected' research method, but kudos for jumping in. ;}


It would be a Bene Gesserit keyboard


The haptic feedback you get from touching your thumb and forefinger to simulate a click is actually better than a button because it feels more organic and natural.

Where it falls apart is not being able to feel yourself touching objects, which nothing other than a full glove is going to be able to simulate. Controllers and rings provide no benefit over Apple's approach.


> Controllers and rings provide no benefit over Apple's approach.

When I touch a good quality button, I can feel the actuation point, and it's the same every time - I can learn to tell reliably whether I've pressed it or not.

When I touch my thumb and forefinger for a camera-based system, I can't reliably tell at what point it'll get detected as touching, because it isn't the same point each time.

As a result, I have to hold them together until I'm sure it's registered.

As a user, knowing unambiguously whether you've activated a control or not is a huge advantage for controllers & buttons.
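
It's also why camera-based pinch feels laggy: to paper over the noisy contact point, any detector has to do something like hysteresis plus debounce. A sketch, with all thresholds invented:

    def make_pinch_detector(press_mm=8.0, release_mm=15.0, hold_frames=3):
        # Hysteresis + debounce over a noisy fingertip-distance estimate:
        # a press needs several consecutive frames under the tight threshold,
        # a release needs a clearly wider gap. Both margins add delay, which
        # is exactly the "hold it until I'm sure" experience.
        pressed, below = False, 0
        def update(distance_mm):
            nonlocal pressed, below
            if not pressed:
                below = below + 1 if distance_mm < press_mm else 0
                if below >= hold_frames:
                    pressed, below = True, 0
            elif distance_mm > release_mm:
                pressed = False
            return pressed
        return update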


It sounds like the wrist strap thing will have haptic feedback for when gestures get registered, so you'll at least know when that happens. That might actually make it better than the annoying capacitive buttons that are popular these days, which give no feedback…


> The haptic feedback you get from touching your thumb and forefinger to simulate a click is actually better than a button because it feels more organic and natural.

Where did you experience this? My only experience is with the Apple Vision Pro and it failed like half the time


The device may have failed to do what you anticipated, but the only way the haptic feedback failed is if you have peripheral nerve damage.


If the sensation is not closely associated with an action, it's still a failure.


Isn't haptic feedback supposed to mean that you feel something as feedback that an action happened? If so, then this would be more like haptic feedforward. The Apple Vision Pro reacts because you feel something, and that sounds as reliable as it probably is.


Apple doesn't react because you feel something. Apple estimates, based on the kinematics it recreates from its camera feed, when something happens. It is NOT waiting for a visual gap between the fingers to disappear, as that would require an exactly correct camera angle.
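
Concretely, "estimating from kinematics" boils down to something like this (threshold and coordinates invented):

    import numpy as np

    def pinch_from_keypoints(thumb_tip, index_tip, threshold_mm=10.0):
        # Contact is inferred, never observed: compare estimated 3D fingertip
        # positions (each with a few mm of tracking jitter) to a threshold.
        d = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
        return d < threshold_mm

    # The same physical pinch can land on either side of the threshold
    # from frame to frame, purely from keypoint jitter:
    pinch_from_keypoints([0, 0, 0], [9, 2, 3])   # True  (~9.7 mm)
    pinch_from_keypoints([0, 0, 0], [10, 3, 3])  # False (~10.9 mm)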


I guess you get the natural haptic, but the feedback is visual/audio (it happens in software). In any case, the link between the haptic and the visual/audio action is kinda broken on the Vision Pro.


Let me ELI5:

* I pressed my finger

* nothing happened

* it feels broken

The reason is that it's camera-based, unlike Orion. And this is why people describe Orion as magical, whereas nobody talks about the hand gestures of the Apple Vision Pro (though people do talk about the AVP's eye tracking as magical).


> Controllers and rings provide no benefit over Apple's approach.

This is the problem with fanboi-ism... the hyperbole is so clearly false. Let me list the ways that controllers are better:

- Typing/chording.

- Cheaper.

- More efficient: no cameras pointing at things the user can't see, no continuous video processing.

- No dead zones where the camera can't see.

- Accuracy. For all but a few camera angles, Apple has to guess when your fingers make contact. It works best with bigger movements, but the bigger the movement, the longer that movement takes. There's a reason no big-name competitive games have been ported over.

- Actual haptic feedback. Play Horizon: Forbidden West on PS5 to understand just what haptics can communicate to you... it's so much more than tapping your fingers together.

Apple's approach is amazing, and it's good for the use cases that Apple tells you are important... but there's so much more than that. You're doing a disservice to Apple by going full fanboi.


Users can just connect a PS5 controller to the Vision Pro; of course it's an extra expense, but for Vision Pro buyers that's pretty much insignificant.

And all the sensors, cameras, and superfast processing will still be needed for that immersive quasi-AR simulated experience, so there are no cost savings there.


How do you figure? Last I checked, mushy, organic keyboards are not preferred.

I mean, it's just laughable to suggest that input works just as well on the AVP. People are not using the virtual keyboard if a real one is available. Gamers want clicky buttons too.

There are clear benefits and disadvantages of each setup.



