
There is actually an older demo still on YouTube: https://www.youtube.com/watch?v=TYcrQswVcnA&t=10s (full presentation: https://www.youtube.com/watch?v=iCZLll1l92g)

The technique called "dual gaze" in the article is similar to some of what we were doing. Our work was long before that paper was published, and I think several aspects of our design were better than the one in the paper.




Holy shit, that looks like magic! The slide-to-confirm interaction using smooth pursuit was a really nice touch. I added the extended demo to the article.
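For context on what I mean by slide-to-confirm: the common way smooth-pursuit selection is done (and I'm only guessing the demo does something roughly like this) is to correlate the gaze trajectory with the moving confirmation target over a short window and trigger when they match closely. A rough Python sketch, where the function name, thresholds, and toy data are all made up by me:

    import numpy as np

    def pursuit_confirmed(gaze_xy, target_xy, threshold=0.85):
        """gaze_xy, target_xy: (N, 2) arrays of positions over a short window.
        Returns True if the gaze follows the moving target closely enough to
        count as a deliberate confirmation."""
        gaze = np.asarray(gaze_xy, dtype=float)
        target = np.asarray(target_xy, dtype=float)
        if len(gaze) < 10:                      # need enough samples to be stable
            return False
        # Pearson correlation between gaze and target trajectories, per axis.
        corr_x = np.corrcoef(gaze[:, 0], target[:, 0])[0, 1]
        corr_y = np.corrcoef(gaze[:, 1], target[:, 1])[0, 1]
        return min(corr_x, corr_y) > threshold

    # Toy usage: a confirmation target sliding across the screen,
    # with the gaze following it plus a little tracker noise.
    t = np.linspace(0, 1, 60)
    target = np.stack([t * 300.0, t * 120.0 + 80.0], axis=1)
    gaze = target + np.random.normal(scale=3.0, size=target.shape)
    print(pursuit_confirmed(gaze, target))      # almost always True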

You say the dual gaze worked similarly to your implementation, but I don't see any confirmation flags. I really can't figure out how this works. :) Is the tutorial available somewhere, please?


Thanks! I should say I didn't have much to do with that specific demo; I mostly worked on reimplementing everything from the ground up for VR. I don't think there's any good video of that system, but there are some accounts in the press from journalists who tried it. (Any journalist who tried it was using my VR system. We didn't let journalists try the AR version because the calibration wasn't reliable enough on new people, but the deep-learning-based tracking in the VR version was more reliable.)

As far as exactly how it works, I probably shouldn't go around disclosing all the secret sauce. After all, Google bought it fair and square. AFAIK there's nobody left in Google VR that knows much about it, but I haven't worked there for many years so I don't know the current state of things.


OK, I read 20+ articles describing the VR demo you worked on. Some of them explicitly state that they can't disclose how the interaction works. One of them mentioned that an extra saccade is used to “click”, but that isn't very revealing. Some demos, e.g. the 360 virtual displays, have an explicit gaze target to trigger the action, but the AR demo lacks them, so my take is that the 2-minute tutorial teaches the person that there is an invisible but standardized target (say, the upper-right corner) that acts as the trigger. No idea about scrolling. But damn, everybody who tried the demo was super convinced that this was the way to go and that in 2017 there would be headsets with such tech. Here we are, 6 years later, and I'm still waiting.
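Just to make that guess concrete (pure speculation on my part, not the actual system; every name and number below is made up): the interaction I'm imagining is "dwell on an item to arm it, then a quick saccade to an unmarked zone at a standardized offset confirms the click". Roughly:

    # Pure speculation about the "invisible but standardized target" idea above,
    # not how the real system works. Dwell on an item to arm it, then a quick
    # saccade into an unmarked zone at its upper-right counts as the "click".

    DWELL_S = 0.3           # how long you must look at an item to arm it
    CONFIRM_WINDOW_S = 0.5  # how long the invisible trigger stays live
    OFFSET = (60.0, -60.0)  # trigger zone relative to the item's center (px, screen coords)
    RADIUS = 30.0           # trigger zone radius (px)

    class GazeClicker:
        def __init__(self):
            self.item = None         # item currently dwelled on / armed
            self.center = None       # that item's center on screen
            self.dwell_start = None
            self.armed_at = None

        def update(self, t, gaze, item, center):
            """Feed one gaze sample. t: timestamp in seconds, gaze: (x, y),
            item: id of the item under the gaze (or None), center: its center.
            Returns the id of a 'clicked' item, or None."""
            if item is not None:
                if item != self.item:                       # started a new dwell
                    self.item, self.center = item, center
                    self.dwell_start, self.armed_at = t, None
                elif self.armed_at is None and t - self.dwell_start >= DWELL_S:
                    self.armed_at = t                       # trigger zone is live
                return None

            # Gaze has left the item: did it jump into the invisible trigger zone?
            if self.armed_at is not None and t - self.armed_at <= CONFIRM_WINDOW_S:
                tx, ty = self.center[0] + OFFSET[0], self.center[1] + OFFSET[1]
                if (gaze[0] - tx) ** 2 + (gaze[1] - ty) ** 2 <= RADIUS ** 2:
                    clicked, self.item, self.armed_at = self.item, None, None
                    return clicked
            return None

    # Toy usage: dwell on "photo_1" for ~0.4 s, then saccade up and to the right.
    clicker = GazeClicker()
    samples = [(0.00, (100, 200), "photo_1", (100, 200)),
               (0.40, (102, 198), "photo_1", (100, 200)),
               (0.45, (160, 140), None, None)]
    for t, gaze, item, center in samples:
        hit = clicker.update(t, gaze, item, center)
        if hit:
            print("clicked", hit)                           # -> clicked photo_1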

Google should start open sourcing their canned and dead projects.

Thank you for pointing me towards this research and for your work. :) Very cool!


Yeah, I wish Google would do something with it too. The good news is that headsets with good eye tracking are finally about to become generally available, and I expect that within a few years people will be using it in very creative ways. Wide availability will trigger more experimentation than we could ever do as a single startup.



