
That's my intent as well: use MR to replace monitors, then progressively switch to monitor-less apps.

There may be a limitation, though: resolution. If we have 9 virtual monitors, each displaying content at 2500x1200 (random pick), performance may not keep up.

But then, I guess we only need to focus on one at a time. Maybe those additional monitors could work if we lower the resolution of the ones our eyes aren't directly looking at, and just blur them so the rendering still looks acceptable.
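A rough sketch of what that could look like, assuming the headset gives us a gaze direction and a direction toward the center of each virtual monitor (the monitor names, falloff angles, and minimum scale below are made up for illustration, not from any real MR SDK):

    import math

    def angular_distance(a, b):
        """Angle in degrees between two direction vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        cos = max(-1.0, min(1.0, dot / (na * nb)))
        return math.degrees(math.acos(cos))

    def render_scale(gaze_dir, monitor_dir, inner_deg=15.0, outer_deg=60.0,
                     min_scale=0.25):
        """Full resolution inside inner_deg, fading to min_scale at outer_deg."""
        ecc = angular_distance(gaze_dir, monitor_dir)
        if ecc <= inner_deg:
            return 1.0
        if ecc >= outer_deg:
            return min_scale
        t = (ecc - inner_deg) / (outer_deg - inner_deg)
        return 1.0 - t * (1.0 - min_scale)

    # Example: 2500x1200 monitors; only the one being looked at stays sharp.
    gaze = (0.0, 0.0, 1.0)                      # looking straight ahead
    monitors = {"center": (0.0, 0.0, 1.0),
                "left":   (-0.7, 0.0, 0.7),
                "top":    (0.0, 0.7, 0.7)}
    for name, d in monitors.items():
        s = render_scale(gaze, d)
        print(name, f"{int(2500 * s)}x{int(1200 * s)}")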




To say nothing of whether Magic Leap is real: when VR/AR technology catches up, there's a trick to get around this. It should be possible to track gaze precisely and render only the foveal region at high detail; the rest can be very low res and nobody would notice. This will probably matter in graphics and gaming first, though, where all of the gigaflops can be focused on sampling illumination raycasts in the most sensitive area of vision. It's actually pretty wasteful to render an entire UHD monitor at full resolution when the eye can't even discern a word at one end of a sentence while the fovea is focused on the other.
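A back-of-the-envelope sketch of how much that saves on a UHD frame: shade in tiles, full rate only near the gaze point, coarser further out, then compare sample counts against brute force. The tile size, eccentricity bands, shading rates, and pixels-per-degree are illustrative guesses, not numbers from any particular headset or GPU API.

    import math

    WIDTH, HEIGHT, TILE = 3840, 2160, 16
    PX_PER_DEG = 50.0  # rough pixels-per-degree at a typical viewing distance

    def shading_rate(ecc_deg):
        """Fraction of pixels actually shaded in a tile at this eccentricity."""
        if ecc_deg < 2.0:    # foveal region: every pixel
            return 1.0
        if ecc_deg < 8.0:    # parafovea: 1 sample per 2x2 block
            return 1.0 / 4.0
        if ecc_deg < 20.0:   # near periphery: 1 per 4x4
            return 1.0 / 16.0
        return 1.0 / 64.0    # far periphery: 1 per 8x8

    def foveated_samples(gaze_x, gaze_y):
        total = 0.0
        for ty in range(0, HEIGHT, TILE):
            for tx in range(0, WIDTH, TILE):
                cx, cy = tx + TILE / 2, ty + TILE / 2
                ecc = math.hypot(cx - gaze_x, cy - gaze_y) / PX_PER_DEG
                total += shading_rate(ecc) * TILE * TILE
        return total

    full = WIDTH * HEIGHT
    fov = foveated_samples(WIDTH / 2, HEIGHT / 2)
    print(f"full-res samples: {full:,}")
    print(f"foveated samples: {fov:,.0f} ({fov / full:.1%} of full res)")

Even with these rough bands, the shaded-sample count drops to a small fraction of the full-resolution frame, which is the whole argument for spending the gaze-tracking effort.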