AR glasses will have some sort of camera. It's easy enough to warp the captured video to roughly match the view from each eye. It doesn't have to be perfectly aligned, sharp, or high-resolution. It just needs to be sufficient to provide a faux blurred background behind UI elements.
Looking at Liquid Glass, they've certainly solved it for higher-res backdrops. Low res should be simpler. It won't be as clean as Liquid Glass, but it could probably match visionOS quality.
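For a sense of how little fidelity that backdrop actually needs, here's a rough sketch of a downsample-and-blur pass (assuming OpenCV; the scale factor and kernel size are invented tuning values, not anything Apple ships):

```python
import cv2

def faux_backdrop(frame, ui_rect, scale=8):
    """Cheap frosted-glass backdrop: crop the region behind a UI
    element, downsample hard, blur, and upsample. Misalignment and
    low camera resolution get hidden by the blur itself."""
    x, y, w, h = ui_rect
    patch = frame[y:y + h, x:x + w]
    small = cv2.resize(patch, (max(w // scale, 1), max(h // scale, 1)),
                       interpolation=cv2.INTER_AREA)   # throw away detail
    small = cv2.GaussianBlur(small, (5, 5), 0)         # smooth what's left
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
```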
Oh, it's possible, but it costs a lot of power and has design implications.
You need the camera on and streaming. Sure, you only need a portion of the feed, but the camera still has to cover your whole display area, and the output has to be remapped. It also means the camera now has limited placement options.
Having the camera on costs power too, so your GUI isn't just costing display power; it's costing more on top because the camera has to be on as well.
I think you're over-thinking it. Camera power is extremely cheap. Amazon's Ring cameras run for months on a single charge. It's the display, refreshing content at 24–60 Hz (or more), that consumes the power.
The camera will have to turn on for the glasses to show you metadata, right? The camera will see what you see, just from a slightly different angle than each eye. A simple video matrix can warp the image to match each eye again. Cut out what you don't need, and just keep what's needed for the UI element. The AR glasses could simply have a dedicated chip for the matrix and other FX. I imagine view depth could take extra work, but iPhones do that now with their always-on lockscreen.
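To make the "video matrix" idea concrete, here's a minimal sketch, assuming the per-eye correction can be approximated by a single 3x3 homography (only roughly true for planar or distant scenes; the matrix below is a placeholder for real calibration data):

```python
import numpy as np
import cv2

# Placeholder calibration: one homography per eye, mapping the camera's
# view toward that eye's viewpoint. Real values would come from factory
# calibration of camera/display geometry; this one is just a horizontal shift.
H_LEFT = np.array([[1.0, 0.0, -12.0],
                   [0.0, 1.0,   0.0],
                   [0.0, 0.0,   1.0]])

def warp_for_eye(camera_frame, H, ui_rect):
    """Warp the camera frame toward one eye's viewpoint, then crop just
    the region a UI element needs. A single homography is only exact for
    planar or far-away content, which is where occlusion problems creep in."""
    h, w = camera_frame.shape[:2]
    eye_view = cv2.warpPerspective(camera_frame, H, (w, h))
    x, y, rw, rh = ui_rect
    return eye_view[y:y + rh, x:x + rw]
```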
Nope, it's experience talking. Why do you think Oculus has those funny warping issues? It's down to camera placement.
> A simple video matrix can warp the image to match each eye again
Occlusions are not your friend here.
> Cut out what you don't need, and just keep what's needed for the UI element.
For UI that's fixed in screen space, this works; for UI that's locked to world space, you need to be much more clever about your warping. Plus you're now doing real-time, low-latency processing on really resource-constrained devices.
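To see why world-locked UI is the harder case, a hedged sketch (plain pinhole model; the pose R, t and the intrinsics are assumed to come from the SLAM stack): a screen-locked element is a fixed crop, but a world-locked element has to be reprojected through the current head pose every single frame.

```python
import numpy as np

def project_world_anchor(anchor_xyz, R, t, fx, fy, cx, cy):
    """Project a world-space anchor into the current camera image given
    the head pose (R, t). A world-locked UI element must redo this every
    frame at low latency or it visibly swims; a screen-locked element
    skips all of it."""
    p_cam = R @ np.asarray(anchor_xyz, dtype=float) + t  # world -> camera
    if p_cam[2] <= 0:
        return None                                      # behind the camera
    u = fx * p_cam[0] / p_cam[2] + cx                    # pinhole projection
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v
```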
> I imagine view depth could take extra work,
Yes and no. If you have a decent SLAM stack with some object tracking, you kind of get depth for free. If you have 3D gaze vectors, you can also use those to estimate the depth of what you're looking at without doing anything else (but accurate gaze estimation needs calibration).
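The gaze-vergence trick in miniature, as a sketch (real gaze rays are noisy, which is exactly why the calibration caveat matters): treat each eye as a ray and take the closest-approach point between the two rays as the fixation depth.

```python
import numpy as np

def vergence_depth(o_l, d_l, o_r, d_r):
    """Estimate fixation depth from two gaze rays (origins o_*, unit
    directions d_*) via the standard closest-point-between-two-lines
    solve; returns distance from between the eyes to the fixation point."""
    o_l, d_l = np.asarray(o_l, float), np.asarray(d_l, float)
    o_r, d_r = np.asarray(o_r, float), np.asarray(d_r, float)
    w0 = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None                     # gaze rays parallel: looking at infinity
    s = (b * e - c * d) / denom
    u = (a * e - b * d) / denom
    fixation = ((o_l + s * d_l) + (o_r + u * d_r)) / 2  # midpoint of closest approach
    return np.linalg.norm(fixation - (o_l + o_r) / 2)
```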
> but iPhones do that now with their always-on lockscreen
That's just a rendering trick. It's not actually looking at your face all the time; most of that effect is driven by the accelerometer. Plus it doesn't need to be accurate, just to move more or less in time with the phone.
> Camera power is extremely cheap
Yes, but not for glasses. Glasses have about 1.3 Wh of battery for the whole day. Cameras consume about 30–60 mW, which is up to about half your power budget if you want a 12-hour day.
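The back-of-envelope behind that claim, using only the numbers in this thread:

```python
BATTERY_WH = 1.3                    # whole-day glasses battery
DAY_HOURS = 12                      # target wear time
CAM_MW_LOW, CAM_MW_HIGH = 30, 60    # always-on camera draw

budget_mw = BATTERY_WH / DAY_HOURS * 1000   # ~108 mW average draw allowed
print(f"camera: {CAM_MW_LOW / budget_mw:.0%}-{CAM_MW_HIGH / budget_mw:.0%} "
      f"of the total budget")               # -> camera: 28%-55% of the total budget
```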
> Amazon's Ring cameras run for months on a single charge
Yes, but the camera isn't on all the time; it has a PIR sensor to work out if there's movement. Plus the battery is much, much bigger (I think it has around 23 Wh).