Hacker News
How Magic Leap Works – From Field of View to GPU (gpuofthebrain.com)
249 points by rawnlq on July 28, 2016 | 76 comments



I can't wait to see their next cooked demo video. Just another day in the office at Magic Leap...


What I learned from Magic Leap and Theranos is that there is no need for an MVP. You just need an MHP (Maximum Hyped Product) to get started, then spend the funding trying to get an actual product out.


It's because they convinced Google to give them a half billion dollars. That is seen as verification of their claims.


Low-hanging fruit is becoming increasingly rare. The only real path forward is hard technology, which requires tolerance for risk; given the necessity of hardtech, taking such risks should be celebrated.


I wonder if we have Kickstarter to thank for this new advertising model


I really haven't paid any attention to Magic Leap for a while. Is anything actually happening other than the "concept" videos and patents they release?


They do have a note in at least their most recent video that the video is filmed through their tech with no post-processing, so I think it's a bit more than a concept video even if it's still hard to judge how far along the tech is.


"their tech" could really mean anything as they have been involved in many forms of AR.


Do you have a link? I thought all that was out there was the pre-rendered teaser (the one with the girl and the elephant in her hand).



That's what they mean by "cooked".


Per their response, it's not. They're saying _this_ is cooked: https://www.youtube.com/watch?v=lP5ZZI05A3g

I was talking about the original teaser that came out probably over a year ago, which was clearly pre-rendered and didn't have the disclaimer at the bottom of the above video (and the few others that have surfaced).


I understand it all except how they supposedly cancel background light to make black. It would only be theoretically possible if ambient light were coherent - which it isn't - and even then it would only be possible if everything were perfectly still.


I doubt that they cancel background light: all their videos are shot in low-light rooms.


Same here. But I also wondered how they get the fiber projection from the side to show up nicely all across the screen/glass.

I'm always amazed at the super-even result of LCD backlight diffusers (are they asymmetrical when the light comes in from just one side?), but a diffuser can get away with being messier than per-pixel projection.

edit: probably it just doesn't work. Maybe they could add a simple one-segment LCD for variable dimming.

They were very careful to show nothing black on white. The tip of the mountain briefly touches the white object on the table here: https://youtu.be/GmdXJy_IdNw?t=63


LCD backlight diffusers have a cool method of accounting for light distribution. Basically the diffuse etching is a grid of circles, and the circles are larger where there is less light. Like printer dithering.
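The dot-sizing rule can be sketched numerically. This is a toy model with made-up constants (the exponential falloff, the 100 mm decay length, and the `dot_radius` helper are all illustrative assumptions, not real panel data):

```python
import math

def dot_radius(distance_from_lamp_mm, base_radius_mm=0.05):
    """Toy model of an edge-lit diffuser's etched dot pattern.

    Assumes the guided light decays exponentially with distance from
    the lamp; extracted light ~ local intensity x dot area, so dot
    area grows like 1/intensity to keep the panel evenly lit.
    """
    intensity = math.exp(-distance_from_lamp_mm / 100.0)  # assumed falloff
    return base_radius_mm / math.sqrt(intensity)          # area ~ 1/intensity

# Dots get larger toward the far edge of the panel:
print([round(dot_radius(d), 3) for d in (0, 50, 100)])  # [0.05, 0.064, 0.082]
```

The asymmetry question above answers itself in this model: with light entering from one side, the dot pattern is necessarily asymmetric, densest/largest at the far edge.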


Isn't a 0.6 micron pixel size at the limit of diffraction? How on earth do you get accurate XY attitude control with a piezoelectric element at that scale? What about vibrations? Timing/sync issues, etc.? Anyone care to speculate?


Patents don't need a working prototype. Maybe they calculated a theoretical limit of the smallest pixel size possible and used that number.


I can't speak to most of those issues, but piezoelectric actuators can move precisely on nanometer scales. They control the positioning in many modern atomic force microscopes. So it seems plausible to me that they could achieve very fine control. Dealing with noise seems like the harder problem. Perhaps embedding sensors in the glass could give you real-time adjustment feedback.


It's not like it's the only thing that sounds a bit too hopeful. Driving content to a light-field display at a decent frame rate and a resolution "much higher than [4375 x 2300]" is orders of magnitude beyond any mobile GPU now or within anyone's manufacturing plans.
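For a sense of scale, here's the back-of-the-envelope arithmetic. The frame rate and depth-plane count are my assumptions (the article only gives the resolution floor):

```python
# Pixel throughput implied by the claimed resolution floor.
width, height = 4375, 2300   # article: resolution "much higher than" this
fps = 90                     # assumed: common comfort target for AR/VR
depth_planes = 6             # assumed: number of light-field focal planes

pixels_per_second = width * height * fps * depth_planes
print(f"{pixels_per_second / 1e9:.1f} Gpix/s per eye")  # 5.4 Gpix/s per eye
```

Even before any shading, roughly 5 Gpix/s per eye is on the order of the total fill rate of an entire mobile GPU of that era, which is why the claim reads as orders of magnitude out of reach.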

Speculation is all that the article is, and it sounds rather like it's intended to feed the hype.


Unless you only draw to part of the display. Given that it is AR, maybe they rely on most of the display being blank.

Still, I agree that sounds infeasible.


Driving the display is just a part of it: they need to do super-accurate SLAM, object recognition/machine-learning inference, and, if the visuals are interactive and not pre-rendered, all the lighting, texturing, rigging, physics, etc. of the rendered models, all on the same CPU/GPU.

It's just about doable on desktop VR setups with the latest desktop GPUs.


With their budget they can afford to make a custom SLAM ASIC, push more cores into their SoC, and create a custom GPU. Such a capital-heavy ASIC approach can give a 10-100x performance boost.


I also have doubts about this being adequately powered by a mobile GPU, but I guess we'll see pretty soon. I've read they are starting manufacturing, with tech demos lined up for end of 2016/early 2017.


I'd had the idea of moving processing off the eyegear and into your pocket, connected by cable, but using a fibre optic cable is even better.

The scanning display also enables different resolutions for different parts of the field of view (the retina center "fovea" is high resolution; peripheral vision much lower). Eye-tracking allows them to tell where your fovea is looking.
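A minimal sketch of that foveated-rendering idea. The 5-degree fovea and the hyperbolic falloff are illustrative stand-ins, not a measured human-acuity model:

```python
def sample_density(eccentricity_deg, fovea_deg=5.0):
    """Fraction of full resolution to render at a given angular
    distance from the tracked gaze point (toy falloff)."""
    if eccentricity_deg <= fovea_deg:
        return 1.0                         # full resolution at the fovea
    return fovea_deg / eccentricity_deg    # acuity drops with eccentricity

# Full res at the gaze center, 1/8 res at 40 degrees out:
print(sample_density(0.0), sample_density(40.0))  # 1.0 0.125
```

Integrated over a super-wide FOV, a falloff like this cuts the pixel count by an order of magnitude or more versus uniform sampling, which is the whole appeal of the eye-tracking combo.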

But I fear we're still not ready for always-on camera surveillance. And that seems necessary for the light-cancellation of solid blacks, and of course for object recognition. But these features are severable, and the others will still work.

Pokemon Go helps prove the ground for all this.


VR backpacks are already here (and are probably the future): http://www.theverge.com/2016/6/7/11874762/vr-backpack-pc-han...

You can pack a lot of computing power into a backpack these days.


Good points on the scanning display + eye tracking combo. Having a non-linear spatial resolution that can be moved would definitely be one approach to providing excellent perceived visual acuity over a super wide FOV.


I'm unfamiliar with a few of the acronyms used in the article:

    SLAM

    DOE
Definitions would be appreciated :)


SLAM: simultaneous localization and mapping. Generally implemented as a stereo video camera that builds a map of its environment and determines its position while it moves.

DOE: diffractive optical element. Defined in the article with a photo, which is better than I could hope to do here ;)
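To make the SLAM definition concrete, here's a toy 1-D predict/correct cycle. This is my own simplification: `slam_step` and the 0.5 blend gain are invented for illustration, and real systems use Kalman/particle filters or graph optimization over thousands of landmarks.

```python
# Toy 1-D SLAM flavor: predict the pose from odometry, then use one
# landmark observation to correct both the pose and the map estimate.

def slam_step(pose, landmark, control, measured_dist, gain=0.5):
    """One predict/update cycle on a 1-D track."""
    pose += control                                # predict: dead reckoning
    expected_dist = landmark - pose                # what the map predicts
    innovation = measured_dist - expected_dist     # observation error
    pose -= gain * innovation                      # split the correction
    landmark += (1 - gain) * innovation            # between pose and map
    return pose, landmark

# After the update, map and measurement agree:
pose, landmark = slam_step(pose=0.0, landmark=10.0,
                           control=1.0, measured_dist=8.5)
print(pose, landmark)  # 1.25 9.75
```

The "simultaneous" part is exactly this coupling: each observation refines the pose and the map at the same time, which is also why errors can drift until a loop closure pins things back down.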



Alignment errors build up over time -- there is a good 1.5 ft misalignment visible here: https://youtu.be/C6-xwSOOdqQ?t=216


Alignment and scale drift is usually solved via loop closure. Clearer example in the LSD-SLAM: https://www.youtube.com/watch?v=GnuQzP3gty4&feature=youtu.be....

By the way, it seems like the researcher behind DSO and LSD-SLAM is now at Oculus Research! http://vision.in.tum.de/members/engelj


Impressive. I am guessing this is what the next Google Street View will look like.


SLAM provides a continuously updated map of the environment, which is necessary if you are going to position virtual objects in real visual space. https://en.wikipedia.org/wiki/Simultaneous_localization_and_...


SLAM = Simultaneous localization and mapping

DOE = Diffractive optical element


Looks like the DOE is a form of Bragg grating, similar to an FBG (https://en.wikipedia.org/wiki/Fiber_Bragg_grating) but in a planar form. Still trying to get my head around how it works...


Nice summary of what's probably going on over there.

The tech demos are quite impressive and if we assume ML isn't misrepresenting the results it'll be really interesting to see what kind of hardware materializes.


For anyone interested in diving a little deeper into the workings of their imaging engine, here's a really well-put-together paper by Brian Schowengerdt et al.:

http://wenku.baidu.com/view/c40a8242e45c3b3567ec8b68.html


Sounds cool and at least 10 years out. So why are they hyping it now? What with hiring Neal Stephenson and so on.


In the last interview they are talking about releasing in about a year. Their hope is to have 80% of the country wearing it by 2020. They want to be the next Apple. I think they mentioned that they are currently debugging the production line.


It's worth looking at the first video on the Magic Leap website: https://www.magicleap.com/#/home

It's a whale jumping out of the floor. Nothing too hard. What is hard is the motion tracking and the water sim.

The water sim is just about possible on the GPU, with some cheating. However, I suspect that's a Houdini [1] special.

So you're looking at carting around a dual top-end Nvidia GPU rig in your pocket just for the particle sim, let alone the motion tracking/SLAM/dynamic lighting.

Sure, people will say you can use the cloud! But try adding 25-70ms of latency to your vision and see how immersive that feels.

[1]https://www.sidefx.com/


Alternatively, all the physics work is pre-generated and it's just displaying stuff. The floor is flat and there are no interactions with other objects. That black sure is impressive though.


Can't you use the photonic lightfield chip for computation too? ;)


IMO, instead of a GPU, much of the rendering could be done on a "Vision Processing Unit" like the Movidius device: cheap and power-efficient. http://www.movidius.com/


Ha! Somewhere in my comment history is a question about fiber optic projection.

I remembered seeing a YouTube video showcasing it but was unable to find it again after the initial viewing.

I was told it was impossible and assumed my memory was flawed.

But the picture in the Fiber Scanning Display section is the same tech.

I remember that butterfly image.


The inspiration for their display was pulled from someone's ass.

Seriously. There was an article discussing it a while back. There is a new kind of colonoscopy camera that uses a fiber optic cable with a rapidly-spinning business end to build up images rather than a camera with a conventional lens. You can imagine the impetus for shrinking the probe. They "just" reversed the light flow and now they have a fiber optic projector!


It's funny how Pokemon Go took away a lot of AR marketing buzz from Magic Leap.


That's quite the stretch. Magic Leap only got buzz in the first place because a lot of famous people said they were the best thing since sliced bread, and Google Ventures gave them tons of money.

The main reason you don't hear much about them anymore is because they don't show anything.


Magic Leap has also gotten a lot of buzz because tech publications are willing to write about them despite almost total lack of details. Look at the article Kevin Kelly wrote for Wired about Magic Leap: http://www.wired.com/2016/04/magic-leap-vr/ It's like 10k words, yet after all the embargoed/NDAed info has been omitted, what do you really learn from all that besides that Magic Leap is some sort of AR glasses and Kelly thinks it's really really really impressive?


It's starting to sound like the new IT/Segway.


I find it disappointing that there is this aversive sentiment in the tech scene towards groundbreaking physical technologies like Segways, self-driving cars, and now Magic Leap. I like Peter Thiel's point of view on this issue: the world needs more zero-to-one and less of the same. Note that I'm saying this as an average software developer (I don't work on cool technology at work). I'd like to live in the future, and I acknowledge that without huge, risky hardware bets the future will be as bleak as the present.


Much like this writeup, which is mostly a "what if/could be" based on the patent filing.

Perhaps it should be retitled "How Magic Leap Might Work".


Google Ventures gave them no money; Google the company invested in them, with Sundar Pichai (Google CEO) taking a board seat.


Pokemon Go's AR is a gimmick, though, and doesn't work anywhere close to where you could claim it's meant to be serious. It's just so you can tell your friends "it shows you Pokemon in the room!".


Yeah, it's not really AR; it's just turning the camera on and pasting an image over top of it. I'm not sure the app does any analysis of where the Pokemon is standing; I see them floating in midair all the time.


Technically I'd say it's not just the Pokemon camera (which everyone has turned off except when they want to take a picture of a Pokemon in a cool location). The game augments reality by providing a virtual layer that you can only see on the phone (PokeStops, arenas, spawning of Pokemon). Yeah, it's not the shiny AR of projecting objects into the room, but I'd argue it's augmentation regardless.


Is mapping a game over real life not a form of AR? The normal game screen in Pokemon Go takes "the real world" (streets, locations) and augments it to make it part of the game.


I suppose it could be considered AR. But where do you draw the line? Is Google Maps with certain businesses outlined AR because it's an augmented version of the real world?

Personally I limit what I consider AR to be graphics superimposed on a real world view. With the maps in Pokemon GO, the real world and game are distinct, almost like VR.


My guess is it further legitimated their end goals and will increase investment (assuming Pokemon Go doesn't fizzle out before making significant money).


Pokemon Go was the first non-creepy AR use case. Everything else they came up with with Google Glass and such involved disturbing mass surveillance type stuff. Even so, Pokemon Go was treated as a national security threat by some governments.

With AR you could do all kinds of socially awkward crap. Can you imagine how cringe-inducing AR Tinder would be?


I dunno... I don't find either to be necessarily creepy (although you can find creepy uses for just about anything, I guess). Still, using a game to track your travels and location, combined with the whole business aspect of an uptick in customers when you're a popular poke-location, seems just as potentially creepy as the stuff that got people wailing and gnashing teeth over Glass, namely a camera that you're always carrying around on your face.

I mean...miniature cameras are old news and they're a lot easier to hide in ways where people don't realize you have a camera. Just as there have always been ways to track stuff like location and purchases via cell phone location and credit card use.

The stuff in Pokemon isn't much "creepier" than what your average map app or social "gamifying" app like Foursquare does.


Yelp's Monocle has been doing non-creepy AR since 2009.


Well Ingress also exists. But I'd agree if you mean mass/mainstream use case.


All those dedicated agents, forgotten as soon as someone decided to slap cute cartoon characters and "training gyms" on our interdimensional portals...


How Magic Leap works: from vapor to snake oil.


If that does turn out to be true, it will be very impressive. Has there ever been a vaporware project that raised this much money?


> It sounds like the exact technology that Apple wishes it had.

Except... people just don't want to wear glasses. Especially not clunky ones.


People don't want to wear glasses all the time, but I don't think this is meant to be worn 24/7 like Google Glass. To me, your comment is akin to: sure, people want to play games, but they don't want to hold a controller. And look at VR: people are definitely willing to wear goggles/visors/whatever for limited periods of time.


I honestly believe people don't like glasses (in general, I'm not speaking of the geek crowd). Also, VR isn't a big thing yet.

Please be cautious with analogies, because often there's no logic in them. For example, in this case, we can further dumb it down to: people like to eat but they don't want to hold a spoon. With all respect, the analogy just makes no sense. I was talking about glasses specifically.


What about sunglasses? They've always been popular/fashionable and they're also glasses that you wear in specific situations. They're also glasses that, underneath any styling or branding, still exist for functional reasons rather than solely for looks.

I think of headsets more like headphones. No, you don't want to have to wear headphones all day long but when you're getting something out of them, people seem willing to wear them for extended periods of time.

Even if the glasses thing is a hurdle to this becoming ubiquitous consumer electronics, I think the engineering and tech being developed still has quite a bit of promise and I'll be glad to see if they can make some real progress and show something off beyond teasers.


> I honestly believe people don't like glasses (in general, I'm not speaking of the geek crowd).

Reminds me of that article from yesterday about umbrellas in England.

I think aiming that kind of stuff at the general population is a mistake dictated by greed. Some products need to get gradually accepted before they become popular with the mainstream. The general population doesn't like things that look weird and/or different. E.g., it took many years for mobile phones to transition from a symbol of weirdness to a symbol of status to something everyone is expected to have.


Sorry, I don't agree and I think my analogy stands. Others can be the judge, but I guess we can agree to disagree and move on.


Until "AI" is good enough to create exciting environments, nothing spectacular will be coming from Magic Leap (as per their demo videos) any time soon.

The hardware is not the hard part; the artwork/design/integration with society is.


Clearly it's entirely different from Vive and Rift, but I think they have been working just as hard on the content side of things as the hardware itself. Most of what I know about MagicLeap comes from content people joining the team, the other comes from patent filings.


A team of hundreds can create a nice movie.

Then they'd have to make it 3D, and then integrate it with the world around you.

It's easy to hire people and make a 2D video on YouTube showing what it might do.


Movies are one of the last things we'll see adapted to AR. Even adapting the medium to VR is a difficult challenge. Disney has a lab that's taking a crack at it though.



