This is huge news. Nobody comes close to having the 3D coverage that Google has, and there have been no viable competitors on the horizon. Until now there was just no way to use that gorgeous data.
From a more personal point of view - Google Earth in VR is one of the most stunning experiences I'm aware of and it's been on life support since Google lost interest in VR. Now it looks fairly simple to build something similar on top of the same data source.
I accidentally discovered that not zooming in quite as much, but getting down on your hands and knees is a really fun and unique experience in Earth VR. You get a closer view of the dollhouse effect that goes away when you zoom in to ground level. Probably looks particularly strange to the outside observer though.
Still on my trusty Vive due to third-world-itis, but I would upgrade to another PC-tethered device like the Index or some other SteamVR-compatible headset (since I'm on Linux), so as to be less at the mercy of the manufacturer.
Stern look at requiring an FB account to enable developer mode on the Quest
But the principle alone would be enough for me. I don't like that the Quest is a brick without phoning home (you have to install and log into the Oculus software at least once to be able to use the desktop computer link), plus the whole UX nightmare of forced software updates when you just want to use the darned thing.
My friends with a Quest are often delayed from joining VR play sessions due to some silly business requirement like an expired login or system update. Also, the Oculus desktop software is some 10 gigabytes due to a non-optional, very detailed home menu environment, even if you use a third-party launcher like SteamVR Home. And it must be up to date before running any actual applications.
> My friends with a Quest are often delayed from joining VR play sessions due to some silly business requirement like an expired login or system update.
BTW, finally in the v53 Quest firmware release notes: "When you shutdown your headset, you will now have the option to update apps before the headset powers off. This will help minimize the number of app updates that you have to complete the next time you want to jump into a VR app or game."
(I think the Quest has some big firmware issues - I'm personally most annoyed about my Quest rebooting instead of shutting down when connected to my desktop, even though I've explicitly selected the option in the Quest menu to power down. Sounds like one of the ones annoying you may be getting partially resolved, though.)
Semi related, but you also need to agree to Facebook using your camera feed if you enable the hand tracking feature. They probably use it to train the tracking models further, and to be fair it is a cool feature, but it costs a video feed of your home to use, which to me is not worth it.
One thing I find really depressing about Google here is that there are some really simple things it could do to improve its other VR offerings with an incredibly tiny amount of effort.
e.g. in YouTube VR, just give me a tab to find VR180 videos (which I find can be nicer than a flat screen as long as they're high res) as opposed to only showcasing VR360 videos there (which I usually find annoying, as I'd rather watch sitting down and would then need a swivel chair).
Showcasing VR360 in YouTube VR but making it fairly difficult to find VR180 films (though filtering search results is at least an option) sometimes makes me wonder whether any of the YouTube VR dev team were/are actually real-world VR enthusiasts.
Assuming you mean 2020: it's impressive for a game, but it's no Google Earth. Fly through a desert (even a desert city with photogrammetry!) and the interpolated trees that get added in would make you think you're flying over a lush forest in the Pacific Northwest. It also struggles a lot with non-solid 3D objects like bridges, freeway overpasses, etc.
Google Earth VR fidelity is terrible anywhere that isn't a big, hand-mapped 3D city, just like Flight Sim VR. In most places you are just looking at Street View images, without any depth.
They are continually improving coverage; a 3D artist friend of mine worked with an agency that was providing 3D models for them (it wasn't the best workplace though, very factory-floor-panopticon-y).
Though the last time I tried it, about a year ago, my hometown (the capital of one of Brazil's states) and all the surrounding areas were still very much procedurally generated. (The FLN airport district is next to a mangrove swamp but was represented in-game as grasslands, and there were some very out-of-place building styles.)
Maximum 6,000 queries per day, calculated as the sum of all requests for all applications using the credentials of the same project.
---
Photorealistic 3D Tiles
Maximum 300 root tileset queries ["map-loads"] per day. This is calculated as the sum of all requests for all applications using the credentials of the same project.
Maximum 250,000 renderer’s tile requests per day. This is calculated as the sum of all requests for all applications using the credentials of the same project.
Rate limit is 12,000 queries per minute for the tile renderer.
---
The Map Tiles API documentation does not contain a "Usage and Billing" section, which seems to imply that it is free to use, bound by the above-mentioned limits.
So the only thing to worry about is how Google showed us a couple of years ago how relentlessly it starts charging for a previously free service -- the Google Maps JavaScript API -- once it sees that enough developers have become technically invested in Google Maps.
If you open the inspector on any of the examples [1] you'll see how many tile fetches it triggers per second.
While my comment contains some criticism, I'm really excited for this new offering.
In some cases, making users supply their own API key is the best solution - although it does put a fairly high bar on both commitment and technical understanding (the latter mainly because the Google API UI is so damn complex!).
I kinda wish Google supported this better - they should make a new flow for open-source/free software where an OAuth dialog asks the user whether the application may do billable things - and obviously the consent dialog would let you choose how much, if anything, you want to spend on API requests.
This would let open-source and free/hobby projects use all Google APIs with the actual users footing the bill directly. And since most uses only cost a cent or two, they will normally fall within the free allowances.
That would be awesome, but it would also make it way harder for Google to offer a free tier in the first place. 300 free queries per user per day is a lot more than 300 free queries per developer per day, so it might be a mistake to make that usage pattern even easier.
Though maybe they could just place way stricter limits on requests placed via API tokens obtained this way? That would probably work. But it would also sort of defeat the purpose of having users use their own API tokens. If everyone's going to need to pay to use your app anyway, you can just have them pay you to use your API token and gain an easy source of monetization in the process.
For FLOSS use cases, the author may not want to deal with everything that comes from collecting money from people. So it's still useful to let users pay Google directly for their usage.
As a contributor to deck.gl, I’m super excited to use this! Contributors added a TerrainExtension [0] to drape or offset 2D data onto 3D surfaces in 8.9, which is huge for 3D visualization in general since common 2D layers would often be occluded by anything 3D. It’s great to see Google using open standards and openly governed projects for rendering too. Carto developed a story map [1] to showcase the new features, and played a big role in the integrations and extensions. Documentation has been updated, so it’s all ready to use. Happy mapping!
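If it helps to make that concrete, here's a minimal sketch of draping a 2D layer over the new Google tiles, assuming deck.gl 8.9+. The extension currently ships as the experimental `_TerrainExtension` export, the API key and `ROUTES_GEOJSON` URL are placeholders, and depending on your loaders.gl version you may need to forward the key to child tile requests via `loadOptions` - so treat this as a starting point, not gospel:

```ts
import {Deck} from '@deck.gl/core';
import {Tile3DLayer} from '@deck.gl/geo-layers';
import {GeoJsonLayer} from '@deck.gl/layers';
import {_TerrainExtension as TerrainExtension} from '@deck.gl/extensions';

const GOOGLE_MAPS_API_KEY = 'YOUR_KEY';   // placeholder
const ROUTES_GEOJSON = 'routes.geojson';  // placeholder 2D data to drape

new Deck({
  initialViewState: {longitude: -73.99, latitude: 40.72, zoom: 15, pitch: 60},
  controller: true,
  layers: [
    // Photorealistic 3D Tiles, marked as the terrain surface other layers drape onto
    new Tile3DLayer({
      id: 'google-3d-tiles',
      data: `https://tile.googleapis.com/v1/3dtiles/root.json?key=${GOOGLE_MAPS_API_KEY}`,
      operation: 'terrain+draw'
    }),
    // An ordinary 2D layer, draped onto the meshes instead of being occluded by them
    new GeoJsonLayer({
      id: 'routes',
      data: ROUTES_GEOJSON,
      getLineColor: [255, 80, 0],
      getLineWidth: 4,
      lineWidthUnits: 'meters',
      extensions: [new TerrainExtension()]
    })
  ]
});
```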
I haven't looked at the specs yet, so maybe you can help me understand: can the TerrainExtension, which uses raster DEM data, work with point cloud data like Google's tiles?
My understanding is that with raster data you cannot have arches, trees, or any object that has a thin component (like a tree trunk) under a thick component (like the tree canopy).
It'll work in the situations you've described. The TerrainExtension can be applied to the TerrainLayer or the 3DTilesLayer. The TerrainLayer uses 2D raster data to dynamically generate 3D meshes, and your understanding is correct that this won't have any overhangs. As far as I know, Google doesn't offer an API for the terrain layer, but AWS hosts a free open dataset for that. The 3DTilesLayer renders glTF meshes, which can contain overhangs. I'm not sure what the overhang behavior is, but it'd probably be something like "use the highest value". In this case, Google's API serves 3D textured meshes rather than point clouds.
I’ll caveat that this is all pretty new to me as well, and I might be missing something.
Have to say, for anyone looking into this space: though Cesium was there first, deck.gl is far and away the best choice for almost any 3D mapping library now. Much, much faster than Cesium. And it's great with 2D too, even though it hardly gets a mention amongst Leaflet, MapLibre, OpenLayers, etc.
Thanks! It’s been a great community to collaborate with. The deck.gl group works pretty closely with MapLibre these days in react-map-gl, and Cesium as well on 3D tile loaders in loaders.gl
It's slightly more complex than that. This is in the 3D Tiles format, which uses glTF, but I don't think you can simply grab a glTF from an API endpoint. It's been sliced into cubes with hierarchical levels of detail and other things I barely understand.
Oh, that's interesting. You probably still need to jump through hoops to figure out the right URL for the grid square and level of detail you need, plus the session parameter, so you still need to make the initial tile request to get the JSON.
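For what it's worth, here's a rough sketch of what that initial request and the JSON traversal look like. The root.json endpoint and the `key` query parameter are documented; the session handling and the exact shape of the child URIs are things I'd verify against the docs rather than trust this sketch on:

```ts
// Fetch the root tileset and list the content URIs it points at
// (child tileset JSONs and glTF/.glb meshes in a hierarchical-LOD tree).
const API_KEY = 'YOUR_KEY'; // placeholder

interface Tile {
  content?: {uri: string};
  children?: Tile[];
}

async function fetchJson(uri: string): Promise<{root: Tile}> {
  const url = new URL(uri, 'https://tile.googleapis.com');
  url.searchParams.set('key', API_KEY);
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Tile request failed: ${res.status}`);
  return res.json();
}

// Walk a couple of levels of the tree and collect content URIs; going deeper
// means fetching child tileset JSONs the same way.
function collectContentUris(tile: Tile, depth = 0, out: string[] = []): string[] {
  if (depth > 2) return out;
  if (tile.content?.uri) out.push(tile.content.uri);
  for (const child of tile.children ?? []) collectContentUris(child, depth + 1, out);
  return out;
}

fetchJson('/v1/3dtiles/root.json').then(tileset =>
  console.log(collectContentUris(tileset.root))
);
```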
Do a little googling and you can probably find code on GitHub to take care of the "get me terrain for these coordinates" part for you. That's how I managed it a few years ago when I was programmatically downloading tiles from Google Earth without understanding how to convert from coordinates to their URL scheme.
I know nothing about glTF, but I wonder if Microsoft Flight Sim could automatically read these files in to overwrite the default Microsoft scenery.
I have some scenery add-ons that are just a manual conversion of Google Earth's 3D models for some cities that were just generic buildings in Microsoft's version (e.g. Dallas).
I was involved in building the interactive story map using deck.gl that you can view at https://3dtiles.carto.com
It’s really cool to have access to all this amazing data! Shout out to Google for working with us to bring support to multiple open source rendering engines.
Oh, wow. We have 3D Tiles support in https://thirdroom.io but had only ever found NASA's Mars dataset as a good set of tiles to point it at. This could effectively turn Third Room into a FOSS, decentralised, E2EE multiplayer Google Earth running over Matrix!
As much as I like the Third Room concept (I like it, but haven't actually tried it), I am wondering how much of a distraction it is for the Matrix developers.
As a data point, a good chunk of my social circles use Matrix, but they seem to incessantly complain about Element. Though very nice improvements have been made lately, from what I've seen and experienced there is still a lot of room for improvement:
- The experience was night and day when I switched from my resource-constrained Galaxy S4 with LineageOS 18.1 to an FP4. The phone and most apps were fine, but Element was oh so sluggish compared to the others. I thought it was just the app, but it works much better on the new phone. It now takes seconds to sync after opening instead of multiple minutes.
- My SO experiences constant glitching with Element on Chrome on macOS. The web app fights with the browser to draw over the URL and tab bar; it might be a Chrome bug, I'll report it. Additionally, the UI appears to be rendered at single-digit framerates.
- I've seen multiple serious bugs that end up clearing the local DB, triggering an initial sync that lasts for more than 10 minutes.
- I only recently discovered that Android could handle conversation-level notification granularity, and don't have anything similar on my other devices. I don't wish to have the same level of intrusiveness from all chats: for some, I want to see the notifications, for others I want to hear them, etc. Some space-based device-specific controls would be nice to have.
A lot of these issues will be improved by the upcoming sliding sync, and I know vector.im isn't in a brilliant financial situation. I can't really direct you to spend energy on some topics over others, but it seems to me that there are more (potential or current) Matrix users with old hardware than with VR gear. And since messaging is so reliant on network effects, I would concentrate on being able to reach the largest possible audience.
Now, I really don't want you to get the wrong idea: I love most of the projects at Matrix/Vector. But I can't help feeling uncomfortable when Third Room is talked up while basic chat features leave a lot to be desired.
On this specific point, you are right, and seeing such integration would be awesome. Just don't get addicted to Google-provided data :)
Funnily enough, they address this question on the linked website:
> Whenever we work on metaverse or VR for Matrix (e.g. 3D video calling, or our original Matrix + WebVR demo) we always get some grumpy feedback along the lines of “why are you wasting time doing VR when Element still doesn’t have multi-account?!” or whatever your favourite pet Matrix or Element deficiency is.
> The fact is that Third Room has been put together by a tiny team of just Robert (project lead, formerly of Mozilla Hubs & AltspaceVR), Nate (of bitECS fame) and Ajay (of Cinny fame) - with a bit of input from Rian and Jordan (Design), Bruno (Hydrogen) and Hugh (OIDC). On the Matrix side it’s been absolutely invaluable in driving Hydrogen SDK (which also powers things like Chatterbox and of course Hydrogen itself) - as well as helping drive native Matrix VoIP and MSC3401 implementation work, and critically being our poster-child guinea pig experiment for the first ever native OpenID Connect Matrix client! In terms of “why do this rather than improve Element” - the domain-specific expertise at play here simply isn’t that applicable to mainstream Element - instead there are tonnes of other people focused on improving Matrix (and Element). For instance we shipped a massive update to Element’s UI the other week.
Thanks for replying with this. I know it's not my place to comment on what people spend their time on, but somehow I couldn't resist this time. I thought sharing my instinctive reaction might be valuable, but it's true they probably get it a lot.
Oh well, I have a bit more time these days, I should finish up and submit the Matrix Spec Change I started drafting almost a year ago... Everything takes time.
Would it be worth having a conversation about Third Room and Open Brush integration? Our community is always looking for ways to get their creations onto a social platform, and we have a three.js add-on that should make it pretty easy to add our custom shaders to Third Room.
I live in Venice. The picture on the main page is St. Mark's Square, and "Focaccia e Figioli" does not exist there. There's Cafe Florian [0], the oldest cafe in the world - oops, I stand corrected, the oldest in Italy and one of the oldest in the world.
I work at a startup that uses similar technology to make large-scale 3D models, but from your own dataset: https://www.one3d.ai. You can check out a demo here: https://app.one3d.ai/twins/9763/?invcode=7asheX17. The surprising thing is that they achieve pretty good quality without close-up aerial photography. We need drones to take photos, but I doubt they used anything closer than a plane.
If they are genuinely aiming for mass-market applications like gaming and AR, then it can't be too expensive. Their last attempt at this was a flop because there was no open sign-up, just "call us for pricing" bullshit.
This looks like they might have come to their senses.
I'm really happy with this; trying to implement it now for the startup I'm working at.
Unfortunately, the URIs don't seem to follow the OGC spec (https://portal.ogc.org/files/102132: "When the URI is relative, its base is always relative to the referring tileset JSON file.")
I'm using a custom viewer that does not seem to handle these URIs well...
Am I the only one unimpressed with the quality? The render quality and the buffering are poor, and it's not quite as photorealistic as they claim. It might be possible to render it nicely enough to look photoreal, but that's not what they are showing in their own videos. Stable Diffusion and ControlNet exist, but what this needs is probably just better sampling and attention to preloading.
I refer the honorable gentleperson to my answer elsewhere on this page:
> Not to pull the whole "Everything Is Amazing And Nobody Is Happy" on you but pause to consider the sheer scale of the dataset. 2500 cities have geometry data to the level where you can see details on individual buildings for areas extending well out into the suburbs.
> Yes - it's blurred up close but my god, isn't it still amazing?
Having Google follow an OGC standard is an incredibly good step forward for the Google Maps ecosystem. This can really speed up development on the client side. It was not that useful to have the best client if there was always a place where the data looked better. Kudos to Google.
Where’s Patrick Cozzi? This is huge! Cozzi and Cesium helped define the whole glTF and GLB formats (which are good!). Kudos to the Google Earth team for doing this. This is going to help the simulation space so much!!!
For anyone else who still feels like this is "too good to be true", here's a quick video clip I made of extracting a piece of Manhattan as glTF files and rendering them in the three.js editor (https://threejs.org/editor/)
It is "too good to be true", they don't allow for it:
>Applications using the Map Tiles API are bound by the terms of your Agreement with Google. Subject to the terms of your Agreement, you must not pre-fetch, index, store, or cache any Content except under the limited conditions stated in the terms.
This is very cool. I remember during my PhD I briefly investigated using these 3D models for synthetic data generation, but there was no straightforward, non-shady way of going about it.
In the Google I/O talk they mention it's compatible with deck.gl and three.js. Does anyone know of any projects or resources for using either deck.gl or three.js as a custom renderer [1]?
Does this tiling data support overhangs and holes? For example, could it accurately model the Arc de Triomphe and let me fly 'under' the arch?
Other approaches I've seen are effectively just a 2D map with a height per pixel, which is okayish for mountains, but quickly looks bad when you model human structures like bridges.
Does anyone know how the Aerial View API would work with billing?
That is, can you cache the URL of the generated video and embed that for your users? Or, would you need to make a new request for the same video on every page-view?
This is neat. I just so happen to have been playing around with GIS and Blender recently, and you have to jump through quite a few hoops to get data; hopefully this will simplify things.
I want to point everyone to the "No creating content" clause of the terms. I don't fully understand it but I think it'll substantially limit what can be done.
This is super interesting. Does anyone have any clues as to what kinds of integrations we'll begin to see first? I'm envisioning tons in the long term.
Not to pull the whole "Everything Is Amazing And Nobody Is Happy" on you but pause to consider the sheer scale of the dataset. 2500 cities have geometry data to the level where you can see details on individual buildings for areas extending well out into the suburbs.
Yes - it's blurred up close but my god, isn't it still amazing?
If I were developing a competitor to Flight Simulator, it's a fantastic asset.
If I'm developing a first person experience that isn't from an airplane, not so much
I appreciate that this exists, but it's far from the best, nor the only, large-scale geometry asset for a cityscape, though by volume it has more cities, even if the geometry is more rudimentary than other solutions.
Bing has been stuck with the same handful of 3d cities for about 4 years. Seemingly no effort to expand coverage. I think Google's lead was maybe too intimidating.
It's a bit of a strange thing to say. At what level? Apple doesn't support lots of formats, and neither does Microsoft. People can build tools in their apps to support them and publish them, and hence new formats are born that, although they may have been created on a platform, the owner of that platform does not 'support'.
As this is an open format, you (or, very likely, someone else) can write something to parse it on an Apple platform. Will Apple support it then?
Yes, Apple does support glTF; I have a site that renders 3D models in glTF format with no issues on an iPad with iOS 13.7. iOS does not support glTF with Draco compression though, which is unfortunate but easy to work around by having a separate uncompressed model to serve to those devices.
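If anyone wants the gist of that workaround, this is roughly what it looks like in a three.js-based viewer (an assumption on my part about the stack; the UA sniff and the two model file names are placeholders):

```ts
import * as THREE from 'three';
import {GLTFLoader} from 'three/examples/jsm/loaders/GLTFLoader.js';
import {DRACOLoader} from 'three/examples/jsm/loaders/DRACOLoader.js';

// Naive platform check; feature detection would be more robust.
const isIOS = /iPad|iPhone|iPod/.test(navigator.userAgent);

const scene = new THREE.Scene();
const loader = new GLTFLoader();

if (!isIOS) {
  // Only wire up Draco decoding where it's known to work.
  const draco = new DRACOLoader();
  draco.setDecoderPath('/draco/'); // path to the Draco decoder files
  loader.setDRACOLoader(draco);
}

// Serve the uncompressed variant to devices that can't decode Draco.
loader.load(isIOS ? 'model.glb' : 'model.draco.glb', gltf => {
  scene.add(gltf.scene);
});
```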
On iOS I would much prefer a second, non-Safari browser to a second app store.
Apple does not support Draco compression on USD [1], which still sucks, and any glTF you are seeing was converted using one method or another. An interesting way to backdoor open 3D immersive standards is this project using AppKit [2].
And anyway, this all might be moot if Apple announces support at WWDC on June 4th with xrOS.
Photorealistic 3D Tiles are delivered in the OGC-standard 3D Tiles format (with glTF content), meaning you can use any renderer that supports the OGC 3D Tiles spec to build your 3D visualizations.
I understand many people are familiar with what "open standard" means. There are lots of other people that are somewhat new to the phrase.
Consider these related terms:
- Open Source
- Open Data
- Open Standard
For people who encounter these terms in the listed order, "Open Standard" may or may not be understood by analogy. Some people will assume that it also includes the concepts of Open Source and Open Data. It does not.