If you turn on the Gold Rays one in the background and watch the Stopping by Woods on a Snowy Evening one, the music and voiceover go so well together that at first I didn't realize it was a different tab playing the music.
Stuff like this makes me think about how we'll live in the future. I read the book Ready Player One last year (a fun fiction read), and the story is set in a world where most people live inside a virtual world accessed via VR headsets/suits (e.g. Oculus Rift).
Seeing this makes me think of a future when we'll be doing the same and instead of looking at this in a browser, we'll actually be able to stand under a tree and watch the leaves fall.
And hearing what you say makes me think of a present where I can stand under a tree and watch the leaves fall. Because it's October, the air is cool and crisp, and there's a park nearby. The resolution is much better too.
As technologists, we have to make sure not to get too caught up in our own bubble. We have to remember what kind of a job our technology actually does for people, what kind of a role it actually plays in their lives. And I think the VR enthusiasts right now have a tendency to lose sight of that.
Ok, let's get things straight. Watching this is nothing like walking to the park and standing under a tree. It's cinema. It's a completely different experience. You wouldn't say "why would you watch a Tarkovsky film when you can just go to Russia?"
What I wish you had said, however, is that we have a responsibility to remember what we need to do for people. The argument for standing under a tree rather than watching this could be: you can have a totally enveloping, tranquil experience without burning coal, without fracking, without necessitating the employment of exploited factory workers, without supporting the mass extraction of resources (and the corresponding environmental destruction), and without adding another broken gizmo to the landfill in 5 years.
For those working on the technology, rather than building escapist fantasy worlds, you could work to make our actually existing, real, external world (note: fallacy), which is dysfunctional for billions of people, more functional. You could also refuse to build escapist fantasy worlds for people who themselves could be making a difference in the world rather than consuming resources and funneling money to the wealthy corporations in exchange for "virtual goods."
And you could also do this through VR. HMDs don't even necessarily need to display virtual worlds. Virtual worlds don't need to be purely hedonic.
All that said, maybe lying on a tempur-pedic mattress and staring up at an artificial autumn-scape is exactly what someone needs. Who can say.
>Ok, let's get things straight. Watching this is nothing like walking to the park and standing under a tree. It's cinema. It's a completely different experience. You wouldn't say "why would you watch a Tarkovsky film when you can just go to Russia?"
No, but a lot of VR stuff is BS recreation of reality -- not a Tarkovsky.
Fully agree! I prefer reality (though, this is one of my favorite philosophical arguments: https://www.youtube.com/watch?v=OA3WGf9pX0A). You're right, the bubble is something to avoid going too far with. Personally I don't "get it," but I can see why others might want it and don't discount it as a possible (likely) future.
A personal theory (though I usually keep it to myself because it sounds kind of nuts) is that the world we're in currently (i.e. our "real world") is merely an advanced simulation run by a past civilization. In essence, the Matrix is real and we're hooked into a machine somewhere. Though I can't say I fully agree with the dystopian humans-as-robot-fuel part.
But doing this in the Rift would take at most 30 seconds... doing it in real life is an actual chore that takes at least 30 minutes, if you also count the way back.
I would translate "actual chore" as "actual pleasure". A decent part of my vacation time tends to be just being outside. Not even having to do anything, just feeling the fresh air, listening, being. Sharing. Just might go outside right now for a few minutes. See ya!
Are you kidding? Who has time for that? 30 minutes to sit around staring at trees?
This rendered like crap in my browser, but I'd probably take a 2.5-5 minute break once a day to look at leaves. That'd probably be about all I could stand. What's the difference between 30 seconds and 30 minutes staring at leaves? They're leaves. They all look the same and they all act the same. They're mostly the same color. They don't do anything interesting.
There's a disturbing undercurrent in industrial societies of fetishizing the "natural", and I think that's the only reason anyone would enjoy sitting in the cold looking at a goddamn tree for a half hour -- to make a tribal statement about how ~enlightened~ they are because they can ~appreciate~ the ~beauty~ of ~nature~.
I mean, you're probably a developer, aren't you? Your time is valuable. Why waste it looking at a tree? Just keep this open in a small tile on your desktop and glance at it once every few hours if you really like leaf videos.
edit - just in case it isn't satire: I habitually watch trees quite a lot. I have been obsessing a bit recently over how light reflects off particularly waxy leaves, making the colours change with the sky. It doesn't particularly matter that they are trees, though; I can watch buildings in much the same way. It's more to do with an appreciation of light.
If this isn't satire then I'm amazed at how cluelessly condescending one could be.
>I mean, you're probably a developer, aren't you? Your time is valuable. Why waste it looking at a tree?
I have a hunch that not many people die peacefully thinking "wow, the most important way I spent my valuable time was coding" when reflecting on life and happiness.
Coming from someone who does enjoy sitting in nature for hours at a time observing life, it's mostly about forgetting all the "important" bullshit that will be waiting for me when I get back home. The experience also gets 10x better when you're out in nature with someone who you care about.
For me, it's not so much the leaves or the trees themselves but taking a moment out of my day just to sit there and do nothing. It's a form of meditation. The leaves, trees, waves, etc... are just there as a sort of visualization for your moment, and usually you'll notice things about them that you normally don't. Just because my time is valuable doesn't mean I'm going to make sure I spend every minute of my time as efficiently as possible. Sometimes I just need to sit on the beach and watch the waves crash. Or sit on a park bench and watch the people walking around.
It's not necessarily about nature, though for many of us that is where we find what we are looking for. It's more about taking a minute or thirty for yourself. In that time you aren't being productive, but you are giving your mind a much needed rest. It could be watching the lights from cars on a road late at night. Or staring at leaves. Or just getting a good view of your city from a high vantage point.
That's not to say that there aren't obnoxious people always talking about how "enlightened" they are. I'd say they are in the minority, however.
I stare at leaves because sometimes, they really do change after 30 seconds. If you have a wind blowing, it can be "snowing" leaves, which does bring me some level of enjoyment to watch. It's a good way to clear my mind and think about things without really trying to think about things.
Multiply that "goddamn tree" by, say, a forest, and then stare at it, looking at the different patterns created by the mixing of trees. Then come back the next day and see how things have changed. Or just open up your editor and stare at your code. Same diff.
Can anybody explain what's special about WebGL demos? I mean, don't we already know that WebGL exposes the possibilities of OpenGL to JavaScript through the browser? Am I missing something?
Being there in the browser with zero download has never been true of OpenGL (or 3D in general) before, so there's always been this divide between what people consider possible for a web app (i.e. convenient, easily shared) and the conventional software model where someone needs to download and install something first – a big step up in delay and complexity, and a reasonable hesitation for the security-conscious.
The excitement comes from the fact that now none of that is necessary for many uses – if you're working on a non-AAA game, 3D object viewer, interactive diagram, etc. etc. etc. you get support for 70% of web users globally without having to take on the expense of desktop development and support or supporting so many different toolchains. This isn't a big change for the major players who are still going to do that for other reasons but it simplifies a ton of casual usage.
Plug, but the ga.me people (https://ga.me/) work just down the road from me and are doing some pretty amazing stuff with WebGL games. OK, it's not AAA, but at worst it's at least as good as the majority of Kongregate stuff, and at best it's much better. And this is all just in the browser, no engine download or plugin necessary.
WebGL runs everywhere; that is objectively a Good Thing™. Your criticism is aimed at the optimisation rather than the technology.
The person who created this could, theoretically, add a little slider to let you adjust the visual quality to meet your PC's performance characteristics.
Additionally, WebGL can also be used for things which only get rendered once (e.g. single 3D frames) – generating 3D graphs, to pick one suitable example.
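To make that concrete, here's a minimal sketch of the render-once idea (the canvas id and drawGraph() are hypothetical placeholders, not from any particular demo): draw a single frame, never start an animation loop, and optionally grab the pixels as a plain image afterwards.

    // Minimal sketch of a "render once" WebGL use, assuming a <canvas id="graph">
    // element; drawGraph() is a hypothetical stand-in for the real draw calls.
    const canvas = document.getElementById('graph');
    const gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });

    function drawGraph(gl) {
      // ...set up buffers/shaders and issue the draw calls for the 3D graph...
      gl.clearColor(1, 1, 1, 1);
      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    }

    // Render exactly one frame -- no requestAnimationFrame loop, so the tab
    // costs essentially nothing once this has run.
    drawGraph(gl);

    // Optionally snapshot the result as a plain image; preserveDrawingBuffer
    // above is what keeps the pixels readable after the frame is presented.
    const png = canvas.toDataURL('image/png');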
> The person who created this could, theoretically, add a little slider to let you adjust the visual quality to meet your PC's performance characteristics.
If they wanted to, they could even auto-scale the quality based on frame-rate to ensure that everybody had a good experience.
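Roughly like this, as a sketch (render(scale) is a hypothetical function that draws the scene at a given resolution scale; the thresholds are arbitrary): measure each frame's duration and nudge the scale factor up or down until the frame rate settles.

    // Rough sketch of frame-rate-driven quality scaling. render(scale) is a
    // hypothetical function that draws the scene at the given resolution scale.
    let scale = 1.0;                  // 1.0 = native resolution, 0.5 = quarter the pixels
    let lastTime = performance.now();

    function frame(now) {
      const frameMs = now - lastTime;
      lastTime = now;

      // Well over the ~16.7 ms budget for 60 fps: drop quality.
      // Lots of headroom: slowly claw quality back.
      if (frameMs > 33 && scale > 0.5) scale -= 0.05;
      else if (frameMs < 14 && scale < 1.0) scale += 0.01;

      render(scale);
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);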
Check out chrome://gpu to see whether your specific driver is blacklisted - it sounds like you're hitting software rendering. GL becoming a mainstream feature has put pressure on vendors to ship fixes for known bugs – the Chrome team usually links to the actual bug reports so you can see why something is disabled.
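You can also check from script which renderer a page actually got; a quick sketch using the WEBGL_debug_renderer_info extension (not exposed by every browser, so it falls back to the generic RENDERER string):

    // Sketch: report which GL renderer the page actually got. Something like
    // "SwiftShader" or "Software Rasterizer" means you're on the software
    // fallback rather than the real GPU.
    const canvas = document.createElement('canvas');
    const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
    if (!gl) {
      console.log('WebGL not available at all');
    } else {
      const info = gl.getExtension('WEBGL_debug_renderer_info');
      const renderer = info
        ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL)
        : gl.getParameter(gl.RENDERER);   // often just a generic string
      console.log('Renderer:', renderer);
    }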
Right now, this is still maturing – e.g. for me that demo runs great even on an old iPhone 4S, 2010 MacBook Air, etc. but on a lowest-bidder Dell laptop with the cheapest possible Intel integrated graphics, it struggles. One hardware generation refresh, however, and even the cheap equipment is adequate for many uses - after all, that 2010 MBA was only $1k 4 years ago.
The upshot is that the excitement comes from this finally reaching the point where support will have matured by the time most people ship a serious app.
There are already some interesting WebGL-only applications like Polarr (https://www.polarr.co/editor) which suggest that the model is already plausible for many projects.
No not really. Just web developers being excited that the lords and masters who control their s̶h̶a̶c̶k̶l̶e̶s̶ sandbox have dropped them a few crumbs allowing them to compete with the native software of 1996.
Demos are not typical and in no way demonstrate the general usage of a GPU. They use very dirty tricks to render out a specific output, and are practically useless otherwise.
Here's a list of actual software that was released in 1996:
I think you're flipping the script here: the web isn't "shackled"; it's a virtual machine that almost everyone and their grandmother has chosen to install. Tell me how one could, in 1996, get any computer made in the last two or four years to run this software at this quality in under 3 seconds. Virtual machines are slower in exchange for their universality, and we're now at the point where they're fast enough to do awesome things.
It feels pretty good to click the link on my iPad and see the leaves. Apple was the last holdout for WebGL. Hopefully we'll see a big spike in WebGL usage.
Mozilla was the first to implement it (Vladimir Vukićević's work), with Opera arriving at around the same time. Chrome jumped on board a few years later, and IE as of version 11. Apple was the holdout for basically unknown reasons (they've had a working version preffed off for ~5 years now), but the basic assumption is that graphics card drivers needed to be hardened to provide a secure execution environment for shaders. It's one thing to buy a game and give it root, quite another to load a webpage that could start a WebGL process in the background and load executable code onto the graphics card.
Security is definitely the likely candidate (it was for Microsoft as well): even on Yosemite, where they enabled WebGL by default, there's a per-site permissions interface similar to things like location services access.
IE dies on a Nokia 630; on an S3 running Android 4.3 I got a blank screen with both the native browser and Chrome; I only got lucky with an S4 running Android 4.4 and Chrome, after some thinking time.
Nice demo. Is there any reason why every WebGL demo makes my browser burn the CPUs and drain the battery? Equivalent native code that produces output of the same quality does not have this much of an effect.
Well, first, even assuming JITing JavaScript is expensive and produces expensive code, shouldn't WebGL demos mostly stress the GPU and not the CPU?
Second, web sites are not allowed to execute native code, but all the modern JavaScript engines compile down to native code. My intuition tells me to expect the performance of compiled JavaScript to be low; however, I have seen endless benchmarks that say that's not the case and that JITed JavaScript should be plenty fast.
However, I feel a disconnect between those benchmarks and reality: the benchmarks say it's fast, while in practice I perceive it to be pretty slow and a resource hog.
It depends, I think. There isn't any built-in reason why WebGL code would drain batteries faster than a similar native-code demo. WebGL has to do a bit more validation on the CPU, which may contribute a bit. But other than that, most demos should be very light on the CPU side, so the difference between JS and native code wouldn't be an issue. However, many WebGL demos move a lot of work to the GPU (especially the shadertoy demos, which render the entire scene in a single fullscreen quad and have absurdly complex pixel shaders); especially in fullscreen mode these will suck your battery dry very quickly.
On my fairly old MBP, all fullscreen 3D apps (also native games) kick the fan into high gear immediately and the bottom gets warm very quickly, while most "reasonable" WebGL demos don't have this effect (e.g. some of the lighter ones from here: http://alteredqualia.com/)
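For anyone unfamiliar with the shadertoy pattern mentioned above, here is a stripped-down sketch of it (the canvas id and shader body are made up for illustration): the CPU submits a single fullscreen quad each frame, and the whole scene is computed per pixel in the fragment shader, which is exactly why the GPU gets hot while the CPU stays mostly idle.

    // Stripped-down sketch of the shadertoy-style pattern: one fullscreen quad,
    // all the work in the fragment shader. Assumes a <canvas id="demo"> element;
    // the shader is a toy effect, not from any real demo.
    const canvas = document.getElementById('demo');
    const gl = canvas.getContext('webgl');

    const vertexSrc = [
      'attribute vec2 position;',
      'void main() { gl_Position = vec4(position, 0.0, 1.0); }'
    ].join('\n');

    const fragmentSrc = [
      'precision mediump float;',
      'uniform float time;',
      'uniform vec2 resolution;',
      'void main() {',
      '  // Every pixel runs this every frame: cheap for the CPU, costly for the GPU.',
      '  vec2 uv = gl_FragCoord.xy / resolution;',
      '  gl_FragColor = vec4(0.5 + 0.5 * sin(time + uv.xyx * 10.0), 1.0);',
      '}'
    ].join('\n');

    function compile(type, src) {
      const shader = gl.createShader(type);
      gl.shaderSource(shader, src);
      gl.compileShader(shader);
      return shader;
    }

    const program = gl.createProgram();
    gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSrc));
    gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSrc));
    gl.linkProgram(program);
    gl.useProgram(program);

    // Two triangles covering all of clip space.
    const quad = new Float32Array([-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1]);
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER, quad, gl.STATIC_DRAW);
    const position = gl.getAttribLocation(program, 'position');
    gl.enableVertexAttribArray(position);
    gl.vertexAttribPointer(position, 2, gl.FLOAT, false, 0, 0);

    const timeLoc = gl.getUniformLocation(program, 'time');
    const resolutionLoc = gl.getUniformLocation(program, 'resolution');

    function frame(t) {
      gl.viewport(0, 0, canvas.width, canvas.height);
      gl.uniform1f(timeLoc, t / 1000);
      gl.uniform2f(resolutionLoc, canvas.width, canvas.height);
      gl.drawArrays(gl.TRIANGLES, 0, 6);
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);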
This isn't the case with every WebGL demo; only animations need to render continuously. For example, WebGL apps that change the view only when the camera position changes don't need to continuously invoke the render loop and can often have low CPU requirements.
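A sketch of that render-on-demand style (canvas, drawScene(), updateCameraFromDrag() and zoomCamera() are hypothetical placeholders): mark the scene dirty when input arrives and only then schedule a frame, instead of running an unconditional loop.

    // Sketch of event-driven rendering: nothing is drawn (and barely any
    // CPU/GPU is used) until something actually changes. drawScene(),
    // updateCameraFromDrag() and zoomCamera() are placeholders.
    let frameRequested = false;

    function requestRender() {
      if (frameRequested) return;    // collapse bursts of events into one frame
      frameRequested = true;
      requestAnimationFrame(() => {
        frameRequested = false;
        drawScene();
      });
    }

    // Only input (or some other state change) triggers a redraw.
    canvas.addEventListener('mousemove', (e) => {
      if (e.buttons) {               // dragging rotates the camera
        updateCameraFromDrag(e);
        requestRender();
      }
    });
    canvas.addEventListener('wheel', (e) => {
      zoomCamera(e.deltaY);
      requestRender();
    });

    requestRender();                 // draw the initial view once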
By "out of sandbox" you refer to the fact that shaders are statically transpiled into safe code, rather than getting executed in a dynamically memory-safe and sandboxed environment?
I agree the approach raises some doubts, as it gives untrusted web content a degree of indirect control over what kind of "safe" shader code gets passed to the GPU shader compilers. But the first-order issues are addressed by the specced security model (https://www.khronos.org/webgl/security/).
Yes, that's what I was referring to. I've only read the linked SO discussion, but it seems like security at that compilation step is a focus point for the companies implementing WebGL. Still, any time you're manually bounds-checking at compilation, there's a chance for error. (An SO comment mentioned something about open acceptance tests for this.)
Microsoft says the security is too bad to develop for, but that could be a play for DirectX (whatever it's called now) more than a real security worry.
Point is, it seems like a lot of the WebGL security is based on "don't worry, we tested everything!"
It is likely the IE team didn't have to worry about portability of IE between Windows, Linux, and OS X, so they could hook directly into DirectX in a way that wasn't feasible for the Chrome team.
I believe there is an extra layer of indirection in Chrome. IE can implement WebGL using DirectX, while Chrome uses OpenGL, which under Windows is a wrapper over DirectX. This may not be true of all video cards/drivers, but I believe it's the default implementation of OpenGL under Windows.
Yeah, I know this is true for the AMD driver, at least. Their Catalyst ("fglrx") OpenGL implementation is shared between Linux and Windows, with a fairly low-level wrapper for each OS. [1][2]
When Vista first came out, Microsoft forced OpenGL to be a layer above DirectX. But they pretty quickly backed down and let people build direct OpenGL drivers again.
Beautiful! However, I tried it on my Nexus 5 and it looks like a bunch of stuff is turned off and the display is kind of pixelated. Is there a way to run it in full-res mode? Modern mobiles have surprisingly powerful GPUs.
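For what it's worth, the pixelated look is usually the demo rendering at CSS-pixel resolution (or a deliberately reduced one) and letting the browser upscale it. A sketch of what "full res" would mean, assuming the author exposed nothing to toggle it (the function name is made up):

    // Sketch: size the WebGL backing store to physical pixels so a high-DPI
    // phone isn't rendered at a fraction of its resolution and then upscaled.
    // (Demos often skip this on purpose, since it multiplies the pixel count.)
    function resizeToFullRes(canvas, gl) {
      const dpr = window.devicePixelRatio || 1;
      const width = Math.round(canvas.clientWidth * dpr);
      const height = Math.round(canvas.clientHeight * dpr);
      if (canvas.width !== width || canvas.height !== height) {
        canvas.width = width;        // backing-store size in device pixels
        canvas.height = height;
        gl.viewport(0, 0, width, height);
      }
    }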