The next step in what, exactly? The major thing I see GPGPU being used for on the Web is to mine cryptocurrency using your viewers' hardware in lieu of (or, more likely, as a supplement to) ads.
A lot of modern 3D engines use compute shaders to do a lot of different things.
For example, I use them to process millions of particles, which wouldn't be possible otherwise.
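For the curious, here's a rough sketch of what a GPU particle update looks like with a compute shader. The GLSL is standard ES 3.1; the JS side assumes the experimental "webgl2-compute" context that Chrome briefly exposed behind a flag (WebGL proper never got this), so the context name and dispatch calls are illustrative rather than something you can ship against today.

    // Sketch: GPU particle update via an ES 3.1-style compute shader.
    // Assumes the experimental "webgl2-compute" context; treat names as illustrative.
    const computeSrc = `#version 310 es
    precision highp float;
    layout(local_size_x = 64) in;
    layout(std430, binding = 0) buffer Particles { vec4 posVel[]; };
    void main() {
      uint i = gl_GlobalInvocationID.x;
      posVel[i].xy += posVel[i].zw * 0.016;  // integrate position by velocity
    }`;

    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl2-compute") as any;  // experimental, flag-gated

    const shader = gl.createShader(gl.COMPUTE_SHADER);
    gl.shaderSource(shader, computeSrc);
    gl.compileShader(shader);
    const prog = gl.createProgram();
    gl.attachShader(prog, shader);
    gl.linkProgram(prog);

    const particles = new Float32Array(1_000_000 * 4);  // x, y, vx, vy per particle
    const ssbo = gl.createBuffer();
    gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 0, ssbo);
    gl.bufferData(gl.SHADER_STORAGE_BUFFER, particles, gl.DYNAMIC_COPY);

    gl.useProgram(prog);
    gl.dispatchCompute(particles.length / 4 / 64, 1, 1);  // one invocation per particle
    gl.memoryBarrier(gl.SHADER_STORAGE_BARRIER_BIT);      // make writes visible before drawing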
I hope not. The proposed standard was not even remotely vendor-neutral.
I would love to have compute shaders in WebGL. All it would require is bumping the OpenGL version that WebGL is based on from ES 3.0 to ES 3.1 in the next revision.
As far as I can tell, that will not happen because it would reduce the need for the WebGPU proposal. Needless to say, I find the situation very annoying.
> I hope not. The proposed standard was not even remotely vendor-neutral.
Why not? WebGPU work continues here, based on the work that Apple proposed, and Google has a cross-platform prototype implementation: https://github.com/gpuweb/gpuweb
> I would love to have compute shaders in WebGL. All it would require is bumping the OpenGL version that WebGL is based on from ES 3.0 to ES 3.1 in the next revision.
WebGL2 has very little vendor support already, and OpenGL is a dead end, from an API perspective. Something low-ish like Metal without being as absurd as Vulkan would be a great fit for the web.
There's room to develop a new API that's a lot better than WebGL. I just don't think we'll get the best result from a standards process driven by realpolitik.
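For what it's worth, here is a minimal sketch of a compute dispatch through the WebGPU API as the gpuweb group has been shaping it (just doubling a buffer of floats). Names and details may still shift, so read it as illustrative rather than final.

    // Sketch: WebGPU compute dispatch that doubles a buffer of floats.
    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter!.requestDevice();

    const data = new Float32Array([1, 2, 3, 4]);
    const buf = device.createBuffer({
      size: data.byteLength,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    });
    device.queue.writeBuffer(buf, 0, data);

    const pipeline = device.createComputePipeline({
      layout: "auto",
      compute: {
        module: device.createShaderModule({
          code: `
            @group(0) @binding(0) var<storage, read_write> nums: array<f32>;
            @compute @workgroup_size(64)
            fn main(@builtin(global_invocation_id) id: vec3<u32>) {
              if (id.x < arrayLength(&nums)) { nums[id.x] = nums[id.x] * 2.0; }
            }`,
        }),
        entryPoint: "main",
      },
    });

    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer: buf } }],
    });

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(Math.ceil(data.length / 64));  // one workgroup covers all 4 elements
    pass.end();
    device.queue.submit([encoder.finish()]);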
The performance-critical aspects of an app shouldn't need to run on the topmost abstraction layer.
JavaScript should tell the CPU what to do, not how to do it.
Edit: If there are CPU-intensive tasks that also need to be customized for a particular app, we should perhaps have some way to define those tasks that gives you, as a developer, more control over performance characteristics. I guess this is one of the intentions behind WebAssembly?
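That is indeed one of WebAssembly's stated goals: the JS side just orchestrates, and the hot loop runs inside a module compiled from something lower-level. A hedged sketch (the kernel.wasm module and its sum export are made up for illustration):

    // Sketch: offloading a hot numeric kernel to WebAssembly.
    // "kernel.wasm" and its exported "sum" function are hypothetical.
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("kernel.wasm"),
      {}  // imports the module expects, if any
    );

    const { memory, sum } = instance.exports as {
      memory: WebAssembly.Memory;
      sum: (ptr: number, len: number) => number;
    };

    // Write the input straight into the module's linear memory, then call in.
    // (A real module would hand out a pointer from its own allocator.)
    const input = new Float64Array(memory.buffer, 0, 1024);
    input.fill(1.5);
    console.log(sum(0, input.length));  // the tight loop runs in wasm, not JS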
No, that's precisely what the original question is trying to address: Why not keep using WebGL since the bottleneck in a graphical application running in Javascript is likely NOT going to be WebGL, but Javascript itself.
Because that's a totally arbitrary statement, and it's also not really true. It's pretty trivial to swamp the GPU, even if you know what you're doing. If you want to show something big/complex or very pretty, or even just novel rendering that stresses the GPU (raymarching, scattering), the CPU will be waaaaay less burdened.
CPU bottlenecking is an issue in video games, where the main loop is usually handling a massive amount of computation, doing its own intersection checks, and so on, plus other processes queuing sounds, running physics, etc. There are just a few cores doing a huge amount of work.
It's still relatively easy to swamp the CPU in JavaScript (web workers and async obviously help), but if you're just piping orders to the GPU, then any CPU can easily max out the abilities of even high-end cards. In most cases that's basically what WebGL is used for, AFAIK. How many full-on AI- and physics-heavy games are there? WebGL games tend to be lighter, and are very often accessible ways to play with interesting shaders. The use case tends towards GPU-bound.
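To make the "piping orders" point concrete, here's a minimal sketch of a typical WebGL frame: the JS side issues a handful of cheap state and draw calls, while the expensive per-pixel work lives in the fragment shader (not shown). The program, VAO, and uniform location are assumed to be set up once elsewhere.

    // Sketch: a typical WebGL render loop. Per-frame CPU work is a few cheap calls;
    // the heavy lifting happens on the GPU in the (not shown) shaders.
    declare const program: WebGLProgram;            // assumed set up once, elsewhere
    declare const vao: WebGLVertexArrayObject;
    declare const timeLoc: WebGLUniformLocation;

    const canvas = document.querySelector("canvas")!;
    const gl = canvas.getContext("webgl2")!;

    function frame(timeMs: number) {
      gl.viewport(0, 0, canvas.width, canvas.height);
      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

      gl.useProgram(program);
      gl.bindVertexArray(vao);
      gl.uniform1f(timeLoc, timeMs * 0.001);        // tiny amount of CPU-side work
      gl.drawArrays(gl.TRIANGLES, 0, 3);            // one fullscreen draw; GPU does the rest

      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);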
The thing is, Javascript is not going to pipe those orders to the GPU fast enough. Vulkan is not like OpenGL; with Vulkan you basically have to tell the GPU how to do EVERY LITTLE THING.
There is no purpose in having a low-level API in Javascript, because such APIs have to be guided like babies, and Javascript (and most high-level programming languages, in fact) is too slow to keep up with them. Imagine helping a baby walk, except the baby moves 3x as fast as you, and if you let go of its hand, it falls down.
We do HPC at Graphistry with Node because of its low-overhead ability to script async over binary buffers (which go straight to CUDA/OpenCL/WebGL) and its many general-purpose and async app-code libs for the 99% case. The result is faster than the native equivalents - generally by 10-100x, including multicore. So I'd take that hesitation with a grain of salt.
JS's ability to juggle "Fortran-like" code and scripted app code is pretty underappreciated. Granted, we are slowly adding PyGDF support (google GoAi) to get to GB workloads in real time, but we're fine up to that point. I'm not sure if we could have done it in Ruby; maybe we could have done it in Python. Vulkan adds even more async support, which is the type of thing we could certainly use.
So... yeah, it takes a team that knows how to write HPC code to get HPC performance, and JS can be a good choice here when there is a lot of app code using the HPC.
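A hedged illustration of that split (fetchEdgeList and uploadToGpu are hypothetical stand-ins for the real I/O and GPU glue): the numeric kernel is a plain monomorphic loop over a typed array, and everything around it is ordinary async app code.

    // Sketch: "Fortran-like" numeric code living next to ordinary async app code.
    declare function fetchEdgeList(url: string): Promise<Array<{ weight: number }>>;
    declare function uploadToGpu(buf: ArrayBuffer): Promise<void>;

    // Tight, monomorphic loop over a typed array - the part the JIT handles well.
    function normalizeInPlace(weights: Float64Array): void {
      let sum = 0;
      for (let i = 0; i < weights.length; i++) sum += weights[i];
      const inv = sum > 0 ? 1 / sum : 0;
      for (let i = 0; i < weights.length; i++) weights[i] *= inv;
    }

    // Ordinary async app code orchestrating around it.
    async function run(url: string): Promise<void> {
      const edges = await fetchEdgeList(url);             // network, parsing, etc.
      const weights = new Float64Array(edges.length);
      for (let i = 0; i < edges.length; i++) weights[i] = edges[i].weight;
      normalizeInPlace(weights);                          // the "Fortran-like" part
      await uploadToGpu(weights.buffer);                  // binary buffer handed to the GPU side
    }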
We'd love for WebGL to get close to GPU compute parity, but even WebGL2 feels more like 1998 or 2008 than 2018. The problem there isn't the JS side.
I would think you can achieve similar speed using WebGL in the browser and then have all the comfortable functionality of the browser for free.