Show HN: Vulkan bindings for JavaScript (github.com/maierfelix)
77 points by _s47s on Oct 2, 2018 | 24 comments



Is there a benefit of using Vulkan over WebGL?

I would think you can achieve similar speed using WebGL in the browser and then have all the comfortable functionality of the browser for free.


This is for Node.js, so it's for desktop apps, not web apps (it's not possible to use Vulkan on the web).

For the web, you have no choice but to use WebGL.


The future is WebGPU.

If you're interested in why a Vulkan binding for the web is not a good idea, I found this doc (from two engineers working at Google) quite interesting: https://docs.google.com/document/d/1-lAvR9GXaNJiqUIpm3N2XuGU...


Thank you for the document!

Definitely, having some form of GPGPU on the web is the next step. It is currently one of my few complaints with WebGL: the lack of compute shaders.


You can do GPU compute without the OpenGL "compute shader" feature. WebGL 2 has much improved features for it vs WebGL 1.

There are existing GPGPU things running on WebGL; see e.g. https://github.com/tensorflow/tfjs-core and https://magenta.tensorflow.org/demos/#web-apps
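The usual trick such libraries use is to encode array data as texture pixels, run a fragment shader over the data, and read the result back. A minimal sketch of just the CPU-side packing step (the function name and layout here are illustrative, not tfjs-core's actual internals):

```javascript
// Pack a flat float array into square RGBA float texels, the kind of
// layout a WebGL GPGPU library would upload with gl.texImage2D.
function packToTexels(data) {
  const texelCount = Math.ceil(data.length / 4); // 4 floats per RGBA texel
  const side = Math.ceil(Math.sqrt(texelCount)); // smallest square texture
  const pixels = new Float32Array(side * side * 4); // zero-padded tail
  pixels.set(data);
  return { width: side, height: side, pixels };
}

const tex = packToTexels([1, 2, 3, 4, 5]);
// tex.width === 2, tex.height === 2, tex.pixels[4] === 5
```

The shader side then indexes into that texture by texel coordinate, which is exactly how compute-shader-less WebGL gets used for GPGPU.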


But WebGL compute shaders are behind an extension which, as far as I know, doesn't have much support yet :/


The next step in what, exactly? The major thing I see GPGPU being used for on the Web is to mine cryptocurrency using your viewers' hardware in lieu of (or, more likely, as a supplement to) ads.


A lot of modern 3D engines use compute shaders to do many different things. For example, I use them to process millions of particles, which wouldn't be possible without them.


> The future is WebGPU

I hope not. The proposed standard was not even remotely vendor-neutral.

I would love to have compute shaders in WebGL. All it would require is bumping the OpenGL version that WebGL is based on from ES 3.0 to ES 3.1 in the next revision.
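For reference, this is roughly what the requested feature looks like: an OpenGL ES 3.1-style compute shader, held in a JS string the way WebGL shader source is normally passed around. This is a hand-written sketch, not code from any shipping WebGL extension; WebGL as shipped (ES 3.0-based) has no way to compile or dispatch it.

```javascript
// A minimal ES 3.1-style compute shader: each invocation doubles one
// element of a shader storage buffer. Sketch only; WebGL today cannot
// run this, which is the commenter's whole complaint.
const computeShaderSource = `#version 310 es
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer Data { float values[]; };
void main() {
  uint i = gl_GlobalInvocationID.x;
  values[i] = values[i] * 2.0;
}`;
```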

As far as I can tell, that will not happen because it would reduce the need for the WebGPU proposal. Needless to say, I find the situation very annoying.


> I hope not. The proposed standard was not even remotely vendor-neutral.

Why not? WebGPU work continues here, based on the work that Apple proposed. Google has a cross-platform prototype implementation. https://github.com/gpuweb/gpuweb

> I would love to have compute shaders in WebGL. All it would require is bumping the OpenGL version that WebGL is based on from ES 3.0 to ES 3.1 in the next revision.

WebGL2 has very little vendor support already, and OpenGL is a dead end from an API perspective. Something low-ish-level like Metal, without being as absurd as Vulkan, would be a great fit for the web.


There's room to develop a new API that's a lot better than WebGL. I just don't think we'll get the best result from a standards process driven by realpolitik.


Pretty cool!


What is the point of using a low overhead graphics API like Vulkan if you're just gonna crush performance with JavaScript?


The performance critical aspects of an app shouldn't need to run on the topmost abstraction layer.

JavaScript should tell the CPU what to do, not how to do it.

Edit: If there are CPU-intensive tasks that also need to be customized for a particular app, we should perhaps have some way to define those tasks that gives you as a developer more control over performance characteristics. I guess this is one of the intentions behind WebAssembly?


You’re not going to write the next Tomb Raider, but tools like this can be very good for fast-feedback prototyping.


The point of using APIs like WebGL or Vulkan is precisely to move work from the CPU (JavaScript) to the GPU.

The call from JavaScript to native (the binding from V8 to C++) itself has very low overhead.
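As a rough sanity check, here is a micro-benchmark sketch timing a tiny built-in, native-backed call. Caveat: V8 may inline Math.fround entirely, so treat the number as a lower bound; a real V8-to-C++ binding call costs more, but is still tiny next to per-frame GPU work.

```javascript
// Average cost per call of a tiny function, in nanoseconds.
// The `sink` accumulator keeps the JIT from eliminating the loop.
function nsPerCall(fn, iterations) {
  const start = process.hrtime.bigint();
  let sink = 0;
  for (let i = 0; i < iterations; i++) sink += fn(i);
  const end = process.hrtime.bigint();
  return { avgNs: Number(end - start) / iterations, sink };
}

const { avgNs } = nsPerCall(Math.fround, 1_000_000);
console.log(`~${avgNs.toFixed(1)} ns per call`);
```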


That's not what the parent was asking.


I'm saying that I don't see how JavaScript could crush the performance of a few native API calls.

It's like saying "I don't see why WebGL exists because JavaScript is slow"


No, that's precisely what the original question is trying to address: why not keep using WebGL, since the bottleneck in a graphical application running in JavaScript is likely NOT going to be WebGL, but JavaScript itself.


Because that's a totally arbitrary statement, and it's also not really true. It's pretty trivial to swamp the GPU, even if you know what you're doing. If you want to show something big/complex or very pretty, or even just novel rendering that stresses the GPU (raymarching, scattering), the CPU will be waaaaay less burdened.

CPU bottlenecking is an issue in video games where the main loop is usually handling a massive amount of computation, doing its own intersection checks, etc., plus any other processes queuing sounds, running physics, and so on. There are just a few cores doing a huge amount of work.

It's still relatively easy to swamp the CPU in JavaScript (web workers and async obviously help), but if you're just piping orders to the GPU, then any CPU can easily max out the abilities of even high-end cards. In most cases that's basically what WebGL is used for, AFAIK. How many full-on AI- and physics-heavy games are there? WebGL games tend to be lighter, and are very often accessible ways to play with interesting shaders. The use case tends towards GPU-bound.


The thing is, JavaScript is not going to pipe those orders to the GPU fast enough. Vulkan is not like OpenGL: with Vulkan you basically have to tell the GPU how to do EVERY LITTLE THING.

There is no purpose in having a low-level API in JavaScript, because such APIs have to be guided like babies, and JavaScript (and most high-level programming languages, in fact) is too slow to keep up with their speed. Imagine helping a baby walk, but the baby moves 3x as fast as you, and if you let go of its hand, it falls down.
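One mitigating detail worth noting: Vulkan's command buffers can be recorded once and resubmitted every frame, so JavaScript doesn't have to repeat every little thing per frame. A toy illustration of that idea in plain JS (the names are illustrative, not the real nvk/Vulkan API):

```javascript
// Toy model of Vulkan-style command buffers: pay the JS cost of
// recording once, then resubmit cheaply every frame.
class FakeCommandBuffer {
  constructor() { this.commands = []; }
  record(cmd) { this.commands.push(cmd); }
  submit() { this.commands.forEach((cmd) => cmd()); }
}

const cb = new FakeCommandBuffer();
cb.record(() => { /* bind pipeline */ });
cb.record(() => { /* bind vertex buffers */ });
cb.record(() => { /* draw */ });

// Per frame: one submit() instead of re-issuing every state call.
for (let frame = 0; frame < 3; frame++) cb.submit();
```

Whether that amortization is enough for JS in practice is exactly what this subthread is arguing about.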


We do HPC at Graphistry with Node because of its low-overhead ability to script async over binary buffers (which go straight to CUDA/OpenCL/WebGL) and its many general + async app-code libs for the 99% case. The result is faster than the native equivalents, generally by 10-100x, including multicore. So I'd take that hesitation with a grain of salt.

JS's ability to juggle "Fortran-like" code and scripted app code is pretty underappreciated. Granted, we are slowly adding PyGDF support (google GoAi) to get to GB workloads in real time, but we're fine up to that. I'm not sure if we could have done it in Ruby; maybe we could have done it in Python. Vulkan adds even more async support, which is the type of thing we could certainly use.

So... yeah, it takes a team that gets how to write HPC code to get HPC code, and JS can be a good choice here when there is a lot of app code around the HPC.

We'd love for WebGL to get close to GPU compute parity, but even WebGL 2 feels more like 1998 or 2008 than 2018. The problem there isn't the JS side.


You know that the most popular machine learning frameworks are in Python, which is similarly slow to, or actually even slower than, JS.


These bindings are for node.js, not for the web.



