
I'm a graphics programmer who has quite a bit of experience with WebGL, and (disclaimer) I've also contributed to the WebGPU spec.

> Quite honestly, I have no idea how ThreeJS manages to be so robust, but it does manage somehow.

> To be clear, me not being able to internalize WebGL is probably a shortcoming of my own. People smarter than me have been able to build amazing stuff with WebGL (and OpenGL outside the web), but it just never really clicked for me.

WebGL (and OpenGL) are awful APIs that can give you a very backwards impression about how to use them, and are very state-sensitive. It is not your fault for getting stuck here. Basically one of the first things everybody does is build a sane layer on top of OpenGL; if you are using gl.enable(gl.BLEND) in your core render loop, you have basically already failed.

The first thing everybody does when they start working with WebGL is basically build a little helper on top that makes it easier to control its state logic and do draws all in one go. You can find this helper in three.js here: https://github.com/mrdoob/three.js/blob/master/src/renderers...
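As a sketch of what such a helper typically looks like (hypothetical names, not three.js's actual code): it caches the last value set for each capability and skips redundant GL calls, so the rest of the renderer never touches gl.enable/gl.disable directly.

```javascript
// Sketch of a minimal state-caching helper (hypothetical names, not
// three.js's actual code): it remembers the last value set for each
// capability and skips GL calls that would be redundant.
class GLStateCache {
  constructor(gl) {
    this.gl = gl;
    this.enabled = new Map(); // capability -> last value set
  }
  setEnabled(cap, on) {
    if (this.enabled.get(cap) === on) return; // no redundant GL call
    if (on) this.gl.enable(cap); else this.gl.disable(cap);
    this.enabled.set(cap, on);
  }
}

// Exercising it with a mock GL object that records the real calls made:
const calls = [];
const mockGL = {
  BLEND: 0x0be2, // the real GL_BLEND enum value
  enable: (cap) => calls.push(["enable", cap]),
  disable: (cap) => calls.push(["disable", cap]),
};
const state = new GLStateCache(mockGL);
state.setEnabled(mockGL.BLEND, true);
state.setEnabled(mockGL.BLEND, true); // deduplicated: no call recorded
state.setEnabled(mockGL.BLEND, false);
console.log(calls.length); // 2
```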

> Luckily, accessing an array is safe-guarded by an implicit clamp, so every write past the end of the array will end up writing to the last element of the array

This article might be a bit out of date (mind putting a publish date on these articles?), but these days, the language has been a bit relaxed. From https://gpuweb.github.io/gpuweb/#security-shader :

> If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:

> * write the value to a different location within the resource bounds

> * discard the write operation

> * partially discard the draw or dispatch call

The rest seems accurate.




That is what I hate about Khronos APIs: it is almost a rite of passage into adulthood to create our own mini-engine on top of their APIs to make them usable.

I already have my toolbox, but that doesn't mean I am fine with them being like that.


I think OpenGL and Vulkan fail for opposite reasons here. OpenGL is a giant ball-of-yarn state machine that's way too complicated to drive and doesn't do what you want. Vulkan requires spelling everything out in excruciating detail (though recent additions like VK_KHR_dynamic_rendering help clean up the mess a lot).

I don't think there's a common design principle of hiding behind mini-engines; they just overcompensated in the other direction when designing Vulkan. D3D12 is a bit similar.

There are many possible ways to wrap these APIs for your own use case, and nobody will ever agree on how those wrappers should work (e.g. automatic resource tracking makes bindless difficult, and multi-threaded command recording makes automatic resource tracking difficult, but RT basically requires bindless, so pick which feature to drop). Metal shows one very strong direction. WebGPU shows another good direction, but they all take some very deep compromises here.


IME it almost always makes sense to wrap system APIs with your own wrapper which 'massages' the low level APIs into a (usually) much smaller API that's specialized for your use case. Gives you more wiggle room to experiment and optimize without having to rewrite large parts of your higher level code.


> (mind putting a publish date on these articles?)

Off-topic, but please PLEASE put the publish date on your (technical) articles. When I'm looking for documentation and open an article without a publish date, I almost always discard it immediately. I'm not going to risk wasting my time learning outdated knowledge.


This article does have a publish date; it's just easy to miss in the top-right corner, with a bit of a low contrast ratio (in dark mode at least).

The article is from 2022-03-08


Can you elaborate more on this? It seems interesting

> if you are using gl.enable(gl.BLEND) in your core render loop, you have basically already failed.


Basically, if one piece of your codebase calls gl.enable(gl.BLEND), then either it has to reset it at the end with gl.disable(gl.BLEND), which means you have some vague, ambiguous default state that all code enters and leaves, or the code that draws next has to defensively reset any state that might have been set elsewhere, calling gl.disable(gl.BLEND) before it renders, or else it silently depends on the state set by whatever ran around it.

That latter issue is a real problem, because it makes the frame a lot harder to refactor. One of the biggest stumbling blocks for any new graphics programmer that started on OpenGL is implementing something like a Z-prepass or a shadow map, where you have the same object drawing to two passes, because the GL state machine makes it very easy to accidentally depend on some hidden piece of state you didn't know you were using.

The right answer is to have a state tracker that knows the current state, and some combination of object/pass/material know the state intended to switch to.
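A sketch of that idea (hypothetical names, not any particular engine's API): each material declares its complete desired state, and the tracker applies only the delta against what is currently set, so no draw can silently inherit state from the previous one.

```javascript
// Sketch of a state tracker (hypothetical names): each material declares its
// complete desired state, and applyState() applies only the delta against the
// tracked current state, so no draw can inherit hidden state from before.
function makeStateTracker(gl) {
  const current = { blend: false, depthTest: false }; // GL's actual defaults
  const caps = { blend: gl.BLEND, depthTest: gl.DEPTH_TEST };
  return {
    applyState(desired) {
      for (const key of Object.keys(current)) {
        if (current[key] === desired[key]) continue; // already set
        if (desired[key]) gl.enable(caps[key]); else gl.disable(caps[key]);
        current[key] = desired[key];
      }
    },
  };
}

// Two materials, each carrying explicit, complete state:
const opaqueMat = { blend: false, depthTest: true };
const transparentMat = { blend: true, depthTest: true };

const log = [];
const gl = {
  BLEND: "BLEND", DEPTH_TEST: "DEPTH_TEST",
  enable: (cap) => log.push("enable " + cap),
  disable: (cap) => log.push("disable " + cap),
};
const tracker = makeStateTracker(gl);
tracker.applyState(opaqueMat);      // only depth testing needs enabling
tracker.applyState(transparentMat); // only the blend bit changes
console.log(log); // ["enable DEPTH_TEST", "enable BLEND"]
```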

And gl.BLEND is the easy case. Things like VAOs and FBOs are dangerous to have bound latently, because of GL's brutal bind-to-modify API design. Bind-to-modify was optionally dropped when EXT_direct_state_access was added, but that never made its way to GLES/WebGL, unfortunately.


One reason is that small, innocent changes like this can cause a large wave of dependent changes inside the GL implementation: shaders may be patched or recompiled, internal 'state group objects' discarded and recreated, and those expensive actions might happen at a random point further down in another GL call. Those details also differ between GPU vendor drivers, and sometimes even driver versions. This is what makes GL extremely unpredictable when it comes to profiling CPU overhead, and why modern 3D APIs prefer bundling state into immutable state-group objects.

For other things, specifically WebGL may need to run expensive input validation which might also trigger at more or less random places.

Also: it's very easy to forget one tiny state change out of dozens, which then messes up rendering in more or less subtle ways, not just for the rest of the frame but for the rest of the application's lifetime.


As Jasper said, you write a library to manage GL state for you, rather than calling GL functions directly to manage state (like glEnable and glDisable, among countless others). The risk is simply that you will forget to change things back and one drawing operation will accidentally affect the next.


I'd also be interested in details on this, but I assume the gl.enable() API changes fundamental things about the rendering pipeline. It allows enabling things like depth testing and stenciling (both involve an extra buffer) and face culling (additional tests after the vertex shader). For blending in particular, I think it requires the fragment shader to first read the previous value from the framebuffer. Changing this stuff is probably not a trivial operation and requires a lot of communication with the GPU, which is slow (just a guess).

If you want to change blending for each draw call you can change the blending function or just return suitable alpha values from the fragment shader.
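For example (a sketch assuming premultiplied alpha, not from the thread): with blending left enabled and gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA) set once, a fragment that writes alpha = 1.0 blends to exactly its own color, so "opaque" draws need no state toggle. Simulating the blend equation in plain JS:

```javascript
// Simulating the premultiplied-alpha blend equation per channel:
//   out = src.rgb * 1 + dst.rgb * (1 - src.a)
// which is what gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA) computes.
// A fragment writing alpha = 1.0 fully replaces the destination, so opaque
// draws work without ever toggling gl.BLEND between draw calls.
function blend(src, dst) {
  return src.rgb.map((s, i) => s + dst.rgb[i] * (1 - src.a));
}

const dst = { rgb: [0.25, 0.5, 0.75] }; // existing framebuffer color
const opaqueSrc = { rgb: [0.8, 0.1, 0.1], a: 1.0 };   // alpha 1: no blending
const glassSrc = { rgb: [0.5, 0.25, 0.125], a: 0.5 }; // premultiplied, 50%

console.log(blend(opaqueSrc, dst)); // [0.8, 0.1, 0.1] (dst fully replaced)
console.log(blend(glassSrc, dst));  // [0.625, 0.5, 0.5]
```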


> Changing this stuff is probably not a trivial operation and requires a lot of communication with the GPU, which is slow (just a guess).

The GPU underneath looks a lot more like Vulkan than it does like OpenGL. Changing state should, in general, not require communicating with the GPU at all, that happens once you draw stuff (or do other operations like compiling shaders or creating textures).


Yeah, but the problem specifically with GL is that it's almost unpredictable what actually happens at 'draw time', because small GL state changes can be amplified into big GPU state changes.


Yeah, that's definitely an issue. Vulkan has some of the same issues, they're just moved to the pipeline creation stage.


> WebGL (and OpenGL) are awful APIs that can give you a very backwards impression about how to use them, and are very state-sensitive. It is not your fault for getting stuck here. Basically one of the first things everybody does is build a sane layer on top of OpenGL; if you are using gl.enable(gl.BLEND) in your core render loop, you have basically already failed.

I really don't understand this. Why the need for relentless abstraction? Just learn the ways that OpenGL is weird and use it anyway. Most people who work on these things will need to understand OpenGL anyway.

Then again, I guess it depends what you mean by "everybody" when you say "everybody does". Clearly you are using hyperbole here, but who do you actually mean by everybody? For example, if everybody did it, then how did some people reach your failure case? Unclear and bizarre comment. If you say "everybody", you must at least attempt to clarify who is meant; otherwise it is a contentless comment.


(Based a bit on older OGL usage)

> I really don't understand this. Why the need for relentless abstraction? Just learn the ways that OpenGL is weird and use it anyway. Most people who work on these things will need to understand OpenGL anyway.

OpenGL/WebGL have global state that affects drawing and introduces side effects. Directly written drawing code tends to make assumptions, which then blow up when drawing code next to it makes different assumptions.

The API abstraction means you don't really know how expensive it is to make state changes.

So you tend to build either a thin abstraction that sets the state appropriately for each piece of drawing code before use, flushing prior state, or a full local state manager that understands the 'delta' between old and new state.
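A sketch of the first, thinner option (hypothetical names): every draw goes through a helper that applies the state it needs, then flushes everything back to one known baseline, trading some redundant GL calls for predictability.

```javascript
// Sketch of the "thin abstraction" option (hypothetical names): each draw
// applies exactly the state it needs, then resets to one known baseline, so
// no state leaks between draws, at the cost of some redundant GL calls.
const BASELINE = { blend: false, depthTest: true };

function drawWith(gl, overrides, drawFn) {
  const apply = (s) => {
    if (s.blend) gl.enable(gl.BLEND); else gl.disable(gl.BLEND);
    if (s.depthTest) gl.enable(gl.DEPTH_TEST); else gl.disable(gl.DEPTH_TEST);
  };
  apply({ ...BASELINE, ...overrides }); // full state, defaults filled in
  drawFn();
  apply(BASELINE); // always flush back to the baseline, even when redundant
}

// Recording the calls with a mock GL object:
const log = [];
const gl = {
  BLEND: "BLEND", DEPTH_TEST: "DEPTH_TEST",
  enable: (cap) => log.push("enable " + cap),
  disable: (cap) => log.push("disable " + cap),
};
drawWith(gl, { blend: true }, () => log.push("draw"));
console.log(log);
// ["enable BLEND", "enable DEPTH_TEST", "draw",
//  "disable BLEND", "enable DEPTH_TEST"]
```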


> WebGL (and OpenGL) are awful APIs (...)

What would a good graphics API look like?


D3D11, Metal and WebGPU are all pretty good in that they are much less brittle than the OpenGL programming model, while still being usable by mere humans without 20 years' experience of writing GPU drivers - which is pretty much what's expected to make any sense of Vulkan ;)


Most people who want to keep using OpenGL would be better served by a cross-platform D3D11. WebGPU is similar-ish to that.


If you have any suggestions for articles to read about wgpu, please share. I kind of struggle to find good articles with good examples for beginners getting started with wgpu.



Thanks so much! This one, https://webgpufundamentals.org/, is amazing.

I wish there were more about compute shaders, though, and less JS-centered material.


Did you give this article a try?

I found it very helpful for getting started. And it is not really outdated; only the first example doesn't work right away anymore, but if you go to the next step it all works, and then you progress to a small working physics simulation.


Yes I did, and I loved it; that's why I shared it here :)


I find ChatGPT-4 quite useful for this. Even if it may make some mistakes, it can generate little examples and explain every line.



