On X11 you can design far more efficient remote apps via GLX serialization. You could have a headless server with no graphics card render 3D applications on your local machine, which might have really beefy graphics hardware. That will never be possible on Wayland because of its truly flawed protocol design.

This solution to "network transparency" is nothing more than pushing whole screen updates directly over the wire. So why not use an established protocol like VNC?



Remote GLX is a giant hack, and even NVIDIA gave up on it ten years ago. Modern GPU programming is all about data management and scheduling and bandwidth, and when you add serialization and latency into the mix, performance tanks.

For instance, sub-buffer updates have certain constraints that make them very fast in the local case, but they would require a lot of data to be serialized over the wire every frame, and networks do not have the bandwidth for that.
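To make the bandwidth point concrete, here is a minimal sketch in plain C (the function, buffer, and sizes are hypothetical, and a GL 1.5+ header or extension loader is assumed) of the kind of per-frame sub-buffer update a modern renderer does locally. With direct rendering it is a cheap copy into driver-managed memory; under a serialized protocol every byte of it would have to cross the network each frame.

    #include <GL/gl.h>
    #include <stddef.h>

    /* Hypothetical per-frame update: stream fresh particle positions into
     * an existing GPU buffer.  Locally this is little more than a memcpy
     * into driver memory; serialized over a socket it becomes megabytes of
     * traffic every single frame. */
    void update_particles(GLuint vbo, const float *positions, size_t count)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER,
                        0,                          /* offset into the buffer */
                        count * 3 * sizeof(float),  /* x, y, z per particle */
                        positions);
    }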

"network transparency" is an anti-goal in protocol design for the same reason "rpc that acts like a function call" is inherently flawed - the network adds a lot of complexity and different design constraints.


I can only speak from experience. I used specialized CAD programs for simulations in the past, and GLX serialization worked really well. Once all textures, shaders, and models are uploaded, the only things that go over the wire are camera position updates and small updates to the display list.
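A rough sketch of that usage pattern in legacy fixed-function C/OpenGL, the style of API those CAD viewers used (the function names here are illustrative): the geometry is compiled into a display list once, so under indirect GLX the per-frame traffic is little more than a camera matrix and a call-list token.

    #include <GL/gl.h>

    static GLuint model_list;

    /* One-time setup: the geometry crosses the wire once and is retained
     * by the X server / GPU as a display list. */
    void upload_model(void)
    {
        model_list = glGenLists(1);
        glNewList(model_list, GL_COMPILE);
        /* ... thousands of glVertex/glNormal calls, sent exactly once ... */
        glEndList();
    }

    /* Per frame: only a handful of small commands cross the network --
     * a matrix for the new camera position and a call-list token. */
    void draw_frame(const GLfloat camera_matrix[16])
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(camera_matrix);
        glCallList(model_list);
    }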

Games that try to squeeze every ounce out of the hardware with tricks for extra FPS are not well suited to serialization. I agree with that.


It's not just serialization costs, but rather a change in presumed trust boundaries. The graphics hardware abstraction is becoming less like storage or communication, which can be easily virtualized and reasoned about to manage risks with delegated access. Graphics is becoming more like the host processor and memory, running arbitrary application code. GPU allocation is more like process or job control and scheduling, with a cross-compilation step stuck in the middle.

So, the very abstraction of "textures, models, display lists, and draw commands" is no longer what is being managed by the graphics stack. That is just one legacy abstraction which could be emulated by an application or gateway service. As people have stated elsewhere, one can continue to operate something like X Windows to keep that legacy protocol. Or, one can run a web browser to offer HTML+js+WebGL as another legacy and low-trust interface.

But, one cannot expect all application developers to limit themselves to these primitive, legacy APIs. They want and need the direct bypass that lets them put the modern GPU to good use. They are going to invest in different application frameworks and programming models that help them in this work. I hope that the core OS abstractions for sharing this hardware can be made robust enough to host a mixture of such frameworks as well as enabling multi-user sharing and virtualization of GPU hardware in server environments.

To provide transparently remote applications in this coming world, I think you have to accept that the whole application will have to run somewhere that colocates the host and GPU device resources, if the original developer has focused on that local rendering model. Transparency needs to be added at the input/output layer, where you can put the application's virtual window or virtual full-screen video output through a pipe to a different screen that the application doesn't really know or care about.
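A toy sketch of that output-layer approach in C, assuming a headless GL context already exists (the file descriptor and raw-RGBA frame format are invented for illustration): the application renders as if it were local, and a readback loop ships finished frames down a pipe to whatever happens to be displaying them.

    #include <GL/gl.h>
    #include <unistd.h>

    /* Illustrative only: after the application has rendered a frame into a
     * headless framebuffer, read the pixels back and push them down a file
     * descriptor (a pipe or TCP socket) to a remote viewer.  A real system
     * would compress and damage-track instead of sending raw RGBA. */
    void ship_frame(int out_fd, int width, int height, unsigned char *pixels)
    {
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        write(out_fd, pixels, (size_t)width * height * 4);
    }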


At some point in the future, GPUs will have to decide whether they are computing devices or graphics devices. Right now they are trying to be both.

If you purposely design graphics devices, you can make many simplifications and optimizations, because you can abstract all tasks as drawing primitives. That makes serialization very easy.
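A crude sketch of why a primitives-only device would be easy to serialize (this wire format is entirely made up for illustration): every operation reduces to a small, fixed-shape record that can be written to a socket as-is.

    #include <stdint.h>
    #include <unistd.h>

    /* Hypothetical wire format: if the device only understands drawing
     * primitives, each command is a tiny self-contained record. */
    struct draw_cmd {
        uint16_t opcode;      /* e.g. 1 = triangle, 2 = line */
        uint16_t reserved;
        float    xyz[3][3];   /* three vertices */
        uint32_t rgba;        /* packed colour */
    };

    void send_primitive(int fd, const struct draw_cmd *cmd)
    {
        /* Serialization is trivial: the record is the protocol message. */
        write(fd, cmd, sizeof *cmd);
    }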


I think they have already decided, and they are computing devices, with various graphics techniques captured as userspace code. It is a bit of a fiction that graphics still consists of just "drawing primitives" like triangles. Those simplistic applications are supported by compatibility libraries that abstract the real computational system.

The core of the GPU is really computational data transforms on arrays of data. But there is a whole spectrum to these computational methods rather than just a few discrete modes. This is where application-specific code is now supplied to define the small bits of work as well as to redefine the entire pipeline, e.g. that of a multi-pass renderer. The differences between "transforms and lighting", "texturing and shading", "z-buffering and blending", or even "ray-casting and ray-tracing" are really more in the intent of the application programmer than in actual hardware. The core hardware features are really there to support different data types/precisions, SIMD vs MIMD parallelism, fused operations for common computational idioms, and memory systems that balance the hardware for certain presumed workloads.
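As a small illustration of that "application-supplied code doing data transforms" view, here is a sketch in C using OpenGL 4.3 compute shaders (the kernel, names, and buffer binding are made up for the example, and an extension loader is assumed): the application provides the per-element work itself rather than asking for a fixed drawing primitive.

    #include <GL/gl.h>

    /* Application-supplied kernel: not a "draw triangles" request, just an
     * arbitrary per-element transform over an array of floats in a GPU buffer. */
    static const char *kernel_src =
        "#version 430\n"
        "layout(local_size_x = 64) in;\n"
        "layout(std430, binding = 0) buffer Data { float v[]; };\n"
        "void main() {\n"
        "    uint i = gl_GlobalInvocationID.x;\n"
        "    v[i] = v[i] * 2.0 + 1.0;   // whatever work the app defines\n"
        "}\n";

    void run_transform(GLuint element_count)
    {
        GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
        glShaderSource(shader, 1, &kernel_src, NULL);
        glCompileShader(shader);

        GLuint prog = glCreateProgram();
        glAttachShader(prog, shader);
        glLinkProgram(prog);

        glUseProgram(prog);
        glDispatchCompute(element_count / 64, 1, 1);  /* one thread per element */
        glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    }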


Remote GLX only works for really old versions of GL (such as the versions used by CAD software). X11 already effectively gave up on it a long time ago.


It could be revived via "fossilizing"[1] Vulkan commands, something made possible by Valve. Maybe even as a Wayland extension.

[1]: https://github.com/ValveSoftware/Fossilize


This is a really stupid hill to die on. Are you even serious? What you're describing is a massive hack built on a foundation of sand, and its absence is NOT an indication of Wayland's "truly flawed" protocol design, which I'm certain you haven't read about anyway.

> So why not use established protocols like VNC?

Yes indeed!


> What you're describing is a massive hack built on a foundation of sand, and its absence is NOT an indication of Wayland's "truly flawed" protocol design, which I'm certain you haven't read about anyway.

Serialized GLX was introduced by SGI before direct rendering was even possible. It is what the creators of OpenGL originally envisioned (graphics terminals connected to servers). If anything, the DRI that came afterwards was the hack.

Serialized GLX doesn't make sense in every context, I agree with that. But it is great to have the option. X11 offers that option, Wayland does not. Of course you could write your own proprietary client-server architecture on top of the Wayland protocol. But why reinvent the wheel?


VNC on GNU/Linux has terrible performance and UI. Isn't SPICE where the cutting edge of display over LAN technology is at?


> whole screen updates directly over the wire

Not whole screen. Whole windows.

You can use VNC, but it won't integrate well with other windows.



