
> Rust is normally used only when high performance is of uttermost importance, so it will always attract people who want to optimise everything.

Which is not the way to do things. Profile, then optimize.

I'm writing a metaverse client that's heavily multithreaded and can keep a GPU, a dozen CPUs, and a network connection busy. Only some parts have to go fast. The critical parts are:

* The render loop, which is in its own higher-priority thread.

* Avoiding blocking of the render loop by locks held during GPU content updating, which is supposed to happen in parallel with rendering.

* JPEG 2000 decoding, which eats up too much time and for which 10x faster decoders are available.

* Strategies for deciding which content to load first.

* Strategies for deciding what doesn't have to be drawn.

Those really matter. The rest is either minor, infrequent, or not on the critical path.
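That second item is the tricky one. A minimal std-only sketch of one way to keep locks off the render loop (the names here are made up for illustration, not from my actual client): a loader thread fills a pending slot, and the render loop uses try_lock so it skips the swap for a frame rather than stalling:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical "content" the loader prepares and the renderer consumes.
type Mesh = Vec<u32>;

// Pending slot: the loader fills it off-thread; the renderer swaps it in
// with try_lock, so a long upload can never stall a frame.
struct PendingContent {
    slot: Mutex<Option<Mesh>>,
}

fn main() {
    let pending = Arc::new(PendingContent { slot: Mutex::new(None) });

    // Loader thread: builds content and parks it in the slot.
    let loader = {
        let pending = Arc::clone(&pending);
        thread::spawn(move || {
            let mesh: Mesh = (0..1024u32).collect();
            *pending.slot.lock().unwrap() = Some(mesh);
        })
    };
    loader.join().unwrap();

    // Render loop body: try_lock means "skip the swap this frame"
    // instead of blocking if the loader currently holds the lock.
    let mut live: Mesh = Vec::new();
    if let Ok(mut slot) = pending.slot.try_lock() {
        if let Some(mesh) = slot.take() {
            live = mesh;
        }
    }
    assert_eq!(live.len(), 1024);
    println!("live mesh has {} vertices", live.len());
}
```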

I use Tracy to let me watch and zoom in on where the time goes in each rendered frame. Unless Tracy says performance is a problem, it doesn't need to be optimized.
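For anyone who hasn't used a frame profiler: the core idea is just timed scopes. Here's a crude std-only stand-in for what Tracy's zones do (Tracy adds the live, zoomable per-frame timeline, which is the part that actually matters):

```rust
use std::time::Instant;

// Crude stand-in for a profiler zone: times a named scope and reports
// the elapsed time when it goes out of scope.
struct Zone {
    name: &'static str,
    start: Instant,
}

impl Zone {
    fn new(name: &'static str) -> Zone {
        Zone { name, start: Instant::now() }
    }
}

impl Drop for Zone {
    fn drop(&mut self) {
        eprintln!("{}: {:?}", self.name, self.start.elapsed());
    }
}

fn main() {
    let _frame = Zone::new("frame");
    {
        let _z = Zone::new("decode");
        // ... work you suspect is hot ...
        let sum: u64 = (0..100_000u64).sum();
        std::hint::black_box(sum);
    } // "decode" time printed here
} // "frame" time printed here
```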




Coming from the gaming industry, I think you might want to measure how far you can go with single-threaded rendering. There is a limit of content and code that can become a brick wall later. Here is an example from SIGGRAPH 2021 where Activision presents what multithreaded rendering looks like: https://youtu.be/9ublsQNbv6I PS: I don't work for Activision; it's just a public example that illustrates industry practice.


I'm using Rend3/WGPU, where multithreaded rendering is coming, but isn't here yet. Work is underway.[1]

The Rust game dev ecosystem is far enough along for simple games, but not there yet when you need all the performance of which the hardware is capable.

[1] https://www.youtube.com/watch?v=DDG4bcGs7zM


Cool video that matches best practices. Reducing memory footprint is always good, and laying things out carefully in memory is also a good way to speed things up without changing the amount of work.

I have trouble with the concept of WGPU. GPUs are complex enough by themselves without bolting an abstraction from the Web on top. But it's just me, and it's not important, since I am not a 3D programmer myself. I am more of an engine / CPU optimization guy.

My interest in Rust and this topic is that I would like to see fine-grained task-parallel systems written in Rust, instead of systems with a separate render thread, which became a bottleneck years ago. I wish you good luck and hope to see a success story about Rust.
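To illustrate what I mean by fine-grained task parallelism, here's a toy std-only sketch: one frame's work split into independent chunks and fanned out across scoped worker threads, instead of a dedicated render thread owning the whole stage. (A real engine would use a job system with work stealing; this is just the shape of the idea, and the names are made up.)

```rust
use std::thread;

// Sketch of fine-grained task parallelism: split one frame's work into
// independent chunks and fan them out across scoped worker threads,
// rather than pinning whole subsystems to dedicated threads.
fn process_frame(items: &mut [u64], workers: usize) {
    let chunk = items.len().div_ceil(workers).max(1);
    thread::scope(|s| {
        for part in items.chunks_mut(chunk) {
            s.spawn(move || {
                for v in part.iter_mut() {
                    *v = v.wrapping_mul(2); // per-item "task"
                }
            });
        }
    }); // scope joins all workers before returning
}

fn main() {
    let mut data: Vec<u64> = (0..16).collect();
    process_frame(&mut data, 4);
    assert!(data.iter().enumerate().all(|(i, &v)| v == (i as u64) * 2));
    println!("ok");
}
```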


We'll see how it works out.

WGPU's API is basically Vulkan. It exists mostly to deal with Apple's Metal. Metal has roughly the same feature set as Vulkan, but Apple just had to Think Different and be incompatible. I'm not supporting the Wasm or Android targets. Android and browsers have a different threading model, and I don't want to deal with that at this stage. Linux/Windows/Mac is enough for now.

Thought for the near future: will VR and AR headgear have threads, or something more like the processes-with-shared-memory model from JavaScript land?

(That video isn't me, it's the Rend3 dev, who also works on WGPU.)


Gaming solved the same problem by opting out of Apple rendering API support (ahaha). Those who have to support mobile can't skip it, of course.

I don't work with VR/AR myself, so I don't know.





