> Lots of Vulkan maps quite well to Rust's ownership rules; the memory allocation API surface in particular maps very well. But anything that's happening on the GPU timeline is pretty much impossible to do safely.
I agree with this, having dabbled with Vulkan and Rust for a few years now. Destructors and ownership can make a pretty ergonomic interface to the CPU side of GPU programming. It's "safe" only as long as you don't screw up your GPU synchronization, which is not perfect, but it's an improvement over "raw" graphics API calls (with little to no overhead).
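For illustration, a minimal sketch of what I mean by the destructor side, assuming the `ash` crate (the `OwnedBuffer` wrapper is a hypothetical name, not anyone's real API):

```rust
// Destructor-based CPU-side resource management: the handle pair is freed
// when the wrapper goes out of scope.
use ash::{vk, Device};

struct OwnedBuffer<'d> {
    device: &'d Device,
    buffer: vk::Buffer,
    memory: vk::DeviceMemory,
}

impl Drop for OwnedBuffer<'_> {
    fn drop(&mut self) {
        // Sound only if the GPU is already done with the buffer -- this is
        // exactly the synchronization caveat mentioned above.
        unsafe {
            self.device.destroy_buffer(self.buffer, None);
            self.device.free_memory(self.memory, None);
        }
    }
}
```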
As for the GPU timeline, I've been experimenting with timeline semaphores. E.g. all the images (and image views) in descriptor set D must stay alive as long as semaphore S has a value less than X. Coupled with some kind of deletion queue, this could accurately track lifetimes of resources on the GPU timeline.
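A minimal sketch of such a deletion queue, again assuming `ash`; `Resource` and `DeletionQueue` are hypothetical names:

```rust
// Defer destruction of GPU resources until a timeline semaphore proves
// the GPU has moved past the point where they were last used.
use ash::{vk, Device};
use std::collections::VecDeque;

trait Resource {
    /// Destroy the underlying Vulkan handles (image, image view, ...).
    unsafe fn destroy(&mut self, device: &Device);
}

struct DeletionQueue {
    semaphore: vk::Semaphore,                    // timeline semaphore S
    pending: VecDeque<(u64, Box<dyn Resource>)>, // (target value X, resource)
}

impl DeletionQueue {
    /// Defer destruction until S reaches `value` on the GPU timeline.
    /// Assumes values are enqueued in non-decreasing order.
    fn defer(&mut self, value: u64, resource: Box<dyn Resource>) {
        self.pending.push_back((value, resource));
    }

    /// Call once per frame: free everything the GPU is provably done with.
    unsafe fn collect(&mut self, device: &Device) -> ash::prelude::VkResult<()> {
        let reached = device.get_semaphore_counter_value(self.semaphore)?;
        while matches!(self.pending.front(), Some((v, _)) if *v <= reached) {
            let (_, mut res) = self.pending.pop_front().unwrap();
            res.destroy(device);
        }
        Ok(())
    }
}
```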
On the other hand, basic applications and "small world" game engines have a simpler way out. Most resources have a pre-defined lifetime: they live as long as the application, the "loaded level", or the current frame. You might even use Rust lifetimes to track this (but I don't). This model breaks down when streaming textures and geometry in and out of the GPU.
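If you did want to use Rust lifetimes for it, the nesting might look roughly like this (purely hypothetical; I don't actually do this):

```rust
// The three pre-defined scopes expressed as nested borrows.
struct App { /* device, allocator, app-lifetime resources */ }
struct Level<'a> { app: &'a App /* level-lifetime textures, meshes */ }
struct Frame<'l, 'a> { level: &'l Level<'a> /* per-frame transients */ }

// The borrow checker guarantees a Frame cannot outlive its Level, nor a
// Level its App -- but streaming resources in and out mid-level does not
// fit this strict nesting.
```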
What I would really like to experiment with is using async Rust for GPU programming. Instead of using `epoll`/`kqueue`/`WaitForMultipleObjects` in the async runtime to switch between "green threads", the runtime could call `vkWaitSemaphores` with `VK_SEMAPHORE_WAIT_ANY_BIT` (sadly this function does not report which semaphore(s) were signaled). Each green thread would need its own semaphore, command pools, etc.
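The leaf future could look something like the sketch below, assuming `ash`; the reactor that should actually block in `vkWaitSemaphores` is elided and replaced with busy-polling:

```rust
// A future that resolves once timeline semaphore S reaches value X.
use ash::{vk, Device};
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct GpuWait<'d> {
    device: &'d Device,
    semaphore: vk::Semaphore, // timeline semaphore S
    value: u64,               // target value X
}

impl Future for GpuWait<'_> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let reached = unsafe {
            self.device
                .get_semaphore_counter_value(self.semaphore)
                .expect("vkGetSemaphoreCounterValue failed")
        };
        if reached >= self.value {
            Poll::Ready(())
        } else {
            // A real runtime would register the waker with a reactor thread
            // parked in vkWaitSemaphores; re-arming immediately busy-polls.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}
```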
Unfortunately this would be a 6-12 month research project and I don't have that much free time on hand. It would also be quite an alien model for most graphics programmers, so I don't think it would catch on. But it would be a fun experiment to try.
> As for the GPU timeline, I've been experimenting with timeline semaphores. E.g. all the images (and image views) in descriptor set D must stay alive as long as semaphore S has a value less than X. Coupled with some kind of deletion queue, this could accurately track lifetimes of resources on the GPU timeline.
> What I would really like to experiment with is using async Rust for GPU programming.
Most of the waiting required is of the form "X can't proceed until A, B, D, and Q are done", plus "Y can't proceed until B, C, and R are done". This is not a good match for the async model.
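On the Vulkan side, that shape is a single `vkWaitSemaphores` call over all the prerequisites, since the default flags mean "wait for all of them". A minimal sketch, assuming the `ash` crate, with hypothetical semaphores and target values:

```rust
// "X can't proceed until A, B, D, and Q are done" as one wait-all call.
use ash::{vk, Device};

unsafe fn wait_until_x_can_proceed(
    device: &Device,
    abdq: &[vk::Semaphore; 4], // timeline semaphores A, B, D, Q
    done_values: &[u64; 4],    // the value at which each counts as "done"
) -> ash::prelude::VkResult<()> {
    let info = vk::SemaphoreWaitInfo::default()
        .semaphores(abdq)
        .values(done_values);
    device.wait_semaphores(&info, u64::MAX)
}
```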
That many-to-many pattern keeps coming up in game work.
Outside the GPU, it appears when assets such as meshes and textures come from an external server or from files and are used by multiple displayed objects.