
Does this mean you effectively get it "free" in terms of CPU cycles, and can use the CPU for all 16ms (60fps) of each frame to do game logic, without worrying about render time?



It's best to think of it as a very limited extra thread you can shunt some of the work off to.

Effectively, a shader is no different from any other bit of code you have. Anything you can do in a shader you can do on the main program thread (and vice versa). It's just that the things you typically do in a shader are better done by the GPU for various reasons: better floating-point throughput, pipelines more suited to the task, etc.
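
To make that concrete, here's a rough CUDA sketch (not from the comment above, just an illustration, names made up) of the same per-pixel operation written twice: once as a plain CPU loop, once as a GPU kernel. Both give identical results; the GPU version just runs one thread per pixel.

    // brightness.cu -- hypothetical example for illustration only.
    // The same per-pixel operation written twice: once as a plain CPU loop,
    // once as a CUDA kernel. Same result, the GPU just does it in parallel.
    #include <cuda_runtime.h>
    #include <cstdio>

    // CPU version: one pixel at a time on the main program thread.
    void brighten_cpu(float* pixels, int n, float gain) {
        for (int i = 0; i < n; ++i) pixels[i] *= gain;
    }

    // GPU version: one thread per pixel.
    __global__ void brighten_gpu(float* pixels, int n, float gain) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) pixels[i] *= gain;
    }

    int main() {
        const int n = 1 << 20;                 // ~1M "pixels"
        float* host = new float[n];
        for (int i = 0; i < n; ++i) host[i] = 0.5f;

        brighten_cpu(host, n, 1.1f);           // runs on the CPU

        float* dev;
        cudaMalloc(&dev, n * sizeof(float));
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
        brighten_gpu<<<(n + 255) / 256, 256>>>(dev, n, 1.1f);  // runs on the GPU
        cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

        printf("%f\n", host[0]);               // 0.5 * 1.1 * 1.1
        cudaFree(dev);
        delete[] host;
        return 0;
    }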

And you can make the shaders generic enough to reuse across multiple applications. Basically you tell the shader: "Hey, here's where the light source is, here's the luminosity, here's the color, here's what it's shining on, here's how the color changes."
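
In a real renderer those values would be uniforms fed into a GLSL/HLSL fragment shader; as a rough sketch of the same idea in CUDA terms (parameter names made up for illustration), a reusable diffuse-lighting routine parameterised that way might look like:

    // lighting.cu -- hypothetical sketch of a "generic" reusable lighting
    // routine. In practice this would be a fragment shader with the light
    // parameters passed in as uniforms; the idea is the same.
    #include <cuda_runtime.h>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    __device__ Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    __device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    __device__ Vec3 normalize(Vec3 v) {
        float len = sqrtf(dot(v, v));
        return {v.x / len, v.y / len, v.z / len};
    }

    // One thread per pixel: simple diffuse (Lambert) lighting, parameterised
    // by light position, luminosity and colour so it can be reused anywhere.
    __global__ void shade(Vec3* out, const Vec3* pos, const Vec3* normal,
                          const Vec3* surface_color, int n,
                          Vec3 light_pos, float luminosity, Vec3 light_color) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        Vec3 to_light = normalize(sub(light_pos, pos[i]));
        float facing = fmaxf(dot(normal[i], to_light), 0.0f);  // 0 = facing away
        out[i] = { surface_color[i].x * light_color.x * luminosity * facing,
                   surface_color[i].y * light_color.y * luminosity * facing,
                   surface_color[i].z * light_color.z * luminosity * facing };
    }

    int main() {
        const int n = 1;                       // one "pixel" just to show the call
        Vec3 h_pos = {0, 0, 0}, h_nrm = {0, 1, 0}, h_col = {1, 0, 0}, h_out;
        Vec3 *d_pos, *d_nrm, *d_col, *d_out;
        cudaMalloc(&d_pos, sizeof(Vec3)); cudaMalloc(&d_nrm, sizeof(Vec3));
        cudaMalloc(&d_col, sizeof(Vec3)); cudaMalloc(&d_out, sizeof(Vec3));
        cudaMemcpy(d_pos, &h_pos, sizeof(Vec3), cudaMemcpyHostToDevice);
        cudaMemcpy(d_nrm, &h_nrm, sizeof(Vec3), cudaMemcpyHostToDevice);
        cudaMemcpy(d_col, &h_col, sizeof(Vec3), cudaMemcpyHostToDevice);
        shade<<<1, 32>>>(d_out, d_pos, d_nrm, d_col, n,
                         Vec3{2, 2, 0}, 1.5f, Vec3{1, 1, 1});
        cudaMemcpy(&h_out, d_out, sizeof(Vec3), cudaMemcpyDeviceToHost);
        printf("lit color: %f %f %f\n", h_out.x, h_out.y, h_out.z);
        return 0;
    }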

And since GPUs are essentially dedicated number crunchers, people realized that not everything sent to a graphics card actually needed to render. You could write a shader to do something crazy, like solve complex equations more quickly than a general-purpose CPU could. So if this were 8 years ago, you might decide to write a shader that could effectively mine bitcoins. Which is what people did, and why good graphics cards have become crazy expensive.
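
Actual mining is repeated SHA-256 hashing, which is too long to show here, but the pattern is the same as this toy stand-in: test millions of candidates in parallel and keep any that satisfy a condition (the "hash" function and numbers below are made up):

    // gpgpu_search.cu -- hypothetical stand-in for "mining"-style work:
    // brute-force millions of candidates in parallel and record any hit.
    // Real mining does the same thing with a much heavier per-candidate
    // function (double SHA-256).
    #include <cuda_runtime.h>
    #include <cstdio>

    __device__ unsigned int mix(unsigned int x) {
        // Cheap made-up stand-in for a hash function.
        x ^= x >> 16; x *= 0x45d9f3b; x ^= x >> 16;
        return x;
    }

    __global__ void search(unsigned int target, unsigned int n, unsigned int* found) {
        unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n && (mix(i) & 0xFFFFF) == target) {
            atomicExch(found, i);              // record any candidate that "hits"
        }
    }

    int main() {
        unsigned int *found, host = 0xFFFFFFFF;
        cudaMalloc(&found, sizeof(unsigned int));
        cudaMemcpy(found, &host, sizeof(unsigned int), cudaMemcpyHostToDevice);

        const unsigned int n = 1u << 26;       // ~67M candidates, tested in parallel
        search<<<(n + 255) / 256, 256>>>(0x1234, n, found);

        cudaMemcpy(&host, found, sizeof(unsigned int), cudaMemcpyDeviceToHost);
        printf("candidate: %u\n", host);
        cudaFree(found);
        return 0;
    }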


Yes, but you still have CPU overhead in terms of organising and submitting work to the GPU, although some of that work is itself making its way onto the GPU now that compute shaders are widely supported. There will always be a need to synchronise, though. The other big change in this regard is that newer graphics APIs allow the CPU-side work to be properly multithreaded.
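
Rough sketch of that submit/overlap/synchronise pattern in CUDA terms (graphics APIs express the same idea with command buffers and fences; the kernel here is just a stand-in for real rendering work):

    // async_frame.cu -- hypothetical sketch of the CPU/GPU overlap being
    // asked about. The kernel launch returns immediately, the CPU runs its
    // "game logic", and only the explicit synchronise waits for the GPU.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void pretend_render(float* frame, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) frame[i] = i * 0.001f;      // stand-in for real rendering work
    }

    int main() {
        const int n = 1 << 22;
        float* frame;
        cudaMalloc(&frame, n * sizeof(float));

        for (int f = 0; f < 3; ++f) {
            // Submit this frame's GPU work. The call is asynchronous: the CPU
            // only pays the cost of organising and submitting it.
            pretend_render<<<(n + 255) / 256, 256>>>(frame, n);

            // The CPU is free to run game logic here while the GPU renders.
            double acc = 0.0;
            for (int i = 0; i < 1000000; ++i) acc += i * 0.5;

            // ...but eventually you must synchronise before using the result
            // (or presenting the frame).
            cudaDeviceSynchronize();
            printf("frame %d done (logic result %f)\n", f, acc);
        }

        cudaFree(frame);
        return 0;
    }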



