
One question I always have about these sorts of translation layers is how they deal with the different warp sizes. I'd imagine a lot of CUDA code relies on 32-wide warps, while as far as I know AMD tends to have 64-wide warps. Is there some sort of emulation that needs to happen?


SCALE is not a "translation layer"; it's a full source-to-target compiler from CUDA-like C++ code to AMD GPUs.

See this part of the documentation for more details regarding warp sizes: https://docs.scale-lang.com/manual/language-extensions/#impr...


The older AMD GCN architecture used a 64-wide wavefront, but the newer RDNA GPUs support both 64- and 32-wide wavefronts, selectable at runtime. The narrower wavefronts appear to be better suited to games in general.

Not sure what the situation is with CDNA, the compute-oriented evolution of GCN, i.e. whether CDNA is 64-wavefront only or dual-width like RDNA.



