
Llama.cpp also has a Vulkan backend that is portable and performant; you don't need to mess with ROCm at all.
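
For anyone curious, a rough sketch of the Vulkan build (assuming a recent llama.cpp checkout and the Vulkan SDK installed; the GGML_VULKAN flag name may differ on older versions, and model.gguf below is a placeholder path):

    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release
    # offload as many layers as fit to the GPU (model.gguf is a placeholder)
    ./build/bin/llama-cli -m model.gguf -ngl 99

The nice part is that this works the same on AMD, NVIDIA, and Intel GPUs, since it only needs a Vulkan driver rather than a vendor compute stack.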


Oh yes, I know, but "can I compile llama.cpp with ROCm" has been my yardstick for how good AMD drivers are for some time.
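
For reference, the ROCm/HIP build I test with looks roughly like this (a sketch, assuming ROCm is installed and your GPU's gfx target is supported; gfx1100 is just an example for RDNA3, and the cmake flag itself has been renamed over time, LLAMA_HIPBLAS -> GGML_HIPBLAS -> GGML_HIP, which is part of why it makes a decent yardstick):

    # assumes a working ROCm install; hipconfig locates the HIP clang and runtime
    HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
      cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100 -DCMAKE_BUILD_TYPE=Release
    cmake --build build --config Release

If that configures and links cleanly on the first try, the driver stack is in good shape; when it doesn't, the failure mode usually points straight at whatever ROCm broke that release.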



