Although Rust is an amazing language with rapid adoption, its user base in low-level programming, where llama.cpp operates, is still relatively small. As a result, the pool of talent that can contribute to such projects is more limited. Almost all low-level programmers can write C++ if needed; for C programmers, it's essentially like writing C with classes. Rust programmers who care about low-level details such as throughput and latency almost always have a background in C or C++. If llama.cpp were written in Rust, it would likely have far fewer contributors. Considering that contributing also requires at least an interest in deep learning, the fact that the project currently has 476 contributors is impressive. [1] I think this is one of the most important reasons the project can move so fast and remain such an essential project in the LLM scene.
[1]: https://github.com/ggerganov/llama.cpp/graphs/contributors