Chaos mode is an option when invoking rr that can expose some concurrency issues. Basically, it switches which thread is executing much more often, to try to simulate multiple cores executing in parallel. It has found some race conditions for me, but it's of course limited.
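For illustration, a minimal Julia sketch (not tied to rr itself) of the kind of coarse-grained lost-update race that this sort of randomized preemption can surface:

    # Unsynchronized read-modify-write on a shared counter: with the default
    # scheduling the threads may rarely overlap, but randomized preemption
    # (what chaos mode simulates) makes the lost updates show up.
    counter = Ref(0)
    Threads.@threads for i in 1:100_000
        counter[] += 1       # racy: load, add, store with no lock or atomic
    end
    println(counter[])       # frequently less than 100_000 with >1 thread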
Unfortunately that only works for large-scale races, and not, say, one instruction interleaving with another one on another thread without proper synchronization. -fsanitize=thread probably works for that though (and you could then combine that sanitizer with rr to some effect).
One option would be to combine chaos mode with a dynamic race detector to try to focus chaos mode on specific fine-grained races. Someone should try that as a research project. Not really the same thing as rr + TSAN.
There's still the fundamental limitation that rr won't help you with weak memory orderings.
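To make that concrete, a hypothetical Julia sketch of an ordering-sensitive pattern: because rr runs threads one at a time on a single core, the hardware-level reordering that relaxed atomics allow essentially never shows up under it.

    # Message passing with relaxed (:monotonic) atomics: on weakly ordered
    # hardware the reader can observe ready == true while data still reads 0,
    # but rr's serialized execution will essentially never reproduce that.
    mutable struct Box
        @atomic data::Int
        @atomic ready::Bool
    end

    box = Box(0, false)

    writer = Threads.@spawn begin
        @atomic :monotonic box.data = 42
        @atomic :monotonic box.ready = true   # deliberately not a release store
    end

    reader = Threads.@spawn begin
        while !(@atomic :monotonic box.ready)
        end
        @atomic :monotonic box.data           # 42 is not guaranteed here
    end

    wait(writer)
    fetch(reader)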
I haven't tried TSan with rr, but MSan and ASan work quite well with it (it's quite slow when doing this). Seeing the sanitizer trigger and then tracing back what caused it to trigger is very useful.
Yeah, the reason it only works for these coarser race conditions is that RR only has one thread executing at a time. Chaos mode randomizes the durations of time allotted to each thread before it is preempted. This may be out of date. I believe I read it in the Extended Technical Report from 2017: https://arxiv.org/pdf/1705.05937
There is; it's called count_ones. Though I wouldn't be surprised if LLVM could optimize some of these loops into a popcnt, I'm sure that would be brittle.
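For reference, a quick Julia sketch of both versions (naive_popcount is just a made-up name for the hand-rolled loop):

    # The builtin lowers to LLVM's ctpop intrinsic (popcnt on x86).
    count_ones(0b1011)               # 3

    # A hand-rolled loop that LLVM may or may not pattern-match into popcnt.
    function naive_popcount(x::UInt64)
        n = 0
        while x != 0
            n += Int(x & one(x))
            x >>= 1
        end
        return n
    end

    naive_popcount(UInt64(0b1011)) == count_ones(0b1011)   # true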
It was a bit more pervasive than that: flushing subnormals (values very close to 0) to 0 is controlled by a CPU register, so if a library built with the fast-math flags gets loaded, it sets that register, causing the whole process to flush its subnormals. See https://github.com/llvm/llvm-project/issues/57589
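Julia exposes that same control bit via set_zero_subnormals, so the effect is easy to see from the REPL (a minimal sketch of the mechanism, not of the linked issue):

    # Flush-to-zero / denormals-are-zero is a CPU floating-point control flag,
    # so flipping it changes results for code that never asked for fast-math.
    x = 1.0e-310                  # a subnormal Float64

    set_zero_subnormals(false)    # default IEEE behavior
    x + 0.0                       # 1.0e-310

    set_zero_subnormals(true)     # what a fast-math-built library can do at load time
    x + 0.0                       # 0.0: the subnormal is flushed

    set_zero_subnormals(false)    # restore IEEE semantics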
btw, there has been a pretty nice effort to reimplement the tidyverse in Julia: https://github.com/TidierOrg/Tidier.jl. It seems quite nice to work with, if you were missing that from R at least.
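Roughly the flavor, if I have the macro names right (the toy DataFrame is just made up for illustration):

    using Tidier, DataFrames, Statistics

    df = DataFrame(species = ["a", "a", "b", "b"], mass = [2.0, 3.0, 10.0, 12.0])

    # dplyr-style verbs as macros, chained like %>% pipelines in R
    @chain df begin
        @filter(mass > 2.5)
        @group_by(species)
        @summarize(mean_mass = mean(mass))
    end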
LLVM's API is the C++ one. The C one, while more stable, also doesn't support everything. Keeping up with LLVM is annoying, but it's not a source of bugs or anything of the sort. (PS: the C API isn't actually stable either, because if the C++ code it calls is removed, it just gets removed from the C API as well.)
I say this as one of the devs who usually does the work of keeping up with the latest LLVM.
I do believe this is an issue of not having explicit dependencies. Julia takes the approach of building and shipping everything for every OS, which means Pkg (the package manager) knows about binary dependencies as well, making things more reproducible within the language.
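For example, binary dependencies are ordinary packages from Pkg's point of view (Zlib_jll here is just a convenient, well-known case):

    # Binary dependencies ship as *_jll packages that Pkg resolves, pins in the
    # Manifest, and downloads as prebuilt artifacts for the host platform.
    using Pkg
    Pkg.add("Zlib_jll")        # no system package manager involved

    using Zlib_jll
    Zlib_jll.libz              # the vendored library, usable directly from ccall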
Linux distros often do things to force packages to declare all their dependencies: Nix and Guix use unique prefixes and sandboxed builds, openSUSE builds all its packages in blank-slate VMs that only install what is declared, Fedora's standard tooling runs builds in minimal chroot environments, etc.
I'm not aware of any language ecosystem package managers taking similar measures to ensure that dependency declarations in their packages are complete.
Julia does have really nice GPU support: it can directly compile Julia code for CUDA, ROCm, Metal, or other accelerators. (Being GPU code, it's limited to a subset of the main language.)
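As a small taste of what that looks like with CUDA.jl (a minimal sketch; AMDGPU.jl and Metal.jl follow the same pattern):

    using CUDA

    # A plain Julia function compiled to a GPU kernel; the usual kernel
    # restrictions apply (no dynamic allocation, no arbitrary I/O, etc.).
    function axpy!(y, a, x)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(y)
            @inbounds y[i] += a * x[i]
        end
        return nothing
    end

    x = CUDA.fill(1.0f0, 1024)
    y = CUDA.fill(2.0f0, 1024)
    @cuda threads=256 blocks=4 axpy!(y, 3.0f0, x)   # y now holds 5.0f0 everywhere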