No, they are trying to execute exactly the same code. To quote the parent:
You can see that the fundamentals of bitboard manipulation (using a 64-bit number to represent the 64 squares of a chessboard) remain the same whether you're on a CPU or a GPU.
In this case, you want the code to be shared between the two sides. Why write the code twice? Both CPUs and GPUs are very good at 64-bit integer manipulation.
The ceremony around co-ordinating threads and memory access is different, but the code that is being run is exactly the same.
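As a minimal sketch of what that shared code can look like (assuming a CUDA-style toolchain; the knight_attacks helper, square numbering, and names are illustrative, not from the thread): the exact same function body gets compiled for both the CPU and the GPU simply by annotating it __host__ __device__.

```cuda
#include <cstdint>

// One 64-bit word, one bit per square (a1 = bit 0, ..., h8 = bit 63).
using Bitboard = uint64_t;

// The identical function body is compiled once for the host (CPU) and once
// for the device (GPU); only the annotation differs.
__host__ __device__ inline Bitboard knight_attacks(int square) {
    const Bitboard b      = 1ULL << square;
    const Bitboard not_a  = 0xFEFEFEFEFEFEFEFEULL;  // exclude file a (wrap guard)
    const Bitboard not_h  = 0x7F7F7F7F7F7F7F7FULL;  // exclude file h
    const Bitboard not_ab = 0xFCFCFCFCFCFCFCFCULL;  // exclude files a and b
    const Bitboard not_gh = 0x3F3F3F3F3F3F3F3FULL;  // exclude files g and h
    return ((b << 17) & not_a)  | ((b << 15) & not_h)
         | ((b << 10) & not_ab) | ((b <<  6) & not_gh)
         | ((b >> 17) & not_h)  | ((b >> 15) & not_a)
         | ((b >> 10) & not_gh) | ((b >>  6) & not_ab);
}
```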
The exact same business logic (helper functions), yes, of course. My point is that the high-level algorithm is going to be fundamentally different. Just as the high-level algorithm in the browser (render and handle UI interaction) differs from the server-side algorithm (render HTML from database queries), even though there might be shared helper functions (see the sketch below). Does this make sense?
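To make that split concrete, here is a hedged sketch in the same CUDA style (count_moves_cpu and count_moves_kernel are made-up names): the bitboard helper is the shared business logic, while the CPU-side driver is a plain sequential loop and the GPU-side driver is a batch kernel with one thread per position.

```cuda
#include <cstdint>
#include <cuda_runtime.h>

using Bitboard = uint64_t;

// Shared helper, identical on both sides (same as the sketch above,
// repeated so this block stands alone).
__host__ __device__ inline Bitboard knight_attacks(int square) {
    const Bitboard b      = 1ULL << square;
    const Bitboard not_a  = 0xFEFEFEFEFEFEFEFEULL, not_h  = 0x7F7F7F7F7F7F7F7FULL;
    const Bitboard not_ab = 0xFCFCFCFCFCFCFCFCULL, not_gh = 0x3F3F3F3F3F3F3F3FULL;
    return ((b << 17) & not_a)  | ((b << 15) & not_h)
         | ((b << 10) & not_ab) | ((b <<  6) & not_gh)
         | ((b >> 17) & not_h)  | ((b >> 15) & not_a)
         | ((b >> 10) & not_gh) | ((b >>  6) & not_ab);
}

// CPU-side high-level algorithm: walk one position's knights sequentially.
uint64_t count_moves_cpu(Bitboard knights) {
    uint64_t moves = 0;
    while (knights) {
        int sq = __builtin_ctzll(knights);               // lowest set bit = next knight
        moves += __builtin_popcountll(knight_attacks(sq));
        knights &= knights - 1;                          // clear that knight
    }
    return moves;
}

// GPU-side high-level algorithm: one thread per position, a whole batch at once.
__global__ void count_moves_kernel(const Bitboard* knights, uint64_t* moves, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    Bitboard k = knights[i];
    uint64_t m = 0;
    while (k) {
        int sq = __ffsll((long long)k) - 1;              // CUDA intrinsic (1-based)
        m += __popcll(knight_attacks(sq));
        k &= k - 1;
    }
    moves[i] = m;
}
```

On the CPU you call count_moves_cpu once per node of a sequential search; on the GPU you copy a batch of positions over and launch count_moves_kernel<<<blocks, threads>>>(...). That launch and copy is exactly the "ceremony around coordinating threads and memory access" mentioned above, while the helper itself never changes.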
Differentiation isn't really the key algorithm. In fact, I believe it happens at compile time, not at run-time. So not only is it not the key algorithm, it isn't even happening at run-time.