Interesting library, but I see it falls into the trap that catches almost all SIMD libraries: the vector target is hardcoded completely, and you can't mix and match feature levels within a build. The documentation recommends writing your kernels into DLLs and dynamically loading them, which is a huge mess: https://jfalcou.github.io/eve/multiarch.html
Meanwhile, xsimd (https://github.com/xtensor-stack/xsimd) has the feature level as a template parameter on its vector objects, which lets you branch at runtime between SIMD levels as you wish. I find that a far better way of doing things if you actually want to ship SIMD code to users.
You can do many things with macros and inline namespaces, but I believe they run into problems when modules come into play. Can you compile the same code twice, with different flags, with modules?
I think I do understand, this is exactly what we do. (MEGA_MACRO == HWY_NAMESPACE)
Then we have a table of function pointers to &AVX2::foo, &AVX3::foo, etc. As long as the module exports one single thing, which either calls into or exports this table, I do not see how it is incompatible with building your project with modules enabled?
(The way we compile the code twice is to re-include our source file, taking care that only the SIMD parts are actually seen by the compiler, and stuff like the module exports would only be compiled once.)
What leads you to that conclusion?
It is still possible to use #include in module implementations.
We can use that to make the module implementation look like your example.
Thus it ought to be possible, though I have not yet tried it.
Since you seem knowledgeable about this: what does this do differently from other SIMD libraries like xsimd / Highway? Is it the addition of algorithms similar to the std library that are explicitly SIMD-optimized?
The algorithms I tried to make as good as I knew how. Maybe 95% there.
Nice tail handling. A lot of things supported.
I like our interface over the alternatives, but I'm biased here.
Really massive math library.
EVE is personally my favorite SIMD library in any programming language. It's the only one I've tried that provides masked lane operations in a declarative style, aside from SPMD languages like CUDA or OpenMP. The [] syntax for that is admittedly pretty exotic C++, but I think the usefulness of the feature is worth it. I wish the documentation was better, though. When I first started, I struggled to figure out how to simply make a 4-lane float vector that I can pass into shaders, because almost all of the examples are written for the "wide" native-SIMD size.
This library's eve::soa_vector is the first attempt I've seen at dealing with the "SOA problem," which is that if you write good, parallel-friendly code, all your types go to hell and never come back because the language can't express concepts like "my object is made from element 7 of each of these 6 pointers." Instead you write really FORTRAN-looking array processing code with no types or methods in sight.
Does anyone know of other libraries that help a C++ programmer deal with struct-of-arrays?
I personally think we have the following strengths:
* Algorithms. Writing SIMD loops is very hard. We give you a lot of ready-to-go loops (find, search, remove, set_intersection, to name a few).
* zip and SOA support out of the box.
* High-quality codegen. I haven't seen other libraries care about unrolling/aligning data accesses, while these give you substantial improvements.
* Supporting more than transform/reduce. We have a really decent compress implemented for SSE/AVX/NEON, for example.
If this is something you need, we recommend compiling a few dynamic libraries with the correct fixed lengths.
Google Highway manages to pull it off, but the trade-off is a variadics interface that I personally find very difficult.
* Runtime dispatch based on arch.
We again recommend DLLs for this. The problem here is ODR. I believe there is a solution based on the preprocessor and namespaces that I could use, but it breaks as soon as modules become a thing. So, in the module world, we don't have an option. I'm happy to hear suggestions.
* No MSVC support
C++20 support in MSVC is still not mature enough, and each new version breaks something that was already working. Sad times.
* Just tricky to get started.
I don't know what to do about that. I'm happy to just write examples for people. If you want to try the library, please create an issue/discussion or something; I'm happy to take some time and try to solve your case.
> Google Highway manage to pull it off but the trade off is a variadics interface that I personally find very difficult.
I'm curious what you mean by 'variadics', and what exactly you find difficult?
People new to Highway are often surprised by the d/tag argument to loads that says whether to load a half/full vector, or no more than 4 elements, etc. The key is to understand that these are just zero-sized structs used for type information, not the actual vector/data. After that, I observe introductory workshop participants are able to get started and be productive quickly.
Thanks for sharing :) Any thoughts on what kind of things you are looking for and didn't find?
I cannot recall anyone saying this kind of thing is a bottleneck for them.
We don't use std::ranges, but searching for a negative value can look like:
https://gcc.godbolt.org/z/8bbb16Eea
Can you write the second one too? With two ranges? That's where I believe the variadics will be.
FYI:
The codegen is smaller because the loop is not unrolled. That's 2x slower in my measurements.
Plus, at least as far as I can see, there is no aligning of memory accesses; that'd give you another third of an improvement when the data is in L1.
You really should fix that.
We have a different philosophy: not supporting/encouraging needlessly SIMD-hostile software. We assume users properly allocate their data, for example using the allocator we provide. It is easy to deal with 2K aliasing in the allocator, but much harder later. At least in my opinion, this seems like a better path than penalizing all users with unnecessary (re)alignment code.
We have not added a FindIf for two ranges because no one has yet requested that or mentioned it is time-critical for their use cases.