As someone who has used numpy for many years and written a great deal of production code with it, I was surprised to find numpy tricks I didn't know about the relative speeds of various operations! This is really a fantastic reference that provides a deeper understanding of what numpy does under the hood.
One thing I will highlight, which the author only touched on briefly: numpy combined with numba is phenomenal for dealing with very computationally intensive problems.
The folks at Continuum Analytics have really done a fantastic job building numba (numba.pydata.org), which JIT compiles a subset of python functions using LLVM, and is designed to work seamlessly with numpy arrays. Numba makes it much easier to speed up performance bottlenecks and allows you to easily create numpy ufuncs which can take advantage of array broadcasting.
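For a flavor of what that looks like, here's a minimal sketch of a numba-built ufunc (the function and inputs are made up for illustration):

    import numpy as np
    from numba import vectorize

    @vectorize(['float64(float64, float64)'])
    def clipped_diff(a, b):
        # element-wise |a - b|, clipped at 1.0
        d = abs(a - b)
        return d if d < 1.0 else 1.0

    x = np.linspace(0.0, 5.0, 6)
    print(clipped_diff(x, 2.5))  # the scalar broadcasts across x, like any numpy ufunc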
Can I ask how intensively you have used Numba and over what period? I'm interested in how Numba has progressed over the last few years, with a view to using it over Cython.
My team and I looked at Numba a year ago or so for optimisation of a fairly large calculation, and found that the speed-ups were impressive where they worked, but were not consistent or predictable.
We used Cython for large parts, and while it involved boilerplate and incantations, the gains were achievable, incremental, and certain. The annotation tools were also quite helpful for identifying bottlenecks where Cython code could be effective.
Incidentally, once we decided that Cython was our go-to tool, we often wrote simple looping code rather than vectorised code because it was simpler to transition to Cython, à la Julia.
Sure. I've used numba for the past year and a half, and I've seen it grow quite a bit. For example, when I first started using it, there was a separate product called NumbaPro that did all of the GPU JIT compilation; that's now been folded into numba.
Whether it's appropriate vis-à-vis Cython really depends on your application.
First, Cython is fantastic as well, and my endorsement of numba doesn't take anything away from it. Cython is much more fully featured and mature, in the sense that you can really develop your own data structures and control flow. Pretty much anything you can do in C, you can do in Cython. I've written Cython and it also plays very nicely with numpy.
In comparison, Numba is much more limited. You are basically limited to using numpy arrays and matrices as your data structures, and you really need to understand exactly what will be used inside the jitted loop beforehand, or you won't be able to use it in nopython mode (which is where you get the most benefit). It also doesn't really handle strings at all. One fairly recent addition is that Numba lets you use a list of a single type within nopython mode; under the hood it handles the malloc for you.
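To illustrate the single-type list (a hedged sketch; the exact spelling has moved around across numba versions, and recent releases expose it as numba.typed.List):

    from numba import njit
    from numba.typed import List

    @njit
    def running_sum(values):
        out = List()           # homogeneous list; its type is fixed by the first append
        total = 0.0
        for v in values:
            total += v
            out.append(total)  # numba handles the allocation under the hood
        return out

    vals = List()
    for v in (1.0, 2.0, 3.0):
        vals.append(v)
    print(list(running_sum(vals)))  # [1.0, 3.0, 6.0]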
My endorsement of Numba really boils down to ease of integration with an existing Python codebase. For me the "killer feature" was the ability to simply comment out the @jit or @njit decorator and step through the code like normal Python, then turn it back on when I needed it. The other was that Numba gained the ability, early in our adoption, to chain functions together: while you can't generate a numpy array in nopython mode, you can generate one in a @jit function (object mode), then call the looping function (nopython mode) from that jitted function, and Numba handles the transition seamlessly and cuts out a lot of the overhead. Numba has really sped up our development of custom algorithms.
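Roughly, the chaining pattern looks like this (names are made up, and note that newer numba releases can allocate arrays in nopython mode too, so forceobj here is only to show the shape of the pattern):

    import numpy as np
    from numba import jit, njit

    @njit
    def smooth(a):                 # nopython mode: the hot loop over a preallocated array
        for i in range(1, a.shape[0] - 1):
            a[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0
        return a

    @jit(forceobj=True)
    def run(n):                    # object mode: set things up, then call the fast loop
        a = np.arange(n, dtype=np.float64)
        return smooth(a)

    print(run(10))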
The other thing I will mention is that when I first started, getting LLVM to work with numba was initially a nightmare across different OSes. That has completely gone away now with improvements in the conda package manager.
All that said, you cannot go wrong with Cython; it just has a bit more of a learning curve and was a little trickier to integrate into our codebase.
>> once we decided that Cython was our go-to tool, we often wrote simple looping code rather than vectorised code because it was simpler to transition to Cython, à la Julia.
If you're used to doing this with Cython, you might find it even easier with numba. This is how I develop all the time with numba now. I find it incredibly beneficial to step through the code as though it were just regular Python during algorithm and test development, then turn on the jit once the algorithm is right and the tests pass. After a while you get a sense for what numba will accept while still producing performant nopython-mode code, and knowing those limitations actually tends to make me write more modular code to take advantage of the speed boosts.
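Concretely, the workflow is something like this (a made-up kernel; toggle the decorator depending on whether you're debugging or running for real):

    import numpy as np
    from numba import njit

    @njit  # comment this line out to step through in plain python while debugging
    def pairwise_min_dist(pts):
        # O(n^2) nearest-pair distance: slow as plain python, fast once jitted
        best = np.inf
        for i in range(pts.shape[0]):
            for j in range(i + 1, pts.shape[0]):
                d = 0.0
                for k in range(pts.shape[1]):
                    diff = pts[i, k] - pts[j, k]
                    d += diff * diff
                if d < best:
                    best = d
        return best ** 0.5

    print(pairwise_min_dist(np.random.rand(200, 3)))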
Just to jump on the numba train, I've generally found it to reliably obtain C-like performance from C-like Python code. This property also holds when you use Python as a preprocessor language for generating computational kernels, which provides a lot of flexibility that isn't evident in the documentation.
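To give a flavor of that kind of codegen, here's my own sketch of the string-templating variant (whether that's what's meant here is exactly the question asked below; the generator and names are hypothetical):

    import numpy as np
    from numba import njit

    def make_axpy(op):
        # hypothetical generator: bake the operator into the source before jitting
        src = (
            "def kernel(a, x, y, out):\n"
            "    for i in range(x.shape[0]):\n"
            f"        out[i] = a * x[i] {op} y[i]\n"
        )
        ns = {}
        exec(src, ns)
        return njit(ns['kernel'])

    axpy_add = make_axpy('+')
    x = np.ones(4); y = np.ones(4); out = np.empty(4)
    axpy_add(2.0, x, y, out)   # out is now [3., 3., 3., 3.]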
It also has simple-to-use OpenMP-like multicore parallelization, limited class support, AOT compilation, and CUDA & AMD HSAIL support.
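The multicore part is basically numba's prange; a minimal sketch:

    import numpy as np
    from numba import njit, prange

    @njit(parallel=True)
    def row_norms(a):
        out = np.empty(a.shape[0])
        for i in prange(a.shape[0]):   # iterations are split across cores
            s = 0.0
            for j in range(a.shape[1]):
                s += a[i, j] * a[i, j]
            out[i] = s ** 0.5
        return out

    print(row_norms(np.random.rand(1000, 64))[:3])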
Are you using python as a preprocessor to glue together text strings of numba code which you then eval? Or are you using python to generate the numba AST?