Polars can use lazy processing: it collects operations together and builds a graph of what needs to happen, then executes the whole plan at once, whereas pandas executes each operation eagerly as soon as it's called.
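A minimal sketch of the difference, assuming a hypothetical sales.csv with value and category columns:

```python
import polars as pl

# Lazy: scan_csv only records the source; each method call below
# extends a query plan instead of computing anything.
plan = (
    pl.scan_csv("sales.csv")          # hypothetical file
    .filter(pl.col("value") > 0)     # planned, not executed
    .group_by("category")
    .agg(pl.col("value").sum())
)

# Nothing has run yet. collect() hands the whole graph to the
# optimizer and executes it in one pass.
result = plan.collect()
```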
Spark has worked this way for years, which makes complete sense for distributed setups, but apparently the lazy approach is faster even on a single machine.
Laziness in this context has huge advantages in reducing memory allocation. Many operations can be fused together, so there's less of a need to allocate huge intermediate data structures at every step.
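You can see the fusion directly: Polars exposes the optimized plan through LazyFrame.explain(). With the sketch above, the filter gets pushed down into the CSV scan and only the referenced columns are read, so a full intermediate DataFrame is never allocated.

```python
# Continuing the sketch above: print the optimized query plan.
print(plan.explain())
# The plan shows the filter folded into the scan (predicate
# pushdown) and only "value" and "category" being read
# (projection pushdown), rather than a load-then-filter-then-
# aggregate pipeline with materialized intermediates.
```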
Laziness has been around in R-land for a while with dplyr and its variety of backends (including Arrow, the same columnar format Polars uses). Pandas is just an incredibly mediocre library in nearly all respects.