I'm curious if there will be any appreciable performance gains here that are worthwhile. FWIW, last I checked[0], Polars still smokes Pandas in basically every way.
I’ve moved entirely to Polars (which is essentially Pandas written in Rust with new design decisions) with DuckDB as my SQL query engine. Since both are backed by Arrow, interop is zero-copy, and performance on large datasets is super fast (due to vectorization, not just parallelization).
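Roughly what that interop looks like in practice (a minimal sketch; the column names are made up, and zero-copy holds for most but not all Arrow types):

import duckdb
import polars as pl

df = pl.DataFrame({"k": ["a", "a", "b"], "v": [1, 2, 3]})
tbl = df.to_arrow()                                          # Polars -> Arrow table, sharing the column buffers
res = duckdb.sql("SELECT k, avg(v) AS v_mean FROM tbl GROUP BY k")  # DuckDB scans the Arrow table in place
back = pl.from_arrow(res.arrow())                            # result comes back as Arrow -> Polars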
I keep Pandas around for quick plots and legacy code. I will always be grateful for Pandas because there truly was no good dataframe library during its time. It has enabled an entire generation of data scientists to do what they do and built a foundation — a foundation which Polars and DuckDB are now building on and have surpassed.
If you were to say “pandas in long format only” then yes that would be correct, but the power of pandas comes from its ability to work in a long relational or wide ndarray style. Pandas was originally written to replace Excel in financial/econometric modeling, not as a replacement for SQL. Models written solely in the long relational style are nearly unmaintainable for constantly evolving models with hundreds of data sources and thousands of interactions being developed and tuned by teams of analysts and engineers.
For example, this is how some basic operations would look.
Point taken, but most data wrangling these days (especially at scale) is of the long and thin variety, also known as 3rd normal form or tidy format, which actually allows for more flexibility if you think in terms of coordinatized data theory. There, aggregations and joins dominate column operations. (Pandas also allows array-like column operations thanks to its index, but there are other ways to achieve the same thing.)
I typically do the type of column operation in your example only on subsets of data, and usually I do it in SQL using DuckDB. Interop between Polars and DuckDB is virtually zero cost so I seamlessly move between the two. And to be honest I don’t remember the last time I needed to do this, but that’s just the nature of my work and not a generalized statement.
But yes if you are still in a world where you need to perform Excel like operations then I agree.
import polars as pl

def add(df1, df2, meta_cols, val_cols=None):
    # val cols default to all non meta cols if None
    val_cols = val_cols or [c for c in df1.columns if c not in meta_cols]
    # join on meta cols, then return df with all meta and (summed) val cols selected
    joined = df1.join(df2, on=meta_cols, suffix="_r")
    return joined.select(meta_cols + [(pl.col(c) + pl.col(c + "_r")).alias(c) for c in val_cols])
In theory I think that's fine. The problem is that in practice this will cause a lot of visual noise in your models, since for every operation you would need to specify, at least, your meta columns, and potentially value columns too. If you change the dimensionality of your data, you would need to update everywhere you've specified them. You could get around this a bit by defining the meta columns in a constant, but that's really only maintainable at a global module level. Once you start passing dfs around, you'll have to pass the specified columns as packaged data around with the df as well. There's also the problem that you'd need to use functions instead of standard operators.
One thing that would be nice to do is set an (and forgive me, I understand the aversion to the word "index") index on the polars dataframe. Not a real index, just a list of columns that are specified as "metadata columns". This wouldn't actually affect any internal state of the data, but what it would do is affect the path of certain operations. Like if an "index" is set, then `+` does the join from above, rather than the current standard `+` operation.
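A rough sketch of what that could look like as userland code rather than a real Polars feature (the IndexedFrame name is made up, and `+` only handles the join-and-add case):

import polars as pl

class IndexedFrame:
    """Hypothetical wrapper: a Polars DataFrame plus a list of 'metadata' columns."""

    def __init__(self, df: pl.DataFrame, meta_cols: list[str]):
        self.df, self.meta_cols = df, meta_cols

    def __add__(self, other: "IndexedFrame") -> "IndexedFrame":
        # '+' now means: join on the meta columns, then add the shared value columns
        val_cols = [c for c in self.df.columns
                    if c not in self.meta_cols and c in other.df.columns]
        joined = self.df.join(other.df, on=self.meta_cols, suffix="_r")
        summed = joined.select(
            self.meta_cols + [(pl.col(c) + pl.col(c + "_r")).alias(c) for c in val_cols]
        )
        return IndexedFrame(summed, self.meta_cols)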
In any case I realize this is a major philosophical divergence from the polars way of thinking, so more just shooting shit than offering real suggestions.
Now imagine the other side of this equation, where pandas seems too clunky, behold YOLOPandas https://pypi.org/project/yolopandas/ i.e. `df.llm.query("What item is the least expensive?")`
It works as well as Pandas (realize that scikit actually doesn’t support Pandas — you have to cast a dataframe into a Numpy array first)
I generally work with with Polars and DuckDB until the final step, when I cast it into a data structure I need (Pandas dataframe, Parquet etc)
All the expensive intermediate operations are taken care of in Polars and DuckDB.
Also a Polars dataframe — although it has different semantics — behaves like a Pandas dataframe for the most part. I haven’t had much trouble moving between it and Pandas.
You have not had to cast pandas DataFrames when using scikit-learn for many years already. Additionally, in recent versions there has been increasing support for also returning DataFrames, at least with transformers, and for checking column names/order.
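For example, with a recent scikit-learn (1.2+, where set_output exists), a transformer can hand back a DataFrame with the column names intact:

import pandas as pd
from sklearn.preprocessing import StandardScaler

X = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [10.0, 20.0, 30.0]})
scaler = StandardScaler().set_output(transform="pandas")
X_scaled = scaler.fit_transform(X)   # a DataFrame, columns "a" and "b" preserved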
Yes that support is still not complete. When you pass a Pandas dataframe into Scikit you are implicitly doing df.values which loses all the dataframe metadata.
There is a library called sklearn-pandas, which doesn’t seem to be mainstream and whose development appears to have stopped in 2022.
> What should be the API for working with pandas, pyarrow, and dataclasses and/or pydantic?
> Pandas 2.0 supports pyarrow for so many things now, and pydantic does data validation with a drop-in dataclasses.dataclass replacement at pydantic.dataclasses.dataclass.
Model output may or may not converge given the enumeration ordering of Categorical CSVW columns, for example; so consistent round-trip (Linked Data) schema tool support would be essential.
CuML is scikit-learn API compatible and can use Dask for distributed and/or multi-GPU workloads. CuML is built on CuDF and CuPy; CuPy is a replacement for NumPy arrays on GPUs, with speedups of more than 100x on some operations.
> CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy acts as a drop-in replacement to run existing NumPy/SciPy code on NVIDIA CUDA or AMD ROCm platforms.
> CuPy is an open-source array library for GPU-accelerated computing with Python. CuPy utilizes CUDA Toolkit libraries including cuBLAS, cuRAND, cuSOLVER, cuSPARSE, cuFFT, cuDNN and NCCL to make full use of the GPU architecture.
> The figure shows CuPy speedup over NumPy. Most operations perform well on a GPU using CuPy out of the box. CuPy speeds up some operations more than 100X. Read the original benchmark article Single-GPU CuPy Speedups on the RAPIDS AI Medium blog
> cuML is a suite of libraries that implement machine learning algorithms and mathematical primitives functions that share compatible APIs with other RAPIDS projects.
> cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn.
> For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU equivalents. For details on performance, see the cuML Benchmarks Notebook.
FWICS there's now a ROCm version of CuPy, so it says CUDA (NVIDIA only) but also compiles for AMD. IDK whether there are plans to support Intel OneAPI, too.
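To give a rough sense of the sklearn-compatible API (a hypothetical snippet, assuming a RAPIDS install and a supported GPU):

import numpy as np
from cuml.cluster import KMeans   # mirrors sklearn.cluster.KMeans

X = np.random.rand(100_000, 16).astype(np.float32)
labels = KMeans(n_clusters=8).fit_predict(X)   # computed on the GPU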
What of the non-Arrow parts of other pandas-compatible and not pandas-compatible DataFrame libraries can be ported back to Pandas (and R)?
My experience is that Polars generally isn't supported by third-party libraries. Hopefully that'll change soon, but in the meantime it has some pretty snappy to- and from-pandas functionality, so converting when you hit a library that needs pandas often still brings a good deal of speedup.
To be honest I haven’t actually worked with any third party libraries that actually supported Pandas directly — most ML libs like Scikit require Numpy arrays (not Pandas dataframes) as inputs so I’ve always had to cast to Numpy from Pandas anyway.
But yes, Polars and DuckDB can easily cast to Pandas and also read Pandas dataframes in memory. I have some legacy data transformations that are mostly DuckDB but involve some intermediate steps in Pandas (because I didn’t want to rewrite them) and it’s all seamless (though not zero copy as it would be in a pure Arrow workflow).
And ironically DuckDB can query Pandas dataframes faster than Pandas itself due to its vectorized engine.
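Concretely, with a recent duckdb (one that has duckdb.sql) it can scan an in-memory pandas DataFrame by its variable name:

import duckdb
import pandas as pd

pdf = pd.DataFrame({"g": ["a", "b", "a"], "x": [1, 2, 3]})
out = duckdb.sql("SELECT g, avg(x) AS mean_x FROM pdf GROUP BY g").df()  # result back as pandas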
> It has enabled an entire generation of data scientists to do what they do
The SQL you're using finally in 2023 has enabled data scientists to do what they do for decades. Pandas was a massive derailment and distraction in what otherwise would have been called progress.
To be fair, pandas was never meant to be a replacement for SQL; it was meant to be a replacement for Excel in financial models. Which it still excels at (pun intended).
As someone who mainly uses pandas, what is the benefit of using DuckDB to write your queries over using pandas (or polars) to operate on the data? Is it that you can already subset the data without loading it into memory?
I use DuckDB because I can express complex analytic queries better using a SQL mental model. Most software people hate SQL because they can never remember its syntax, but for data scientists, SQL lets us express our thoughts more simply and precisely in a declarative fashion — which gets us query plan optimization as a plus.
People have been trying to get rid of SQL for years yet they only end up reinventing it badly.
I’ve written a lot of code and the two notations I always gravitate toward are the magrittr + dplyr pipeline notation and SQL.
The chained-methods notation is a bit too unergonomic, especially for expressing window functions and complex joins.
Spark started out with method chaining but eventually found that most people used Spark SQL.
In addition to this, here's one really specific case: ever had a pandas groupby().apply() that took forever, often spending most of its time re-aggregating after the apply?
With columnar data DuckDB is so much faster at this.
For one of my projects I have what sounds like a dumb workflow:
- JSON api fetches get cached in sqlite3
- Parsing the JSON gets done with sqlite3 JSON operators (Fast! Fault tolerant! Handles NULLs nicely! Fast!!).
- Collating data later gets queried with duckdb - everything gets munged and aggregated into the shape I want it and is persisted in parquet files
- When it's time to consume it, duckdb queries my various sources, does my (used-to-be expensive) groupbys on the fly and spits out pandas data frames
- Lastly those data frames are small-ish, tidy and flexible
So yeah, on paper it sounds like these 3 libraries overlap too much to be used at the same time, but in practice they can each have their place and interact well.
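A hedged sketch of the collate/consume steps (file, table, and column names here are placeholders, and the sqlite extension has to be installed):

import duckdb

con = duckdb.connect()
con.execute("INSTALL sqlite; LOAD sqlite;")
con.execute("ATTACH 'api_cache.sqlite3' AS cache (TYPE sqlite);")

# collate: aggregate out of the sqlite cache and persist as Parquet
con.execute("""
    COPY (SELECT endpoint, count(*) AS n_rows FROM cache.responses GROUP BY endpoint)
    TO 'collated.parquet' (FORMAT parquet);
""")

# consume: query the Parquet sources on the fly and hand pandas a small, tidy frame
tidy = con.execute("SELECT * FROM read_parquet('collated.parquet')").df()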
Pandas having Arrow as a backend is great and will make interop with the Arrow community (and Polars) much better.
However, if you need performance, Polars remains orders of magnitude faster on whole queries; changing to the Arrow memory format does not change that.
The biggest performance gains most people will see will be in dealing with strings when using the pyarrow backend, since those are now a native type and not wrapped in a Python object.
But for people who are looking for the performance polars gives with all the nice APIs of pandas, the big news is, since polars and pandas will now both use arrow for the underlying data, you can convert between the two kinds of dataframes without copying the data itself.
import polars as pl

polars_df = pl.from_pandas(df)
# ... do performance heavy stuff ...
df = polars_df.to_pandas()
Read_csv - yes. I just tried loading a 5MM-row CSV. Pandas took 5.5 secs. Polars took 0.6 secs.
.apply - might be faster in Polars but will not be as fast as using native expressions. That’s because applying a Python function requires holding the GIL, which kills parallelization; this is an inherent limitation of Python.
That said, I try not to use Python functions these days. I write transformations in SQL (or native expressions in Polars) and these can be executed at full speed with complete vectorization and parallelization.
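A minimal sketch of the difference in Polars (assuming a recent version, where the per-row UDF method is map_elements; older releases call it apply):

import polars as pl

df = pl.DataFrame({"price": [10.0, 20.0, 30.0]})

# native expression: vectorized and parallelized in Rust, no GIL involved
fast = df.with_columns((pl.col("price") * 1.2).alias("with_tax"))

# Python UDF: every value round-trips through the interpreter while holding the GIL
slow = df.with_columns(
    pl.col("price").map_elements(lambda x: x * 1.2, return_dtype=pl.Float64).alias("with_tax")
)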
Another alternative is to try Modin. They wrap a scalable engine (doesn't have to be multiple machine, speedup happens locally from single -> multithread) with Pandas API so you just need to change your imports. .apply() is also automatically parallelized through Modin.
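If I understand Modin's pitch correctly, the migration is essentially an import swap (the file and column names below are placeholders):

# import pandas as pd          # before
import modin.pandas as pd      # after: same API, backed by a parallel engine

df = pd.read_csv("data.csv")
df["doubled"] = df["x"].apply(lambda v: v * 2)   # .apply() runs in parallel across partitions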
What's the threshold for when you need to use something like pola.rs instead of just fitting the majority of your data set into memory? If your computer is 8GB and you have ~4-6GB of memory free, you need to be working with a data set that is at least 4GB+, right? Is that a good way to look at it?
Kind of like "do I really need k8s?", "does my workload dictate I need SQL OLAP (Online Analytical Processing) / data frame lazy loading data library?"
What's the general rule of thumb to know "you're missing out by not using an existing library like pandas/polars" for somebody who is out of the loop on this kind of stuff?
I don’t know if it’s still true, but Wes McKinney’s (Pandas author) rule of thumb for Pandas was “have 5 to 10 times as much RAM as your dataset”. But he wrote this in 2017 so things may have changed.
That said, this is a 2023 comparison of Pandas and Polars memory usage.
A lot of libraries depend on numpy directly. Unless polars is a drop-in replacement (I could be wrong, but it doesn't seem like it is), there's ultimately no avoiding pandas in many cases.
Numpy is mostly just a thin metadata wrapper around memory buffers. If pola.rs stores data in that format it should be able to give you zero-copy numpy objects pointing to the underlying pola.rs data.
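In practice that's just to_numpy() on a Series or DataFrame; as I understand it, for numeric columns without nulls this can avoid a copy:

import polars as pl

s = pl.Series("x", [1.0, 2.0, 3.0])
arr = s.to_numpy()             # NumPy array over the underlying buffer
mat = s.to_frame().to_numpy()  # whole-frame export materializes a 2D array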
Too bad polars, like everything else written in Rust, has horrible ergonomics. I don’t know how anyone can look at the polars Rust API and say in good faith that not having default arguments and named parameters was a good idea.
Although Polars is written in Rust, most people will use Polars from Python since that's what most data wranglers use. And Polars’ Python interface is excellent.
It depends on what you do with Pandas. The semantics are different so you’ll have to rewrite stuff — that said, for me it was worth it for the most part because I work with massive columnar data in Parquet.
So no, not a drop in replacement. But not a difficult transition either.
This page explains how Polars differs from Pandas.
As someone who loathes the Pandas syntax and lusts for the relatively cleaner tidyverse code of my colleagues, the Polars syntax just feels ... off (from the link):
Seeing the multiple nested pl calls within a single expression just feels odd to me. It's definitely reminiscent of Dplyr but in a much less elegant way.
You can use DuckDB (SQL syntax) then just convert to Polars (instantaneous).
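Something like this, assuming a duckdb build recent enough to have the .pl() result method:

import duckdb
import polars as pl

df = pl.DataFrame({"g": ["a", "a", "b"], "x": [1, 2, 3]})
out = duckdb.sql("SELECT g, avg(x) AS mean_x FROM df GROUP BY g").pl()  # back to Polars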
The method chaining syntax is unwieldy in any language.
Magrittr + dplyr (tidyverse) pipeline syntax is beautiful syntax but there’s a lot of magic with NSE (nonstandard evaluation) that makes it really tricky when you need to pass a variable column name.
I’ve sort of converged on SQL as the best compromise.
I like the idea of Polars, and it's on my list of things to try. Unfortunately, my current codebase is heavily tied to Pandas (bioinformatics tools that use Pandas as a dependency). I also deal mostly in sparse matrices, which I'm not sure DuckDB handles.
Looking at the Polars documentation also makes me nervous, due to how much of my current Pandas-fu relies on indexing to work. I appreciate that indexes can be NSE, but it's how a lot of the current tools in my field work (in Python and in R), with important data in the index, e.g. genes or cellular barcodes, and relying on the index to merge datasets.
Another caveat for me is that at multiple points in my workflow I drop into R for plots, and rpy2 can convert pandas dataframes into equivalent R atomics. With Polars it would just be an additional step of converting to a pandas df, but it's something I need to consider. That said, I've disliked the Pandas syntax for so long that the mental overhead might be well worth it.
[0] https://www.pola.rs/benchmarks.html