polars 1.17.1
Blazingly fast DataFrame library
pip install polars
Requires Python
>=3.9
Dependencies
- numpy >=1.16.0; extra == "numpy"
- pandas; extra == "pandas"
- polars[pyarrow]; extra == "pandas"
- pyarrow >=7.0.0; extra == "pyarrow"
- pydantic; extra == "pydantic"
- fastexcel >=0.9; extra == "calamine"
- openpyxl >=3.0.0; extra == "openpyxl"
- xlsx2csv >=0.8.0; extra == "xlsx2csv"
- xlsxwriter; extra == "xlsxwriter"
- polars[calamine,openpyxl,xlsx2csv,xlsxwriter]; extra == "excel"
- adbc-driver-manager[dbapi]; extra == "adbc"
- adbc-driver-sqlite[dbapi]; extra == "adbc"
- connectorx >=0.3.2; extra == "connectorx"
- sqlalchemy; extra == "sqlalchemy"
- polars[pandas]; extra == "sqlalchemy"
- polars[adbc,connectorx,sqlalchemy]; extra == "database"
- nest-asyncio; extra == "database"
- fsspec; extra == "fsspec"
- deltalake >=0.19.0; extra == "deltalake"
- pyiceberg >=0.5.0; extra == "iceberg"
- gevent; extra == "async"
- cloudpickle; extra == "cloudpickle"
- matplotlib; extra == "graph"
- altair >=5.4.0; extra == "plot"
- great-tables >=0.8.0; extra == "style"
- backports-zoneinfo; python_version < "3.9" and extra == "timezone"
- tzdata; platform_system == "Windows" and extra == "timezone"
- cudf-polars-cu12; extra == "gpu"
- polars[async,cloudpickle,database,deltalake,excel,fsspec,graph,iceberg,numpy,pandas,plot,pyarrow,pydantic,style,timezone]; extra == "all"
Documentation: Python - Rust - Node.js - R | StackOverflow: Python - Rust - Node.js - R | User guide | Discord
Polars: Blazingly fast DataFrames in Rust, Python, Node.js, R, and SQL
Polars is a DataFrame interface on top of an OLAP Query Engine implemented in Rust using Apache Arrow Columnar Format as the memory model.
- Lazy | eager execution
- Multi-threaded
- SIMD
- Query optimization
- Powerful expression API
- Hybrid Streaming (larger-than-RAM datasets)
- Rust | Python | NodeJS | R | ...
To learn more, read the user guide.
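The feature list above mentions lazy and eager execution. As a rough sketch of the difference (the data here is made up):

```python
import polars as pl

df = pl.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "x"]})

# Eager: each call executes immediately.
eager_result = df.group_by("b").agg(pl.col("a").sum())

# Lazy: build a query plan first, let the optimizer rewrite it,
# then execute everything at once with collect().
lazy_result = (
    df.lazy()
    .filter(pl.col("a") > 1)
    .group_by("b")
    .agg(pl.col("a").sum())
    .collect()
)
```

With the lazy API, optimizations such as predicate and projection pushdown are applied before any work is done.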
Python
>>> import polars as pl
>>> df = pl.DataFrame(
... {
... "A": [1, 2, 3, 4, 5],
... "fruits": ["banana", "banana", "apple", "apple", "banana"],
... "B": [5, 4, 3, 2, 1],
... "cars": ["beetle", "audi", "beetle", "beetle", "beetle"],
... }
... )
# embarrassingly parallel execution & very expressive query language
>>> df.sort("fruits").select(
... "fruits",
... "cars",
... pl.lit("fruits").alias("literal_string_fruits"),
... pl.col("B").filter(pl.col("cars") == "beetle").sum(),
... pl.col("A").filter(pl.col("B") > 2).sum().over("cars").alias("sum_A_by_cars"),
... pl.col("A").sum().over("fruits").alias("sum_A_by_fruits"),
... pl.col("A").reverse().over("fruits").alias("rev_A_by_fruits"),
... pl.col("A").sort_by("B").over("fruits").alias("sort_A_by_B_by_fruits"),
... )
shape: (5, 8)
┌──────────┬──────────┬──────────────┬─────┬─────────────┬─────────────┬─────────────┬─────────────┐
│ fruits ┆ cars ┆ literal_stri ┆ B ┆ sum_A_by_ca ┆ sum_A_by_fr ┆ rev_A_by_fr ┆ sort_A_by_B │
│ --- ┆ --- ┆ ng_fruits ┆ --- ┆ rs ┆ uits ┆ uits ┆ _by_fruits │
│ str ┆ str ┆ --- ┆ i64 ┆ --- ┆ --- ┆ --- ┆ --- │
│ ┆ ┆ str ┆ ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
╞══════════╪══════════╪══════════════╪═════╪═════════════╪═════════════╪═════════════╪═════════════╡
│ "apple" ┆ "beetle" ┆ "fruits" ┆ 11 ┆ 4 ┆ 7 ┆ 4 ┆ 4 │
│ "apple" ┆ "beetle" ┆ "fruits" ┆ 11 ┆ 4 ┆ 7 ┆ 3 ┆ 3 │
│ "banana" ┆ "beetle" ┆ "fruits" ┆ 11 ┆ 4 ┆ 8 ┆ 5 ┆ 5 │
│ "banana" ┆ "audi" ┆ "fruits" ┆ 11 ┆ 2 ┆ 8 ┆ 2 ┆ 2 │
│ "banana" ┆ "beetle" ┆ "fruits" ┆ 11 ┆ 4 ┆ 8 ┆ 1 ┆ 1 │
└──────────┴──────────┴──────────────┴─────┴─────────────┴─────────────┴─────────────┴─────────────┘
SQL
>>> df = pl.scan_csv("docs/assets/data/iris.csv")
>>> ## OPTION 1
>>> # run SQL queries on frame-level
>>> df.sql("""
... SELECT species,
... AVG(sepal_length) AS avg_sepal_length
... FROM self
... GROUP BY species
... """).collect()
shape: (3, 2)
┌────────────┬──────────────────┐
│ species ┆ avg_sepal_length │
│ --- ┆ --- │
│ str ┆ f64 │
╞════════════╪══════════════════╡
│ Virginica ┆ 6.588 │
│ Versicolor ┆ 5.936 │
│ Setosa ┆ 5.006 │
└────────────┴──────────────────┘
>>> ## OPTION 2
>>> # use pl.sql() to operate on the global context
>>> df2 = pl.LazyFrame({
... "species": ["Setosa", "Versicolor", "Virginica"],
... "blooming_season": ["Spring", "Summer", "Fall"]
... })
>>> pl.sql("""
... SELECT df.species,
... AVG(df.sepal_length) AS avg_sepal_length,
... df2.blooming_season
... FROM df
... LEFT JOIN df2 ON df.species = df2.species
... GROUP BY df.species, df2.blooming_season
... """).collect()
SQL commands can also be run directly from your terminal using the Polars CLI:
# run an inline SQL query
> polars -c "SELECT species, AVG(sepal_length) AS avg_sepal_length, AVG(sepal_width) AS avg_sepal_width FROM read_csv('docs/assets/data/iris.csv') GROUP BY species;"
# run interactively
> polars
Polars CLI v0.3.0
Type .help for help.
> SELECT species, AVG(sepal_length) AS avg_sepal_length, AVG(sepal_width) AS avg_sepal_width FROM read_csv('docs/assets/data/iris.csv') GROUP BY species;
Refer to the Polars CLI repository for more information.
Performance 🚀🚀
Blazingly fast
Polars is very fast. In fact, it is one of the best-performing solutions available. See the PDS-H benchmark results.
Lightweight
Polars is also very lightweight. It comes with zero required dependencies, and this shows in the import times:
- polars: 70ms
- numpy: 104ms
- pandas: 520ms
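These numbers will vary by machine. A quick way to check on yours (a simple sketch, not the benchmark methodology behind the figures above):

```python
import time

start = time.perf_counter()
import polars  # noqa: E402 -- imported here deliberately so we can time it
print(f"import polars: {(time.perf_counter() - start) * 1000:.0f} ms")
```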
Handles larger-than-RAM data
If you have data that does not fit into memory, Polars' query engine is able to process your query (or parts of your query) in a streaming fashion. This drastically reduces memory requirements, so you might be able to process your 250GB dataset on your laptop. Collect with collect(streaming=True) to run the query in streaming mode. (This might be a little slower, but it is still very fast!)
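As a minimal sketch (the file name and column names are hypothetical):

```python
import polars as pl

# Lazily scan a file that may be larger than RAM; nothing is read yet.
lazy = (
    pl.scan_csv("my_250gb_dataset.csv")  # hypothetical path
    .filter(pl.col("amount") > 0)
    .group_by("category")
    .agg(pl.col("amount").sum())
)

# Execute with the streaming engine so the full dataset never has to
# fit in memory at once.
result = lazy.collect(streaming=True)
```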
Setup
Python
Install the latest Polars version with:
pip install polars
We also have a conda package (conda install -c conda-forge polars), however pip is the preferred way to install Polars.
Install Polars with all optional dependencies.
pip install 'polars[all]'
You can also install a subset of all optional dependencies.
pip install 'polars[numpy,pandas,pyarrow]'
See the User Guide for more details on optional dependencies.
To see the current Polars version and a full list of its optional dependencies, run:
pl.show_versions()
Releases happen quite often (weekly / every few days) at the moment, so updating Polars regularly to get the latest bugfixes / features might not be a bad idea.
Rust
You can take the latest release from crates.io, or if you want to use the latest features / performance improvements, point to the main branch of this repo.
polars = { git = "https://github.com/pola-rs/polars", rev = "<optional git tag>" }
Requires Rust version >=1.80.
Contributing
Want to contribute? Read our contributing guide.
Python: compile Polars from source
If you want a bleeding edge release or maximal performance you should compile Polars from source.
This can be done by going through the following steps in sequence:
- Install the latest Rust compiler
- Install maturin: pip install maturin
- cd py-polars and choose one of the following:
  - make build: slow binary with debug assertions and symbols, fast compile times
  - make build-release: fast binary without debug assertions, minimal debug symbols, long compile times
  - make build-nodebug-release: same as build-release, but without any debug symbols, slightly faster to compile
  - make build-debug-release: same as build-release, but with full debug symbols, slightly slower to compile
  - make build-dist-release: fastest binary, extreme compile times
By default the binary is compiled with optimizations turned on for a modern CPU. Specify LTS_CPU=1
with the command if your CPU is older and does not support e.g. AVX2.
Note that the Rust crate implementing the Python bindings is called py-polars to distinguish it from the wrapped Rust crate polars itself. However, both the Python package and the Python module are named polars, so you can pip install polars and import polars.
Using custom Rust functions in Python
Extending Polars with UDFs compiled in Rust is easy. We expose PyO3 extensions for DataFrame and Series data structures. See more in https://github.com/pola-rs/pyo3-polars.
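On the Python side, a compiled plugin is typically exposed via polars.plugins.register_plugin_function. A rough sketch, modeled on the pyo3-polars examples (the library location and the exported function name pig_latinnify are hypothetical here):

```python
from pathlib import Path

import polars as pl
from polars.plugins import register_plugin_function


def pig_latinnify(expr: pl.Expr) -> pl.Expr:
    # Assumes a Rust cdylib built with pyo3-polars sits next to this file
    # and exports an expression function named "pig_latinnify".
    return register_plugin_function(
        plugin_path=Path(__file__).parent,
        function_name="pig_latinnify",
        args=expr,
        is_elementwise=True,
    )


# Usage: df.with_columns(pig_latinnify(pl.col("names")))
```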
Going big...
Do you expect more than 2^32 (~4.2 billion) rows? Compile Polars with the bigidx feature flag or, for Python users, install polars-u64-idx (pip install polars-u64-idx).
Don't use this unless you hit the row boundary, as the default build of Polars is faster and consumes less memory.
Legacy
Do you want Polars to run on an old CPU (e.g. dating from before 2011), or on an x86-64 build of Python on Apple Silicon under Rosetta? Install it with pip install polars-lts-cpu. This version of Polars is compiled without AVX target features.