# Polars OLS
## Least squares extension in Polars
Supports linear model estimation in [Polars](https://www.pola.rs/).
This package provides efficient rust implementations of common linear
regression variants (OLS, WLS, Ridge, Elastic Net, Non-negative least squares, Recursive least squares) and exposes
them as simple polars expressions which can easily be integrated into your workflow.
### Why?
1. **High Performance**: implementations are written in rust and make use of optimized rust linear-algebra crates & LAPACK routines. See [benchmark](#benchmark) section.
2. **Polars Integration**: avoids unnecessary conversions from lazy to eager mode and to external libraries (e.g. numpy, sklearn) to do simple linear regressions.
Chain least squares formulae like any other expression in polars.
3. **Efficient Implementations**:
- Numerically stable algorithms are chosen where appropriate (e.g. QR, Cholesky).
- Flexible model specification allows arbitrary combination of sample weighting, L1/L2 regularization, & non-negativity constraints on parameters.
- Efficient rank-1 update algorithms used for moving window regressions.
4. **Easy Parallelism**: Computing OLS predictions, in parallel, across groups could not be easier: call `.over()` or `group_by` just like any other polars expression and benefit from full Rust parallelism.
5. **Formula API**: supports building models via patsy syntax: `y ~ x1 + x2 + x3:x4 -1` (like statsmodels) which automatically converts to equivalent polars expressions.
Installation
------------
First, you need to [install Polars](https://pola-rs.github.io/polars/user-guide/installation/). Then run the below to install the `polars-ols` extension:
```console
pip install polars-ols
```
API & Examples
------------
Importing `polars_ols` will register the namespace `least_squares` provided by this package.
You can build models either by specifying polars expressions (e.g. `pl.col(...)`) for your targets and features, or by using
the formula API (patsy syntax). All models support the following general (optional) arguments:
- `mode` - a literal which determines the type of output produced by the model
- `null_policy` - a literal which determines how to deal with missing data
- `add_intercept` - a boolean specifying if an intercept feature should be added to the features
- `sample_weights` - a column or expression providing non-negative weights applied to the samples
Remaining parameters are model specific, for example `alpha` penalty parameter used by regularized least squares models.
See below for basic usage examples.
Please refer to the [tests](./tests/test_ols.py) or [demo notebook](./notebooks/polars_ols_demo.ipynb) for detailed examples.
```python
import polars as pl
import polars_ols as pls # registers 'least_squares' namespace
df = pl.DataFrame({"y": [1.16, -2.16, -1.57, 0.21, 0.22, 1.6, -2.11, -2.92, -0.86, 0.47],
"x1": [0.72, -2.43, -0.63, 0.05, -0.07, 0.65, -0.02, -1.64, -0.92, -0.27],
"x2": [0.24, 0.18, -0.95, 0.23, 0.44, 1.01, -2.08, -1.36, 0.01, 0.75],
"group": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
"weights": [0.34, 0.97, 0.39, 0.8, 0.57, 0.41, 0.19, 0.87, 0.06, 0.34],
})
lasso_expr = pl.col("y").least_squares.lasso("x1", "x2", alpha=0.0001, add_intercept=True).over("group")
wls_expr = pls.compute_least_squares_from_formula("y ~ x1 + x2 -1", sample_weights=pl.col("weights"))
predictions = df.with_columns(lasso_expr.round(2).alias("predictions_lasso"),
wls_expr.round(2).alias("predictions_wls"))
print(predictions.head(5))
```
```
shape: (5, 7)
┌───────┬───────┬───────┬───────┬─────────┬───────────────────┬─────────────────┐
│ y ┆ x1 ┆ x2 ┆ group ┆ weights ┆ predictions_lasso ┆ predictions_wls │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 ┆ i64 ┆ f64 ┆ f64 ┆ f64 │
╞═══════╪═══════╪═══════╪═══════╪═════════╪═══════════════════╪═════════════════╡
│ 1.16 ┆ 0.72 ┆ 0.24 ┆ 1 ┆ 0.34 ┆ 0.97 ┆ 0.93 │
│ -2.16 ┆ -2.43 ┆ 0.18 ┆ 1 ┆ 0.97 ┆ -2.23 ┆ -2.18 │
│ -1.57 ┆ -0.63 ┆ -0.95 ┆ 1 ┆ 0.39 ┆ -1.54 ┆ -1.54 │
│ 0.21 ┆ 0.05 ┆ 0.23 ┆ 1 ┆ 0.8 ┆ 0.29 ┆ 0.27 │
│ 0.22 ┆ -0.07 ┆ 0.44 ┆ 1 ┆ 0.57 ┆ 0.37 ┆ 0.36 │
└───────┴───────┴───────┴───────┴─────────┴───────────────────┴─────────────────┘
```
The `mode` parameter is used to set the type of the output returned by all methods (`"predictions", "residuals", "coefficients", "statistics"`).
It defaults to returning predictions matching the input's length.
Note that `"statistics"` is currently only supported for OLS/WLS/Ridge models.
When `"coefficients"` is set, the output is a [polars Struct](https://docs.pola.rs/user-guide/expressions/structs/) with coefficients as values and feature names as fields.
Its output shape 'broadcasts' depending on context, see below:
```python
coefficients = df.select(pl.col("y").least_squares.from_formula("x1 + x2", mode="coefficients")
.alias("coefficients"))
coefficients_group = df.select("group", pl.col("y").least_squares.from_formula("x1 + x2", mode="coefficients").over("group")
.alias("coefficients_group")).unique(maintain_order=True)
print(coefficients)
print(coefficients_group)
```
```
shape: (1, 1)
┌──────────────────────────────┐
│ coefficients │
│ --- │
│ struct[3] │
╞══════════════════════════════╡
│ {0.977375,0.987413,0.000757} │ # <--- coef for x1, x2, and intercept added by formula API
└──────────────────────────────┘
shape: (2, 2)
┌───────┬───────────────────────────────┐
│ group ┆ coefficients_group │
│ --- ┆ --- │
│ i64 ┆ struct[3] │
╞═══════╪═══════════════════════════════╡
│ 1 ┆ {0.995157,0.977495,0.014344} │
│ 2 ┆ {0.939217,0.997441,-0.017599} │ # <--- (unique) coefficients per group
└───────┴───────────────────────────────┘
```
For dynamic models (like `rolling_ols`), or in a `.over`, `.group_by`, or `.with_columns` context, the
coefficients take the shape of the data they are applied on. For example:
```python
coefficients = df.with_columns(pl.col("y").least_squares.rls(pl.col("x1"), pl.col("x2"), mode="coefficients")
.over("group").alias("coefficients"))
print(coefficients.head())
```
```
shape: (5, 6)
┌───────┬───────┬───────┬───────┬─────────┬─────────────────────┐
│ y ┆ x1 ┆ x2 ┆ group ┆ weights ┆ coefficients │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 ┆ i64 ┆ f64 ┆ struct[2] │
╞═══════╪═══════╪═══════╪═══════╪═════════╪═════════════════════╡
│ 1.16 ┆ 0.72 ┆ 0.24 ┆ 1 ┆ 0.34 ┆ {1.235503,0.411834} │
│ -2.16 ┆ -2.43 ┆ 0.18 ┆ 1 ┆ 0.97 ┆ {0.963515,0.760769} │
│ -1.57 ┆ -0.63 ┆ -0.95 ┆ 1 ┆ 0.39 ┆ {0.975484,0.966029} │
│ 0.21 ┆ 0.05 ┆ 0.23 ┆ 1 ┆ 0.8 ┆ {0.975657,0.953735} │
│ 0.22 ┆ -0.07 ┆ 0.44 ┆ 1 ┆ 0.57 ┆ {0.97898,0.909793} │
└───────┴───────┴───────┴───────┴─────────┴─────────────────────┘
```
For plain OLS/WLS and Ridge models, support was recently added for producing a simple statistical
significance report. It can be used as follows:
```python
statistics = (df.select(
pl.col("y").least_squares.ols(pl.col("x1", "x2"), mode="statistics", add_intercept=True)
)
.unnest("statistics") # results stored in a nested series by default
.explode(["feature_names", "coefficients", "standard_errors", "t_values", "p_values"])
)
print(statistics)
```
```
shape: (3, 8)
┌─────────┬──────────┬─────────┬──────────────┬──────────────┬─────────────┬───────────┬───────────┐
│ r2 ┆ mae ┆ mse ┆ feature_name ┆ coefficients ┆ standard_er ┆ t_values ┆ p_values │
│ --- ┆ --- ┆ --- ┆ s ┆ --- ┆ rors ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 ┆ --- ┆ f64 ┆ --- ┆ f64 ┆ f64 │
│ ┆ ┆ ┆ str ┆ ┆ f64 ┆ ┆ │
╞═════════╪══════════╪═════════╪══════════════╪══════════════╪═════════════╪═══════════╪═══════════╡
│ 0.99631 ┆ 0.061732 ┆ 0.00794 ┆ x1 ┆ 0.977375 ┆ 0.037286 ┆ 26.212765 ┆ 3.0095e-8 │
│ 0.99631 ┆ 0.061732 ┆ 0.00794 ┆ x2 ┆ 0.987413 ┆ 0.037321 ┆ 26.457169 ┆ 2.8218e-8 │
│ 0.99631 ┆ 0.061732 ┆ 0.00794 ┆ const ┆ 0.000757 ┆ 0.037474 ┆ 0.02021 ┆ 0.98444 │
└─────────┴──────────┴─────────┴──────────────┴──────────────┴─────────────┴───────────┴───────────┘
```
Finally, for convenience, you can compute out-of-sample predictions with
```least_squares.{predict, predict_from_formula}```. This saves you the effort of un-nesting the coefficients and computing the dot product in
python: instead it is done in Rust, as an expression. Usage is as follows:
```python
df_test.select(pl.col("coefficients_train").least_squares.predict(pl.col("x1"), pl.col("x2")).alias("predictions_test"))
```
Supported Models
------------
Currently, this extension package supports the following variants:
- Ordinary Least Squares: ```least_squares.ols```
- Weighted Least Squares: ```least_squares.wls```
- Regularized Least Squares (Lasso / Ridge / Elastic Net) ```least_squares.{lasso, ridge, elastic_net}```
- Non-negative Least Squares: ```least_squares.nnls```
- Multi-target Least Squares: ```least_squares.multi_target_ols```
As well as efficient implementations of moving window models:
- Recursive Least Squares: ```least_squares.rls```
- Rolling / Expanding Window OLS: ```least_squares.{rolling_ols, expanding_ols}```
An arbitrary combination of sample_weights, L1/L2 penalties, and non-negativity constraints can be specified with
the ```least_squares.from_formula``` and ```least_squares.least_squares``` entry-points.
Solve Methods
------------
`polars-ols` provides a choice of numerical approach per model (via the `solve_method` flag),
trading off performance against numerical accuracy. These choices are exposed to the user for full control;
if left unspecified, the package chooses a reasonable default depending on context.
For example, if you know you are dealing with highly collinear data and an unregularized OLS model, you may want to
explicitly set `solve_method="svd"` so that the minimum norm solution is obtained.
Benchmark
------------
The usual caveats of benchmarks apply here, but the below should still be indicative of the
type of performance improvements to expect when using this package.
This benchmark was run on randomly generated data with [pyperf](https://github.com/psf/pyperf) on my Apple M2 Max macbook
(32GB RAM, MacOS Sonoma 14.2.1). See [benchmark.py](./tests/benchmark.py) for implementation.
<a id="benchmark"></a>
### n_samples=2_000, n_features=5
| Model | polars_ols | Python Benchmark | Benchmark Type | Speed-up vs Python Benchmark |
|-------------------------|--------------------|--------------------|--------------------|------------------------------|
| Least Squares (QR) | 195 µs ± 6 µs | 466 µs ± 104 µs | Numpy (QR) | 2.4x |
| Least Squares (SVD) | 247 µs ± 5 µs | 395 µs ± 69 µs | Numpy (SVD) | 1.6x |
| Ridge (Cholesky) | 171 µs ± 8 µs | 1.02 ms ± 0.29 ms | Sklearn (Cholesky) | 5.9x |
| Ridge (SVD) | 238 µs ± 7 µs | 1.12 ms ± 0.41 ms | Sklearn (SVD) | 4.7x |
| Weighted Least Squares | 334 µs ± 13 µs | 2.04 ms ± 0.22 ms | Statsmodels | 6.1x |
| Elastic Net (CD) | 227 µs ± 7 µs | 1.18 ms ± 0.19 ms | Sklearn | 5.2x |
| Recursive Least Squares | 1.12 ms ± 0.23 ms | 18.2 ms ± 1.6 ms | Statsmodels | 16.2x |
| Rolling Least Squares | 1.99 ms ± 0.03 ms | 22.1 ms ± 0.2 ms | Statsmodels | 11.1x |
### n_samples=10_000, n_features=100
| Model | polars_ols | Python Benchmark | Benchmark Type | Speed-up vs Python Benchmark |
|-------------------------|--------------------|---------------------------|-----------------------|------------------------------|
| Least Squares (QR) | 17.6 ms ± 0.3 ms | 44.4 ms ± 9.3 ms | Numpy (QR) | 2.5x |
| Least Squares (SVD) | 23.8 ms ± 0.2 ms | 26.6 ms ± 5.5 ms | Numpy (SVD) | 1.1x |
| Ridge (Cholesky) | 5.36 ms ± 0.16 ms | 475 ms ± 71 ms | Sklearn (Cholesky) | 88.7x |
| Ridge (SVD) | 30.2 ms ± 0.4 ms | 400 ms ± 48 ms | Sklearn (SVD) | 13.2x |
| Weighted Least Squares | 18.8 ms ± 0.3 ms | 80.4 ms ± 12.4 ms | Statsmodels | 4.3x |
| Elastic Net (CD) | 22.7 ms ± 0.2 ms | 138 ms ± 27 ms | Sklearn | 6.1x |
| Recursive Least Squares | 270 ms ± 53 ms | 57.8 sec ± 43.7 sec | Statsmodels | 1017.0x |
| Rolling Least Squares | 371 ms ± 13 ms | 4.41 sec ± 0.17 sec | Statsmodels | 11.9x |
- Numpy's `lstsq` (which uses divide-and-conquer SVD) is already a highly optimized call into LAPACK, so the scope for speed-up is relatively limited;
the same applies to simple approaches like directly solving the normal equations with Cholesky.
- However, even on such problems the `polars-ols` Rust implementations of matching numerical algorithms tend to outperform by ~2-3x.
- More substantial speed-ups are achieved for the more complex models by working entirely in Rust
and avoiding the overhead of round-trips into python.
- Expect an additional order-of-magnitude speed-up in your workflow if it involves repeated re-estimation of models in
(python) loops.
Credits & Related Projects
------------
- Rust linear algebra libraries [faer](https://faer-rs.github.io/getting-started.html) and [ndarray](https://docs.rs/ndarray/latest/ndarray/) support the implementations provided by this extension package
- This package was templated around the very helpful [polars-plugin-tutorial](https://marcogorelli.github.io/polars-plugins-tutorial/)
- The python package [patsy](https://patsy.readthedocs.io/en/latest/formulas.html) is used for (optionally) building models from formulae
- Please check out the extension package [polars-ds](https://github.com/abstractqqq/polars_ds_extension) for general data-science functionality in polars
Future Work / TODOs
------------
- Support generic types in the Rust implementations so that both f32 and f64 inputs are handled natively; currently data is cast to f64 prior to estimation
- Add docs explaining supported models, signatures, and API
Raw data
{
"_id": null,
"home_page": null,
"name": "polars-ols",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "polars-extension, linear-regression",
"author": null,
"author_email": "Azmy Rajab <azmy.rajab@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/2b/da/56fdbd7c00d0e9bf82fe88a55d4cd1132c834619456a375b1821da4bb865/polars_ols-0.3.5.tar.gz",
"platform": null,
"description": "# Polars OLS\n## Least squares extension in Polars\n\nSupports linear model estimation in [Polars](https://www.pola.rs/).\n\nThis package provides efficient rust implementations of common linear\nregression variants (OLS, WLS, Ridge, Elastic Net, Non-negative least squares, Recursive least squares) and exposes\nthem as simple polars expressions which can easily be integrated into your workflow.\n\n### Why?\n\n1. **High Performance**: implementations are written in rust and make use of optimized rust linear-algebra crates & LAPACK routines. See [benchmark](#benchmark) section.\n2. **Polars Integration**: avoids unnecessary conversions from lazy to eager mode and to external libraries (e.g. numpy, sklearn) to do simple linear regressions.\nChain least squares formulae like any other expression in polars.\n3. **Efficient Implementations**:\n - Numerically stable algorithms are chosen where appropriate (e.g. QR, Cholesky).\n - Flexible model specification allows arbitrary combination of sample weighting, L1/L2 regularization, & non-negativity constraints on parameters.\n - Efficient rank-1 update algorithms used for moving window regressions.\n4. **Easy Parallelism**: Computing OLS predictions, in parallel, across groups can not be easier: call `.over()` or `group_by` just like any other polars' expression and benefit from full Rust parallelism.\n5. **Formula API**: supports building models via patsy syntax: `y ~ x1 + x2 + x3:x4 -1` (like statsmodels) which automatically converts to equivalent polars expressions.\n\nInstallation\n------------\n\nFirst, you need to [install Polars](https://pola-rs.github.io/polars/user-guide/installation/). Then run the below to install the `polars-ols` extension:\n```console\npip install polars-ols\n```\n\nAPI & Examples\n------------\n\nImporting `polars_ols` will register the namespace `least_squares` provided by this package.\nYou can build models either by either specifying polars expressions (e.g. 
`pl.col(...)`) for your targets and features or using\nthe formula api (patsy syntax). All models support the following general (optional) arguments:\n- `mode` - a literal which determines the type of output produced by the model\n- `null_policy` - a literal which determines how to deal with missing data\n- `add_intercept` - a boolean specifying if an intercept feature should be added to the features\n- `sample_weights` - a column or expression providing non-negative weights applied to the samples\n\nRemaining parameters are model specific, for example `alpha` penalty parameter used by regularized least squares models.\n\nSee below for basic usage examples.\nPlease refer to the [tests](./tests/test_ols.py) or [demo notebook](./notebooks/polars_ols_demo.ipynb) for detailed examples.\n\n```python\nimport polars as pl\nimport polars_ols as pls # registers 'least_squares' namespace\n\ndf = pl.DataFrame({\"y\": [1.16, -2.16, -1.57, 0.21, 0.22, 1.6, -2.11, -2.92, -0.86, 0.47],\n \"x1\": [0.72, -2.43, -0.63, 0.05, -0.07, 0.65, -0.02, -1.64, -0.92, -0.27],\n \"x2\": [0.24, 0.18, -0.95, 0.23, 0.44, 1.01, -2.08, -1.36, 0.01, 0.75],\n \"group\": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],\n \"weights\": [0.34, 0.97, 0.39, 0.8, 0.57, 0.41, 0.19, 0.87, 0.06, 0.34],\n })\n\nlasso_expr = pl.col(\"y\").least_squares.lasso(\"x1\", \"x2\", alpha=0.0001, add_intercept=True).over(\"group\")\nwls_expr = pls.compute_least_squares_from_formula(\"y ~ x1 + x2 -1\", sample_weights=pl.col(\"weights\"))\n\npredictions = df.with_columns(lasso_expr.round(2).alias(\"predictions_lasso\"),\n wls_expr.round(2).alias(\"predictions_wls\"))\n\nprint(predictions.head(5))\n```\n```\nshape: (5, 
7)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 y \u2506 x1 \u2506 x2 \u2506 group \u2506 weights \u2506 predictions_lasso \u2506 predictions_wls \u2502\n\u2502 --- \u2506 --- \u2506 --- \u2506 --- \u2506 --- \u2506 --- \u2506 --- \u2502\n\u2502 f64 \u2506 f64 \u2506 f64 \u2506 i64 \u2506 f64 \u2506 f64 \u2506 f64 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 1.16 \u2506 0.72 \u2506 0.24 \u2506 1 \u2506 0.34 \u2506 0.97 \u2506 0.93 \u2502\n\u2502 -2.16 \u2506 -2.43 \u2506 0.18 \u2506 1 \u2506 0.97 \u2506 -2.23 \u2506 -2.18 \u2502\n\u2502 -1.57 \u2506 -0.63 \u2506 -0.95 \u2506 1 \u2506 0.39 \u2506 -1.54 \u2506 -1.54 \u2502\n\u2502 0.21 \u2506 0.05 \u2506 0.23 \u2506 1 \u2506 0.8 \u2506 0.29 \u2506 0.27 \u2502\n\u2502 0.22 \u2506 -0.07 \u2506 0.44 \u2506 1 \u2506 0.57 \u2506 0.37 \u2506 0.36 
\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\nThe `mode` parameter is used to set the type of the output returned by all methods (`\"predictions\", \"residuals\", \"coefficients\", \"statistics\"`).\nIt defaults to returning predictions matching the input's length.\nNote that `\"statistics\"` is currently only supported for OLS/WLS/Ridge models.\n\nIn case `\"coefficients\"` is set the output is a [polars Struct](https://docs.pola.rs/user-guide/expressions/structs/) with coefficients as values and feature names as fields.\nIt's output shape 'broadcasts' depending on context, see below:\n\n```python\ncoefficients = df.select(pl.col(\"y\").least_squares.from_formula(\"x1 + x2\", mode=\"coefficients\")\n .alias(\"coefficients\"))\n\ncoefficients_group = df.select(\"group\", pl.col(\"y\").least_squares.from_formula(\"x1 + x2\", mode=\"coefficients\").over(\"group\")\n .alias(\"coefficients_group\")).unique(maintain_order=True)\n\nprint(coefficients)\nprint(coefficients_group)\n```\n```\nshape: (1, 1)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 coefficients \u2502\n\u2502 --- \u2502\n\u2502 struct[3] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 {0.977375,0.987413,0.000757} \u2502 # <--- coef for x1, x2, 
and intercept added by formula API\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\nshape: (2, 2)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 group \u2506 coefficients_group \u2502\n\u2502 --- \u2506 --- \u2502\n\u2502 i64 \u2506 struct[3] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 1 \u2506 {0.995157,0.977495,0.014344} \u2502\n\u2502 2 \u2506 {0.939217,0.997441,-0.017599} \u2502 # <--- (unique) coefficients per group\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\nFor dynamic models (like `rolling_ols`) or if in a `.over`, `.group_by`, or `.with_columns` context, the\ncoefficients will take the shape of the data it is applied on. 
For example:\n\n```python\ncoefficients = df.with_columns(pl.col(\"y\").least_squares.rls(pl.col(\"x1\"), pl.col(\"x2\"), mode=\"coefficients\")\n .over(\"group\").alias(\"coefficients\"))\n\nprint(coefficients.head())\n```\n```\nshape: (5, 6)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 y \u2506 x1 \u2506 x2 \u2506 group \u2506 weights \u2506 coefficients \u2502\n\u2502 --- \u2506 --- \u2506 --- \u2506 --- \u2506 --- \u2506 --- \u2502\n\u2502 f64 \u2506 f64 \u2506 f64 \u2506 i64 \u2506 f64 \u2506 struct[2] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 1.16 \u2506 0.72 \u2506 0.24 \u2506 1 \u2506 0.34 \u2506 {1.235503,0.411834} \u2502\n\u2502 -2.16 \u2506 -2.43 \u2506 0.18 \u2506 1 \u2506 0.97 \u2506 {0.963515,0.760769} \u2502\n\u2502 -1.57 \u2506 -0.63 \u2506 -0.95 \u2506 1 \u2506 0.39 \u2506 {0.975484,0.966029} \u2502\n\u2502 0.21 \u2506 0.05 \u2506 0.23 \u2506 1 \u2506 0.8 \u2506 {0.975657,0.953735} \u2502\n\u2502 0.22 \u2506 -0.07 \u2506 0.44 \u2506 1 \u2506 0.57 \u2506 {0.97898,0.909793} 
\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\nFor plain OLS/WLS and Ridge models, support has been recently added for producing a simple statistical significance\nreport. It can be used as such:\n\n```python\nstatistics = (df.select(\n pl.col(\"y\").least_squares.ols(pl.col(\"x1\", \"x2\"), mode=\"statistics\", add_intercept=True)\n)\n.unnest(\"statistics\") # results stored in a nested series by default\n.explode([\"feature_names\", \"coefficients\", \"standard_errors\", \"t_values\", \"p_values\"])\n)\n\nprint(statistics)\n```\n```\nshape: (3, 8)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 r2 \u2506 mae \u2506 mse \u2506 feature_name \u2506 coefficients \u2506 standard_er \u2506 t_values \u2506 p_values \u2502\n\u2502 --- \u2506 --- \u2506 --- \u2506 s \u2506 --- \u2506 rors \u2506 --- \u2506 --- \u2502\n\u2502 f64 \u2506 f64 \u2506 f64 \u2506 --- \u2506 f64 \u2506 --- \u2506 f64 \u2506 f64 \u2502\n\u2502 \u2506 \u2506 \u2506 str \u2506 \u2506 f64 \u2506 \u2506 
\u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 0.99631 \u2506 0.061732 \u2506 0.00794 \u2506 x1 \u2506 0.977375 \u2506 0.037286 \u2506 26.212765 \u2506 3.0095e-8 \u2502\n\u2502 0.99631 \u2506 0.061732 \u2506 0.00794 \u2506 x2 \u2506 0.987413 \u2506 0.037321 \u2506 26.457169 \u2506 2.8218e-8 \u2502\n\u2502 0.99631 \u2506 0.061732 \u2506 0.00794 \u2506 const \u2506 0.000757 \u2506 0.037474 \u2506 0.02021 \u2506 0.98444 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\nFinally, for convenience, in order to compute out-of-sample predictions you can use:\n```least_squares.{predict, predict_from_formula}```. This saves you the effort of un-nesting the coefficients and doing the dot product in\npython and instead does this in Rust, as an expression. 
Usage is as follows:\n\n```python\ndf_test.select(pl.col(\"coefficients_train\").least_squares.predict(pl.col(\"x1\"), pl.col(\"x2\")).alias(\"predictions_test\"))\n```\n\n\nSupported Models\n------------\n\nCurrently, this extension package supports the following variants:\n- Ordinary Least Squares: ```least_squares.ols```\n- Weighted Least Squares: ```least_squares.wls```\n- Regularized Least Squares (Lasso / Ridge / Elastic Net) ```least_squares.{lasso, ridge, elastic_net}```\n- Non-negative Least Squares: ```least_squares.nnls```\n- Multi-target Least Squares: ```least_squares.multi_target_ols```\n\nAs well as efficient implementations of moving window models:\n- Recursive Least Squares: ```least_squares.rls```\n- Rolling / Expanding Window OLS: ```least_squares.{rolling_ols, expanding_ols}```\n\nAn arbitrary combination of sample_weights, L1/L2 penalties, and non-negativity constraints can be specified with\nthe ```least_squares.from_formula``` and ```least_squares.least_squares``` entry-points.\n\nSolve Methods\n------------\n\n`polars-ols` provides a choice over multiple supported numerical approaches per model (via `solve_method` flag),\nwith implications on performance vs numerical accuracy. These choices are exposed to the user for full control,\nhowever, if left unspecified the package will choose a reasonable default depending on context.\n\nFor example, if you know you are dealing with highly collinear data, with unregularized OLS model, you may want to\nexplicitly set `solve_method=\"svd\"` so that the minimum norm solution is obtained.\n\nBenchmark\n------------\nThe usual caveats of benchmarks apply here, but the below should still be indicative of the\ntype of performance improvements to expect when using this package.\n\nThis benchmark was run on randomly generated data with [pyperf](https://github.com/psf/pyperf) on my Apple M2 Max macbook\n(32GB RAM, MacOS Sonoma 14.2.1). 
See [benchmark.py](./tests/benchmark.py) for the implementation.

<a id="benchmark"></a>

### n_samples=2_000, n_features=5
| Model                   | polars_ols        | Python Benchmark  | Benchmark Type     | Speed-up vs Python Benchmark |
|-------------------------|-------------------|-------------------|--------------------|------------------------------|
| Least Squares (QR)      | 195 µs ± 6 µs     | 466 µs ± 104 µs   | Numpy (QR)         | 2.4x                         |
| Least Squares (SVD)     | 247 µs ± 5 µs     | 395 µs ± 69 µs    | Numpy (SVD)        | 1.6x                         |
| Ridge (Cholesky)        | 171 µs ± 8 µs     | 1.02 ms ± 0.29 ms | Sklearn (Cholesky) | 5.9x                         |
| Ridge (SVD)             | 238 µs ± 7 µs     | 1.12 ms ± 0.41 ms | Sklearn (SVD)      | 4.7x                         |
| Weighted Least Squares  | 334 µs ± 13 µs    | 2.04 ms ± 0.22 ms | Statsmodels        | 6.1x                         |
| Elastic Net (CD)        | 227 µs ± 7 µs     | 1.18 ms ± 0.19 ms | Sklearn            | 5.2x                         |
| Recursive Least Squares | 1.12 ms ± 0.23 ms | 18.2 ms ± 1.6 ms  | Statsmodels        | 16.2x                        |
| Rolling Least Squares   | 1.99 ms ± 0.03 ms | 22.1 ms ± 0.2 ms  | Statsmodels        | 11.1x                        |

### n_samples=10_000, n_features=100
| Model                   | polars_ols        | Python Benchmark    | Benchmark Type     | Speed-up vs Python Benchmark |
|-------------------------|-------------------|---------------------|--------------------|------------------------------|
| Least Squares (QR)      | 17.6 ms ± 0.3 ms  | 44.4 ms ± 9.3 ms    | Numpy (QR)         | 2.5x                         |
| Least Squares (SVD)     | 23.8 ms ± 0.2 ms  | 26.6 ms ± 5.5 ms    | Numpy (SVD)        | 1.1x                         |
| Ridge (Cholesky)        | 5.36 ms ± 0.16 ms | 475 ms ± 71 ms      | Sklearn (Cholesky) | 88.7x                        |
| Ridge (SVD)             | 30.2 ms ± 0.4 ms  | 400 ms ± 48 ms      | Sklearn (SVD)      | 13.2x                        |
| Weighted Least Squares  | 18.8 ms ± 0.3 ms  | 80.4 ms ± 12.4 ms   | Statsmodels        | 4.3x                         |
| Elastic Net (CD)        | 22.7 ms ± 0.2 ms  | 138 ms ± 27 ms      | Sklearn            | 6.1x                         |
| Recursive Least Squares | 270 ms ± 53 ms    | 57.8 sec ± 43.7 sec | Statsmodels        | 1017.0x                      |
| Rolling Least Squares   | 371 ms ± 13 ms    | 4.41 sec ± 0.17 sec | Statsmodels        | 11.9x                        |

- Numpy's `lstsq` (which uses divide-and-conquer SVD) is already a highly optimized call into LAPACK, so the scope for speed-up is relatively limited; the same applies to simple approaches like directly solving the normal equations with Cholesky.
- Even in such problems, however, the Rust implementations in `polars-ols` tend to outperform matching numerical algorithms by ~2-3x.
- More substantial speed-ups are achieved for the more complex models by working entirely in Rust, avoiding the overhead of round trips into Python.
- Expect an additional order-of-magnitude relative speed-up if your workflow involved repeated re-estimation of models in (Python) loops.


Credits & Related Projects
------------
- The Rust linear algebra libraries [faer](https://faer-rs.github.io/getting-started.html) and [ndarray](https://docs.rs/ndarray/latest/ndarray/) support the implementations provided by this extension package
- This package was templated around the very helpful [polars-plugin-tutorial](https://marcogorelli.github.io/polars-plugins-tutorial/)
- The python package [patsy](https://patsy.readthedocs.io/en/latest/formulas.html) is used for (optionally) building models from formulae
- Please check out the extension package [polars-ds](https://github.com/abstractqqq/polars_ds_extension) for general data-science functionality in polars

Future Work / TODOs
------------
- Support generic types in the rust implementations, so that both f32 and f64 inputs are recognized. Right now data is cast to f64 prior to estimation
- Add docs explaining supported models, signatures, and the API
"bugtrack_url": null,
"license": null,
"summary": "Polars Least Squares Extension",
"version": "0.3.5",
"project_urls": null,
"split_keywords": [
"polars-extension",
" linear-regression"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "306ac1a955f68f04ef18a6c1b8e185b23f49a34b86e72623f809166f2fbaa3c9",
"md5": "6a767d55bbf66fa87cbce47165e1c47f",
"sha256": "79c71a38cc86de74a3312dec80f578a0c5dee0eec66064bde7d52b98c8b0cbe9"
},
"downloads": -1,
"filename": "polars_ols-0.3.5-cp38-abi3-macosx_10_12_x86_64.whl",
"has_sig": false,
"md5_digest": "6a767d55bbf66fa87cbce47165e1c47f",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.8",
"size": 11294762,
"upload_time": "2024-08-25T16:22:11",
"upload_time_iso_8601": "2024-08-25T16:22:11.703522Z",
"url": "https://files.pythonhosted.org/packages/30/6a/c1a955f68f04ef18a6c1b8e185b23f49a34b86e72623f809166f2fbaa3c9/polars_ols-0.3.5-cp38-abi3-macosx_10_12_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "d0618fe5d3bcc15a671336eb6cfe3b34c1ddcbd969f3552225d815dc729e22c5",
"md5": "1e4d16e9e3d3c41328c5032ae3c63b88",
"sha256": "fe4b97b168c5404d796463e97db851a94607c79a909fdaeedc4629b56649a2fb"
},
"downloads": -1,
"filename": "polars_ols-0.3.5-cp38-abi3-macosx_11_0_arm64.whl",
"has_sig": false,
"md5_digest": "1e4d16e9e3d3c41328c5032ae3c63b88",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.8",
"size": 10243546,
"upload_time": "2024-08-25T16:22:14",
"upload_time_iso_8601": "2024-08-25T16:22:14.671180Z",
"url": "https://files.pythonhosted.org/packages/d0/61/8fe5d3bcc15a671336eb6cfe3b34c1ddcbd969f3552225d815dc729e22c5/polars_ols-0.3.5-cp38-abi3-macosx_11_0_arm64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "20bd711cf2815e79d83e9057ffa893036d78210f3411bf638a78496635da49c9",
"md5": "53edbc8da0143af8bb3ded4d506acd89",
"sha256": "fe5b08142fdd85d4beaecf9ab8bdf16b93db9919b2706755ae90e362a985f01d"
},
"downloads": -1,
"filename": "polars_ols-0.3.5-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"has_sig": false,
"md5_digest": "53edbc8da0143af8bb3ded4d506acd89",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.8",
"size": 11920671,
"upload_time": "2024-08-25T16:22:17",
"upload_time_iso_8601": "2024-08-25T16:22:17.541733Z",
"url": "https://files.pythonhosted.org/packages/20/bd/711cf2815e79d83e9057ffa893036d78210f3411bf638a78496635da49c9/polars_ols-0.3.5-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "4da233b0cdbec45b859cf93657c4978ab2e4747235e5ab10138c16ce2622ada8",
"md5": "114285818cdbee52a4be9b2497fcb1b8",
"sha256": "fa2880802d42076a935b5c1bd9e114e8f3bb4126290330b63aaf32eda6381a51"
},
"downloads": -1,
"filename": "polars_ols-0.3.5-cp38-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl",
"has_sig": false,
"md5_digest": "114285818cdbee52a4be9b2497fcb1b8",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.8",
"size": 12108217,
"upload_time": "2024-08-25T16:22:20",
"upload_time_iso_8601": "2024-08-25T16:22:20.057141Z",
"url": "https://files.pythonhosted.org/packages/4d/a2/33b0cdbec45b859cf93657c4978ab2e4747235e5ab10138c16ce2622ada8/polars_ols-0.3.5-cp38-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "b1bf95a3841b43ede52a01ec02ad59fcaac92804ab208637f2c25694833a5399",
"md5": "7179748646f6f89206d36670734968f1",
"sha256": "5fe50e4032e7430e5ed154f426f4c83973a9e30120211afd122c597bea19828f"
},
"downloads": -1,
"filename": "polars_ols-0.3.5-cp38-abi3-manylinux_2_17_i686.manylinux2014_i686.whl",
"has_sig": false,
"md5_digest": "7179748646f6f89206d36670734968f1",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.8",
"size": 13592335,
"upload_time": "2024-08-25T16:22:23",
"upload_time_iso_8601": "2024-08-25T16:22:23.110947Z",
"url": "https://files.pythonhosted.org/packages/b1/bf/95a3841b43ede52a01ec02ad59fcaac92804ab208637f2c25694833a5399/polars_ols-0.3.5-cp38-abi3-manylinux_2_17_i686.manylinux2014_i686.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "0e40c760dc5eaa03bf6ff3e9d09c723b7d7b02521a5a45324910b74d6aa19480",
"md5": "8a201a22dfedaf65ab275f2236cca69a",
"sha256": "7b8c0f03c6bf6dd9e0384f4d95183989724fd8e5703d20daade1754631f5b7d9"
},
"downloads": -1,
"filename": "polars_ols-0.3.5-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
"has_sig": false,
"md5_digest": "8a201a22dfedaf65ab275f2236cca69a",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.8",
"size": 15267885,
"upload_time": "2024-08-25T16:22:26",
"upload_time_iso_8601": "2024-08-25T16:22:26.114676Z",
"url": "https://files.pythonhosted.org/packages/0e/40/c760dc5eaa03bf6ff3e9d09c723b7d7b02521a5a45324910b74d6aa19480/polars_ols-0.3.5-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "131db3f7c57ca9ab7bf878b766467e68f4adcc5e41a94eee68c8d0c7bcd948d2",
"md5": "6b90ae8bdeb26146fe1ebd978b87cfbb",
"sha256": "8e33fc7d6cb62cb7930b50a5dd428ce9dfd5bb3587463672269ed65776aa8081"
},
"downloads": -1,
"filename": "polars_ols-0.3.5-cp38-abi3-win32.whl",
"has_sig": false,
"md5_digest": "6b90ae8bdeb26146fe1ebd978b87cfbb",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.8",
"size": 8963620,
"upload_time": "2024-08-25T16:22:28",
"upload_time_iso_8601": "2024-08-25T16:22:28.697919Z",
"url": "https://files.pythonhosted.org/packages/13/1d/b3f7c57ca9ab7bf878b766467e68f4adcc5e41a94eee68c8d0c7bcd948d2/polars_ols-0.3.5-cp38-abi3-win32.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "ea5f5a7b73ab8bead1b453a8b9fcca1bf024dd7b057177c19bbc523537c4bfcd",
"md5": "44efb4ccfd5ccc1aefacf9bf29edb35e",
"sha256": "075e27381781895a9ada73296777304fcd2752797c4f7e17f3c30b57ae4a2120"
},
"downloads": -1,
"filename": "polars_ols-0.3.5-cp38-abi3-win_amd64.whl",
"has_sig": false,
"md5_digest": "44efb4ccfd5ccc1aefacf9bf29edb35e",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.8",
"size": 10338290,
"upload_time": "2024-08-25T16:22:31",
"upload_time_iso_8601": "2024-08-25T16:22:31.476286Z",
"url": "https://files.pythonhosted.org/packages/ea/5f/5a7b73ab8bead1b453a8b9fcca1bf024dd7b057177c19bbc523537c4bfcd/polars_ols-0.3.5-cp38-abi3-win_amd64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "2bda56fdbd7c00d0e9bf82fe88a55d4cd1132c834619456a375b1821da4bb865",
"md5": "e15f2e646148a1a0596b03a1a8cfa5cb",
"sha256": "b507aa33c920573f3e3a152b6cb4e4429b3892db5e393fd2cb5cd7cfb53d7e77"
},
"downloads": -1,
"filename": "polars_ols-0.3.5.tar.gz",
"has_sig": false,
"md5_digest": "e15f2e646148a1a0596b03a1a8cfa5cb",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 84767,
"upload_time": "2024-08-25T16:22:33",
"upload_time_iso_8601": "2024-08-25T16:22:33.613487Z",
"url": "https://files.pythonhosted.org/packages/2b/da/56fdbd7c00d0e9bf82fe88a55d4cd1132c834619456a375b1821da4bb865/polars_ols-0.3.5.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-08-25 16:22:33",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "polars-ols"
}