Name: polars-upgrade
Version: 0.3.6
Summary: Automatically upgrade Polars code to the latest version.
Requires Python: >=3.9
Upload time: 2024-06-18 08:51:53
polars-upgrade
==============

<p align="center">
<img width="200" src="https://github.com/MarcoGorelli/polars-upgrade/assets/33491632/91a50994-af5d-4abd-8e9f-e3125258c167">
</p>

[![PyPI version](https://badge.fury.io/py/polars-upgrade.svg)](https://badge.fury.io/py/polars-upgrade)

Automatically upgrade your Polars code so it's compatible with future versions.

## Installation

Easy:
```
pip install -U polars-upgrade
```

## Usage (command-line)

Run
```
polars-upgrade my_project --target-version=0.20.31
```
from the command line. Replace `0.20.31` and `my_project` with your Polars version,
and the name of your directory.
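As an aside, the `--target-version` string maps naturally onto a tuple of ints, which is also the form the library API's `Settings` expects (see below). This small helper is hypothetical, not part of polars-upgrade, but shows the correspondence:

```python
def parse_target_version(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '0.20.31' into an int tuple.

    Hypothetical helper, NOT part of polars-upgrade's API: int tuples
    compare lexicographically, so version ordering works as expected.
    """
    return tuple(int(part) for part in version.split("."))

print(parse_target_version("0.20.31"))  # (0, 20, 31)
```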

NOTE: this tool will modify your code!
You're advised to stage your files before running it.

## Usage (pre-commit hook)

```yaml
-   repo: https://github.com/MarcoGorelli/polars-upgrade
    rev: 0.3.6  # polars-upgrade version goes here
    hooks:
    -   id: polars-upgrade
        args: [--target-version=0.20.31]  # Polars version goes here
```

## Usage (Jupyter Notebooks)

Install [nbqa](https://github.com/nbQA-dev/nbQA) and then run
```
nbqa polars_upgrade my_project --target-version=0.20.31
```

## Usage (library)

In a Python script:
```python
from polars_upgrade import rewrite, Settings

src = """\
import polars as pl
df.select(pl.count())
"""
settings = Settings(target_version=(0, 20, 4))
output = rewrite(src, settings=settings)
print(output)
```
Output:
```
import polars as pl
df.select(pl.len())
```

If your snippet does _not_ include `import polars` or `import polars as pl`,
then you will also need to pass `pl` and/or `polars` via `aliases`, else `polars-upgrade` will
not perform the rewrite. Example:

```python
from polars_upgrade import rewrite, Settings

src = """\
df.select(pl.count())
"""
settings = Settings(target_version=(0, 20, 4))
output = rewrite(src, settings=settings, aliases={'pl'})
print(output)
```
Output:
```
df.select(pl.len())
```
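To give a feel for how version-gated rewrites like the one above work, here is a toy, stdlib-only sketch. It is NOT polars-upgrade's actual implementation (which, like pyupgrade, operates on tokens rather than regexes), and the rule table below is a hypothetical two-entry sample; it only illustrates how a `target_version` tuple gates which renames apply:

```python
import re

# Hypothetical rule table: (version that introduced the new name, old-name
# pattern, replacement). Real rules are token-based, not regex-based.
RULES = [
    ((0, 18, 12), r"\bpl\.avg\b", "pl.mean"),
    ((0, 20, 4), r"\bpl\.count\(\)", "pl.len()"),
]

def toy_rewrite(src: str, target_version: tuple[int, int, int]) -> str:
    """Apply each rename whose minimum version is <= target_version."""
    for min_version, pattern, replacement in RULES:
        if target_version >= min_version:  # int tuples compare lexicographically
            src = re.sub(pattern, replacement, src)
    return src

print(toy_rewrite("df.select(pl.count())", (0, 20, 4)))  # rewritten to pl.len()
print(toy_rewrite("df.select(pl.count())", (0, 19, 0)))  # left unchanged
```

This is why targeting an older Polars version leaves newer renames alone: the rule's minimum version is compared against the target before anything is touched.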

## Supported rewrites

### Version 0.18.12+

```diff
- pl.avg
+ pl.mean
```

### Version 0.19.0+

```diff
- df.groupby_dynamic
+ df.group_by_dynamic
- df.groupby_rolling
+ df.rolling
- df.rolling('ts', period='3d').apply
+ df.rolling('ts', period='3d').map_groups
- pl.col('a').rolling_apply
+ pl.col('a').rolling_map
- pl.col('a').apply
+ pl.col('a').map_elements
- pl.col('a').map
+ pl.col('a').map_batches
- pl.map
+ pl.map_batches
- pl.apply
+ pl.map_groups
- pl.col('a').any(drop_nulls=True)
+ pl.col('a').any(ignore_nulls=True)
- pl.col('a').all(drop_nulls=True)
+ pl.col('a').all(ignore_nulls=True)
- pl.col('a').value_counts(multithreaded=True)
+ pl.col('a').value_counts(parallel=True)
```

### Version 0.19.2+
```diff
- pl.col('a').is_not
+ pl.col('a').not_
```

### Version 0.19.3+

```diff
- pl.enable_string_cache(True)
+ pl.enable_string_cache()
- pl.enable_string_cache(False)
+ pl.disable_string_cache()
- pl.col('a').list.count_match
+ pl.col('a').list.count_matches
- pl.col('a').is_last
+ pl.col('a').is_last_distinct
- pl.col('a').is_first
+ pl.col('a').is_first_distinct
- pl.col('a').str.strip
+ pl.col('a').str.strip_chars
- pl.col('a').str.lstrip
+ pl.col('a').str.strip_chars_start
- pl.col('a').str.rstrip
+ pl.col('a').str.strip_chars_end
- pl.col('a').str.count_match
+ pl.col('a').str.count_matches
- pl.col("dt").dt.offset_by("1mo_saturating")
+ pl.col("dt").dt.offset_by("1mo")
```

### Version 0.19.4+
```diff
- df.group_by_dynamic('ts', every='3d', truncate=True)
+ df.group_by_dynamic('ts', every='3d', label='left')
- df.group_by_dynamic('ts', every='3d', truncate=False)
+ df.group_by_dynamic('ts', every='3d', label='datapoint')
```

### Version 0.19.8+
```diff
- pl.col('a').list.lengths
+ pl.col('a').list.len
- pl.col('a').str.lengths
+ pl.col('a').str.len_bytes
- pl.col('a').str.n_chars
+ pl.col('a').str.len_chars
```

### Version 0.19.11+
```diff
- pl.col('a').shift(periods=4)
+ pl.col('a').shift(n=4)
- pl.col('a').shift_and_fill(periods=4)
+ pl.col('a').shift_and_fill(n=4)
- pl.col('a').list.shift(periods=4)
+ pl.col('a').list.shift(n=4)
- pl.col('a').map_dict(remapping={1: 2})
+ pl.col('a').map_dict(mapping={1: 2})
```

### Version 0.19.12+
```diff
- pl.col('a').keep_name
+ pl.col('a').name.keep
- pl.col('a').suffix
+ pl.col('a').name.suffix
- pl.col('a').prefix
+ pl.col('a').name.prefix
- pl.col('a').map_alias
+ pl.col('a').name.map
- pl.col('a').str.ljust
+ pl.col('a').str.pad_end
- pl.col('a').str.rjust
+ pl.col('a').str.pad_start
- pl.col('a').zfill(alignment=3)
+ pl.col('a').zfill(length=3)
- pl.col('a').ljust(width=3)
+ pl.col('a').ljust(length=3)
- pl.col('a').rjust(width=3)
+ pl.col('a').rjust(length=3)
```

### Version 0.19.13+
```diff
- pl.col('a').dt.milliseconds
+ pl.col('a').dt.total_milliseconds
- pl.col('a').dt.microseconds
+ pl.col('a').dt.total_microseconds
- pl.col('a').dt.nanoseconds
+ pl.col('a').dt.total_nanoseconds
```
(and so on for other units)

### Version 0.19.14+
```diff
- pl.col('a').list.take
+ pl.col('a').list.gather
- pl.col('a').cumcount
+ pl.col('a').cum_count
- pl.col('a').cummax
+ pl.col('a').cum_max
- pl.col('a').cummin
+ pl.col('a').cum_min
- pl.col('a').cumprod
+ pl.col('a').cum_prod
- pl.col('a').cumsum
+ pl.col('a').cum_sum
- pl.col('a').take
+ pl.col('a').gather
- pl.col('a').take_every
+ pl.col('a').gather_every
- pl.cumsum
+ pl.cum_sum
- pl.cumfold
+ pl.cum_fold
- pl.cumreduce
+ pl.cum_reduce
- pl.cumsum_horizontal
+ pl.cum_sum_horizontal
- pl.col('a').list.take(index=[1, 2])
+ pl.col('a').list.take(indices=[1, 2])
- pl.col('a').str.parse_int(radix=1)
+ pl.col('a').str.parse_int(base=1)
```

### Version 0.19.15+
```diff
- pl.col('a').str.json_extract
+ pl.col('a').str.json_decode
```

### Version 0.19.16+
```diff
- pl.col('a').map_dict({'a': 'b'})
+ pl.col('a').replace({'a': 'b'}, default=None)
- pl.col('a').map_dict({'a': 'b'}, default='c')
+ pl.col('a').replace({'a': 'b'}, default='c')
```

### Version 0.20.0+
```diff
- df.write_database(table_name='foo', if_exists="append")
+ df.write_database(table_name='foo', if_table_exists="append")
```

### Version 0.20.4+
```diff
- pl.col('a').where
+ pl.col('a').filter
- pl.count()
+ pl.len()
- df.with_row_count('row_number')
+ df.with_row_index('row_number')
- pl.scan_ndjson(source, row_count_name='foo', row_count_offset=3)
+ pl.scan_ndjson(source, row_index_name='foo', row_index_offset=3)
[...and similarly for `read_csv`, `read_csv_batched`, `scan_csv`, `read_ipc`, `read_ipc_stream`, `scan_ipc`, `read_parquet`, `scan_parquet`]
```

### Version 0.20.5+
```diff
- df.pivot(index=index, values=values, columns=columns, aggregate_function='count')
+ df.pivot(index=index, values=values, columns=columns, aggregate_function='len')
```

### Version 0.20.6+
```diff
- pl.read_excel(source, xlsx2csv_options=options, read_csv_options=read_options)
+ pl.read_excel(source, engine_options=options, read_options=read_options)
```

### Version 0.20.7+
```diff
- pl.threadpool_size
+ pl.thread_pool_size
```

### Version 0.20.8+
```diff
- df.pivot(a, b, c)
+ df.pivot(values=a, index=b, columns=c)
```

### Version 0.20.11+
```diff
- pl.col('a').meta.write_json
+ pl.col('a').meta.serialize
```

### Version 0.20.14+
```diff
- df.group_by_dynamic('time', every='2d', by='symbol')
+ df.group_by_dynamic('time', every='2d', group_by='symbol')
- df.rolling('time', period='2d', by='symbol')
+ df.rolling('time', period='2d', group_by='symbol')
- df.upsample('time', every='2d', by='symbol')
+ df.upsample('time', every='2d', group_by='symbol')
```

### Version 0.20.17+
```diff
- pl.from_repr(tbl=data)
+ pl.from_repr(data=data)
```

### Version 0.20.24+
```diff
- pl.col('a').rolling_min('2d', by='time')
+ pl.col('a').rolling_min_by(window_size='2d', by='time')
- pl.col('a').rolling_max('2d', by='time')
+ pl.col('a').rolling_max_by(window_size='2d', by='time')
- pl.col('a').rolling_mean('2d', by='time')
+ pl.col('a').rolling_mean_by(window_size='2d', by='time')
- pl.col('a').rolling_std('2d', by='time')
+ pl.col('a').rolling_std_by(window_size='2d', by='time')
- pl.col('a').rolling_var('2d', by='time')
+ pl.col('a').rolling_var_by(window_size='2d', by='time')
- pl.col('a').rolling_prod('2d', by='time')
+ pl.col('a').rolling_prod_by(window_size='2d', by='time')
- pl.col('a').rolling_sum('2d', by='time')
+ pl.col('a').rolling_sum_by(window_size='2d', by='time')
```

### Version 0.20.29+
```diff
- df.join(df_right, how='outer')
+ df.join(df_right, how='full')
- df.join(df_right, how='outer_coalesce')
+ df.join(df_right, how='full', coalesce=True)
```

### Version 0.20.31+
```diff
- pl.read_csv(file, dtypes=schema)
+ pl.read_csv(file, schema=schema)
- pl.SQLContext(eager_execution=True)
+ pl.SQLContext(eager=True)
- pl.col('a').top_k(k=2, maintain_order=True)
+ pl.col('a').top_k(k=2)
```

## Notes

This work is derivative of [pyupgrade](https://github.com/asottile/pyupgrade) - many parts
have been lifted verbatim. As required, I've included pyupgrade's license.
