<!---
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
# DataFusion in Python
[Python test workflow](https://github.com/apache/datafusion-python/actions/workflows/test.yaml)
[Python release build workflow](https://github.com/apache/datafusion-python/actions/workflows/build.yml)
This is a Python library that binds to the [Apache Arrow](https://arrow.apache.org/)-based in-memory query engine [DataFusion](https://github.com/apache/datafusion).
DataFusion's Python bindings can be used as a foundation for building new data systems in Python. Here are some examples:
- [Dask SQL](https://github.com/dask-contrib/dask-sql) uses DataFusion's Python bindings for SQL parsing, query
planning, and logical plan optimizations, and then transpiles the logical plan to Dask operations for execution.
- [DataFusion Ballista](https://github.com/apache/datafusion-ballista) is a distributed SQL query engine that extends
DataFusion's Python bindings for distributed use cases.
- [DataFusion Ray](https://github.com/apache/datafusion-ray) is another distributed query engine that uses
DataFusion's Python bindings.
## Features
- Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
- Queries are optimized using DataFusion's query optimizer.
- Execute user-defined Python code from SQL (a short sketch follows this list).
- Exchange data with Pandas and other DataFrame libraries that support PyArrow.
- Serialize and deserialize query plans in Substrait format.
- Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
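As a small illustration of the user-defined function support, here is a minimal sketch (the table name `example`, the column `a`, and the data are illustrative): a Python function over PyArrow arrays is wrapped with `udf`, registered on the context, and then invoked from SQL.

```python
import pyarrow as pa

from datafusion import SessionContext, udf

ctx = SessionContext()

# Build a small in-memory DataFrame and expose it to SQL as "example"
df = ctx.from_pydict({"a": [1, None, 3]})
ctx.register_view("example", df)


def is_null(array: pa.Array) -> pa.Array:
    # Scalar UDFs receive and return whole Arrow arrays
    return array.is_null()


# Wrap the Python function as a scalar UDF and register it with the context
is_null_udf = udf(is_null, [pa.int64()], pa.bool_(), "stable")
ctx.register_udf(is_null_udf)

ctx.sql("SELECT a, is_null(a) AS a_is_null FROM example").show()
```

The UDF and UDAF examples linked under "More Examples" below show the same pattern against real Parquet data.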
## Example Usage
The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results
in a Pandas DataFrame, and then plotting a chart.
The Parquet file used in this example can be downloaded from the following page:
- https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page
```python
from datafusion import SessionContext
# Create a DataFusion context
ctx = SessionContext()
# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')
# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")
# convert to Pandas
pandas_df = df.to_pandas()
# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')
```
This produces a bar chart of trip counts by number of passengers (chart image not included here).
## Registering a DataFrame as a View
You can use SessionContext's `register_view` method to convert a DataFrame into a view and register it with the context.
```python
from datafusion import SessionContext, col, literal
# Create a DataFusion context
ctx = SessionContext()
# Create sample data
data = {"a": [1, 2, 3, 4, 5], "b": [10, 20, 30, 40, 50]}
# Create a DataFrame from the dictionary
df = ctx.from_pydict(data, "my_table")
# Filter the DataFrame (for example, keep rows where a > 2)
df_filtered = df.filter(col("a") > literal(2))
# Register the dataframe as a view with the context
ctx.register_view("view1", df_filtered)
# Now run a SQL query against the registered view
df_view = ctx.sql("SELECT * FROM view1")
# Collect the results
results = df_view.collect()
# Convert results to a list of dictionaries for display
result_dicts = [batch.to_pydict() for batch in results]
print(result_dicts)
```
This will output:
```python
[{'a': [3, 4, 5], 'b': [30, 40, 50]}]
```
## Configuration
Both runtime settings (memory and disk management) and session configuration options can be specified when creating a context.
```python
from datafusion import RuntimeEnvBuilder, SessionConfig, SessionContext

runtime = (
    RuntimeEnvBuilder()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)
```
Refer to the [API documentation](https://arrow.apache.org/datafusion-python/#api-reference) for more information.
Printing the context will show the current configuration settings.
```python
print(ctx)
```
## Extensions
For information about how to extend DataFusion Python, please see the extensions page of the
[online documentation](https://datafusion.apache.org/python/).
## More Examples
See [examples](examples/README.md) for more information.
### Executing Queries with DataFusion
- [Query a Parquet file using SQL](https://github.com/apache/datafusion-python/blob/main/examples/sql-parquet.py)
- [Query a Parquet file using the DataFrame API](https://github.com/apache/datafusion-python/blob/main/examples/dataframe-parquet.py)
- [Run a SQL query and store the results in a Pandas DataFrame](https://github.com/apache/datafusion-python/blob/main/examples/sql-to-pandas.py)
- [Run a SQL query with a Python user-defined function (UDF)](https://github.com/apache/datafusion-python/blob/main/examples/sql-using-python-udf.py)
- [Run a SQL query with a Python user-defined aggregation function (UDAF)](https://github.com/apache/datafusion-python/blob/main/examples/sql-using-python-udaf.py)
- [Query PyArrow Data](https://github.com/apache/datafusion-python/blob/main/examples/query-pyarrow-data.py)
- [Create dataframe](https://github.com/apache/datafusion-python/blob/main/examples/import.py)
- [Export dataframe](https://github.com/apache/datafusion-python/blob/main/examples/export.py)
### Running User-Defined Python Code
- [Register a Python UDF with DataFusion](https://github.com/apache/datafusion-python/blob/main/examples/python-udf.py)
- [Register a Python UDAF with DataFusion](https://github.com/apache/datafusion-python/blob/main/examples/python-udaf.py)
### Substrait Support
- [Serialize query plans using Substrait](https://github.com/apache/datafusion-python/blob/main/examples/substrait.py)
## How to install
### uv
```bash
uv add datafusion
```
### Pip
```bash
pip install datafusion
# or
python -m pip install datafusion
```
### Conda
```bash
conda install -c conda-forge datafusion
```
You can verify the installation by running:
```python
>>> import datafusion
>>> datafusion.__version__
'48.0.0'
```
## How to develop
This assumes that you have Rust and Cargo installed. We use the workflow recommended by [pyo3](https://github.com/PyO3/pyo3) and [maturin](https://github.com/PyO3/maturin). The maturin tooling used in this workflow can be installed via either `uv` or `pip`; both approaches offer the same experience, but `uv` is recommended since it is significantly faster than `pip`.
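For reference, if you want `maturin` available directly on your PATH, either of these standard installation commands works (shown only as an illustration; the bootstrap steps below pull in the dev dependencies anyway):

```bash
uv tool install maturin
# or
python -m pip install maturin
```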
Bootstrap (`uv`):
By default, `uv` will attempt to build the datafusion Python package itself. For development we prefer to build manually, which means that when creating your virtual environment with `uv sync` you need to pass the additional flag `--no-install-package datafusion`, and `uv run` commands need the additional parameter `--no-project`.
```bash
# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# create the virtual environment
uv sync --dev --no-install-package datafusion
# activate the environment
source .venv/bin/activate
```
Bootstrap (`pip`):
```bash
# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# prepare development environment (used to build wheel / install in development)
python3 -m venv .venv
# activate the venv
source .venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies (the dev dependency group from pyproject.toml;
# assumes a pip version with PEP 735 --group support, i.e. pip >= 25.1)
python -m pip install --group dev
```
The tests rely on test data in git submodules.
```bash
git submodule update --init
```
Whenever rust code changes (your changes or via `git pull`):
```bash
# make sure you activate the venv using "source .venv/bin/activate" first
maturin develop --uv
python -m pytest
```
Alternatively, if you are using `uv`, you can do the following without
needing to activate the virtual environment:
```bash
uv run --no-project maturin develop --uv
uv run --no-project pytest .
```
### Running & Installing pre-commit hooks
`datafusion-python` takes advantage of [pre-commit](https://pre-commit.com/) to assist developers with code linting to help reduce
the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional for the
developer but certainly helpful for keeping PRs clean and concise.
Our pre-commit hooks can be installed by running `pre-commit install`, which will install the configurations in
your DATAFUSION_PYTHON_ROOT/.github directory and run each time you perform a commit, failing the commit
if an offending lint is found, so you can make changes locally before pushing.
The pre-commit hooks can also be run ad hoc, without installing them, by simply running `pre-commit run --all-files`.
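In other words (assuming `pre-commit` itself is available, for example from the dev dependencies or via `pip install pre-commit`):

```bash
# install the git hooks so the checks run automatically on each commit
pre-commit install

# or run every configured hook once across the whole repository
pre-commit run --all-files
```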
## Running linters without using pre-commit
There are scripts in `ci/scripts` for running Rust and Python linters.
```shell
./ci/scripts/python_lint.sh
./ci/scripts/rust_clippy.sh
./ci/scripts/rust_fmt.sh
./ci/scripts/rust_toml_fmt.sh
```
## How to update dependencies
To change test dependencies, update `pyproject.toml` and run
```bash
uv sync --dev --no-install-package datafusion
```