# datafusion

- Name: datafusion
- Version: 35.0.0
- Summary: Build and run queries against data
- Home page: https://github.com/apache/arrow-datafusion-python
- Author: Apache Arrow <dev@arrow.apache.org>
- Requires Python: >=3.6
- License: Apache-2.0
- Keywords: datafusion, dataframe, rust, query-engine
- Upload time: 2024-02-04 23:20:48
            <!---
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing,
  software distributed under the License is distributed on an
  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
  KIND, either express or implied.  See the License for the
  specific language governing permissions and limitations
  under the License.
-->

# DataFusion in Python

[![Python test](https://github.com/apache/arrow-datafusion-python/actions/workflows/test.yaml/badge.svg)](https://github.com/apache/arrow-datafusion-python/actions/workflows/test.yaml)
[![Python Release Build](https://github.com/apache/arrow-datafusion-python/actions/workflows/build.yml/badge.svg)](https://github.com/apache/arrow-datafusion-python/actions/workflows/build.yml)

This is a Python library that binds to [DataFusion](https://github.com/apache/arrow-datafusion), an in-memory query engine built on [Apache Arrow](https://arrow.apache.org/).

DataFusion's Python bindings can be used as an end-user tool as well as a foundation for building new systems.

## Features

- Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
- Queries are optimized using DataFusion's query optimizer.
- Execute user-defined Python code from SQL (see the sketch after this list).
- Exchange data with Pandas and other DataFrame libraries that support PyArrow.
- Serialize and deserialize query plans in Substrait format.
- Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
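
To illustrate running user-defined Python code from SQL, here is a minimal sketch modelled on the repository's `sql-using-python-udf.py` example. The `double` function and the table name `t` are illustrative names, not part of the API; confirm the `udf` signature against the API documentation:

```python
import pyarrow as pa
import pyarrow.compute as pc
from datafusion import SessionContext, udf

def double(array: pa.Array) -> pa.Array:
    # Runs on a PyArrow array for each batch DataFusion evaluates
    return pc.multiply(array, 2)

# Wrap the Python function with its input types, return type, and volatility
double_udf = udf(double, [pa.int64()], pa.int64(), "stable", name="double")

ctx = SessionContext()
ctx.register_udf(double_udf)

# Register a small in-memory table and call the UDF from SQL
batch = pa.RecordBatch.from_arrays([pa.array([1, 2, 3])], names=["a"])
ctx.register_record_batches("t", [[batch]])
print(ctx.sql("SELECT a, double(a) AS doubled FROM t").to_pandas())
```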

## Comparison with other projects

Here is a comparison with similar projects that may help you understand when DataFusion might or might not be
suitable for your needs:

- [DuckDB](http://www.duckdb.org/) is an open-source, in-process analytic database. Like DataFusion, it supports
  very fast execution, both from its custom file format and directly from Parquet files. Unlike DataFusion, it is
  written in C/C++ and is primarily used directly as a serverless database and query system rather than as a
  library for building such database systems.

- [Polars](http://pola.rs/) is one of the fastest DataFrame libraries at the time of writing. Like DataFusion, it
  is also written in Rust and uses the Apache Arrow memory model, but unlike DataFusion it does not provide full SQL
  support, nor does it offer as many extension points.

## Example Usage

The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results
in a Pandas DataFrame, and then plotting a chart (this requires `pandas` and `matplotlib` to be installed).

The Parquet file used in this example can be downloaded from the following page:

- https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page

```python
from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')
```

This produces the following chart:

![Chart](examples/chart.png)
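
As noted under Features, the same aggregation can also be expressed with the DataFrame API instead of SQL. The following is a minimal sketch (it omits the `IS NOT NULL` filter for brevity); the method names follow the datafusion-python DataFrame API, but consult the API documentation to confirm:

```python
from datafusion import SessionContext, col, functions as f

ctx = SessionContext()
ctx.register_parquet("taxi", "yellow_tripdata_2021-01.parquet")

# Group by passenger_count, count rows per group, and sort ascending
df = (
    ctx.table("taxi")
    .aggregate([col("passenger_count")], [f.count(col("passenger_count"))])
    .sort(col("passenger_count").sort(ascending=True))
)
pandas_df = df.to_pandas()
```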

## Configuration

It is possible to configure both runtime settings (memory and disk) and session configuration when creating a context.

```python
from datafusion import RuntimeConfig, SessionConfig, SessionContext

runtime = (
    RuntimeConfig()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)
```

Refer to the [API documentation](https://arrow.apache.org/datafusion-python/#api-reference) for more information.

Printing the context will show the current configuration settings.

```python
print(ctx)
```
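
Because the configuration above enables `with_information_schema(True)`, the settings can also be inspected from SQL using DataFusion's `SHOW` statements. A small sketch, continuing from the context created above:

```python
# Requires information_schema to be enabled on the context (see above)
settings = ctx.sql("SHOW ALL").to_pandas()
print(settings)
```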

## More Examples

See [examples](examples/README.md) for more information.

### Executing Queries with DataFusion

- [Query a Parquet file using SQL](./examples/sql-parquet.py)
- [Query a Parquet file using the DataFrame API](./examples/dataframe-parquet.py)
- [Run a SQL query and store the results in a Pandas DataFrame](./examples/sql-to-pandas.py)
- [Run a SQL query with a Python user-defined function (UDF)](./examples/sql-using-python-udf.py)
- [Run a SQL query with a Python user-defined aggregation function (UDAF)](./examples/sql-using-python-udaf.py)
- [Query PyArrow Data](./examples/query-pyarrow-data.py)
- [Create a DataFrame](./examples/import.py)
- [Export a DataFrame](./examples/export.py)

### Running User-Defined Python Code

- [Register a Python UDF with DataFusion](./examples/python-udf.py)
- [Register a Python UDAF with DataFusion](./examples/python-udaf.py)

### Substrait Support

- [Serialize query plans using Substrait](./examples/substrait.py)

### Executing SQL against DataFrame Libraries (Experimental)

- [Executing SQL on Polars](./examples/sql-on-polars.py)
- [Executing SQL on Pandas](./examples/sql-on-pandas.py)
- [Executing SQL on cuDF](./examples/sql-on-cudf.py)

## How to install

### Pip

```bash
pip install datafusion
# or
python -m pip install datafusion
```

### Conda

```bash
conda install -c conda-forge datafusion
```

You can verify the installation by running:

```python
>>> import datafusion
>>> datafusion.__version__
'35.0.0'
```

## How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by [pyo3](https://github.com/PyO3/pyo3) and [maturin](https://github.com/PyO3/maturin).

The Maturin tooling used in this workflow can be installed either via Conda or via Pip. Both approaches offer the same experience and are provided simply to accommodate developer preference. Bootstrapping instructions for both follow.

Bootstrap (Conda):

```bash
# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# create the conda environment for dev
conda env create -f ./conda/environments/datafusion-dev.yaml -n datafusion-dev
# activate the conda environment
conda activate datafusion-dev
```

Bootstrap (Pip):

```bash
# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# prepare development environment (used to build wheel / install in development)
python3 -m venv venv
# activate the venv
source venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies (for Python 3.8+)
python -m pip install -r requirements-310.txt
```

The tests rely on test data in git submodules.

```bash
git submodule init
git submodule update
```

Whenever the Rust code changes (your own edits or after a `git pull`):

```bash
# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest
```

### Running & Installing pre-commit hooks

arrow-datafusion-python uses [pre-commit](https://pre-commit.com/) to help developers lint their code, reducing the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional, but they are helpful for keeping PRs clean and concise.

Our pre-commit hooks can be installed by running `pre-commit install`, which installs the configuration from your ARROW_DATAFUSION_PYTHON_ROOT/.github directory and runs the hooks each time you commit, aborting the commit if an offending lint is found so that you can fix issues locally before pushing.

The pre-commit hooks can also be run ad hoc, without installing them, via `pre-commit run --all-files`.
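
In summary:

```bash
# install the hooks so they run automatically on every commit
pre-commit install
# or run all hooks once across the repository without installing them
pre-commit run --all-files
```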

## How to update dependencies

To change the test dependencies, edit `requirements.in` and run:

```bash
# install pip-tools (only needed once); consider running inside the venv
python -m pip install pip-tools
python -m piptools compile --generate-hashes -o requirements-310.txt
```

To upgrade the pinned dependencies, run with `-U`:

```bash
python -m piptools compile -U --generate-hashes -o requirements-310.txt
```

More details are available in the [pip-tools documentation](https://github.com/jazzband/pip-tools).


            
