| Name | pytpch |
| Version | 0.2.0 |
| home_page | |
| Summary | bindings to libdbgen / tpch-dbgen |
| upload_time | 2024-02-25 21:10:38 |
| maintainer | |
| docs_url | None |
| author | Miles Granger <miles59923@gmail.com> |
| requires_python | >=3.8 |
| license | MIT |
| keywords | tpc-h |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
Ergonomically create [TPC-H](https://www.tpc.org/tpch/) data through Python as Arrow tables.

**NOTE**:
This was a weekend project and is still a WIP. For now, only x86_64 Linux wheels are available on PyPI.
```python
import pytpch
import pyarrow as pa
# Generate TPC-H data at scale 1 (~1GB)
tables: dict[str, pa.Table] = pytpch.dbgen(sf=1)
# Generate a single table at scale 1
tables: dict[str, pa.Table] = pytpch.dbgen(sf=1, table=pytpch.Table.Nation)
# Generate a single chunk out of n chunks of a single table.
# This is especially helpful at larger scale factors, since you can generate
# subsets of the data in parallel and then store or combine them afterwards.
tables: dict[str, pa.Table] = pytpch.dbgen(sf=1, table=pytpch.Table.Nation, n_steps=10, nth_step=1)
# NOTE! As mentioned in the docs for this function, it is NOT thread-safe.
# If you want to generate data in parallel, for now you must do so in separate processes,
# e.g. with `multiprocessing` or `concurrent.futures.ProcessPoolExecutor`.
# Making it thread-safe is a TODO: the original C code uses copious global and static
# function variables to maintain state, and while the refactoring in milesgranger/libdbgen
# resets that state between calls, the shared globals remain, so it is still not thread-safe.
#
# Example of generating data in parallel:
from concurrent.futures import ProcessPoolExecutor
n_steps = 10 # 10 total chunks
def gen_step(step):
    return pytpch.dbgen(sf=10, n_steps=n_steps, nth_step=step)

with ProcessPoolExecutor() as executor:
    jobs: list[dict[str, pa.Table]] = list(executor.map(gen_step, range(n_steps)))
# The standard TPC-H reference queries (1-22) are provided as pytpch.QUERY_1 .. pytpch.QUERY_22, e.g.:
print(pytpch.QUERY_1)
```
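If you generate chunks in parallel as in the example above, the per-chunk results can be merged back into full tables with `pyarrow.concat_tables`. A minimal sketch, assuming every chunk dict returned by `pytpch.dbgen` uses the same table-name keys (as the parallel example treats them); `merge_chunks` is just an illustrative helper, not part of the pytpch API:

```python
import pyarrow as pa

def merge_chunks(jobs: list[dict[str, pa.Table]]) -> dict[str, pa.Table]:
    # `jobs` is the list of per-chunk dicts produced by the ProcessPoolExecutor example.
    # Concatenate the chunks of each table back into a single Arrow table.
    return {
        name: pa.concat_tables([chunk[name] for chunk in jobs])
        for name in jobs[0]
    }

full_tables: dict[str, pa.Table] = merge_chunks(jobs)
```

From there the merged tables can be written out or handed to whatever engine you use alongside the bundled reference queries.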
---
### Tell me more...
Python bindings (through Rust, because why not) to [libdbgen](https://github.com/milesgranger/libdbgen),
which is a fork of [databricks/tpch-dbgen](https://github.com/databricks/tpch-dbgen) for generating
[TPC-H data](https://www.tpc.org/tpch/).

tpch-dbgen is originally a CLI that generates TPC-H data as CSV files; I wanted to turn it into an ergonomic
Python API for use in other projects.
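For example, one way to use the generated tables from other projects is to persist them as Parquet with `pyarrow.parquet`; a small sketch (the output directory name here is just an illustration):

```python
import pathlib

import pyarrow.parquet as pq
import pytpch

out_dir = pathlib.Path("tpch-sf1")  # hypothetical output directory
out_dir.mkdir(exist_ok=True)

# Write each generated Arrow table to its own Parquet file, named after the table.
for name, table in pytpch.dbgen(sf=1).items():
    pq.write_table(table, out_dir / f"{name}.parquet")
```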
TODOS (roughly in order of priority):
- [ ] Support for more than Linux x86_64 (mostly just adapting C lib and updating CI)
- [ ] Remove verbose stdout (a stopgap sketch follows this list)
- [ ] Write directly to Arrow, removing CSV writing (w/ nanoarrow probably)
- [ ] Make thread safe (remove global and static function variables in C lib, and remove changing of CWD)
- [ ] Separate out the Rust stuff into its own crate.
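In the meantime, the verbose stdout can usually be silenced from the caller's side. A minimal stopgap sketch, assuming the chatter comes from the C library writing to the process-level stdout (fd 1) as the TODO above suggests; `silence_stdout` is just an illustrative helper, not part of the pytpch API:

```python
import contextlib
import os
import sys

import pytpch

@contextlib.contextmanager
def silence_stdout():
    # Redirect the OS-level stdout file descriptor to /dev/null so output printed
    # by the underlying C library (not just Python's sys.stdout) is suppressed.
    sys.stdout.flush()
    saved_fd = os.dup(1)
    devnull_fd = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull_fd, 1)
        yield
    finally:
        os.dup2(saved_fd, 1)
        os.close(devnull_fd)
        os.close(saved_fd)

with silence_stdout():
    tables = pytpch.dbgen(sf=1)
```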
### Build from source...
Roughly:
- `git clone --recursive git@github.com:milesgranger/pytpch.git`
- `python -m pip install maturin`
- `maturin build --release`
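The built wheel should land under `target/wheels/` (maturin's default output directory), ready for `pip install`; alternatively, `maturin develop --release` should build and install the package directly into the active virtualenv.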
For now that'll only work if you're on x86_64 Linux; you can try adapting `build.rs`, but good luck with that.
Raw data
{
"_id": null,
"home_page": "",
"name": "pytpch",
"maintainer": "",
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": "",
"keywords": "tpc-h",
"author": "Miles Granger <miles59923@gmail.com>",
"author_email": "Miles Granger <miles59923@gmail.com>",
"download_url": "",
"platform": null,
"description": "Ergonomically create [TPC-H](https://www.tpc.org/tpch/) data thru Python as Arrow tables.\n\n\n**NOTE**:\n This was a weekend project, it is a WIP. For now only x86_64 linux wheels are available on PyPI\n\n```python\n\nimport pytpch\nimport pyarrow as pa\n\n# Generate TPC-H data at scale 1 (~1GB)\ntables: dict[str, pa.Table] = pytpch.dbgen(sf=1)\n\n# Generate a single table at scale 1\ntables: dict[str, pa.Table] = pytpch.dbgen(sf=1, table=pytpch.Table.Nation)\n\n# Generate a single chunk out of n chunks of a single table\n# this is wildly helpful when generating larger scale factors as you can make\n# subsets of the data and store them or join them after some sort of parallelism.\ntables: dict[str, pa.Table] = pytpch.dbgen(sf=1, table=pytpch.Table.Nation)\n\n\n# NOTE! As mentioned in the docs for this function, it is NOT thread-safe.\n# If you want to generate data in parallel, you must do so in other processes for now\n# by using things like `multiprocessing` or `concurrent.futures.ProcessPoolExecutor`.\n# This is a TODO, as the original C code uses copious amounts of global and static function\n# variables to maintain state, and while the state is reset between function calls from refactoring\n# in milesgranger/libdbgen, these shared global states are not removed so thus not thread-safe.\n#\n# Example of generating data in parallel:\nfrom concurrent.futures import ProcessPoolExecutor\n\nn_steps = 10 # 10 total chunks\n\ndef gen_step(step):\n return pytpch.dbgen(sf=10, n_steps=n_steps, nth_step=step)\n\nwith ProcessPoolExecutor() as executor:\n jobs: list[dict[str, pa.Table]] = list(executor.map(gen_step, range(n_steps)))\n \n\n# Default reference queries provided (1-22) as:\nprint(pytpch.QUERY_1)\n```\n\n---\n\n### Tell me more...\n\nPython bindings (thru Rust, b/c why not) to [libdbgen](https://github.com/milesgranger/libdbgen) \nwhich is a fork of [databricks/tpch-dbgen](https://github.com/databricks/tpch-dbgen) for generating \n[TPC-H data](https://www.tpc.org/tpch/).\n\ntpch-dbgen is originally a CLI to generate CSV files for TPC-H data. I wanted to make it into an ergonomic\nPython API for use in other projects. \n\nTODOS (roughly in order of priority):\n - [ ] Support for more than Linux x86_64 (mostly just adapting C lib and updating CI)\n - [ ] Remove verbose stdout\n - [ ] Write directly to Arrow, removing CSV writing (w/ nanoarrow probably)\n - [ ] Make thread safe (remove global and static function variables in C lib, and remove changing of CWD)\n - [ ] Separate out the Rust stuff into it's own crate.\n\n### Build from source...\n\nRoughly:\n\n- `git clone --recursive git@github.com:milesgranger/pytpch.git`\n- `python -m pip install maturin`\n- `maturin build --release`\n\nThat'll only work if you're on x86_64 linux for now, you can try adapting `build.rs` but good luck with that. For now.\n\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "bindings to libdbgen / tpch-dbgen",
"version": "0.2.0",
"project_urls": {
"documentation": "https://github.com/milesgranger/pytpch",
"homepage": "https://github.com/milesgranger/pytpch",
"repository": "https://github.com/milesgranger/pytpch"
},
"split_keywords": [
"tpc-h"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "5df336a50114f5095861af5363c8bb1b0e149ee04f41726caff353484d089a36",
"md5": "9390a41ac869053da9d58d2956a08a1d",
"sha256": "b64d4c8f0c88938ea22afdb1ab9219da527425e1e475786eaebcc655f3e4886b"
},
"downloads": -1,
"filename": "pytpch-0.2.0-cp310-cp310-manylinux_2_34_x86_64.whl",
"has_sig": false,
"md5_digest": "9390a41ac869053da9d58d2956a08a1d",
"packagetype": "bdist_wheel",
"python_version": "cp310",
"requires_python": ">=3.8",
"size": 696452,
"upload_time": "2024-02-25T21:10:38",
"upload_time_iso_8601": "2024-02-25T21:10:38.292258Z",
"url": "https://files.pythonhosted.org/packages/5d/f3/36a50114f5095861af5363c8bb1b0e149ee04f41726caff353484d089a36/pytpch-0.2.0-cp310-cp310-manylinux_2_34_x86_64.whl",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-02-25 21:10:38",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "milesgranger",
"github_project": "pytpch",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "pytpch"
}