Name | be-great-v |
Version | 0.1.3 |
home_page | |
Summary | (A Fork) Generating Realistic Tabular Data using Large Language Models |
upload_time | 2024-01-07 12:00:28 |
maintainer | |
docs_url | None |
author | |
requires_python | >=3.9 |
license | MIT License Copyright (c) 2022 Kathrin Seßler and Vadim Borisov Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
keywords | great, pytorch, tabular data, data generation, transformer, language models, deep learning |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
[![PyPI version](https://badge.fury.io/py/be-great.svg)](https://badge.fury.io/py/be-great) [![Downloads](https://static.pepy.tech/badge/be-great)](https://pepy.tech/project/be-great)
[//]: # (![Screenshot](https://github.com/kathrinse/be_great/blob/main/imgs/GReaT_logo.png))
<p align="center">
<img src="https://github.com/kathrinse/be_great/raw/main/imgs/GReaT_logo.png" width="326"/>
</p>
<p align="center">
<strong>Generation of Realistic Tabular data</strong>
<br> with pretrained Transformer-based language models
</p>
Our GReaT framework leverages the power of advanced pretrained Transformer language models to produce high-quality synthetic tabular data. Generate new data samples effortlessly with our user-friendly API in just a few lines of code. Please see our [publication](https://openreview.net/forum?id=cEygmQNOeI) for more details.
## GReaT Installation
The GReaT framework can be installed with [pip](https://pypi.org/project/pip/) and requires Python >= 3.9:
```bash
pip install be-great
```
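Note that the command above installs the upstream `be-great` package. This page describes the fork published as `be-great-v` (see the metadata above), so the fork itself would presumably be installed under that name:
```bash
pip install be-great-v
```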
## GReaT Quickstart
In the example below, we show how the GReaT approach is used to generate synthetic tabular data for the California Housing dataset.
```python
from be_great import GReaT
from sklearn.datasets import fetch_california_housing

# Load the California Housing dataset as a pandas DataFrame
data = fetch_california_housing(as_frame=True).frame

# Fine-tune a pretrained language model on the tabular data
model = GReaT(llm='distilgpt2', batch_size=32, epochs=25)
model.fit(data)

# Generate 100 synthetic rows with the same columns as the original data
synthetic_data = model.sample(n_samples=100)
```
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kathrinse/be_great/blob/main/examples/GReaT_colab_example.ipynb)
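As a quick sanity check on the output, you can compare summary statistics of the real and synthetic tables. This is a minimal sketch using only pandas (not part of the GReaT API) and assumes the `data` and `synthetic_data` variables from the quickstart above:

```python
import pandas as pd

# Coerce synthetic columns to numeric in case they come back as strings
synthetic_numeric = synthetic_data.apply(pd.to_numeric, errors="coerce")

# Compare per-column means and standard deviations of real vs. synthetic data
summary = pd.DataFrame({
    "real_mean": data.mean(),
    "synthetic_mean": synthetic_numeric.mean(),
    "real_std": data.std(),
    "synthetic_std": synthetic_numeric.std(),
})
print(summary.round(3))
```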
### Imputing a sample
GReaT also features an interface to impute, i.e., fill in, missing values in arbitrary combinations. This requires a trained ``model``, for instance one obtained using the code snippet above, and a ``pd.DataFrame`` where missing values are set to NaN.
A minimal example is provided below:
```python
import numpy as np

# test_data: pd.DataFrame with samples from the data distribution,
#            e.g. a held-out subset of the quickstart dataset
# model: GReaT model trained on the distribution that should be imputed
test_data = data.sample(n=100, random_state=42).copy()

# Randomly drop roughly half of the values from test_data
for clm in test_data.columns:
    test_data[clm] = test_data[clm].apply(
        lambda x: x if np.random.rand() > 0.5 else np.nan
    )

imputed_data = model.impute(test_data, max_length=200)
```
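To see how much was actually filled in, a plain pandas check (independent of the GReaT API) compares missing-value counts before and after imputation; this assumes, as the assignment above suggests, that ``impute`` returns a new DataFrame rather than modifying ``test_data`` in place:

```python
# Count missing values before and after imputation
print("missing before:", int(test_data.isna().sum().sum()))
print("missing after: ", int(imputed_data.isna().sum().sum()))
```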
## GReaT Citation
If you use GReaT, please link or cite our work:
``` bibtex
@inproceedings{borisov2023language,
title={Language Models are Realistic Tabular Data Generators},
author={Vadim Borisov and Kathrin Sessler and Tobias Leemann and Martin Pawelczyk and Gjergji Kasneci},
  booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=cEygmQNOeI}
}
```
## GReaT Acknowledgements
We sincerely thank the [HuggingFace](https://huggingface.co/) :hugs: framework.
Raw data
{
"_id": null,
"home_page": "",
"name": "be-great-v",
"maintainer": "",
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": "",
"keywords": "great,pytorch,tabular data,data generation,transformer,language models,deep learning",
"author": "",
"author_email": "Tim Saijun <code@zair.top>, Kathrin Sessler <kathrin.sessler@tum.de>, Vadim Borisov <vadim.borisov@uni-tuebingen.de>",
"download_url": "https://files.pythonhosted.org/packages/48/da/6a7c19e1d1052349bcd739dacaf570335a50dcfc08b76ca34994f282a342/be-great-v-0.1.3.tar.gz",
"platform": null,
"description": "[![PyPI version](https://badge.fury.io/py/be-great.svg)](https://badge.fury.io/py/be-great) [![Downloads](https://static.pepy.tech/badge/be-great)](https://pepy.tech/project/be-great)\n\n[//]: # (![Screenshot](https://github.com/kathrinse/be_great/blob/main/imgs/GReaT_logo.png))\n<p align=\"center\">\n<img src=\"https://github.com/kathrinse/be_great/raw/main/imgs/GReaT_logo.png\" width=\"326\"/>\n</p>\n\n<p align=\"center\">\n<strong>Generation of Realistic Tabular data</strong>\n<br> with pretrained Transformer-based language models\n</p>\n\n \n \n \n\nOur GReaT framework leverages the power of advanced pretrained Transformer language models to produce high-quality synthetic tabular data. Generate new data samples effortlessly with our user-friendly API in just a few lines of code. Please see our [publication](https://openreview.net/forum?id=cEygmQNOeI) for more details. \n\n\u6211\u4eec\u7684GReaT\u6846\u67b6\u5229\u7528\u5148\u8fdb\u7684\u9884\u8bad\u7ec3Transformer\u8bed\u8a00\u6a21\u578b\u7684\u529b\u91cf\uff0c\u751f\u6210\u9ad8\u8d28\u91cf\u7684\u5408\u6210\u8868\u683c\u6570\u636e\u3002\u53ea\u9700\u51e0\u884c\u4ee3\u7801\uff0c\u5c31\u53ef\u4ee5\u4f7f\u7528\u6211\u4eec\u7684\u7528\u6237\u53cb\u597d\u7684API\u8f7b\u677e\u751f\u6210\u65b0\u7684\u6570\u636e\u6837\u672c\u3002\u66f4\u591a\u8be6\u60c5\u8bf7\u53c2\u9605\u6211\u4eec\u7684[\u51fa\u7248\u7269](https://openreview.net/forum?d=cEygmQNOeI)\n\n## GReaT Installation\n\nThe GReaT framework can be easily installed using with [pip](https://pypi.org/project/pip/) - requires a Python version >= 3.9: \n```bash\npip install be-great\n```\n\n\n\n## GReaT Quickstart\n\nIn the example below, we show how the GReaT approach is used to generate synthetic tabular data for the California Housing dataset.\n```python\nfrom be_great import GReaT\nfrom sklearn.datasets import fetch_california_housing\n\ndata = fetch_california_housing(as_frame=True).frame\n\nmodel = GReaT(llm='distilgpt2', batch_size=32, epochs=25)\nmodel.fit(data)\nsynthetic_data = model.sample(n_samples=100)\n```\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kathrinse/be_great/blob/main/examples/GReaT_colab_example.ipynb)\n\n### Imputing a sample\nGReaT also features an interface to impute, i.e., fill in, missing values in arbitrary combinations. This requires a trained ``model``, for instance one obtained using the code snippet above, and a ```pd.DataFrame``` where missing values are set to NaN.\nA minimal example is provided below:\n```python\n# test_data: pd.DataFrame with samples from the distribution\n# model: GReaT trained on the data distribution that should be imputed\n\n# Drop values randomly from test_data\nimport numpy as np\nfor clm in test_data.columns:\n test_data[clm]=test_data[clm].apply(lambda x: (x if np.random.rand() > 0.5 else np.nan))\n\nimputed_data = model.impute(test_data, max_length=200)\n```\n\n\n\n## GReaT Citation \n\nIf you use GReaT, please link or cite our work:\n\n``` bibtex\n@inproceedings{borisov2023language,\n title={Language Models are Realistic Tabular Data Generators},\n author={Vadim Borisov and Kathrin Sessler and Tobias Leemann and Martin Pawelczyk and Gjergji Kasneci},\n booktitle={The Eleventh International Conference on Learning Representations },\n year={2023},\n url={https://openreview.net/forum?id=cEygmQNOeI}\n}\n```\n\n## GReaT Acknowledgements\n\nWe sincerely thank the [HuggingFace](https://huggingface.co/) :hugs: framework. \n",
"bugtrack_url": null,
"license": "MIT License Copyright (c) 2022 Kathrin Se\u00dfler and Vadim Borisov Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
"summary": "(A Fork)Generating Realistic Tabular Data using Large Language Models",
"version": "0.1.3",
"project_urls": {
"Documentation": "https://kathrinse.github.io/be_great/",
"Homepage": "https://github.com/Tim-Saijun/be_great"
},
"split_keywords": [
"great",
"pytorch",
"tabular data",
"data generation",
"transformer",
"language models",
"deep learning"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "b8976274b762ebe90d6380129db875896ae120346d17f4b4735daaf5b356fce3",
"md5": "7d51aded1c02b421c13caa3450dea77a",
"sha256": "9be8ca6259f72198bbb6f3f303d2a21a1cc428a184c0d40d83dfd8a4e6af8de9"
},
"downloads": -1,
"filename": "be_great_v-0.1.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "7d51aded1c02b421c13caa3450dea77a",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 16669,
"upload_time": "2024-01-07T12:00:27",
"upload_time_iso_8601": "2024-01-07T12:00:27.075461Z",
"url": "https://files.pythonhosted.org/packages/b8/97/6274b762ebe90d6380129db875896ae120346d17f4b4735daaf5b356fce3/be_great_v-0.1.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "48da6a7c19e1d1052349bcd739dacaf570335a50dcfc08b76ca34994f282a342",
"md5": "0eb7a91e050f558b462e0ba434921c51",
"sha256": "af69f2422aae7a177198abb5d257ca0a1ab7e56f091c3a0368f35982f7170389"
},
"downloads": -1,
"filename": "be-great-v-0.1.3.tar.gz",
"has_sig": false,
"md5_digest": "0eb7a91e050f558b462e0ba434921c51",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 16850,
"upload_time": "2024-01-07T12:00:28",
"upload_time_iso_8601": "2024-01-07T12:00:28.757755Z",
"url": "https://files.pythonhosted.org/packages/48/da/6a7c19e1d1052349bcd739dacaf570335a50dcfc08b76ca34994f282a342/be-great-v-0.1.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-01-07 12:00:28",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Tim-Saijun",
"github_project": "be_great",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "be-great-v"
}
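The URLs and SHA-256 digests in the record above can be used to verify a downloaded release artifact. A minimal sketch using only the Python standard library (assuming network access; the URL and expected digest are copied verbatim from the sdist entry in the `urls` list):

```python
import hashlib
import urllib.request

# sdist URL and expected sha256 digest, taken from the record above
SDIST_URL = (
    "https://files.pythonhosted.org/packages/48/da/"
    "6a7c19e1d1052349bcd739dacaf570335a50dcfc08b76ca34994f282a342/"
    "be-great-v-0.1.3.tar.gz"
)
EXPECTED_SHA256 = "af69f2422aae7a177198abb5d257ca0a1ab7e56f091c3a0368f35982f7170389"

# Download the archive and compute its sha256
with urllib.request.urlopen(SDIST_URL) as resp:
    payload = resp.read()
digest = hashlib.sha256(payload).hexdigest()

print("computed:", digest)
print("matches expected:", digest == EXPECTED_SHA256)
```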