be-great

Name: be-great
Version: 0.0.7
Summary: Generating Realistic Tabular Data using Large Language Models
Authors: Kathrin Sessler <kathrin.sessler@tum.de>, Vadim Borisov <vadim.borisov@uni-tuebingen.de>
Homepage: https://github.com/kathrinse/be_great
Documentation: https://kathrinse.github.io/be_great/
Requires Python: >=3.9
License: MIT License, Copyright (c) 2022 Kathrin Seßler and Vadim Borisov
Keywords: great, pytorch, tabular data, data generation, transformer, language models, deep learning
Upload time: 2023-09-06 11:31:06

[![PyPI version](https://badge.fury.io/py/be-great.svg)](https://badge.fury.io/py/be-great) [![Downloads](https://static.pepy.tech/badge/be-great)](https://pepy.tech/project/be-great)

[//]: # (![Screenshot](https://github.com/kathrinse/be_great/blob/main/imgs/GReaT_logo.png))
<p align="center">
<img src="https://github.com/kathrinse/be_great/raw/main/imgs/GReaT_logo.png" width="326"/>
</p>

<p align="center">
<strong>Generation of Realistic Tabular data</strong>
<br> with pretrained Transformer-based language models
</p>

&nbsp;
&nbsp;
&nbsp;

Our GReaT framework leverages the power of advanced pretrained Transformer language models to produce high-quality synthetic tabular data. Generate new data samples effortlessly with our user-friendly API in just a few lines of code. Please see our [publication](https://openreview.net/forum?id=cEygmQNOeI) for more details. 

## GReaT Installation

The GReaT framework can be easily installed with [pip](https://pypi.org/project/pip/) and requires Python >= 3.9:
```bash
pip install be-great
```
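
To verify the installation, you can query the installed version with the standard-library ``importlib.metadata`` module (a quick sanity check, not part of the GReaT API; note that the PyPI distribution is called ``be-great``, while the importable package is ``be_great``):

```python
from importlib.metadata import version

import be_great  # the import name uses an underscore

print(version("be-great"))  # e.g. "0.0.7"
```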



## GReaT Quickstart

In the example below, we show how the GReaT approach is used to generate synthetic tabular data for the California Housing dataset.
```python
from be_great import GReaT
from sklearn.datasets import fetch_california_housing

# Load the California Housing dataset as a pandas DataFrame
data = fetch_california_housing(as_frame=True).frame

# Fine-tune DistilGPT-2 on the table, then sample 100 synthetic rows
model = GReaT(llm='distilgpt2', batch_size=32, epochs=25)
model.fit(data)
synthetic_data = model.sample(n_samples=100)
```

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kathrinse/be_great/blob/main/examples/GReaT_colab_example.ipynb)
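
Assuming ``sample`` returns a pandas DataFrame with the same columns as the training data, a quick first check of the generated samples is to compare basic summary statistics of the real and synthetic data (plain pandas, no GReaT-specific functionality assumed):

```python
# Compare summary statistics of the real and the synthetic data
print(data.describe().round(2))
print(synthetic_data.describe().round(2))

# Inspect a few generated rows
print(synthetic_data.head())
```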

### Imputing a sample
GReaT also features an interface to impute, i.e., fill in, missing values in arbitrary combinations. This requires a trained ``model``, for instance one obtained using the code snippet above, and a ``pd.DataFrame`` where missing values are set to ``NaN``.
A minimal example is provided below:
```python
import numpy as np

# test_data: pd.DataFrame with samples from the distribution
# model: GReaT trained on the data distribution that should be imputed

# Drop values randomly from test_data
for clm in test_data.columns:
    test_data[clm] = test_data[clm].apply(lambda x: x if np.random.rand() > 0.5 else np.nan)

imputed_data = model.impute(test_data, max_length=200)
```
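
To get a rough sense of the imputation quality, the filled-in cells can be compared with the values that were dropped. The snippet below is a minimal sketch: it assumes a copy of the unmasked frame was kept beforehand as ``original_data`` (e.g. ``original_data = test_data.copy()`` before the loop above) and that ``impute`` returns a DataFrame aligned with the input's index; it relies only on pandas and works for numeric columns such as those in the California Housing data.

```python
# Minimal sketch: original_data is a copy of test_data taken *before*
# values were dropped, e.g. original_data = test_data.copy()
for clm in test_data.columns:
    mask = test_data[clm].isna()  # cells that had to be imputed
    if mask.any():
        mae = (imputed_data.loc[mask, clm] - original_data.loc[mask, clm]).abs().mean()
        print(f"{clm}: mean absolute error on imputed cells = {mae:.3f}")
```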



## GReaT Citation 

If you use GReaT, please link or cite our work:

``` bibtex
@inproceedings{borisov2023language,
  title={Language Models are Realistic Tabular Data Generators},
  author={Vadim Borisov and Kathrin Sessler and Tobias Leemann and Martin Pawelczyk and Gjergji Kasneci},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=cEygmQNOeI}
}
```

## GReaT Acknowledgements

We sincerely thank [HuggingFace](https://huggingface.co/) :hugs: for their framework.

            
