be-great

Name: be-great
Version: 0.0.8
Summary: Generating Realistic Tabular Data using Large Language Models
Authors: Kathrin Sessler <kathrin.sessler@tum.de>, Vadim Borisov <vadim.borisov@uni-tuebingen.de>
Homepage: https://github.com/kathrinse/be_great
Documentation: https://kathrinse.github.io/be_great/
Upload time: 2024-10-29 16:28:11
Requires Python: >=3.9
License: MIT License, Copyright (c) 2022 Kathrin Seßler and Vadim Borisov
Keywords: great, pytorch, tabular data, data generation, transformer, language models, deep learning

            [![PyPI version](https://badge.fury.io/py/be-great.svg)](https://badge.fury.io/py/be-great) [![Downloads](https://static.pepy.tech/badge/be-great)](https://pepy.tech/project/be-great)

[//]: # (![Screenshot](https://github.com/kathrinse/be_great/blob/main/imgs/GReaT_logo.png))
<p align="center">
<img src="https://github.com/kathrinse/be_great/raw/main/imgs/GReaT_logo.png" width="326"/>
</p>

<p align="center">
<strong>Generation of Realistic Tabular data</strong>
<br> with pretrained Transformer-based language models
</p>

&nbsp;
&nbsp;
&nbsp;

Our GReaT framework leverages the power of advanced pretrained Transformer language models to produce high-quality synthetic tabular data. Generate new data samples effortlessly with our user-friendly API in just a few lines of code. Please see our [publication](https://openreview.net/forum?id=cEygmQNOeI) for more details. 

## GReaT Installation

The GReaT framework can be installed with [pip](https://pypi.org/project/pip/) and requires Python >= 3.9:
```bash
pip install be-great
```
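
If you want the most recent development state instead of the PyPI release, installing directly from the GitHub repository should also work; this is a hedged suggestion that assumes `git` is available and the repository's packaging metadata builds cleanly:
```bash
# Optional: install the current development version straight from GitHub
pip install git+https://github.com/kathrinse/be_great.git
```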



## GReaT Quickstart

In the example below, we show how the GReaT approach is used to generate synthetic tabular data for the California Housing dataset.
```python
from be_great import GReaT
from sklearn.datasets import fetch_california_housing

# Load the California Housing dataset as a pandas DataFrame
data = fetch_california_housing(as_frame=True).frame

# Fine-tune a pretrained DistilGPT-2 language model on the tabular data
model = GReaT(llm='distilgpt2', batch_size=32, epochs=50, fp16=True)
model.fit(data)

# Sample 100 synthetic rows
synthetic_data = model.sample(n_samples=100)
```

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kathrinse/be_great/blob/main/examples/GReaT_colab_example.ipynb)
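
Since GReaT works on pandas DataFrames, the generated `synthetic_data` can be inspected and exported with the usual pandas tooling. A minimal sketch (the output file name is only an example):
```python
# Inspect the first generated rows and persist them to disk
print(synthetic_data.head())
synthetic_data.to_csv("synthetic_housing.csv", index=False)  # example file name
```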

### Imputing a sample
GReaT also features an interface to impute, i.e., fill in, missing values in arbitrary combinations. This requires a trained `model`, for instance one obtained with the code snippet above, and a `pd.DataFrame` in which the missing values are set to `NaN`.
A minimal example is provided below:
```python
# test_data: pd.DataFrame with samples from the data distribution
# model: GReaT model trained on the distribution that should be imputed
import numpy as np

# Randomly drop roughly half of the values in test_data
for clm in test_data.columns:
    test_data[clm] = test_data[clm].apply(lambda x: x if np.random.rand() > 0.5 else np.nan)

# Fill in the missing values with the trained model
imputed_data = model.impute(test_data, max_length=200)
```
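
Because the imputation is generative, it can be worth checking afterwards that no missing cells remain. A small sanity check, assuming `imputed_data` comes back as a `pd.DataFrame` like the input:
```python
# Count remaining missing values per column; ideally all zeros
print(imputed_data.isna().sum())
```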

### Saving and Loading
GReaT provides methods for saving a model checkpoint (in addition to the checkpoints stored by the Hugging Face transformers `Trainer`) and for loading that checkpoint again.
```python
model = GReaT(llm='distilgpt2', batch_size=32, epochs=50, fp16=True)
model.fit(data)
model.save("my_directory")  # saves a "model.pt" and a "config.json" file
model = GReaT.load_from_dir("my_directory")  # loads the model again

# supports remote file systems via fsspec
model.save("s3://my_bucket")
model = GReaT.load_from_dir("s3://my_bucket")
```
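
Note that writing to a remote location such as `s3://my_bucket` relies on the matching fsspec filesystem implementation being available in your environment (for S3 this is typically the `s3fs` package); this is an assumption about your setup rather than something the example above configures.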

## GReaT Citation 

If you use GReaT, please link to or cite our work:

``` bibtex
@inproceedings{borisov2023language,
  title={Language Models are Realistic Tabular Data Generators},
  author={Vadim Borisov and Kathrin Sessler and Tobias Leemann and Martin Pawelczyk and Gjergji Kasneci},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=cEygmQNOeI}
}
```

## GReaT Acknowledgements

We sincerely thank [Hugging Face](https://huggingface.co/) :hugs: for their framework. 

            
