table-evaluator

- Name: table-evaluator
- Version: 1.7.2
- Summary: A package to evaluate how close a synthetic data set is to real data.
- Home page: https://github.com/Baukebrenninkmeijer/Table-Evaluator
- Upload time: 2024-12-05 20:30:22
- Author: Bauke Brenninkmeijer
- Requires Python: >=3.10, <4.0
- License: MIT
- Keywords: table-evaluation, synthetic-data, data-generation, data-evaluation
# Table Evaluator
[![PyPI version](https://badge.fury.io/py/table-evaluator.svg)](https://badge.fury.io/py/table-evaluator)
[![Supported versions](https://img.shields.io/pypi/pyversions/table_evaluator.svg)](https://pypi.python.org/pypi/table_evaluator)
![Package deployment](https://github.com/Baukebrenninkmeijer/table-evaluator/actions/workflows/python-publish.yml/badge.svg?branch=master)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/table_evaluator)](https://pypistats.org/packages/table_evaluator)
[![Documentation](https://img.shields.io/badge/Documentation-%20-blue)](https://baukebrenninkmeijer.github.io/table-evaluator/)

TableEvaluator is a library to evaluate how similar a synthesized dataset is to real data. In other words, it tries to give an indication of how realistic your fake data is. With the rise of GANs specifically designed for tabular data, many new applications are becoming possible. For industries like finance, healthcare and government, the capacity to create high-quality synthetic data that does **not** have the privacy constraints of normal data is extremely valuable. Since this field is still quite young and developing, I created this library to provide a consistent evaluation method for your models.

## Installation
The package can be installed with:
```
pip install table_evaluator
```
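
To reproduce the release documented on this page, you can also pin the version (1.7.2 at the time of writing):
```
pip install "table_evaluator==1.7.2"
```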

## Tests
The tests can be run by cloning the repo and running:
```
pytest tests
```
If this does not work, the package might not be importable from your environment. In that case, please install it locally with:

```
pip install -e .
```
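
To confirm which version your environment picks up, a quick standard-library check:
```python
from importlib.metadata import version

# Prints the installed version, e.g. 1.7.2
print(version('table-evaluator'))
```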

## Usage
**Please see the [example notebook](https://github.com/Baukebrenninkmeijer/table-evaluator/blob/master/example_table_evaluator.ipynb) for the most up-to-date examples. The README example is just that notebook as markdown.**

Start by importing the loader function and the evaluator class:
```python
from table_evaluator import load_data, TableEvaluator
```

The package is used with two DataFrames: one with the real data and one with the synthetic data. These are passed to the TableEvaluator on init.
The `load_data` utility is convenient for retrieving these DataFrames from disk, since it casts both to the same dtypes and columns after loading. However, any two DataFrames will do, as long as they have the same columns and data types.
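
For example, a minimal sketch with plain pandas, assuming the two CSV files from the repository's `data` directory, would be:

```python
import pandas as pd

# Load both files with plain pandas instead of `load_data`.
real = pd.read_csv('data/real_test_sample.csv')
fake = pd.read_csv('data/fake_test_sample.csv')

# Align the synthetic frame with the real one: same column order, same dtypes.
# This mirrors what `load_data` otherwise does for you.
fake = fake[real.columns].astype(real.dtypes.to_dict())
```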

Using the test data available in the `data` directory, we do:

```python
real, fake = load_data('data/real_test_sample.csv', 'data/fake_test_sample.csv')
```
which gives us two DataFrames with matching columns and dtypes.

```python
real.head()
```


| trans_id | account_id | trans_amount | balance_after_trans | trans_type | trans_operation            | trans_k_symbol    | trans_date |
|----------|------------|--------------|---------------------|------------|----------------------------|-------------------|------------|
| 951892   | 3245       | 3878.0       | 13680.0             | WITHDRAWAL | REMITTANCE_TO_OTHER_BANK   | HOUSEHOLD         | 2165       |
| 3547680  | 515        | 65.9         | 14898.6             | CREDIT     | UNKNOWN                    | INTEREST_CREDITED | 2006       |
| 1187131  | 4066       | 32245.0      | 57995.5             | CREDIT     | COLLECTION_FROM_OTHER_BANK | UNKNOWN           | 2139       |
| 531421   | 1811       | 3990.8       | 23324.9             | WITHDRAWAL | REMITTANCE_TO_OTHER_BANK   | LOAN_PAYMENT      | 892        |
| 37081    | 119        | 12100.0      | 36580.0             | WITHDRAWAL | WITHDRAWAL_IN_CASH         | UNKNOWN           | 654        |


```python
fake.head()
```

| trans_id | account_id | trans_amount | balance_after_trans | trans_type | trans_operation            | trans_k_symbol | trans_date |
|----------|------------|--------------|---------------------|------------|----------------------------|----------------|------------|
| 911598   | 3001       | 13619.0      | 92079.0             | CREDIT     | COLLECTION_FROM_OTHER_BANK | UNKNOWN        | 1885       |
| 377371   | 1042       | 4174.0       | 32470.0             | WITHDRAWAL | REMITTANCE_TO_OTHER_BANK   | HOUSEHOLD      | 1483       |
| 970113   | 3225       | 274.0        | 57608.0             | WITHDRAWAL | WITHDRAWAL_IN_CASH         | UNKNOWN        | 1855       |
| 450090   | 1489       | 301.0        | 36258.0             | CREDIT     | CREDIT_IN_CASH             | UNKNOWN        | 885        |
| 1120409  | 3634       | 6303.0       | 50975.0             | WITHDRAWAL | REMITTANCE_TO_OTHER_BANK   | HOUSEHOLD      | 1211       |


Next, we specify which columns should be treated as categorical:
```python
cat_cols = ['trans_type', 'trans_operation', 'trans_k_symbol']
```

If we do not specify categorical columns when initializing the TableEvaluator, it will treat every column with more than 50 unique values as continuous and every column with 50 or fewer as categorical.
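
A plain-pandas sketch of that heuristic (the threshold of 50 is taken from the paragraph above; the helper itself is illustrative, not part of the library's API):

```python
def infer_cat_cols(df, unique_threshold=50):
    """Treat columns with at most `unique_threshold` unique values as categorical."""
    return [col for col in df.columns if df[col].nunique() <= unique_threshold]

# On the sample data this should pick up trans_type, trans_operation and trans_k_symbol.
inferred = infer_cat_cols(real)
```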

Then we create the TableEvaluator object:
```python
table_evaluator = TableEvaluator(real, fake, cat_cols=cat_cols)
```

It's nice to start with some plots to get a feel for the data and how the real and synthetic sets correlate. The test files contain only 1000 samples, which is why the cumulative-sum plots are not very smooth.

```python
table_evaluator.visual_evaluation()
```


![png](images/output_7_0.png)



![png](images/output_7_1.png)



![png](images/output_7_2.png)



![png](images/output_7_3.png)
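
If you only need a subset of these figures, the evaluator also exposes individual plot methods that `visual_evaluation` bundles together (check the method names against the version you have installed):

```python
table_evaluator.plot_mean_std()                # per-column means and standard deviations
table_evaluator.plot_cumsums()                 # cumulative sums per column
table_evaluator.plot_distributions()           # per-column distributions
table_evaluator.plot_correlation_difference()  # real vs. fake correlation matrices
table_evaluator.plot_pca()                     # 2D PCA projection of both sets
```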


The `evaluate` method gives us the most complete picture of how close the two datasets are.

```python
table_evaluator.evaluate(target_col='trans_type')
```


    Correlation metric: pearsonr

    Classifier F1-scores:
                                          real   fake
    real_data_LogisticRegression_F1     0.8200 0.8150
    real_data_RandomForestClassifier_F1 0.9800 0.9800
    real_data_DecisionTreeClassifier_F1 0.9600 0.9700
    real_data_MLPClassifier_F1          0.3500 0.6850
    fake_data_LogisticRegression_F1     0.7800 0.7650
    fake_data_RandomForestClassifier_F1 0.9300 0.9300
    fake_data_DecisionTreeClassifier_F1 0.9300 0.9400
    fake_data_MLPClassifier_F1          0.3600 0.6200

    Miscellaneous results:
                                      Result
    Column Correlation Distance RMSE          0.0399
    Column Correlation distance MAE           0.0296
    Duplicate rows between sets (real/fake)   (0, 0)
    nearest neighbor mean                     0.5655
    nearest neighbor std                      0.3726

    Results:
                                                    Result
    basic statistics                                0.9940
    Correlation column correlations                 0.9904
    Mean Correlation between fake and real columns  0.9566
    1 - MAPE Estimator results                      0.7843
    1 - MAPE 5 PCA components                       0.9138
    Similarity Score                                0.9278

The similarity score is an aggregate of the five other metrics in the Results section. Additionally, the F1/RMSE scores are printed, since they give valuable insight into the strengths and weaknesses of some of these models. Lastly, some miscellaneous results are printed, like the nearest-neighbor distance between each row in the fake dataset and the closest row in the real dataset, which provides insight into how much privacy the model retains. Note that the nearest-neighbor mean and standard deviation are computed on at most 20k rows, due to time and hardware limitations.
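
If you want these scores as data rather than printed text, recent versions of `evaluate` accept a `return_outputs` flag; treat the exact flag name as an assumption and verify it against your installed version:

```python
# Assumed behaviour: with return_outputs=True, `evaluate` returns its
# results as a dictionary instead of only printing them.
results = table_evaluator.evaluate(target_col='trans_type', return_outputs=True)
print(results)
```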


## Full Documentation
Please see the full documentation on [https://baukebrenninkmeijer.github.io/table-evaluator/](https://baukebrenninkmeijer.github.io/table-evaluator/).

## Motivation
To see the motivation for my decisions, please have a look at my master's thesis, available from [Radboud University](https://www.ru.nl/publish/pages/769526/z04_master_thesis_brenninkmeijer.pdf).

If you have any tips or suggestions, please contact me by email.


            
