DataProfiler

* Name: DataProfiler
* Version: 0.10.9
* Summary: What is in your data? Detect schema, statistics and entities in almost any file.
* Home page: https://github.com/capitalone/data-profiler
* Upload time: 2024-03-06 14:30:17
* Author: Jeremy Goodsitt, Taylor Turner, Michael Davis, Kenny Bean, Tyler Farnan
* Requires Python: >=3.8
* License: Apache License, Version 2.0
* Keywords: data investigation

![PyPI - Python Version](https://img.shields.io/pypi/pyversions/DataProfiler)
![GitHub](https://img.shields.io/github/license/CapitalOne/DataProfiler)
![GitHub last commit](https://img.shields.io/github/last-commit/CapitalOne/DataProfiler)
[![Downloads](https://static.pepy.tech/badge/dataprofiler)](https://pepy.tech/project/dataprofiler)


# Data Profiler | What's in your data?

The DataProfiler is a Python library designed to make data analysis, monitoring, and **sensitive data detection** easy.

When **loading** data, a single command automatically formats and loads the file into a DataFrame. When **profiling**, the library identifies the schema, statistics, entities (PII / NPI), and more. Data profiles can then be used in downstream applications or reports.

Getting started only takes a few lines of code ([example csv](https://raw.githubusercontent.com/capitalone/DataProfiler/main/dataprofiler/tests/data/csv/aws_honeypot_marx_geo.csv)):

```python
import json
from dataprofiler import Data, Profiler

data = Data("your_file.csv") # Auto-Detect & Load: CSV, AVRO, Parquet, JSON, Text, URL

print(data.data.head(5)) # Access data directly via a compatible Pandas DataFrame

profile = Profiler(data) # Calculate Statistics, Entity Recognition, etc

readable_report = profile.report(report_options={"output_format": "compact"})

print(json.dumps(readable_report, indent=4))
```
Note: The Data Profiler comes with a pre-trained deep learning model, used to efficiently identify **sensitive data** (PII / NPI). If desired, it's easy to add new entities to the existing pre-trained model or insert an entire new pipeline for entity recognition.

For API documentation, visit the [documentation page](https://capitalone.github.io/DataProfiler/).

If you have suggestions or find a bug, [please open an issue](https://github.com/capitalone/dataprofiler/issues/new/choose).

If you want to contribute, visit the [contributing page](https://github.com/capitalone/dataprofiler/blob/main/.github/CONTRIBUTING.md).

------------------

# Install

**To install the full package from PyPI**: `pip install DataProfiler[full]`

If you want to install the ML dependencies without the report-generation extras, use `DataProfiler[ml]`.

If the ML requirements are too strict (say, you don't want to install TensorFlow), you can install a slimmer package with `DataProfiler[reports]`. The slimmer package disables the default sensitive data detection / entity recognition (labeler).

To install the base package from PyPI: `pip install DataProfiler`

------------------

# What is a Data Profile?

In the case of this library, a data profile is a dictionary containing statistics and predictions about the underlying dataset. There are "global statistics" (`global_stats`), which contain dataset-level information, and "column/row level statistics" (`data_stats`), where each column gets its own entry.
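
As a minimal sketch (reusing the same calls as the getting-started example above), the resulting report can be navigated like any nested Python dictionary:

```python
import json
from dataprofiler import Data, Profiler

data = Data("your_file.csv")
profile = Profiler(data)

# Dataset-level entries live under "global_stats"; per-column entries are a
# list of dicts under "data_stats" (see the format documented below).
report = profile.report(report_options={"output_format": "compact"})

print(report["global_stats"]["column_count"])
for column in report["data_stats"]:
    print(column["column_name"], column["data_type"], column["data_label"])
```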

The format for a structured profile is below:

```
"global_stats": {
    "samples_used": int,
    "column_count": int,
    "row_count": int,
    "row_has_null_ratio": float,
    "row_is_null_ratio": float,
    "unique_row_ratio": float,
    "duplicate_row_count": int,
    "file_type": string,
    "encoding": string,
    "correlation_matrix": list[list[int]], (*)
    "chi2_matrix": list[list[float]],
    "profile_schema": {
        string: list[int]
    },
    "times": dict[string, float],
},
"data_stats": [
    {
        "column_name": string,
        "data_type": string,
        "data_label": string,
        "categorical": bool,
        "order": string,
        "samples": list[str],
        "statistics": {
            "sample_size": int,
            "null_count": int,
            "null_types": list[string],
            "null_types_index": {
                string: list[int]
            },
            "data_type_representation": dict[string, float],
            "min": [null, float, str],
            "max": [null, float, str],
            "mode": float,
            "median": float,
            "median_absolute_deviation": float,
            "sum": float,
            "mean": float,
            "variance": float,
            "stddev": float,
            "skewness": float,
            "kurtosis": float,
            "num_zeros": int,
            "num_negatives": int,
            "histogram": {
                "bin_counts": list[int],
                "bin_edges": list[float],
            },
            "quantiles": {
                int: float
            },
            "vocab": list[char],
            "avg_predictions": dict[string, float],
            "data_label_representation": dict[string, float],
            "categories": list[str],
            "unique_count": int,
            "unique_ratio": float,
            "categorical_count": dict[string, int],
            "gini_impurity": float,
            "unalikeability": float,
            "precision": {
                'min': int,
                'max': int,
                'mean': float,
                'var': float,
                'std': float,
                'sample_size': int,
                'margin_of_error': float,
                'confidence_level': float
            },
            "times": dict[string, float],
            "format": string
        },
        "null_replication_metrics": {
            "class_prior": list[int],
            "class_sum": list[list[int]],
            "class_mean": list[list[int]]
        }
    }
]
```
(*) The correlation matrix is currently disabled by default; it will be restored in a later update. Users can still compute it by setting its `is_enabled` option to True, as in the sketch below.
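
A minimal sketch of re-enabling it through `ProfilerOptions` (option path per the project docs; exact key names may vary by version):

```python
import dataprofiler as dp

# Enable the correlation matrix before profiling (it is disabled by default).
profile_options = dp.ProfilerOptions()
profile_options.set({"correlation.is_enabled": True})

data = dp.Data("your_file.csv")
profile = dp.Profiler(data, options=profile_options)

report = profile.report(report_options={"output_format": "compact"})
print(report["global_stats"]["correlation_matrix"])
```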

The format for an unstructured profile is below:
```
"global_stats": {
    "samples_used": int,
    "empty_line_count": int,
    "file_type": string,
    "encoding": string,
    "memory_size": float, # in MB
    "times": dict[string, float],
},
"data_stats": {
    "data_label": {
        "entity_counts": {
            "word_level": dict[string, int],
            "true_char_level": dict[string, int],
            "postprocess_char_level": dict[string, int]
        },
        "entity_percentages": {
            "word_level": dict[string, float],
            "true_char_level": dict[string, float],
            "postprocess_char_level": dict[string, float]
        },
        "times": dict[string, float]
    },
    "statistics": {
        "vocab": list[char],
        "vocab_count": dict[string, int],
        "words": list[string],
        "word_count": dict[string, int],
        "times": dict[string, float]
    }
}
```

The format for a graph profile is below:
```
"num_nodes": int,
"num_edges": int,
"categorical_attributes": list[string],
"continuous_attributes": list[string],
"avg_node_degree": float,
"global_max_component_size": int,
"continuous_distribution": {
    "<attribute_1>": {
        "name": string,
        "scale": float,
        "properties": list[float, np.array]
    },
    "<attribute_2>": None,
    ...
},
"categorical_distribution": {
    "<attribute_1>": None,
    "<attribute_2>": {
        "bin_counts": list[int],
        "bin_edges": list[float]
    },
    ...
},
"times": dict[string, float]

```

# Profile Statistic Descriptions

### Structured Profile

#### global_stats:

* `samples_used` - number of input data samples used to generate this profile
* `column_count` - the number of columns contained in the input dataset
* `row_count` - the number of rows contained in the input dataset
* `row_has_null_ratio` - the proportion of rows that contain at least one null value to the total number of rows
* `row_is_null_ratio` - the proportion of rows that are fully comprised of null values (null rows) to the total number of rows
* `unique_row_ratio` - the proportion of distinct rows in the input dataset to the total number of rows
* `duplicate_row_count` - the number of rows that occur more than once in the input dataset
* `file_type` - the format of the file containing the input dataset (ex: .csv)
* `encoding` - the encoding of the file containing the input dataset (ex: UTF-8)
* `correlation_matrix` - matrix of shape `column_count` x `column_count` containing the correlation coefficients between each column in the dataset
* `chi2_matrix` - matrix of shape `column_count` x `column_count` containing the chi-square statistics between each column in the dataset
* `profile_schema` - a description of the format of the input dataset labeling each column and its index in the dataset
    * `string` - the label of the column in question and its index in the profile schema
* `times` - the duration of time it took to generate the global statistics for this dataset in milliseconds

#### data_stats:

* `column_name` - the label/title of this column in the input dataset
* `data_type` - the primitive python data type that is contained within this column
* `data_label` - the label/entity of the data in this column as determined by the Labeler component
* `categorical` - ‘true’ if this column contains categorical data
* `order` - the way in which the data in this column is ordered, if any, otherwise “random”
* `samples` - a small subset of data entries from this column
* `statistics` - statistical information on the column
    * `sample_size` - number of input data samples used to generate this profile
    * `null_count` - the number of null entries in the sample
    * `null_types` - a list of the different null types present within this sample
    * `null_types_index` - a dict containing each null type and a list of the indices at which it appears in this sample
    * `data_type_representation` - the percentage of the samples used that are identified as each data type
    * `min` - minimum value in the sample
    * `max` - maximum value in the sample
    * `mode` - mode of the entries in the sample
    * `median` - median of the entries in the sample
    * `median_absolute_deviation` - the median absolute deviation of the entries in the sample
    * `sum` - the total of all sampled values from the column
    * `mean` - the average of all entries in the sample
    * `variance` - the variance of all entries in the sample
    * `stddev` - the standard deviation of all entries in the sample
    * `skewness` - the statistical skewness of all entries in the sample
    * `kurtosis` - the statistical kurtosis of all entries in the sample
    * `num_zeros` - the number of entries in this sample that have the value 0
    * `num_negatives` - the number of entries in this sample that have a value less than 0
    * `histogram` - contains histogram relevant information
        * `bin_counts` - the number of entries within each bin
        * `bin_edges` - the thresholds of each bin
    * `quantiles` - the value at each percentile in the order they are listed based on the entries in the sample
    * `vocab` - a list of the characters used within the entries in this sample
    * `avg_predictions` - average of the data label prediction confidences across all data points sampled
    * `categories` - a list of each distinct category within the sample if `categorical` = 'true'
    * `unique_count` - the number of distinct entries in the sample
    * `unique_ratio` - the proportion of the number of distinct entries in the sample to the total number of entries in the sample
    * `categorical_count` - number of entries sampled for each category if `categorical` = 'true'
    * `gini_impurity` - measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset
    * `unalikeability` - a value denoting how frequently entries differ from one another within the sample
    * `precision` - a dict of statistics with respect to the number of digits in a number for each sample
    * `times` - the duration of time it took to generate this sample's statistics in milliseconds
    * `format` - list of possible datetime formats
* `null_replication_metrics` - statistics of data partitioned based on whether column value is null (index 1 of lists referenced by dict keys) or not (index 0)
    * `class_prior` - a list containing probability of a column value being null and not null
    * `class_sum`- a list containing sum of all other rows based on whether column value is null or not
    * `class_mean`- a list containing mean of all other rows based on whether column value is null or not

### Unstructured Profile

#### global_stats:

* `samples_used` - number of input data samples used to generate this profile
* `empty_line_count` - the number of empty lines in the input data
* `file_type` - the file type of the input data (ex: .txt)
* `encoding` - file encoding of the input data file (ex: UTF-8)
* `memory_size` - size of the input data in MB
* `times` - duration of time it took to generate this profile in milliseconds

#### data_stats:

* `data_label` - labels and statistics on the labels of the input data
    * `entity_counts` - the number of times a specific label or entity appears inside the input data
        * `word_level` - the number of words counted within each label or entity
        * `true_char_level` - the number of characters counted within each label or entity as determined by the model
        * `postprocess_char_level` - the number of characters counted within each label or entity as determined by the postprocessor
    * `entity_percentages` - the percentages of each label or entity within the input data
        * `word_level` - the percentage of words in the input data that are contained within each label or entity
        * `true_char_level` - the percentage of characters in the input data that are contained within each label or entity as determined by the model
        * `postprocess_char_level` - the percentage of characters in the input data that are contained within each label or entity as determined by the postprocessor
    * `times` - the duration of time it took for the data labeler to predict on the data
* `statistics` - statistics of the input data
    * `vocab` - a list of each character in the input data
    * `vocab_count` - the number of occurrences of each distinct character in the input data
    * `words` - a list of each word in the input data
    * `word_count` - the number of occurrences of each distinct word in the input data
    * `times` - the duration of time it took to generate the vocab and words statistics in milliseconds

### Graph Profile
* `num_nodes` - number of nodes in the graph
* `num_edges` - number of edges in the graph
* `categorical_attributes` - list of categorical edge attributes
* `continuous_attributes` - list of continuous edge attributes
* `avg_node_degree` - average degree of nodes in the graph
* `global_max_component_size` - size of the global max component

#### continuous_distribution:
* `<attribute_N>` - name of the N-th edge attribute in the list of attributes
    * `name` - name of distribution for attribute
    * `scale` - negative log likelihood used to scale and compare distributions
    * `properties` - list of statistical properties describing the distribution
        * [shape (optional), loc, scale, mean, variance, skew, kurtosis]


#### categorical_distribution:
* `<attribute_N>` - name of the N-th edge attribute in the list of attributes
    * `bin_counts` - counts in each bin of the distribution histogram
    * `bin_edges` - edges of each bin of the distribution histogram

* `times` - duration of time it took to generate this profile in milliseconds

# Support

### Supported Data Formats

* Any delimited file (CSV, TSV, etc.)
* JSON object
* Avro file
* Parquet file
* Text file
* Pandas DataFrame
* A URL that points to one of the supported file types above

### Data Types

*Data Types* are determined at the column level for structured data

* Int
* Float
* String
* DateTime

### Data Labels

*Data Labels* are determined per cell for structured data (or per column/row when the *profiler* is used) and at the character level for unstructured data. A usage sketch for the data labeler follows the list below.

* UNKNOWN
* ADDRESS
* BAN (bank account number, 10-18 digits)
* CREDIT_CARD
* EMAIL_ADDRESS
* UUID
* HASH_OR_KEY (md5, sha1, sha256, random hash, etc.)
* IPV4
* IPV6
* MAC_ADDRESS
* PERSON
* PHONE_NUMBER
* SSN
* URL
* US_STATE
* DRIVERS_LICENSE
* DATE
* TIME
* DATETIME
* INTEGER
* FLOAT
* QUANTITY
* ORDINAL
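
A hedged sketch of calling the pre-trained labeler directly (class and argument names per the project docs; treat them as assumptions if your version differs):

```python
import dataprofiler as dp

data = dp.Data("your_file.csv")

# Load the pre-trained structured data labeler and predict an entity label per cell.
labeler = dp.DataLabeler(labeler_type="structured")
predictions = labeler.predict(data)
print(predictions)
```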

# Get Started

### Load a File

The Data Profiler can profile the following data/file types:

* CSV file (or any delimited file)
* JSON object
* Avro file
* Parquet file
* Text file
* Pandas DataFrame
* A URL that points to one of the supported file types above

The profiler should automatically identify the file type and load the data into a `Data` class.

Along with other attributes, the `Data` class enables the data to be accessed via a valid Pandas DataFrame.

```python
# Load a csv file, return a CSVData object
csv_data = Data('your_file.csv')

# Print the first 10 rows of the csv file
print(csv_data.data.head(10))

# Load a parquet file, return a ParquetData object
parquet_data = Data('your_file.parquet')

# Sort the data by the name column
parquet_data.data.sort_values(by='name', inplace=True)

# Print the sorted first 10 rows of the parquet data
print(parquet_data.data.head(10))

# Load a json file from a URL, return a JSONData object
json_data = Data('https://github.com/capitalone/DataProfiler/blob/main/dataprofiler/tests/data/json/iris-utf-8.json')
```

If the file type is not automatically identified (rare), you can specify it explicitly; see the section [Specifying a Filetype or Delimiter](#specifying-a-filetype-or-delimiter) and the sketch below.
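
A minimal sketch, assuming the `data_type` and `options` arguments described in the project docs (treat the exact keys as assumptions):

```python
from dataprofiler import Data
from dataprofiler.data_readers.csv_data import CSVData

# Force the CSV reader and set the delimiter explicitly.
data = Data("your_file.tsv", data_type="csv", options={"delimiter": "\t"})

# Equivalently, construct the specific data reader directly.
csv_data = CSVData("your_file.tsv", options={"delimiter": "\t"})
```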

### Profile a File

This example uses a CSV file, but JSON, Avro, Parquet, and text files also work.

```python
import json
from dataprofiler import Data, Profiler

# Load file (CSV should be automatically identified)
data = Data("your_file.csv")

# Profile the dataset
profile = Profiler(data)

# Generate a report and use json to prettify.
report  = profile.report(report_options={"output_format": "pretty"})

# Print the report
print(json.dumps(report, indent=4))
```
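
Besides `"pretty"`, the report supports a few other output formats (format names per the project docs; exact behavior may vary by version):

```python
import json
from dataprofiler import Data, Profiler

profile = Profiler(Data("your_file.csv"))

# "compact" trims some detailed statistics, "serializable" returns a dict safe
# for json.dumps, and "flat" flattens the nested keys (descriptions per the docs).
compact_report = profile.report(report_options={"output_format": "compact"})
serializable_report = profile.report(report_options={"output_format": "serializable"})
print(json.dumps(serializable_report, indent=4))
```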

### Updating Profiles

Currently, the data profiler is equipped to update its profile in batches.

```python
import json
from dataprofiler import Data, Profiler

# Load and profile a CSV file
data = Data("your_file.csv")
profile = Profiler(data)

# Update the profile with new data:
new_data = Data("new_data.csv")
profile.update_profile(new_data)

# Print the report using json to prettify.
report  = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))
```

Note that if the data used to update the profile contains integer indices that overlap with the indices of the originally profiled data, the indices are "shifted" to unoccupied values when null rows are calculated, so that null counts and ratios remain accurate.

### Merging Profiles

If you have two files with the same schema (but different data), it is possible to merge the two profiles together via an addition operator.

This also enables profiles to be determined in a distributed manner.

```python
import json
from dataprofiler import Data, Profiler

# Load a CSV file with a schema
data1 = Data("file_a.csv")
profile1 = Profiler(data1)

# Load another CSV file with the same schema
data2 = Data("file_b.csv")
profile2 = Profiler(data2)

profile3 = profile1 + profile2

# Print the report using json to prettify.
report  = profile3.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))
```

Note that if the merged profiles had overlapping integer indices, the indices are "shifted" to unoccupied values when null rows are calculated, so that null counts and ratios remain accurate.

### Profiler Differences
To find the changes between two profiles with the same schema, use the profile's `diff` function. The `diff` reports overall file and sampling differences as well as detailed differences in the data's statistics. For example, numerical columns report both a t-test to evaluate similarity and PSI (Population Stability Index) to quantify distribution shift. More information is available in the Profiler section of the [GitHub Pages](https://capitalone.github.io/DataProfiler/).

Create the difference report like this:
```python
import json
import dataprofiler as dp

# Load a CSV file
data1 = dp.Data("file_a.csv")
profile1 = dp.Profiler(data1)

# Load another CSV file
data2 = dp.Data("file_b.csv")
profile2 = dp.Profiler(data2)

diff_report = profile1.diff(profile2)
print(json.dumps(diff_report, indent=4))
```

### Profile a Pandas DataFrame
```python
import pandas as pd
import dataprofiler as dp
import json

my_dataframe = pd.DataFrame([[1, 2.0],[1, 2.2],[-1, 3]])
profile = dp.Profiler(my_dataframe)

# print the report using json to prettify.
report = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))

# read a specified column, in this case it is labeled 0:
print(json.dumps(report["data_stats"][0], indent=4))
```

### Unstructured Profiler
In addition to the structured profiler, DataProfiler provides unstructured profiling for `TextData` objects or strings. The unstructured profiler also works with `list[string]`, `pd.Series(string)`, or `pd.DataFrame(string)` when the `profiler_type` option is set to `'unstructured'`. Below is an example of the unstructured profiler with a text file.
```python
import dataprofiler as dp
import json

my_text = dp.Data('text_file.txt')
profile = dp.Profiler(my_text)

# print the report using json to prettify.
report = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))
```
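
The resulting report follows the unstructured format documented earlier; a quick sketch of pulling entity counts out of it (keys per that format):

```python
import dataprofiler as dp

my_text = dp.Data('text_file.txt')
profile = dp.Profiler(my_text)
report = profile.report(report_options={"output_format": "compact"})

# Word-level entity counts, keyed as in the unstructured profile format above.
word_level_counts = report["data_stats"]["data_label"]["entity_counts"]["word_level"]
print(word_level_counts)
```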

Below is another example of the unstructured profiler, applied to a `pd.Series` of strings with the profiler option `profiler_type='unstructured'`:
```python
import dataprofiler as dp
import pandas as pd
import json

text_data = pd.Series(['first string', 'second string'])
profile = dp.Profiler(text_data, profiler_type='unstructured')

# print the report using json to prettify.
report = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))
```

### Graph Profiler
DataProfiler also provides the ability to profile graph data from a CSV file. Below is an example of the graph profiler with a graph-data CSV file:
```python
import dataprofiler as dp
import pprint

my_graph = dp.Data('graph_file.csv')
profile = dp.Profiler(my_graph)

# print the report using pretty print (json dump does not work on numpy array values inside dict)
report = profile.report()
printer = pprint.PrettyPrinter(sort_dicts=False, compact=True)
printer.pprint(report)
```

**Visit the [documentation page](https://capitalone.github.io/DataProfiler/) for additional Examples and API details**

# References
```
Sensitive Data Detection with High-Throughput Neural Network Models for Financial Institutions
Authors: Anh Truong, Austin Walters, Jeremy Goodsitt
2020 https://arxiv.org/abs/2012.09597
The AAAI-21 Workshop on Knowledge Discovery from Unstructured Data in Financial Services
```

            
