# GLocalX - Global through Local Explainability
Explanations come in two forms: local, explaining a single model prediction, and global, explaining all model predictions. The Local to Global (L2G) problem consists in bridging these two families of explanations. Simply put, we generate global explanations by merging local ones.
## The algorithm
Local and global explanations are provided in the form of decision rules:
```
age < 40, income > 50000, status = married, job = CEO ⇒ grant loan application
```
This rule describes the rationale followed by an unexplainable model to grant the loan application of an individual younger than 40 years old, with an income above $50,000, married, and currently working as a CEO.
---
## Setup
```bash
git clone https://github.com/msetzu/glocalx/
cd glocalx
```
Dependencies are listed in `requirements.txt`; a virtual environment is advised:
```bash
mkvirtualenv glocalx  # optional but recommended
pip3 install -r requirements.txt
```
## Running the code
Installing from PyPI does not give direct access to the quickstart data; first download the `data/` folder from the [original repository](https://github.com/msetzu/glocalx/tree/main).
### Python interface
```python
from numpy import genfromtxt
from tensorflow.keras.models import load_model
import logzero
from glocalx.glocalx import GLocalX, shut_up_tensorflow
from glocalx.models import Rule
# Set log profile: INFO for normal logging, DEBUG for verbosity
logzero.loglevel(logzero.logging.INFO)
shut_up_tensorflow()
# Load black box: optional! Use black_box = None to use the dataset labels
black_box = load_model('data/dummy/dummy_model.h5')
# Load data and header
data = genfromtxt('data/dummy/dummy_dataset.csv', delimiter=',', names=True)
features_names = data.dtype.names
tr_set = data.view(float).reshape(data.shape + (-1,))
# Load local explanations
local_explanations = Rule.from_json('data/dummy/dummy_rules.json', names=features_names)
# Create a GLocalX instance for `black_box`
glocalx = GLocalX(oracle=black_box)
# Fit the model, use batch_size=128 for larger datasets
glocalx = glocalx.fit(local_explanations, tr_set, batch_size=2, name='black_box_explanations')
# Retrieve global explanations by fidelity
alpha = 0.5
global_explanations = glocalx.rules(alpha, tr_set)
# Retrieve global explanations by fidelity percentile
alpha = 95
global_explanations = glocalx.rules(alpha, tr_set, is_percentile=True)
# Retrieve exactly `alpha` global explanations, `alpha/2` per class
alpha = 10
global_explanations = glocalx.rules(alpha, tr_set)
```
### Command line interface
You can invoke `GLocalX` from the command-line interface in `api.py`:
```bash
> python3 api.py --help
Usage: api.py [OPTIONS] RULES TR

Options:
  -o, --oracle PATH
  --names TEXT                   Features names.
  -cbs, --callbacks FLOAT        Callback step, either int or float. Defaults
                                 to 0.1
  -m, --name TEXT                Name of the log files.
  --generate TEXT                Number of records to generate, if given.
                                 Defaults to None.
  -i, --intersect TEXT           Whether to use coverage intersection
                                 ('coverage') or polyhedra intersection
                                 ('polyhedra'). Defaults to 'coverage'.
  -f, --fidelity_weight FLOAT    Fidelity weight. Defaults to 1.
  -c, --complexity_weight FLOAT  Complexity weight. Defaults to 1.
  -a, --alpha FLOAT              Pruning factor. Defaults to 0.5
  -b, --batch INTEGER            Batch size. Set to -1 for full batch.
                                 Defaults to 128.
  -u, --undersample FLOAT        Undersample size, to use a percentage of the
                                 rules. Defaults to 1.0 (No undersample).
  --strict_join                  Use to use high concordance.
  --strict_cut                   Use to use the strong cut.
  --global_direction             Use to use the global search direction.
  -d, --debug INTEGER            Debug level.
  --help                         Show this message and exit.
```
A minimal run simply requires a set of local input rules, a training set, and (optionally) a black box. The folder `data/dummy/` contains a dummy example, which you can run with:
```bash
python3 api.py data/dummy/dummy_rules.json data/dummy/dummy_dataset.csv --oracle data/dummy/dummy_model.h5 --name dummy --batch 2
```
To run on your own data:
```bash
python3 api.py my_rules.json training_set.csv --oracle my_black_box.h5 \
    --name my_black_box_explanations
```
The remaining hyperparameters are optional:
- `--names $names` Comma-separated feature names
- `--callbacks $callbacks` Callback step, either int or float. Callbacks are invoked every `--callbacks` iterations of the algorithm
- `--name $name` Name of the log files and output. The script dumps here all the additional info as it executes
- `--generate $size` Number of synthetic records to generate, if you don't wish to use the provided training set
- `--intersect $strategy` Intersection strategy: either `coverage` or `polyhedra`. Defaults to `coverage`
- `--fidelity_weight $fidelity` Fidelity weight to reward accurate yet complex models. Defaults to 1.
- `--complexity_weight $complexity` Complexity weight to reward simple yet less accurate models. Defaults to 1.
- `--alpha $alpha` Pruning factor. Defaults to 0.5
- `--batch $batch` Batch size. Set to -1 for full batch. Defaults to 128.
- `--undersample $pct` Undersample size, to use a percentage of the input rules. Defaults to 1.0 (all rules)
- `--strict_join` Use a more stringent `join`
- `--strict_cut` Use a more stringent `cut`
- `--global_direction` Use to evaluate merges on the whole validation set
- `--debug` Debug level: the higher, the fewer messages shown
## Validation
`GLocalX` outputs a set of rules (a list of `models.Rule`) stored in a `$name.rules.glocalx.alpha=$alpha.json` file.
The `evaluators.validate` function provides a simple interface for validation. If you wish to extend it, you can directly extend either the `evaluators.DummyEvaluator` or `evaluators.MemEvaluator` class.
---
## Run on your own dataset
GLocalX has strict requirements on the input data format: it accepts tabular datasets for binary classification tasks. You can find a dummy example of each format in `/data/dummy/`.
#### Local rules
We provide an integration with [Lore](https://github.com/riccotti/LORE) rules through the `loaders` module.
To convert Lore rules to GLocalX rules, use the provided `loaders.lore.lore_to_glocalx(json_file, info_file)` function. `json_file` should be the path to a JSON file with Lore's rules, and `info_file` should be the path to a JSON dictionary holding the class values (key `class_names`) and a list of the features' names (key `feature_names`).
You can find a dummy example of the info file in [data/loaders/adult_info.json](https://github.com/msetzu/glocalx/blob/main/data/loaders/adult_info.json).
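For instance (a minimal sketch; `lore_rules.json` is a hypothetical path to Lore's output, and the import assumes the `glocalx` package layout used in the quickstart above):
```python
from glocalx.loaders import lore

# Convert Lore rules into GLocalX rules; the info file holds the
# 'class_names' and 'feature_names' keys described above.
local_explanations = lore.lore_to_glocalx('lore_rules.json',
                                          'data/loaders/adult_info.json')
```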
More examples on other local rule extractors to follow.
#### Rules [`/data/dummy/dummy_rules.json`]
Local rules are to be stored in a `JSON` format:
```json
[
{"22": [30.0, 91.9], "23": [-Infinity, 553.3], "label": 0},
...
]
```
Each rule in the list is a dictionary with an arbitrary number of premises. The rule prediction ({0, 1}) is stored in the key `label`. Premises on features are stored according to their index and bounds: in the above, `"22": [30.0, 91.9]` indicates the premise "feature number 22 has a value between 30.0 and 91.9".
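As an illustration of these semantics, the following standalone sketch (not part of the library; inclusive bounds and the random record are assumptions) checks whether a record satisfies a rule directly from the JSON:
```python
import json
import numpy as np

# Python's json module parses -Infinity into float('-inf') out of the box.
with open('data/dummy/dummy_rules.json') as f:
    rules = json.load(f)

rule = rules[0]
record = np.random.rand(30)  # hypothetical record covering the premise indices

# Keep the premises (feature index -> [lower, upper]) and drop the label.
premises = {int(k): bounds for k, bounds in rule.items() if k != 'label'}
satisfied = all(low <= record[i] <= high for i, (low, high) in premises.items())
if satisfied:
    print('rule fires, predicting label', rule['label'])
else:
    print('rule does not cover this record')
```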
#### Black boxes [`/data/dummy/dummy_model.h5`]
Black boxes (if used) are to be stored in `hdf5` format when given through the command line. If given programmatically instead, it suffices that they implement the `Predictor` interface:
```python
from abc import abstractmethod

class Predictor:
    @abstractmethod
    def predict(self, x):
        pass
```
When called on a `numpy.ndarray` `x`, the predictor shall return its predictions as a `numpy.ndarray` of integers.
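For example, a fitted scikit-learn classifier can be adapted with a thin wrapper. This is a minimal sketch: the wrapper class, model, and random data below are illustrative, not part of the library.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from glocalx.glocalx import GLocalX

class SklearnPredictor:
    """Adapts a fitted scikit-learn classifier to the Predictor interface."""
    def __init__(self, model):
        self.model = model

    def predict(self, x):
        # GLocalX expects integer predictions in a numpy.ndarray.
        return self.model.predict(x).astype(int)

# Hypothetical usage with random data:
x_train, y_train = np.random.rand(100, 4), np.random.randint(0, 2, 100)
black_box = SklearnPredictor(RandomForestClassifier().fit(x_train, y_train))
glocalx = GLocalX(oracle=black_box)
```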
#### Training data [`/data/dummy/dummy_dataset.csv`]
Training data is to be stored in comma-separated CSV format with feature names as header. The classification label column must be named `y`.
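For instance, a compliant training set can be written as follows (a minimal sketch with random, purely illustrative data and feature names):
```python
import numpy as np
import pandas as pd

# Three features with a header row, plus the binary label column named 'y'.
data = pd.DataFrame(np.random.rand(100, 3), columns=['f0', 'f1', 'f2'])
data['y'] = np.random.randint(0, 2, size=100)
data.to_csv('training_set.csv', index=False)
```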
---
## Docs and reference
You can find the software documentation in the `/html/` folder; a presentation on GLocalX is available [here](https://docs.google.com/presentation/d/12Nv2MRlvpQfwk9A8TeN6QQwnVUKgE00V-ZWGS2FV5p8/edit?usp=sharing).
You can cite this work with:
```
@article{SETZU2021103457,
title = {GLocalX - From Local to Global Explanations of Black Box AI Models},
journal = {Artificial Intelligence},
volume = {294},
pages = {103457},
year = {2021},
issn = {0004-3702},
doi = {https://doi.org/10.1016/j.artint.2021.103457},
url = {https://www.sciencedirect.com/science/article/pii/S0004370221000084},
author = {Mattia Setzu and Riccardo Guidotti and Anna Monreale and Franco Turini and Dino Pedreschi and Fosca Giannotti},
keywords = {Explainable AI, Global explanation, Local explanations, Interpretable models, Open the black box},
}
```
---
## Useful functions & Overrides
#### Callbacks
The `fit()` function provides a `callbacks` parameter to register callbacks invoked every `callbacks_step` iterations. Each callback should implement the `callbacks.Callback` interface. You can find the set of parameters available to the callback in `glocalx.GLocalX.fit()`.
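As a purely hypothetical sketch (the exact `callbacks.Callback` interface, its module path, and the keyword arguments passed to it should be checked in the `callbacks` module and in `glocalx.GLocalX.fit()`), a logging callback might look like:
```python
from glocalx.callbacks import Callback  # assumed module path

class IterationLogger(Callback):
    # Assumed invocation style: called with keyword arguments from fit().
    def __call__(self, **kwargs):
        print('GLocalX checkpoint, available parameters:', sorted(kwargs))

# Hypothetical usage:
# glocalx.fit(local_explanations, tr_set, callbacks=[IterationLogger()],
#             callbacks_step=10)
```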
#### Serialization and deserialization
You can dump to disk and load `GLocalX` instances and their output with the `Rule` object and the `serialization` module:
```python
from glocalx.models import Rule
from glocalx import serialization
rules_only_json = 'input_rules.json'
run_file = 'my_run.glocalx.json'
# Load input rules
rules = Rule.from_json(rules_only_json)
# Load GLocalX output
glocalx_output = serialization.load_run(run_file)
# Load a GLocalX instance from a set of rules, regardless of whether they come from an actual run or not!
# From a GLocalX run...
glocalx = serialization.load_glocalx(run_file, is_glocalx_run=True)
# ...or from a plain set of input rules
glocalx = serialization.load_glocalx(rules_only_json, is_glocalx_run=False)
```
#### Extending the `merge` function
To override the merge function, extend the `glocalx.GLocalX` class and override the `merge` method with the following signature:
```python
merge(self, A: set, B: set, x: numpy.ndarray, y: numpy.ndarray, ids: numpy.ndarray)
```
where `A` and `B` are the sets of `models.Rule` you are merging, `x` is the training data, `y` are the training labels and `ids` are the batch ids. The ids are used by the `MemEvaluator` to store pre-computed results.
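A minimal sketch of such an override, which simply traces the candidate sets before delegating to the default `merge` (assuming the parent implementation can be reused via `super()`):
```python
import numpy as np
from glocalx.glocalx import GLocalX

class TracingGLocalX(GLocalX):
    def merge(self, A: set, B: set, x: np.ndarray, y: np.ndarray, ids: np.ndarray):
        # Inspect the candidate rule sets, then fall back to the default merge.
        print(f'Merging {len(A)} rules with {len(B)} rules on {len(ids)} records')
        return super().merge(A, B, x, y, ids)
```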
#### Extending the `distance` function
The `distance` between explanations is computed by the `evaluators.Evaluator` objects. To override it, extend either the `evaluators.DummyEvaluator` or the `evaluators.MemEvaluator` class and override `distance` with the following signature:
```python
distance(self, A: set, B: set, x: numpy.ndarray, ids: numpy.ndarray) -> numpy.ndarray
```
where `A`, `B` are the two (sets of) explanation(s), `x` is the training data and `ids` are the ids for the current batch.
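The same subclassing pattern applies here; for example (a sketch assuming the module path `glocalx.evaluators` and that the base `distance` can be reused via `super()`):
```python
import numpy as np
from glocalx.evaluators import MemEvaluator  # assumed module path

class ScaledEvaluator(MemEvaluator):
    def distance(self, A: set, B: set, x: np.ndarray, ids: np.ndarray) -> np.ndarray:
        # Illustrative only: rescale the base distance between explanation sets.
        return 0.5 * super().distance(A, B, x, ids)
```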
#### Extending the merge acceptance criterion
Whether a merge is accepted or rejected is decided by the `glocalx.GLocalX.accept_merge()` function with signature:
```python
accept_merge(union: set, merge: set, **kwargs) -> bool
```
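For instance, a stricter acceptance policy could be sketched as follows (an illustrative criterion, not the library default; we assume `accept_merge` is an instance method whose base behavior can be reused via `super()`):
```python
from glocalx.glocalx import GLocalX

class ConservativeGLocalX(GLocalX):
    def accept_merge(self, union: set, merge: set, **kwargs) -> bool:
        # Only consider merges that actually shrink the rule set,
        # then defer to the standard acceptance test.
        if len(merge) >= len(union):
            return False
        return super().accept_merge(union, merge, **kwargs)
```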