# revelionn

* **Name**: revelionn
* **Version**: 1.0.2
* **Home page**: https://github.com/cais-lab/revelionn
* **Summary**: Retrospective Extraction of Visual and Logical Insights for Ontology-based interpretation of Neural Networks
* **Upload time**: 2023-09-04 01:27:09
* **Author**: CAIS Lab
* **Requires Python**: >=3.9
* **License**: BSD 3-Clause
* **Keywords**: explainable AI, XAI, interpretation, black-box, convolutional neural network, ontology, concept extraction, visual explanation, logical explanation
<p align="center">
<b>Retrospective Extraction of Visual and Logical Insights for Ontology-based interpretation of Neural Networks</b>
</p>
 
<b>RevelioNN</b> is an open-source library of post-hoc algorithms for explaining the predictions of deep convolutional 
neural networks for binary classification using ontologies. The algorithms are based on constructing mapping 
networks that link the internal representations of a convolutional neural network with ontology concepts. 
The development of this library was inspired by the paper in which this approach to the interpretation of 
neural networks was proposed:

*  M. de Sousa Ribeiro and J. Leite, “Aligning Artificial Neural Networks and Ontologies towards Explainable AI,” in 35th AAAI Conference on Artificial Intelligence, AAAI 2021, May 2021, vol. 6A, no. 6, pp. 4932–4940. doi: [10.1609/aaai.v35i6.16626](https://doi.org/10.1609/aaai.v35i6.16626).

## How the library works
The convolutional neural network whose predictions need to be explained is called the “main” network. When an image is 
passed through it, the output is the probability of some target class, which is at the same time a concept of the ontology. 
The activations that the “main” network produces as the image passes through it serve as input data for the mapping networks. 
The outputs of the mapping networks are the probabilities of each of the concepts relevant to the target class, that is, 
the concepts involved in its definition. Knowing the probability of each of these concepts, it becomes possible 
to form logical and visual explanations.
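
As a sketch of this pipeline (hypothetical function and parameter names; RevelioNN wraps this logic in its own modules), the idea amounts to capturing the activations of a chosen layer with a forward hook and feeding them to a mapping network:

```python
import torch

def concept_probabilities(main_net, mapping_net, layer, image_batch):
    """Capture a layer's activations from the main network and map them to concept probabilities."""
    captured = {}

    def hook(_module, _inputs, output):
        captured['acts'] = output.flatten(1)  # activations of the watched layer, flattened per sample

    handle = layer.register_forward_hook(hook)
    target_prob = torch.sigmoid(main_net(image_batch))  # prediction of the "main" network
    handle.remove()
    concept_probs = mapping_net(captured['acts'])       # probabilities of the relevant concepts
    return target_prob, concept_probs
```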

### Logical Explanations
By extracting relevant concepts, it is possible to form logical explanations of why a sample belongs to the 
target concept, supported by a set of ontology axioms.
The input image used in this example was taken from the [SCDB dataset](https://github.com/adriano-lucieri/SCDB), on which the “main” network and the mapping networks were trained. 
This image belongs to class <i>C1</i>. An image is classified as <i>C1</i> if the concepts <i>Hexagon</i> ⊓ <i>Star</i>, 
<i>Ellipse</i> ⊓ <i>Star</i>, or <i>Triangle</i> ⊓ <i>Ellipse</i> ⊓ <i>Starmarker</i> are present. An example of a logical 
explanation produced by ontological inference for this sample is given below.

```console
 The image is classified as ['C1'].

 The following concepts were extracted from the image:
 ['HexStar', 'EllStar', 'NotTEStarmarker', 'Hexagon', 'Star', 'Ellipse', 'NotTriangle', 'NotStarmarker']
 with the following probabilities:
 [0.99938893, 0.99976605, 0.9937676684930921, 0.99947304, 0.9999995, 0.99962604, 0.9861229043453932, 0.9810010809451342]

 Justification for '__input__ Type C1':	(Degree of Belief: 0.99963)
 	__input__ Type has some Star	("0.9999995")
 	__input__ Type has some Ellipse	("0.99962604")
 	(has some Ellipse) and (has some Star) SubClassOf EllStar
 	C1 EquivalentTo EllStar or HexStar or TEStarmarker
```

Each extracted concept has an associated probability, which is then used to calculate the degree of belief of 
each justification. The list of possible justifications is ranked by this degree of belief.
If a concept has not been extracted, the opposite concept is considered extracted; its name 
is formed automatically by adding the prefix 'Not'.
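
For instance, the degree of belief reported in the sample output above is consistent with the product of the probabilities of the concepts used in the justification (an observation about this example, not a specification of the reasoner's exact algorithm):

```python
# Probabilities of 'Star' and 'Ellipse' from the sample output above.
p_star, p_ellipse = 0.9999995, 0.99962604
print(round(p_star * p_ellipse, 5))  # 0.99963 -- matches the reported Degree of Belief
```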

The above example shows one of the explanations from the list of possible explanations. It can be interpreted as 
follows. The concepts <i>Star</i> and <i>Ellipse</i> were extracted from the image. Based on the ontology axiom that 
the conjunction of the concepts <i>Star</i> and <i>Ellipse</i> is a subclass of <i>EllStar</i>, we can conclude that the image also 
represents <i>EllStar</i>. According to another axiom, the target concept <i>C1</i> is equivalent to the disjunction of 
<i>EllStar</i>, <i>HexStar</i>, and <i>TEStarmarker</i>, so <i>EllStar</i> implies <i>C1</i>. Thus, the 
prediction of the neural network is confirmed by ontological reasoning.

### Visual Explanations

Visual explanations highlight positively extracted concepts in the image. Currently, visual explanations are 
formed using the occlusion method: the input image is systematically occluded by a square patch of a given size, 
moved across the image with a given stride. At each step, the occluded image is run through the “main” network, and its 
activations are run through the mapping network. From the output probabilities obtained at each step, a saliency map can 
be formed.
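
A generic sketch of this occlusion procedure (assuming a hypothetical `predict_concept` callable that maps an image tensor to a concept probability; RevelioNN's own implementation is ``perform_occlusion()``, shown later):

```python
import torch

def occlusion_saliency(image, predict_concept, window=20, stride=5):
    """Slide a grey square over the image; drops in concept probability mark salient regions."""
    _, h, w = image.shape                    # image is a (C, H, W) tensor
    base = predict_concept(image)            # probability without occlusion
    saliency = torch.zeros(h, w)
    counts = torch.zeros(h, w)
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            occluded = image.clone()
            occluded[:, top:top + window, left:left + window] = 0.5  # grey patch
            drop = base - predict_concept(occluded)                  # probability drop
            saliency[top:top + window, left:left + window] += drop
            counts[top:top + window, left:left + window] += 1
    return saliency / counts.clamp(min=1)    # average drop per covered pixel
```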

## RevelioNN Features
### Mapping Networks
The library implements two types of mapping networks whose parameters can be flexibly customized by the user.

| Type of mapping network       | Features                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Single mapping network        | A fully connected neural network whose number of input neurons is determined by the number of neuron activations of the specified convolutional network layers. It has ReLU activation functions in its hidden layers and a sigmoid at its output.<br/>It is best suited to extracting a single concept from one or more given convolutional network layers.<br/>The user can vary the number of layers and the number of neurons in each layer of this mapping network. |
| Simultaneous mapping network  | Its architecture allows many concepts to be extracted simultaneously, receiving the activations of all specified layers of the convolutional network at once.<br/>It takes the 2D image structure into account and is less prone to overfitting than single mapping networks.<br/>It also shows good results in semi-supervised learning with semantic loss, which strengthens the relationships between concepts. |
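
For illustration, a single mapping network as described above amounts to something like the following sketch (hypothetical helper; RevelioNN constructs its mapping networks internally):

```python
import torch.nn as nn

def make_single_mapping_net(n_activations: int, hidden_sizes: list[int]) -> nn.Sequential:
    """Fully connected network: ReLU hidden layers, sigmoid output for one concept."""
    layers: list[nn.Module] = []
    in_size = n_activations  # determined by the activations of the chosen CNN layers
    for h in hidden_sizes:
        layers += [nn.Linear(in_size, h), nn.ReLU()]
        in_size = h
    layers += [nn.Linear(in_size, 1), nn.Sigmoid()]  # probability of the concept
    return nn.Sequential(*layers)
```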

### Extraction Algorithms

| Extraction algorithm    | Type of mapping network      | What it does                                                                                                                           |
|-------------------------|------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| Exhaustive search       | Single mapping network       | Trains and evaluates mapping networks based on the activations of each of the specified layers of the convolutional network            |
| Heuristic search        | Single mapping network       | Due to a heuristic reduction of the set of specified layers, mapping networks are not trained for every layer-concept combination |
| Simultaneous extraction | Simultaneous mapping network | Trains a mapping network that can simultaneously extract a set of relevant concepts from the entire set of layers of specified types   |

## How to Use
RevelioNN can interpret convolutional binary classification networks that were trained without using this 
library. Note that the network class must inherit from the ``torch.nn.Module`` class, that is, your network 
must be implemented using PyTorch. The model must then be converted to the RevelioNN format.

To use the API, follow these steps:
1. To convert your model to the RevelioNN format, your network class must be described in a separate file, in which 
the following variables must also be declared:
   * a variable storing the number of channels of the images fed to the network;
   * a variable storing the side size of the images fed to the network;
   * a ``torchvision.transforms`` object representing the transformation applied to the images.

   Examples of network descriptions are given in the [main_net_classes](https://github.com/cais-lab/revelionn/tree/main/main_net_classes) directory; a minimal sketch of such a file is shown below.
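
   A minimal sketch of a network description file (the class body is hypothetical and purely illustrative; the variable names match those imported in the next step):
    ```
    import torch.nn as nn
    from torchvision import transforms

    NUM_CHANNELS = 3       # number of channels of the input images
    IMG_SIDE_SIZE = 200    # side size of the (square) input images

    transformation = transforms.Compose([
        transforms.Resize((IMG_SIDE_SIZE, IMG_SIDE_SIZE)),
        transforms.ToTensor(),
    ])

    class MyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(NUM_CHANNELS, 16, kernel_size=3, padding=1),
                nn.BatchNorm2d(16),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(16, 1)  # binary classification

        def forward(self, x):
            x = self.features(x).flatten(1)
            return self.classifier(x)
    ```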
2. Next, you need to initialize your convolutional neural network model.
    ```
    import torch

    from main_net_classes.resnet18_scdb import ResNet18, NUM_CHANNELS, IMG_SIDE_SIZE, transformation

    main_net = ResNet18()
    main_net.load_state_dict(torch.load('SCDB_ResNet18_C1.pt'))
    ```

3. Import the ``convert_to_rvl_format()`` function:
    ```
    from revelionn.utils.model import convert_to_rvl_format
    ```

    Call this function, passing the previously declared network model and its metadata as parameters:
    ```
    convert_to_rvl_format(main_net, 'SCDB_ResNet18_C1', 'C1', 'resnet18_scdb', 'ResNet18', 'transformation', IMG_SIDE_SIZE, NUM_CHANNELS)
    ```
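
    Judging by the names reused across these steps, the arguments appear to be: the model object, a name for the converted model (saved as `SCDB_ResNet18_C1.rvl` and used in the next step), the target concept (`C1`), the module and class names of the network description (`resnet18_scdb`, `ResNet18`), the name of the transformation variable, and the image side size and number of channels.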

4. After the main network has been successfully converted to the RevelioNN format, mapping networks can be trained. 
Below is an example of training a simultaneous extraction network, in which activations are extracted from all batch normalization layers (specified by the value 'bn').
    ```
    import os

    import torch

    from revelionn.mapping_trainer import MappingTrainer

    # root_path is assumed to point to the root of the RevelioNN project directory
    device = torch.device('cuda')
    trainer = MappingTrainer('SCDB_ResNet18_C1.rvl', os.path.join(root_path, 'main_net_classes'), ['bn'], 20, 100, 
                             os.path.join(root_path, 'trained_models', 'mapping_models'),
                             device, os.path.join(root_path, 'data', 'scdb_custom', 'images'),
                             'C1_mapping_train.csv', 'C1_mapping_val.csv', 'name', 100, 6, None)

    trainer.train_simultaneous_model(['HexStar', 'EllStar', 'TEStarmarker', 'Hexagon', 
                                      'Star', 'Ellipse', 'Triangle', 'Starmarker'], 
                                     20, [160, 80, 40, 20], [20, 1])
    ```
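
   Note that, judging by the file loaded in the next step (`C1_20_[160, 80, 40, 20]_[20, 1].rvl`), the trained mapping model is saved under a name derived from the target concept and the parameters passed to ``train_simultaneous_model()``.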

5. Once the mapping network is trained, you can form logical and visual explanations. To do this, you must first load 
the trained network model via ``load_mapping_model()``.
    ```
    from revelionn.utils.model import load_mapping_model

    # cur_path is assumed to be the current working directory
    main_module, mapping_module, activation_extractor, transformation, img_size = load_mapping_model(
        os.path.join(root_path, 'trained_models', 'mapping_models', 'C1_20_[160, 80, 40, 20]_[20, 1].rvl'), 
        cur_path, os.path.join(root_path, 'main_net_classes'), device)
    ```
   
6. To form logical explanations using an ontology, you must first extract the concepts relevant to the target concept 
from the image, and then pass the extracted concepts and their probabilities to the reasoning module along with the 
ontology. This can be done as follows:
    ```
    import os

    from revelionn.utils.explanation import extract_concepts_from_img, explain_target_concept
    from ontologies.scdb_ontology import concepts_map
    from PIL import Image

    image_path = os.path.join(root_path, 'data', 'scdb_custom', 'images', '001236.png')
   
    image = Image.open(image_path)
    main_concepts, extracted_concepts, mapping_probabilities = extract_concepts_from_img(main_module,
                                                                                         mapping_module,
                                                                                         image,
                                                                                         transformation)
    print(f'\nThe image is classified as {main_concepts}.')
    print('\nThe following concepts were extracted from the image:')
    print(extracted_concepts)
    print('with the following probabilities:')
    print(f'{mapping_probabilities}\n')
       
    justifications = explain_target_concept(extracted_concepts, mapping_probabilities, concepts_map, 'C1',
                                            os.path.join(root_path, 'ontologies', 'SCDB.owl'), 
                                            os.path.join(root_path, 'temp'))
    print(justifications)
    ```

7. Visual explanations can be formed as follows:
    ```
    import matplotlib.pyplot as plt
    from revelionn.occlusion import perform_occlusion
   
    perform_occlusion(main_module, mapping_module, activation_extractor, transformation, img_size,
                     image_path, window_size=20, stride=5, threads=0)
    plt.show()
    ```

The execution of the listed steps is shown in [basic_example.ipynb](https://github.com/cais-lab/revelionn/blob/main/examples/basic_example.ipynb).

RevelioNN also supports a command-line interface, i.e. interaction through scripts. A detailed description of how to use each of the scripts can be found in the documentation.

## Installation

The simplest way to install RevelioNN is using ``pip``:

```bash
pip install revelionn
pip install git+https://github.com/lucadiliello/semantic-loss-pytorch.git
```
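
The second command installs [semantic-loss-pytorch](https://github.com/lucadiliello/semantic-loss-pytorch) directly from GitHub; it supports the semantic-loss training of simultaneous mapping networks described above.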

You can view a list of required dependencies in the [requirements.txt](https://github.com/cais-lab/revelionn/blob/main/requirements.txt) file. You can also install them as follows:

```bash
pip install -r requirements.txt
```

It is also worth noting that [Java SE 8](https://www.java.com/en/download/manual.jsp) must be installed to form logical explanations.

## Project Structure

The repository includes the following directories:

* Package `main_net_classes` contains various convolutional neural network architectures that can serve as examples for initializing your network in RevelioNN; 
* Package `ontologies` contains examples of ontology files in OWL format, as well as examples of the dictionary mapping dataset attributes to ontology concepts and examples of the class representing the ontology as a graph;
* Package `examples` includes notebooks with practical examples of RevelioNN use;
* All unit and integration tests are located in the `tests` directory;
* The sources of the documentation are in the `docs` directory.

## Documentation

A detailed description of RevelioNN is available on [Read the Docs](https://revelionn.readthedocs.io/en/latest/).

## Tests

To run tests, you can use:

```bash
pytest tests
```

## Publications

The library was used in the following publications:
* Agafonov A., Ponomarev A. An Experiment on Localization of Ontology Concepts in Deep Convolutional Neural Networks // *11th International Symposium on Information and Communication Technology (SoICT 2022)*, 82–87. DOI: [10.1145/3568562.3568602](http://doi.org/10.1145/3568562.3568602)
* Ponomarev A., Agafonov A. Ontology Concept Extraction Algorithm for Deep Neural Networks // *Proceedings of the 32nd Conference of Open Innovations Association FRUCT*, 221–226. DOI: [10.23919/FRUCT56874.2022.9953838](http://doi.org/10.23919/FRUCT56874.2022.9953838)
* Agafonov A., Ponomarev A. Localization of Ontology Concepts in Deep Convolutional Neural Networks // *2022 IEEE International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON)*, 160–165. DOI: [10.1109/SIBIRCON56155.2022.10016932](http://doi.org/10.1109/SIBIRCON56155.2022.10016932)

## Funding
The RevelioNN library was developed within the scope of project 22-11-00214, funded by the Russian Science Foundation (RSF).

## Acknowledgements
We thank the developers of [xaitk-saliency](https://github.com/XAITK/xaitk-saliency), [semantic-loss-pytorch](https://github.com/lucadiliello/semantic-loss-pytorch), 
[nxontology](https://github.com/related-sciences/nxontology) and [BUNDLE](https://ml.unife.it/bundle/), thanks to whom the development of RevelioNN became possible!

Special thanks to the creators of the [XTRAINS dataset](https://bitbucket.org/xtrains/dataset/src/master/) for providing the ontology and for inspiring the development of this library!

            
