verifiNN 0.0.0.dev10

- Summary: A package for robustness verification of neural networks using optimization methods.
- Upload time: 2023-06-03 19:32:27
- Requires Python: >=3.7
- Keywords: neural networks, verification, convex optimization, semidefinite programming
- Homepage: https://github.com/ayusbhar2/verifiNN
# verifiNN

*Robustness* is a desirable property in a neural network. Informally, robustness can be described as ‘resilience to perturbations in the input’. Said differently, a neural network is robust if small changes to the input produce small or no changes to the output. In particular, if the network is a classifier, robustness means that inputs close to each other should be assigned the same class by the network.

This project implements convex optimization based methods for robustness verification of neural networks. Given a trained neural network and an input point, we use an optimization based approach to determine whether the network is robust at that point. Currently, only a Linear Programming based approach is supported, for feed-forward networks with ReLU or identity activations. Future work will include a Semidefinite Programming based approach for fully connected as well as convolutional networks.

For a detailed treatment of the mathematical background, check out this [blog post](https://ayusbhar2.github.io/verifying-neural-nertwork-robustness-using-linear-programming/). Below is a small example of how to use `verifiNN`.

## Example

```shell
pip install verifiNN
```

```{python}
import numpy as np

from verifiNN.models.network import Network
from verifiNN.verifier import LPVerifier
```

Here we define a toy network for our example. In reality, this network would be given to us.

```{python}
# Defining a network
W1 = np.array([[1, 0],
              [0, 1]])
b1 = np.array([1, 1])
W2 = np.array([[0, 1],
              [1, 0]])
b2 = np.array([2, 2])

weights = [W1, W2]
biases = [b1, b2]
network = Network(weights, biases, activation='ReLU', labeler='argmax')
```
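To see what this network computes, here is a hand-rolled forward pass in plain NumPy. This is a sketch, assuming (consistent with the `Network` arguments above) ReLU on the hidden layer, an affine output layer, and an argmax labeler; the `forward` helper is ours, not part of `verifiNN`:

```python
import numpy as np

def forward(x, weights, biases):
    """Toy forward pass: ReLU on hidden layers, affine output, argmax label."""
    z = x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = np.maximum(W @ z + b, 0)   # ReLU activation
    out = weights[-1] @ z + biases[-1]  # affine output layer
    return int(np.argmax(out))          # argmax labeler

W1 = np.array([[1, 0], [0, 1]])
b1 = np.array([1, 1])
W2 = np.array([[0, 1], [1, 0]])
b2 = np.array([2, 2])

# Hidden layer: ReLU([1, 2] + [1, 1]) = [2, 3]; output: [3, 2] + [2, 2] = [5, 4]
label = forward(np.array([1, 2]), [W1, W2], [b1, b2])  # class 0
```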

Next, we note the class label that the network assigns to a reference input `x_0`.

```{python}
x_0 = np.array([1, 2])
l_0 = network.classify(x_0)  # class 0
assert l_0 == 0
```

Then, we compute the *pointwise robustness*, i.e. the distance to the nearest adversarial example within an $\epsilon$-ball around the reference point.
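In symbols, the pointwise robustness of the network at `x_0` is the optimal value of the following problem (written here with an $\ell_\infty$ ball, which is consistent with the distances in this example):

$$
\rho(x_0) \;=\; \min_{x}\; \|x - x_0\|_\infty
\quad \text{s.t.} \quad
\operatorname{classify}(x) \neq \operatorname{classify}(x_0),
\qquad \|x - x_0\|_\infty \leq \epsilon.
$$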

```{python}
epsilon = 1.5

vf = LPVerifier()
result = vf.compute_pointwise_robustness(network, x_0, epsilon)
assert result['verification_status'] == 'verified'
assert result['robustness_status'] == 'not_robust'
```

`verifiNN` was able to verify that the above network is NOT robust at `x_0`. This is because an adversarial example was found within the $\epsilon$-ball around `x_0` (as shown below).

```{python}
rho = np.round(result['pointwise_robustness'], decimals=5)
assert rho == 0.5  # distance to the nearest adversarial example

x_hat = result['adversarial_example']
assert np.round(x_hat[0], decimals=5) == 1.5
assert np.round(x_hat[1], decimals=5) == 1.5

assert network.classify(x_hat) == 1  # class 1
```

The adversarial example `(1.5, 1.5)` lies within (in fact, on the boundary of) the $\epsilon$-ball around `x_0`. As expected, the network assigns `x_hat` the different class label `1`.

**Caution**: `verifiNN` currently suffers from a limitation: if an adversarial example is found, then clearly the network is not robust. However, the converse is not true. In other words, if no adversarial example was found (i.e. the underlying optimization problem was infeasible), we cannot conclude that the network is robust. This limitation comes from the affine approximation of the ReLU function in the current Linear Programming based approach. Alternative approaches (to be implemented in the future) do not suffer from this limitation.
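For intuition on why an affine approximation arises at all: on any region where the ReLU activation pattern is fixed, the network is exactly affine. For the toy network above, whenever both hidden units are active the network reduces to

$$
F(x) \;=\; W_2\,(W_1 x + b_1) + b_2 \;=\; \begin{pmatrix} x_2 + 3 \\ x_1 + 3 \end{pmatrix}.
$$

(One can check that $F(x_0) = (5, 4)$, matching class `0`.) An LP built from such an affine description is only valid on the region where that activation pattern holds, which is one way to see why infeasibility of the LP cannot by itself certify robustness. The exact relaxation used by `verifiNN` is described in the blog post linked above.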


## References:
- [Verifying Neural Network Robustness with Linear Programming](https://ayusbhar2.github.io/verifying-neural-nertwork-robustness-using-linear-programming/)

            
