leap-net

- Name: leap-net
- Version: 0.1.1
- Summary: An implementation in keras 3.0 (and tensorflow keras) of the LeapNet model
- Home page: https://github.com/bdonnot/leap_net
- Author: Benjamin DONNOT
- License: Mozilla Public License 2.0 (MPL 2.0)
- Requires Python: >=3.8
- Keywords: leap-net, guided-dropout, dropout, resnet
- Upload time: 2024-02-21 13:43:05
- Requirements: none recorded
# leap_net
This repository implements the algorithms and data generation process needed to reproduce the results published around the LEAP Net.

## What is the leap net

### Brief introduction
Suppose you have a "system" `S` that generates data `y` from input data `x`. Suppose also that the response `y` of this
system can be modulated depending on some known setpoint `τ`. 

In our experiments, `S` was a powergrid and we were interested in predicting `y`, the vector representing the flows
on each powerline of this grid. These flows are determined by `x`, the power injected at each "bus" of the grid (a "bus" is the
power system community's word for something close to "node"); these injections can be positive, typically when
a production unit injects power, or negative, when power is consumed. The vector `τ`
encodes variations of the topology of the powergrid, typically "is a powerline connected or disconnected" and
"is this powerline connected to this other powerline".

In summary, we suppose a generation process `y = S(x, τ)`. We also suppose that we have some dataset
`{(x_i, τ_i, y_i)}` that was generated using this model, with input data coming from a distribution `D = Dist(x, τ)`.
The LEAP net is a "novel" neural network architecture that is
able to predict some response `ŷ_i` from `x_i` and `τ_i`, with the following properties:

- it is fully trainable by stochastic gradient descent, like any neural network
- its implementation (given here in keras) is really simple
- on data `(x, τ)` drawn from the same distribution `D` as the one used for training, `ŷ` is a good approximation of `y`
- most importantly, under some circumstances, `ŷ` remains a good approximation even when `(x, τ)` is **NOT** drawn from the distribution used for training.

We call this last property "*super generalization*". It is somewhat related to transfer learning and *zero shot* /
*few shots* learning. We explored this super-generalization property with discrete modulations `τ`, in the case
where, for example, the neural network is trained while the system `S` has **zero** or **one**
disconnected powerline, but is still able to perform accurate predictions even when **two** powerlines are disconnected
at the same time.
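
To make this last point concrete, here is a minimal sketch (with hypothetical sizes, not taken from the paper's experiments) of what such training and test distributions over a binary `τ` could look like, each entry of `τ` flagging whether a powerline is disconnected:

```python
import numpy as np

n_lines = 20      # hypothetical number of powerlines
n_samples = 1000  # hypothetical dataset size
rng = np.random.default_rng(0)

def sample_tau(n_disconnected):
    """Sample a binary tau with exactly `n_disconnected` entries set to 1."""
    tau = np.zeros(n_lines, dtype=np.float32)
    tau[rng.choice(n_lines, size=n_disconnected, replace=False)] = 1.0
    return tau

# training distribution: zero or one disconnected powerline
tau_train = np.stack([sample_tau(rng.integers(0, 2)) for _ in range(n_samples)])
# "super generalization" test distribution: two disconnected powerlines at once
tau_test = np.stack([sample_tau(2) for _ in range(n_samples)])
```

A LEAP Net trained with `tau_train` would then be evaluated on data generated with `tau_test`.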

Internally, we also made some experiments on load forecasting, where the input `x` included, for example, the past realized
loads and the weather forecast. The modulating
variable `τ` included properties of the day to predict, *e.g.* is it a Monday or a Sunday? Is it during a bank holiday?
Is there a lot of road traffic (indicating a possible start of holidays) that day? etc. On another topic, we also studied
this model in the context of generative models (cVAE or GANs) where `x` was noise, `y` MNIST images, and the modulator
`τ` included the color or rotation of the generated digits.

The LEAP Net also gave pretty good results on all these tasks, though we did not have time to polish them for publication. This makes
us believe that the LEAP Net model is suited to contexts other than powergrid-related applications, and usable
with modulators `τ` that are either discrete or continuous.

### References
To know more about the LEAP Net, you can have a look at the
[LEAP nets for power grid perturbations](https://arxiv.org/pdf/1908.08314.pdf) paper, published at the
ESANN conference and available on arXiv.

It has been my main focus during my PhD titled
[Deep learning methods for predicting flows in power grids : novel architectures and algorithms
](https://tel.archives-ouvertes.fr/tel-02045873/document)
also available online.

More recently, some analytical proofs and further developments were published in the paper
[LEAP Nets for System Identification and Application to Power Systems
](https://www.sciencedirect.com/science/article/abs/pii/S0925231220305051).

## Use the leap net

### Reproducing the results of the Neurocomputing paper
The [neurocomputing_paper](./neurocomputing_paper) directory contains the necessary material to reproduce the figures
presented in the paper. **NB** As of writing, a commercial solver was used to compute the powerflows. We are trying to
port the code to use the [Grid2Op](https://github.com/rte-france/Grid2Op) framework instead.

### Use the LEAP Net

#### Setting up
##### Quick and dirty way
Of course, this way of doing things is absolutely not recommended: if you go down this path, you need to make sure the license of your
own code is compatible with the license of this specific package, etc. You have more information on this topic in the
[LICENSE](LICENSE) file.

The simplest way to use the LEAP Net, and especially the `Ltau` module, is to define this class in your project:
```python
# Copyright (c) 2019-2020, RTE (https://www.rte-france.com)
# See AUTHORS.txt
# This Source Code Form is subject to the terms of the Mozilla Public License, version 2.0.
# If a copy of the Mozilla Public License, version 2.0 was not distributed with this file,
# you can obtain one at http://mozilla.org/MPL/2.0/.
# SPDX-License-Identifier: MPL-2.0
# This file is part of leap_net, leap_net a keras implementation of the LEAP Net model.

from keras.layers import (Layer, Dense, add as keras_add, multiply as keras_multiply)


class Ltau(Layer):
    """
    This layer implements the Ltau layer.

    This kind of leap net layer computes, from their input `x`: `d.(e.x * tau)` where `.` denotes the
    matrix multiplication and `*` the elementwise multiplication.

    """
    def __init__(self, initializer='glorot_uniform', use_bias=True, trainable=True, name=None, **kwargs):
        super(Ltau, self).__init__(trainable=trainable, name=name, **kwargs)
        self.initializer = initializer
        self.use_bias = use_bias
        self.e = None
        self.d = None

    def build(self, input_shape):
        is_x, is_tau = input_shape
        nm_e = None
        nm_d = None
        if self.name is not None:
            nm_e = '{}_e'.format(self.name)
            nm_d = '{}_d'.format(self.name)
        self.e = Dense(is_tau[-1],
                       kernel_initializer=self.initializer,
                       use_bias=self.use_bias,
                       trainable=self.trainable,
                       name=nm_e)
        self.d = Dense(is_x[-1],
                       kernel_initializer=self.initializer,
                       use_bias=False,
                       trainable=self.trainable,
                       name=nm_d)

    def call(self, inputs):
        x, tau = inputs
        tmp = self.e(x)
        tmp = keras_multiply([tau, tmp])  # element wise multiplication
        tmp = self.d(tmp)
        res = keras_add([x, tmp])
        return res

```

This is the complete code of the `Ltau` module, which you can use like any keras layer.
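
As a quick sanity check (a sketch with hypothetical dimensions, not taken from the original repository), you can call it on random arrays and verify that the output keeps the shape of `x`, since `Ltau` is a residual block:

```python
import numpy as np

# hypothetical dimensions, purely for this check
dim_x, dim_tau = 10, 5
x_batch = np.random.rand(32, dim_x).astype("float32")
tau_batch = np.random.rand(32, dim_tau).astype("float32")

layer = Ltau(name="ltau_check")
out = layer((x_batch, tau_batch))
print(out.shape)  # (32, 10): same shape as x, because Ltau computes x + d.(e.x * tau)
```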


##### Clean installation (from source)

We also provide a simple implementation of the LEAP Net that can be used like any `keras` layer. First you have to
download this github repository:
```bash
git clone https://github.com/BDonnot/leap_net.git
cd leap_net
```
Then you need to install it (we strongly encourage you to install it in a virtual environment):
```bash
pip install -U -e .
```
Then, **as with all python packages installed from source**, you should change the current working directory before using this
module:
```bash
cd ..
rm -rf leap_net  # optionally, delete the repository (only if you did not install in editable mode with `-e`)
```
To ease the installation process, we might provide a version of this package on pypi in the future,
but we haven't done that at the moment. If you would like this feature, open an issue on github.

#### LeapNet usage
Once installed, this package provides a keras-compatible implementation of the `Ltau` block defined in the cited papers. Suppose you
have at your disposal:
- a `X` matrix of dimension (nb_row, dim_x)
- a `T` matrix of dimension (nb_row, dim_tau)
- a `Y` matrix of dimension (nb_row, dim_x)
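
If you just want to run the snippet below end to end, here is a minimal sketch generating random placeholder data with those shapes (purely illustrative, not a meaningful dataset):

```python
import numpy as np

# hypothetical sizes, purely for illustration
nb_row, dim_x, dim_tau = 1024, 10, 5

X = np.random.rand(nb_row, dim_x).astype("float32")
T = (np.random.rand(nb_row, dim_tau) < 0.1).astype("float32")  # sparse binary tau
Y = np.random.rand(nb_row, dim_x).astype("float32")
```

With such data at hand, the simplest LEAP Net model only uses the `Ltau` block: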

```python
import keras
from keras.layers import Input
from keras.models import Model
from leap_net import Ltau  # this import might change if you use the "quick and dirty way".

# create the keras model
x = Input(shape=(dim_x,), name="x")
tau = Input(shape=(dim_tau,), name="tau")
res_Ltau = Ltau()((x, tau))
model = Model(inputs=[x, tau], outputs=[res_Ltau])

# "compile" the model with a given optimizer
adam_ = keras.optimizers.Adam(learning_rate=1e-3)
model.compile(optimizer=adam_, loss='mse')
# train it
model.fit(x=[X, T], y=[Y], epochs=200, batch_size=32, verbose=False)

# make prediction out of it
y_hat = model.predict([X, T])
```
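
If you build a second dataset `(X_test, T_test, Y_test)` the same way, but drawn from a different distribution over `τ` (for example with more powerlines disconnected), you can measure the super-generalization property discussed above. A minimal sketch, assuming such hypothetical test arrays exist:

```python
# mse on data drawn from a different distribution over tau than the training one
mse_super_gen = model.evaluate(x=[X_test, T_test], y=[Y_test], verbose=False)
```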

Of course, it is more than recommended to first encode your input data `X` with an encoder, denoted by `E` in the paper,
and then decode the result with a "decoder", denoted by `D` in the papers. An example of such a model is:

```python
import keras
from keras.layers import Input, Activation, Dense
from keras.models import Model
from leap_net import Ltau  # this import might change if you use the "quick and dirty way".

# layer_size and dim_y are assumed to be defined by the user

# create the keras model
x = Input(shape=(dim_x,), name="x")
tau = Input(shape=(dim_tau,), name="tau")

## create E, for example with 2 layers of size "layer_size"
layer1 = Dense(layer_size)(x)
layer1 = Activation("relu")(layer1)

layer2 = Dense(layer_size)(layer1)
layer2 = Activation("relu")(layer2)
# layer2 is the output of E.

## this is Ltau
res_Ltau = Ltau()((layer2, tau))

## now create D, with one hidden layer for example
layer4 = Dense(layer_size)(res_Ltau)
layer4 = Activation("relu")(layer4)

# and add the standard linear output layer (for a regression)
output = Dense(dim_y)(layer4)

model = Model(inputs=[x, tau], outputs=[output])

# "compile" the model with a given optimizer
adam_ = keras.optimizers.Adam(learning_rate=1e-3)
model.compile(optimizer=adam_, loss='mse')
# train it
model.fit(x=[X, T], y=[Y], epochs=200, batch_size=32, verbose=False)

# make prediction out of it
y_hat = model.predict([X, T])
```

**NB** We think the variables we use above are self-explanatory, and we let the user of this work fine tune the learning
rate, the optimizer, the number of epochs, and even the batch size to suit their purpose.

**NB** To use this model easily, we suppose you have already formatted your dataset to have the shape `{(x_i, τ_i, y_i)}`, and
in particular that you have a pre-defined encoding of your modulator `τ` in the form of a vector. The performance of
the LEAP Net can vary depending on the encoding you choose for `τ`. More information will be provided in the near
future, when we release a port of the code we used to get our results for the Neurocomputing paper. We remind you
that this port will not be strictly equivalent to the original implementation of the paper, which uses a
proprietary powerflow: this code will use the open source [Grid2Op](https://github.com/rte-france/Grid2Op) framework,
which was not available when the paper was first submitted.
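
As an illustration of the kind of encoding choice involved (a hypothetical sketch, not the encoding used in the papers), a calendar modulator like the one of the load forecasting experiments could be one-hot encoded as follows:

```python
import numpy as np

def encode_day(day_of_week, is_bank_holiday):
    """Hypothetical tau encoding: one-hot day of the week plus a bank-holiday flag."""
    tau = np.zeros(8, dtype=np.float32)
    tau[day_of_week] = 1.0            # day_of_week in 0..6 (0 = Monday)
    tau[7] = float(is_bank_holiday)
    return tau

tau_example = encode_day(day_of_week=0, is_bank_holiday=False)  # a regular Monday
```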

## Cite this work
If you use this work, please cite:
```
@article{DONON2020,
title = "LEAP nets for system identification and application to power systems",
journal = "Neurocomputing",
year = "2020",
issn = "0925-2312",
doi = "https://doi.org/10.1016/j.neucom.2019.12.135",
url = "http://www.sciencedirect.com/science/article/pii/S0925231220305051",
author = "B. Donon and B. Donnot and I. Guyon and Z. Liu and A. Marot and P. Panciatici and M. Schoenauer",
keywords = "System identification, Latent space, Residual networks, LEAP Net, Power systems",
abstract = "Using neural network modeling, we address the problem of system identification for continuous multivariate systems, whose structures vary around an operating point. Structural changes in the system are of combinatorial nature, and some of them may be very rare; they may be actionable for the purpose of controlling the system. Although our ultimate goal is both system identification and control, we only address the problem of identification in this paper. We propose and study a novel neural network architecture called LEAP net, for Latent Encoding of Atypical Perturbation. Our method maps system structure changes to neural net structure changes, using structural actionable variables. We demonstrate empirically that LEAP nets can be trained with a natural observational distribution, very concentrated around a “reference” operating point of the system, and yet generalize to rare (or unseen) structural changes. We validate the generalization properties of LEAP nets theoretically in particular cases. We apply our technique to power transmission grids, in which high voltage lines are disconnected and re-connected with one-another from time to time, either accidentally or willfully. We discuss extensions of our approach to actionable variables, which are continuous (instead of discrete, in the case of our application) and make connections between our problem setting, transfer learning, causal inference, and reinforcement learning."
}
```

            
