deepstruct

Name: deepstruct
Version: 0.10.0
Home page: https://github.com/innvariant/deepstruct
Upload time: 2023-01-18 11:46:06
Author: Julian Stier
Requires Python: >=3.8,<4.0
License: MIT
Keywords: neural network, sparsity, machine learning, structure, graph, training
Requirements: No requirements were recorded.
# deepstruct - neural network structure tool [![PyPI version](https://badge.fury.io/py/deepstruct.svg)](https://badge.fury.io/py/deepstruct) ![Tests](https://github.com/innvariant/deepstruct/workflows/Tests/badge.svg) [![Documentation Status](https://readthedocs.org/projects/deepstruct/badge/?version=latest)](https://deepstruct.readthedocs.io/en/latest/?badge=latest) [![Downloads](https://pepy.tech/badge/deepstruct)](https://pepy.tech/project/deepstruct)  [![Python 3.8](https://img.shields.io/badge/python-3.8-blue.svg)](https://www.python.org/downloads/release/python-380/)
Create deep neural networks based on very different kinds of graphs or use *deepstruct* to extract the structure of your deep neural network.

Deepstruct combines tools for fusing machine learning and graph theory.
We are fascinated with the interplay of end-to-end learnable, locally restricted models and their graph theoretical properties.
We are searching for evidence of the structural prior hypothesis.
Interested in pruning, neural architecture search or learning theory in general?

See [examples](#examples) below or [read the docs](https://deepstruct.readthedocs.io).

We're glad if you reference our work:
```bibtex
@article{stier2022deepstruct,
  title={deepstruct -- linking deep learning and graph theory},
  author={Stier, Julian and Granitzer, Michael},
  journal={Software Impacts},
  volume={11},
  year={2022},
  publisher={Elsevier}
}
```

## Installation
- With **pip** from PyPI: ``pip install deepstruct``
- With **conda** in your *environment.yml* (recommended for reproducible experiments):
```yaml
name: exp01
channels:
- defaults
dependencies:
- pip>=20
- pip:
    - deepstruct
```
- With **poetry** (recommended for *projects*) using PyPI: ``poetry add deepstruct``
- From public GitHub: ``pip install --upgrade git+ssh://git@github.com/innvariant/deepstruct.git``

## Quick usage: multi-layered feed-forward neural network on MNIST
The simplest implementation is one which provides multiple layers with binary masks for each weight matrix.
It doesn't consider any skip-layer connections.
Each layer is then connected to only the following one.
```python
import deepstruct.sparse

mnist_model = deepstruct.sparse.MaskedDeepFFN((1, 28, 28), 10, [100]*10, use_layer_norm=True)
```
This is a ready-to-use pytorch module with ten layers of one hundred neurons each which applies layer normalization before each activation.
Training it on any dataset will work out of the box like every other pytorch module.
Have a look at [pytorch ignite](https://pytorch.org/ignite/) or [pytorch lightning](https://github.com/Lightning-AI/lightning/) for designing your training loops.
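Training the model above then looks like training any other pytorch module; a minimal sketch in plain pytorch, where the torchvision MNIST loading, the normalization constants and the Adam optimizer are illustrative choices rather than deepstruct requirements:
```python
import torch
import torchvision

# Standard MNIST loading; the normalization values are the commonly used dataset statistics
transform = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize((0.1307,), (0.3081,)),
])
train_data = torchvision.datasets.MNIST("data/", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)

optimizer = torch.optim.Adam(mnist_model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

mnist_model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    # Assumes the module accepts batches of its declared input shape (1, 28, 28)
    logits = mnist_model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```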
You can set masks on the model via
```python
import deepstruct.sparse
for layer in deepstruct.sparse.maskable_layers(mnist_model):
    layer.mask[:, :] = True
```
and if you disable some of these mask elements, you have defined your first sparse model.
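For instance, a sketch of unstructured random pruning on the model above, assuming `layer.mask` is a boolean tensor of the same shape as the layer's weight matrix (as the snippet above suggests); the 50% ratio is arbitrary:
```python
import torch
import deepstruct.sparse

# Randomly disable roughly half of the connections in every maskable layer
for layer in deepstruct.sparse.maskable_layers(mnist_model):
    keep = torch.rand(layer.mask.shape) > 0.5  # True = connection stays active
    layer.mask[:, :] = keep
```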


## Examples
Specify structures by prior design, e.g. random social networks transformed into directed acyclic graphs:
```python
import networkx as nx
import deepstruct.graph
import deepstruct.sparse

# Use networkx to generate a random graph based on the Newman-Watts-Strogatz model
random_graph = nx.newman_watts_strogatz_graph(100, 4, 0.5)
structure = deepstruct.graph.CachedLayeredGraph()
structure.add_edges_from(random_graph.edges)
structure.add_nodes_from(random_graph.nodes)

# Build a neural network classifier with 784 input and 10 output neurons and the given structure
model = deepstruct.sparse.MaskedDeepDAN(784, 10, structure)
model.apply_mask()  # Apply the mask on the weights (hard pruning; cannot be undone)
model.recompute_mask()  # Use weight magnitude to recompute the mask from the network
pruned_structure = model.generate_structure()  # Get the structure -- a networkx graph -- based on the current mask

new_model = deepstruct.sparse.MaskedDeepDAN(784, 10, pruned_structure)
```
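As a quick sanity check of the prune-and-regrow cycle above: both structures are plain networkx graphs, so they can be compared directly with the networkx API (a sketch continuing the snippet above):
```python
import networkx as nx

# The recomputed structure typically contains fewer edges than the prior one
print(structure.number_of_edges(), pruned_structure.number_of_edges())
print(nx.is_directed_acyclic_graph(pruned_structure))  # layered structures remain DAGs
```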

Define a feed-forward neural network (with no skip-layer connections) and obtain its structure as a graph:
```python
import deepstruct.sparse

model = deepstruct.sparse.MaskedDeepFFN(784, 10, [100, 100])
# .. train model
model.generate_structure()  # a networkx graph
```


### Recurrent Neural Networks with sparsity
```python
import torch
import deepstruct.recurrent
import numpy as np

# A sequence of size 15 with one-dimensional elements which could e.g. be labelled
# BatchSize x [(1,), (2,), (3,), (4,), (5,), (0,), (0,), (0,)] --> [ label1, label2, ..]
batch_size = 100
seq_size = 15
input_size = 1
model = deepstruct.recurrent.MaskedDeepRNN(
    input_size,
    hidden_layers=[100, 100, 1],
    batch_first=True,
    build_recurrent_layer=deepstruct.recurrent.MaskedLSTMLayer,
)
random_input = torch.tensor(
    np.random.random((batch_size, seq_size, input_size)),
    dtype=torch.float32,
    requires_grad=False,
)
model.forward(random_input)
```
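Continuing the snippet above, a single hedged optimisation step; the targets are generated to match whatever shape the model returns, and the MSE objective and SGD optimizer are purely illustrative:
```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

output = model(random_input)             # forward pass through the masked recurrent stack
random_target = torch.rand_like(output)  # illustrative targets with the same shape as the output
loss = torch.nn.functional.mse_loss(output, random_target)
loss.backward()
optimizer.step()
```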



## Sparse Neural Network implementations
![Sparse Network Connectivity on zeroth order with a masked deep feed-forward neural network](docs/masked-deep-ffn.png)
![Sparse Network Connectivity on zeroth order with a masked deep neural network with skip-layer connections](docs/masked-deep-dan.png)
![Sparse Network Connectivity on second order with a masked deep cell-based neural network](docs/masked-deep-cell-dan.png)

**What's contained in deepstruct?**
- ready-to-use models in pytorch for learning instances on common (supervised/unsupervised) datasets from which a structural analysis is possible
- model-to-graph transformations for studying models from a graph-theoretic perspective

**Models:**
- *deepstruct.sparse.MaskableModule*: pytorch modules that contain explicit masks to enforce (mostly zero-ordered) structure
- *deepstruct.sparse.MaskedLinearLayer*: pytorch module with a simple linear layer extended with masking capability.
Suitable if you want linear layers on which to enforce masks obtained through pruning, regularization or other search techniques (see the sketch after this list).
- *deepstruct.sparse.MaskedDeepFFN*: feed-forward neural network with any width and depth and easy-to-use masks.
Suitable for simple and canonical pruning research on zero-ordered structure
- *deepstruct.sparse.MaskedDeepDAN*: feed-forward neural network with skip-layer connections based on any directed acyclic network.
Suitable for arbitrary zero-order structures; the most flexible option on that level, but also computationally expensive.
- *deepstruct.sparse.DeepCellDAN*: complex module based on a directed acyclic network and custom cells on third-order structures.
Suitable for large-scale neural architecture search
- *deepstruct.recurrent.MaskedDeepRNN*: multi-layered network with recurrent layers which can be masked
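A sketch of obtaining such masks through magnitude-based pruning, built only on the `maskable_layers` iterator and `mask` attribute shown earlier; the threshold value and the assumption that each maskable layer exposes the usual `weight` tensor of a linear layer are illustrative:
```python
import deepstruct.sparse

model = deepstruct.sparse.MaskedDeepFFN(784, 10, [100, 100])

# Keep only connections whose weight magnitude exceeds an (arbitrary) threshold
threshold = 0.01
for layer in deepstruct.sparse.maskable_layers(model):
    layer.mask[:, :] = layer.weight.abs() > threshold  # assumes layer.weight exists as in nn.Linear
```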

## What are the orders of structure?
- zero-th order: weight-level
- first order: kernel-level (filter, channel, blocks, cells)
- second order: layers

Various empirical machine learning studies provide evidence that the way artificial neural networks are structurally connected has a (possibly minor) influence on performance metrics such as accuracy, and probably even on more complex properties such as adversarial robustness.
What do we mean by "structure"?
We define structure over graph theoretic properties given a computational graph with very restricted non-linearities.
This includes all major neural network definitions and lets us study them from the perspective of their *representation* and their *structure*.
In a probabilistic sense, one can interpret structure as a prior on the model, and although single-layered wide networks are already universal function approximators, we follow the hypothesis that certain structural priors lead to models with better properties.

Before considering implementations, one should have a look at possible representations of Sparse Neural Networks.
In the case of feed-forward neural networks (FFNs), the network can be represented as a list of weight matrices.
Each weight matrix represents the connections from one layer to the next.
Having a network without some connections then means setting entries in those matrices to zero.
Removing a particular neuron means setting all entries representing its incoming connections to zero.

However, sparsity can be employed on various levels of a general artificial neural network.
Zero-order sparsity removes single weights (representing connections) from the network.
First-order sparsity removes groups of weights along one dimension of a matrix.
Sparsity can be employed on connection-, weight-, block-, channel-, cell-level and so on.
Implementations respecting these different levels of sparsification can differ drastically.
Thus there are various ways to implement Sparse Neural Networks.
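As a plain-tensor illustration of these levels (independent of any deepstruct class): zero-order sparsity zeroes individual entries of a weight matrix, while first-order sparsity zeroes a whole slice along one of its dimensions, e.g. all incoming connections of a single neuron:
```python
import torch

# Connections from a 3-neuron layer to a 5-neuron layer (out_features x in_features)
weights = torch.rand(5, 3)

# Zero-order sparsity: remove one single connection (one matrix entry)
weights[2, 1] = 0.0

# First-order sparsity: remove a group of weights along one dimension,
# here all incoming connections of output neuron 4 (i.e. remove that neuron)
weights[4, :] = 0.0
```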


# Artificial PyTorch Datasets
![A custom artificial landscape Stier2020B for testing function approximation](docs/artificial-landscape-approximation.png)
We provide some simple utilities for artificial function approximation.
Like polynomials, neural networks are universal function approximators on compact subsets of Euclidean space.
To test, you can easily define a function of any finite dimension, e.g. $f: \mathbb{R}^2\rightarrow\mathbb{R},\; (x,y)\mapsto 20 + x - 1.8(y-5) + 3y\sin(x + 2y) + (x/4)^4 + (y/4)^4$:
```python
import numpy as np
import torch.utils.data
from deepstruct.dataset import FuncDataset
from deepstruct.sparse import MaskedDeepFFN

# Our artificial landscape: f: R^2 -> R
# Have a look at https://github.com/innvariant/eddy for some visual examples
# You could easily define arbitrary functions from R^a to R^b
stier2020B1d = lambda x, y: 20 + x - 1.8*(y-5) + 3 * np.sin(x + 2 * y) * y + (x / 4) ** 4 + (y / 4) ** 4
ds_input_shape = (2,)  # specify the number of input dimensions (usually a one-sized tensor if no further structures are used)
# Explicitly define the target function for the dataset which returns a numpy array of our above function
# By above definition x is two-dimensional, so you have access to x[0] and x[1]
fn_target = lambda x: np.array([stier2020B1d(x[0], x[1])])
# Define a sampling strategy for the dataset, e.g. uniform sampling the space
fn_sampler = lambda: np.random.uniform(-2, 2, size=ds_input_shape)
# Define the dataset given the target function and your sampling strategy
# This simply wraps your function into a pytorch dataset and provides you with discrete observations
# Your model will later only know those observations to come up with an approximate solution of your target
ds_train = FuncDataset(fn_target, shape_input=ds_input_shape, size=500)

# Calculate the output shape given our target function .. usually simply a (1,)-dimensional output
ds_output_shape = fn_target(fn_sampler()).shape

# As usual in pytorch, you can simply wrap your dataset with a loading strategy ..
# This ensures e.g. that you do not iterate over your observations in the exact same manner
# In case you sample first 100 examples of a binary classification dataset with label 1 and then another
# 100 with label 2 it might impact your training .. so this ensures you have an e.g. random sampling strategy over the dataset
batch_size = 100
train_sampler = torch.utils.data.SubsetRandomSampler(np.arange(len(ds_train), dtype=np.int64))
train_loader = torch.utils.data.DataLoader(ds_train, batch_size=batch_size, sampler=train_sampler, num_workers=2)

# Define a model for which we can later extract its structure or impose sparsity constraints
model = MaskedDeepFFN(2, 1, [50, 20])

# Iterate over your training set
for feat, target in train_loader:
    print(feat, target)

    # feed it into a model to learn
    prediction = model.forward(feat)

    # compute a loss based on the expected target and the model's prediction
    # ..
```
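The loop above stops where a loss and an optimizer step would go; a minimal sketch of that missing step, assuming a mean-squared-error regression objective and an Adam optimizer (both illustrative choices), with float casts since the dataset yields numpy-backed observations:
```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

for feat, target in train_loader:
    optimizer.zero_grad()
    prediction = model(feat.float())              # MaskedDeepFFN(2, 1, [50, 20]) maps R^2 -> R
    loss = criterion(prediction, target.float())  # compare prediction with the sampled target value
    loss.backward()                               # backpropagate through the masked layers
    optimizer.step()
```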


            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/innvariant/deepstruct",
    "name": "deepstruct",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.8,<4.0",
    "maintainer_email": "",
    "keywords": "neural network,sparsity,machine learning,structure,graph,training",
    "author": "Julian Stier",
    "author_email": "julian.stier@uni-passau.de",
    "download_url": "https://files.pythonhosted.org/packages/3e/1b/fa4256df95f8bf516d0b3e04d69704ab6df0b368f086d5321bdc49cdfaf6/deepstruct-0.10.0.tar.gz",
    "platform": null,
    "description": "# deepstruct - neural network structure tool [![PyPI version](https://badge.fury.io/py/deepstruct.svg)](https://badge.fury.io/py/deepstruct) ![Tests](https://github.com/innvariant/deepstruct/workflows/Tests/badge.svg) [![Documentation Status](https://readthedocs.org/projects/deepstruct/badge/?version=latest)](https://deepstruct.readthedocs.io/en/latest/?badge=latest) [![Downloads](https://pepy.tech/badge/deepstruct)](https://pepy.tech/project/deepstruct)  [![Python 3.8](https://img.shields.io/badge/python-3.8-blue.svg)](https://www.python.org/downloads/release/python-380/)\nCreate deep neural networks based on very different kinds of graphs or use *deepstruct* to extract the structure of your deep neural network.\n\nDeepstruct combines tools for fusing machine learning and graph theory.\nWe are fascinated with the interplay of end-to-end learnable, locally restricted models and their graph theoretical properties.\nSearching for evidence of the structural prior hypothesis.\nInterested in pruning, neural architecture search or learning theory in general?\n\nSee [examples](#examples) below or [read the docs](https://deepstruct.readthedocs.io).\n\nWe're glad if you reference our work\n```bibtex\n@article{stier2022deepstruct,\n  title={deepstruct -- linking deep learning and graph theory},\n  author={Stier, Julian and Granitzer, Michael},\n  journal={Software Impacts},\n  volume={11},\n  year={2022},\n  publisher={Elsevier}\n}\n```\n\n## Installation\n- With **pip** from PyPi: ``pip install deepstruct``\n- With **conda** in your *environment.yml* (recommended for reproducible experiments):\n```yaml\nname: exp01\nchannels:\n- defaults\ndependencies:\n- pip>=20\n- pip:\n    - deepstruct\n```\n- With **poetry** (recommended for *projects*) using PyPi: ``poetry add deepstruct``\n- From public GitHub: ``pip install --upgrade git+ssh://git@github.com:innvariant/deepstruct.git``\n\n## Quick usage: multi-layered feed-forward neural network on MNIST\nThe simplest implementation is one which provides multiple layers with binary masks for each weight matrix.\nIt doesn't consider any skip-layer connections.\nEach layer is then connected to only the following one.\n```python\nimport deepstruct.sparse\n\nmnist_model = deepstruct.sparse.MaskedDeepFFN((1, 28, 28), 10, [100]*10, use_layer_norm=True)\n```\nThis is a ready-to-use pytorch module which has ten layers of each one hundred neurons and applies layer normalization before each activation.\nTraining it on any dataset will work out of the box like every other pytorch module.\nHave a look on [pytorch ignite](https://pytorch.org/ignite/) or [pytorch lightning](https://github.com/Lightning-AI/lightning/) for designing your training loops.\nYou can set masks on the model via\n```python\nimport deepstruct.sparse\nfor layer in deepstruct.sparse.maskable_layers(mnist_model):\n    layer.mask[:, :] = True\n```\nand if you disable some of these mask elements you have defined your first sparse model.\n\n\n## Examples\nSpecify structures by prior design, e.g. 
random social networks transformed into directed acyclic graphs:\n```python\nimport networkx as nx\nimport deepstruct.sparse\n\n# Use networkx to generate a random graph based on the Watts-Strogatz model\nrandom_graph = nx.newman_watts_strogatz_graph(100, 4, 0.5)\nstructure = deepstruct.graph.CachedLayeredGraph()\nstructure.add_edges_from(random_graph.edges)\nstructure.add_nodes_from(random_graph.nodes)\n\n# Build a neural network classifier with 784 input and 10 output neurons and the given structure\nmodel = deepstruct.sparse.MaskedDeepDAN(784, 10, structure)\nmodel.apply_mask()  # Apply the mask on the weights (hard, not undoable)\nmodel.recompute_mask()  # Use weight magnitude to recompute the mask from the network\npruned_structure = model.generate_structure()  # Get the structure -- a networkx graph -- based on the current mask\n\nnew_model = deepstruct.sparse.MaskedDeepDAN(784, 10, pruned_structure)\n```\n\nDefine a feed-forward neural network (with no skip-layer connections) and obtain its structure as a graph:\n```python\nimport deepstruct.sparse\n\nmodel = deepstruct.sparse.MaskedDeepFFN(784, 10, [100, 100])\n# .. train model\nmodel.generate_structure()  # a networkx graph\n```\n\n\n### Recurrent Neural Networks with sparsity\n```python\nimport torch\nimport deepstruct.recurrent\nimport numpy as np\n\n# A sequence of size 15 with one-dimensional elements which could e.g. be labelled\n# BatchSize x [(1,), (2,), (3,), (4,), (5,), (0,), (0,), (0,)] --> [ label1, label2, ..]\nbatch_size = 100\nseq_size = 15\ninput_size = 1\nmodel = deepstruct.recurrent.MaskedDeepRNN(\n    input_size,\n    hidden_layers=[100, 100, 1],\n    batch_first=True,\n    build_recurrent_layer=deepstruct.recurrent.MaskedLSTMLayer,\n)\nrandom_input = torch.tensor(\n    np.random.random((batch_size, seq_size, input_size)),\n    dtype=torch.float32,\n    requires_grad=False,\n)\nmodel.forward(random_input)\n```\n\n\n\n## Sparse Neural Network implementations\n![Sparse Network Connectivity on zeroth order with a masked deep feed-forward neural network](docs/masked-deep-ffn.png)\n![Sparse Network Connectivity on zeroth order with a masked deep neural network with skip-layer connections](docs/masked-deep-dan.png)\n![Sparse Network Connectivity on second order with a masked deep cell-based neural network](docs/masked-deep-cell-dan.png)\n\n**What's contained in deepstruct?**\n- ready-to-use models in pytorch for learning instances on common (supervised/unsupervised) datasets from which a structural analysis is possible\n- model-to-graph transformations for studying models from a graph-theoretic perspective\n\n**Models:**\n- *deepstruct.sparse.MaskableModule*: pytorch modules that contain explicit masks to enforce (mostly zero-ordered) structure\n- *deepstruct.sparse.MaskedLinearLayer*: pytorch module with a simple linear layer extended with masking capability.\nSuitable if you want to have linear-layers on which to enforce masks which could be obtained through pruning, regularization or other other search techniques.\n- *deepstruct.sparse.MaskedDeepFFN*: feed-forward neural network with any width and depth and easy-to-use masks.\nSuitable for simple and canonical pruning research on zero-ordered structure\n- *deepstruct.sparse.MaskedDeepDAN*: feed-forward neural network with skip-layer connections based on any directed acyclic network.\nSuitable for arbitrary structures on zero-order and on that level most flexible but also computationally expensive.\n- *deepstruct.sparse.DeepCellDAN*: complex module based on a 
directed acyclic network and custom cells on third-order structures.\nSuitable for large-scale neural architecture search\n- *deepstruct.recurrent.MaskedDeepRNN*: multi-layered network with recurrent layers which can be masked\n\n## What is the orders of structure?\n- zero-th order: weight-level\n- first order: kernel-level (filter, channel, blocks, cells)\n- second order: layers\n\nThere is various evidence across empirical machine learning studies that the way artificial neural networks are structurally connected has a (minor?) influence on performance metrics such as the accuracy or probably even on more complex concepts such as adversarial robustness.\nWhat do we mean by \"structure\"?\nWe define structure over graph theoretic properties given a computational graph with very restricted non-linearities.\nThis includes all major neural network definitions and lets us study them from the perspective of their *representation* and their *structure*.\nIn a probabilistic sense, one can interprete structure as a prior to the model and despite single-layered wide networks are universal function approximators we follow the hypothesis that given certain structural priors we can find models with better properties.\n\nBefore considering implementations, one should have a look on possible representations of Sparse Neural Networks.\nIn case of feed-forward neural networks (FFNs) the network can be represented as a list of weight matrices.\nEach weight matrix represents the connections from one layer to the next.\nHaving a network without some connections then means setting entries in those matrices to zero.\nRemoving a particular neuron means setting all entries representing its incoming connections to zero.\n\nHowever, sparsity can be employed on various levels of a general artificial neural network.\nZero order sparsity would remove single weights (representing connections) from the network.\nFirst order sparsity removes groups of weights within one dimension of a matrix from the network.\nSparsity can be employed on connection-, weight-, block-, channel-, cell-level and so on.\nImplementations respecting the areas for sparsification can have drastical differences.\nThus there are various ways for implementing Sparse Neural Networks.\n\n\n# Artificial PyTorch Datasets\n![A custom artificial landscape Stier2020B for testing function approximation](docs/artificial-landscape-approximation.png)\nWe provide some simple utilities for artificial function approximation.\nLike polynomials, neural networks are universal function approximators on bounded intervals of compact spaces.\nTo test, you can easily define a function of any finite dimension, e.g. 
$f: \\mathbb{R}^2\\rightarrow\\mathbb{R}, (x,y)\\mapsto 20 + x - 1.8*(y-5) + 3 * np.sin(x + 2 * y) * y + (x / 4) ** 4 + (y / 4) ** 4$:\n```python\nimport numpy as np\nimport torch.utils.data\nfrom deepstruct.dataset import FuncDataset\nfrom deepstruct.sparse import MaskedDeepFFN\n\n# Our artificial landscape: f: R^2 -> R\n# Have a look at https://github.com/innvariant/eddy for some visual examples\n# You could easily define arbitrary functions from R^a to R^b\nstier2020B1d = lambda x, y: 20 + x - 1.8*(y-5) + 3 * np.sin(x + 2 * y) * y + (x / 4) ** 4 + (y / 4) ** 4\nds_input_shape = (2,)  # specify the number of input dimensions (usually a one-sized tensor if no further structures are used)\n# Explicitly define the target function for the dataset which returns a numpy array of our above function\n# By above definition x is two-dimensional, so you have access to x[0] and x[1]\nfn_target = lambda x: np.array([stier2020B1d(x[0], x[1])])\n# Define a sampling strategy for the dataset, e.g. uniform sampling the space\nfn_sampler = lambda: np.random.uniform(-2, 2, size=ds_input_shape)\n# Define the dataset given the target function and your sampling strategy\n# This simply wraps your function into a pytorch dataset and provides you with discrete observations\n# Your model will later only know those observations to come up with an approximate solution of your target\nds_train = FuncDataset(fn_target, shape_input=ds_input_shape, size=500)\n\n# Calculate the output shape given our target function .. usually simply a (1,)-dimensional output\nds_output_shape = fn_target(fn_sampler()).shape\n\n# As usual in pytorch, you can simply wrap your dataset with a loading strategy ..\n# This ensures e.g. that you do not iterate over your observations in the exact same manner\n# In case you sample first 100 examples of a binary classification dataset with label 1 and then another\n# 100 with label 2 it might impact your training .. so this ensures you have an e.g. random sampling strategy over the dataset\nbatch_size = 100\ntrain_sampler = torch.utils.data.SubsetRandomSampler(np.arange(len(ds_train), dtype=np.int64))\ntrain_loader = torch.utils.data.DataLoader(ds_train, batch_size=batch_size, sampler=train_sampler, num_workers=2)\n\n# Define a model for which we can later extract its structure or impose sparsity constraints\nmodel = MaskedDeepFFN(2, 1, [50, 20])\n\n# Iterate over your training set\nfor feat, target in train_loader:\n    print(feat, target)\n\n    # feed it into a model to learn\n    prediction = model.forward(feat)\n\n    # compute a loss based on the expected target and the models prediction\n    # ..\n```\n\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "",
    "version": "0.10.0",
    "split_keywords": [
        "neural network",
        "sparsity",
        "machine learning",
        "structure",
        "graph",
        "training"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "44051c032bfddc0020f4fcd85ffebd7f548bc3b85ad2b903bd3a6fc18f2e24b5",
                "md5": "e5863124d75cb2e6cb639e8072ecd064",
                "sha256": "9274124222a8a2cc61a79633fe1f9930c6bebc3f8ec024555364d0d7c9224d87"
            },
            "downloads": -1,
            "filename": "deepstruct-0.10.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "e5863124d75cb2e6cb639e8072ecd064",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8,<4.0",
            "size": 50206,
            "upload_time": "2023-01-18T11:46:04",
            "upload_time_iso_8601": "2023-01-18T11:46:04.126996Z",
            "url": "https://files.pythonhosted.org/packages/44/05/1c032bfddc0020f4fcd85ffebd7f548bc3b85ad2b903bd3a6fc18f2e24b5/deepstruct-0.10.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3e1bfa4256df95f8bf516d0b3e04d69704ab6df0b368f086d5321bdc49cdfaf6",
                "md5": "3795592adaaa6df04a058277cd4b7104",
                "sha256": "7f27944b218fe2deecbae95facc808db4e3fbe4e31940fbe40636ca6c206cff7"
            },
            "downloads": -1,
            "filename": "deepstruct-0.10.0.tar.gz",
            "has_sig": false,
            "md5_digest": "3795592adaaa6df04a058277cd4b7104",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8,<4.0",
            "size": 46119,
            "upload_time": "2023-01-18T11:46:06",
            "upload_time_iso_8601": "2023-01-18T11:46:06.501170Z",
            "url": "https://files.pythonhosted.org/packages/3e/1b/fa4256df95f8bf516d0b3e04d69704ab6df0b368f086d5321bdc49cdfaf6/deepstruct-0.10.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-01-18 11:46:06",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "innvariant",
    "github_project": "deepstruct",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "deepstruct"
}
        