tinytorchtest

Name: tinytorchtest
Version: 1.2.3
Home page: https://github.com/abdrysdale/tinytorchtest
Summary: A tiny test suite for pytorch based machine learning models.
Upload time: 2024-07-08 10:21:18
Author: Alex
Requires Python: <4.0,>=3.8
License: GPL-3.0-or-later
# Tiny Torchtest

![coverage](.coverage.svg)

A Tiny Test Suite for pytorch based Machine Learning models, inspired by
[mltest](https://github.com/Thenerdstation/mltest/blob/master/mltest/mltest.py).
Chase Roberts lists out 4 basic tests in his [medium
post](https://medium.com/@keeper6928/mltest-automatically-test-neural-network-models-in-one-function-call-eb6f1fa5019d)
about mltest. tinytorchtest is mostly a pytorch port of mltest (which was
written for tensorflow).

--- 

Forked from [BrianPugh](https://github.com/BrianPugh/torchtest), who
forked the repo from
[suriyadeepan](https://github.com/suriyadeepan/torchtest).

Tiny torchtest has more features and supports more models than torchtest, albeit with fewer dependencies.
The prefix *"tiny"* highlights that this is a small test suite providing sanity checks rather than testing for convergence.

Notable changes:

-   Support for models to have multiple positional arguments.

-   Support for unsupervised learning.

-   Support for models that output a tuple or list of tensors.

-   Object-oriented implementation.

-   Easily reproducible tests - thanks to the object-oriented implementation!

-   Fewer requirements (due to streamlining testing).

-   More comprehensive internal unit tests.

-   This repository is still active. I've created an
    [issue](https://github.com/suriyadeepan/torchtest/issues/6) to
    double-check, but it looks like the original maintainer is no longer
    actioning pull requests.

---

# Installation

``` bash
pip install --upgrade tinytorchtest
```
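
To check the install worked, you can import the package (just a quick smoke test):

``` bash
python -c "from tinytorchtest import tinytorchtest as ttt; print('ok')"
```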

# Usage

``` python
# imports for examples
import torch
import torch.nn as nn
```

## Variables Change

``` python
from tinytorchtest import tinytorchtest as ttt

# We'll be using a simple linear model
model = nn.Linear(20, 2)

# For this example, we'll pretend we have a classification problem
# and create some random inputs and outputs.
inputs = torch.randn(20, 20)
targets = torch.randint(0, 2, (20,)).long()
batch = [inputs, targets]

# Next we'll need a loss function (note: the function itself, not a call to it)
loss_fn = nn.functional.cross_entropy

# ... and an optimisation function
optim = torch.optim.Adam(model.parameters())

# Let's set up the test object
test = ttt.TinyTorchTest(model, loss_fn, optim, batch)

# Now we've got our tiny test object, let's run some tests!
# What are the variables?
print('Our list of parameters', [ np[0] for np in model.named_parameters() ])

# Do they change after a training step?
#  Let's run a train step and see
test.test(test_vars_change=True)
```

``` python
""" FAILURE """
# Let's try to break this, so the test fails.
# We'll build an optimiser that ignores the bias,
# so the bias is never updated during training.
params_to_train = [np[1] for np in model.named_parameters() if np[0] != 'bias']
optim = torch.optim.Adam(params_to_train)
test = ttt.TinyTorchTest(model, loss_fn, optim, batch)
# Run test now
test.test(test_vars_change=True)
# YES! The test fails because the bias did not change
```

## Variables Don't Change

``` python
# What if bias is not supposed to change, by design?
#  Let's test to see if bias remains the same after training
test.test(non_train_vars=[('bias', model.bias)])
# It does! Good. Now, let's move on.
```

## Output Range

``` python
# NOTE: bias is fixed (not trainable)
test.test(output_range=(-2, 2), test_output_range=True)

# Seems to work...
```

``` python
""" FAILURE """
#  Let's tweak the model to fail the test.
model.bias = nn.Parameter(2 + torch.randn(2, ))

# We'll still use the same loss function, optimiser and batch
# from earlier; however this time we've tweaked the bias of the model.
# As it's a new model, we'll need a new tiny test object.
test = ttt.TinyTorchTest(model, loss_fn, optim, batch)

test.test(output_range=(-1, 1), test_output_range=True)

# As expected, it fails; yay!
```

## NaN Tensors

``` python
""" FAILURE """

# Again, keeping everything the same but tweaking the model
model.bias = nn.Parameter(float('NaN') * torch.randn(2, ))

test = ttt.TinyTorchTest(model, loss_fn, optim, batch)

test.test(test_nan_vals=True)
# This test should fail as we've got 'NaN' values in the outputs.
```

## Inf Tensors

``` python
""" FAILURE """
model.bias = nn.Parameter(float('Inf') * torch.randn(2, ))

test = ttt.TinyTorchTest(model, loss_fn, optim, batch)

test.test(test_inf_vals=True)
# Again, this will fail as we've now got 'Inf' values in our model outputs.
```

## Multi-argument models
``` python
# Everything we've done so far works for models that take multiple arguments.

# Let's define a network that takes some input features along
# with a 3D spatial coordinate and predicts a single value.
# Sure, we could perform the concatenation before we pass
# our inputs to the model, but let's say it's much easier to
# do it this way - maybe you're integrating tightly with other
# code and want to match its input format.
class MultiArgModel(torch.nn.Module):
	def __init__(self):
		super().__init__()
		self.layers = torch.nn.Linear(8, 1)
	def forward(self, data, x, y, z):
		inputs = torch.cat((data, x, y, z), dim=1)
		return self.layers(inputs)
model = MultiArgModel()

# This looks a bit more like a regression problem so we'll redefine our loss 
# function to be something more appropriate.
loss_fn = torch.nn.MSELoss()

# We'll stick with the Adam optimiser, but for completeness let's redefine it below
optim = torch.optim.Adam(model.parameters())

# We'll also need some new data for this model
inputs = (
	torch.rand(10, 5), # data
	torch.rand(10, 1), # x
	torch.rand(10, 1), # y
	torch.rand(10, 1), # z
)
outputs = torch.rand(10, 1)
batch = [inputs, outputs]

# Next we initialise our tiny test object
test = ttt.TinyTorchTest(model, loss_fn, optim, batch)

# Now let's run some tests
test.test(
	train_vars=list(model.named_parameters()),
	test_vars_change=True,
	test_inf_vals=True,
	test_nan_vals=True,
)
# Great! Everything works as before but with models that take multiple inputs.
```

## Models with tuple or list outputs

``` python
# Now what about models that output a tuple or list of tensors?
# This could be for something like a variational auto-encoder
# or anything where it's more convenient to separate
# internal networks.

# Let's define a model
class MultiOutputModel(nn.Module):
	def __init__(self, in_size, hidden_size, out_size, num_outputs):
		super().__init__()

		# This network is common for all predictions.
		nets = [nn.Linear(in_size, hidden_size)] 

		# These networks operate separately (in parallel)
		for _ in range(num_outputs):
			nets.append(nn.Linear(hidden_size, out_size))
		self.nets = nn.ModuleList(nets)

	def forward(self, x):
		# Passes through the first network
		x = self.nets[0](x)

		# Returns a list of the separate network predictions
		return [net(x) for net in self.nets[1:]]

# 10 features, 5 hidden nodes, 1 output node, 3 output models
model = MultiOutputModel(10, 5, 1, 3)

# Creates a batch with 100 samples.
batch = [torch.rand(100, 10), torch.rand(100, 1)]

# Optimiser...
optim = torch.optim.Adam([p for p in model.parameters() if p.requires_grad])

# Here we'll have to define a custom loss function to deal with the multiple outputs.
# For now, we'll use something trivial (and quite meaningless):
# the average of the mean absolute errors of each output.
def _loss(outputs, target):
	loss_list = [torch.mean(torch.abs(output - target)) for output in outputs]
	# torch.stack (rather than torch.tensor) keeps the graph, so gradients flow.
	return torch.mean(torch.stack(loss_list))

# Setup test suite
test = ttt.TinyTorchTest(model, _loss, optim, batch, supervised=True)

# Run the tests we want to run!
test.test(
	train_vars=list(model.named_parameters()),
	test_vars_change=True,
	test_inf_vals=True,
	test_nan_vals=True,
)

# Great! Everything works as before but with models that output a tuple or list of tensors.
```

## Unsupervised learning

``` python
# We've looked a lot at supervised learning examples
# but what about unsupervised learning?

# Let's define a simple model
model = nn.Linear(20, 2)

# Now our inputs - notice there are no labels so we just have inputs in our batch
batch = torch.randn(20, 20)

# Here we'll write a very basic loss function that represents a reconstruction loss.
# This is actually a mean absolute distance loss function.
# This would typically be used for something like an auto-encoder.
# The important thing to note is tinytorchtest expects the loss to be loss(outputs, inputs).
def loss_fn(output, input):
	return torch.mean(torch.abs(output - input))

# We haven't redefined the optimiser since the last example,
# so let's point it at the new model's parameters.
optim = torch.optim.Adam(model.parameters())

# We set supervised to false, to let the test suite
# know that there aren't any targets or correct labels.
test = ttt.TinyTorchTest(model, loss_fn, optim, batch, supervised=False)

# Now let's run some tests
test.test(
	train_vars=list(model.named_parameters()),
	test_vars_change=True,
	test_inf_vals=True,
	test_nan_vals=True,
)
# Great! Everything works as before but with unsupervised models.
```

## Testing the GPU

``` python
# Some models really need GPU availability.
# We can get our test suite to fail when the GPU isn't available.

# Sticking with the unsupervised example
test = ttt.TinyTorchTest(model, loss_fn, optim, batch, supervised=False)

# Now let's make sure the GPU is available.
test.test(test_gpu_available=True)
# This test will fail if the GPU isn't available. Your CPU can thank you later.

# We can also explicitly ask that our model and tensors be moved to the GPU
test = ttt.TinyTorchTest(model, loss_fn, optim, batch, supervised=False, device='cuda:0')

# Now all future tests will be run on the GPU
```
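
If you'd rather skip the GPU checks than fail on machines without one, you can guard them yourself. Below is a minimal sketch; the `torch.cuda.is_available()` guard is our own, not part of tinytorchtest:

``` python
# Only run the GPU-specific tests when CUDA is actually available.
if torch.cuda.is_available():
	test = ttt.TinyTorchTest(model, loss_fn, optim, batch, supervised=False, device='cuda:0')
	test.test(test_gpu_available=True)
```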

## Reproducible tests

``` python
# When unit testing our models, it's good practice to have reproducible results.
# For this, we can specify a seed when creating our tiny test object.
test = ttt.TinyTorchTest(model, loss_fn, optim, batch, seed=42)

# The seed is set before each test runs, so the results should always be the same
# regardless of the order in which the tests are called.

```
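
Since failing tests raise an error (see the traceback in the Debugging section below), the suite slots naturally into a unit-testing framework. Here's a minimal sketch assuming pytest, reusing the toy classification setup from earlier; `test_model_sanity` is just an illustrative name:

``` python
import torch
import torch.nn as nn

from tinytorchtest import tinytorchtest as ttt

def test_model_sanity():
	"""Checks the parameters train and the outputs stay free of NaN/Inf values."""
	model = nn.Linear(20, 2)
	batch = [torch.randn(20, 20), torch.randint(0, 2, (20,)).long()]
	optim = torch.optim.Adam(model.parameters())
	test = ttt.TinyTorchTest(model, nn.functional.cross_entropy, optim, batch, seed=42)
	test.test(
		test_vars_change=True,
		test_nan_vals=True,
		test_inf_vals=True,
	)
```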

# Debugging

``` bash
torchtest\torchtest.py", line 151, in _var_change_helper
assert not torch.equal(p0, p1)
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'other'
```

When you are making use of a GPU, you should explicitly specify
`device='cuda:0'`. By default `device` is set to `'cpu'`. See [issue
#1](https://github.com/suriyadeepan/torchtest/issues/1) for more
information.

``` python
test = ttt.TinyTorchTest(model, loss_fn, optim, batch, device='cuda:0')
```

# Citation

``` tex
@misc{abdrysdale2022,
  author = {Alex Drysdale},
  title = {tinytorchtest},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/abdrysdale/tinytorchtest}},
  commit = {4c39c52f27aad1fe9bcc7fbb2525fe1292db81b7}
}
@misc{Ram2019,
  author = {Suriyadeepan Ramamoorthy},
  title = {torchtest},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/suriyadeepan/torchtest}},
  commit = {42ba442e54e5117de80f761a796fba3589f9b223}
}
```

            
