perceiver-io

- Name: perceiver-io
- Version: 0.11.0
- Summary: Perceiver IO
- Home page: https://github.com/krasserm/perceiver-io
- Author: Martin Krasser
- License: Apache-2.0
- Requires Python: >=3.8,<3.11
- Keywords: perceiver-io, perceiver-ar, deep-learning
- Upload time: 2023-06-12 11:27:33
# Perceiver, Perceiver IO and Perceiver AR

This repository is a PyTorch implementation of Perceiver, Perceiver IO and Perceiver AR, with PyTorch Lightning
interfaces for model training and Hugging Face 🤗 interfaces for inference.

<table>
  <tr>
    <td>
       <b>Perceiver</b>: General Perception with Iterative Attention
       (<a href="https://arxiv.org/abs/2103.03206">paper</a>,
        <a href="https://www.youtube.com/watch?v=P_xeshTnPZg">video</a>)
    </td>
    <td><img src="docs/images/small-perceiver.png" alt="Perceiver"/></td>
  </tr>
  <tr>
    <td>
      <b>Perceiver IO</b>: A General Architecture for Structured Inputs & Outputs
      (<a href="https://arxiv.org/abs/2107.14795">paper</a>,
       <a href="https://www.deepmind.com/blog/building-architectures-that-can-handle-the-worlds-data">blog post</a>)
    </td>
    <td><img src="docs/images/small-perceiver-io.png" alt="Perceiver IO"/></td>
  </tr>
  <tr>
    <td>
      General-purpose, long-context autoregressive modeling with <b>Perceiver AR</b>
      (<a href="https://arxiv.org/abs/2202.07765">paper</a>,
       <a href="https://www.deepmind.com/blog/perceiver-ar-general-purpose-long-context-autoregressive-generation">blog post</a>)
    </td>
    <td><img src="docs/images/small-perceiver-ar.png" alt="Perceiver AR"/></td>
  </tr>
</table>

## Overview

At the core of the `perceiver-io` library are *backend models*: lightweight PyTorch implementations of Perceiver,
Perceiver IO and Perceiver AR. They can be wrapped into [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/)
modules for training (*Lightning interface*) and 🤗 modules for inference (*Hugging Face interface*). See
[library design](docs/library-design.md) for details.

<p align="center">
    <img src="docs/images/library-design-small.jpg" alt="library-design"/>
</p>
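
For illustration, here is a minimal sketch of how the two inference-side layers relate, using the pretrained optical flow model from the Getting started section. It assumes that the `backend_model` attribute shown later for the MNIST classifier is exposed by this model as well.

```python
from transformers import pipeline

from perceiver.model.vision import optical_flow  # registers auto-classes and the optical-flow pipeline

# Hugging Face interface: a 🤗 pipeline wrapping a perceiver-io model from the hub
flow_pipeline = pipeline("optical-flow", model="krasserm/perceiver-io-optical-flow")

# Backend model: the lightweight PyTorch module wrapped by the 🤗 model
backend_model = flow_pipeline.model.backend_model
print(type(backend_model))
```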

The command line interface for training is implemented with [Lightning CLI](https://pytorch-lightning.readthedocs.io/en/stable/cli/lightning_cli.html).
Training datasets are 🤗 [datasets](https://huggingface.co/docs/datasets) wrapped into PyTorch Lightning data modules.
For NLP tasks, `perceiver-io` supports all 🤗 [fast tokenizers](https://huggingface.co/docs/transformers/fast_tokenizers)
and the 🤗 Perceiver UTF-8 bytes tokenizer.
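
For example, the configuration options of the image classifier training script used in the Getting started section can be inspected through the standard Lightning CLI help:

```shell
python -m perceiver.scripts.vision.image_classifier fit --help
```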

## Documentation

- [Installation](#installation)
- [Getting started](#getting-started)
- [Library design](docs/library-design.md)
- [Pretrained models](docs/pretrained-models.md)
- [Training examples](docs/training-examples.md)
- [Inference examples](examples/inference.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/krasserm/perceiver-io/blob/main/examples/inference.ipynb)
- [Model construction](docs/model-construction.md)
- [Building blocks](docs/building-blocks.md)

## Installation

### Via pip

```shell
pip install perceiver-io[text,vision,audio]
```
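
The `text`, `vision` and `audio` extras can also be installed selectively, e.g. only the text extra (standard pip extras syntax, shown for illustration):

```shell
pip install perceiver-io[text]
```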

### From sources

Installation from sources requires a [Miniconda](https://docs.conda.io/en/latest/miniconda.html) installation and
[Poetry](https://python-poetry.org/docs/#installation) 1.2.0 or higher.

Create and activate the `perceiver-io` conda environment:

```shell
conda env create -f environment.yml
conda activate perceiver-io
```

Install main and test dependencies, including all extras:

```shell
# Without dependencies required for examples
poetry install --all-extras
```

If you want to run the [examples](examples) locally, additionally use `--with examples`:

```shell
poetry install --all-extras --with examples
```

### Docker image

```shell
docker pull ghcr.io/krasserm/perceiver-io:latest
```

See [Docker image](docs/docker-image.md) for details.
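
A hypothetical way to start an interactive shell in the container with GPU access (this assumes the image ships a shell and that the NVIDIA container toolkit is installed; see the Docker image docs for the supported workflows):

```shell
docker run --rm -it --gpus all ghcr.io/krasserm/perceiver-io:latest bash
```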

## Getting started

### Inference

#### Optical flow

Compute the optical flow between consecutive frames of an input video and write the rendered results to an output
video:

```python
from urllib.request import urlretrieve
from transformers import pipeline

from perceiver.data.vision import video_utils
from perceiver.model.vision import optical_flow  # register auto-classes and pipeline

urlretrieve(
    url="https://martin-krasser.com/perceiver/flow/sintel_clip_cave_dragon_fight.mp4",
    filename="sintel_clip_cave_dragon_fight.mp4",
)

# Create optical flow pipeline
optical_flow_pipeline = pipeline("optical-flow", model="krasserm/perceiver-io-optical-flow", device="cuda:0")

# load consecutive video frame pairs
frame_pairs = video_utils.read_video_frame_pairs("sintel_clip_cave_dragon_fight.mp4")

# create and render optical flow for all frame pairs
optical_flows = optical_flow_pipeline(frame_pairs, render=True, device="cuda:0")

# create video with rendered optical flows
video_utils.write_video("sintel_clip_cave_dragon_fight_output.mp4", optical_flows, fps=24)
```

Here is a side-by-side comparison of the input and output video:

<p align="center">
    <img src="docs/images/optical-flow.gif" alt="optical-flow-sbs">
</p>

#### Symbolic audio generation

Create audio sequences by generating symbolic ([MIDI](https://en.wikipedia.org/wiki/MIDI)) audio data and converting the
generated audio symbols into WAV output with [fluidsynth](https://www.fluidsynth.org/) (_Note:_ fluidsynth must be installed
for the following example to work):

```python
from transformers import pipeline
from pretty_midi import PrettyMIDI
from perceiver.model.audio import symbolic  # auto-class registration

repo_id = "krasserm/perceiver-ar-sam-giant-midi"

prompt = PrettyMIDI("prompt.mid")
audio_generator = pipeline("symbolic-audio-generation", model=repo_id)

output = audio_generator(prompt, max_new_tokens=64, num_latents=1, do_sample=True, top_p=0.95, temperature=1.0, render=True)

with open("generated_audio.wav", "wb") as f:
    f.write(output["generated_audio_wav"])
```
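
The example reads its prompt from a `prompt.mid` file that is not part of this repository. A minimal sketch for creating a simple single-note prompt with [pretty_midi](https://github.com/craffel/pretty-midi) (note choice and duration are placeholders):

```python
import pretty_midi

# Create a MIDI file containing a single middle-C note as a trivial prompt.
prompt = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)  # program 0 = Acoustic Grand Piano
piano.notes.append(pretty_midi.Note(velocity=100, pitch=60, start=0.0, end=0.5))
prompt.instruments.append(piano)
prompt.write("prompt.mid")
```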

Examples of generated audio sequences are available on the 🤗 [hub](https://huggingface.co/krasserm/perceiver-ar-sam-giant-midi#audio-samples).

See [inference examples](https://colab.research.google.com/github/krasserm/perceiver-io/blob/main/examples/inference.ipynb)
for more examples.

### Training

Train a small Perceiver IO image classifier (907K parameters) on MNIST from the command line. The classifier
cross-attends to individual pixels of input images with [repeated cross-attention](docs/building-blocks.md).
See the [image classification](docs/training-examples.md#image-classification) training example for more details.

```shell
python -m perceiver.scripts.vision.image_classifier fit \
  --model.num_latents=32 \
  --model.num_latent_channels=128 \
  --model.encoder.num_frequency_bands=32 \
  --model.encoder.num_cross_attention_layers=2 \
  --model.encoder.num_self_attention_blocks=3 \
  --model.encoder.num_self_attention_layers_per_block=3 \
  --model.encoder.first_self_attention_block_shared=false \
  --model.encoder.dropout=0.1 \
  --model.encoder.init_scale=0.1 \
  --model.decoder.num_output_query_channels=128 \
  --model.decoder.dropout=0.1 \
  --model.decoder.init_scale=0.1 \
  --data=MNISTDataModule \
  --data.batch_size=64 \
  --optimizer=AdamW \
  --optimizer.lr=1e-3 \
  --lr_scheduler.warmup_steps=500 \
  --trainer.accelerator=gpu \
  --trainer.devices=1 \
  --trainer.max_epochs=30 \
  --trainer.logger=TensorBoardLogger \
  --trainer.logger.save_dir=logs \
  --trainer.logger.name=logs
```
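
Since the command above configures a `TensorBoardLogger` with `save_dir=logs`, training progress can be monitored with TensorBoard:

```shell
tensorboard --logdir logs
```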

[Model construction](docs/model-construction.md) describes how to implement model-specific command line interfaces
with the Lightning CLI. Training checkpoints are written to the `logs/img_clf/version_0/checkpoints` directory. Assuming
a checkpoint with filename `epoch=025-val_loss=0.065.ckpt` exists, it can be converted to a `perceiver-io` 🤗 model with

```python
from perceiver.model.vision.image_classifier import convert_mnist_classifier_checkpoint

convert_mnist_classifier_checkpoint(
    save_dir="example/mnist-classifier",
    ckpt_url="logs/img_clf/version_0/checkpoints/epoch=025-val_loss=0.065.ckpt",
)
```

so that it can be used in a 🤗 image classification pipeline

```python
from datasets import load_dataset
from transformers import pipeline

mnist_dataset = load_dataset("mnist", split="test")[:9]

images = mnist_dataset["image"]
labels = mnist_dataset["label"]

classifier = pipeline("image-classification", model="example/mnist-classifier")
predictions = [pred[0]["label"] for pred in classifier(images)]

print(f"Labels:      {labels}")
print(f"Predictions: {predictions}")
```
```
Labels:      [7, 2, 1, 0, 4, 1, 4, 9, 5]
Predictions: [7, 2, 1, 0, 4, 1, 4, 9, 5]
```

or loaded directly:

```python
import torch
from transformers import AutoModelForImageClassification, AutoImageProcessor

model = AutoModelForImageClassification.from_pretrained("example/mnist-classifier")
processor = AutoImageProcessor.from_pretrained("example/mnist-classifier")

inputs = processor(images, return_tensors="pt")

with torch.no_grad():
    # use perceiver-io Hugging Face model
    output_1 = model(**inputs).logits

with torch.no_grad():
    # or use perceiver-io backend model directly  
    output_2 = model.backend_model(inputs.pixel_values)

print(f"Predictions: {output_1.argmax(dim=-1).numpy().tolist()}")
print(f"Predictions: {output_2.argmax(dim=-1).numpy().tolist()}")
```
```
Predictions: [7, 2, 1, 0, 4, 1, 4, 9, 5]
Predictions: [7, 2, 1, 0, 4, 1, 4, 9, 5]
```

See [training examples](docs/training-examples.md) for more examples.

## Articles

Articles referencing this repository:

- [Training compute-optimal Perceiver AR language models](https://krasserm.github.io/2023/01/23/scaling-perceiver-ar/)
- [A gentle introduction to Rotary Position Embedding](https://krasserm.github.io/2022/12/13/rotary-position-embedding/)

## Other implementations

- [Perceiver](https://paperswithcode.com/paper/perceiver-general-perception-with-iterative#code)
- [Perceiver IO](https://paperswithcode.com/paper/perceiver-io-a-general-architecture-for#code)
- [Perceiver AR](https://paperswithcode.com/paper/general-purpose-long-context-autoregressive#code)
