marlin-pytorch

Name: marlin-pytorch
Version: 0.3.4
Home page: https://github.com/ControlNet/MARLIN
Summary: Official PyTorch implementation for MARLIN.
Upload time: 2023-08-31 08:28:26
Author: ControlNet
Requires Python: >=3.6
License: CC BY-NC 4.0
Keywords: deep learning, pytorch, AI
# MARLIN: Masked Autoencoder for facial video Representation LearnINg

<div>
    <img src="assets/teaser.svg">
    <p></p>
</div>

<div align="center">
    <a href="https://github.com/ControlNet/MARLIN/network/members">
        <img src="https://img.shields.io/github/forks/ControlNet/MARLIN?style=flat-square">
    </a>
    <a href="https://github.com/ControlNet/MARLIN/stargazers">
        <img src="https://img.shields.io/github/stars/ControlNet/MARLIN?style=flat-square">
    </a>
    <a href="https://github.com/ControlNet/MARLIN/issues">
        <img src="https://img.shields.io/github/issues/ControlNet/MARLIN?style=flat-square">
    </a>
    <a href="https://github.com/ControlNet/MARLIN/blob/master/LICENSE">
        <img src="https://img.shields.io/badge/license-CC%20BY--NC%204.0-97ca00?style=flat-square">
    </a>
    <a href="https://arxiv.org/abs/2211.06627">
        <img src="https://img.shields.io/badge/arXiv-2211.06627-b31b1b.svg?style=flat-square">
    </a>
</div>

<div align="center">    
    <a href="https://pypi.org/project/marlin-pytorch/">
        <img src="https://img.shields.io/pypi/v/marlin-pytorch?style=flat-square">
    </a>
    <a href="https://pypi.org/project/marlin-pytorch/">
        <img src="https://img.shields.io/pypi/dm/marlin-pytorch?style=flat-square">
    </a>
    <a href="https://www.python.org/"><img src="https://img.shields.io/pypi/pyversions/marlin-pytorch?style=flat-square"></a>
    <a href="https://pytorch.org/"><img src="https://img.shields.io/badge/PyTorch-%3E%3D1.8.0-EE4C2C?style=flat-square&logo=pytorch"></a>
</div>

<div align="center">
    <a href="https://github.com/ControlNet/MARLIN/actions"><img src="https://img.shields.io/github/actions/workflow/status/ControlNet/MARLIN/unittest.yaml?branch=dev&label=unittest&style=flat-square"></a>
    <a href="https://github.com/ControlNet/MARLIN/actions"><img src="https://img.shields.io/github/actions/workflow/status/ControlNet/MARLIN/release.yaml?branch=master&label=release&style=flat-square"></a>
    <a href="https://coveralls.io/github/ControlNet/MARLIN"><img src="https://img.shields.io/coverallsCoverage/github/ControlNet/MARLIN?style=flat-square"></a>
</div>

This repo is the official PyTorch implementation for the paper 
[MARLIN: Masked Autoencoder for facial video Representation LearnINg](https://openaccess.thecvf.com/content/CVPR2023/html/Cai_MARLIN_Masked_Autoencoder_for_Facial_Video_Representation_LearnINg_CVPR_2023_paper) (CVPR 2023).

## Repository Structure

The repository contains two parts:
 - `marlin-pytorch`: the PyPI package for MARLIN, used for inference.
 - The implementation for the paper, including training and evaluation scripts.

```
.
├── assets                # Images for README.md
├── LICENSE
├── README.md
├── MODEL_ZOO.md
├── CITATION.cff
├── .gitignore
├── .github

# below is for the PyPI package marlin-pytorch
├── src                   # Source code for marlin-pytorch
├── tests                 # Unittest
├── requirements.lib.txt
├── setup.py
├── init.py
├── version.txt

# below is for the paper implementation
├── configs              # Configs for experiment settings
├── model                # Marlin models
├── preprocess           # Preprocessing scripts
├── dataset              # Dataloaders
├── utils                # Utility functions
├── train.py             # Training script
├── evaluate.py          # Evaluation script (TODO)
├── requirements.txt

```

## Use `marlin-pytorch` for Feature Extraction

Requirements:
- Python >= 3.6, < 3.11
- PyTorch >= 1.8
- ffmpeg


Install from PyPI:
```bash
pip install marlin-pytorch
```
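
You can quickly verify the installation and the `ffmpeg` dependency (a minimal sketch; `shutil.which` only confirms that an `ffmpeg` binary is on the PATH):
```python
import shutil

from marlin_pytorch import Marlin  # raises ImportError if the install failed

assert shutil.which("ffmpeg") is not None, "ffmpeg not found on PATH"
```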

Load a MARLIN model from online
```python
from marlin_pytorch import Marlin
# Load MARLIN model from GitHub Release
model = Marlin.from_online("marlin_vit_base_ytf")
```

Load a MARLIN model from file
```python
from marlin_pytorch import Marlin
# Load MARLIN model from local file
model = Marlin.from_file("marlin_vit_base_ytf", "path/to/marlin.pt")
# Load MARLIN model from the ckpt file trained by the scripts in this repo
model = Marlin.from_file("marlin_vit_base_ytf", "path/to/marlin.ckpt")
```

Currently available model names:
- `marlin_vit_small_ytf`: ViT-small encoder trained on YTF dataset. Embedding 384 dim.
- `marlin_vit_base_ytf`: ViT-base encoder trained on YTF dataset. Embedding 768 dim.
- `marlin_vit_large_ytf`: ViT-large encoder trained on YTF dataset. Embedding 1024 dim.

For more details, see [MODEL_ZOO.md](MODEL_ZOO.md).
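
As a quick sanity check, you can confirm the embedding dimension of a chosen encoder (a minimal sketch; it assumes network access for the GitHub Release download and uses `extract_features`, described below):
```python
import torch
from marlin_pytorch import Marlin

# Expected embedding sizes for the model names listed above.
EMBED_DIMS = {
    "marlin_vit_small_ytf": 384,
    "marlin_vit_base_ytf": 768,
    "marlin_vit_large_ytf": 1024,
}

name = "marlin_vit_small_ytf"
model = Marlin.from_online(name)  # downloads and caches the weights

# One dummy clip: 3 channels x 16 frames x 224x224 pixels.
features = model.extract_features(torch.rand(1, 3, 16, 224, 224), keep_seq=False)
assert features.shape[-1] == EMBED_DIMS[name]  # 384 for ViT-small
```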

When a MARLIN model is retrieved from a GitHub Release, it is cached in `.marlin`. You can clear the cache with
```python
from marlin_pytorch import Marlin
Marlin.clean_cache()
```

Extract features from a cropped video file
```python
# Extract features from a face-cropped video (224x224 resolution)
features = model.extract_video("path/to/video.mp4")
print(features.shape)  # torch.Size([T, 768]) where T is the number of windows

# Keep all tokens of the sequence by setting keep_seq=True
features = model.extract_video("path/to/video.mp4", keep_seq=True)
print(features.shape)  # torch.Size([T, k, 768]) where k = T/t * H/h * W/w = 8 * 14 * 14 = 1568
```
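
The token count `k` above comes from the ViT patch layout: each 16-frame window is split into temporal tubelets and spatial patches. A worked version of the arithmetic (the tubelet size 2 and patch size 16 are the values implied by the numbers above):
```python
# k = (frames per window / tubelet size) * (H / patch) * (W / patch)
frames, tubelet = 16, 2  # temporal length of one window and tubelet depth
H = W = 224              # input resolution
patch = 16               # spatial patch size
k = (frames // tubelet) * (H // patch) * (W // patch)
print(k)  # 8 * 14 * 14 = 1568
```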

Extract features from an in-the-wild video file
```python
# Extract features from an in-the-wild video of arbitrary resolution (the face is cropped first)
features = model.extract_video("path/to/video.mp4", crop_face=True)
print(features.shape)  # torch.Size([T, 768])
```

Extract features from a video clip tensor
```python
# Extract features from a clip tensor of shape (B, 3, 16, 224, 224)
x = ...  # video clip
features = model.extract_features(x)  # torch.Size([B, k, 768])
features = model.extract_features(x, keep_seq=False)  # torch.Size([B, 768])
```
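
For example, a minimal end-to-end shape check with a random clip (real inputs should be RGB face crops; the random tensor here only verifies shapes):
```python
import torch
from marlin_pytorch import Marlin

model = Marlin.from_online("marlin_vit_base_ytf")

# Dummy batch: 2 clips x 3 channels x 16 frames x 224x224 pixels.
x = torch.rand(2, 3, 16, 224, 224)
with torch.no_grad():
    features = model.extract_features(x, keep_seq=False)
print(features.shape)  # torch.Size([2, 768])
```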

## Paper Implementation

### Requirements
- Python >= 3.7, < 3.11
- PyTorch ~= 1.11
- Torchvision ~= 0.12

### Installation

First, make sure PyTorch and Torchvision are installed, with or without CUDA support.

Clone the repo and install the requirements:
```bash
git clone https://github.com/ControlNet/MARLIN.git
cd MARLIN
pip install -r requirements.txt
```
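
You can confirm the pinned versions before running anything (the `~=` constraints above mean 1.11.x and 0.12.x):
```python
# Print the installed versions and CUDA availability to check against
# the requirements listed above.
import torch
import torchvision

print(torch.__version__)        # expected 1.11.x
print(torchvision.__version__)  # expected 0.12.x
print(torch.cuda.is_available())
```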

### MARLIN Pretraining

Download the [YoutubeFaces](https://www.cs.tau.ac.il/~wolf/ytfaces/) dataset (only `frame_images_DB` is required). 

Download the face parsing model from [face_parsing.farl.lapa](https://github.com/FacePerceiver/facer/releases/download/models-v1/face_parsing.farl.lapa.main_ema_136500_jit191.pt)
and put it in `utils/face_sdk/models/face_parsing/face_parsing_1.0`.

Download the VideoMAE pretrained [checkpoint](https://github.com/ControlNet/MARLIN/releases/misc)
for initializing the weights. (Note: the VideoMAE authors updated their models in this
[commit](https://github.com/MCG-NJU/VideoMAE/commit/2b56a75d166c619f71019e3d1bb1c4aedafe7a90), but MARLIN uses the
older weights, which are no longer distributed by the authors, so we host a copy ourselves.)

Then run the preprocessing script:
```bash
python preprocess/ytf_preprocess.py --data_dir /path/to/youtube_faces --max_workers 8
```
After preprocessing, the directory structure should look like this:
```
├── YoutubeFaces
│   ├── frame_images_DB
│   │   ├── Aaron_Eckhart
│   │   │   ├── 0
│   │   │   │   ├── 0.555.jpg
│   │   │   │   ├── ...
│   │   │   ├── ...
│   │   ├── ...
│   ├── crop_images_DB
│   │   ├── Aaron_Eckhart
│   │   │   ├── 0
│   │   │   │   ├── 0.555.jpg
│   │   │   │   ├── ...
│   │   │   ├── ...
│   │   ├── ...
│   ├── face_parsing_images_DB
│   │   ├── Aaron_Eckhart
│   │   │   ├── 0
│   │   │   │   ├── 0.555.npy
│   │   │   │   ├── ...
│   │   │   ├── ...
│   │   ├── ...
│   ├── train_set.csv
│   ├── val_set.csv
```

Then, run the training script:
```bash
python train.py \
    --config config/pretrain/marlin_vit_base.yaml \
    --data_dir /path/to/youtube_faces \
    --n_gpus 4 \
    --num_workers 8 \
    --batch_size 16 \
    --epochs 2000 \
    --official_pretrained /path/to/videomae/checkpoint.pth
```

After training, you can load the checkpoint for inference:

```python
from marlin_pytorch import Marlin
from marlin_pytorch.config import register_model_from_yaml

register_model_from_yaml("my_marlin_model", "path/to/config.yaml")
model = Marlin.from_file("my_marlin_model", "path/to/marlin.ckpt")
```
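
The registered model then behaves like the built-in ones, so the extraction APIs above apply unchanged, e.g.:
```python
features = model.extract_video("path/to/video.mp4", crop_face=True)
print(features.shape)  # torch.Size([T, d]) where d is your config's embedding size
```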

## References
If you find this work useful for your research, please consider citing it.
```bibtex
@inproceedings{cai2022marlin,
  title = {MARLIN: Masked Autoencoder for facial video Representation LearnINg},
  author = {Cai, Zhixi and Ghosh, Shreya and Stefanov, Kalin and Dhall, Abhinav and Cai, Jianfei and Rezatofighi, Hamid and Haffari, Reza and Hayat, Munawar},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2023},
  month = {June},
  pages = {1493-1504},
  doi = {10.1109/CVPR52729.2023.00150},
  publisher = {IEEE},
}
```

## License

This project is under the CC BY-NC 4.0 license. See [LICENSE](LICENSE) for details.

## Acknowledgements

Some of the model code is based on [MCG-NJU/VideoMAE](https://github.com/MCG-NJU/VideoMAE). The preprocessing code
is adapted from [JDAI-CV/FaceX-Zoo](https://github.com/JDAI-CV/FaceX-Zoo).



            
