tfimm

- Name: tfimm
- Version: 0.2.14
- Home page: https://github.com/martinsbruveris/tensorflow-image-models
- Summary: TensorFlow port of PyTorch Image Models (timm) - image models with pretrained weights
- Upload time: 2023-05-15 12:54:41
- Author: Martins Bruveris
- Requires Python: >=3.8,<3.11
- License: Apache-2.0
- Keywords: tensorflow, pretrained models, visual transformer

# TensorFlow Image Models

![Test Status](https://github.com/martinsbruveris/tensorflow-image-models/actions/workflows/tests.yml/badge.svg)
[![Documentation Status](https://readthedocs.org/projects/tfimm/badge/?version=latest)](https://tfimm.readthedocs.io/en/latest/?badge=latest)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Slack](https://img.shields.io/badge/Slack-4A154B?style=for-the-badge&logo=slack&logoColor=white)](https://join.slack.com/t/tfimm/shared_invite/zt-13dnaf3qo-5JJaCBFIQhugeBXBT3NK8A)

- [Introduction](#introduction)
- [Usage](#usage)
- [Models](#models)
- [Profiling](#profiling)
- [License](#license)
- [Contact](#contact)

## Introduction

TensorFlow Image Models (`tfimm`) is a collection of image models with pretrained
weights, obtained by porting architectures from 
[timm](https://github.com/rwightman/pytorch-image-models) to TensorFlow. The hope is
that the number of available architectures will grow over time. For now, it contains
vision transformers (ViT, DeiT, CaiT, PVT and Swin Transformers), MLP-Mixer models 
(MLP-Mixer, ResMLP, gMLP, PoolFormer and ConvMixer), various ResNet flavours (ResNet,
ResNeXt, ECA-ResNet, SE-ResNet), the EfficientNet family (including AdvProp, 
NoisyStudent, Edge-TPU, V2 and Lite versions), MobileNet-V2, VGG, as well as the recent 
ConvNeXt. `tfimm` has now expanded beyond classification and also includes Segment 
Anything.

This work would not have been possible without Ross Wightman's `timm` library and the
work on PyTorch/TensorFlow interoperability in HuggingFace's `transformers` repository.
I tried to make sure all source material is acknowledged. Please let me know if I have
missed something.

## Usage

### Installation 

The package can be installed via `pip`,

```shell
pip install tfimm
```

To load pretrained weights, `timm` needs to be installed separately.
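
For example, `timm` can also be installed via `pip`:

```shell
pip install timm
```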

### Creating models

To load pretrained models use

```python
import tfimm

model = tfimm.create_model("vit_tiny_patch16_224", pretrained="timm")
```

We can list available models with pretrained weights via

```python
import tfimm

print(tfimm.list_models(pretrained="timm"))
```

Most models are pretrained on ImageNet or ImageNet-21k. If we want to use them for
other tasks, we need to change the number of classes in the classifier or remove the
classifier altogether. We can do this by setting the `nb_classes` parameter in 
`create_model`. If `nb_classes=0`, the model will have no classification layer. If
`nb_classes` is set to a value different from the default model config, the 
classification layer will be randomly initialized, while all other weights will be
copied from the pretrained model.
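
For example, to fine-tune on a hypothetical 10-class dataset, one might write the
following (a minimal sketch; the class count is an assumption for illustration):

```python
import tfimm

# Backbone weights are copied from the pretrained model; the new 10-class head is
# randomly initialized.
model = tfimm.create_model("vit_tiny_patch16_224", pretrained="timm", nb_classes=10)

# With nb_classes=0, the model is created without a classification layer, e.g., for
# use as a feature extractor.
backbone = tfimm.create_model("vit_tiny_patch16_224", pretrained="timm", nb_classes=0)
```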

The preprocessing function for each model can be created via

```python
import tensorflow as tf
import tfimm

preprocess = tfimm.create_preprocessing("vit_tiny_patch16_224", dtype="float32")
img = tf.ones((1, 224, 224, 3), dtype="uint8")
img_preprocessed = preprocess(img)
```
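
The preprocessed image can then be fed directly to the model created earlier; for
example (the output shape depends on the model's classification head):

```python
logits = model(img_preprocessed)  # e.g., shape (1, 1000) for an ImageNet-1k head
```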

### Saving and loading models

All models are subclasses of `tf.keras.Model` (they are _not_ functional models).
They can still be saved and loaded using the `SavedModel` format.

```
>>> import tensorflow as tf
>>> import tfimm
>>> model = tfimm.create_model("vit_tiny_patch16_224")
>>> type(model)
<class 'tfimm.architectures.vit.ViT'>
>>> model.save("/tmp/my_model")
>>> loaded_model = tf.keras.models.load_model("/tmp/my_model")
>>> type(loaded_model)
<class 'tfimm.architectures.vit.ViT'>
```

For this to work, the `tfimm` library needs to be imported before the model is loaded,
because `tfimm` registers its custom models with Keras during import. Otherwise, we
obtain the following output:

```
>>> import tensorflow as tf
>>> loaded_model = tf.keras.models.load_model("/tmp/my_model")
>>> type(loaded_model)
<class 'keras.saving.saved_model.load.Custom>ViT'>
```

## Models

The following architectures are currently available:

- CaiT (vision transformer) 
  [\[github\]](https://github.com/facebookresearch/deit/blob/main/README_cait.md)
  - Going deeper with Image Transformers 
    [\[arXiv:2103.17239\]](https://arxiv.org/abs/2103.17239)
- DeiT (vision transformer) 
  [\[github\]](https://github.com/facebookresearch/deit)
  - Training data-efficient image transformers & distillation through attention. 
    [\[arXiv:2012.12877\]](https://arxiv.org/abs/2012.12877) 
- ViT (vision transformer) 
  [\[github\]](https://github.com/google-research/vision_transformer)
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.
    [\[arXiv:2010.11929\]](https://arxiv.org/abs/2010.11929)
  - How to train your ViT? Data, Augmentation, and Regularization in Vision 
    Transformers. [\[arXiv:2106.10270\]](https://arxiv.org/abs/2106.10270)
  - Includes models trained with the SAM optimizer: Sharpness-Aware Minimization for 
    Efficiently Improving Generalization. 
    [\[arXiv:2010.01412\]](https://arxiv.org/abs/2010.01412)
  - Includes models from: ImageNet-21K Pretraining for the Masses
    [\[arXiv:2104.10972\]](https://arxiv.org/abs/2104.10972) 
    [\[github\]](https://github.com/Alibaba-MIIL/ImageNet21K)
- Swin Transformer 
  [\[github\]](https://github.com/microsoft/Swin-Transformer)
  - Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. 
    [\[arXiv:2103.14030\]](https://arxiv.org/abs/2103.14030)
  - TensorFlow code adapted from 
    [Swin-Transformer-TF](https://github.com/rishigami/Swin-Transformer-TF)
- MLP-Mixer and friends
  - MLP-Mixer: An all-MLP Architecture for Vision 
    [\[arXiv:2105.01601\]](https://arxiv.org/abs/2105.01601)
  - ResMLP: Feedforward networks for image classification... 
    [\[arXiv:2105.03404\]](https://arxiv.org/abs/2105.03404)
  - Pay Attention to MLPs (gMLP)
    [\[arXiv:2105.08050\]](https://arxiv.org/abs/2105.08050)
- ConvMixer 
  [\[github\]](https://github.com/tmp-iclr/convmixer)
  - Patches Are All You Need? 
    [\[ICLR 2022 submission\]](https://openreview.net/forum?id=TVHS5Y4dNvM)
- EfficientNet family
  - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
    [\[arXiv:1905.11946\]](https://arxiv.org/abs/1905.11946)
  - Adversarial Examples Improve Image Recognition
    [\[arXiv:1911.09665\]](https://arxiv.org/abs/1911.09665)
  - Self-training with Noisy Student improves ImageNet classification
    [\[arXiv:1911.04252\]](https://arxiv.org/abs/1911.04252)
  - EfficientNet-EdgeTPU
    [\[Blog\]](https://ai.googleblog.com/2019/08/efficientnet-edgetpu-creating.html)
  - EfficientNet-Lite
    [\[Blog\]](https://blog.tensorflow.org/2020/03/higher-accuracy-on-vision-models-with-efficientnet-lite.html)
  - EfficientNetV2: Smaller Models and Faster Training
    [\[arXiv:2104.00298\]](https://arxiv.org/abs/2104.00298)
- MobileNet-V2
  - MobileNetV2: Inverted Residuals and Linear Bottlenecks
    [\[arXiv:1801.04381\]](https://arxiv.org/abs/1801.04381)
- Pyramid Vision Transformer 
  [\[github\]](https://github.com/whai362/PVT)
  - Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without
    Convolutions. [\[arXiv:2102.12122\]](https://arxiv.org/abs/2102.12122)
  - PVTv2: Improved Baselines with Pyramid Vision Transformer 
    [\[arXiv:2106.13797\]](https://arxiv.org/abs/2106.13797)
- ConvNeXt
  [\[github\]](https://github.com/facebookresearch/ConvNeXt)
  - A ConvNet for the 2020s. [\[arXiv:2201.03545\]](https://arxiv.org/abs/2201.03545)
- PoolFormer
  [\[github\]](https://github.com/sail-sg/poolformer)
  - PoolFormer: MetaFormer is Actually What You Need for Vision.
    [\[arXiv:2111.11418\]](https://arxiv.org/abs/2111.11418)
- Pooling-based Vision Transformers (PiT)
  - Rethinking Spatial Dimensions of Vision Transformers.
    [\[arXiv:2103.16302\]](https://arxiv.org/abs/2103.16302)
- ResNet, ResNeXt, ECA-ResNet, SE-ResNet and friends
  - Deep Residual Learning for Image Recognition. 
    [\[arXiv:1512.03385\]](https://arxiv.org/abs/1512.03385)
  - Exploring the Limits of Weakly Supervised Pretraining. 
    [\[arXiv:1805.00932\]](https://arxiv.org/abs/1805.00932)
  - Billion-scale Semi-Supervised Learning for Image Classification. 
    [\[arXiv:1905.00546\]](https://arxiv.org/abs/1905.00546)
  - ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. 
    [\[arXiv:1910.03151\]](https://arxiv.org/abs/1910.03151)
  - Revisiting ResNets. [\[arXiv:2103.07579\]](https://arxiv.org/abs/2103.07579)
  - Making Convolutional Networks Shift-Invariant Again. (anti-aliasing layer)
    [\[arXiv:1904.11486\]](https://arxiv.org/abs/1904.11486)
  - Squeeze-and-Excitation Networks. 
    [\[arXiv:1709.01507\]](https://arxiv.org/abs/1709.01507)
  - Big Transfer (BiT): General Visual Representation Learning
    [\[arXiv:1912.11370\]](https://arxiv.org/abs/1912.11370)
  - Knowledge distillation: A good teacher is patient and consistent
    [\[arXiv:2106.05237\]](https://arxiv.org/abs/2106.05237)
- Segment Anything Model (SAM) 
  [\[github\]](https://github.com/facebookresearch/segment-anything)
  - Segment Anything [\[arXiv:2304.02643\]](https://arxiv.org/abs/2304.02643)

## Profiling

To understand how big each of the models is, I have done some profiling to measure

- maximum batch size that fits in GPU memory and
- throughput in images/second

for both inference and backpropagation on K80 and V100 GPUs. For V100, measurements
were done for both `float32` and mixed precision.

The results can be found in the `results/profiling_{k80, v100}.csv` files.
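
A minimal sketch of how inference throughput could be measured is shown below; the
model name, batch size, and iteration count are illustrative assumptions, not the
exact settings used for the published numbers.

```python
import time

import tensorflow as tf
import tfimm

model = tfimm.create_model("vit_tiny_patch16_224")  # illustrative model choice
batch_size, nb_iter = 32, 50                        # illustrative settings
x = tf.random.uniform((batch_size, 224, 224, 3))

model(x, training=False)  # warm-up call to build the model

start = time.time()
for _ in range(nb_iter):
    model(x, training=False)
print(f"{batch_size * nb_iter / (time.time() - start):.1f} images/second")
```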

For backpropagation, we use the mean of the model outputs as the loss:

```python
def backprop():
    # `model`, `x` (a batch of images) and `optimizer` are assumed to be defined.
    with tf.GradientTape() as tape:
        output = model(x, training=True)
        loss = tf.reduce_mean(output)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

## License

This repository is released under the Apache 2.0 license as found in the 
[LICENSE](LICENSE) file.

## Contact

All things related to `tfimm` can be discussed via 
[Slack](https://join.slack.com/t/tfimm/shared_invite/zt-13dnaf3qo-5JJaCBFIQhugeBXBT3NK8A).
            
