kimm

- Name: kimm
- Version: 0.2.2
- Summary: A Keras model zoo with pretrained weights.
- Upload time: 2024-05-29 03:44:39
- Requires Python: >=3.9
- License: Apache License 2.0
- Keywords: deep-learning, model-zoo, keras, jax, tensorflow, torch, imagenet, pretrained-weights, timm
<!-- markdownlint-disable MD033 -->
<!-- markdownlint-disable MD041 -->

<div align="center">
<img width="50%" src="https://github.com/james77777778/kimm/assets/20734616/b21db8f2-307b-4791-b93d-e913e45fb238" alt="KIMM">

[![Keras](https://img.shields.io/badge/keras-v3.3.0+-success.svg)](https://github.com/keras-team/keras)
[![PyPI](https://img.shields.io/pypi/v/kimm)](https://pypi.org/project/kimm/)
[![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/james77777778/kimm/issues)
[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/james77777778/keras-image-models/actions.yml?label=tests)](https://github.com/james77777778/keras-image-models/actions/workflows/actions.yml?query=branch%3Amain++)
[![codecov](https://codecov.io/gh/james77777778/keras-image-models/graph/badge.svg?token=eEha1SR80D)](https://codecov.io/gh/james77777778/keras-image-models)
</div>

# Keras Image Models

- [Latest Updates](#latest-updates)
- [Introduction](#introduction)
- [Usage](#usage)
- [Installation](#installation)
- [Quickstart](#quickstart)
  - [Image classification with ImageNet weights](#image-classification-using-the-model-pretrained-on-imagenet)
  - [An end-to-end fine-tuning example: cats vs. dogs dataset](#an-end-to-end-example-fine-tuning-an-image-classification-model-on-a-cats-vs-dogs-dataset)
  - [Grad-CAM](#grad-cam)
- [Model Zoo](#model-zoo)
- [License](#license)
- [Acknowledgements](#acknowledgements)

## Latest Updates

2024/05/29:

- Merge all reparameterizable layers into a single `ReparameterizableConv2D`
- Add `GhostNetV3*` from [huawei-noah/Efficient-AI-Backbones](https://github.com/huawei-noah/Efficient-AI-Backbones)

## Introduction

**K**eras **Im**age **M**odels (`kimm`) is a collection of image models, blocks, and layers written in Keras 3. The goal is to offer SOTA models with pretrained weights in a user-friendly manner.

**KIMM** is:

- 🚀 A model zoo where almost all models come with **pretrained weights on ImageNet**.
- 🧰 A set of APIs for exporting models to `.tflite` and `.onnx`.
- 🔧 A library that supports the **reparameterization** technique.
- ✨ A toolkit with built-in **feature extraction** support.

## Usage

- `kimm.list_models`
- `kimm.models.*.available_feature_keys`
- `kimm.models.*(...)`
- `kimm.models.*(..., feature_extractor=True, feature_keys=[...])`
- `kimm.utils.get_reparameterized_model`
- `kimm.export.export_tflite`
- `kimm.export.export_onnx`

```python
import keras
import kimm
import numpy as np


# List available models
print(kimm.list_models("mobileone", weights="imagenet"))
# ['MobileOneS0', 'MobileOneS1', 'MobileOneS2', 'MobileOneS3']

# Initialize model with pretrained ImageNet weights
x = keras.random.uniform([1, 224, 224, 3])
model = kimm.models.MobileOneS0()
y = model.predict(x)
print(y.shape)
# (1, 1000)

# Get the reparameterized model (fuses the multi-branch blocks into plain convolutions)
reparameterized_model = kimm.utils.get_reparameterized_model(model)
y2 = reparameterized_model.predict(x)
np.testing.assert_allclose(
    keras.ops.convert_to_numpy(y), keras.ops.convert_to_numpy(y2), atol=1e-5
)

# Export model to tflite format
kimm.export.export_tflite(reparameterized_model, 224, "model.tflite")

# Export model to onnx format (note: requires the "channels_first" data format)
# kimm.export.export_onnx(reparameterized_model, 224, "model.onnx")

# List available feature keys of the model class
print(kimm.models.MobileOneS0.available_feature_keys)
# ['STEM_S2', 'BLOCK0_S4', 'BLOCK1_S8', 'BLOCK2_S16', 'BLOCK3_S32']

# Enable feature extraction by setting `feature_extractor=True`
# `feature_keys` can be optionally specified
model = kimm.models.MobileOneS0(
    feature_extractor=True, feature_keys=["BLOCK2_S16", "BLOCK3_S32"]
)
features = model.predict(x)
for feature_name, feature in features.items():
    print(feature_name, feature.shape)
# BLOCK2_S16 (1, 14, 14, 256)
# BLOCK3_S32 (1, 7, 7, 1024)
# TOP (1, 1000)

```
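
To sanity-check an exported file, the stock TensorFlow Lite interpreter can run it directly. A minimal sketch (not a `kimm` API; it only assumes the `model.tflite` written by the export call above):

```python
import numpy as np
import tensorflow as tf

# Load the file produced by kimm.export.export_tflite above
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Run a dummy batch through the converted model
dummy = np.random.uniform(size=inp["shape"]).astype(np.float32)
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)  # (1, 1000)
```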

## Installation

```bash
pip install keras kimm -U
```
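
Since `kimm` is pure Keras 3, it runs on any of the supported backends. The backend is selected through the standard `KERAS_BACKEND` environment variable, which must be set before `keras` is imported; for example:

```python
import os

# Choose "tensorflow", "jax", or "torch" before importing keras/kimm
os.environ["KERAS_BACKEND"] = "jax"

import keras
import kimm
```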

## Quickstart

### Image classification using the model pretrained on ImageNet

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/14WxYgVjlwCIO9MwqPYW-dskbTL2UHsVN?usp=sharing)

Using `kimm.models.VisionTransformerTiny16`:

<div align="center">
<img width="50%" src="https://github.com/james77777778/keras-image-models/assets/20734616/7caa4e5e-8561-425b-aaf2-6ae44ac3ea00" alt="african_elephant">
</div>
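
The notebook boils down to a few lines. A minimal sketch, where `elephant.jpg` stands in for your own image, the 224×224 input size is an assumption to check against `model.summary()`, and the model is assumed to handle its own input preprocessing:

```python
import keras
import kimm
import numpy as np

model = kimm.models.VisionTransformerTiny16()

# "elephant.jpg" is a placeholder path; resize to the model's input size
image = keras.utils.load_img("elephant.jpg", target_size=(224, 224))
x = np.expand_dims(keras.utils.img_to_array(image), axis=0)

preds = model.predict(x)
print(
    "Predicted:",
    keras.applications.imagenet_utils.decode_predictions(preds, top=3)[0],
)
```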

```bash
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step
Predicted: [('n02504458', 'African_elephant', 0.6895825), ('n01871265', 'tusker', 0.17934209), ('n02504013', 'Indian_elephant', 0.12927249)]
```

### An end-to-end example: fine-tuning an image classification model on a cats vs. dogs dataset

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1IbqfqG2NKEOKvBOznIPT1kjOdVPfThmd?usp=sharing)

Using `kimm.models.EfficientNetLiteB0`:

<div align="center">
<img width="75%" src="https://github.com/james77777778/kimm/assets/20734616/cbfc0773-a3fa-407d-be9a-fba4f19da6d3" alt="kimm_prediction_0">

<img width="75%" src="https://github.com/james77777778/kimm/assets/20734616/2eac0831-75bb-4790-a3af-412c3e09cf8f" alt="kimm_prediction_1">
</div>

Reference: [Transfer learning & fine-tuning (keras.io)](https://keras.io/guides/transfer_learning/#an-endtoend-example-finetuning-an-image-classification-model-on-a-cats-vs-dogs-dataset)
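
The guide follows the standard transfer-learning recipe: freeze the pretrained backbone, train a new classification head, then optionally unfreeze for fine-tuning. A condensed sketch of that recipe applied here; the `include_top=False` / `input_shape` arguments are assumptions borrowed from the `keras.applications` convention, and `train_ds` / `val_ds` stand in for a prepared cats-vs-dogs data pipeline:

```python
import keras
import kimm

# Frozen pretrained backbone (constructor arguments assumed to
# mirror the keras.applications convention)
backbone = kimm.models.EfficientNetLiteB0(
    include_top=False, input_shape=(224, 224, 3)
)
backbone.trainable = False

# New binary-classification head on top of the pooled features
inputs = keras.Input(shape=(224, 224, 3))
x = backbone(inputs, training=False)  # keep BatchNorm in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```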

### Grad-CAM

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1h25VmsYDOLL6BNbRPEVOh1arIgcEoHu6?usp=sharing)

Using `kimm.models.MobileViTS`:

<div align="center">
<img width="50%" src="https://github.com/james77777778/kimm/assets/20734616/cb5022a3-aaea-4324-a9cd-3d2e63a0a6b2" alt="grad_cam">
</div>

Reference: [Grad-CAM class activation visualization (keras.io)](https://keras.io/examples/vision/grad_cam/)
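
The keras.io example ports to a `kimm` backbone with few changes. A compressed sketch, assuming the TensorFlow backend (gradients are taken with `tf.GradientTape`), a 256×256 default input size, and a programmatic scan for the last 4-D feature layer in place of looking its name up in `model.summary()`:

```python
import keras
import kimm
import numpy as np
import tensorflow as tf  # assumes the TensorFlow backend

model = kimm.models.MobileViTS()

# Stand-in for picking the last conv-feature layer by name:
# take the deepest layer that outputs a 4-D feature map
last_conv = next(
    layer for layer in reversed(model.layers)
    if len(getattr(layer.output, "shape", ())) == 4
)
grad_model = keras.Model(model.inputs, [last_conv.output, model.output])

img = np.random.uniform(size=(1, 256, 256, 3)).astype("float32")  # dummy input
with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    top_class = tf.argmax(preds[0])
    score = preds[:, top_class]

# Weight each channel of the feature map by the pooled gradient
# of the top-class score, then ReLU and normalize
grads = tape.gradient(score, conv_out)
pooled = tf.reduce_mean(grads, axis=(0, 1, 2))
heatmap = tf.reduce_sum(conv_out[0] * pooled, axis=-1)
heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
print(heatmap.shape)  # spatial map to overlay on the input image
```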

## Model Zoo

|Model|Paper|Weights ported from|API|
|-|-|-|-|
|ConvMixer|[ICLR 2022 Submission](https://arxiv.org/abs/2201.09792)|`timm`|`kimm.models.ConvMixer*`|
|ConvNeXt|[CVPR 2022](https://arxiv.org/abs/2201.03545)|`timm`|`kimm.models.ConvNeXt*`|
|DenseNet|[CVPR 2017](https://arxiv.org/abs/1608.06993)|`timm`|`kimm.models.DenseNet*`|
|EfficientNet|[ICML 2019](https://arxiv.org/abs/1905.11946)|`timm`|`kimm.models.EfficientNet*`|
|EfficientNetLite|[ICML 2019](https://arxiv.org/abs/1905.11946)|`timm`|`kimm.models.EfficientNetLite*`|
|EfficientNetV2|[ICML 2021](https://arxiv.org/abs/2104.00298)|`timm`|`kimm.models.EfficientNetV2*`|
|GhostNet|[CVPR 2020](https://arxiv.org/abs/1911.11907)|`timm`|`kimm.models.GhostNet*`|
|GhostNetV2|[NeurIPS 2022](https://arxiv.org/abs/2211.12905)|`timm`|`kimm.models.GhostNetV2*`|
|GhostNetV3|[arXiv 2024](https://arxiv.org/abs/2404.11202)|`github`|`kimm.models.GhostNetV3*`|
|HGNet||`timm`|`kimm.models.HGNet*`|
|HGNetV2||`timm`|`kimm.models.HGNetV2*`|
|InceptionNeXt|[arXiv 2023](https://arxiv.org/abs/2303.16900)|`timm`|`kimm.models.InceptionNeXt*`|
|InceptionV3|[CVPR 2016](https://arxiv.org/abs/1512.00567)|`timm`|`kimm.models.InceptionV3`|
|LCNet|[arXiv 2021](https://arxiv.org/abs/2109.15099)|`timm`|`kimm.models.LCNet*`|
|MobileNetV2|[CVPR 2018](https://arxiv.org/abs/1801.04381)|`timm`|`kimm.models.MobileNetV2*`|
|MobileNetV3|[ICCV 2019](https://arxiv.org/abs/1905.02244)|`timm`|`kimm.models.MobileNetV3*`|
|MobileOne|[CVPR 2023](https://arxiv.org/abs/2206.04040)|`timm`|`kimm.models.MobileOne*`|
|MobileViT|[ICLR 2022](https://arxiv.org/abs/2110.02178)|`timm`|`kimm.models.MobileViT*`|
|MobileViTV2|[arXiv 2022](https://arxiv.org/abs/2206.02680)|`timm`|`kimm.models.MobileViTV2*`|
|RegNet|[CVPR 2020](https://arxiv.org/abs/2003.13678)|`timm`|`kimm.models.RegNet*`|
|RepVGG|[CVPR 2021](https://arxiv.org/abs/2101.03697)|`timm`|`kimm.models.RepVGG*`|
|ResNet|[CVPR 2015](https://arxiv.org/abs/1512.03385)|`timm`|`kimm.models.ResNet*`|
|TinyNet|[NeurIPS 2020](https://arxiv.org/abs/2010.14819)|`timm`|`kimm.models.TinyNet*`|
|VGG|[ICLR 2015](https://arxiv.org/abs/1409.1556)|`timm`|`kimm.models.VGG*`|
|ViT|[ICLR 2021](https://arxiv.org/abs/2010.11929)|`timm`|`kimm.models.VisionTransformer*`|
|Xception|[CVPR 2017](https://arxiv.org/abs/1610.02357)|`keras`|`kimm.models.Xception`|

The weight-conversion scripts can be found in `tools/convert_*.py`.

## License

Please refer to [timm](https://github.com/huggingface/pytorch-image-models#licenses) as this project is built upon it.

### `kimm` Code

The code in this repository is licensed under Apache 2.0.

## Acknowledgements

Thanks to these awesome projects that `kimm` builds upon:

- [https://github.com/keras-team/keras](https://github.com/keras-team/keras)
- [https://github.com/huggingface/pytorch-image-models](https://github.com/huggingface/pytorch-image-models)

## Citing

### BibTeX

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```

```bibtex
@misc{hy2024kimm,
  author = {Hongyu Chiu},
  title = {Keras Image Models},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/james77777778/kimm}}
}
```

            
