image-classifiers

- Name: image-classifiers
- Version: 1.0.0
- Home page: https://github.com/qubvel/classification_models
- Summary: Image classification models. Keras.
- Author: Pavel Yakubovskiy
- Requires Python: >=3.0.0
- License: MIT
- Requirements: keras_applications
- Upload time: 2019-10-04 10:27:28

[![PyPI version](https://badge.fury.io/py/image-classifiers.svg)](https://badge.fury.io/py/image-classifiers) [![Build Status](https://travis-ci.com/qubvel/classification_models.svg?branch=master)](https://travis-ci.com/qubvel/classification_models) 
# Classification models Zoo - Keras (and TensorFlow Keras)
Classification models trained on [ImageNet](http://www.image-net.org/).
The library is designed to work with both [Keras](https://keras.io/) and [TensorFlow Keras](https://www.tensorflow.org/guide/keras). See the examples below.

## Important!
There was a major library update on **August 5**. Classification models now work with both frameworks: `keras` and `tensorflow.keras`.
To load models trained before that date, please use version 0.2.2 of `image-classifiers` (the PyPI package name). You can roll back with `pip install -U image-classifiers==0.2.2`.

### Architectures: 
- [VGG](https://arxiv.org/abs/1409.1556) [16, 19]
- [ResNet](https://arxiv.org/abs/1512.03385) [18, 34, 50, 101, 152]
- [ResNeXt](https://arxiv.org/abs/1611.05431) [50, 101]
- [SE-ResNet](https://arxiv.org/abs/1709.01507) [18, 34, 50, 101, 152]
- [SE-ResNeXt](https://arxiv.org/abs/1709.01507) [50, 101]
- [SE-Net](https://arxiv.org/abs/1709.01507) [154]
- [DenseNet](https://arxiv.org/abs/1608.06993) [121, 169, 201]
- [Inception ResNet V2](https://arxiv.org/abs/1602.07261)
- [Inception V3](http://arxiv.org/abs/1512.00567)
- [Xception](https://arxiv.org/abs/1610.02357)
- [NASNet](https://arxiv.org/abs/1707.07012) [large, mobile]
- [MobileNet](https://arxiv.org/pdf/1704.04861.pdf)
- [MobileNet v2](https://arxiv.org/abs/1801.04381)

### Specification 
The top-k accuracies were obtained with a single center crop on the
2012 ILSVRC ImageNet validation set and may differ from the originally reported values.
The input size was 224x224 (shorter side resized to 256) for all models except:
 - NASNetLarge 331x331 (352)
 - InceptionV3 299x299 (324)
 - InceptionResNetV2 299x299 (324)
 - Xception 299x299 (324)

The inference time (\*Time) was measured over 500 batches of size 16.
All models were tested on the same hardware and software;
times are listed only for relative comparison.
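
Below is a minimal sketch of the single-center-crop preprocessing described above (resize the shorter side to the minimum size, then take the central 224x224 crop). It illustrates the protocol, but is an assumption rather than the exact code used to produce the table.

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize

def center_crop(path, crop_size=224, min_size=256):
    """Resize the shorter side to `min_size`, then take the central crop."""
    img = imread(path)
    h, w = img.shape[:2]
    scale = min_size / min(h, w)
    # skimage resize returns floats in [0, 1]; scale back to 0-255
    img = resize(img, (round(h * scale), round(w * scale))) * 255
    top = (img.shape[0] - crop_size) // 2
    left = (img.shape[1] - crop_size) // 2
    return img[top:top + crop_size, left:left + crop_size]

x = np.expand_dims(center_crop('./imgs/tests/seagull.jpg'), 0)
```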

| Model           |Acc@1|Acc@5|Time*|Source|
|-----------------|:---:|:---:|:---:|------|
|vgg16            |70.79|89.74|24.95|[keras](https://github.com/keras-team/keras-applications)|
|vgg19            |70.89|89.69|24.95|[keras](https://github.com/keras-team/keras-applications)|
|resnet18         |68.24|88.49|16.07|[mxnet](https://github.com/Microsoft/MMdnn)|
|resnet34         |72.17|90.74|17.37|[mxnet](https://github.com/Microsoft/MMdnn)|
|resnet50         |74.81|92.38|22.62|[mxnet](https://github.com/Microsoft/MMdnn)|
|resnet101        |76.58|93.10|33.03|[mxnet](https://github.com/Microsoft/MMdnn)|
|resnet152        |76.66|93.08|42.37|[mxnet](https://github.com/Microsoft/MMdnn)|
|resnet50v2       |69.73|89.31|19.56|[keras](https://github.com/keras-team/keras-applications)|
|resnet101v2      |71.93|90.41|28.80|[keras](https://github.com/keras-team/keras-applications)|
|resnet152v2      |72.29|90.61|41.09|[keras](https://github.com/keras-team/keras-applications)|
|resnext50        |77.36|93.48|37.57|[keras](https://github.com/keras-team/keras-applications)|
|resnext101       |78.48|94.00|60.07|[keras](https://github.com/keras-team/keras-applications)|
|densenet121      |74.67|92.04|27.66|[keras](https://github.com/keras-team/keras-applications)|
|densenet169      |75.85|92.93|33.71|[keras](https://github.com/keras-team/keras-applications)|
|densenet201      |77.13|93.43|42.40|[keras](https://github.com/keras-team/keras-applications)|
|inceptionv3      |77.55|93.48|38.94|[keras](https://github.com/keras-team/keras-applications)|
|xception         |78.87|94.20|42.18|[keras](https://github.com/keras-team/keras-applications)|
|inceptionresnetv2|80.03|94.89|54.77|[keras](https://github.com/keras-team/keras-applications)|
|seresnet18       |69.41|88.84|20.19|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|
|seresnet34       |72.60|90.91|22.20|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|
|seresnet50       |76.44|93.02|23.64|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|
|seresnet101      |77.92|94.00|32.55|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|
|seresnet152      |78.34|94.08|47.88|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|
|seresnext50      |78.74|94.30|38.29|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|
|seresnext101     |79.88|94.87|62.80|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|
|senet154         |81.06|95.24|137.36|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|
|nasnetlarge      |**82.12**|**95.72**|116.53|[keras](https://github.com/keras-team/keras-applications)|
|nasnetmobile     |74.04|91.54|27.73|[keras](https://github.com/keras-team/keras-applications)|
|mobilenet        |70.36|89.39|15.50|[keras](https://github.com/keras-team/keras-applications)|
|mobilenetv2      |71.63|90.35|18.31|[keras](https://github.com/keras-team/keras-applications)|


### Weights
| Name                    |Classes   | Models    |
|-------------------------|:--------:|:---------:|
|'imagenet'               |1000      |all models |
|'imagenet11k-place365ch' |11586     |resnet50   |
|'imagenet11k'            |11221     |resnet152  |
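
A minimal sketch of loading one of the alternative weight sets, assuming the name is passed to the constructor the same way as `'imagenet'` (the class count comes from the table above):

```python
from classification_models.keras import Classifiers

ResNet50, preprocess_input = Classifiers.get('resnet50')

# 'imagenet11k-place365ch' weights have 11586 output classes (see the table above)
model = ResNet50(input_shape=(224, 224, 3),
                 weights='imagenet11k-place365ch',
                 classes=11586)
```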


### Installation

Requirements:
- Keras >= 2.2.0 / TensorFlow >= 1.12
- keras_applications >= 1.0.7

###### Note
    This library does not list TensorFlow as an installation requirement.
    Please choose a suitable version ('cpu'/'gpu') and install it manually
    following the official guide (https://www.tensorflow.org/install/).
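
For example, for the TensorFlow 1.x line this library targets (an assumption; check the guide for what fits your setup):

```bash
# CPU-only build
$ pip install tensorflow

# GPU build
$ pip install tensorflow-gpu
```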

PyPI stable package:
```bash
$ pip install image-classifiers==0.2.2
```

PyPI latest package:
```bash
$ pip install image-classifiers==1.0.0
```

Latest version:
```bash
$ pip install git+https://github.com/qubvel/classification_models.git
```

### Examples 

##### Loading model with `imagenet` weights:

```python
# for keras
from classification_models.keras import Classifiers

# for tensorflow.keras
# from classification_models.tfkeras import Classifiers

ResNet18, preprocess_input = Classifiers.get('resnet18')
model = ResNet18((224, 224, 3), weights='imagenet')
```

This approach takes one additional line of code; however, if you would
like to train several models, you do not need to import each one directly,
just access everything through `Classifiers`.

You can get all model names with the `Classifiers.models_names()` method.
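
For instance (a short sketch; the model names shown are just two entries from the table above):

```python
from classification_models.keras import Classifiers

# list every registered model name, e.g. 'resnet18', 'seresnext50', ...
print(Classifiers.models_names())

# then fetch any of them by name
SEResNeXt50, preprocess_input = Classifiers.get('seresnext50')
```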

##### Inference example:

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from keras.applications.imagenet_utils import decode_predictions
from classification_models.keras import Classifiers

ResNet18, preprocess_input = Classifiers.get('resnet18')

# read and prepare image
x = imread('./imgs/tests/seagull.jpg')
x = resize(x, (224, 224)) * 255    # scale back to 0-255 range (resize returns floats in [0, 1])
x = preprocess_input(x)
x = np.expand_dims(x, 0)

# load model
model = ResNet18(input_shape=(224,224,3), weights='imagenet', classes=1000)

# run inference
y = model.predict(x)

# result
print(decode_predictions(y))
```
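
The same flow works with `tensorflow.keras`; only the imports change. A sketch, assuming your TensorFlow version exposes `decode_predictions` under `imagenet_utils` (paths can vary between versions):

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.applications.imagenet_utils import decode_predictions
from classification_models.tfkeras import Classifiers

ResNet18, preprocess_input = Classifiers.get('resnet18')
model = ResNet18(input_shape=(224, 224, 3), weights='imagenet', classes=1000)

x = preprocess_input(resize(imread('./imgs/tests/seagull.jpg'), (224, 224)) * 255)
print(decode_predictions(model.predict(np.expand_dims(x, 0))))
```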

##### Model fine-tuning example:
```python
import keras
from classification_models.keras import Classifiers

ResNet18, preprocess_input = Classifiers.get('resnet18')

# prepare your data
X = ...
y = ...

X = preprocess_input(X)

n_classes = 10

# build model
base_model = ResNet18(input_shape=(224,224,3), weights='imagenet', include_top=False)
x = keras.layers.GlobalAveragePooling2D()(base_model.output)
output = keras.layers.Dense(n_classes, activation='softmax')(x)
model = keras.models.Model(inputs=[base_model.input], outputs=[output])

# train
model.compile(optimizer='SGD', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y)
```
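
Continuing the example above: if you want to train only the new head first, one common option (not part of the original snippet) is to freeze the pretrained backbone before compiling:

```python
# freeze the pretrained backbone so only the new Dense head is updated
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='SGD', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y)
```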



            

Raw data

```json
{
    "_id": null,
    "home_page": "https://github.com/qubvel/classification_models",
    "name": "image-classifiers",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.0.0",
    "maintainer_email": "",
    "keywords": "",
    "author": "Pavel Yakubovskiy",
    "author_email": "qubvel@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/83/89/cf76a884d63477fc0e964d3494e65095272af60c48ee72b2c74b96da92c7/image_classifiers-1.0.0.tar.gz",
    "platform": "",
    "description": "\n[![PyPI version](https://badge.fury.io/py/image-classifiers.svg)](https://badge.fury.io/py/image-classifiers) [![Build Status](https://travis-ci.com/qubvel/classification_models.svg?branch=master)](https://travis-ci.com/qubvel/classification_models) \n# Classification models Zoo - Keras (and TensorFlow Keras)\nTrained on [ImageNet](http://www.image-net.org/) classification models. \nThe library is designed to work both with [Keras](https://keras.io/) and [TensorFlow Keras](https://www.tensorflow.org/guide/keras). See example below.\n\n## Important!\nThere was a huge library update **05 of August**. Now classification-models works with both frameworks: `keras` and `tensorflow.keras`.\nIf you have models, trained before that date, to load them, please, use `image-classifiers` (PyPI package name) of 0.2.2 version. You can roll back using `pip install -U image-classifiers==0.2.2`.\n\n### Architectures: \n- [VGG](https://arxiv.org/abs/1409.1556) [16, 19]\n- [ResNet](https://arxiv.org/abs/1512.03385) [18, 34, 50, 101, 152]\n- [ResNeXt](https://arxiv.org/abs/1611.05431) [50, 101]\n- [SE-ResNet](https://arxiv.org/abs/1709.01507) [18, 34, 50, 101, 152]\n- [SE-ResNeXt](https://arxiv.org/abs/1709.01507) [50, 101]\n- [SE-Net](https://arxiv.org/abs/1709.01507) [154]\n- [DenseNet](https://arxiv.org/abs/1608.06993) [121, 169, 201]\n- [Inception ResNet V2](https://arxiv.org/abs/1602.07261)\n- [Inception V3](http://arxiv.org/abs/1512.00567)\n- [Xception](https://arxiv.org/abs/1610.02357)\n- [NASNet](https://arxiv.org/abs/1707.07012) [large, mobile]\n- [MobileNet](https://arxiv.org/pdf/1704.04861.pdf)\n- [MobileNet v2](https://arxiv.org/abs/1801.04381)\n\n### Specification \nThe top-k accuracy were obtained using center single crop on the \n2012 ILSVRC ImageNet validation set and may differ from the original ones. \nThe input size used was 224x224 (min size 256) for all models except:\n - NASNetLarge 331x331 (352)\n - InceptionV3 299x299 (324)\n - InceptionResNetV2 299x299 (324)\n - Xception 299x299 (324)  \n\nThe inference \\*Time was evaluated on 500 batches of size 16. \nAll models have been tested using same hardware and software. 
\nTime is listed just for comparison of performance.\n\n| Model           |Acc@1|Acc@5|Time*|Source|\n|-----------------|:---:|:---:|:---:|------|\n|vgg16            |70.79|89.74|24.95|[keras](https://github.com/keras-team/keras-applications)|\n|vgg19            |70.89|89.69|24.95|[keras](https://github.com/keras-team/keras-applications)|\n|resnet18         |68.24|88.49|16.07|[mxnet](https://github.com/Microsoft/MMdnn)|\n|resnet34         |72.17|90.74|17.37|[mxnet](https://github.com/Microsoft/MMdnn)|\n|resnet50         |74.81|92.38|22.62|[mxnet](https://github.com/Microsoft/MMdnn)|\n|resnet101        |76.58|93.10|33.03|[mxnet](https://github.com/Microsoft/MMdnn)|\n|resnet152        |76.66|93.08|42.37|[mxnet](https://github.com/Microsoft/MMdnn)|\n|resnet50v2       |69.73|89.31|19.56|[keras](https://github.com/keras-team/keras-applications)|\n|resnet101v2      |71.93|90.41|28.80|[keras](https://github.com/keras-team/keras-applications)|\n|resnet152v2      |72.29|90.61|41.09|[keras](https://github.com/keras-team/keras-applications)|\n|resnext50        |77.36|93.48|37.57|[keras](https://github.com/keras-team/keras-applications)|\n|resnext101       |78.48|94.00|60.07|[keras](https://github.com/keras-team/keras-applications)|\n|densenet121      |74.67|92.04|27.66|[keras](https://github.com/keras-team/keras-applications)|\n|densenet169      |75.85|92.93|33.71|[keras](https://github.com/keras-team/keras-applications)|\n|densenet201      |77.13|93.43|42.40|[keras](https://github.com/keras-team/keras-applications)|\n|inceptionv3      |77.55|93.48|38.94|[keras](https://github.com/keras-team/keras-applications)|\n|xception         |78.87|94.20|42.18|[keras](https://github.com/keras-team/keras-applications)|\n|inceptionresnetv2|80.03|94.89|54.77|[keras](https://github.com/keras-team/keras-applications)|\n|seresnet18       |69.41|88.84|20.19|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|\n|seresnet34       |72.60|90.91|22.20|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|\n|seresnet50       |76.44|93.02|23.64|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|\n|seresnet101      |77.92|94.00|32.55|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|\n|seresnet152      |78.34|94.08|47.88|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|\n|seresnext50      |78.74|94.30|38.29|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|\n|seresnext101     |79.88|94.87|62.80|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|\n|senet154         |81.06|95.24|137.36|[pytorch](https://github.com/Cadene/pretrained-models.pytorch)|\n|nasnetlarge      |**82.12**|**95.72**|116.53|[keras](https://github.com/keras-team/keras-applications)|\n|nasnetmobile     |74.04|91.54|27.73|[keras](https://github.com/keras-team/keras-applications)|\n|mobilenet        |70.36|89.39|15.50|[keras](https://github.com/keras-team/keras-applications)|\n|mobilenetv2      |71.63|90.35|18.31|[keras](https://github.com/keras-team/keras-applications)|\n\n\n### Weights\n| Name                    |Classes   | Models    |\n|-------------------------|:--------:|:---------:|\n|'imagenet'               |1000      |all models |\n|'imagenet11k-place365ch' |11586     |resnet50   |\n|'imagenet11k'            |11221     |resnet152  |\n\n\n### Installation\n\nRequirements:\n- Keras >= 2.2.0 / TensorFlow >= 1.12\n- keras_applications >= 1.0.7\n\n###### Note\n    This library does not have TensorFlow in a requirements for installation. 
\n    Please, choose suitable version (\u2018cpu\u2019/\u2019gpu\u2019) and install it manually using \n    official Guide (https://www.tensorflow.org/install/).\n\nPyPI stable package:\n```bash\n$ pip install image-classifiers==0.2.2\n```\n\nPyPI latest package:\n```bash\n$ pip install image-classifiers==1.0.0b1\n```\n\nLatest version:\n```bash\n$ pip install git+https://github.com/qubvel/classification_models.git\n```\n\n### Examples \n\n##### Loading model with `imagenet` weights:\n\n```python\n# for keras\nfrom classification_models.keras import Classifiers\n\n# for tensorflow.keras\n# from classification_models.tfkeras import Classifiers\n\nResNet18, preprocess_input = Classifiers.get('resnet18')\nmodel = ResNet18((224, 224, 3), weights='imagenet')\n```\n\nThis way take one additional line of code, however if you would \nlike to train several models you do not need to import them directly, \njust access everything through `Classifiers`.\n\nYou can get all model names using `Classifiers.models_names()` method.\n\n##### Inference example:\n\n```python\nimport numpy as np\nfrom skimage.io import imread\nfrom skimage.transform import resize\nfrom keras.applications.imagenet_utils import decode_predictions\nfrom classification_models.keras import Classifiers\n\nResNet18, preprocess_input = Classifiers.get('resnet18')\n\n# read and prepare image\nx = imread('./imgs/tests/seagull.jpg')\nx = resize(x, (224, 224)) * 255    # cast back to 0-255 range\nx = preprocess_input(x)\nx = np.expand_dims(x, 0)\n\n# load model\nmodel = ResNet18(input_shape=(224,224,3), weights='imagenet', classes=1000)\n\n# processing image\ny = model.predict(x)\n\n# result\nprint(decode_predictions(y))\n```\n\n##### Model fine-tuning example:\n```python\nimport keras\nfrom classification_models.keras import Classifiers\n\nResNet18, preprocess_input = Classifiers.get('resnet18')\n\n# prepare your data\nX = ...\ny = ...\n\nX = preprocess_input(X)\n\nn_classes = 10\n\n# build model\nbase_model = ResNet18(input_shape=(224,224,3), weights='imagenet', include_top=False)\nx = keras.layers.GlobalAveragePooling2D()(base_model.output)\noutput = keras.layers.Dense(n_classes, activation='softmax')(x)\nmodel = keras.models.Model(inputs=[base_model.input], outputs=[output])\n\n# train\nmodel.compile(optimizer='SGD', loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit(X, y)\n```\n\n\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Image classification models. Keras.",
    "version": "1.0.0",
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "md5": "fedd8f4ef0525aa52893cfb675922739",
                "sha256": "6030bdfd1bc334a4e9d018e8962fbe9c8deba3257dc21c237032ff1590da2b98"
            },
            "downloads": -1,
            "filename": "image_classifiers-1.0.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "fedd8f4ef0525aa52893cfb675922739",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.0.0",
            "size": 19951,
            "upload_time": "2019-10-04T10:27:26",
            "upload_time_iso_8601": "2019-10-04T10:27:26.234780Z",
            "url": "https://files.pythonhosted.org/packages/81/98/6f84720e299a4942ab80df5f76ab97b7828b24d1de5e9b2cbbe6073228b7/image_classifiers-1.0.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "md5": "64847c2f9807a4e656860bc6425be2e3",
                "sha256": "62022c0ff919d8ba5e3ffb7958b7db916e102e3e65c47c71cf8717ced43c0e4c"
            },
            "downloads": -1,
            "filename": "image_classifiers-1.0.0.tar.gz",
            "has_sig": false,
            "md5_digest": "64847c2f9807a4e656860bc6425be2e3",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.0.0",
            "size": 18195,
            "upload_time": "2019-10-04T10:27:28",
            "upload_time_iso_8601": "2019-10-04T10:27:28.193930Z",
            "url": "https://files.pythonhosted.org/packages/83/89/cf76a884d63477fc0e964d3494e65095272af60c48ee72b2c74b96da92c7/image_classifiers-1.0.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2019-10-04 10:27:28",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "qubvel",
    "github_project": "classification_models",
    "travis_ci": true,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "keras_applications",
            "specs": [
                [
                    "<=",
                    "1.0.8"
                ],
                [
                    ">=",
                    "1.0.7"
                ]
            ]
        }
    ],
    "lcname": "image-classifiers"
}
```