Deep Object Reid
================
Deep Object Reid is a library for deep-learning image classification and object re-identification, written in `PyTorch <https://pytorch.org/>`_.
It is a part of `OpenVINO™ Training Extensions <https://github.com/opencv/openvino_training_extensions>`_.
The project is based on Kaiyang Zhou's `Torchreid <https://github.com/KaiyangZhou/deep-person-reid>`_ project.
Its features:
- multi-GPU training
- support for both image- and video-reid
- end-to-end training and evaluation
- incredibly easy preparation of reid datasets
- multi-dataset training
- cross-dataset evaluation
- standard protocol used by most research papers
- highly extensible (easy to add models, datasets, training methods, etc.)
- implementations of state-of-the-art deep reid models
- access to pretrained reid models
- advanced training techniques
- visualization tools (tensorboard, ranks, etc.)
Code: https://github.com/openvinotoolkit/deep-object-reid
How-to instructions: https://github.com/openvinotoolkit/deep-object-reid/blob/ote/docs/user_guide.rst
Model zoo by Kaiyang Zhou: https://github.com/openvinotoolkit/deep-object-reid/blob/ote/docs/MODEL_ZOO.md
Original tech report by Kaiyang Zhou and Tao Xiang: https://arxiv.org/abs/1910.10093.
You can also find other research projects built on top of Torchreid `here <https://github.com/KaiyangZhou/deep-person-reid/tree/master/projects>`_.
What's new
------------
- [May 2020] Added the person attribute recognition code used in `Omni-Scale Feature Learning for Person Re-Identification (ICCV'19) <https://arxiv.org/abs/1905.00953>`_. See ``projects/attribute_recognition/``.
- [May 2020] ``1.2.1``: Added a simple API for feature extraction (``torchreid/utils/feature_extractor.py``). See the `documentation <https://kaiyangzhou.github.io/deep-person-reid/user_guide.html>`_ for instructions, and the sketch after this list.
- [Apr 2020] Code for reproducing the experiments of `deep mutual learning <https://zpascal.net/cvpr2018/Zhang_Deep_Mutual_Learning_CVPR_2018_paper.pdf>`_ in the `OSNet paper <https://arxiv.org/pdf/1905.00953v6.pdf>`__ (Supp. B) has been released at ``projects/DML``.
- [Apr 2020] Upgraded to ``1.2.0``. The engine class has been made more model-agnostic to improve extensibility. See `Engine <torchreid/engine/engine.py>`_ and `ImageSoftmaxEngine <torchreid/engine/image/softmax.py>`_ for more details. Credit to `Dassl.pytorch <https://github.com/KaiyangZhou/Dassl.pytorch>`_.
- [Dec 2019] Our `OSNet paper <https://arxiv.org/pdf/1905.00953v6.pdf>`_ has been updated, with additional experiments (in section B of the supplementary) showing some useful techniques for improving OSNet's performance in practice.
- [Nov 2019] ``ImageDataManager`` can load training data from target datasets by setting ``load_train_targets=True``, and the train loader can be accessed with ``train_loader_t = datamanager.train_loader_t``. This feature is useful for domain adaptation research; see the sketch after this list.
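For reference, below are two minimal sketches of the features mentioned above. The first assumes the ``FeatureExtractor`` interface added in ``1.2.1``; the checkpoint path is illustrative.

.. code-block:: python

    from torchreid.utils import FeatureExtractor

    # Hypothetical checkpoint path; point this at a trained reid model.
    extractor = FeatureExtractor(
        model_name='osnet_x1_0',
        model_path='log/osnet_x1_0/model.pth.tar-250',
        device='cuda'
    )

    # A list of image paths goes in; a (num_images, feature_dim) tensor comes out.
    features = extractor(['image001.jpg', 'image002.jpg'])

The second sketches the ``load_train_targets`` workflow for domain adaptation; the dataset names are illustrative.

.. code-block:: python

    import torchreid

    datamanager = torchreid.data.ImageDataManager(
        root='reid-data',
        sources='market1501',
        targets='dukemtmcreid',
        load_train_targets=True  # also build a train loader over the target dataset(s)
    )

    train_loader = datamanager.train_loader      # labelled source data
    train_loader_t = datamanager.train_loader_t  # unlabelled target data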
Installation
---------------
Make sure `conda <https://www.anaconda.com/distribution/>`_ is installed.
.. code-block:: bash
# cd to your preferred directory and clone this repo
git clone https://github.com/KaiyangZhou/deep-person-reid.git
# create environment
cd deep-person-reid/
conda create --name torchreid python=3.7
conda activate torchreid
# install dependencies
# make sure `which python` and `which pip` point to the correct path
pip install -r requirements.txt
# install torch and torchvision (select the proper cuda version to suit your machine)
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
# install torchreid (no need to rebuild after modifying the source code)
python setup.py develop
Get started: 30 seconds to Torchreid
-------------------------------------
1. Import ``torchreid``
.. code-block:: python
import torchreid
2. Load data manager
.. code-block:: python
datamanager = torchreid.data.ImageDataManager(
root='reid-data',
sources='market1501',
targets='market1501',
height=256,
width=128,
batch_size_train=32,
batch_size_test=100,
transforms=['random_flip', 'random_crop']
)
3. Build model, optimizer and LR scheduler
.. code-block:: python
model = torchreid.models.build_model(
name='resnet50',
num_classes=datamanager.num_train_pids,
loss='softmax',
pretrained=True
)
model = model.cuda()
optimizer = torchreid.optim.build_optimizer(
model,
optim='adam',
lr=0.0003
)
scheduler = torchreid.optim.build_lr_scheduler(
optimizer,
lr_scheduler='single_step',
stepsize=20
)
4. Build engine
.. code-block:: python
engine = torchreid.engine.ImageSoftmaxEngine(
datamanager,
model,
optimizer=optimizer,
scheduler=scheduler,
label_smooth=True
)
5. Run training and test
.. code-block:: python
engine.run(
save_dir='log/resnet50',
max_epoch=60,
eval_freq=10,
print_freq=10,
test_only=False
)
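To evaluate a trained model later without re-training, reload the saved weights and run the engine with ``test_only=True``. A minimal sketch, assuming the checkpoint was saved under ``log/resnet50`` (the exact file name is illustrative):

.. code-block:: python

    from torchreid.utils import load_pretrained_weights

    # Hypothetical checkpoint path; use whichever file engine.run() saved under save_dir.
    load_pretrained_weights(model, 'log/resnet50/model.pth.tar-60')

    engine.run(
        save_dir='log/resnet50',
        test_only=True  # skip training and run evaluation only
    )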
A unified interface
-----------------------
In "deep-person-reid/scripts/", we provide a unified interface to train and test a model. See "scripts/main.py" and "scripts/default_config.py" for more details. The folder "configs/" contains some predefined configs which you can use as a starting point.
Below we provide an example of training and testing `OSNet (Zhou et al. ICCV'19) <https://arxiv.org/abs/1905.00953>`_. Assume :code:`PATH_TO_DATA` is the directory containing the reid datasets. The environment variable :code:`CUDA_VISIBLE_DEVICES` is omitted from the commands below; set it if you have multiple GPUs and want to use a specific subset of them.
Conventional setting
^^^^^^^^^^^^^^^^^^^^^
To train OSNet on Market1501, do
.. code-block:: bash
python scripts/main.py \
--config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad_cosine.yaml \
--transforms random_flip random_erase \
--root $PATH_TO_DATA
The config file sets Market1501 as the default dataset. To use DukeMTMC-reID instead, do
.. code-block:: bash
python scripts/main.py \
--config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad_cosine.yaml \
-s dukemtmcreid \
-t dukemtmcreid \
--transforms random_flip random_erase \
--root $PATH_TO_DATA \
data.save_dir log/osnet_x1_0_dukemtmcreid_softmax_cosinelr
The code will automatically download (if necessary) and load the ImageNet-pretrained weights. When training finishes, the model is saved as "log/osnet_x1_0_market1501_softmax_cosinelr/model.pth.tar-250". The `tensorboard <https://pytorch.org/docs/stable/tensorboard.html>`_ log file is written to the same folder. To visualize the learning curves, run :code:`tensorboard --logdir=log/osnet_x1_0_market1501_softmax_cosinelr` in a terminal and open :code:`http://localhost:6006/` in your web browser.
Evaluation is automatically performed at the end of training. To run the test again using the trained model, do
.. code-block:: bash
python scripts/main.py \
--config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad_cosine.yaml \
--root $PATH_TO_DATA \
model.load_weights log/osnet_x1_0_market1501_softmax_cosinelr/model.pth.tar-250 \
test.evaluate True
Cross-domain setting
^^^^^^^^^^^^^^^^^^^^^
To train OSNet on DukeMTMC-reID and test its performance on Market1501, do
.. code-block:: bash
python scripts/main.py \
--config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad.yaml \
-s dukemtmcreid \
-t market1501 \
--transforms random_flip color_jitter \
--root $PATH_TO_DATA
Here we only test the cross-domain performance. However, if you also want to test the performance on the source dataset, i.e. DukeMTMC-reID, you can set :code:`-t dukemtmcreid market1501`, which will evaluate the model on the two datasets separately.
Unlike in the same-domain setting, here we replace :code:`random_erase` with :code:`color_jitter`, which can improve generalization to the unseen target dataset.
Pretrained models are available in the `Model Zoo <https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.html>`_.
Datasets
--------
Image-reid datasets
^^^^^^^^^^^^^^^^^^^^^
- `Market1501 <https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zheng_Scalable_Person_Re-Identification_ICCV_2015_paper.pdf>`_
- `CUHK03 <https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Li_DeepReID_Deep_Filter_2014_CVPR_paper.pdf>`_
- `DukeMTMC-reID <https://arxiv.org/abs/1701.07717>`_
- `MSMT17 <https://arxiv.org/abs/1711.08565>`_
- `VIPeR <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.331.7285&rep=rep1&type=pdf>`_
- `GRID <http://www.eecs.qmul.ac.uk/~txiang/publications/LoyXiangGong_cvpr_2009.pdf>`_
- `CUHK01 <http://www.ee.cuhk.edu.hk/~xgwang/papers/liZWaccv12.pdf>`_
- `SenseReID <http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Spindle_Net_Person_CVPR_2017_paper.pdf>`_
- `QMUL-iLIDS <http://www.eecs.qmul.ac.uk/~sgg/papers/ZhengGongXiang_BMVC09.pdf>`_
- `PRID <https://pdfs.semanticscholar.org/4c1b/f0592be3e535faf256c95e27982db9b3d3d3.pdf>`_
Video-reid datasets
^^^^^^^^^^^^^^^^^^^^^^^
- `MARS <http://www.liangzheng.org/1320.pdf>`_
- `iLIDS-VID <https://www.eecs.qmul.ac.uk/~sgg/papers/WangEtAl_ECCV14.pdf>`_
- `PRID2011 <https://pdfs.semanticscholar.org/4c1b/f0592be3e535faf256c95e27982db9b3d3d3.pdf>`_
- `DukeMTMC-VideoReID <http://openaccess.thecvf.com/content_cvpr_2018/papers/Wu_Exploit_the_Unknown_CVPR_2018_paper.pdf>`_
Models
-------
ImageNet classification models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- `ResNet <https://arxiv.org/abs/1512.03385>`_
- `ResNeXt <https://arxiv.org/abs/1611.05431>`_
- `SENet <https://arxiv.org/abs/1709.01507>`_
- `DenseNet <https://arxiv.org/abs/1608.06993>`_
- `Inception-ResNet-V2 <https://arxiv.org/abs/1602.07261>`_
- `Inception-V4 <https://arxiv.org/abs/1602.07261>`_
- `Xception <https://arxiv.org/abs/1610.02357>`_
- `IBN-Net <https://arxiv.org/abs/1807.09441>`_
Lightweight models
^^^^^^^^^^^^^^^^^^^
- `NASNet <https://arxiv.org/abs/1707.07012>`_
- `MobileNetV2 <https://arxiv.org/abs/1801.04381>`_
- `ShuffleNet <https://arxiv.org/abs/1707.01083>`_
- `ShuffleNetV2 <https://arxiv.org/abs/1807.11164>`_
- `SqueezeNet <https://arxiv.org/abs/1602.07360>`_
ReID-specific models
^^^^^^^^^^^^^^^^^^^^^^
- `MuDeep <https://arxiv.org/abs/1709.05165>`_
- `ResNet-mid <https://arxiv.org/abs/1711.08106>`_
- `HACNN <https://arxiv.org/abs/1802.08122>`_
- `PCB <https://arxiv.org/abs/1711.09349>`_
- `MLFN <https://arxiv.org/abs/1803.09132>`_
- `OSNet <https://arxiv.org/abs/1905.00953>`_
- `OSNet-AIN <https://arxiv.org/abs/1910.06827>`_
Useful links
-------------
- `OSNet-IBN1-Lite (test-only code with lite docker container) <https://github.com/RodMech/OSNet-IBN1-Lite>`_
- `Deep Learning for Person Re-identification: A Survey and Outlook <https://github.com/mangye16/ReID-Survey>`_
Citation
---------
If you find this code useful in your research, please cite the following papers.
.. code-block:: bibtex
@article{torchreid,
title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
author={Zhou, Kaiyang and Xiang, Tao},
journal={arXiv preprint arXiv:1910.10093},
year={2019}
}
@inproceedings{zhou2019osnet,
title={Omni-Scale Feature Learning for Person Re-Identification},
author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
booktitle={ICCV},
year={2019}
}
@article{zhou2019learning,
title={Learning Generalisable Omni-Scale Representations for Person Re-Identification},
author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
journal={arXiv preprint arXiv:1910.06827},
year={2019}
}