hugsvision

Name: hugsvision
Version: 0.75.5
Home page: https://HugsVision.github.io/
Summary: An easy-to-use Hugging Face wrapper for computer vision.
Upload time: 2023-01-22 01:21:16
Author: Yanis Labrak & Others
Requires Python: >=3.6
Keywords: python, transformers, huggingface, wrapper, toolkit, computer vision, easy
            <p align="center">
  <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/logo_name_transparent.png" alt="drawing" width="250"/>
</p>

[![PyPI version](https://badge.fury.io/py/hugsvision.svg)](https://badge.fury.io/py/hugsvision)
[![GitHub Issues](https://img.shields.io/github/issues/qanastek/HugsVision.svg)](https://github.com/qanastek/HugsVision/issues)
[![Contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg)](CONTRIBUTING.md)
[![License: MIT](https://img.shields.io/badge/License-MIT-brightgreen.svg)](https://opensource.org/licenses/MIT)
[![Downloads](https://static.pepy.tech/personalized-badge/hugsvision?period=total&units=international_system&left_color=grey&right_color=orange&left_text=Downloads)](https://pepy.tech/project/hugsvision)

HugsVision is an open-source, easy-to-use, all-in-one Hugging Face wrapper for computer vision.

The goal is to create a fast, flexible and user-friendly toolkit that can be used to easily develop **state-of-the-art** computer vision technologies, including systems for Image Classification, Semantic Segmentation, Object Detection, Image Generation, Denoising and much more.

⚠️ HugsVision is currently in beta. ⚠️

# Quick installation

HugsVision is constantly evolving. New features, tutorials, and documentation will appear over time. HugsVision can be installed via PyPI to rapidly use the standard library. A local installation is also available for users who want to run experiments and modify or customize the toolkit. HugsVision supports both CPU and GPU computations. For most recipes, however, a GPU is necessary during training. Please note that CUDA must be properly installed to use GPUs.
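As a quick check that CUDA is set up correctly, you can ask PyTorch whether it sees a GPU (a minimal sketch, assuming PyTorch is already installed in your environment):

```python
import torch

# True means PyTorch can see a CUDA-capable GPU; False means training will run on CPU.
print(torch.cuda.is_available())
```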

## Anaconda setup

```bash
conda create --name HugsVision python=3.6 -y
conda activate HugsVision
```

More information on managing environments with Anaconda can be found in [the conda cheat sheet](https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf).

## Install via PyPI

Once you have created your Python environment (Python 3.6+), you can simply type:

```bash
pip install hugsvision
```

## Install with GitHub

Once you have created your Python environment (Python 3.6+), you can simply type:

```bash
git clone https://github.com/qanastek/HugsVision.git
cd HugsVision
pip install -r requirements.txt
pip install --editable .
```

Any modification made to the `hugsvision` package is picked up automatically, since it was installed with the `--editable` flag.
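A quick way to confirm the editable install is active (a sanity check, not part of the original instructions) is to check where Python resolves the package from:

```python
import hugsvision

# With an editable install, this path points inside your local HugsVision clone,
# so source edits take effect on the next import.
print(hugsvision.__file__)
```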

# Example Usage

Let's train a binary classifier that can distinguish people with or without `Pneumothorax` from their chest radiographs.

**Steps:**

1. Move to the recipe directory: `cd recipes/pneumothorax/binary_classification/`
2. Download the dataset [here](https://www.kaggle.com/volodymyrgavrysh/pneumothorax-binary-classification-task) (~779 MB).
3. Transform the dataset into a directory-based layout with the `process.py` script.
4. Train the model: `python train_example_vit.py --imgs="./pneumothorax_binary_classification_task_data/" --name="pneumo_model_vit" --epochs=1`
5. Rename `<MODEL_PATH>/config.json` to `<MODEL_PATH>/preprocessor_config.json`. In this example, the model is saved at an output path such as `./out/MYVITMODEL/1_2021-08-10-00-53-58/model/`.
6. Make a prediction: `python predict.py --img="42.png" --path="./out/MYVITMODEL/1_2021-08-10-00-53-58/model/"` (a Python sketch of the underlying calls follows below).
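The `train_example_vit.py` and `predict.py` scripts in steps 4 and 6 wrap a small Python API. Below is a minimal sketch of the same workflow, based on the project's tutorials; treat the exact argument names (e.g. `test_ratio`, `balanced`, `augmentation`, `max_epochs`) as assumptions that may differ slightly between versions:

```python
from transformers import ViTFeatureExtractor, ViTForImageClassification

from hugsvision.dataio.VisionDataset import VisionDataset
from hugsvision.nnet.VisionClassifierTrainer import VisionClassifierTrainer
from hugsvision.inference.VisionClassifierInference import VisionClassifierInference

# Split the directory-based dataset produced by process.py into train/test sets.
train, test, id2label, label2id = VisionDataset.fromImageFolder(
    "./pneumothorax_binary_classification_task_data/",
    test_ratio=0.15,
    balanced=True,
    augmentation=True,
)

checkpoint = "google/vit-base-patch16-224-in21k"

# Fine-tune a ViT image classifier for one epoch; checkpoints are written under output_dir.
trainer = VisionClassifierTrainer(
    model_name="pneumo_model_vit",
    train=train,
    test=test,
    output_dir="./out/",
    max_epochs=1,
    batch_size=32,
    model=ViTForImageClassification.from_pretrained(
        checkpoint,
        num_labels=len(label2id),
        label2id=label2id,
        id2label=id2label,
    ),
    feature_extractor=ViTFeatureExtractor.from_pretrained(checkpoint),
)

# Reload the fine-tuned model (after the rename in step 5) and predict on one image.
model_path = "./out/MYVITMODEL/1_2021-08-10-00-53-58/model/"
classifier = VisionClassifierInference(
    feature_extractor=ViTFeatureExtractor.from_pretrained(model_path),
    model=ViTForImageClassification.from_pretrained(model_path),
)
print(classifier.predict(img_path="42.png"))
```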

# Models recipes

You can find all the currently available models or tasks under the `recipes/` folder.

<table>
  <tr>
      <td rowspan="3" width="160">
        <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/pneumothorax.png" width="256">
      </td>    
      <td rowspan="3">
        <b>Training a Transformer Image Classifier to help radiologists detect Pneumothorax cases:</b> A demonstration of how to train an Image Classifier Transformer model with HugsVision that can distinguish people with or without Pneumothorax from their radiographs.
      </td>
      <td align="center" width="80">
          <a href="https://nbviewer.jupyter.org/github/qanastek/HugsVision/blob/main/recipes/pneumothorax/binary_classification/Image_Classifier.ipynb">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/nbviewer_logo.svg" height="34">
          </a>
      </td>
  </tr>
  <tr>
      <td align="center">
          <a href="https://github.com/qanastek/HugsVision/tree/main/recipes/pneumothorax/binary_classification/Image_Classifier.ipynb">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/github_logo.png" height="32">
          </a>
      </td>
  </tr>
  <tr>
      <td align="center">
          <a href="https://colab.research.google.com/drive/1IIs3iWaVcH3sRkijdsXqQit0XXewJ0pJ?usp=sharing">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/colab_logo.png" height="28">
          </a>
      </td>
  </tr>

  <!-- ------------------------------------------------------------------- -->
  
  <tr>
      <td rowspan="3" width="160">
        <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/new_blood_cells_coco.png" width="256">
      </td>    
      <td rowspan="3">
        <b>Training an End-to-End Object Detection Transformer to detect blood cells:</b> A demonstration of how to train an end-to-end (E2E) Object Detection Transformer model with HugsVision that can detect and identify blood cells.
      </td>
      <td align="center" width="80">
          <a href="https://nbviewer.jupyter.org/github/qanastek/HugsVision/blob/main/recipes/blood_cells/object_detection/Object_Detection.ipynb">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/nbviewer_logo.svg" height="34">
          </a>
      </td>
  </tr>
  <tr>
      <td align="center">
          <a href="https://github.com/qanastek/HugsVision/tree/main/recipes/blood_cells/object_detection/Object_Detection.ipynb">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/github_logo.png" height="32">
          </a>
      </td>
  </tr>
  <tr>
      <td align="center">
          <a href="https://colab.research.google.com/drive/1Q7_HYfZKrQJHV052OCGnZBHwKMIep3kv?usp=sharing">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/colab_logo.png" height="28">
          </a>
      </td>
  </tr>

  <!-- ------------------------------------------------------------------- -->
  
  <tr>
      <td rowspan="4" width="160">
        <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/kvasir_v2.png" width="256">
      </td>    
      <td rowspan="4">  
        <b>Training a Transformer Image Classifier to help endoscopists:</b> A demonstration of how to train an Image Classifier Transformer model with HugsVision that can help endoscopists automate the detection of various anatomical landmarks, pathological findings, or endoscopic procedures in the gastrointestinal tract.
      </td>
      <td align="center" width="80">
          <a href="https://nbviewer.jupyter.org/github/qanastek/HugsVision/blob/main/recipes/kvasir_v2/binary_classification/Kvasir_v2_Image_Classifier.ipynb">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/nbviewer_logo.svg" height="34">
          </a>
      </td>
  </tr>
  <tr>
      <td align="center">
          <a href="https://github.com/qanastek/HugsVision/blob/main/recipes/kvasir_v2/binary_classification/Kvasir_v2_Image_Classifier.ipynb">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/github_logo.png" height="32">
          </a>
      </td>
  </tr>
  <tr>
      <td align="center">
          <a href="https://colab.research.google.com/drive/1PMV-5c54ZlyoVh6dtkazaDdJR7I8VaqN?usp=sharing">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/colab_logo.png" height="28">
          </a>
      </td>
  </tr>
  <tr>
      <td align="center">
          <a href="https://medium.com/@yanis.labrak/how-to-train-a-custom-vision-transformer-vit-image-classifier-to-help-endoscopists-in-under-5-min-2e7e4110a353">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/medium.png" height="28">
          </a>
      </td>
  </tr>

  <!-- ------------------------------------------------------------------- -->
  
  <tr>
      <td rowspan="3" width="160">
        <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/HAM10000.png" width="256">
      </td>    
      <td rowspan="3">  
        <b>Training and using a TorchVision Image Classifier in 5 min to identify skin cancer:</b> A fast and easy tutorial on training a TorchVision Image Classifier that can help dermatologists identify Melanoma cases, using HugsVision and the HAM10000 dataset.
      </td>
      <td align="center" width="80">
          <a href="https://nbviewer.jupyter.org/github/qanastek/HugsVision/blob/main/recipes/HAM10000/binary_classification/HAM10000_Image_Classifier.ipynb">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/nbviewer_logo.svg" height="34">
          </a>
      </td>
  </tr>
  <tr>
      <td align="center">
          <a href="https://github.com/qanastek/HugsVision/blob/main/recipes/HAM10000/binary_classification/HAM10000_Image_Classifier.ipynb">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/github_logo.png" height="32">
          </a>
      </td>
  </tr>
  <tr>
      <td align="center">
          <a href="https://colab.research.google.com/drive/1tfRpFTT1GJUgrcwHI0pYdAZ5_z0VSevJ?usp=sharing">
              <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/receipes/colab_logo.png" height="28">
          </a>
      </td>
  </tr>
</table>

# HuggingFace Spaces

You can try some of the models or tasks on Hugging Face thanks to their amazing Spaces:

<table>
<thead>
  <tr>
    <td>
        <a href="https://huggingface.co/spaces/HugsVision/Skin-Cancer">
            <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/spaces/1-to-1_ratio/skin-cancer-classifier.png" width="128">
        </a>
    </td>
    <td>
        <a href="https://huggingface.co/spaces/zihaoz96/shark-classifier">
            <img src="https://raw.githubusercontent.com/qanastek/HugsVision/main/ressources/images/spaces/1-to-1_ratio/shark-classifier.png" width="128">
        </a>
    </td>
  </tr>
</thead>
</table>

# Model architectures

All the model checkpoints provided by 🤗 Transformers and compatible with our tasks can be seamlessly integrated from the huggingface.co model hub, where they are uploaded directly by users and organizations.

Before you start implementing, please check that your model has a `PyTorch` implementation by referring to [this table](https://huggingface.co/transformers/index.html#supported-frameworks).
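For example, a quick way to confirm that a hub checkpoint loads in PyTorch is to instantiate it with the plain 🤗 Transformers auto classes (a minimal check independent of HugsVision; the checkpoint name below is only an example):

```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

checkpoint = "google/vit-base-patch16-224-in21k"  # any hub checkpoint you plan to use

# If both calls succeed, the checkpoint has a usable PyTorch implementation.
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)
print(type(model).__name__)  # e.g. "ViTForImageClassification"
```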

🤗 Transformers currently provides the following architectures for Computer Vision:

1. **[ViT](https://huggingface.co/transformers/model_doc/vit.html)** (from Google Research, Brain Team) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/pdf/2010.11929.pdf) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
2. **[DeiT](https://huggingface.co/transformers/model_doc/deit.html)** (from Facebook AI and Sorbonne University) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/pdf/2012.12877.pdf) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
3. **[BEiT](https://huggingface.co/transformers/master/model_doc/beit.html)** (from Microsoft Research) released with the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/pdf/2106.08254.pdf) by Hangbo Bao, Li Dong and Furu Wei.
4. **[DETR](https://huggingface.co/transformers/model_doc/detr.html)** (from Facebook AI) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/pdf/2005.12872.pdf) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko.

# Build PyPI package

Build: `python setup.py sdist bdist_wheel`

Upload: `twine upload dist/*`

# Citation

If you want to cite the tool, you can use:

```bibtex
@misc{HugsVision,
  title={HugsVision},
  author={Yanis Labrak},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/qanastek/HugsVision}},
  year={2021}
}
```



            
