<div align="center">
<p>
<a align="center" href="" target="_blank">
<img
width="850"
src="https://media.roboflow.com/open-source/autodistill/autodistill-banner.png"
>
</a>
</p>
</div>
# Autodistill Transformers Module
This repository contains the code supporting the use of Transformers models with [Autodistill](https://github.com/autodistill/autodistill).
[Transformers](https://github.com/huggingface/transformers), maintained by Hugging Face, features a range of state-of-the-art models for Natural Language Processing (NLP), computer vision, and more.
This package allows you to write a function that calls a Transformers object detection model and use it to automatically label data. You can then use this data to train a fine-tuned model using an architecture supported by Autodistill (e.g. [YOLOv8](https://github.com/autodistill/autodistill-yolov8), [YOLOv5](https://github.com/autodistill/autodistill-yolov5), or [DETR](https://github.com/autodistill/autodistill-detr)).
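Concretely, the callback you write receives an image and the list of prompts from your ontology, and returns detections in the dictionary format produced by a Transformers processor's `post_process_object_detection` method, as in the Quickstart below. A minimal sketch of that return shape (the values here are placeholders, not real model output):

```python
import torch

def dummy_callback(image, prompts):
    # Mirrors the post_process_object_detection output format: "scores",
    # "labels" (indices into `prompts`), and "boxes" in xyxy pixel coordinates.
    return {
        "scores": torch.tensor([0.9]),
        "labels": torch.tensor([0]),
        "boxes": torch.tensor([[10.0, 20.0, 110.0, 220.0]]),
    }
```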
Read the full [Autodistill documentation](https://autodistill.github.io/autodistill/).
## Installation
To use Transformers with Autodistill, you need to install the following dependency:
```bash
pip3 install autodistill-transformers
```
## Quickstart
The following example shows how to use the Transformers module to label images using the [OwlViTForObjectDetection](https://huggingface.co/google/owlvit-base-patch32) model.

You can update the `inference()` function to use any object detection model supported in the Transformers library, such as [Owlv2ForObjectDetection](https://huggingface.co/google/owlv2-large-patch14-ensemble) (see the sketch after the Quickstart).
```python
import cv2
import torch
from autodistill.detection import CaptionOntology
from autodistill.utils import plot
from transformers import OwlViTForObjectDetection, OwlViTProcessor

from autodistill_transformers import TransformersModel

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")


def inference(image, prompts):
    # Run the OWL-ViT processor and model on the image with the text
    # prompts derived from the ontology.
    inputs = processor(text=prompts, images=image, return_tensors="pt")
    outputs = model(**inputs)

    # Rescale boxes back to the original image size (height, width).
    target_sizes = torch.Tensor([image.size[::-1]])

    results = processor.post_process_object_detection(
        outputs=outputs, target_sizes=target_sizes, threshold=0.1
    )[0]

    return results


base_model = TransformersModel(
    ontology=CaptionOntology(
        {
            "a photo of a person": "person",
            "a photo of a cat": "cat",
        }
    ),
    callback=inference,
)

# run inference
results = base_model.predict("image.jpg", confidence=0.1)

print(results)

# plot results
plot(
    image=cv2.imread("image.jpg"),
    detections=results,
    classes=base_model.ontology.classes(),
)

# label a directory of images
base_model.label("./context_images", extension=".jpeg")
```
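Because the detector is supplied through the `inference()` callback, you can swap in a different model without changing the rest of the pipeline. Below is a hedged sketch of what an OWLv2 callback could look like; the `google/owlv2-base-patch16-ensemble` checkpoint and the `Owlv2Processor`/`Owlv2ForObjectDetection` classes come from the Transformers library and Hugging Face Hub, not from this repository's documented API:

```python
import torch
from transformers import Owlv2ForObjectDetection, Owlv2Processor

# OWLv2 checkpoint; other OWLv2 variants on the Hub follow the same pattern.
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")


def inference(image, prompts):
    inputs = processor(text=prompts, images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Rescale boxes to the original (height, width) of the image.
    target_sizes = torch.Tensor([image.size[::-1]])

    return processor.post_process_object_detection(
        outputs=outputs, target_sizes=target_sizes, threshold=0.1
    )[0]
```

Pass this `inference` function as the `callback` argument to `TransformersModel`, exactly as in the Quickstart above.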
## License
This project is licensed under an [MIT license](LICENSE).
## 🏆 Contributing
We love your input! Please see the core Autodistill [contributing guide](https://github.com/autodistill/autodistill/blob/main/CONTRIBUTING.md) to get started. Thank you 🙏 to all our contributors!