autodistill

- Name: autodistill
- Version: 0.1.26
- Home page: https://github.com/autodistill/autodistill
- Summary: Distill large foundational models into smaller, domain-specific models for deployment
- Upload time: 2024-02-13 20:10:14
- Author: Roboflow
- Requires Python: >=3.7
            <div align="center">
  <p>
    <a align="center" href="" target="_blank">
      <img
        width="850"
        src="https://media.roboflow.com/open-source/autodistill/autodistill-banner.jpg?2"
      >
    </a>
  </p>

[notebooks](https://github.com/roboflow/notebooks) | [inference](https://github.com/roboflow/inference) | [autodistill](https://github.com/autodistill/autodistill) | [collect](https://github.com/roboflow/roboflow-collect)

[![version](https://badge.fury.io/py/autodistill.svg?)](https://badge.fury.io/py/autodistill)
[![downloads](https://img.shields.io/pypi/dm/autodistill)](https://pypistats.org/packages/autodistill)
[![license](https://img.shields.io/pypi/l/autodistill?)](https://github.com/autodistill/autodistill/blob/main/LICENSE)
[![python-version](https://img.shields.io/pypi/pyversions/autodistill)](https://badge.fury.io/py/autodistill)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-auto-train-yolov8-model-with-autodistill.ipynb)
</div>

Autodistill uses big, slow foundation models to train small, fast supervised models. Using `autodistill`, you can go from unlabeled images to inference on a custom model running at the edge with no human intervention in between.

<div align="center">
  <p>
    <a align="center" href="" target="_blank">
      <img
        width="850"
        src="https://media.roboflow.com/open-source/autodistill/steps.jpg"
      >
    </a>
  </p>
</div>

Currently, `autodistill` supports vision tasks like object detection and instance segmentation, but it may be expanded in the future to support language (and other) models.

## πŸ”— Quicklinks

| [Tutorial](https://blog.roboflow.com/autodistill) | [Docs](https://docs.autodistill.com) | [Supported Models](#-available-models) | [Contribute](CONTRIBUTING.md) |
|:---:|:---:|:---:|:---:|

## πŸ‘€ Example Output

Here are example predictions of a Target Model detecting milk bottles and bottlecaps after being trained on an auto-labeled dataset using Autodistill (see [the Autodistill YouTube video](https://www.youtube.com/watch?v=gKTYMfwPo4M) for a full walkthrough):

<div align="center">
  <p>
    <a align="center" href="https://www.youtube.com/watch?v=gKTYMfwPo4M" target="_blank">
      <img
        width="850"
        src="https://media.roboflow.com/open-source/autodistill/milk-480.gif"
      >
    </a>
  </p>
</div>

## πŸš€ Features

* πŸ”Œ Pluggable interface to connect models together
* πŸ€– Automatically label datasets
* 🐰 Train fast supervised models
* πŸ”’ Own your model
* πŸš€ Deploy distilled models to the cloud or the edge

## πŸ“š Basic Concepts

To use `autodistill`, you feed unlabeled data into a Base Model, which uses an Ontology to label a Dataset, which is used to train a Target Model, which outputs a Distilled Model fine-tuned to perform a specific Task.

<div align="center">
  <p>
    <a align="center" href="" target="_blank">
      <img
        width="850"
        src="https://media.roboflow.com/open-source/autodistill/overview.jpg"
      >
    </a>
  </p>
</div>

Autodistill defines several basic primitives:

* **Task** - A Task defines what a Target Model will predict. The Task for each component (Base Model, Ontology, and Target Model) of an `autodistill` pipeline must match for them to be compatible with each other. Object Detection and Instance Segmentation are currently supported through the `detection` task, and `classification` is supported for the models listed in the tables below.
* **Base Model** - A Base Model is a large foundation model that knows a lot about a lot. Base models are often multimodal and can perform many tasks. They're large, slow, and expensive. Examples of Base Models are GroundedSAM and GPT-4's upcoming multimodal variant. We use a Base Model (along with unlabeled input data and an Ontology) to create a Dataset.
* **Ontology** - an Ontology defines how your Base Model is prompted, what your Dataset will describe, and what your Target Model will predict. A simple Ontology is the `CaptionOntology` which prompts a Base Model with text captions and maps them to class names. Other Ontologies may, for instance, use a CLIP vector or example images instead of a text caption.
* **Dataset** - a Dataset is a set of auto-labeled data that can be used to train a Target Model. It is the output generated by a Base Model.
* **Target Model** - a Target Model is a supervised model that consumes a Dataset and outputs a distilled model that is ready for deployment. Target Models are usually small, fast, and fine-tuned to perform a specific task very well (but they don't generalize well beyond the information described in their Dataset). Examples of Target Models are YOLOv8 and DETR.
* **Distilled Model** - a Distilled Model is the final output of the `autodistill` process; it's a set of weights fine-tuned for your task that can be deployed to get predictions.
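
To make the pluggable interface concrete, here is a minimal sketch of a custom Base Model. It assumes the `DetectionBaseModel` interface from `autodistill.detection`; the class name `MyBaseModel` and its stub body are purely illustrative.

```python
import supervision as sv

from autodistill.detection import CaptionOntology, DetectionBaseModel


class MyBaseModel(DetectionBaseModel):
    """Hypothetical plugin; a real one would wrap a foundation model."""

    def __init__(self, ontology: CaptionOntology):
        # the ontology maps captions (prompts) to the class names
        # that will be written into the generated annotations
        self.ontology = ontology

    def predict(self, input: str) -> sv.Detections:
        # a real plugin would prompt its foundation model with the
        # ontology's captions and return the matched boxes; this stub
        # returns an empty result so the class is runnable as-is
        return sv.Detections.empty()
```

In this design, `label()` is provided by the shared base class in terms of `predict()`, so any class that implements `predict()` can auto-label a folder of images the same way the official plugins do.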

## πŸ’‘ Theory and Limitations

Human labeling is one of the biggest barriers to broad adoption of computer vision. It can take thousands of hours to craft a dataset suitable for training a production model. The process of distillation for training supervised models is not new; in fact, traditional human labeling is just another form of distillation from an extremely capable Base Model (the human brain 🧠).


Foundation models know a lot about a lot, but for production we need models that know a lot about a little.

As foundation models get better and better, they will increasingly be able to augment or replace humans in the labeling process. We need tools for steering, utilizing, and comparing these models. Additionally, these foundation models are big, expensive, and often gated behind private APIs. For many production use cases, we need models that can run cheaply and in real time at the edge.

<div align="center">
  <p>
    <a align="center" href="" target="_blank">
      <img
        width="850"
        src="https://media.roboflow.com/open-source/autodistill/connections.jpg"
      >
    </a>
  </p>
</div>

Autodistill's Base Models can already create datasets for many common use cases (and through creative prompting and few-shotting we can expand their utility to many more), but they're not perfect yet. There's still a lot of work to do; this is just the beginning and we'd love your help testing and expanding the capabilities of the system!

## πŸ’Ώ Installation

Autodistill is modular. You'll need to install the `autodistill` package (which defines the interfaces for the above concepts) along with [Base Model and Target Model plugins](#-available-models) (which implement specific models).

By packaging these separately as plugins, dependency and licensing incompatibilities are minimized and new models can be implemented and maintained by anyone.

Example: 
```bash
pip install autodistill autodistill-grounded-sam autodistill-yolov8
```

<details close>
<summary>Install from source</summary>

You can also clone the project from GitHub for local development:

```bash
git clone https://github.com/roboflow/autodistill
cd autodistill
pip install -e .
```
</details>

Additional Base and Target models are [enumerated below](#-available-models).

## πŸš€ Quickstart

See the [demo Notebook](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-auto-train-yolov8-model-with-autodistill.ipynb) for a quick introduction to `autodistill`. This notebook walks through building a milk container detection model with no labeling.

Below, we have condensed key parts of the notebook for a quick introduction to `autodistill`.

You can also run Autodistill in one command. First, install `autodistill`:

```bash
pip install autodistill
```

Then, run:

```bash
autodistill images --base="grounding_dino" --target="yolov8" --ontology '{"prompt": "label"}' --output="./dataset"
```

This command will label all images in a directory called `images` with Grounding DINO and use the labeled images to train a YOLOv8 model. For each entry in the ontology, Grounding DINO is prompted with the key (the "prompt") and any matching detections are saved under the value (the "label"). You can specify as many prompt-label pairs as you want. The resulting dataset will be saved in a folder called `dataset`.
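
For example, a two-class ontology for the milk demo above might look like this (the prompts and labels are illustrative placeholders):

```bash
autodistill images \
  --base="grounding_dino" \
  --target="yolov8" \
  --ontology '{"milk bottle": "bottle", "blue cap": "cap"}' \
  --output="./dataset"
```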

### Install Packages

For this example, we'll show how to distill [GroundedSAM](https://github.com/IDEA-Research/Grounded-Segment-Anything) into a small [YOLOv8](https://github.com/ultralytics/ultralytics) model using [autodistill-grounded-sam](https://github.com/autodistill/autodistill-grounded-sam) and [autodistill-yolov8](https://github.com/autodistill/autodistill-yolov8).

```bash
pip install autodistill autodistill-grounded-sam autodistill-yolov8
```

### Distill a Model

```python
from autodistill_grounded_sam import GroundedSAM
from autodistill.detection import CaptionOntology
from autodistill_yolov8 import YOLOv8

# define an ontology to map class names to our GroundingDINO prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
base_model = GroundedSAM(ontology=CaptionOntology({"shipping container": "container"}))

# label all images in a folder called `images`
base_model.label(
  input_folder="./images",
  output_folder="./dataset"
)

target_model = YOLOv8("yolov8n.pt")
target_model.train("./dataset/data.yaml", epochs=200)

# run inference on the new model
pred = target_model.predict("./dataset/valid/your-image.jpg", confidence=0.5)
print(pred)

# optional: upload your model to Roboflow for deployment
from roboflow import Roboflow

# replace API_KEY, PROJECT_ID, and DATASET_VERSION with your own values
rf = Roboflow(api_key="API_KEY")
project = rf.workspace().project("PROJECT_ID")
project.version(DATASET_VERSION).deploy(model_type="yolov8", model_path="./runs/detect/train/")
```
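
Once training finishes, the distilled weights stand alone. Here is a minimal sketch of standalone inference, assuming the Ultralytics package and YOLOv8's default output path (`./runs/detect/train/weights/best.pt`):

```python
from ultralytics import YOLO

# load the distilled weights produced by target_model.train()
model = YOLO("./runs/detect/train/weights/best.pt")

# run inference without autodistill (or the base model) in the loop
results = model.predict("./dataset/valid/your-image.jpg", conf=0.5)
print(results[0].boxes)
```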

<details close>
<summary>Visualize Predictions</summary>

To plot the annotations for a single image using `autodistill`, you can use the code below. This code is helpful for visualizing the annotations generated by your base model (e.g. GroundedSAM) and the results from your target model (e.g. YOLOv8).

```python
import supervision as sv
import cv2

img_path = "./images/your-image.jpeg"

image = cv2.imread(img_path)

detections = base_model.predict(img_path)
# annotate image with detections
box_annotator = sv.BoxAnnotator()

labels = [
    f"{base_model.ontology.classes()[class_id]} {confidence:0.2f}"
    for _, _, confidence, class_id, _ in detections
]

annotated_frame = box_annotator.annotate(
    scene=image.copy(), detections=detections, labels=labels
)

sv.plot_image(annotated_frame, (16, 16))
```
</details>

## πŸ“ Available Models

Our goal is for `autodistill` to support using all foundation models as Base Models and most SOTA supervised models as Target Models. We focused on object detection and segmentation tasks first and have since added initial classification support (see the tables below). In the future, we hope `autodistill` will also be used for models beyond computer vision.

* βœ… - complete (click row/column header to go to repo)
* 🚧 - work in progress

### object detection

| base / target | [YOLOv8](https://github.com/autodistill/autodistill-yolov8) | [YOLO-NAS](https://github.com/autodistill/autodistill-yolonas) | [YOLOv5](https://github.com/autodistill/autodistill-yolov5) | [DETR](https://github.com/autodistill/autodistill-detr) | YOLOv6 | YOLOv7 | MT-YOLOv6 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [DETIC](https://github.com/autodistill/autodistill-detic) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [GroundedSAM](https://github.com/autodistill/autodistill-grounded-sam) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [GroundingDINO](https://github.com/autodistill/autodistill-grounding-dino) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [OWL-ViT](https://github.com/autodistill/autodistill-owl-vit) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [SAM-CLIP](https://github.com/autodistill/autodistill-sam-clip) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [LLaVA-1.5](https://github.com/autodistill/autodistill-llava) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [Kosmos-2](https://github.com/autodistill/autodistill-kosmos-2) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [OWLv2](https://github.com/autodistill/autodistill-owlv2) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [Roboflow Universe Models (50k+ pre-trained models)](https://github.com/autodistill/autodistill-roboflow-universe) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [CoDet](https://github.com/autodistill/autodistill-codet) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [VLPart](https://github.com/autodistill/autodistill-vlpart) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [Azure Custom Vision](https://github.com/autodistill/autodistill-azure-vision) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [AWS Rekognition](https://github.com/autodistill/autodistill-rekognition) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |
| [Google Vision](https://github.com/autodistill/autodistill-gcp-vision) | βœ… | βœ… | βœ… | βœ… | 🚧 |  |  |


### instance segmentation

| base / target | [YOLOv8](https://github.com/autodistill/autodistill-yolov8) | [YOLO-NAS](https://github.com/autodistill/autodistill-yolonas) | [YOLOv5](https://github.com/autodistill/autodistill-yolov5) | YOLOv7 | Segformer |
|:---:|:---:|:---:|:---:|:---:|:---:|
| [GroundedSAM](https://github.com/autodistill/autodistill-grounded-sam) | βœ… | 🚧 | 🚧 |  |  |
| SAM-CLIP | βœ… | 🚧 | 🚧 |  |  |
| SegGPT | βœ… | 🚧 | 🚧 |  |  |
| FastSAM | 🚧 | 🚧 | 🚧 |  |  |


### classification

| base / target | [ViT](https://github.com/autodistill/autodistill-vit) | [YOLOv8](https://github.com/autodistill/autodistill-yolov8) | [YOLOv5](https://github.com/autodistill/autodistill-yolov5) |
|:---:|:---:|:---:|:---:|
| [CLIP](https://github.com/autodistill/autodistill-clip) | βœ… | βœ… | 🚧 |
| [MetaCLIP](https://github.com/autodistill/autodistill-metaclip) | βœ… | βœ… | 🚧 |
| [DINOv2](https://github.com/autodistill/autodistill-dinov2) | βœ… | βœ… | 🚧 |
| [BLIP](https://github.com/autodistill/autodistill-blip) | βœ… | βœ… | 🚧 |
| [ALBEF](https://github.com/autodistill/autodistill-albef) | βœ… | βœ… | 🚧 |
| [FastViT](https://github.com/autodistill/autodistill-fastvit) | βœ… | βœ… | 🚧 |
| [AltCLIP](https://github.com/autodistill/autodistill-altcip) | βœ… | βœ… | 🚧 |
| Fuyu | 🚧 | 🚧 | 🚧 |
| Open Flamingo | 🚧 | 🚧 | 🚧 |
| GPT-4 |  |  |  |
| PaLM-2 |  |  |  |


## Roboflow Model Deployment Support

You can optionally deploy some Target Models trained using Autodistill on Roboflow. Deploying on Roboflow gives you access to a range of concise SDKs for running your model on the edge, from [roboflow.js](https://docs.roboflow.com/inference/web-browser) for web deployment to [NVIDIA Jetson](https://docs.roboflow.com/inference/nvidia-jetson) devices.

The following Autodistill Target Models are supported by Roboflow for deployment:

| model name | Supported? |
|:---:|:---:|
| YOLOv8 Object Detection | βœ… |
| YOLOv8 Instance Segmentation | βœ… |
| YOLOv5 Object Detection | βœ… |
| YOLOv5 Instance Segmentation | βœ… |
| YOLOv8 Classification |  |

## 🎬 Video Guides

<p align="left">
<a href="https://www.youtube.com/watch?v=gKTYMfwPo4M" title="Autodistill: Train YOLOv8 with ZERO Annotations"><img src="https://i.ytimg.com/vi/gKTYMfwPo4M/maxresdefault.jpg" alt="Autodistill: Train YOLOv8 with ZERO Annotations" width="300px" align="left" /></a>
<a href="https://youtu.be/oEQYStnF2l8"><strong>Autodistill: Train YOLOv8 with ZERO Annotations</strong></a>
<div><strong>Published: 8 June 2023</strong></div>
<br/>In this video, we will show you how to use a new library to train a YOLOv8 model to detect bottles moving on a conveyor line. Yes, that's right - zero annotation hours are required! We dive deep into Autodistill's functionality, covering topics from setting up your Python environment and preparing your images, to the thrilling automatic annotation of images. </p> 

## πŸ’‘ Community Resources

- [Distill Large Vision Models into Smaller, Efficient Models with Autodistill](https://blog.roboflow.com/autodistill/): Announcement post with written guide on how to use Autodistill
- [Comparing AI-Labeled Data to Human-Labeled Data](https://blog.roboflow.com/ai-vs-human-labeled-data/): A qualitative evaluation of Grounding DINO used with Autodistill across various tasks and domains.
- [How to Evaluate Autodistill Prompts with CVevals](https://blog.roboflow.com/autodistill-prompt-evaluation/): Evaluate Autodistill prompts.
- [Autodistill: Label and Train a Computer Vision Model in Under 20 Minutes](https://www.youtube.com/watch?v=M_QZ_Q0zT0k): Building a model to detect planes in under 20 minutes.
- [Train an Image Classification Model with No Labeling](https://blog.roboflow.com/train-classification-model-no-labeling/): Use Grounded SAM to automatically label images for training an Ultralytics YOLOv8 classification model.
- [Train a Segmentation Model with No Labeling](https://blog.roboflow.com/train-a-segmentation-model-no-labeling/): Use CLIP to automatically label images for training an Ultralytics YOLOv8 segmentation model.
- File a PR to add your own resources here!

## πŸ—ΊοΈ Roadmap

Apart from adding new models, there are several areas we plan to explore with `autodistill` including:

* πŸ’‘ Ontology creation & prompt engineering
* πŸ‘©β€πŸ’» Human in the loop support
* πŸ€” Model evaluation
* πŸ”„ Active learning
* πŸ’¬ Language tasks

## πŸ† Contributing

We love your input! Please see our [contributing guide](CONTRIBUTING.md) to get started. Thank you πŸ™ to all our contributors!

## πŸ‘©β€βš–οΈ License

The `autodistill` package is licensed under an [Apache 2.0 license](LICENSE). Each Base or Target Model plugin may use its own license, corresponding to the license of its underlying model. Please refer to the license in each plugin repo for more information.

## Frequently Asked Questions ❓

### What causes the `PytorchStreamReader failed reading zip archive: failed finding central directory` error?

This error occurs when PyTorch cannot load the weights for a model, typically because a download was interrupted or corrupted. Go into the `~/.cache/autodistill` directory and delete the folder associated with the model you are trying to load, then run your code again. The model weights will be downloaded from scratch; leave the download uninterrupted.
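
For example, on macOS/Linux (the folder name under `~/.cache/autodistill` varies by plugin; `<model-folder>` is a placeholder for the one associated with your model):

```bash
# remove the cached weights so they are re-downloaded on the next run
rm -rf ~/.cache/autodistill/<model-folder>
```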

## πŸ’» Explore More Roboflow Open Source Projects

|Project | Description|
|:---|:---|
|[supervision](https://roboflow.com/supervision) | General-purpose utilities for use in computer vision projects, from filtering and displaying predictions to object tracking to model evaluation. |
|[Autodistill](https://github.com/autodistill/autodistill) (this project) | Automatically label images for use in training computer vision models. |
|[Inference](https://github.com/roboflow/inference) | An easy-to-use, production-ready inference server for computer vision, supporting deployment of many popular model architectures and fine-tuned models. |
|[Notebooks](https://roboflow.com/notebooks) | Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone. |
|[Collect](https://github.com/roboflow/roboflow-collect) | Automated, intelligent data collection powered by CLIP. |

<br>

<div align="center">

  <div align="center">
      <a href="https://youtube.com/roboflow">
          <img
            src="https://media.roboflow.com/notebooks/template/icons/purple/youtube.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634652"
            width="3%"
          />
      </a>
      <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
      <a href="https://roboflow.com">
          <img
            src="https://media.roboflow.com/notebooks/template/icons/purple/roboflow-app.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949746649"
            width="3%"
          />
      </a>
      <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
      <a href="https://www.linkedin.com/company/roboflow-ai/">
          <img
            src="https://media.roboflow.com/notebooks/template/icons/purple/linkedin.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633691"
            width="3%"
          />
      </a>
      <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
      <a href="https://docs.roboflow.com">
          <img
            src="https://media.roboflow.com/notebooks/template/icons/purple/knowledge.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634511"
            width="3%"
          />
      </a>
      <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
      <a href="https://disuss.roboflow.com">
          <img
            src="https://media.roboflow.com/notebooks/template/icons/purple/forum.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633584"
            width="3%"
          />
      <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
      <a href="https://blog.roboflow.com">
          <img
            src="https://media.roboflow.com/notebooks/template/icons/purple/blog.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633605"
            width="3%"
          />
      </a>
      </a>
  </div>

</div>



            
