autodistill-clip

| Field | Value |
| --- | --- |
| Name | autodistill-clip |
| Version | 0.1.5 |
| Summary | CLIP module for use with Autodistill |
| Home page | https://github.com/autodistill/autodistill-clip |
| Author | Roboflow |
| Requires Python | >=3.7 |
| Uploaded | 2023-12-05 09:13:50 |
            <div align="center">
  <p>
    <a align="center" href="" target="_blank">
      <img
        width="850"
        src="https://media.roboflow.com/open-source/autodistill/autodistill-banner.png"
      >
    </a>
  </p>
</div>

# Autodistill CLIP Module

This repository contains the code supporting the CLIP base model for use with [Autodistill](https://github.com/autodistill/autodistill).

[CLIP](https://github.com/openai/CLIP), developed by OpenAI, is a computer vision model trained on pairs of images and text captions. You can use CLIP with Autodistill for zero-shot image classification.
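
Under the hood, CLIP embeds an image and a set of text prompts into the same vector space and scores the image against each prompt. Here is a minimal sketch of that comparison using OpenAI's `clip` package directly (the image path and prompts are placeholders); Autodistill wraps this behind the `CaptionOntology` interface:

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# encode the image and the candidate prompts into CLIP's shared embedding space
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["person", "a forklift"]).to(device)

with torch.no_grad():
    # logits_per_image holds the scaled cosine similarity between the
    # image embedding and each prompt embedding
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # one probability per prompt; the highest-scoring prompt wins
```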

Read the full [Autodistill documentation](https://autodistill.github.io/autodistill/).

Read the [CLIP Autodistill documentation](https://autodistill.github.io/autodistill/base_models/clip/).

## Installation

To use CLIP with Autodistill, you need to install this package:

```bash
pip3 install autodistill-clip
```
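
The package wraps OpenAI's CLIP, which depends on PyTorch, so the first model load will download weights. As a quick, hypothetical smoke test to confirm the install worked:

```bash
python3 -c "from autodistill_clip import CLIP; print('import ok')"
```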

## Quickstart

```python
from autodistill_clip import CLIP
from autodistill.detection import CaptionOntology

# define an ontology that maps CLIP prompts to class names
# the ontology dictionary has the format {caption: class}, where caption is
# the prompt sent to the base model and class is the label saved for that
# caption in the generated annotations
# then, load the model
base_model = CLIP(
    ontology=CaptionOntology(
        {
            "person": "person",
            "a forklift": "forklift"
        }
    )
)

# classify a single image
results = base_model.predict("./context_images/test.jpg")

print(results)

# auto-label every .jpeg image in ./context_images
base_model.label("./context_images", extension=".jpeg")
```
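
If you want to check how the ontology splits prompts from labels, `CaptionOntology` exposes both sides of the mapping. A small sketch using the same ontology as above:

```python
from autodistill.detection import CaptionOntology

ontology = CaptionOntology(
    {
        "person": "person",
        "a forklift": "forklift"
    }
)

# prompts() returns the captions sent to CLIP;
# classes() returns the labels written to the generated annotations
print(ontology.prompts())  # ['person', 'a forklift']
print(ontology.classes())  # ['person', 'forklift']
```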

## License

The code in this repository is licensed under an [MIT license](LICENSE.md).

## 🏆 Contributing

We love your input! Please see the core Autodistill [contributing guide](https://github.com/autodistill/autodistill/blob/main/CONTRIBUTING.md) to get started. Thank you 🙏 to all our contributors!

            
