autodistill-llava


Name: autodistill-llava
Version: 0.1.0
Summary: LLaVA for use with Autodistill
Author: Roboflow
Requires Python: >=3.7
Upload time: 2023-10-16 13:05:00
Requirements: none recorded
            <div align="center">
  <p>
    <a align="center" href="" target="_blank">
      <img
        width="850"
        src="https://media.roboflow.com/open-source/autodistill/autodistill-banner.png"
      >
    </a>
  </p>
</div>

# Autodistill LLaVA Module

This repository contains the code supporting the LLaVA base model for use with [Autodistill](https://github.com/autodistill/autodistill).

[LLaVA](https://github.com/haotian-liu/LLaVA) is a multi-modal language model with object detection capabilities. You can use LLaVA with Autodistill for object detection. [Learn more about LLaVA 1.5](https://blog.roboflow.com/first-impressions-with-llava-1-5/), the most recent version of LLaVA at the time this package was released.

Read the full [Autodistill documentation](https://autodistill.github.io/autodistill/).

Read the [LLaVA Autodistill documentation](https://autodistill.github.io/autodistill/base_models/llava/).

## Installation

To use LLaVA with Autodistill, you need to install the following dependency:

```bash
pip3 install autodistill-llava
```

## Quickstart

```python
from autodistill.detection import CaptionOntology
from autodistill_llava import LLaVA

# define an ontology to map class names to our LLaVA prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = LLaVA(
    ontology=CaptionOntology(
        {
            "a forklift": "forklift"
        }
    )
)
base_model.label("./context_images", extension=".jpeg")
```
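
The labeled dataset produced by `label()` can then be used to train a smaller, faster target model, as described in the Autodistill documentation linked above. A minimal sketch, assuming the separate `autodistill-yolov8` package is installed and that `label()` wrote its output to the default `./context_images_labeled` folder (both assumptions, not part of this package):

```python
from autodistill_yolov8 import YOLOv8

# train a target model on the annotations generated by base_model.label()
target_model = YOLOv8("yolov8n.pt")
target_model.train("./context_images_labeled/data.yaml", epochs=200)
```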


## License

This model is licensed under an [Apache 2.0 License](LICENSE).

## 🏆 Contributing

We love your input! Please see the core Autodistill [contributing guide](https://github.com/autodistill/autodistill/blob/main/CONTRIBUTING.md) to get started. Thank you 🙏 to all our contributors!

            

Raw data

            {
    "_id": null,
    "home_page": "",
    "name": "autodistill-llava",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": "",
    "keywords": "",
    "author": "Roboflow",
    "author_email": "support@roboflow.com",
    "download_url": "https://files.pythonhosted.org/packages/c9/bd/a09288a79fae2986fdb03d371c9482b3237f87a4cce5b738f9e2789f476d/autodistill-llava-0.1.0.tar.gz",
    "platform": null,
    "description": "<div align=\"center\">\n  <p>\n    <a align=\"center\" href=\"\" target=\"_blank\">\n      <img\n        width=\"850\"\n        src=\"https://media.roboflow.com/open-source/autodistill/autodistill-banner.png\"\n      >\n    </a>\n  </p>\n</div>\n\n# Autodistill LLaVA Module\n\nThis repository contains the code supporting the LLaVA base model for use with [Autodistill](https://github.com/autodistill/autodistill).\n\n[LLaVA](https://github.com/haotian-liu/LLaVA) is a multi-modal language model with object detection capabilities.  You can use LLaVA with autodistill for object detection. [Learn more about LLaVA 1.5](https://blog.roboflow.com/first-impressions-with-llava-1-5/), the most recent version of LLaVA at the time of releasing this package.\n\nRead the full [Autodistill documentation](https://autodistill.github.io/autodistill/).\n\nRead the [LLaVA Autodistill documentation](https://autodistill.github.io/autodistill/base_models/llava/).\n\n## Installation\n\nTo use CLIP with autodistill, you need to install the following dependency:\n\n\n```bash\npip3 install autodistill-clip\n```\n\n## Quickstart\n\n```python\nfrom autodistill_llava import LLaVA\n\n# define an ontology to map class names to our LLaVA prompt\n# the ontology dictionary has the format {caption: class}\n# where caption is the prompt sent to the base model, and class is the label that will\n# be saved for that caption in the generated annotations\n# then, load the model\nbase_model = LLaVA(\n    ontology=CaptionOntology(\n        {\n            \"a forklift\": \"forklift\"\n        }\n    )\n)\nbase_model.label(\"./context_images\", extension=\".jpeg\")\n```\n\n\n## License\n\nThis model is licensed under an [Apache 2.0 License](LICENSE).\n\n## \ud83c\udfc6 Contributing\n\nWe love your input! Please see the core Autodistill [contributing guide](https://github.com/autodistill/autodistill/blob/main/CONTRIBUTING.md) to get started. Thank you \ud83d\ude4f to all our contributors!\n",
    "bugtrack_url": null,
    "license": "",
    "summary": "LLaVA for use with Autodistill",
    "version": "0.1.0",
    "project_urls": null,
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c64f692de445291ce22e5081101a3902e1647d2c82e1a09b46ce2ef0178b9569",
                "md5": "de29a3863b3b02762b9e4c1b99893548",
                "sha256": "84161f8c0f6eb16567b8ab237b37b19495e762116185f88a3c2225176773d4b9"
            },
            "downloads": -1,
            "filename": "autodistill_llava-0.1.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "de29a3863b3b02762b9e4c1b99893548",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 8993,
            "upload_time": "2023-10-16T13:04:57",
            "upload_time_iso_8601": "2023-10-16T13:04:57.572762Z",
            "url": "https://files.pythonhosted.org/packages/c6/4f/692de445291ce22e5081101a3902e1647d2c82e1a09b46ce2ef0178b9569/autodistill_llava-0.1.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c9bda09288a79fae2986fdb03d371c9482b3237f87a4cce5b738f9e2789f476d",
                "md5": "715ac3114565cbb967ce62977ebac4d2",
                "sha256": "ccfa4651a8c8efb920518d85618381297c90a19d4a28239cae103559e137eb91"
            },
            "downloads": -1,
            "filename": "autodistill-llava-0.1.0.tar.gz",
            "has_sig": false,
            "md5_digest": "715ac3114565cbb967ce62977ebac4d2",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 8657,
            "upload_time": "2023-10-16T13:05:00",
            "upload_time_iso_8601": "2023-10-16T13:05:00.210411Z",
            "url": "https://files.pythonhosted.org/packages/c9/bd/a09288a79fae2986fdb03d371c9482b3237f87a4cce5b738f9e2789f476d/autodistill-llava-0.1.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-10-16 13:05:00",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "autodistill-llava"
}
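
The record above closely mirrors what the public PyPI JSON API returns for this package. A minimal sketch of fetching it programmatically, assuming the `requests` package is installed:

```python
import requests

# query the PyPI JSON API for the autodistill-llava package record
resp = requests.get("https://pypi.org/pypi/autodistill-llava/json", timeout=10)
resp.raise_for_status()
data = resp.json()

print(data["info"]["version"])          # "0.1.0"
print(data["info"]["requires_python"])  # ">=3.7"
print([f["filename"] for f in data["urls"]])
```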
        