Package: clip-interrogator
Version: 0.6.0
Summary: Generate a prompt from an image
Home page: https://github.com/pharmapsychotic/clip-interrogator
Author: pharmapsychotic
License: MIT
Keywords: blip, clip, prompt-engineering, stable-diffusion, text-to-image
Requirements: torch (>=1.13.0), torchvision, Pillow, requests, safetensors, tqdm, open_clip_torch, accelerate, transformers (>=4.27.1)
Uploaded: 2023-03-20 03:47:53

# clip-interrogator

*Want to figure out what a good prompt might be to create new images like an existing one? The **CLIP Interrogator** is here to get you answers!*

## Run it!

🆕 Now available as a [Stable Diffusion Web UI Extension](https://github.com/pharmapsychotic/clip-interrogator-ext)! 🆕

<br>

Run Version 2 on Colab, HuggingFace, and Replicate!

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/main/clip_interrogator.ipynb) [![Generic badge](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue.svg)](https://huggingface.co/spaces/pharma/CLIP-Interrogator) [![Replicate](https://replicate.com/pharmapsychotic/clip-interrogator/badge)](https://replicate.com/pharmapsychotic/clip-interrogator) [![Lambda](https://img.shields.io/badge/%CE%BB-Lambda-blue)](https://cloud.lambdalabs.com/demos/ml/CLIP-Interrogator)

<br>


Version 1 is still available on Colab for comparing different CLIP models

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/v1/clip_interrogator.ipynb) 


## About

The **CLIP Interrogator** is a prompt engineering tool that combines OpenAI's [CLIP](https://openai.com/blog/clip/) and Salesforce's [BLIP](https://blog.salesforceairesearch.com/blip-bootstrapping-language-image-pretraining/) to optimize text prompts to match a given image. Use the resulting prompts with text-to-image models like [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on [DreamStudio](https://beta.dreamstudio.ai/) to create cool art!


## Using as a library

Create and activate a Python virtual environment
```bash
python3 -m venv ci_env
# on Linux/macOS:
source ci_env/bin/activate
# on Windows:
.\ci_env\Scripts\activate
```

Install with pip
```bash
# install torch with GPU support, for example:
pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu117

# install clip-interrogator (0.6.0 is the version documented here)
pip install clip-interrogator==0.6.0
```

You can then use it in your script
```python
from PIL import Image
from clip_interrogator import Config, Interrogator

image_path = "example.jpg"  # path to the image you want a prompt for
image = Image.open(image_path).convert('RGB')
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
print(ci.interrogate(image))
```

CLIP Interrogator uses OpenCLIP, which supports many different pretrained CLIP models. For the best prompts for Stable Diffusion 1.X, use `ViT-L-14/openai` for `clip_model_name`. For Stable Diffusion 2.0, use `ViT-H-14/laion2b_s32b_b79k`.

## Configuration

The `Config` object lets you configure CLIP Interrogator's processing.
* `clip_model_name`: which of the OpenCLIP pretrained CLIP models to use
* `cache_path`: path where precomputed text embeddings are saved
* `download_cache`: when True, downloads the precomputed embeddings from Hugging Face
* `chunk_size`: batch size for CLIP; use a smaller value for lower VRAM
* `quiet`: when True, no progress bars or text output are displayed

On systems with low VRAM you can call `config.apply_low_vram_defaults()` to reduce the amount of VRAM needed (at the cost of some speed and quality). The default settings use about 6.3GB of VRAM and the low VRAM settings use about 2.7GB.
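A minimal sketch putting these options together (`quiet=True` here is just illustrative, not required):

```python
from clip_interrogator import Config, Interrogator

config = Config(clip_model_name="ViT-L-14/openai", quiet=True)
config.apply_low_vram_defaults()  # ~2.7GB of VRAM instead of the ~6.3GB default
ci = Interrogator(config)
```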

See [run_cli.py](https://github.com/pharmapsychotic/clip-interrogator/blob/main/run_cli.py) and [run_gradio.py](https://github.com/pharmapsychotic/clip-interrogator/blob/main/run_gradio.py) for more examples of using the Config and Interrogator classes.


## Ranking against your own list of terms

```python
from PIL import Image
from clip_interrogator import Config, Interrogator, LabelTable, load_list

# skip the BLIP caption model; ranking only needs CLIP
ci = Interrogator(Config(blip_model_type=None))

image_path = "example.jpg"  # path to the image to rank against
image = Image.open(image_path).convert('RGB')
table = LabelTable(load_list('terms.txt'), 'terms', ci)
best_match = table.rank(ci.image_to_features(image), top_count=1)[0]
print(best_match)
```
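Since `rank` returns a list, you can also ask for several candidates at once; a short sketch continuing the example above (assuming `terms.txt` contains one term per line):

```python
# print the five best-matching terms instead of just the first
for term in table.rank(ci.image_to_features(image), top_count=5):
    print(term)
```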

            
