clip-anytorch

- Name: clip-anytorch
- Version: 2.6.0
- Home page: https://github.com/rom1504/CLIP
- Author: OpenAI
- Upload time: 2024-01-13 15:21:35
- Requirements: none recorded
# CLIP

[[Blog]](https://openai.com/blog/clip/) [[Paper]](https://arxiv.org/abs/2103.00020) [[Model Card]](model-card.md) [[Colab]](https://colab.research.google.com/github/openai/clip/blob/master/notebooks/Interacting_with_CLIP.ipynb)

CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet given an image, without directly optimizing for the task, similar to the zero-shot capabilities of GPT-2 and GPT-3. We found CLIP matches the performance of the original ResNet50 on ImageNet “zero-shot”, without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision.


This repo is a fork that maintains a PyPI package for CLIP. Changes from the main repo:
* remove the strict torch dependency
* add a [truncate_text](https://github.com/openai/CLIP/pull/126) option to `tokenize` so that longer sequences can be handled

Run `pip install clip-anytorch` to install this package.
If you are not using torch 1.7.1, you will need to disable JIT when loading the model: `model, preprocess = clip.load("ViT-B/32", device=device, jit=False)`.
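
A minimal sketch combining both fork-specific points (the `truncate` keyword name is taken from the linked upstream PR and is an assumption here):

```python
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# jit=False is needed whenever the installed torch is not 1.7.1
model, preprocess = clip.load("ViT-B/32", device=device, jit=False)

# truncate=True (keyword name assumed from the PR linked above) cuts captions
# that exceed the 77-token context length instead of raising an error
tokens = clip.tokenize(["a very long caption " * 20], truncate=True).to(device)
```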

One benefit of not depending on an old torch version is that installing CLIP on Colab is very fast; try [this colab](https://colab.research.google.com/drive/1q-YgGtk5IeU3-uTvfKF63vI8ntU9k3Cw#scrollTo=ctt1dsV18smF) to see it for yourself.

## Approach

![CLIP](CLIP.png)



## Installation

### With pip

`pip install clip-anytorch`. Yes, that's it!

### With conda

First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, plus a few small additional dependencies, and then install this package. On a CUDA GPU machine, the following will do the trick:

```bash
$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ pip install clip-anytorch
```

Replace `cudatoolkit=11.0` above with the appropriate CUDA version for your machine, or with `cpuonly` when installing on a machine without a GPU.

## Usage

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # prints: [[0.9927937  0.00421068 0.00299572]]
```


## API

The CLIP module `clip` provides the following methods:

#### `clip.available_models()`

Returns the names of the available CLIP models.
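
For example (the exact list depends on the installed version):

```python
import clip

# Model names accepted by clip.load()
print(clip.available_models())
# e.g. ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32', 'ViT-B/16', ...]
```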

#### `clip.load(name, device=..., jit=False)`

Returns the model and the TorchVision transform needed by the model, specified by the model name returned by `clip.available_models()`. It will download the model as necessary. The `name` argument can also be a path to a local checkpoint.

The device on which to run the model can be optionally specified; the default is to use the first CUDA device if one is available, otherwise the CPU. When `jit` is `False`, a non-JIT version of the model is loaded.
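
For illustration (the local checkpoint path below is hypothetical):

```python
import clip

# Default device selection: first CUDA device if available, otherwise CPU
model, preprocess = clip.load("ViT-B/32")

# A local checkpoint path also works, and jit=False loads the regular (non-JIT) model:
# model, preprocess = clip.load("/path/to/ViT-B-32.pt", device="cpu", jit=False)
```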

#### `clip.tokenize(text: Union[str, List[str]], context_length=77)`

Returns a LongTensor containing the tokenized sequences of the given text input(s). This can be used as the input to the model.
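
A quick illustration of the returned shape:

```python
import clip

tokens = clip.tokenize(["a diagram", "a dog", "a cat"])
print(tokens.shape)  # torch.Size([3, 77]) -- one row per input, context_length columns
```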

---

The model returned by `clip.load()` supports the following methods:

#### `model.encode_image(image: Tensor)`

Given a batch of images, returns the image features encoded by the vision portion of the CLIP model.

#### `model.encode_text(text: Tensor)`

Given a batch of text tokens, returns the text features encoded by the language portion of the CLIP model.

#### `model(image: Tensor, text: Tensor)`

Given a batch of images and a batch of text tokens, returns two Tensors, containing the logit scores corresponding to each image and text input. The values are cosine similarities between the corresponding image and text features, times 100.
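
The same scores can be reproduced from the encoders directly; a minimal sketch, assuming the model's learned `logit_scale` parameter (roughly 100 in the released weights, which is where the "times 100" above comes from):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device, jit=False)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    logits_per_image, logits_per_text = model(image, text)

    # Same computation by hand: cosine similarity of L2-normalized features,
    # scaled by the learned temperature
    img = model.encode_image(image)
    txt = model.encode_text(text)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    manual = model.logit_scale.exp() * img @ txt.t()

print(torch.allclose(logits_per_image.float(), manual.float(), atol=1e-2))  # True, up to precision
```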



## More Examples

### Zero-Shot Prediction

The code below performs zero-shot prediction using CLIP, as shown in Appendix B in the paper. This example takes an image from the [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and predicts the most likely labels among the 100 textual labels from the dataset.

```python
import os
import clip
import torch
from torchvision.datasets import CIFAR100

# Load the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load('ViT-B/32', device)

# Download the dataset
cifar100 = CIFAR100(root=os.path.expanduser("~/.cache"), download=True, train=False)

# Prepare the inputs
image, class_id = cifar100[3637]
image_input = preprocess(image).unsqueeze(0).to(device)
text_inputs = torch.cat([clip.tokenize(f"a photo of a {c}") for c in cifar100.classes]).to(device)

# Calculate features
with torch.no_grad():
    image_features = model.encode_image(image_input)
    text_features = model.encode_text(text_inputs)

# Pick the top 5 most similar labels for the image
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
values, indices = similarity[0].topk(5)

# Print the result
print("\nTop predictions:\n")
for value, index in zip(values, indices):
    print(f"{cifar100.classes[index]:>16s}: {100 * value.item():.2f}%")
```

The output will look like the following (the exact numbers may be slightly different depending on the compute device):

```
Top predictions:

           snake: 65.31%
          turtle: 12.29%
    sweet_pepper: 3.83%
          lizard: 1.88%
       crocodile: 1.75%
```

Note that this example uses the `encode_image()` and `encode_text()` methods, which return the encoded features of the given inputs.


### Linear-probe evaluation

The example below uses [scikit-learn](https://scikit-learn.org/) to perform logistic regression on image features.

```python
import os
import clip
import torch

import numpy as np
from sklearn.linear_model import LogisticRegression
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR100
from tqdm import tqdm

# Load the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load('ViT-B/32', device)

# Load the dataset
root = os.path.expanduser("~/.cache")
train = CIFAR100(root, download=True, train=True, transform=preprocess)
test = CIFAR100(root, download=True, train=False, transform=preprocess)


def get_features(dataset):
    all_features = []
    all_labels = []

    with torch.no_grad():
        for images, labels in tqdm(DataLoader(dataset, batch_size=100)):
            features = model.encode_image(images.to(device))

            all_features.append(features)
            all_labels.append(labels)

    return torch.cat(all_features).cpu().numpy(), torch.cat(all_labels).cpu().numpy()

# Calculate the image features
train_features, train_labels = get_features(train)
test_features, test_labels = get_features(test)

# Perform logistic regression
classifier = LogisticRegression(random_state=0, C=0.316, max_iter=1000, verbose=1)
classifier.fit(train_features, train_labels)

# Evaluate using the logistic regression classifier
predictions = classifier.predict(test_features)
accuracy = np.mean((test_labels == predictions).astype(float)) * 100.
print(f"Accuracy = {accuracy:.3f}")
```

Note that the `C` value should be determined via a hyperparameter sweep using a validation split.
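
A minimal sketch of such a sweep, reusing `train_features`/`train_labels` from the example above and holding out a validation split (the `C` grid here is illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hold out part of the training features as a validation set
X_train, X_val, y_train, y_val = train_test_split(
    train_features, train_labels, test_size=0.1, random_state=0
)

best_C, best_acc = None, -1.0
for C in np.logspace(-3, 3, 7):  # illustrative grid; refine around the best value
    clf = LogisticRegression(random_state=0, C=C, max_iter=1000)
    clf.fit(X_train, y_train)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_C, best_acc = C, acc

print(f"Best C = {best_C:.3g} (validation accuracy {100 * best_acc:.2f}%)")
```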


## See Also

* [OpenCLIP](https://github.com/mlfoundations/open_clip): includes larger and independently trained CLIP models up to ViT-G/14
* [Hugging Face implementation of CLIP](https://huggingface.co/docs/transformers/model_doc/clip): for easier integration with the HF ecosystem



            
