ammico-lavis

- Name: ammico-lavis
- Version: 1.0.2.2
- Summary: LAVIS - A One-stop Library for Language-Vision Intelligence
- Author: Dongxu Li, Junnan Li, Hung Le, Guangsen Wang, Silvio Savarese, Steven C.H. Hoi
- Requires Python: >=3.7.0
- License: 3-Clause BSD
- Keywords: vision-language, multimodal, image captioning, generative AI, deep learning, library, PyTorch
- Upload time: 2023-12-12 12:15:06

# AMMICO-LAVIS

This is a fork of [LAVIS](https://github.com/salesforce/LAVIS) (release 1.0.2) that supports ARM-based Macs (M1, M2, and M3). On macOS it depends on [eva-decord](https://github.com/georgia-tech-db/eva-decord) instead of [decord](https://github.com/dmlc/decord), which is used on other systems.
The fork is published on PyPI as `ammico-lavis` and supports [transformers](https://github.com/huggingface/transformers)>=4.25.0,<4.27.

<p align="center">
    <br>
    <img src="docs/_static/logo_final.png" width="400"/>
    <br>
</p>

<div align="center">
  <a href="https://github.com/salesforce/LAVIS/releases"><img alt="Latest Release" src="https://img.shields.io/github/release/salesforce/LAVIS.svg" /></a>
  <a href="https://opensource.salesforce.com/LAVIS/index.html">
  <img alt="docs" src="https://github.com/salesforce/LAVIS/actions/workflows/docs.yaml/badge.svg"/>
  <a href="https://opensource.org/licenses/BSD-3-Clause">
  <img alt="license" src="https://img.shields.io/badge/License-BSD_3--Clause-blue.svg"/>
  </a> 
  <a href="https://pepy.tech/project/salesforce-lavis">
  <img alt="Downloads" src="https://pepy.tech/badge/salesforce-lavis">
  </a>
</div>

<div align="center">
<a href="https://opensource.salesforce.com/LAVIS//latest/benchmark.html">Benchmark</a>,
<a href="https://arxiv.org/abs/2209.09019">Technical Report</a>,
<a href="https://opensource.salesforce.com/LAVIS//latest/index.html">Documentation</a>,
<a href="https://github.com/salesforce/LAVIS/tree/main/examples">Jupyter Notebook Examples</a>,
<a href="https://blog.salesforceairesearch.com/lavis-language-vision-library/">Blog</a>
</div>

# LAVIS - A Library for Language-Vision Intelligence

## What's New: 🎉 
  * [Model Release] Jan 2023, released implementation of **BLIP-2** <br>
  [Paper](https://arxiv.org/abs/2301.12597), [Project Page](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Salesforce/BLIP2), [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/salesforce/LAVIS/blob/main/examples/blip2_instructed_generation.ipynb)
  > A generic and efficient pre-training strategy that bootstraps vision-language pretraining from frozen pretrained vision models and large language models (LLMs). BLIP-2 beats Flamingo on zero-shot VQAv2 (**65.0** vs **56.3**) and establishes a new state of the art on zero-shot captioning (NoCaps **121.6** CIDEr vs the previous best **113.2**). In addition, equipped with powerful LLMs (e.g. OPT, FlanT5), BLIP-2 also unlocks new **zero-shot instructed vision-to-language generation** capabilities for various interesting applications!
  * Jan 2023, LAVIS is now available on [PyPI](https://pypi.org/project/salesforce-lavis/) for installation!
  * [Model Release] Dec 2022, released implementation of **Img2prompt-VQA** <br>
  [Paper](https://arxiv.org/pdf/2212.10846.pdf), [Project Page](https://github.com/salesforce/LAVIS/tree/main/projects/img2prompt-vqa), [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/salesforce/LAVIS/blob/main/projects/img2prompt-vqa/img2prompt_vqa.ipynb)
  > A plug-and-play module that enables off-the-shelf use of Large Language Models (LLMs) for visual question answering (VQA). Img2Prompt-VQA surpasses Flamingo on zero-shot VQA on VQAv2 (61.9 vs 56.3), while in contrast requiring no end-to-end training! 
  * [Model Release] Oct 2022, released implementation of **PNP-VQA** (**EMNLP Findings 2022**, _"Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training"_, by Anthony T.M.H. et al), <br> 
  [Paper](https://arxiv.org/abs/2210.08773), [Project Page](https://github.com/salesforce/LAVIS/tree/main/projects/pnp-vqa), [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/salesforce/LAVIS/blob/main/projects/pnp-vqa/pnp_vqa.ipynb)
  > A modular zero-shot VQA framework that requires no training of pretrained language models (PLMs), achieving SoTA zero-shot VQA performance.
    
## Table of Contents
  - [Introduction](#introduction)
  - [Installation](#installation)
  - [Getting Started](#getting-started)
    - [Model Zoo](#model-zoo)
    - [Image Captioning](#image-captioning)
    - [Visual question answering (VQA)](#visual-question-answering-vqa)
    - [Unified Feature Extraction Interface](#unified-feature-extraction-interface)
    - [Load Datasets](#load-datasets)
  - [Jupyter Notebook Examples](#jupyter-notebook-examples)
  - [Resources and Tools](#resources-and-tools)
  - [Documentation](#documentation)
  - [Ethical and Responsible Use](#ethical-and-responsible-use)
  - [Technical Report and Citing LAVIS](#technical-report-and-citing-lavis)
  - [License](#license)

## Introduction
LAVIS is a Python deep learning library for LAnguage-and-VISion intelligence research and applications. This library aims to provide engineers and researchers with a one-stop solution to rapidly develop models for their specific multimodal scenarios, and benchmark them across standard and customized datasets.
It features a unified interface design to access
- **10+** tasks (retrieval, captioning, visual question answering, multimodal classification, etc.);
- **20+** datasets (COCO, Flickr, NoCaps, Conceptual Captions, SBU, etc.);
- **30+** pretrained weights of state-of-the-art foundation language-vision models and their task-specific adaptations, including [ALBEF](https://arxiv.org/pdf/2107.07651.pdf),
[BLIP](https://arxiv.org/pdf/2201.12086.pdf), [ALPRO](https://arxiv.org/pdf/2112.09583.pdf), [CLIP](https://arxiv.org/pdf/2103.00020.pdf).
<p align="center">
    <br>
    <img src="assets/demo-6.png"/>
    <br>
</p>

Key features of LAVIS include:

- **Unified and Modular Interface**: makes it easy to leverage and repurpose existing modules (datasets, models, preprocessors), and to add new ones.

- **Easy Off-the-shelf Inference and Feature Extraction**: readily available pre-trained models let you take advantage of state-of-the-art multimodal understanding and generation capabilities on your own data.

- **Reproducible Model Zoo and Training Recipes**: easily replicate and extend state-of-the-art models on existing and new tasks.

- **Dataset Zoo and Automatic Downloading Tools**: it can be a hassle to prepare the many language-vision datasets. LAVIS provides automatic downloading scripts to help prepare a large variety of datasets and their annotations.


The following table shows the tasks, datasets, and models currently supported in our library. This is a continuing effort, and we are working on further growing the list.

|                  Tasks                   |     Supported Models     |             Supported Datasets             |
| :--------------------------------------: | :----------------------: | :----------------------------------------: |
|         Image-text Pre-training          |       ALBEF, BLIP        | COCO, VisualGenome, SBU, ConceptualCaptions |
|           Image-text Retrieval           |    ALBEF, BLIP, CLIP     |              COCO, Flickr30k               |
|           Text-image Retrieval           |    ALBEF, BLIP, CLIP     |              COCO, Flickr30k               |
|        Visual Question Answering         |       ALBEF, BLIP        |           VQAv2, OKVQA, A-OKVQA            |
|             Image Captioning             |           BLIP           |                COCO, NoCaps                |
|           Image Classification           |           CLIP           |                  ImageNet                  |
| Natural Language Visual Reasoning (NLVR) |       ALBEF, BLIP        |                   NLVR2                    |
|          Visual Entailment (VE)          |          ALBEF           |                  SNLI-VE                   |
|             Visual Dialogue              |           BLIP           |                  VisDial                   |
|           Video-text Retrieval           |       BLIP, ALPRO        |               MSRVTT, DiDeMo               |
|           Text-video Retrieval           |       BLIP, ALPRO        |               MSRVTT, DiDeMo               |
|    Video Question Answering (VideoQA)    |       BLIP, ALPRO        |                MSRVTT, MSVD                |
|              Video Dialogue              |         VGD-GPT          |                    AVSD                    |
|      Multimodal Feature Extraction       | ALBEF, CLIP, BLIP, ALPRO |                 customized                 |
|         Text-to-image Generation         |      [COMING SOON]       |                                            |

## Installation

1. (Optional) Create a conda environment:

```bash
conda create -n lavis python=3.8
conda activate lavis
```

2. Install from [PyPI](https://pypi.org/project/salesforce-lavis/):
```bash
pip install salesforce-lavis
```
    
3. Or, for development, build from source:

```bash
git clone https://github.com/salesforce/LAVIS.git
cd LAVIS
pip install -e .
```

## Getting Started
### Model Zoo
The model zoo summarizes the models supported in LAVIS. To view it:
```python
from lavis.models import model_zoo
print(model_zoo)
# ==================================================
# Architectures                  Types
# ==================================================
# albef_classification           ve
# albef_feature_extractor        base
# albef_nlvr                     nlvr
# albef_pretrain                 base
# albef_retrieval                coco, flickr
# albef_vqa                      vqav2
# alpro_qa                       msrvtt, msvd
# alpro_retrieval                msrvtt, didemo
# blip_caption                   base_coco, large_coco
# blip_classification            base
# blip_feature_extractor         base
# blip_nlvr                      nlvr
# blip_pretrain                  base
# blip_retrieval                 coco, flickr
# blip_vqa                       vqav2, okvqa, aokvqa
# clip_feature_extractor         ViT-B-32, ViT-B-16, ViT-L-14, ViT-L-14-336, RN50
# clip                           ViT-B-32, ViT-B-16, ViT-L-14, ViT-L-14-336, RN50
# gpt_dialogue                   base
```
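
If you only need the model itself, without its preprocessors, LAVIS also provides a ``load_model()`` helper. A minimal sketch, assuming it accepts the same ``name``, ``model_type``, ``is_eval`` and ``device`` arguments as ``load_model_and_preprocess()``:

```python
import torch
from lavis.models import load_model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# load only the BLIP captioning model, without the associated preprocessors
model = load_model(name="blip_caption", model_type="base_coco", is_eval=True, device=device)
```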

Let’s see how to use models in LAVIS to perform inference on example data. We first load a sample image from disk.

```python
import torch
from PIL import Image
# setup device to use
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# load sample image
raw_image = Image.open("docs/_static/merlion.png").convert("RGB")
```

This example image shows [Merlion Park](https://en.wikipedia.org/wiki/Merlion) ([source](https://theculturetrip.com/asia/singapore/articles/what-exactly-is-singapores-merlion-anyway/)), a landmark in Singapore.


### Image Captioning
In this example, we use the BLIP model to generate a caption for the image. To make inference even easier, we also associate each
pre-trained model with its preprocessors (transforms), accessed via ``load_model_and_preprocess()``.

```python
import torch
from lavis.models import load_model_and_preprocess
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.
# this also loads the associated image processors
model, vis_processors, _ = load_model_and_preprocess(name="blip_caption", model_type="base_coco", is_eval=True, device=device)
# preprocess the image
# vis_processors stores image transforms for "train" and "eval" (validation / testing / inference)
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
# generate caption
model.generate({"image": image})
# ['a large fountain spewing water into the air']
```
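
``generate()`` uses beam search by default. The sketch below samples several diverse captions instead; it assumes the ``use_nucleus_sampling`` and ``num_captions`` keyword arguments from the BLIP captioning examples are available:

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)
raw_image = Image.open("docs/_static/merlion.png").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# sample three captions with nucleus sampling instead of the default beam search
captions = model.generate({"image": image}, use_nucleus_sampling=True, num_captions=3)
print(captions)
```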

### Visual question answering (VQA)
The BLIP model can answer free-form questions about images in natural language.
To access the VQA model, simply replace the ``name`` and ``model_type`` arguments
passed to ``load_model_and_preprocess()``.

```python
from lavis.models import load_model_and_preprocess
model, vis_processors, txt_processors = load_model_and_preprocess(name="blip_vqa", model_type="vqav2", is_eval=True, device=device)
# ask a random question.
question = "Which city is this photo taken?"
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
question = txt_processors["eval"](question)
model.predict_answers(samples={"image": image, "text_input": question}, inference_method="generate")
# ['singapore']
```
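
Since the model operates on batches, several questions about the same image can be answered in a single call by repeating the image tensor along the batch dimension. A sketch under that assumption:

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, txt_processors = load_model_and_preprocess(
    name="blip_vqa", model_type="vqav2", is_eval=True, device=device
)
raw_image = Image.open("docs/_static/merlion.png").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

questions = ["Which city is this photo taken?", "Is the water calm or rough?"]
questions = [txt_processors["eval"](q) for q in questions]
# repeat the image along the batch dimension so it matches the number of questions
batch = {"image": image.repeat(len(questions), 1, 1, 1), "text_input": questions}
print(model.predict_answers(samples=batch, inference_method="generate"))
```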

### Unified Feature Extraction Interface

LAVIS provides a unified interface to extract features from each architecture. 
To extract features, we load the feature extractor variants of each model.
The multimodal feature can be used for multimodal classification.
The low-dimensional unimodal features can be used to compute cross-modal similarity.


```python
from lavis.models import load_model_and_preprocess
model, vis_processors, txt_processors = load_model_and_preprocess(name="blip_feature_extractor", model_type="base", is_eval=True, device=device)
caption = "a large fountain spewing water into the air"
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
text_input = txt_processors["eval"](caption)
sample = {"image": image, "text_input": [text_input]}

features_multimodal = model.extract_features(sample)
print(features_multimodal.multimodal_embeds.shape)
# torch.Size([1, 12, 768]), use features_multimodal[:,0,:] for multimodal classification tasks

features_image = model.extract_features(sample, mode="image")
features_text = model.extract_features(sample, mode="text")
print(features_image.image_embeds.shape)
# torch.Size([1, 197, 768])
print(features_text.text_embeds.shape)
# torch.Size([1, 12, 768])

# low-dimensional projected features
print(features_image.image_embeds_proj.shape)
# torch.Size([1, 197, 256])
print(features_text.text_embeds_proj.shape)
# torch.Size([1, 12, 256])
similarity = features_image.image_embeds_proj[:,0,:] @ features_text.text_embeds_proj[:,0,:].t()
print(similarity)
# tensor([[0.2622]])
```
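
The projected features can also be used to rank several candidate captions against one image. A minimal sketch built only on the ``extract_features()`` calls shown above; the candidate list is illustrative:

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, txt_processors = load_model_and_preprocess(
    name="blip_feature_extractor", model_type="base", is_eval=True, device=device
)
raw_image = Image.open("docs/_static/merlion.png").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

candidates = [
    "a large fountain spewing water into the air",
    "a plate of food on a table",
    "a dog running on the beach",
]

# image feature taken from the [CLS] position of the projected embeddings
sample = {"image": image, "text_input": [txt_processors["eval"](candidates[0])]}
image_feat = model.extract_features(sample, mode="image").image_embeds_proj[:, 0, :]

scores = []
for caption in candidates:
    sample = {"image": image, "text_input": [txt_processors["eval"](caption)]}
    text_feat = model.extract_features(sample, mode="text").text_embeds_proj[:, 0, :]
    # the projected features are unit-normalized, so the dot product is a cosine similarity
    scores.append((image_feat @ text_feat.t()).item())

# the fountain caption should come out on top
print(sorted(zip(scores, candidates), reverse=True))
```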

### Load Datasets
LAVIS supports a wide variety of common language-vision datasets and provides [automatic download tools](https://opensource.salesforce.com/LAVIS//latest/benchmark) to help download and organize them. To list the available datasets, use the following code:

```python
from lavis.datasets.builders import dataset_zoo
dataset_names = dataset_zoo.get_names()
print(dataset_names)
# ['aok_vqa', 'coco_caption', 'coco_retrieval', 'coco_vqa', 'conceptual_caption_12m',
#  'conceptual_caption_3m', 'didemo_retrieval', 'flickr30k', 'imagenet', 'laion2B_multi',
#  'msrvtt_caption', 'msrvtt_qa', 'msrvtt_retrieval', 'msvd_caption', 'msvd_qa', 'nlvr',
#  'nocaps', 'ok_vqa', 'sbu_caption', 'snli_ve', 'vatex_caption', 'vg_caption', 'vg_vqa']
```
After downloading the images, we can use ``load_dataset()`` to obtain the dataset.
```python
from lavis.datasets.builders import load_dataset
coco_dataset = load_dataset("coco_caption")
print(coco_dataset.keys())
# dict_keys(['train', 'val', 'test'])
print(len(coco_dataset["train"]))
# 566747
print(coco_dataset["train"][0])
# {'image': <PIL.Image.Image image mode=RGB size=640x480>,
#  'text_input': 'A woman wearing a net on her head cutting a cake. ',
#  'image_id': 0}
```
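
Each split behaves like a regular map-style PyTorch dataset, so individual examples can be inspected by index. A small sketch using only the fields shown above:

```python
from lavis.datasets.builders import load_dataset

coco_dataset = load_dataset("coco_caption")
train_split = coco_dataset["train"]

# print the image id and caption of the first few training examples
for i in range(3):
    sample = train_split[i]
    print(i, sample["image_id"], sample["text_input"])
```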

If you already host a local copy of the dataset, you can pass in the ``vis_path`` argument to change the default location to load images.

```python
coco_dataset = load_dataset("coco_caption", vis_path=YOUR_LOCAL_PATH)
```

## Jupyter Notebook Examples
See [examples](https://github.com/salesforce/LAVIS/tree/main/examples) for more inference examples, e.g. captioning, feature extraction, VQA, GradCam, zero-shot classification.

## Resources and Tools
- **Benchmarks**: see [Benchmark](https://opensource.salesforce.com/LAVIS//latest/benchmark) for instructions to evaluate and train supported models.
- **Dataset Download and Browsing**: see [Dataset Download](https://opensource.salesforce.com/LAVIS//latest/benchmark) for instructions and automatic tools to download common language-vision datasets.
- **GUI Demo**: to run the demo locally, run `bash run_scripts/run_demo.sh` and then follow the instructions in the prompt to view it in a browser. A web demo is coming soon.


## Documentation
For more details and advanced usage, please refer to the
[documentation](https://opensource.salesforce.com/LAVIS//latest/index.html#).

## Ethical and Responsible Use
We note that models in LAVIS provide no guarantees on their multimodal abilities; incorrect or biased predictions may be observed. In particular, the datasets and pretrained models utilized in LAVIS may contain socioeconomic biases which could result in misclassification and other unwanted behaviors such as offensive or inappropriate speech. We strongly recommend that users review the pre-trained models and overall system in LAVIS before practical adoption. We plan to improve the library by investigating and mitigating these potential biases and
inappropriate behaviors in the future.


## Technical Report and Citing LAVIS
You can find more details in our [technical report](https://arxiv.org/abs/2209.09019).

If you're using LAVIS in your research or applications, please cite using this BibTeX:
```bibtex
@misc{li2022lavis,
      title={LAVIS: A Library for Language-Vision Intelligence}, 
      author={Dongxu Li and Junnan Li and Hung Le and Guangsen Wang and Silvio Savarese and Steven C. H. Hoi},
      year={2022},
      eprint={2209.09019},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Contact us
If you have any questions, comments or suggestions, please do not hesitate to contact us at lavis@salesforce.com.

## License
[BSD 3-Clause License](LICENSE.txt)

            
