mantis-vl

Name: mantis-vl
Version: 0.0.4
Home page: https://github.com/TIGER-AI-Lab/Mantis
Summary: Official code for "MANTIS: Interleaved Multi-Image Instruction Tuning"
Author: Dongfu Jiang
Upload time: 2024-10-20 09:04:22
Maintainer: none
Requires Python: not specified
License: not specified
Requirements: none recorded.
            # Mantis: Multi-Image Instruction Tuning

![Mantis](./docs/assets/images/radar_chart.png)
<a target="_blank" href="https://arxiv.org/abs/2405.01483">
<img style="height:22pt" src="https://img.shields.io/badge/-Paper-black?style=flat&logo=arxiv"></a>
<a target="_blank" href="https://github.com/TIGER-AI-Lab/Mantis">
<img style="height:22pt" src="https://img.shields.io/badge/-Code-green?style=flat&logo=github"></a>
<a target="_blank" href="https://tiger-ai-lab.github.io/Mantis/">
<img style="height:22pt" src="https://img.shields.io/badge/-🌐%20Website-red?style=flat"></a>
<a target="_blank" href="https://huggingface.co/datasets/TIGER-Lab/Mantis-Instruct">
<img style="height:22pt" src="https://img.shields.io/badge/-🤗%20Dataset-red?style=flat"></a>
<a target="_blank" href="https://huggingface.co/spaces/TIGER-Lab/Mantis">
<img style="height:22pt" src="https://img.shields.io/badge/-🤗%20Demo-red?style=flat"></a>
<a target="_blank" href="https://huggingface.co/collections/TIGER-Lab/mantis-6619b0834594c878cdb1d6e4">
<img style="height:22pt" src="https://img.shields.io/badge/-🤗%20Models-red?style=flat"></a>
<a target="_blank" href="https://twitter.com/DongfuJiang/status/1786552974598078677">
<img style="height:22pt" src="https://img.shields.io/badge/-Tweet-blue?style=flat&logo=twitter"></a>
<br>

---

🤔 Recent years have seen a wave of large multimodal models (LMMs) that effectively solve single-image vision-language tasks. However, their ability to solve multi-image vision-language tasks remains limited.

😦 Existing multi-image LMMs (e.g., OpenFlamingo, Emu, Idefics) mostly gain their multi-image ability by pre-training on hundreds of millions of noisy interleaved image-text pairs from the web, which is neither efficient nor effective.

🔥 We therefore present Mantis, an LLaMA-3-based LMM that takes interleaved text and images as input, trained on Mantis-Instruct with **academic-level** resources (i.e., 36 hours on 16×A100-40G).

🚀 Mantis achieves state-of-the-art performance on five multi-image benchmarks (NLVR2, Q-Bench, BLINK, MVBench, Mantis-Eval) while maintaining strong single-image performance on par with CogVLM and Emu2.

## 🔥News
- [2024-08-22] We add support for training [🤗 Idefics-3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3); script here: [train_idefics3.sh](./mantis/train/scripts/train_idefics3.sh)
- [2024-08-05] [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) now supports the evaluation of Mantis model. Thanks to the efforts of [BrenchCC](https://github.com/BrenchCC)
- [2024-08-05] We release the Wandb training curves of [Mantis-8B-CLIP-LLaMA-3](https://wandb.ai/dongfu/MLlava/reports/Mantis-8B-CLIP-LLaMA-3--Vmlldzo4OTM0MDk5), [Mantis-8B-SigLIP-LLaMA-3](https://wandb.ai/dongfu/MLlava/reports/Mantis-8B-SigLIP-LLaMA-3--Vmlldzo4OTM0MTQ2), and [Mantis-8B-Idefics2](https://wandb.ai/dongfu/Mantis/reports/Mantis-8B-Idefics2--Vmlldzo4OTM0MTcw) for training reproduction.
- [2024-07-23] [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) now supports the evaluation of Mantis model. Thanks to the efforts of [EvolvingLMMs-Lab](https://github.com/EvolvingLMMs-Lab) Team.
- [2024-05-23] 🔥 Excited to announce our current SoTA Mantis-8B-Idefics2 model! Check the [model](https://huggingface.co/TIGER-Lab/Mantis-8B-Idefics2) and [demo](https://huggingface.co/spaces/TIGER-Lab/Mantis) now!
- [2024-05-03] We have released our [training codes](./mantis/train/README.md), [dataset](https://huggingface.co/datasets/TIGER-Lab/Mantis-Instruct), and [evaluation codes](./mantis/benchmark/README.md) to the community! Check the following sections for more details.
- [2024-05-02] We release Mantis-8B, the first multi-image-capable LMM based on LLaMA3! Interact with Mantis-8B-SigLIP on [Hugging Face Spaces](https://huggingface.co/spaces/TIGER-Lab/Mantis) or the [Colab Demo](./examples/run_mantis.py)
- [2024-05-02] Mantis's technical report is now available on [arXiv](https://arxiv.org/abs/2405.01483). Kudos to the team!

## Installation
```bash
conda create -n mantis python=3.10
conda activate mantis
pip install -e .
# install flash-attention
pip install flash-attn --no-build-isolation
```
## Inference

You can run inference with the following command:
```bash
cd examples
python run_mantis.py
```
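As a rough sketch of what multi-image inference involves, interleaved inputs pair one image placeholder with each text segment before the processor tokenizes them. The `<image>` token and the `build_prompt` helper below are illustrative assumptions, not the actual mantis-vl API; see `examples/run_mantis.py` for the real entry point.

```python
# Hypothetical sketch of assembling an interleaved multi-image prompt.
# IMAGE_TOKEN and build_prompt are assumptions for illustration only.
from typing import List

IMAGE_TOKEN = "<image>"

def build_prompt(segments: List[str], num_images: int) -> str:
    """Interleave one image placeholder before each text segment."""
    if num_images != len(segments):
        raise ValueError("expected one image per text segment")
    parts = [f"{IMAGE_TOKEN} {seg}" for seg in segments]
    return "\n".join(parts) + "\nAnswer:"

prompt = build_prompt(
    ["What is shown in the first image?", "How does the second differ?"],
    num_images=2,
)
print(prompt.count(IMAGE_TOKEN))  # 2
```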

## Training
Install the requirements with the following command:
```bash
pip install -e .[train,eval]
cd mantis/train
```

**Our training scripts follow the coding format and model structure of Hugging Face. Unlike the LLaVA GitHub repo, our models can be loaded directly from the Hugging Face model hub.**

### Training examples with different data formats
(These example data are pre-prepared in the `data/examples/` folder, so you can inspect the data format and debug the training scripts directly. Set `CUDA_VISIBLE_DEVICES` to the GPU you want to use.)
- training with text-image interleaved data (see [example data](./data/examples/chat/train.json))
```bash
cd mantis/train
bash scripts/train_example_chat.sh # Q-lora, 1 GPU required
```
- training with video-text interleaved data (see [example data](./data/examples/chat_video/train.json))
```bash
cd mantis/train
bash scripts/train_example_video.sh # Q-lora, 1 GPU required
```

- training with classification data (see [example data](./data/examples/classification/train.json))
```bash
cd mantis/train
bash scripts/train_example_classification.sh # full-finetune, might need 8 GPUs or more
```
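To get a feel for the interleaved format, here is a hypothetical training record loosely following a LLaVA-style "conversations" schema. The exact field names in `data/examples/chat/train.json` may differ; check the example files rather than relying on this sketch.

```python
# Hypothetical text-image interleaved training record; field names are
# illustrative assumptions, not the repo's guaranteed schema.
import json

record = {
    "id": "example-0",
    "images": ["img_0.jpg", "img_1.jpg"],
    "conversations": [
        {"role": "user", "content": "<image> <image> Which image is brighter?"},
        {"role": "assistant", "content": "The first image is brighter."},
    ],
}

# A minimal consistency check: one <image> placeholder per listed image.
user_text = record["conversations"][0]["content"]
assert user_text.count("<image>") == len(record["images"])
print(json.dumps(record)[:20])
```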

### Training examples with different models
We support training of Mantis based on the Fuyu architecture and the LLaVA architecture. You can train the model with the following command:

**Training Mantis based on LLaMA3 with CLIP/SigLIP encoder:**
- Pretrain Mantis-LLaMA3 Multimodal projector on pretrain data (Stage 1):
```bash
bash scripts/pretrain_mllava.sh
```

- Fine-tune the pretrained Mantis-LLaMA3 on Mantis-Instruct (Stage 2):
```bash
bash scripts/train_mllava.sh
```

**Training Mantis based on Fuyu-8B:**
- Fine-tune Fuyu-8B on Mantis-Instruct to get Mantis-Fuyu:
```bash
bash scripts/train_fuyu.sh
```

**Note**: 
- Our training scripts automatically infer the number of GPUs and GPU nodes to use for training, so you only need to modify the data config path and the base models.
- The training data is automatically downloaded from Hugging Face when you run the training scripts.
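The GPU-count inference can be pictured as parsing `CUDA_VISIBLE_DEVICES`. The actual logic lives in the bash scripts and may differ; this Python sketch only illustrates the idea.

```python
# Sketch of inferring the GPU count from CUDA_VISIBLE_DEVICES, as a
# launcher might. Illustrative only; the training scripts' bash logic
# may differ.
def gpu_count(env: dict) -> int:
    visible = env.get("CUDA_VISIBLE_DEVICES")
    if visible is None:
        return 0  # placeholder: a real launcher would query the driver
    return len([d for d in visible.split(",") if d.strip() != ""])

print(gpu_count({"CUDA_VISIBLE_DEVICES": "0,1,2,3"}))  # 4
```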

See [mantis/train/README.md](./mantis/train/README.md) for more details. 

Check all the training scripts in [mantis/train/scripts](./mantis/train/scripts).

## Evaluation
To reproduce our evaluation results, please check [mantis/benchmark/README.md](./mantis/benchmark/README.md)

## Data
- [🤗 Mantis-Instruct](https://huggingface.co/datasets/TIGER-Lab/Mantis-Instruct): a 721K-example text-image interleaved dataset for multi-image instruction tuning
- [🤗 Mantis-Eval](https://huggingface.co/datasets/TIGER-Lab/Mantis-Eval): 217 high-quality examples for evaluating LMMs' multi-image skills

### Downloading
You can download and prepare Mantis-Instruct with the following command (downloading and extracting may take about an hour):
```bash
python data/download_mantis_instruct.py --max_workers 8
```
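The `--max_workers` flag suggests a thread-pool download pattern, which can be sketched as below. The shard names and `fetch` function are illustrative stand-ins, not the script's actual internals.

```python
# Sketch of the parallel-download pattern behind --max_workers: a thread
# pool fetching several dataset shards concurrently. Shard names and the
# fetch function are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

def fetch(shard: str) -> str:
    # A real implementation would download and extract the shard here.
    return f"{shard}: done"

shards = [f"subset_{i}" for i in range(4)]
with ThreadPoolExecutor(max_workers=8) as pool:
    # pool.map preserves input order in its results
    results = list(pool.map(fetch, shards))
print(results[0])  # subset_0: done
```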

## Model Zoo

### Mantis Models
We provide the following models in the 🤗 Hugging Face model hub:
- [TIGER-Lab/Mantis-8B-Idefics2](https://huggingface.co/TIGER-Lab/Mantis-8B-Idefics2)
- [TIGER-Lab/Mantis-8B-clip-llama3](https://huggingface.co/TIGER-Lab/Mantis-8B-clip-llama3)
- [TIGER-Lab/Mantis-8B-siglip-llama3](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3)
- [TIGER-Lab/Mantis-8B-Fuyu](https://huggingface.co/TIGER-Lab/Mantis-8B-Fuyu)

### Run models

- Run Mantis-8B-Idefics2:
```bash
cd examples && python run_mantis_idefics2.py
```

- Run Mantis-8B-siglip-llama3:
```bash
cd examples && python run_mantis.py
```
- Run Mantis-8B-Fuyu:
```bash
cd examples && python run_mantis_fuyu.py
```

### Chat CLI
We provide a simple chat CLI for Mantis models. You can run the following command to chat with Mantis-8B-siglip-llama3:
```bash
python examples/chat_mantis.py
```

### Intermediate Checkpoints
The following intermediate checkpoints, taken after pre-training the multimodal projectors, are also available for reproducibility (**please note these checkpoints still need further fine-tuning on Mantis-Instruct to be useful; they are not working models**):
- [TIGER-Lab/Mantis-8B-clip-llama3-pretraind](https://huggingface.co/TIGER-Lab/Mantis-8B-clip-llama3-pretraind)
- [TIGER-Lab/Mantis-8B-siglip-llama3-pretraind](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3-pretraind)



## Acknowledgement
- Thanks to the LLaVA and LLaVA-hf teams for providing the LLaVA codebase and Hugging Face compatibility!
- Thanks to [Haoning Wu](https://teowu.github.io/) for providing the MVBench evaluation code!


## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=TIGER-AI-Lab/Mantis&type=Date)](https://star-history.com/#TIGER-AI-Lab/Mantis&Date)

## Citation
```bibtex
@article{jiang2024mantis,
  title={MANTIS: Interleaved Multi-Image Instruction Tuning},
  author={Jiang, Dongfu and He, Xuan and Zeng, Huaye and Wei, Con and Ku, Max and Liu, Qian and Chen, Wenhu},
  journal={arXiv preprint arXiv:2405.01483},
  year={2024}
}
```

            
