qwen-vl-utils


Name: qwen-vl-utils
Version: 0.0.8
Summary: Qwen Vision Language Model Utils - PyTorch
Upload time: 2024-09-24 09:55:07
Requires Python: >=3.8
License: Apache-2.0
Keywords: large language model, pytorch, qwen-vl, vision language model
Requirements: No requirements were recorded.
# qwen-vl-utils

Qwen-VL Utils contains a set of helper functions for processing and integrating visual-language information with the Qwen-VL series models.

## Install

```bash
pip install qwen-vl-utils
```
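
For video inputs, frame extraction can optionally use `decord`, which is faster than the default torchvision-based reader. A hedged install sketch, assuming the `decord` extra shipped with recent releases (check the project README for your version):

```bash
# Optional: pull in decord for faster video frame extraction.
pip install "qwen-vl-utils[decord]"
```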

## Usage

```python
from PIL import Image
from transformers import Qwen2VLForConditionalGeneration, Qwen2VLProcessor
from qwen_vl_utils import process_vision_info


# An already-loaded PIL image can also be passed directly
# (used in the PIL.Image.Image entry below).
pil_image = Image.open("/path/to/your/image.jpg")

# You can insert a local file path, a URL, or a base64-encoded image directly
# at the position in the text where you want it to appear.
messages = [
    # Image
    ## Local file path
    [{"role": "user", "content": [{"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}],
    ## Image URL
    [{"role": "user", "content": [{"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}],
    ## Base64 encoded image
    [{"role": "user", "content": [{"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}]}],
    ## PIL.Image.Image
    [{"role": "user", "content": [{"type": "image", "image": pil_image}, {"type": "text", "text": "Describe this image."}]}],
    ## The model resizes images dynamically; set "resized_height"/"resized_width" if you need fixed dimensions.
    [{"role": "user", "content": [{"type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420}, {"type": "text", "text": "Describe this image."}]}],
    # Video
    ## Local video path
    [{"role": "user", "content": [{"type": "video", "video": "file:///path/to/video1.mp4"}, {"type": "text", "text": "Describe this video."}]}],
    ## Local video frames
    [{"role": "user", "content": [{"type": "video", "video": ["file:///path/to/extracted_frame1.jpg", "file:///path/to/extracted_frame2.jpg", "file:///path/to/extracted_frame3.jpg"],}, {"type": "text", "text": "Describe this video."},],}],
    ## The model dynamically chooses the number of frames and the frame size; set "fps", "resized_height", and "resized_width" if required.
    [{"role": "user", "content": [{"type": "video", "video": "file:///path/to/video1.mp4", "fps": 2.0, "resized_height": 280, "resized_width": 280}, {"type": "text", "text": "Describe this video."}]}],
]

# "model_path" is a placeholder here; point it at the checkpoint you want,
# e.g. "Qwen/Qwen2-VL-7B-Instruct".
model_path = "Qwen/Qwen2-VL-7B-Instruct"
processor = Qwen2VLProcessor.from_pretrained(model_path)
model = Qwen2VLForConditionalGeneration.from_pretrained(model_path, torch_dtype="auto", device_map="auto")

# Render the chat template for each conversation, then load the referenced
# images and videos in the order their placeholders appear in the text.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)
inputs = processor(text=text, images=images, videos=videos, padding=True, return_tensors="pt")
inputs = inputs.to(model.device)  # move input tensors to the model's device before generation
print(inputs)
generated_ids = model.generate(**inputs)
print(generated_ids)
```
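
The `generated_ids` printed above still include the prompt tokens. A minimal sketch of decoding only the newly generated text, continuing from the variables in the example (this is the standard `transformers` decoding pattern, not part of qwen-vl-utils):

```python
# Strip the prompt tokens from each sequence, then decode only the newly
# generated tokens into readable text.
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```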

Raw data

{
    "_id": null,
    "home_page": null,
    "name": "qwen-vl-utils",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "large language model, pytorch, qwen-vl, vision language model",
    "author": null,
    "author_email": "Qwen Team <chenkeqin.ckq@alibaba-inc.com>",
    "download_url": "https://files.pythonhosted.org/packages/3a/8d/4256e3b9dc36f104269abc1a2ec5b22a6aebb1d8366cd3dd7e374d7b0d54/qwen_vl_utils-0.0.8.tar.gz",
    "platform": null,
    "description": "# qwen-vl-utils\n\nQwen-VL Utils contains a set of helper functions for processing and integrating visual language information with Qwen-VL Series Model.\n\n## Install\n\n```bash\npip install qwen-vl-utils\n```\n\n## Usage\n\n```python\nfrom transformers import Qwen2VLForConditionalGeneration, Qwen2VLProcessor\nfrom qwen_vl_utils import process_vision_info\n\n\n# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.\nmessages = [\n    # Image\n    ## Local file path\n    [{\"role\": \"user\", \"content\": [{\"type\": \"image\", \"image\": \"file:///path/to/your/image.jpg\"}, {\"type\": \"text\", \"text\": \"Describe this image.\"}]}],\n    ## Image URL\n    [{\"role\": \"user\", \"content\": [{\"type\": \"image\", \"image\": \"http://path/to/your/image.jpg\"}, {\"type\": \"text\", \"text\": \"Describe this image.\"}]}],\n    ## Base64 encoded image\n    [{\"role\": \"user\", \"content\": [{\"type\": \"image\", \"image\": \"data:image;base64,/9j/...\"}, {\"type\": \"text\", \"text\": \"Describe this image.\"}]}],\n    ## PIL.Image.Image\n    [{\"role\": \"user\", \"content\": [{\"type\": \"image\", \"image\": pil_image}, {\"type\": \"text\", \"text\": \"Describe this image.\"}]}],\n    ## Model dynamically adjusts image size, specify dimensions if required.\n    [{\"role\": \"user\", \"content\": [{\"type\": \"image\", \"image\": \"file:///path/to/your/image.jpg\", \"resized_height\": 280, \"resized_width\": 420}, {\"type\": \"text\", \"text\": \"Describe this image.\"}]}],\n    # Video\n    ## Local video path\n    [{\"role\": \"user\", \"content\": [{\"type\": \"video\", \"video\": \"file:///path/to/video1.mp4\"}, {\"type\": \"text\", \"text\": \"Describe this video.\"}]}],\n    ## Local video frames\n    [{\"role\": \"user\", \"content\": [{\"type\": \"video\", \"video\": [\"file:///path/to/extracted_frame1.jpg\", \"file:///path/to/extracted_frame2.jpg\", \"file:///path/to/extracted_frame3.jpg\"],}, {\"type\": \"text\", \"text\": \"Describe this video.\"},],}],\n    ## Model dynamically adjusts video nframes, video height and width. specify args if required.\n    [{\"role\": \"user\", \"content\": [{\"type\": \"video\", \"video\": \"file:///path/to/video1.mp4\", \"fps\": 2.0, \"resized_height\": 280, \"resized_width\": 280}, {\"type\": \"text\", \"text\": \"Describe this video.\"}]}],\n]\n\nprocessor = Qwen2VLProcessor.from_pretrained(model_path)\nmodel = Qwen2VLForConditionalGeneration.from_pretrained(model_path, torch_dtype=\"auto\", device_map=\"auto\")\ntext = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\nimages, videos = process_vision_info(messages)\ninputs = processor(text=text, images=images, videos=videos, padding=True, return_tensors=\"pt\")\nprint(inputs)\ngenerated_ids = model.generate(**inputs)\nprint(generated_ids)\n```",
    "bugtrack_url": null,
    "license": "Apache-2.0",
    "summary": "Qwen Vision Language Model Utils - PyTorch",
    "version": "0.0.8",
    "project_urls": {
        "Homepage": "https://github.com/QwenLM/Qwen2-VL/tree/main/qwen-vl-utils",
        "Issues": "https://github.com/QwenLM/Qwen2-VL/issues",
        "Repository": "https://github.com/QwenLM/Qwen2-VL.git"
    },
    "split_keywords": [
        "large language model",
        " pytorch",
        " qwen-vl",
        " vision language model"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "4fe55db523e7e2bd7d0043b2dfcb03dc3e87d3f9bbf257fcee6f926f83568699",
                "md5": "894fc912d2753f9f39cb6f7b5477795e",
                "sha256": "2988aa08256f3d7ee6f08d7b27b004e840608b61ed36d0b32d1775be56a1639d"
            },
            "downloads": -1,
            "filename": "qwen_vl_utils-0.0.8-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "894fc912d2753f9f39cb6f7b5477795e",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 5883,
            "upload_time": "2024-09-24T09:55:06",
            "upload_time_iso_8601": "2024-09-24T09:55:06.084269Z",
            "url": "https://files.pythonhosted.org/packages/4f/e5/5db523e7e2bd7d0043b2dfcb03dc3e87d3f9bbf257fcee6f926f83568699/qwen_vl_utils-0.0.8-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3a8d4256e3b9dc36f104269abc1a2ec5b22a6aebb1d8366cd3dd7e374d7b0d54",
                "md5": "b8581ca75dff9b4f9ab38029a28df8db",
                "sha256": "3dfce951226b0a3c9cb13e6d0ad92d86d6fb3d8946af3bcf5c4b0121a1fa717a"
            },
            "downloads": -1,
            "filename": "qwen_vl_utils-0.0.8.tar.gz",
            "has_sig": false,
            "md5_digest": "b8581ca75dff9b4f9ab38029a28df8db",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 6773,
            "upload_time": "2024-09-24T09:55:07",
            "upload_time_iso_8601": "2024-09-24T09:55:07.230820Z",
            "url": "https://files.pythonhosted.org/packages/3a/8d/4256e3b9dc36f104269abc1a2ec5b22a6aebb1d8366cd3dd7e374d7b0d54/qwen_vl_utils-0.0.8.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-09-24 09:55:07",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "QwenLM",
    "github_project": "Qwen2-VL",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "qwen-vl-utils"
}
        