# mlx-vlm

- **Name**: mlx-vlm
- **Version**: 0.0.13
- **Home page**: https://github.com/Blaizzy/mlx-vlm
- **Summary**: Vision LLMs on Apple silicon with MLX and the Hugging Face Hub
- **Upload time**: 2024-08-16 20:52:52
- **Author**: Prince Canuma
- **Maintainer**: None
- **Docs URL**: None
- **Requires Python**: >=3.8
- **License**: MIT
- **Keywords**: None
- **Requirements**: No requirements were recorded.
- **Travis-CI**: No Travis.
- **Coveralls test coverage**: No coveralls.

# MLX-VLM

MLX-VLM is a package for running Vision LLMs on your Mac using MLX.


## Get started

The easiest way to get started is to install the `mlx-vlm` package:

**With `pip`**:

```sh
pip install mlx-vlm
```
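To pick up changes that have not been released yet, the package can presumably also be installed straight from the GitHub repository listed above (a standard pip pattern, not shown on this page):

```sh
pip install git+https://github.com/Blaizzy/mlx-vlm.git
```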

## Inference

**CLI**
```sh
python -m mlx_vlm.generate --model qnguyen3/nanoLLaVA --max-tokens 100 --temp 0.0
```
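The invocation above only sets the model, token budget, and sampling temperature. Assuming the CLI also exposes `--image` and `--prompt` options mirroring the script API below (an assumption; verify with `--help`), a complete call might look like:

```sh
# --image and --prompt are assumed flags; list the real options with:
#   python -m mlx_vlm.generate --help
python -m mlx_vlm.generate --model qnguyen3/nanoLLaVA \
  --image http://images.cocodataset.org/val2017/000000039769.jpg \
  --prompt "What are these?" \
  --max-tokens 100 --temp 0.0
```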

**Chat UI with Gradio**
```sh
python -m mlx_vlm.chat_ui --model qnguyen3/nanoLLaVA
```

**Script**
```python
from mlx_vlm import load, generate

# Load the 4-bit quantized LLaVA model and its processor from the Hugging Face Hub
model_path = "mlx-community/llava-1.5-7b-4bit"
model, processor = load(model_path)

# Build the chat prompt; the <image> token marks where the image is inserted
prompt = processor.tokenizer.apply_chat_template(
    [{"role": "user", "content": "<image>\nWhat are these?"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Run generation on an image URL; the decoded text is returned
output = generate(
    model,
    processor,
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    prompt,
    verbose=False,
)
print(output)
```
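The example passes the image as a URL. A local file path is presumably accepted as well (an assumption based on common practice; this page does not confirm it):

```python
# Hypothetical local file "cats.jpg"; path support assumed, not documented here.
output = generate(model, processor, "cats.jpg", prompt, verbose=False)
print(output)
```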

            

## Raw data

```json
{
    "_id": null,
    "home_page": "https://github.com/Blaizzy/mlx-vlm",
    "name": "mlx-vlm",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": null,
    "author": "Prince Canuma",
    "author_email": "prince.gdt@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/40/25/52b34f764d01384321cf9e50c153f52ebb656c5937f10eb088e6118cbe19/mlx_vlm-0.0.13.tar.gz",
    "platform": null,
    "description": "# MLX-VLM\n\nMLX-VLM a package for running Vision LLMs on your Mac using MLX.\n\n\n## Get started\n\nThe easiest way to get started is to install the `mlx-vlm` package:\n\n**With `pip`**:\n\n```sh\npip install mlx-vlm\n```\n\n## Inference\n\n**CLI**\n```sh\npython -m mlx_vlm.generate --model qnguyen3/nanoLLaVA --max-tokens 100 --temp 0.0\n```\n\n**Chat UI with Gradio**\n```sh\npython -m mlx_vlm.chat_ui --model qnguyen3/nanoLLaVA\n```\n\n**Script**\n```python\nimport mlx.core as mx\nfrom mlx_vlm import load, generate\n\nmodel_path = \"mlx-community/llava-1.5-7b-4bit\"\nmodel, processor = load(model_path)\n\nprompt = processor.tokenizer.apply_chat_template(\n    [{\"role\": \"user\", \"content\": f\"<image>\\nWhat are these?\"}],\n    tokenize=False,\n    add_generation_prompt=True,\n)\n\noutput = generate(model, processor, \"http://images.cocodataset.org/val2017/000000039769.jpg\", prompt, verbose=False)\n```\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Vision LLMs on Apple silicon with MLX and the Hugging Face Hub",
    "version": "0.0.13",
    "project_urls": {
        "Homepage": "https://github.com/Blaizzy/mlx-vlm"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "b75b5a89fcff55397001ea5c90d745fc674d0b2542ee511da097ca180ab7e51e",
                "md5": "8a3f8103cc5a6115a7442bb4ad24326b",
                "sha256": "f0e1d35e942f650a36941993863ebfd35ffc2fc97f6c78a6d3c6a4cd0dae82a3"
            },
            "downloads": -1,
            "filename": "mlx_vlm-0.0.13-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "8a3f8103cc5a6115a7442bb4ad24326b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 76194,
            "upload_time": "2024-08-16T20:52:51",
            "upload_time_iso_8601": "2024-08-16T20:52:51.419225Z",
            "url": "https://files.pythonhosted.org/packages/b7/5b/5a89fcff55397001ea5c90d745fc674d0b2542ee511da097ca180ab7e51e/mlx_vlm-0.0.13-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "402552b34f764d01384321cf9e50c153f52ebb656c5937f10eb088e6118cbe19",
                "md5": "a3612537e98477444571bc353bd6d728",
                "sha256": "74665dc961f86c99aaf50230656ef416a0c915145075dc1640f18094b141a3eb"
            },
            "downloads": -1,
            "filename": "mlx_vlm-0.0.13.tar.gz",
            "has_sig": false,
            "md5_digest": "a3612537e98477444571bc353bd6d728",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 52213,
            "upload_time": "2024-08-16T20:52:52",
            "upload_time_iso_8601": "2024-08-16T20:52:52.767214Z",
            "url": "https://files.pythonhosted.org/packages/40/25/52b34f764d01384321cf9e50c153f52ebb656c5937f10eb088e6118cbe19/mlx_vlm-0.0.13.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-08-16 20:52:52",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Blaizzy",
    "github_project": "mlx-vlm",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "mlx-vlm"
}
```
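This raw data is the PyPI JSON API response for the release. As a minimal sketch, the same metadata can be fetched programmatically, assuming the standard `https://pypi.org/pypi/<name>/<version>/json` endpoint (only the standard library is used):

```python
import json
from urllib.request import urlopen

# Fetch the pinned-version metadata from the PyPI JSON API
url = "https://pypi.org/pypi/mlx-vlm/0.0.13/json"
with urlopen(url) as resp:
    data = json.load(resp)

info = data["info"]
print(info["name"], info["version"])   # mlx-vlm 0.0.13
print(info["summary"])                 # Vision LLMs on Apple silicon ...
print(info["requires_python"])         # >=3.8
```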
        