# mlx-llm
Large Language Model (LLM) applications and tools running in real time on Apple Silicon with [Apple MLX](https://github.com/ml-explore/mlx).

![mlx-llm demo](static/mlx-llm-demo.gif)

Watch the full [YouTube video](https://www.youtube.com/watch?v=vB7tk6W6VIw).

## **How to install 🔨**
```
pip install mlx-llm
```
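
MLX runs only on Apple Silicon Macs, so make sure you are on one; a quick import check confirms the install:
```
python -c "import mlx_llm"
```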

## **Models 🧠**

Currently, out-of-the-box supported models are:

| Family    | Models |
|-----------|--------|
| LLaMA 2   | llama_2_7b_chat_hf, llama_2_7b_hf |
| LLaMA 3   | llama_3_8b, llama_3_8b_instruct, hermes_2_pro_llama_3_8b |
| Phi-3     | phi_3_mini_4k_instruct, phi_3_mini_128k_instruct, phi_3.5_mini_instruct |
| Mistral   | mistral_7b_instruct_v0.2, openhermes_2.5_mistral_7b, starling_lm_7b_beta |
| TinyLLaMA | tiny_llama_1.1B_chat_v1.0 |
| Gemma     | gemma_1.1_2b_it, gemma_1.1_7b_it, gemma_2_2b_it, gemma_2_9b_it |
| OpenELM   | openelm_270M_instruct, openelm_450M_instruct, openelm_1.1B_instruct, openelm_3B_instruct |

To create a model with pre-trained weights from HuggingFace:

```python
from mlx_llm.model import create_model

# loading weights from HuggingFace
model = create_model("llama_3_8b_instruct")
```

You can also load different pre-trained weights for a supported architecture directly from HuggingFace:
- set `weights` to the HuggingFace repository name prefixed with `hf://`
- if necessary, override model configs (`rope_theta`, `rope_traditional`, `vocab_size`, `norm_eps`)

Here's an example of how to do it:
```python
from mlx_llm.model import create_model

# an example of loading new weights from HuggingFace
model = create_model(
    model_name="openelm_1.1B_instruct", # it's the base model
    weights="hf://apple/OpenELM-1.1B", # new weights from HuggingFace
)

# an example of loading new weights from HuggingFace with custom model configs
model = create_model(
    model_name="llama_3_8b_instruct", # it's the base model
    weights="hf://gradientai/Llama-3-8B-Instruct-262k", # new weights from HuggingFace
    model_config={
        "rope_theta": 207112184.0
    }
)
```

### **Quantization 📉**

To quantize a model and save its weights:

```python
from mlx_llm.model import create_model, quantize, get_weights
from mlx_llm.utils.weights import save_weights

# create the model from original weights
model = create_model("llama_3_8b_instruct")
# quantize the model
model = quantize(model, group_size=64, bits=4)
# get the weights dict (similar to a state_dict in PyTorch)
weights = get_weights(model)
# save the model
save_weights(weights, "llama_3_8b_instruct-4bit.safetensors")
```
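
As a rough back-of-the-envelope (assuming the quantized format stores an fp16 scale and bias per group, as MLX's affine quantization does): with `group_size=64` and `bits=4`, each weight costs about 4 + 32/64 = 4.5 bits instead of 16, so an 8B-parameter model shrinks from roughly 16 GB to roughly 4.5 GB.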

### **Model Embeddings ✴️**
Models in `mlx-llm` can extract embeddings from input text.

```python
import mlx.core as mx
from mlx_llm.model import create_model, create_tokenizer

model = create_model("llama_3_8b_instruct")
tokenizer = create_tokenizer("llama_3_8b_instruct")
text = ["I like to play basketball", "I like to play tennis"]
tokens = tokenizer(text)
x = mx.array(tokens["input_ids"])
embeds, _ = model.embed(x, norm=True)
```
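
With `norm=True`, the rows of `embeds` should come back L2-normalized (assuming one pooled embedding per input sentence), so the two sentences can be compared with a plain dot product, which then equals their cosine similarity:

```python
# dot product of L2-normalized rows == cosine similarity
similarity = mx.sum(embeds[0] * embeds[1]).item()
print(f"cosine similarity: {similarity:.3f}")
```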

## **Applications 📁**
With `mlx-llm` you can run a variety of applications, such as:
- chatting with an LLM running on Apple Silicon from a command-line interface
- fine-tuning a model with LoRA or QLoRA
- Retrieval Augmented Generation (RAG) for question answering

### **Chat with LLM 📱**
`mlx-llm` comes with tools to easily run your LLM chat on Apple Silicon.

To chat with an LLM, provide:
- a system prompt to set the overall tone of the LLM
- optional previous interactions to set the mood of the conversation

```python
from mlx_llm.chat import ChatSetup, LLMChat

model_name = "tiny_llama_1.1B_chat_v1.0"

chat = LLMChat(
    model_name=model_name,
    prompt_family="tinyllama",
    chat_setup=ChatSetup(
        system="You are Michael Scott from The Office. Your goal is to answer like him, so be funny and inappropriate, but be brief.",
        history=[
            {"question": "What is your name?", "answer": "Michael Scott"},
            {"question": "What is your favorite episode of The Office?", "answer": "The Dinner Party"},
        ],
    ),
    quantized=False, # for faster inference, enable quantization (e.g., group_size=64, bits=4)
)

chat.start()
```

> [!WARNING]
> OpenELM chat-mode is broken. I am working on fixing it.

> [!WARNING]
> In the current release (v1.0.5), chat mode is supported only for registered models; chat with other HuggingFace weights (`hf://...`) is not supported.

### **Fine-Tuning with LoRA or QLoRA 🚀**
```python
raise NotImplementedError
```
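
Until this lands in the library, here is a minimal, hypothetical sketch of the core LoRA idea in plain MLX (the `LoRALinear` class below is illustrative, not `mlx-llm` API): freeze the pre-trained linear weight and learn a low-rank update on top of it.

```python
import mlx.core as mx
import mlx.nn as nn

class LoRALinear(nn.Module):
    """Illustrative only: wraps a linear layer with a trainable low-rank update."""

    def __init__(self, linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        out_dim, in_dim = linear.weight.shape  # nn.Linear stores (out, in)
        self.linear = linear                   # frozen pre-trained projection
        self.scale = alpha / rank
        # A starts small and random, B starts at zero, so training
        # begins exactly at the base model's behavior
        self.lora_a = mx.random.normal((in_dim, rank)) * 0.01
        self.lora_b = mx.zeros((rank, out_dim))

    def __call__(self, x: mx.array) -> mx.array:
        return self.linear(x) + self.scale * ((x @ self.lora_a) @ self.lora_b)
```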

### **Retrieval Augmented Generation (RAG) 📚**
```python
raise NotImplementedError
```
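
Likewise, nothing ships yet, but the embedding API shown above already covers the retrieval half. Here is a hedged sketch; the `embed` helper, the padding assumption, and the prompt format are mine, not the library's:

```python
import mlx.core as mx
from mlx_llm.model import create_model, create_tokenizer

model = create_model("llama_3_8b_instruct")
tokenizer = create_tokenizer("llama_3_8b_instruct")

docs = [
    "MLX is an array framework for machine learning on Apple Silicon.",
    "The Dinner Party is a season 4 episode of The Office.",
]
question = "What is MLX?"

def embed(texts):
    # assumes the tokenizer pads sequences to a common length,
    # as in the embedding example above
    tokens = tokenizer(texts)
    embeds, _ = model.embed(mx.array(tokens["input_ids"]), norm=True)
    return embeds

# retrieve the document most similar to the question
scores = embed(docs) @ embed([question]).T  # cosine similarities (rows normalized)
context = docs[mx.argmax(scores).item()]

# prepend the retrieved context to the prompt (format is illustrative)
prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"
```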


## **ToDos**

- [ ] LoRA and QLoRA
- [ ] RAG

## 📧 Contact

If you have any questions, please email `riccardomusmeci92@gmail.com`.


            
