chat-with-mlx

Name: chat-with-mlx
Version: 0.2.2
Summary: A Retrieval-Augmented Generation (RAG) chat interface with support for multiple open-source models, designed to run natively on macOS and Apple Silicon with MLX.
Upload time: 2024-06-13 04:07:04
Author email: Quan Nguyen <quanjenkey@gmail.com>
Requires Python: >=3.8
License: MIT License
Keywords: mlx, chat, chatbot, chat_with_mlx

<div align="center">

# Chat with MLX 🧑‍💻

[![version](https://badge.fury.io/py/chat-with-mlx.svg)](https://badge.fury.io/py/chat-with-mlx)
[![downloads](https://img.shields.io/pypi/dm/chat-with-mlx)](https://pypistats.org/packages/chat-with-mlx)
[![license](https://img.shields.io/pypi/l/chat-with-mlx)](https://github.com/qnguyen3/chat-with-mlx/blob/main/LICENSE.md)
[![python-version](https://img.shields.io/pypi/pyversions/chat-with-mlx)](https://badge.fury.io/py/chat-with-mlx)
</div>

An all-in-one Chat Playground using Apple MLX on Apple Silicon Macs.

![chat_with_mlx](assets/Logo.png)

## Features

- **Privacy-enhanced AI**: Chat with your favourite models and your own data securely; everything runs locally.
- **MLX Playground**: Your all-in-one LLM chat UI for Apple MLX.
- **Easy Integration**: Easily integrate any HuggingFace- and MLX-compatible open-source model.
- **Default Models**: Llama-3, Phi-3, Yi, Qwen, Mistral, Codestral, Mixtral, StableLM (along with Dolphin and Hermes variants).

## Installation and Usage

### Easy Setup

- Make sure `pip` is available for your Python installation.
- Install the package: `pip install chat-with-mlx`

### Manual Pip Installation

```bash
git clone https://github.com/qnguyen3/chat-with-mlx.git
cd chat-with-mlx
python -m venv .venv
source .venv/bin/activate
pip install -e .
```

### Manual Conda Installation

```bash
git clone https://github.com/qnguyen3/chat-with-mlx.git
cd chat-with-mlx
conda create -n mlx-chat python=3.11
conda activate mlx-chat
pip install -e .
```
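
Whichever route you used, you can optionally confirm that MLX itself imports cleanly before launching the app. This is a minimal check, assuming `mlx` is pulled in as a dependency of this package:

```python
# Optional post-install check: MLX should import and report a version.
import mlx.core as mx

print(mx.__version__)
```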

### Usage

- Start the app: `chat-with-mlx`

## Add Your Model

Please check out the guide [HERE](ADD_MODEL.MD).
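
For a sense of what "MLX-compatible" means in practice, MLX-converted checkpoints on the HuggingFace Hub can be loaded with the `mlx-lm` package. The sketch below is illustrative only (the repo ID is an example, and the app's own loading path may differ; the guide above is the authoritative reference):

```python
# Sketch: loading an MLX-converted Hub checkpoint with mlx-lm.
# The repo ID is illustrative; other mlx-community models work the same way.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")
print(generate(model, tokenizer, prompt="Hello!", max_tokens=64))
```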

## Known Issues

- While a model is downloading via Solution 1, the only way to stop it is to press `Ctrl + C` in your terminal.
- If you want to switch files, you must manually hit STOP INDEXING first; otherwise the vector database will append the second document to the current index (see the sketch below).
- You have to choose a dataset mode (Document or YouTube) for indexing to work.
- **Phi-3-small** cannot stream completions.
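
For context on the indexing caveat above, here is a minimal sketch of how a vector-database collection accumulates documents, using `chromadb` directly. The collection and document names are illustrative, and this is not the app's actual internals:

```python
# Sketch of the append-on-add behaviour behind the STOP INDEXING caveat.
import chromadb

client = chromadb.Client()
collection = client.create_collection("docs")

# Indexing a first document.
collection.add(documents=["text of the first file"], ids=["doc-1"])

# Without clearing the collection first, indexing a second file simply
# appends to the same index, so both documents become retrieval targets.
collection.add(documents=["text of the second file"], ids=["doc-2"])
print(collection.count())  # 2
```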

## Why MLX?

MLX is an array framework for machine learning research on Apple silicon,
brought to you by Apple machine learning research.

Some key features of MLX include (illustrated in the sketch after this list):

- **Familiar APIs**: MLX has a Python API that closely follows NumPy.  MLX
   also has fully featured C++, [C](https://github.com/ml-explore/mlx-c), and
   [Swift](https://github.com/ml-explore/mlx-swift/) APIs, which closely mirror
   the Python API.  MLX has higher-level packages like `mlx.nn` and
   `mlx.optimizers` with APIs that closely follow PyTorch to simplify building
   more complex models.

- **Composable function transformations**: MLX supports composable function
   transformations for automatic differentiation, automatic vectorization,
   and computation graph optimization.

- **Lazy computation**: Computations in MLX are lazy. Arrays are only
   materialized when needed.

- **Dynamic graph construction**: Computation graphs in MLX are constructed
   dynamically. Changing the shapes of function arguments does not trigger
   slow compilations, and debugging is simple and intuitive.

- **Multi-device**: Operations can run on any of the supported devices
   (currently the CPU and the GPU).

- **Unified memory**: A notable difference between MLX and other frameworks
   is the *unified memory model*. Arrays in MLX live in shared memory.
   Operations on MLX arrays can be performed on any of the supported
   device types without transferring data.
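
To make these points concrete, here is a minimal sketch touching the NumPy-like API, `grad` as a composable transformation, and lazy evaluation. It assumes `mlx` is installed (`pip install mlx`) on an Apple Silicon Mac:

```python
# Minimal MLX sketch: NumPy-like API, composable transforms, lazy eval.
import mlx.core as mx

# Familiar API: arrays behave much like NumPy arrays.
a = mx.array([1.0, 2.0, 3.0])
b = mx.ones(3)

# Composable transformation: mx.grad(f) returns the gradient function of f.
def loss(x):
    return (x * b).sum()

g = mx.grad(loss)(a)  # gradient of loss at a, i.e. [1, 1, 1]

# Lazy computation: c stays an unevaluated graph node until forced.
c = a + b
mx.eval(c, g)  # materializes both results
print(c, g)
```

Thanks to unified memory, the same arrays work in CPU or GPU operations (for example after `mx.set_default_device(mx.cpu)`) without any data copies.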

## Acknowledgement

I would like to extend my many thanks to:

- The Apple Machine Learning Research team for the amazing MLX library.
- LangChain and ChromaDB for making the RAG implementation so easy.
- All contributors.

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=qnguyen3/chat-with-mlx&type=Date)](https://star-history.com/#qnguyen3/chat-with-mlx&Date)

            
