llmchatlinker

name: llmchatlinker
version: 0.1.3
summary: LLMChatLinker is a Middleware SDK designed to facilitate interaction between clients and Large Language Models (LLMs).
upload_time: 2024-12-11 04:31:13
requires_python: >=3.8
license: MIT
keywords: llm, middleware, sdk, chat, api, python-package
requirements: No requirements were recorded.
# LLMChatLinker

LLMChatLinker is a Middleware SDK designed to facilitate interaction between clients and Large Language Models (LLMs). The SDK acts as an intermediary between the client and the LLM(s), allowing multiple users to communicate with the LLM(s) simultaneously through the client's front-end.

## Architecture

The interaction flow follows the fetch-decode-execute-store cycle, similar to the architecture of a CPU (CISC/RISC):

1. **Fetch**: The Orchestrator (acting as the CPU) fetches the instructions from the instruction queue.
2. **Decode**: The Control Unit decodes each fetched instruction, using its type to determine how it will be executed.
3. **Execute**: The decoded instruction is executed by the relevant manage unit (User, Chat, LLM, or Database Manage Unit).
4. **Store**: The result from the execution is stored back in the result queue, ensuring each user request is immediately linked to its corresponding result.
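
A purely illustrative sketch of this cycle is shown below. It is not LLMChatLinker's internal code; the queue objects, manage-unit interface, and instruction format are assumptions made only to picture the flow:

```python
class Orchestrator:
    """Illustrative fetch-decode-execute-store loop; hypothetical, not the SDK's actual internals."""

    def __init__(self, instruction_queue, result_queue, manage_units):
        self.instruction_queue = instruction_queue  # e.g. fed by a message-broker consumer
        self.result_queue = result_queue
        self.manage_units = manage_units            # {"USER": ..., "CHAT": ..., "LLM": ..., "DATABASE": ...}

    def run_once(self):
        # Fetch: take the next instruction from the instruction queue.
        instruction = self.instruction_queue.get()

        # Decode: map the instruction type to the responsible manage unit and operation,
        # e.g. "USER_CREATE" -> unit "USER", operation "CREATE".
        unit_name, _, operation = instruction["type"].partition("_")
        unit = self.manage_units[unit_name]

        # Execute: the relevant manage unit carries out the decoded instruction.
        result = unit.execute(operation, instruction.get("payload", {}))

        # Store: write the result back to the result queue, keyed by the originating request.
        self.result_queue.put({"request_id": instruction["request_id"], "result": result})
```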

## Components

- **Client**: The front-end that interacts with users.
- **Middleware SDK (LLMChatLinker)**: Facilitates communication between the client and LLM(s). Composed of various units that mimic the functionality of CPU components:
  - **Orchestrator**: Acts as the CPU, fetching, decoding, executing, and storing instructions.
  - **Control Unit**: Decodes instructions fetched by the Orchestrator.
  - **User Manage Unit**: Manages user-related instructions.
  - **Chat Manage Unit**: Manages chat-related instructions.
  - **LLM Manage Unit**: Manages LLM-related instructions.
  - **Database Manage Unit**: Manages database interactions.
- **LLM(s)**: Large Language Models providing responses to user queries.

## Features

- **Fetch-Decode-Execute-Store Cycle**: Adopts a CPU-like mechanism to process instructions efficiently.
- **User Management**: Create, update, delete, and list users.
- **Chat Management**: Create, update, delete, load, and list chats.
- **LLM Management**: Add, update, delete, and list LLM providers and LLMs.
- **Instruction Management**: Enable or disable instruction recording; delete and list instruction records.

## Quick Start Guide

### Prerequisites

- Docker

### Installation

You can install LLMChatLinker via PyPI:

```bash
pip install llmchatlinker
```

### Setting Up the Environment

1. **Clone the repository:**
    ```bash
    git clone https://github.com/cjlee7128/LLMChatLinker.git
    cd LLMChatLinker
    ```

2. **Environment Configuration:**

    The repository includes a `.env.example` file which contains example environment variables. You need to create a `.env` file for Docker Compose to use these settings:

    ```bash
    cp .env.example .env
    ```

    Repeat the same process for the `llmchatlinker-frontend`:

    ```bash
    cd llmchatlinker-frontend
    cp .env.example .env
    cd ..
    ```
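
    The authoritative variable names are those defined in `.env.example`. Purely as an illustration (the values below are placeholders, not defaults), a `.env` for this stack typically covers the database, message-broker, and front-end settings used elsewhere in this guide:

    ```bash
    # Illustrative .env contents -- copy .env.example and adjust; the real variable names live there.
    POSTGRES_USER=myuser
    POSTGRES_PASSWORD=mypassword
    POSTGRES_DB=mydatabase
    RABBITMQ_DEFAULT_USER=myuser
    RABBITMQ_DEFAULT_PASS=mypassword
    FRONTEND_PORT=3000
    ```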

3. **Run services using Docker Compose:**

   Ensure that you have a properly configured `docker-compose.yml` file. This file should specify all necessary services like PostgreSQL, RabbitMQ, and any additional dependencies.

   Start all services defined in `docker-compose.yml`:

    ```bash
    docker compose up --build -d
    ```

4. **Pull and run MLModelScope API agent individually:**

    Execute the following command to start the MLModelScope API agent:

    ```bash
    docker run -d -p 15555:15555 xlabub/pytorch-agent-api:latest
    ```

    If you do not want the MLModelScope API agent to re-download Hugging Face models every time it starts, mount a local cache instead:

    ```bash
    docker run -d -e HF_HOME=/root/.cache/huggingface \
      -p 15555:15555 -v ~/.cache/huggingface:/root/.cache/huggingface xlabub/pytorch-agent-api:latest
    ```

    (Recommended) Once the MLModelScope API agent is running, query the API once so that the model is downloaded if it is not already present; this also warms the cache and shortens the response time of subsequent requests. For example:

    ```bash
    curl http://localhost:15555/api/chat \
      -H "Content-Type: application/json" \
      -d '{
          "model": "llama_3_2_1b_instruct",
          "messages": [
              {"role": "user", "content": "What is the longest river in the world?"}
          ]
      }'
    ```

    The above command will download the `llama_3_2_1b_instruct` model if it does not exist and generate a response for the given user message.

### Accessing the Front-end

After successfully running the `docker compose` command, you can access the front-end application via your web browser. Open the following URL:
    
```bash
http://localhost:<FRONTEND_PORT>
```

Replace `<FRONTEND_PORT>` with the actual port number specified in the `.env` file within the `LLMChatLinker` directory. This value should correspond to the port binding for your front-end application.

### Important Notes

- **Environment Variables:** Both backend and frontend parts of the application rely on certain environment variables. Ensure your `.env` files have correct values for seamless deployment.
  
- **Docker Compose:** It's crucial that your `docker-compose.yml` is configured correctly with all the required services. If you need additional environment-specific settings, update your `.env` files before running `docker compose up --build -d`.

## Deployment without Front-end (Optional)

LLMChatLinker can also be deployed without the front-end application, directly in a [Python](https://www.python.org/) environment. To do so, follow the steps below:

1. **Clone the repository:**
    ```bash
    git clone https://github.com/cjlee7128/LLMChatLinker.git
    cd LLMChatLinker
    ```

2. **Run PostgreSQL service using Docker:**

    ```bash
    docker run --name my_postgres -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=mypassword -e POSTGRES_DB=mydatabase -p 5433:5432 -d postgres:16
    ```

3. **Run RabbitMQ service using Docker:**

    ```bash
    docker run -d -e RABBITMQ_DEFAULT_USER=myuser -e RABBITMQ_DEFAULT_PASS=mypassword --name rabbitmq -p 5673:5672 -p 15673:15672 rabbitmq:3-management
    ```

4. **Run MLModelScope API agent using Docker:**

    Execute the following command to start the MLModelScope API agent:

    ```bash
    docker run -d -p 15555:15555 xlabub/pytorch-agent-api:latest
    ```

    If you do not want the MLModelScope API agent to re-download Hugging Face models every time it starts, mount a local cache instead:

    ```bash
    docker run -d -e HF_HOME=/root/.cache/huggingface \
      -p 15555:15555 -v ~/.cache/huggingface:/root/.cache/huggingface xlabub/pytorch-agent-api:latest
    ```

    (Recommended) Once the MLModelScope API agent is running, query the API once so that the model is downloaded if it is not already present; this also warms the cache and shortens the response time of subsequent requests. For example:

    ```bash
    curl http://localhost:15555/api/chat \
      -H "Content-Type: application/json" \
      -d '{
          "model": "llama_3_2_1b_instruct",
          "messages": [
              {"role": "user", "content": "What is the longest river in the world?"}
          ]
      }'
    ```

    The above command will download the `llama_3_2_1b_instruct` model if it does not exist and generate a response for the given user message.

5. **Run the LLMChatLinker service without using the API:**

    ```bash
    python -m llmchatlinker.main_without_api 
    ```

6. **(Optional) Run Example Scripts:**

    You can run the example scripts provided in the `examples` directory to interact with the LLMChatLinker service.

    ```bash
    python -m examples.1_create_user
    python -m examples.2_create_chat
    python -m examples.3_add_llm_provider
    python -m examples.4_add_llm
    python -m examples.5_generate_llm_response
    python -m examples.12_llm_response_regenerate
    ```

    Make sure to replace the placeholders with the actual IDs generated during the execution of the previous scripts.

## Usage

### Instructions

#### User-related Instructions

- **USER_CREATE**: Create a new user.
- **USER_UPDATE**: Update an existing user.
- **USER_DELETE**: Delete an existing user.
- **USER_LIST**: List all users.
- **USER_GET**: Get a user by username or ID.
- **USER_INSTRUCTION_RECORDING_ENABLE**: Enable instruction recording for a user.
- **USER_INSTRUCTION_RECORDING_DISABLE**: Disable instruction recording for a user.
- **USER_INSTRUCTION_RECORDS_DELETE**: Delete all instruction records for a user.
- **USER_INSTRUCTION_RECORDS_LIST**: List all instruction records for a user.

#### Chat-related Instructions

- **CHAT_CREATE**: Create a new chat.
- **CHAT_UPDATE**: Update an existing chat.
- **CHAT_DELETE**: Delete an existing chat.
- **CHAT_LOAD**: Load an existing chat.
- **CHAT_LIST**: List all chats.
- **CHAT_LIST_BY_USER**: List all chats for a user.

#### LLM-related Instructions

- **LLM_RESPONSE_GENERATE**: Generate a response from the LLM.
- **LLM_RESPONSE_REGENERATE**: Regenerate a response from the LLM.
- **LLM_PROVIDER_ADD**: Add a new LLM provider.
- **LLM_PROVIDER_UPDATE**: Update an existing LLM provider.
- **LLM_PROVIDER_DELETE**: Delete an LLM provider.
- **LLM_PROVIDER_LIST**: List all LLM providers.
- **LLM_ADD**: Add a new LLM.
- **LLM_UPDATE**: Update an existing LLM.
- **LLM_DELETE**: Delete an LLM.
- **LLM_LIST**: List all LLMs.
- **LLM_LIST_BY_PROVIDER**: List all LLMs for a provider.

### Examples

Below are some example usage scripts to interact with LLMChatLinker.
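
Each example imports `read_config` and `write_config` from a local `config_utils` module that persists the generated IDs in a `config.json` file between scripts. If you need to recreate that helper yourself, a minimal sketch (file name and location assumed) could look like this:

```python
# config_utils.py -- minimal sketch of the helper imported by the example scripts.
# Assumes a config.json file in the current working directory.
import json
import os

CONFIG_FILE = "config.json"

def read_config():
    """Return previously saved IDs as a dict, or an empty dict if no config exists yet."""
    if not os.path.exists(CONFIG_FILE):
        return {}
    with open(CONFIG_FILE, "r") as f:
        return json.load(f)

def write_config(config):
    """Persist the dict of IDs back to config.json."""
    with open(CONFIG_FILE, "w") as f:
        json.dump(config, f, indent=2)
```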

#### 1. Create a User

User creation requires a username and a profile. The response will contain the user ID. The user ID is required for subsequent instructions.

```python
from llmchatlinker.client import LLMChatLinkerClient
from config_utils import read_config, write_config

def main():
    client = LLMChatLinkerClient()
    response = client.create_user(username="john_doe", profile="Sample profile")
    print(f" [x] Received {response}")
    user_id = response['data']['user']['user_id']
    print(f" [x] User ID: {user_id}")

    # Save user_id to config.json
    config = read_config()
    config['user_id'] = user_id
    write_config(config)

if __name__ == "__main__":
    main()
```

#### 2. Create a Chat

Chat creation requires a title and a list of user_ids. The response will contain the chat ID. The chat ID is required for subsequent instructions.

You can get the User ID from the response of the previous instruction.

```python
from llmchatlinker.client import LLMChatLinkerClient
from config_utils import read_config, write_config

def main():
    client = LLMChatLinkerClient()

    # Load user_id from config.json
    config = read_config()
    user_id = config.get('user_id')

    response = client.create_chat(title="Sample Chat", user_ids=[user_id])
    print(f" [x] Received {response}")
    chat_id = response['data']['chat']['chat_id']
    print(f" [x] Chat ID: {chat_id}")

    # Save chat_id to config.json
    config['chat_id'] = chat_id
    write_config(config)

if __name__ == "__main__":
    main()
```

#### 3. Add an LLM Provider

LLM Provider addition requires a name and an API endpoint. The response will contain the LLM Provider ID. The LLM Provider ID is required for subsequent instructions.

```python
from llmchatlinker.client import LLMChatLinkerClient
from config_utils import read_config, write_config

def main():
    client = LLMChatLinkerClient()
    response = client.add_llm_provider(name="MLModelScope", api_endpoint="http://localhost:15555/api/chat")
    print(f" [x] Received {response}")
    provider_id = response['data']['provider']['provider_id']
    print(f" [x] LLM Provider ID: {provider_id}")

    # Save provider_id to config.json
    config = read_config()
    config['provider_id'] = provider_id
    write_config(config)

if __name__ == "__main__":
    main()
```

#### 4. Add an LLM

LLM addition requires an LLM Provider ID and an LLM name. The response will contain the LLM ID. The LLM ID is required for subsequent instructions.

You can get the LLM Provider ID from the response of the previous instruction.

```python
from llmchatlinker.client import LLMChatLinkerClient
from config_utils import read_config, write_config

def main():
    client = LLMChatLinkerClient()

    # Load provider_id from config.json
    config = read_config()
    provider_id = config.get('provider_id')

    response = client.add_llm(provider_id=provider_id, llm_name="llama_3_2_1b_instruct")
    print(f" [x] Received {response}")
    llm_id = response['data']['llm']['llm_id']
    print(f" [x] LLM ID: {llm_id}")

    # Save llm_id to config.json
    config['llm_id'] = llm_id
    write_config(config)

if __name__ == "__main__":
    main()
```

#### 5. Generate an LLM Response

LLM response generation requires a User ID, Chat ID, LLM Provider ID, LLM ID, and user input. The response will contain the message ID. The message ID is required for subsequent instructions.

You can get the User ID, Chat ID, LLM Provider ID, and LLM ID from the responses of the previous instructions.

```python
from llmchatlinker.client import LLMChatLinkerClient
from config_utils import read_config, write_config

def main():
    client = LLMChatLinkerClient()

    # Load necessary IDs from config.json
    config = read_config()
    user_id = config.get('user_id')
    chat_id = config.get('chat_id')
    provider_id = config.get('provider_id')
    llm_id = config.get('llm_id')

    response = client.generate_llm_response(
        user_id=user_id,
        chat_id=chat_id,
        provider_id=provider_id,
        llm_id=llm_id,
        user_input="What is the longest river in the world?"
    )
    print(f" [x] Received {response}")
    message_id = response['data']['llm_response']['message_id']
    print(f" [x] Message ID: {message_id}")

    # Save message_id to config.json
    config['message_id'] = message_id
    write_config(config)

if __name__ == "__main__":
    main()
```

#### 6. Regenerate an LLM Response

LLM response regeneration requires a Message ID. The response will contain the regenerated response.

You can get the Message ID from the response of the previous instruction.

```python
from llmchatlinker.client import LLMChatLinkerClient
from config_utils import read_config, write_config

def main():
    client = LLMChatLinkerClient()

    # Load message_id from config.json
    config = read_config()
    message_id = config.get('message_id')

    response = client.regenerate_llm_response(message_id=message_id)
    print(f" [x] Received {response}")
    new_message_id = response['data']['llm_response']['message_id']
    print(f" [x] New Message ID: {new_message_id}")

    # Save new_message_id to config.json
    config['message_id'] = new_message_id
    write_config(config)

if __name__ == "__main__":
    main()
```
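
Putting the steps together, the same flow can be chained in one script. The sketch below reuses only the client methods and response fields shown in the examples above (error handling omitted):

```python
from llmchatlinker.client import LLMChatLinkerClient

def main():
    client = LLMChatLinkerClient()

    # 1. Create a user and a chat for that user.
    user = client.create_user(username="john_doe", profile="Sample profile")
    user_id = user['data']['user']['user_id']
    chat = client.create_chat(title="Sample Chat", user_ids=[user_id])
    chat_id = chat['data']['chat']['chat_id']

    # 2. Register the LLM provider and the LLM to use.
    provider = client.add_llm_provider(name="MLModelScope", api_endpoint="http://localhost:15555/api/chat")
    provider_id = provider['data']['provider']['provider_id']
    llm = client.add_llm(provider_id=provider_id, llm_name="llama_3_2_1b_instruct")
    llm_id = llm['data']['llm']['llm_id']

    # 3. Generate a response, then regenerate it from the stored message.
    response = client.generate_llm_response(
        user_id=user_id,
        chat_id=chat_id,
        provider_id=provider_id,
        llm_id=llm_id,
        user_input="What is the longest river in the world?",
    )
    message_id = response['data']['llm_response']['message_id']
    regenerated = client.regenerate_llm_response(message_id=message_id)
    print(f" [x] Regenerated Message ID: {regenerated['data']['llm_response']['message_id']}")

if __name__ == "__main__":
    main()
```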

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "llmchatlinker",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": "Changjae Lee <changjae@buffalo.edu>",
    "keywords": "LLM, middleware, SDK, chat, API, python-package",
    "author": null,
    "author_email": "Changjae Lee <changjae@buffalo.edu>",
    "download_url": "https://files.pythonhosted.org/packages/f1/fb/281a7cca9ff96444237683b86b24d9eea392a592534186f3ad293e1514bb/llmchatlinker-0.1.3.tar.gz",
    "platform": null,
    "description": "# LLMChatLinker\n\nLLMChatLinker is a Middleware SDK designed to facilitate interaction between clients and Large Language Models (LLMs). The SDK acts as an intermediary between the client and the LLM(s), allowing multiple users to communicate with the LLM(s) simultaneously through the client's front-end.\n\n## Architecture\n\nThe interaction flow follows the fetch-decode-execute-store cycle, similar to the architecture of a CPU (CISC/RISC):\n\n1. **Fetch**: The Orchestrator (acting as the CPU) fetches the instructions from the instruction queue.\n2. **Decode**: The Control Unit decodes the fetched instructions. Depending on the instruction type, it decodes the instruction to be directly executed later.\n3. **Execute**: The decoded instruction is executed by the relevant manage unit (User, Chat, LLM, or Database Manage Unit).\n4. **Store**: The result from the execution is stored back in the result queue, ensuring each user request is immediately linked to its corresponding result.\n\n## Components\n\n- **Client**: The front-end that interacts with users.\n- **Middleware SDK (LLMChatLinker)**: Facilitates communication between the client and LLM(s). Composed of various units that mimic the functionality of CPU components:\n  - **Orchestrator**: Acts as the CPU, fetching, decoding, executing, and storing instructions.\n  - **Control Unit**: Decodes instructions fetched by the Orchestrator.\n  - **User Manage Unit**: Manages user-related instructions.\n  - **Chat Manage Unit**: Manages chat-related instructions.\n  - **LLM Manage Unit**: Manages LLM-related instructions.\n  - **Database Manage Unit**: Manages database interactions.\n- **LLM(s)**: Large Language Models providing responses to user queries.\n\n## Features\n\n- **Fetch-Decode-Execute-Store Cycle**: Adopts the CPU-like mechanism to process instructions efficiently.\n- **User Management**: Create, update, delete, and list users.\n- **Chat Management**: Create, update, delete, load, and list chats.\n- **LLM Management**: Add, update, delete, list LLM providers and LLMs.\n- **Instruction Management**: Enable/disable instruction recording, delete and list instruction records.\n\n## Quick Start Guide\n\n### Prerequisites\n\n- Docker\n\n### Installation\n\nYou can install LLMChatLinker via PyPI:\n\n```bash\npip install llmchatlinker\n```\n\n### Setting Up the Environment\n\n1. **Clone the repository:**\n    ```bash\n    git clone https://github.com/cjlee7128/LLMChatLinker.git\n    cd LLMChatLinker\n    ```\n\n2. **Environment Configuration:**\n\n    The repository includes a `.env.example` file which contains example environment variables. You need to create a `.env` file for Docker Compose to use these settings:\n\n    ```bash\n    cp .env.example .env\n    ```\n\n    Repeat the same process for the `llmchatlinker-frontend`:\n\n    ```bash\n    cd llmchatlinker-frontend\n    cp .env.example .env\n    cd ..\n    ```\n\n3. **Run services using Docker Compose:**\n\n   Ensure that you have a properly configured `docker-compose.yml` file. This file should specify all necessary services like PostgreSQL, RabbitMQ, and any additional dependencies.\n\n   Start all services defined in `docker-compose.yml`:\n\n    ```bash\n    docker compose up --build -d\n    ```\n\n4. 
**Pull and run MLModelScope API agent individually:**\n\n    Execute the following command to start the MLModelScope API agent:\n\n    ```bash\n    docker run -d -p 15555:15555 xlabub/pytorch-agent-api:latest\n    ```\n\n    If you want MLModelScope API agent not to download huggingface models every time, you can use the following command:\n\n    ```bash\n    docker run -d -e HF_HOME=/root/.cache/huggingface \\\n      -p 15555:15555 -v ~/.cache/huggingface:/root/.cache/huggingface xlabub/pytorch-agent-api:latest\n    ```\n\n    (Recommended) After running the MLModelScope API agent, you can query the API to download the models if not existing and to reduce the next response time even if the model is already downloaded. For example:\n\n    ```bash\n    curl http://localhost:15555/api/chat \\\n      -H \"Content-Type: application/json\" \\\n      -d '{\n          \"model\": \"llama_3_2_1b_instruct\",\n          \"messages\": [\n              {\"role\": \"user\", \"content\": \"What is the longest river in the world?\"}\n          ]\n      }'\n    ```\n\n    The above command will download the `llama_3_2_1b_instruct` model if it does not exist and generate a response for the given user message.\n\n### Accessing the Front-end\n\nAfter successfully running the `docker compose` command, you can access the front-end application via your web browser. Open the following URL:\n    \n```bash\nhttp://localhost:<FRONTEND_PORT>\n```\n\nReplace `<FRONTEND_PORT>` with the actual port number specified in the `.env` file within the `LLMChatLinker` directory. This value should correspond to the port binding for your front-end application.\n\n### Important Notes\n\n- **Environment Variables:** Both backend and frontend parts of the application rely on certain environment variables. Ensure your `.env` files have correct values for seamless deployment.\n  \n- **Docker Compose:** It's crucial that your `docker-compose.yml` is configured correctly with all the required services. If you need additional environment-specific settings, update your `.env` files before running `docker compose up --build -d`.\n\n## Deployment without Front-end (Optional)\n\nThe LLMChatLinker can be deployed without the front-end application in a [Python](https://www.python.org/) environment.\n\nIf you want to deploy the LLMChatLinker without the front-end, you can follow the steps below:\n\n1. **Clone the repository:**\n    ```bash\n    git clone https://github.com/cjlee7128/LLMChatLinker.git\n    cd LLMChatLinker\n    ```\n\n2. **Run PostgreSQL service using Docker:**\n\n    ```bash\n    docker run --name my_postgres -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=mypassword -e POSTGRES_DB=mydatabase -p 5433:5432 -d postgres:16\n    ```\n\n3. **Run RabbitMQ service using Docker:**\n\n    ```bash\n    docker run -d -e RABBITMQ_DEFAULT_USER=myuser -e RABBITMQ_DEFAULT_PASS=mypassword --name rabbitmq -p 5673:5672 -p 15673:15672 rabbitmq:3-management\n    ```\n\n4. 
**Run MLModelScope API agent using Docker:**\n\n    Execute the following command to start the MLModelScope API agent:\n\n    ```bash\n    docker run -d -p 15555:15555 xlabub/pytorch-agent-api:latest\n    ```\n\n    If you want MLModelScope API agent not to download huggingface models every time, you can use the following command:\n\n    ```bash\n    docker run -d -e HF_HOME=/root/.cache/huggingface \\\n      -p 15555:15555 -v ~/.cache/huggingface:/root/.cache/huggingface xlabub/pytorch-agent-api:latest\n    ```\n\n    (Recommended) After running the MLModelScope API agent, you can query the API to download the models if not existing and to reduce the next response time even if the model is already downloaded. For example:\n\n    ```bash\n    curl http://localhost:15555/api/chat \\\n      -H \"Content-Type: application/json\" \\\n      -d '{\n          \"model\": \"llama_3_2_1b_instruct\",\n          \"messages\": [\n              {\"role\": \"user\", \"content\": \"What is the longest river in the world?\"}\n          ]\n      }'\n    ```\n\n    The above command will download the `llama_3_2_1b_instruct` model if it does not exist and generate a response for the given user message.\n\n5. **Run the LLMChatLinker service without using api:**\n\n    ```bash\n    python -m llmchatlinker.main_without_api \n    ```\n\n6. **(Optional) Run Example Scripts:**\n\n    You can run the example scripts provided in the `examples` directory to interact with the LLMChatLinker service.\n\n    ```bash\n    python -m examples.1_create_user\n    python -m examples.2_create_chat\n    python -m examples.3_add_llm_provider\n    python -m examples.4_add_llm\n    python -m examples.5_generate_llm_response\n    python -m examples.12_llm_response_regenerate\n    ```\n\n    Make sure to replace the placeholders with the actual IDs generated during the execution of the previous scripts.\n\n## Usage\n\n### Instructions\n\n#### User-related Instructions\n\n- **USER_CREATE**: Create a new user.\n- **USER_UPDATE**: Update an existing user.\n- **USER_DELETE**: Delete an existing user.\n- **USER_LIST**: List all users.\n- **USER_GET**: Get a user by username or ID.\n- **USER_INSTRUCTION_RECORDING_ENABLE**: Enable instruction recording for a user.\n- **USER_INSTRUCTION_RECORDING_DISABLE**: Disable instruction recording for a user.\n- **USER_INSTRUCTION_RECORDS_DELETE**: Delete all instruction records for a user.\n- **USER_INSTRUCTION_RECORDS_LIST**: List all instruction records for a user.\n\n#### Chat-related Instructions\n\n- **CHAT_CREATE**: Create a new chat.\n- **CHAT_UPDATE**: Update an existing chat.\n- **CHAT_DELETE**: Delete an existing chat.\n- **CHAT_LOAD**: Load an existing chat.\n- **CHAT_LIST**: List all chats.\n- **CHAT_LIST_BY_USER**: List all chats for a user.\n\n#### LLM-related Instructions\n\n- **LLM_RESPONSE_GENERATE**: Generate a response from the LLM.\n- **LLM_RESPONSE_REGENERATE**: Regenerate a response from the LLM.\n- **LLM_PROVIDER_ADD**: Add a new LLM provider.\n- **LLM_PROVIDER_UPDATE**: Update an existing LLM provider.\n- **LLM_PROVIDER_DELETE**: Delete an LLM provider.\n- **LLM_PROVIDER_LIST**: List all LLM providers.\n- **LLM_ADD**: Add a new LLM.\n- **LLM_UPDATE**: Update an existing LLM.\n- **LLM_DELETE**: Delete an LLM.\n- **LLM_LIST**: List all LLMs.\n- **LLM_LIST_BY_PROVIDER**: List all LLMs for a provider.\n\n### Examples\n\nBelow are some example usage scripts to interact with LLMChatLinker.\n\n#### 1. Create a User\n\nUser creation requires a username and a profile. 
The response will contain the user ID. The user ID is required for subsequent instructions.\n\n```python\nfrom llmchatlinker.client import LLMChatLinkerClient\nfrom config_utils import read_config, write_config\n\ndef main():\n    client = LLMChatLinkerClient()\n    response = client.create_user(username=\"john_doe\", profile=\"Sample profile\")\n    print(f\" [x] Received {response}\")\n    user_id = response['data']['user']['user_id']\n    print(f\" [x] User ID: {user_id}\")\n\n    # Save user_id to config.json\n    config = read_config()\n    config['user_id'] = user_id\n    write_config(config)\n\nif __name__ == \"__main__\":\n    main()\n```\n\n#### 2. Create a Chat\n\nChat creation requires a title and a list of user_ids. The response will contain the chat ID. The chat ID is required for subsequent instructions.\n\nYou can get the User ID from the response of the previous instruction.\n\n```python\nfrom llmchatlinker.client import LLMChatLinkerClient\nfrom config_utils import read_config, write_config\n\ndef main():\n    client = LLMChatLinkerClient()\n\n    # Load user_id from config.json\n    config = read_config()\n    user_id = config.get('user_id')\n\n    response = client.create_chat(title=\"Sample Chat\", user_ids=[user_id])\n    print(f\" [x] Received {response}\")\n    chat_id = response['data']['chat']['chat_id']\n    print(f\" [x] Chat ID: {chat_id}\")\n\n    # Save chat_id to config.json\n    config['chat_id'] = chat_id\n    write_config(config)\n\nif __name__ == \"__main__\":\n    main()\n```\n\n#### 3. Add an LLM Provider\n\nLLM Provider addition requires a name and an API endpoint. The response will contain the LLM Provider ID. The LLM Provider ID is required for subsequent instructions.\n\n```python\nfrom llmchatlinker.client import LLMChatLinkerClient\nfrom config_utils import read_config, write_config\n\ndef main():\n    client = LLMChatLinkerClient()\n    response = client.add_llm_provider(name=\"MLModelScope\", api_endpoint=\"http://localhost:15555/api/chat\")\n    print(f\" [x] Received {response}\")\n    provider_id = response['data']['provider']['provider_id']\n    print(f\" [x] LLM Provider ID: {provider_id}\")\n\n    # Save provider_id to config.json\n    config = read_config()\n    config['provider_id'] = provider_id\n    write_config(config)\n\nif __name__ == \"__main__\":\n    main()\n```\n\n#### 4. Add an LLM\n\nLLM addition requires an LLM Provider ID and an LLM name. The response will contain the LLM ID. The LLM ID is required for subsequent instructions.\n\nYou can get the LLM Provider ID from the response of the previous instruction.\n\n```python\nfrom llmchatlinker.client import LLMChatLinkerClient\nfrom config_utils import read_config, write_config\n\ndef main():\n    client = LLMChatLinkerClient()\n\n    # Load provider_id from config.json\n    config = read_config()\n    provider_id = config.get('provider_id')\n\n    response = client.add_llm(provider_id=provider_id, llm_name=\"llama_3_2_1b_instruct\")\n    print(f\" [x] Received {response}\")\n    llm_id = response['data']['llm']['llm_id']\n    print(f\" [x] LLM ID: {llm_id}\")\n\n    # Save llm_id to config.json\n    config['llm_id'] = llm_id\n    write_config(config)\n\nif __name__ == \"__main__\":\n    main()\n```\n\n#### 5. Generate an LLM Response\n\nLLM response generation requires a User ID, Chat ID, LLM Provider ID, LLM ID, and user input. The response will contain the message ID. 
The message ID is required for subsequent instructions.\n\nYou can get the User ID, Chat ID, LLM Provider ID, and LLM ID from the responses of the previous instructions.\n\n```python\nfrom llmchatlinker.client import LLMChatLinkerClient\nfrom config_utils import read_config, write_config\n\ndef main():\n    client = LLMChatLinkerClient()\n\n    # Load necessary IDs from config.json\n    config = read_config()\n    user_id = config.get('user_id')\n    chat_id = config.get('chat_id')\n    provider_id = config.get('provider_id')\n    llm_id = config.get('llm_id')\n\n    response = client.generate_llm_response(\n        user_id=user_id,\n        chat_id=chat_id,\n        provider_id=provider_id,\n        llm_id=llm_id,\n        user_input=\"What is the longest river in the world?\"\n    )\n    print(f\" [x] Received {response}\")\n    message_id = response['data']['llm_response']['message_id']\n    print(f\" [x] Message ID: {message_id}\")\n\n    # Save message_id to config.json\n    config['message_id'] = message_id\n    write_config(config)\n\nif __name__ == \"__main__\":\n    main()\n```\n\n#### 6. Regenerate an LLM Response\n\nLLM response reg-eneration requires a Message ID. The response will contain the re-generated response.\n\nYou can get the Message ID from the response of the previous instruction.\n\n```python\nfrom llmchatlinker.client import LLMChatLinkerClient\nfrom config_utils import read_config, write_config\n\ndef main():\n    client = LLMChatLinkerClient()\n\n    # Load message_id from config.json\n    config = read_config()\n    message_id = config.get('message_id')\n\n    response = client.regenerate_llm_response(message_id=message_id)\n    print(f\" [x] Received {response}\")\n    new_message_id = response['data']['llm_response']['message_id']\n    print(f\" [x] New Message ID: {new_message_id}\")\n\n    # Save new_message_id to config.json\n    config['message_id'] = new_message_id\n    write_config(config)\n\nif __name__ == \"__main__\":\n    main()\n```\n",
    "bugtrack_url": null,
    "license": "MIT License  Copyright (c) 2024 Changjae Lee  Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:  The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
    "summary": "LLMChatLinker is a Middleware SDK designed to facilitate interaction between clients and Large Language Models (LLMs).",
    "version": "0.1.3",
    "project_urls": {
        "Documentation": "https://github.com/xlab-ub/LLMChatLinker#readme",
        "Issues": "https://github.com/xlab-ub/LLMChatLinker/issues",
        "Repository": "https://github.com/xlab-ub/LLMChatLinker.git"
    },
    "split_keywords": [
        "llm",
        " middleware",
        " sdk",
        " chat",
        " api",
        " python-package"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "28cee9f713e6c075af36aefdfd371d06d6c7f0f743fed1c313b873a2c0e9b995",
                "md5": "6e230a68581c0fec072c930e51bb062d",
                "sha256": "2e3b242be91acaff66fbd87c81b038d54613c48e6316f3fdaa60c243c792fc4c"
            },
            "downloads": -1,
            "filename": "llmchatlinker-0.1.3-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "6e230a68581c0fec072c930e51bb062d",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 26485,
            "upload_time": "2024-12-11T04:31:12",
            "upload_time_iso_8601": "2024-12-11T04:31:12.382581Z",
            "url": "https://files.pythonhosted.org/packages/28/ce/e9f713e6c075af36aefdfd371d06d6c7f0f743fed1c313b873a2c0e9b995/llmchatlinker-0.1.3-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f1fb281a7cca9ff96444237683b86b24d9eea392a592534186f3ad293e1514bb",
                "md5": "5531115b081d4206a6e94c46c89905d2",
                "sha256": "38b81893401ec5155173e514cc8252f9c9b39665506b9549f94d2609b8c1ef94"
            },
            "downloads": -1,
            "filename": "llmchatlinker-0.1.3.tar.gz",
            "has_sig": false,
            "md5_digest": "5531115b081d4206a6e94c46c89905d2",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 27126,
            "upload_time": "2024-12-11T04:31:13",
            "upload_time_iso_8601": "2024-12-11T04:31:13.509437Z",
            "url": "https://files.pythonhosted.org/packages/f1/fb/281a7cca9ff96444237683b86b24d9eea392a592534186f3ad293e1514bb/llmchatlinker-0.1.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-12-11 04:31:13",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "xlab-ub",
    "github_project": "LLMChatLinker#readme",
    "github_not_found": true,
    "lcname": "llmchatlinker"
}
        