# LLM Adaptive Router
LLM Adaptive Router is a Python package for dynamic model selection based on query content. It uses efficient vector search for initial categorization and falls back to LLM-based fine-grained selection for complex cases. The router learns from feedback over time, making it suitable for a wide range of applications.
## Features
- Dynamic model selection based on query content
- Efficient vector search for initial categorization
- LLM-based fine-grained selection for complex cases
- Adaptive learning from feedback
- Flexible configuration of routes and models
- Easy integration with LangChain and various LLM providers
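
The first-stage vector search can be pictured with a toy sketch. The code below is illustrative only: the package itself uses a LangChain vector store with real embeddings, and `embed`, `cosine`, and `nearest_route` are stand-ins for that machinery, not package APIs.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_route(query: str, examples: dict) -> str:
    # Route to whichever route owns the example sentence most similar to the query.
    q = embed(query)
    return max(
        ((route, max(cosine(q, embed(s)) for s in sents))
         for route, sents in examples.items()),
        key=lambda pair: pair[1],
    )[0]

examples = {
    "general": ["What is the capital of France?"],
    "math": ["Solve this differential equation."],
}
print(nearest_route("Please solve this equation", examples))  # -> math
```

The real router improves on this by escalating low-confidence matches to an LLM for fine-grained selection.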
## Installation
You can install LLM Adaptive Router using pip:
```bash
pip3 install llm-adaptive-router
```
## Quick Start
Here's a basic example of how to use LLM Adaptive Router:
```python
from llm_adaptive_router import AdaptiveRouter, RouteMetadata
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from dotenv import load_dotenv
load_dotenv()
gpt_3_5_turbo = ChatOpenAI(model="gpt-3.5-turbo")
mini = ChatOpenAI(model="gpt-4o-mini")
gpt_4 = ChatOpenAI(model="gpt-4")
routes = {
    "general": RouteMetadata(
        invoker=gpt_3_5_turbo,
        capabilities=["general knowledge"],
        cost=0.002,
        example_sentences=["What is the capital of France?", "Explain photosynthesis."]
    ),
    "mini": RouteMetadata(
        invoker=mini,
        capabilities=["general knowledge"],
        cost=0.002,
        example_sentences=["What is the capital of France?", "Explain photosynthesis."]
    ),
    "math": RouteMetadata(
        invoker=gpt_4,
        capabilities=["advanced math", "problem solving"],
        cost=0.01,
        example_sentences=["Solve this differential equation.", "Prove the Pythagorean theorem."]
    )
}

llm = ChatOpenAI(model="gpt-3.5-turbo")

router = AdaptiveRouter(
    vectorstore=Chroma(embedding_function=OpenAIEmbeddings()),
    llm=llm,
    embeddings=OpenAIEmbeddings(),
    routes=routes
)
query = "How are you"
query2 = "Write a Python function to hello world"
selected_model_route = router.route(query)
selected_model_name = selected_model_route
print(selected_model_name)
invoker = selected_model_route.invoker
response = invoker.invoke(query)
print(f"Response: {response}")
```
## Detailed Usage
### Creating Route Metadata
Use the `create_route_metadata` function to define routes:
```python
from llm_adaptive_router import create_route_metadata
route = create_route_metadata(
    invoker=model_function,
    capabilities=["capability1", "capability2"],
    cost=0.01,
    example_sentences=["Example query 1", "Example query 2"],
    additional_info={"key": "value"}
)
```
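
The `cost` field lets callers reason about routing cost. As a hypothetical illustration (this is not the package's internal logic, and `cheapest_capable` is not a package API), one could prefer the cheapest route among those whose declared capabilities cover the task:

```python
# Hypothetical helper: among routes listing the required capability,
# pick the one with the lowest per-call cost.
def cheapest_capable(routes: dict, required: str) -> str:
    capable = {name: meta for name, meta in routes.items()
               if required in meta["capabilities"]}
    if not capable:
        raise ValueError(f"no route offers {required!r}")
    return min(capable, key=lambda name: capable[name]["cost"])

routes = {
    "general": {"capabilities": ["general knowledge"], "cost": 0.002},
    "math": {"capabilities": ["advanced math", "problem solving"], "cost": 0.01},
}
print(cheapest_capable(routes, "general knowledge"))  # -> general
```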
### Initializing the AdaptiveRouter
Create an instance of `AdaptiveRouter` with your configured routes:
```python
router = AdaptiveRouter(
    vectorstore=your_vectorstore,
    llm=your_llm,
    embeddings=your_embeddings,
    routes=your_routes
)
```
### Routing Queries
Use the `route` method to select the appropriate model for a query:
```python
selected_model_route = router.route("Your query here")
selected_model_name = selected_model_route.model
invoker = selected_model_route.invoker
response = invoker.invoke("Your query here")
```
### Adding Feedback
Improve the router's performance by providing feedback:
```python
router.add_feedback(query, selected_model, performance_score)
```
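
`add_feedback` expects a numeric performance score, and the scale is up to the caller. As a hedged sketch, here is one hypothetical way to turn a 1–5 user rating and the route's per-call cost into a score in [0, 1] before passing it to `router.add_feedback(query, selected_model, score)` — the helper below is illustrative, not part of the package:

```python
def performance_score(rating: int, cost: float, cost_weight: float = 0.1) -> float:
    """Map a 1-5 user rating to [0, 1], minus a small penalty for expensive routes."""
    quality = (rating - 1) / 4                             # 1 -> 0.0, 5 -> 1.0
    penalty = min(cost_weight, cost_weight * cost / 0.01)  # capped cost penalty
    return max(0.0, quality - penalty)

print(performance_score(5, cost=0.002))  # cheap route, top rating -> 0.98
```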
### Advanced Features
- Custom Vector Stores: LLM Adaptive Router supports various vector stores. You can use any vector store that implements the `VectorStore` interface from LangChain.
- Dynamic Route Updates: You can add or remove routes dynamically:
```python
router.add_route("new_route", new_route_metadata)
router.remove_route("old_route")
```
- Adjusting Router Behavior: Fine-tune the router's behavior:
```python
router.set_complexity_threshold(0.8)
router.set_update_frequency(200)
```
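
The names suggest the intended behavior: when the vector search's best match falls below the complexity threshold, the router escalates to LLM-based selection, and the index is refreshed after a set number of feedback events. A hypothetical sketch of those two decisions (illustrative names only, not the package's internals):

```python
def should_escalate(best_similarity: float, threshold: float = 0.8) -> bool:
    """Escalate to LLM-based selection when vector search is not confident enough."""
    return best_similarity < threshold

class FeedbackCounter:
    """Trigger an index refresh every `frequency` feedback events (illustrative)."""
    def __init__(self, frequency: int = 200):
        self.frequency = frequency
        self.count = 0

    def record(self) -> bool:
        self.count += 1
        return self.count % self.frequency == 0

print(should_escalate(0.65))  # -> True: fall back to the LLM selector
print(should_escalate(0.92))  # -> False: trust the vector-search route
```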