# LangGraph OpenAI Serve
A package that provides an OpenAI-compatible API for LangGraph instances.
## Features
- Expose your LangGraph instances through an OpenAI-compatible API
- Register multiple graphs and map them to different model names
- Use with any FastAPI application
- Support for both streaming and non-streaming completions
## Installation
```bash
# Using uv
uv add langgraph-openai-serve
# Using pip
pip install langgraph-openai-serve
```
## Quick Start
Here's a simple example of how to use LangGraph OpenAI Serve:
```python
from langgraph_openai_serve import LangchainOpenaiApiServe, GraphRegistry, GraphConfig

# Import your LangGraph instances
from your_graphs import simple_graph, advanced_graph

# Create a GraphRegistry that maps model names to graphs
graph_registry = GraphRegistry(
    registry={
        "simple_graph": GraphConfig(graph=simple_graph),
        "advanced_graph": GraphConfig(graph=advanced_graph),
    }
)

graph_serve = LangchainOpenaiApiServe(
    graphs=graph_registry,
)

# Bind the OpenAI-compatible endpoints
graph_serve.bind_openai_chat_completion(prefix="/v1")

# Run the app with uvicorn
if __name__ == "__main__":
    import uvicorn

    uvicorn.run(graph_serve.app, host="0.0.0.0", port=8000)
```
Usage with your own FastAPI app is also supported:
```python
from fastapi import FastAPI
from langgraph_openai_serve import LangchainOpenaiApiServe, GraphRegistry, GraphConfig

# Import your LangGraph instances
from your_graphs import simple_graph, advanced_graph

# Create a FastAPI app
app = FastAPI(
    title="LangGraph OpenAI API",
    version="1.0",
    description="OpenAI API exposing LangGraph agents",
)

# Create a GraphRegistry
graph_registry = GraphRegistry(
    registry={
        "simple_graph": GraphConfig(graph=simple_graph),
        "advanced_graph": GraphConfig(graph=advanced_graph),
    }
)

graph_serve = LangchainOpenaiApiServe(
    app=app,
    graphs=graph_registry,
)

# Bind the OpenAI-compatible endpoints
graph_serve.bind_openai_chat_completion(prefix="/v1")

# Run the app with uvicorn
if __name__ == "__main__":
    import uvicorn

    uvicorn.run(graph_serve.app, host="0.0.0.0", port=8000)
```
## Using with the OpenAI Client
Once your API is running, you can use any OpenAI-compatible client to interact with it:
```python
from openai import OpenAI

# Create a client pointing to your API
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="any-value",  # The API key is not verified
)

# Use a specific graph by passing its registry name as the model
response = client.chat.completions.create(
    model="simple_graph",  # This maps to the graph name in your registry
    messages=[
        {"role": "user", "content": "Hello, how can you help me today?"}
    ],
)
print(response.choices[0].message.content)

# Streaming is also supported
stream = client.chat.completions.create(
    model="advanced_graph",
    messages=[
        {"role": "user", "content": "Write a short poem about AI."}
    ],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
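Under the hood, the OpenAI client simply POSTs JSON to `/v1/chat/completions`, so any HTTP client works. A stdlib-only sketch of the request body (the field names follow the OpenAI chat completions wire format; the graph name is taken from the registry example above):

```python
import json

# Body of a POST to http://localhost:8000/v1/chat/completions
payload = {
    "model": "simple_graph",  # graph name from your registry
    "messages": [
        {"role": "user", "content": "Hello, how can you help me today?"}
    ],
    "stream": False,  # set to True for server-sent event chunks
}

body = json.dumps(payload)
print(body)
```

Setting `"stream": true` makes the server respond with OpenAI-style streaming chunks instead of a single completion object.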
## Docker Usage
To run with Docker:
```bash
# Start the server
docker compose up -d langgraph-openai-serve-dev
# For a complete example with open-webui
docker compose up -d open-webui
```
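The service names above come from the project's `docker-compose.yml`. A hypothetical fragment showing roughly what such a file could look like (image names, ports, and environment variables are assumptions for illustration, not the project's actual configuration):

```yaml
services:
  langgraph-openai-serve-dev:
    build: .
    ports:
      - "8000:8000"

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      # Point open-webui at the OpenAI-compatible endpoint served by this package
      - OPENAI_API_BASE_URL=http://langgraph-openai-serve-dev:8000/v1
    depends_on:
      - langgraph-openai-serve-dev
```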