Name | llm-api-server
Version | 0.1.0
Summary | LLM plugin to expose a FastAPI server with compatible APIs for popular LLM clients
upload_time | 2025-08-12 14:19:12
home_page | None
maintainer | None
docs_url | None
author | None
requires_python | >=3.9
license | MIT
keywords | api, fastapi, llm, openai, server
requirements | No requirements were recorded.
# llm-api
A FastAPI-based server plugin for the [`llm`](https://github.com/simonw/llm) CLI that exposes LLM models through API interfaces compatible with popular LLM clients.
This allows you to use local or remote LLM models with any client that expects standard LLM API formats.
## Installation
### As an LLM Plugin
Install this plugin to [`llm`](https://llm.datasette.io/):
```sh
# Install from PyPI (once published)
llm install llm-api
# Or install from GitHub
llm install git+https://github.com/danielcorin/llm-api.git
# Or install from local development directory
cd /path/to/llm-api
llm install -e .
```
Verify installation:
```sh
# Check the plugin is installed
llm plugins
# The 'api' command should be available
llm api --help
```
### Development Installation
For development, use [`uv`](https://github.com/astral-sh/uv):
```sh
# Clone the repository
git clone https://github.com/danielcorin/llm-api.git
cd llm-api
# Create a virtual environment and install dependencies
uv venv
source .venv/bin/activate
uv sync --dev
# Install as an editable LLM plugin
llm install -e .
```
## Usage
Start the API server:
```sh
llm api --port 8000
```
The server provides OpenAI [Chat Completions API](https://platform.openai.com/docs/api-reference/chat) endpoints:
- `GET /v1/models` - List available models (see the smoke test below)
- `POST /v1/chat/completions` - Create chat completions with:
  - Streaming support
  - Tool/function calling (for models with `supports_tools=True`)
  - Structured output via `response_format` (for models with `supports_schema=True`)
  - Conversation history with tool results
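
For a quick smoke test of `GET /v1/models`, you can list the exposed models with the OpenAI Python SDK (a minimal sketch; it assumes the server is already running on port 8000):

```python
from openai import OpenAI

# Point the client at the local llm-api server; no real key is needed
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# GET /v1/models - each entry corresponds to a model known to the llm CLI
for model in client.models.list():
    print(model.id)
```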
## Features
### Basic Usage
```python
from openai import OpenAI

# Point the client to your local llm-api server
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"  # API key is not required for the local server
)

# Use any model available in your llm CLI
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

print(response.choices[0].message.content)
```
Streaming is also supported:
```python
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
### Tool/Function Calling
Models that support tools (indicated by `supports_tools=True`) can use OpenAI-compatible function calling:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"  # API key is not required for the local server
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in San Francisco?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"]
            }
        }
    }]
)
```
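
The feature list above also mentions conversation history with tool results. The sketch below shows that round trip, following the standard OpenAI message format the server is compatible with; the canned weather result is a hypothetical stand-in for whatever your real `get_weather` implementation returns:

```python
import json

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)

    # Run the tool yourself; this canned result is a hypothetical stand-in
    result = {"location": args["location"], "forecast": "sunny, 18°C"}

    # Append the assistant turn and the tool result, then ask for a final answer
    # (pass the same tools list again if the server requires it)
    followup = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "What's the weather in San Francisco?"},
            message,  # the assistant message containing the tool call
            {
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            },
        ],
    )
    print(followup.choices[0].message.content)
```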
### Structured Output with Schema
Models that support schema (indicated by `supports_schema=True`) can generate structured JSON output:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Generate a person's profile"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                    "email": {"type": "string"}
                },
                "required": ["name", "age", "email"]
            }
        }
    }
)

# The response will contain valid JSON matching the schema
print(response.choices[0].message.content)
```
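
Since the message content is a plain JSON string, it can be parsed directly (a small follow-on to the example above):

```python
import json

person = json.loads(response.choices[0].message.content)
print(person["name"], person["age"], person["email"])
```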
## Testing
Run the test suite to verify the OpenAI-compatible API:
```sh
python -m pytest tests/test_openai_api.py
```
## Development
### Prerequisites
- Python 3.9+
- [`llm`](https://github.com/simonw/llm) CLI tool installed
- One or more LLM models configured in `llm`
### Code Quality
Format code:
```sh
ruff format .
```
Lint code:
```sh
ruff check --fix .
```
### Running Tests
Run all tests:
```sh
pytest
```
## Configuration
The server integrates with the `llm` CLI tool's configuration.
Before starting the server, make sure you have followed the [setup instructions](https://llm.datasette.io/en/stable/setup.html) and have:
1. Installed and configured `llm` with your preferred models
2. Set up any necessary API keys for cloud-based models
3. Verified models are available with `llm models`
## Supported API Specifications
### Currently Implemented
- **OpenAI Chat Completions API** (`/v1/chat/completions`)
  - Compatible with the OpenAI Python/JavaScript SDKs
  - Works with tools expecting the OpenAI format
  - Full support for streaming, tool calling, and structured output
### Help Wanted
- [OpenAI Responses API](https://platform.openai.com/docs/api-reference/responses) (`/v1/responses`)
- [Anthropic Messages API](https://docs.anthropic.com/en/api/messages) (`/v1/messages`)
## License
MIT