Name | mockllm
Version | 0.1.0
home_page | None
Summary | A mock server that mimics OpenAI and Anthropic API formats for testing
upload_time | 2025-02-14 16:59:46
maintainer | None
docs_url | None
author | Luke Hinds
requires_python | >=3.8
license | Apache-2.0
keywords | mock, llm, openai, anthropic, testing
requirements | No requirements were recorded.
Travis-CI | No Travis.
coveralls test coverage | No coveralls.
# Mock LLM Server
[CI](https://github.com/stacklok/mockllm/actions/workflows/ci.yml)
[PyPI](https://badge.fury.io/py/mockllm)
[License: Apache-2.0](https://opensource.org/licenses/Apache-2.0)
A FastAPI-based mock LLM server that mimics OpenAI and Anthropic API formats. Instead of calling actual language models,
it serves predefined responses from a YAML configuration file.
This makes it useful when you need deterministic responses for testing or development.
Check out the [CodeGate](https://github.com/stacklok/codegate) project when you're done here!
## Features
- OpenAI and Anthropic compatible API endpoints
- Streaming support (character-by-character response streaming)
- Configurable responses via YAML file
- Hot-reloading of response configurations
- JSON logging
- Error handling
- Mock token counting
## Installation
### From PyPI
```bash
pip install mockllm
```
### From Source
1. Clone the repository:
```bash
git clone https://github.com/stacklok/mockllm.git
cd mockllm
```
2. Install Poetry (if not already installed):
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
3. Install dependencies:
```bash
poetry install # Install with all dependencies
# or
poetry install --without dev # Install without development dependencies
```
## Usage
1. Set up the `responses.yml` file:
```bash
cp example.responses.yml responses.yml
```
2. Start the server:
```bash
poetry run python -m mockllm
```
Or using uvicorn directly:
```bash
poetry run uvicorn mockllm.server:app --reload
```
The server will start on `http://localhost:8000`
3. Send requests to the API endpoints:
### OpenAI Format
Regular request:
```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mock-llm",
    "messages": [
      {"role": "user", "content": "what colour is the sky?"}
    ]
  }'
```
Streaming request:
```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mock-llm",
    "messages": [
      {"role": "user", "content": "what colour is the sky?"}
    ],
    "stream": true
  }'
```
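Because the endpoint mirrors the OpenAI format, the official `openai` Python client (v1.x) can also be pointed at the mock server. This is a minimal sketch, assuming the mock accepts the standard request shape and ignores the API key:

```python
# Minimal sketch: point the openai v1.x client at the mock server.
# The api_key value is arbitrary; the mock server is assumed not to check it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Regular request
response = client.chat.completions.create(
    model="mock-llm",
    messages=[{"role": "user", "content": "what colour is the sky?"}],
)
print(response.choices[0].message.content)

# Streaming request: iterate over the server-sent chunks
stream = client.chat.completions.create(
    model="mock-llm",
    messages=[{"role": "user", "content": "what colour is the sky?"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```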
### Anthropic Format
Regular request:
```bash
curl -X POST http://localhost:8000/v1/messages \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-3-sonnet-20240229",
    "messages": [
      {"role": "user", "content": "what colour is the sky?"}
    ]
  }'
```
Streaming request:
```bash
curl -X POST http://localhost:8000/v1/messages \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-3-sonnet-20240229",
    "messages": [
      {"role": "user", "content": "what colour is the sky?"}
    ],
    "stream": true
  }'
```
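Likewise, the `anthropic` Python client can be aimed at the mock server. A sketch under the assumption that the mock's response body matches the real Messages API shape:

```python
# Minimal sketch: point the anthropic client at the mock server.
# Whether the mock honours max_tokens is an assumption; the argument is only
# required by the client library.
import anthropic

client = anthropic.Anthropic(base_url="http://localhost:8000", api_key="not-needed")

message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=64,
    messages=[{"role": "user", "content": "what colour is the sky?"}],
)
print(message.content[0].text)
```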
## Configuration
### Response Configuration
Responses are configured in `responses.yml`. The file has two main sections:
1. `responses`: Maps input prompts to predefined responses
2. `defaults`: Contains default configurations like the unknown response message
Example `responses.yml`:
```yaml
responses:
  "what colour is the sky?": "The sky is blue during a clear day due to a phenomenon called Rayleigh scattering."
  "what is 2+2?": "2+2 equals 9."

defaults:
  unknown_response: "I don't know the answer to that. This is a mock response."
```
### Hot Reloading
The server automatically detects changes to `responses.yml` and reloads the configuration without requiring a restart.
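As a rough illustration (not part of the project), the reload can be observed by editing `responses.yml` while the server is running and then querying the new prompt. The added prompt, the one-second wait, and the response JSON shape below are all assumptions:

```python
# Hypothetical end-to-end check of hot reloading. Assumes the server is running
# locally and reading ./responses.yml; requires the requests and PyYAML packages.
import time

import requests
import yaml

def ask(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={"model": "mock-llm", "messages": [{"role": "user", "content": prompt}]},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Add a new canned response to the config file on disk.
with open("responses.yml") as f:
    config = yaml.safe_load(f)
config["responses"]["what is the capital of france?"] = "Paris."  # hypothetical entry
with open("responses.yml", "w") as f:
    yaml.safe_dump(config, f)

time.sleep(1)  # assume the file watcher picks up the change within a second
print(ask("what is the capital of france?"))
```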
## Development
The project uses Poetry for dependency management and includes a Makefile to help with common development tasks:
```bash
# Set up development environment
make setup

# Run all checks (setup, lint, test)
make all

# Run tests
make test

# Format code
make format

# Run all linting and type checking
make lint

# Clean up build artifacts
make clean

# See all available commands
make help
```
### Development Commands
- `make setup`: Install all development dependencies with Poetry
- `make test`: Run the test suite
- `make format`: Format code with black and isort
- `make lint`: Run all code quality checks (format, lint, type)
- `make build`: Build the package with Poetry
- `make clean`: Remove build artifacts and cache files
- `make install-dev`: Install package with development dependencies
For more details on available commands, run `make help`.
## Error Handling
The server includes comprehensive error handling:
- Invalid requests return 400 status codes with descriptive messages (see the sketch below)
- Server errors return 500 status codes with error details
- All errors are logged using JSON format
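For example, a request that omits the `messages` field should be rejected. A quick check with `requests`; the exact status code and error body are assumptions based on the description above:

```python
# Hypothetical check: a request without "messages" should be rejected.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={"model": "mock-llm"},  # "messages" deliberately omitted
    timeout=5,
)
print(resp.status_code)  # expected to be a 4xx error per the notes above
print(resp.text)
```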
## Logging
The server uses JSON-formatted logging for:
- Incoming request details
- Response configuration loading
- Error messages and stack traces
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the Apache License, Version 2.0 - see the [LICENSE](LICENSE) file for details.