# ToolMockers
[Python 3.9+](https://www.python.org/downloads/) · [MIT License](https://opensource.org/licenses/MIT)
**ToolMockers** is a Python library that automatically generates realistic mock responses for functions using Large Language Models (LLMs). Perfect for development, testing, and prototyping when you need intelligent mocks that understand context and generate meaningful data.
## 🚀 Features
- **LLM-Powered Mocking**: Generate contextually appropriate mock responses using any LangChain-compatible LLM
- **Flexible Configuration**: Control what context to include (docstrings, source code, examples)
- **Easy Integration**: Simple decorator-based API that works with existing code
- **Smart Parsing**: Automatic parsing of LLM responses into proper Python objects
- **Comprehensive Logging**: Built-in logging for debugging and monitoring
- **Type-Safe**: Full type hints for better IDE support and code quality
## 📦 Installation
### Using pip
```bash
pip install toolmockers
```
### Using uv (recommended)
```bash
uv add toolmockers
```
For development:
```bash
uv add toolmockers --dev
```
## 🔧 Quick Start
```python
from langchain_openai import ChatOpenAI
from toolmockers import get_mock_decorator
# Initialize your LLM
llm = ChatOpenAI(model="gpt-4")
# Create a mock decorator
mock = get_mock_decorator(llm=llm, enabled=True)
@mock
def fetch_user_profile(user_id: str) -> dict:
"""Fetch user profile from the database.
Args:
user_id: The unique identifier for the user
Returns:
A dictionary containing user profile information
"""
# This would normally make a database call
# But when mocked, the LLM generates a realistic response
pass
# The function now returns LLM-generated mock data
profile = fetch_user_profile("user123")
print(profile)
# Output: {'id': 'user123', 'name': 'John Smith', 'email': 'john.smith@email.com', ...}
```
## 💡 Advanced Usage
### Custom Examples
Provide examples to guide the LLM's responses:
```python
@mock(examples=[
    {
        "input": "analyze_sentiment('I love this!')",
        "output": {"sentiment": "positive", "confidence": 0.95}
    },
    {
        "input": "analyze_sentiment('This is awful')",
        "output": {"sentiment": "negative", "confidence": 0.87}
    }
])
def analyze_sentiment(text: str) -> dict:
    """Analyze sentiment of text."""
    pass
```
### Including Source Code
Include function source code for better context:
```python
@mock(use_code=True)
def complex_calculation(data: list) -> float:
"""Perform complex statistical calculation."""
# The LLM can see this implementation
return sum(x**2 for x in data) / len(data)
```
### Conditional Mocking
Enable/disable mocking based on environment:
```python
import os
mock = get_mock_decorator(
    llm=llm,
    enabled=os.getenv("ENVIRONMENT") == "development"
)
```
### Custom Response Parser
Create custom parsers for specific response formats:
```python
import json

def custom_parser(response_str, func, args, kwargs):
    """Parse the LLM response as JSON, falling back to an error payload."""
    try:
        return json.loads(response_str)
    except json.JSONDecodeError:
        return {"error": "parsing_failed", "raw": response_str}

mock = get_mock_decorator(llm=llm, parser=custom_parser)
```
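Because the parser also receives the decorated function and its call arguments, it can do more than raw JSON decoding. The sketch below validates the response against a Pydantic model; `UserProfile` and the Pydantic v2 calls are illustrative assumptions, not part of the ToolMockers API.

```python
from pydantic import BaseModel, ValidationError  # assumption: pydantic v2 is installed

from toolmockers import get_mock_decorator

class UserProfile(BaseModel):
    """Hypothetical schema for the mocked return value."""
    id: str
    name: str
    email: str

def schema_parser(response_str, func, args, kwargs):
    """Validate the LLM response against UserProfile, falling back to the raw string."""
    try:
        return UserProfile.model_validate_json(response_str)
    except ValidationError:
        # Response didn't match the schema; return it unparsed
        return response_str

# llm is the chat model created in the Quick Start example
mock = get_mock_decorator(llm=llm, parser=schema_parser)
```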
## 🛠️ Configuration Options
The `get_mock_decorator` function accepts these parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `llm` | `BaseChatModel` | Required | The LangChain LLM to use for generation |
| `enabled` | `bool` | `False` | Whether mocking is enabled |
| `use_docstring` | `bool` | `True` | Include function docstrings in prompts |
| `use_code` | `bool` | `False` | Include source code in prompts |
| `use_examples` | `bool` | `True` | Include examples when provided |
| `parser` | `Callable` | `default_mock_response_parser` | Function to parse LLM responses |
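A fully spelled-out global configuration might look like the sketch below; the values are simply the documented defaults made explicit, with mocking switched on.

```python
from langchain_openai import ChatOpenAI
from toolmockers import get_mock_decorator

llm = ChatOpenAI(model="gpt-4")

mock = get_mock_decorator(
    llm=llm,              # any LangChain-compatible chat model
    enabled=True,         # switch mocking on (default is False)
    use_docstring=True,   # include docstrings in the prompt
    use_code=False,       # don't send source code unless requested
    use_examples=True,    # include examples when a function provides them
)
```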
Individual functions can override these settings:
```python
@mock(
enabled=True, # Override global enabled setting
use_code=True, # Include source code for this function
use_examples=False, # Don't use examples
examples=[...] # Provide specific examples
)
def my_function():
pass
```
## 🔍 Logging
ToolMockers includes comprehensive logging. Configure it to see what's happening:
```python
import logging
# Basic configuration
logging.basicConfig(level=logging.INFO)
# More detailed logging
logging.getLogger("toolmockers").setLevel(logging.DEBUG)
```
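If you need the output somewhere persistent, standard `logging` handlers apply; the following sketch sends ToolMockers debug messages to a file.

```python
import logging

logger = logging.getLogger("toolmockers")
logger.setLevel(logging.DEBUG)

# Capture detailed mocking activity in a file instead of the console
file_handler = logging.FileHandler("toolmockers.log")
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)
logger.addHandler(file_handler)
```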
Log levels:
- **INFO**: Function mocking events, generation success/failure
- **DEBUG**: Detailed prompt generation, response parsing, decorator application
- **WARNING**: Parsing fallbacks, missing source code
- **ERROR**: LLM invocation failures, parsing errors
## 🧪 Testing
ToolMockers is perfect for testing scenarios where you need realistic data without external dependencies:
```python
import pytest
from toolmockers import get_mock_decorator
@pytest.fixture
def mock_decorator():
    return get_mock_decorator(llm=your_test_llm, enabled=True)

def test_user_service(mock_decorator):
    @mock_decorator
    def get_user_data(user_id):
        """Get user data from external API."""
        pass

    # Test with mocked data
    result = get_user_data("test123")
    assert "id" in result
    assert result["id"] == "test123"
```
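The `your_test_llm` placeholder above can be any chat model. For deterministic tests, one option is to swap in a canned fake; the sketch below uses LangChain's `FakeListChatModel`, which simply replays the strings it is given (whether the default parser turns that string into the dict you expect depends on your parser configuration).

```python
import json

import pytest
from langchain_core.language_models import FakeListChatModel
from toolmockers import get_mock_decorator

@pytest.fixture
def mock_decorator():
    # Always answer with the same canned JSON payload
    fake_llm = FakeListChatModel(
        responses=[json.dumps({"id": "test123", "name": "Test User"})]
    )
    return get_mock_decorator(llm=fake_llm, enabled=True)
```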
## 🎯 Use Cases
- **API Development**: Mock external service calls during development
- **Testing**: Generate realistic test data without setting up databases
- **Prototyping**: Quickly build working prototypes with smart mocks
- **Load Testing**: Replace slow external calls with fast LLM-generated responses
- **Documentation**: Generate example outputs for API documentation
## 🛠️ Development
This project uses [uv](https://github.com/astral-sh/uv) for dependency management.
### Setup
```bash
# Clone the repository
git clone https://github.com/sgorblex/toolmockers.git
cd toolmockers
# Install dependencies
uv sync
# Run tests
uv run python -m pytest
# Run the example
uv run python example.py
# Format code
uv run black .
uv run isort .
```
### Building
```bash
# Build the package
uv build
# Publish to PyPI
uv publish
```
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Built on top of [LangChain](https://github.com/langchain-ai/langchain) for LLM integration
- Inspired by the need for intelligent mocking in AI agent development
## 📚 Related Projects
- [LangChain](https://github.com/langchain-ai/langchain) - Framework for developing applications with LLMs
- [unittest.mock](https://docs.python.org/3/library/unittest.mock.html) - Python's built-in mocking library
- [pytest-mock](https://github.com/pytest-dev/pytest-mock/) - Pytest plugin for mocking