# Structured Prompts
A powerful and modular package for managing structured prompts with any LLM API. This package provides a database-agnostic interface for storing, retrieving, and managing prompt schemas with model-specific optimizations, input validation, and structured response validation. Designed for large-scale applications requiring consistent prompt management across different AI models.
## Key Features
### Core Functionality
- Database-agnostic schema management with SQLAlchemy support
- Model-specific optimizations (thinking mode, special tokens)
- Input validation before sending to LLMs
- JSON schema validation for responses
- MCP (Model Context Protocol) server integration
- Async/await support with high performance
- FastAPI integration ready
- Extensible design patterns
### Schema Management
- Flexible prompt schema creation and management
- Version control for prompt schemas
- Default prompt templates
- Custom prompt type support
- Schema validation and enforcement
### Input & Response Validation
- User input validation with JSON schemas
- Text constraints (length, patterns, forbidden content)
- Code syntax validation
- Structured response validation using JSON Schema
- Custom validation rules and error messages
- Response type enforcement
- Schema evolution support
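To make the response-validation idea concrete, here is a minimal, hand-rolled sketch of the `type`/`required` subset of JSON Schema checking. This is illustrative only (the package uses full JSON Schema validation internally; `check_response` is a hypothetical name, not part of the API):

```python
def check_response(response, schema):
    """Return a list of error messages; an empty list means the response is valid."""
    errors = []
    if schema.get("type") == "object" and not isinstance(response, dict):
        return ["response is not an object"]
    # Required fields must be present
    for field in schema.get("required", []):
        if field not in response:
            errors.append(f"missing required field: {field}")
    # Present fields must match their declared type
    type_map = {"string": str, "number": (int, float), "array": list}
    for field, spec in schema.get("properties", {}).items():
        if field in response:
            expected = type_map.get(spec.get("type"))
            if expected and not isinstance(response[field], expected):
                errors.append(f"{field}: expected {spec['type']}")
    return errors

schema = {
    "type": "object",
    "properties": {"explanation": {"type": "string"}},
    "required": ["explanation"],
}
print(check_response({"explanation": "uses a loop"}, schema))  # []
print(check_response({}, schema))  # ['missing required field: explanation']
```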
### Database Integration
- Support for multiple database backends:
  - PostgreSQL (with asyncpg)
  - SQLite
  - MySQL
  - Any SQLAlchemy-compatible database
- Connection pooling and optimization
- Async database operations
- Migration support
### API Integration
- FastAPI endpoints for schema management
- MCP server for LLM client integration
- RESTful API for CRUD operations
- Swagger/OpenAPI documentation
- Rate limiting support
- Authentication ready
### Model-Specific Features
- Support for model capabilities detection
- Automatic thinking mode optimization
- Special token handling (Claude, GPT, Gemini)
- Model-specific prompt formatting
- Smart routing based on capabilities
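The capability-based routing above can be sketched with a plain lookup table. The capability sets and the `pick_model` helper here are hypothetical illustrations, not the package's actual detection logic:

```python
from typing import Optional

# Hypothetical capability table; real detection data may differ.
CAPABILITIES = {
    "o1": {"thinking"},
    "claude-3-opus": {"thinking", "xml_tags"},
    "gpt-4o-mini": set(),
}

def pick_model(required, candidates) -> Optional[str]:
    """Return the first candidate that supports every required capability."""
    for model in candidates:
        if required <= CAPABILITIES.get(model, set()):
            return model
    return None

print(pick_model({"thinking"}, ["gpt-4o-mini", "o1"]))  # o1
```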
## Installation
```bash
pip install structured-prompts
```
## Configuration
Copy `.env.template` to `.env` and set the `DATABASE_URL` variable to match your
database connection string:
```bash
cp .env.template .env
# Edit .env and adjust DATABASE_URL
```
If `DATABASE_URL` is unset, the package defaults to `sqlite:///./structured_prompts.db`.
### Environment Variables
Currently the only environment variable recognized by the package is `DATABASE_URL`. This value sets the
database connection string for SQLAlchemy and falls back to the default SQLite database if not provided.
You can supply any SQLAlchemy-compatible connection string. For hosted PostgreSQL services such as
Supabase, set `DATABASE_URL` to the connection URL they provide, for example:
```
DATABASE_URL=postgresql://user:password@db.supabase.co:5432/dbname?sslmode=require
```
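The fallback behaviour described above amounts to a single environment lookup, which you can reproduce in your own code if you need the same default (illustrative; the package performs this lookup internally):

```python
import os

# Read DATABASE_URL from the environment, defaulting to the bundled
# SQLite file when it is unset.
DEFAULT_URL = "sqlite:///./structured_prompts.db"
database_url = os.environ.get("DATABASE_URL", DEFAULT_URL)
print(database_url)
```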
## Quick Start
### Basic Usage
```python
from structured_prompts import SchemaManager, PromptSchema
from structured_prompts.database import Database
# Initialize with your database connection
db = Database(url="postgresql://user:pass@localhost/db")
schema_manager = SchemaManager(database=db)
# Create a prompt with input validation
await schema_manager.create_prompt_schema(
    prompt_id="code_analysis",
    prompt_title="Code Analysis",
    prompt_description="Analyze code and explain its functionality",
    main_prompt="Analyze this code and explain what it does.",
    prompt_categories=["code", "analysis"],
    input_schema={
        "type": "object",
        "properties": {
            "code": {"type": "string"},
            "language": {"type": "string", "enum": ["python", "javascript", "go"]}
        },
        "required": ["code", "language"]
    },
    response_schema={
        "type": "object",
        "properties": {
            "explanation": {"type": "string"},
            "complexity": {"type": "string", "enum": ["low", "medium", "high"]},
            "suggestions": {"type": "array", "items": {"type": "string"}}
        },
        "required": ["explanation", "complexity"]
    }
)
```
### Model-Specific Optimization
```python
from structured_prompts.model_capabilities import ModelCapability, PromptOptimization
# Create a prompt optimized for thinking models
await schema_manager.create_prompt_schema(
    prompt_id="complex_reasoning",
    prompt_title="Complex Reasoning Task",
    prompt_description="Mathematical proof requiring deep thinking",
    main_prompt="Solve this step-by-step mathematical proof.",
    prompt_categories=["reasoning", "mathematics"],
    model_capabilities={
        "prefer_thinking_mode": True,
        "thinking_instruction": "Work through this systematically",
        "optimal_models": ["o1", "claude-3-opus"]
    },
    system_prompts=[
        {
            "id": "think_deeply",
            "name": "Deep Thinking Mode",
            "content": "Take your time to think through each step carefully",
            "priority": 1,
            "condition": {
                "capability_required": "thinking",
                "always_apply": False
            },
            "token_format": "plain"
        }
    ]
)
# The system automatically adds thinking tags for capable models
```
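The tag injection mentioned in the comment above can be sketched as a simple conditional transform. The tag format and the `apply_thinking_mode` helper are hypothetical (actual thinking tokens vary by provider and are handled by the package):

```python
def apply_thinking_mode(prompt: str, supports_thinking: bool,
                        instruction: str = "Work through this systematically") -> str:
    """Prepend a thinking instruction only for models that support it."""
    if not supports_thinking:
        return prompt
    return f"<thinking_instruction>{instruction}</thinking_instruction>\n{prompt}"

print(apply_thinking_mode("Prove the statement.", True))
```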
### Input Validation
```python
# Validate user input before sending to LLM
from structured_prompts.input_validation import validate_user_input, InputValidation
validation_config = InputValidation(
    input_type="json",
    json_schema={
        "type": "object",
        "properties": {
            "query": {"type": "string", "minLength": 10},
            "max_results": {"type": "integer", "minimum": 1, "maximum": 100}
        },
        "required": ["query"]
    }
)

result = validate_user_input(user_data, validation_config)
if not result.is_valid:
    print(f"Input errors: {result.errors}")
```
### MCP Server Usage
```bash
# Run the MCP server
structured-prompts-mcp
# Or with custom database
DATABASE_URL=postgresql://user:pass@localhost/prompts structured-prompts-mcp
```
Configure your MCP client:
```json
{
  "mcpServers": {
    "prompts": {
      "command": "structured-prompts-mcp"
    }
  }
}
```
### PromptSchema Fields
`PromptSchema` instances include several attributes for managing metadata and
additional instructions:
- `prompt_id`: unique identifier for the prompt (required)
- `prompt_title`: human-readable title for the prompt (required)
- `prompt_description`: detailed description of the prompt
- `prompt_categories`: list of category tags for organization
- `main_prompt`: primary text shown to the model (required)
- `model_instruction`: optional instructions for model behaviour
- `additional_messages`: optional list of `{role, content}` messages
- `response_schema`: JSON schema describing the expected response (required)
- `input_schema`: JSON schema for validating user inputs
- `model_capabilities`: model-specific optimizations and requirements (JSON)
- `system_prompts`: conditional system prompts based on model capabilities (JSON)
- `provider_configs`: provider-specific configurations (JSON)
- `is_public`: flag to expose the prompt publicly (default: False)
- `ranking`: numeric rating for effectiveness (default: 0.0)
- `last_used` / `usage_count`: tracking statistics
- `created_at` / `created_by`: creation metadata
- `last_updated` / `last_updated_by`: update metadata
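The field list above can be summarized as a plain dataclass, showing the required/optional split and the stated defaults. This is a stdlib sketch of the record shape only; the real `PromptSchema` is a Pydantic model with validation, and `PromptSchemaSketch` is a hypothetical name:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PromptSchemaSketch:
    # Required fields
    prompt_id: str
    prompt_title: str
    main_prompt: str
    response_schema: dict
    # Optional fields with the defaults described above
    prompt_description: Optional[str] = None
    prompt_categories: list = field(default_factory=list)
    input_schema: Optional[dict] = None
    model_capabilities: Optional[dict] = None
    is_public: bool = False
    ranking: float = 0.0
    usage_count: int = 0

s = PromptSchemaSketch(
    prompt_id="demo",
    prompt_title="Demo",
    main_prompt="Say hi.",
    response_schema={"type": "object"},
)
print(s.is_public, s.ranking)  # False 0.0
```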
## Project Structure
The codebase is organized into a few key modules:
- **`database.py`** – Async wrapper around SQLAlchemy that handles engine
  creation, connection checks, and automatic database creation. It exposes
  convenience methods such as `create_schema` and `get_schema` for the
  `PromptSchemaDB` and `PromptResponseDB` models.
- **`models.py`** – Defines the SQLAlchemy models and matching Pydantic models
(`PromptSchema` and `PromptResponse`) used for validation and data transfer.
- **`schema_manager.py`** – High level manager that converts between Pydantic
and SQLAlchemy objects, performing CRUD operations and providing helpful
error handling.
- **`__init__.py`** – Exports `SchemaManager` along with the Pydantic models as
the public API for the package.
- **`tests/`** – Contains a small pytest suite demonstrating SQLite-based
  integration tests.
## Using as a plugin
Install the package directly from GitHub:
```bash
pip install git+https://github.com/ebowwa/structured-prompts.git
```
Initialize the package in another project:
```python
from structured_prompts.database import Database
from structured_prompts import SchemaManager
db = Database(url="postgresql://user:pass@localhost/db")
schema_manager = SchemaManager(database=db)
```
Now you can manage prompt schemas using `schema_manager`.
## Advanced Usage
### Custom Schema Types
```python
from structured_prompts import SchemaManager
# Create a complex analysis schema
await schema_manager.create_prompt_schema(
    prompt_id="content_analysis",
    prompt_title="Content Analysis",
    prompt_description="Detailed content analysis with sentiment and insights",
    main_prompt="Perform a detailed analysis of this content.",
    prompt_categories=["analysis", "sentiment"],
    response_schema={
        "type": "object",
        "properties": {
            "main_topics": {
                "type": "array",
                "items": {"type": "string"}
            },
            "sentiment": {
                "type": "object",
                "properties": {
                    "overall": {"type": "string"},
                    "confidence": {"type": "number"},
                    "aspects": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "aspect": {"type": "string"},
                                "sentiment": {"type": "string"}
                            }
                        }
                    }
                }
            },
            "key_insights": {
                "type": "array",
                "items": {"type": "string"}
            }
        }
    }
)
```
### Database Operations
```python
# Custom database configuration
from structured_prompts.database import Database
db = Database(
    url="postgresql://user:pass@localhost/db",
    min_size=5,
    max_size=20
)

schema_manager = SchemaManager(database=db)

# Batch operations
async def migrate_schemas(old_id: str, new_id: str):
    old_config = await schema_manager.get_prompt_schema(old_id)
    if old_config:
        await schema_manager.create_prompt_schema(
            prompt_id=new_id,
            prompt_title=old_config["prompt_title"] + " (Migrated)",
            prompt_description=old_config.get("prompt_description", ""),
            main_prompt=old_config["main_prompt"],
            response_schema=old_config["response_schema"],
            input_schema=old_config.get("input_schema"),
            model_capabilities=old_config.get("model_capabilities")
        )
```
## Development Setup
1. Clone the repository:
```bash
git clone https://github.com/ebowwa/structured-prompts.git
cd structured-prompts
```
2. Install dependencies:
```bash
./setup.sh
```
3. Run tests:
```bash
poetry run pytest
```
## Contributing
We welcome contributions! Contributor guidelines are forthcoming and will cover:
- Code style
- Development process
- Testing requirements
- Pull request process
## License
This project is licensed under the MIT License.
## Acknowledgments
- The open source community for their contributions
- FastAPI community for inspiration on API design
- SQLAlchemy team for the robust database toolkit