[![PyPI version](https://badge.fury.io/py/typed-prompt.svg)](https://badge.fury.io/py/typed-prompt)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/typed-prompt)](https://pypi.org/project/typed-prompt/)
# typed-prompt
A type-safe, validated prompt management system for LLMs that catches errors early, enforces type safety, and provides a structured way to manage prompts.
Uses Pydantic models for variable validation and Jinja2 templates for prompt rendering.
> **Note**: This library is in early development and subject to change.
## Why typed-prompt?
I have always found it challenging to manage dynamic prompts for LLMs. The process is error-prone, with issues often discovered only at runtime. typed-prompt aims to solve this problem by providing a structured, type-safe way to manage prompts that catches errors early and enforces type safety.
> **Disclaimer**: This is a personal project to solve gripes I've had in the past, and it is not affiliated with any organization. It is a work in progress and subject to change.
>
> I will be adding more features and examples in the future. If you have any suggestions or feedback, feel free to open an issue!
## Quick Examples
### 1. Basic Usage with Validation
```python
from pydantic import BaseModel

from typed_prompt import BasePrompt, RenderOutput

# Define your variables
class UserVars(BaseModel):
    name: str
    expertise: str

# This works - all template variables are defined
class ValidPrompt(BasePrompt[UserVars]):
    """Helping {{name}} with {{expertise}} level knowledge."""

    prompt_template: str = "Explain {{topic}} to me"
    variables: UserVars

    def render(self, *, topic: str, **extra_vars) -> RenderOutput:
        extra_vars["topic"] = topic
        return super().render(**extra_vars)

# This fails immediately - 'unknown_var' not defined
class InvalidPrompt(BasePrompt[UserVars]):
    prompt_template: str = "What is {{unknown_var}}?"  # ValueError!
    variables: UserVars

# This fails - 'expertise' defined but never used
class UnusedVarPrompt(BasePrompt[UserVars]):
    prompt_template: str = "Hello {{name}}"  # ValueError!
    variables: UserVars
```
### 2. Conditional Templates
```python
from typing import Literal, Optional

class TemplateVars(BaseModel):
    user_type: Literal["expert", "beginner"]
    name: str
    preferences: Optional[dict] = None

class ConditionalPrompt(BasePrompt[TemplateVars]):
    """{% if user_type == 'expert' %}
    Technical advisor for {{name}}
    {% else %}
    Friendly helper for {{name}}
    {% endif %}"""

    prompt_template: str = """
    {% if preferences %}
    Considering your preferences: {% for k, v in preferences.items() %}
    - {{k}}: {{v}}{% endfor %}
    {% endif %}
    How can I help with {{topic}}?
    """
    variables: TemplateVars

    def render(self, *, topic: str, **extra_vars) -> RenderOutput:
        extra_vars["topic"] = topic
        return super().render(**extra_vars)
```
### 3. LLM Configuration Defined with the Template
```python
from pydantic import BaseModel, Field

from typed_prompt import RenderOutput

class MyConfig(BaseModel):
    temperature: float = Field(default=0.7, ge=0, le=2)
    model: str = Field(default="gpt-4")

class MyPrompt(BasePrompt[UserVars]):
    """Assistant for {{name}}"""

    prompt_template: str = "Help with {{topic}}"
    variables: UserVars
    config: MyConfig = Field(default_factory=MyConfig)

    def render(self, *, topic: str, **extra_vars) -> RenderOutput:
        extra_vars["topic"] = topic
        return super().render(**extra_vars)

# Use custom config
prompt = MyPrompt(
    variables=UserVars(name="Alice", expertise="intermediate"),
    config=MyConfig(temperature=0.9, model="gpt-3.5-turbo"),
)
```
> **Note**: Using `None` as a value for an optional variable renders literally as `None` in the prompt.
> For example, `Test example {{var}}` renders as `Test example None` when `var` is `None`.
> This is Jinja2's default behaviour, so you need to handle it in your template,
> e.g. with `{% if var %}...{% endif %}`, `{{var | default('default value', true)}}`, or however you want to handle it.
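To see this behaviour outside the library, here is plain Jinja2 (note that the `default` filter only replaces *undefined* variables unless its second argument is true):

```python
from jinja2 import Template

# A defined-but-None variable renders literally as "None"
print(Template("Test example {{var}}").render(var=None))  # -> Test example None

# The default filter alone only covers undefined variables...
print(Template("{{var | default('fallback')}}").render())  # -> fallback

# ...pass true as the second argument to also replace None (and other falsy values)
print(Template("{{var | default('fallback', true)}}").render(var=None))  # -> fallback
```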
## Key Features
### Early Validation
The library validates your prompt templates during class definition:
- Missing variables are caught immediately
- Unused variables are detected
- Template syntax is verified
- Type checking is enforced
### Type Safety
All variables are validated through Pydantic:
- Required vs optional fields
- Type constraints
- Custom validators
- Nested models
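For instance, constraints and custom validators on the variables model fail fast before any rendering happens. A standalone Pydantic v2 sketch (`ExpertiseVars` is an illustrative name, not part of the library):

```python
from pydantic import BaseModel, Field, ValidationError, field_validator

class ExpertiseVars(BaseModel):
    name: str = Field(min_length=1)
    expertise: str

    @field_validator("expertise")
    @classmethod
    def check_expertise(cls, v: str) -> str:
        if v not in {"beginner", "intermediate", "expert"}:
            raise ValueError(f"unknown expertise level: {v!r}")
        return v

ExpertiseVars(name="Alice", expertise="expert")  # fine
try:
    ExpertiseVars(name="", expertise="wizard")   # both fields invalid
except ValidationError as e:
    print(len(e.errors()))  # -> 2
```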
### Flexible Configuration
Attach custom configuration to prompts:
- Model parameters
- Custom settings
- Validation rules
- Default values
## Why Early Validation Matters
Consider this example:
```python
# Without typed-prompt
def create_prompt(user_data):
    template = "Hello {username}, your level is {level}"
    # Error only discovered when rendering with wrong data
    return template.format(**user_data)  # KeyError at runtime!

# With typed-prompt
class UserPrompt(BasePrompt[UserVars]):
    prompt_template: str = "Hello {{unknown_var}}"  # Error immediately!
    variables: UserVars
```
The library catches template errors at definition time.
## Installation
```bash
uv add typed-prompt
```
or
```bash
pip install typed-prompt
```
## Examples
For more examples and detailed documentation, check the [examples](./examples) directory.
To run the examples:
```bash
uv run python examples/user.py
```
## Core Concepts
### The Prompt Structure
typed-prompt uses a two-part prompt structure that matches common LLM interaction patterns:
1. **System Prompt**: Provides context or instructions for the AI model. You can define this in two ways:
- As a class docstring (recommended for better code organization)
- As a `system_prompt_template` class attribute
2. **User Prompt**: Contains the actual prompt template that will be sent to the model. This is always defined in the `prompt_template` class attribute.
### Variable Management
Variables in typed-prompt are handled through three complementary mechanisms:
1. **Variables Model**: A Pydantic model that defines the core variables your prompt needs:
   ```python
   class UserVariables(BaseModel):
       name: str
       age: int
       occupation: Optional[str] = None
   ```
2. **Render Method Parameters**: Additional variables can be defined as keyword-only arguments in a custom render method:
   ```python
   def render(self, *, learning_topic: str, **extra_vars) -> RenderOutput:
       extra_vars["learning_topic"] = learning_topic
       return super().render(**extra_vars)
   ```
3. **Extra Variables**: One-off variables can be passed directly to the render method.
### Template Validation
The library performs comprehensive validation to catch common issues early:
1. **Missing Variables**: Ensures all variables used in templates are defined either in the variables model or render method
2. **Unused Variables**: Identifies variables that are defined but never used in templates
3. **Template Syntax**: Validates Jinja2 template syntax at class definition time
4. **Type Checking**: Leverages Pydantic's type validation for all variables
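One way this kind of check can be implemented is with Jinja2's `meta` module. The sketch below illustrates the idea; it is not necessarily the library's exact internals:

```python
from jinja2 import Environment, meta

env = Environment()
template_source = "Hello {{name}}, what is {{unknown_var}}?"

# Collect every variable the template actually references
used = meta.find_undeclared_variables(env.parse(template_source))

defined = {"name", "expertise"}  # e.g. the fields on the variables model
missing = used - defined   # referenced in the template but never defined
unused = defined - used    # defined but never referenced
print(missing)  # -> {'unknown_var'}
print(unused)   # -> {'expertise'}
```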
### Working with External Templates
For complex prompts, you can load templates from external files:
```python
from pathlib import Path

class ComplexPrompt(BasePrompt[ComplexVariables]):
    system_prompt_template = Path("templates/system_prompt.j2").read_text()
    prompt_template: str = Path("templates/user_prompt.j2").read_text()
```
> **Note**: With templating engines like Jinja2, you can normally hot reload templates, but this is not supported in typed-prompt as the templates are validated at class definition time.
## API Reference
### BasePrompt[T]
The foundational class for creating structured prompts.
#### Type Parameters
- `T`: A Pydantic BaseModel subclass defining the structure of template variables
#### Class Attributes
- `system_prompt_template`: Optional[str] - System prompt template
- `prompt_template`: str - User prompt template
- `variables`: T - Instance of the variables model
#### Methods
- `render(**extra_vars) -> RenderOutput`: Renders both prompts with provided variables
### RenderOutput
A NamedTuple providing structured access to rendered prompts:
- `system_prompt`: Optional[str] - The rendered system prompt
- `user_prompt`: str - The rendered user prompt
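Because `RenderOutput` is a `NamedTuple`, results can be consumed positionally or by field name. A minimal stand-in mirroring the documented fields:

```python
from typing import NamedTuple, Optional

class RenderOutput(NamedTuple):
    system_prompt: Optional[str]
    user_prompt: str

out = RenderOutput(system_prompt="You are a helpful assistant.",
                   user_prompt="Explain recursion to me")
system, user = out              # tuple unpacking works
print(out.user_prompt == user)  # -> True
```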
## Best Practices
### Template Organization
Structure your templates for maximum readability and maintainability:
1. **Use Docstrings for System Prompts**: When possible, define system prompts in class docstrings for better code organization:
   ```python
   class UserPrompt(BasePrompt[UserVariables]):
       """You are having a conversation with {{name}}, a {{age}}-year-old {{occupation}}."""

       prompt_template: str = "What would you like to discuss?"
   ```
2. **Separate Complex Templates**: For longer templates, use external files:
   ```python
   system_prompt_template = Path("templates/system_prompt.j2").read_text()
   ```
## Common Patterns
### Conditional Content
Use Jinja2's conditional syntax for dynamic content:
```python
class DynamicPrompt(BasePrompt[Variables]):
    prompt_template: str = """
    {% if expert_mode %}
    Provide a detailed technical explanation of {{topic}}
    {% else %}
    Explain {{topic}} in simple terms
    {% endif %}
    """
```
## Contributing
Contributions are welcome!
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## TODO
- [ ] Optional variables still render as `None` in the prompt.
- [ ] Make Jinja2 optional; for very simple templating, plain string formatting (e.g. `f"Hello {name}"`) would suffice. Maybe should have started simpler.
- [ ] Output OpenAI-compatible Message objects.
- [ ] The ability to define not just a system prompt and a single prompt, but prompt chains, e.g. `system_prompt -> user_prompt -> assistant_response -> user_prompt -> assistant_response -> ...`
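For the OpenAI-compatible output item, the mapping would presumably look something like this (`to_messages` is a hypothetical helper, not part of the library today):

```python
from typing import Optional

def to_messages(system_prompt: Optional[str], user_prompt: str) -> list[dict]:
    """Convert a rendered prompt pair into chat-completions-style messages."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

print(to_messages("Be concise.", "What is a monad?"))
# -> [{'role': 'system', 'content': 'Be concise.'}, {'role': 'user', 'content': 'What is a monad?'}]
```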