| Field | Value |
|-------|-------|
| Name | textprompts |
| Version | 0.0.2 |
| Summary | Minimal text-based prompt-loader with TOML front-matter |
| Author | Jan Siml |
| Homepage | https://github.com/svilupp/textprompts |
| Requires Python | >=3.11 |
| Keywords | prompts, toml, frontmatter, template |
| Upload time | 2025-07-20 20:42:44 |
# textprompts
> **So simple it's not even worth vibe-coding, yet it just makes so much sense.**
Are you tired of vendors trying to sell you fancy UIs for prompt management that just make your system more confusing and harder to debug? Isn't it nice to just have your prompts **next to your code**?
But then you worry: *Did my formatter change my prompt? Are those spaces at the beginning actually part of the prompt or just indentation?*
**textprompts** solves this elegantly: treat your prompts as **text files** and keep your linters and formatters away from them.
## Why textprompts?
- ✅ **Prompts live next to your code** - no external systems to manage
- ✅ **Git is your version control** - diff, branch, and experiment with ease
- ✅ **No formatter headaches** - your prompts stay exactly as you wrote them
- ✅ **Minimal markup** - just TOML front-matter when you need metadata (or no metadata if you prefer!)
- ✅ **Zero dependencies** - well, almost (just Pydantic)
- ✅ **Safe formatting** - catch missing variables before they cause problems
- ✅ **Works with everything** - OpenAI, Anthropic, local models, function calls
## Installation
```bash
uv add textprompts # or pip install textprompts
```
## Quick Start
**Super simple by default** - TextPrompts just loads text files with optional metadata:
1. **Create a prompt file** (`greeting.txt`):
```
---
title = "Customer Greeting"
version = "1.0.0"
description = "Friendly greeting for customer support"
---

Hello {customer_name}!

Welcome to {company_name}. We're here to help you with {issue_type}.

Best regards,
{agent_name}
```
2. **Load and use it** (no configuration needed):
```python
import textprompts

# Just load it - works with or without metadata
prompt = textprompts.load_prompt("greeting.txt")

# Use it safely - all placeholders must be provided
message = prompt.prompt.format(
    customer_name="Alice",
    company_name="ACME Corp",
    issue_type="billing question",
    agent_name="Sarah"
)

print(message)

# Or use partial formatting when needed
partial = prompt.prompt.format(
    customer_name="Alice",
    company_name="ACME Corp",
    skip_validation=True
)
# Result: "Hello Alice!\n\nWelcome to ACME Corp. We're here to help you with {issue_type}.\n\nBest regards,\n{agent_name}"

# Prompt objects expose `.meta` and `.prompt`.
# Use `prompt.prompt.format()` for safe formatting or `str(prompt)` for raw text.
```
**Even simpler** - no metadata required:
```python
# simple_prompt.txt contains just: "Analyze this data: {data}"
prompt = textprompts.load_prompt("simple_prompt.txt") # Just works!
result = prompt.prompt.format(data="sales figures")
```
## Core Features
### Safe String Formatting
Never ship a prompt with missing variables again:
```python
from textprompts import PromptString

template = PromptString("Hello {name}, your order {order_id} is {status}")

# ✅ Strict formatting - all placeholders must be provided
result = template.format(name="Alice", order_id="12345", status="shipped")

# ❌ This catches the error by default
try:
    result = template.format(name="Alice")  # Missing order_id and status
except ValueError as e:
    print(f"Error: {e}")  # Missing format variables: ['order_id', 'status']

# ✅ Partial formatting - replace only what you have
partial = template.format(name="Alice", skip_validation=True)
print(partial)  # "Hello Alice, your order {order_id} is {status}"
```
### Bulk Loading
Load entire directories of prompts:
```python
from textprompts import load_prompts

# Load all prompts from a directory
prompts = load_prompts("prompts/", recursive=True)

# Create a lookup
prompt_dict = {p.meta.title: p for p in prompts if p.meta}
greeting = prompt_dict["Customer Greeting"]
```
### Simple & Flexible Metadata Handling
TextPrompts is designed to be **super simple** by default - just load text files with optional metadata when available. No configuration needed!
```python
import textprompts

# Default behavior: just load the file - no configuration needed
prompt = textprompts.load_prompt("my_prompt.txt")  # Just works!

# Three modes are available for different use cases:

# 1. IGNORE (default): treat as a simple text file, use the filename as title
textprompts.set_metadata("ignore")  # Super simple file loading
prompt = textprompts.load_prompt("prompt.txt")  # No metadata parsing
print(prompt.meta.title)  # "prompt" (from filename)

# 2. ALLOW: load metadata if present, don't worry if it's incomplete
textprompts.set_metadata("allow")  # Flexible metadata loading
prompt = textprompts.load_prompt("prompt.txt")  # Loads any metadata found

# 3. STRICT: require complete metadata for production use
textprompts.set_metadata("strict")  # Prevent errors in production
prompt = textprompts.load_prompt("prompt.txt")  # Must have title, description, version

# Override per prompt when needed
prompt = textprompts.load_prompt("prompt.txt", meta="strict")
```
**Why this design?**
- **Default = Simple**: No configuration needed, just load files
- **Flexible**: Add metadata when you want structure
- **Production-Safe**: Use strict mode to catch missing metadata before deployment
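
For example, strict mode can act as a simple deployment gate. A minimal sketch, assuming strict mode surfaces `MissingMetadataError` when required fields are absent (the exception is exported by the package, see Error Handling below; the file path is illustrative):

```python
import textprompts
from textprompts import MissingMetadataError

textprompts.set_metadata("strict")

try:
    prompt = textprompts.load_prompt("prompts/system.txt")
except MissingMetadataError as err:
    # Fail fast at startup instead of shipping an untracked prompt
    raise SystemExit(f"Prompt is missing required metadata: {err}")
```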
## Real-World Examples
### OpenAI Integration
```python
import openai
from textprompts import load_prompt

system_prompt = load_prompt("prompts/customer_support_system.txt")
user_prompt = load_prompt("prompts/user_query_template.txt")

response = openai.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {
            "role": "system",
            "content": system_prompt.prompt.format(
                company_name="ACME Corp",
                support_level="premium"
            )
        },
        {
            "role": "user",
            "content": user_prompt.prompt.format(
                query="How do I return an item?",
                customer_tier="premium"
            )
        }
    ]
)
```
### Function Calling (Tool Definitions)
Yes, you can version control your whole tool schemas too:
```
# tools/search_products.txt
---
title = "Product Search Tool"
version = "2.1.0"
description = "Search our product catalog"
---

{
  "type": "function",
  "function": {
    "name": "search_products",
    "description": "Search for products in our catalog",
    "parameters": {
      "type": "object",
      "properties": {
        "query": {
          "type": "string",
          "description": "Search query for products"
        },
        "category": {
          "type": "string",
          "enum": ["electronics", "clothing", "books"],
          "description": "Product category to search within"
        },
        "max_results": {
          "type": "integer",
          "default": 10,
          "description": "Maximum number of results to return"
        }
      },
      "required": ["query"]
    }
  }
}
```
```python
import json
import openai
from textprompts import load_prompt

# Load and parse the tool definition
tool_prompt = load_prompt("tools/search_products.txt")
tool_schema = json.loads(tool_prompt.prompt)

# Use with OpenAI
response = openai.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Find me some electronics"}],
    tools=[tool_schema]
)
```
### Environment-Specific Prompts
```python
import os
from textprompts import load_prompt

env = os.getenv("ENVIRONMENT", "development")
system_prompt = load_prompt(f"prompts/{env}/system.txt")

# prompts/development/system.txt - verbose logging
# prompts/production/system.txt - concise responses
```
### Prompt Versioning & Experimentation
```python
from textprompts import load_prompt

# Easy A/B testing
prompt_version = "v2"  # or "v1", "experimental", etc.
prompt = load_prompt(f"prompts/{prompt_version}/system.txt")

# Git handles the rest:
# git checkout experiment-branch
# git diff main -- prompts/
```
## File Format
TextPrompts uses TOML front-matter (optional) followed by your prompt content:
```
---
title = "My Prompt"
version = "1.0.0"
author = "Your Name"
description = "What this prompt does"
created = "2024-01-15"
tags = ["customer-support", "greeting"]
---

Your prompt content goes here.

Use {variables} for templating.
```
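
The front-matter keys surface on `prompt.meta`. A small sketch of reading them back (`title` is shown earlier; treating `version` and `description` as attributes here is an assumption based on the fields strict mode requires):

```python
from textprompts import load_prompt

prompt = load_prompt("my_prompt.txt", meta="allow")
if prompt.meta:
    # title, version, description are the fields strict mode checks for
    print(prompt.meta.title, prompt.meta.version)
```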
### Metadata Modes
Choose the right level of strictness for your use case:
1. **IGNORE** (default) - Simple text file loading, filename becomes title
2. **ALLOW** - Load metadata if present, don't worry about completeness
3. **STRICT** - Require complete metadata (title, description, version) for production safety
```python
# Set globally
textprompts.set_metadata("ignore")  # Default: simple file loading
textprompts.set_metadata("allow")   # Flexible: load any metadata
textprompts.set_metadata("strict")  # Production: require complete metadata

# Or override per prompt
prompt = textprompts.load_prompt("file.txt", meta="strict")
```
## API Reference
### `load_prompt(path, *, meta=None)`
Load a single prompt file.
- `path`: Path to the prompt file
- `meta`: Metadata handling mode - `MetadataMode.STRICT`, `MetadataMode.ALLOW`, `MetadataMode.IGNORE`, or string equivalents. None uses global config.
Returns a `Prompt` object with:
- `prompt.meta`: Metadata from TOML front-matter (always present)
- `prompt.prompt`: The prompt content as a `PromptString`
- `prompt.path`: Path to the original file
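
A minimal sketch using the `greeting.txt` file from the Quick Start:

```python
from textprompts import load_prompt

prompt = load_prompt("greeting.txt")
print(prompt.path)        # path to the original file
print(prompt.meta.title)  # "Customer Greeting" (from the front-matter)
print(str(prompt))        # raw prompt text
```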
### `load_prompts(*paths, recursive=False, glob="*.txt", meta=None, max_files=1000)`
Load multiple prompts from files or directories.
- `*paths`: Files or directories to load
- `recursive`: Search directories recursively (default: False)
- `glob`: File pattern to match (default: "*.txt")
- `meta`: Metadata handling mode - `MetadataMode.STRICT`, `MetadataMode.ALLOW`, `MetadataMode.IGNORE`, or string equivalents. None uses global config.
- `max_files`: Maximum files to process (default: 1000)
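
For example, files and directories can be mixed in one call (the layout is illustrative):

```python
from textprompts import load_prompts

# Combine an individual file with a directory, searched recursively
prompts = load_prompts(
    "prompts/greeting.txt",
    "prompts/support/",
    recursive=True,
    glob="*.txt",
    meta="allow",
)
for p in prompts:
    print(p.path)
```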
### `set_metadata(mode)` / `get_metadata()`
Set or get the global metadata handling mode.
- `mode`: `MetadataMode.STRICT`, `MetadataMode.ALLOW`, `MetadataMode.IGNORE`, or string equivalents
```python
import textprompts

# Set global mode
textprompts.set_metadata(textprompts.MetadataMode.STRICT)
textprompts.set_metadata("allow")  # String also works

# Get current mode
current_mode = textprompts.get_metadata()
```
### `save_prompt(path, content)`
Save a prompt to a file.
- `path`: Path to save the prompt file
- `content`: Either a string (creates template with required fields) or a `Prompt` object
```python
from textprompts import save_prompt

# Save a simple prompt with metadata template
save_prompt("my_prompt.txt", "You are a helpful assistant.")

# Save a Prompt object with full metadata
save_prompt("my_prompt.txt", prompt_object)
```
### `PromptString`
A string subclass that validates `format()` calls:
```python
from textprompts import PromptString

template = PromptString("Hello {name}, you are {role}")

# Strict formatting (default) - all placeholders required
result = template.format(name="Alice", role="admin")  # ✅ Works
result = template.format(name="Alice")                # ❌ Raises ValueError

# Partial formatting - replace only available placeholders
partial = template.format(name="Alice", skip_validation=True)  # ✅ "Hello Alice, you are {role}"

# Access placeholder information
print(template.placeholders)  # {'name', 'role'}
```
## Error Handling
TextPrompts provides specific exception types:
```python
from textprompts import (
    TextPromptsError,      # Base exception
    FileMissingError,      # File not found
    MissingMetadataError,  # No TOML front-matter when required
    InvalidMetadataError,  # Invalid TOML syntax
    MalformedHeaderError,  # Malformed front-matter structure
    MetadataMode,          # Metadata handling mode enum
    set_metadata,          # Set global metadata mode
    get_metadata,          # Get global metadata mode
)
```
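
A sketch of catching these around a load (the path is illustrative):

```python
from textprompts import load_prompt, FileMissingError, InvalidMetadataError

try:
    prompt = load_prompt("prompts/checkout.txt", meta="strict")
except FileMissingError:
    print("Prompt file not found - check the path")
except InvalidMetadataError as err:
    print(f"Front-matter is not valid TOML: {err}")
```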
## CLI Tool
TextPrompts includes a CLI for quick prompt inspection:
```bash
# View a single prompt
textprompts show greeting.txt

# List all prompts in a directory
textprompts list prompts/ --recursive

# Validate prompts
textprompts validate prompts/
```
## Best Practices
1. **Organize by purpose**: Group related prompts in folders
   ```
   prompts/
   ├── customer-support/
   ├── content-generation/
   └── code-review/
   ```

2. **Use semantic versioning**: Version your prompts like code
   ```
   version = "1.2.0"  # major.minor.patch
   ```

3. **Document your variables**: List expected variables in descriptions
   ```
   description = "Requires: customer_name, issue_type, agent_name"
   ```

4. **Test your prompts**: Write unit tests for critical prompts
   ```python
   def test_greeting_prompt():
       prompt = load_prompt("greeting.txt")
       # Fill only the variable under test; skip_validation leaves the rest in place
       result = prompt.prompt.format(customer_name="Test", skip_validation=True)
       assert "Test" in result
   ```

5. **Use environment-specific prompts**: Different prompts for dev/prod
   ```python
   env = os.getenv("ENV", "development")
   prompt = load_prompt(f"prompts/{env}/system.txt")
   ```
## Why Not Just Use String Templates?
You could, but then you lose:
- **Metadata tracking** (versions, authors, descriptions)
- **Safe formatting** (catch missing variables)
- **Organized storage** (searchable, documentable)
- **Version control benefits** (proper diffs, blame, history)
- **Tooling support** (CLI, validation, testing)
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
MIT License - see [LICENSE](LICENSE) for details.
---
**textprompts** - Because your prompts deserve better than being buried in code strings. 🚀