| Name | prompt-versions-manager |
| --- | --- |
| Version | 0.0.2 |
| Summary | A delightful Python package for managing LLM prompts with versioning and i18n-style string management |
| Upload time | 2025-01-12 12:47:12 |
| Requires Python | >=3.8 |
| License | MIT |
| Keywords | llm, prompts, management, versioning, i18n |
# Prompt Versions Manager 🚀
A delightful Python package for managing LLM prompts with versioning and i18n-style string management. Perfect for keeping your AI conversations organized and maintainable!
## Installation 📦
You can install the package directly from PyPI using pip:
```bash
pip install prompt-versions-manager
```
That's it! Now you can start using the package in your Python projects.
## Why Prompt Versions Manager?
Managing prompts for Large Language Models can quickly become messy. As your application grows, you might find yourself:
- Copy-pasting prompts across different files 📋
- Struggling to maintain consistent versions 🔄
- Manually handling string interpolation 🔧
- Losing track of prompt variations 😅
Prompt Versions Manager solves these problems by providing an elegant, i18n-inspired solution that makes prompt management a breeze!
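The i18n analogy is the heart of the design: just as a translation catalog maps a message key to per-locale strings, a prompt manager maps a prompt key to per-version strings. A minimal sketch of that pattern in plain Python (this is not the library's implementation, only the concept it wraps — `catalog` and `t` are illustrative names):

```python
# Toy illustration of the i18n-style pattern: one key, many versions.
# NOT prompt-versions-manager's code, just the underlying idea.
catalog = {
    "formal": {"greet.user": "Good day, {name}. How may I assist you?"},
    "casual": {"greet.user": "Hey {name}! What can I do for you?"},
}

def t(version: str, key: str, **params) -> str:
    """Look up a prompt by version and key, then fill its placeholders."""
    return catalog[version][key].format(**params)

print(t("formal", "greet.user", name="Ada"))
# Good day, Ada. How may I assist you?
```

Swapping `version` swaps the wording while every call site keeps the same stable key — the same trick translation frameworks use for locales.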
## Use Cases 💡
### Multi-Model Prompt Management
Use versions to maintain model-specific prompts for the same instructions. For example:
```python
from prompt_versions_manager import PromptVersionsManager

p = PromptVersionsManager()

# First, set different prompt values for the same key for each model version
p.version("gpt4").set(
    "system.role",
    "You are GPT-4, a large language model trained by OpenAI. Follow the user's instructions carefully and format responses appropriately."
)

p.version("claude").set(
    "system.role",
    "You are Claude, an AI assistant created by Anthropic to be helpful, harmless, and honest. Always approach tasks thoughtfully and aim to provide accurate, nuanced responses."
)

p.version("llama2").set(
    "system.role",
    "You are Llama 2, an AI assistant trained by Meta. Provide direct, factual responses and always maintain a helpful and respectful tone."
)

# Attempting to overwrite without permission will raise an error
try:
    p.version("gpt4").set(
        "system.role",
        "A different system role"  # This will raise ValueError
    )
except ValueError as e:
    print(e)  # Prompt 'system.role' already exists in version 'gpt4'. Set overwrite=True to update it.

# To update an existing prompt, explicitly set overwrite=True
p.version("gpt4").set(
    "system.role",
    "You are GPT-4, an advanced AI model. You excel at complex reasoning and provide detailed, accurate responses.",
    overwrite=True  # Now it will update the existing prompt
)

# Now you can get model-specific responses using the same prompt key
gpt4_response = p.version("gpt4").t("system.role")
claude_response = p.version("claude").t("system.role")
llama_response = p.version("llama2").t("system.role")

# Get all versions of the same prompt key
versions = p.get_all_versions("system.role")
# Returns: [
#   {"version": "gpt4", "value": "You are GPT-4, an advanced AI model..."},
#   {"version": "claude", "value": "You are Claude, an AI assistant..."},
#   {"version": "llama2", "value": "You are Llama 2, an AI assistant..."}
# ]
```
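Once each model version carries its own `system.role`, the lookup result can feed whatever chat client you use. A hedged sketch of that glue (plain dicts stand in for the manager here, and the OpenAI-style `messages` shape is an assumption about your client, not part of this package):

```python
# Hypothetical glue code: pick the right system prompt per model and
# build an OpenAI-style messages list. The system_prompts dict stands
# in for calls like p.version(model).t("system.role").
system_prompts = {
    "gpt4": "You are GPT-4, an advanced AI model.",
    "claude": "You are Claude, an AI assistant created by Anthropic.",
}

def build_messages(model: str, user_text: str) -> list:
    """Assemble a chat request body from the model-specific system prompt."""
    return [
        {"role": "system", "content": system_prompts[model]},
        {"role": "user", "content": user_text},
    ]

msgs = build_messages("claude", "Summarize this article.")
```

The point is that switching models becomes a one-argument change; the prompt text itself stays centralized.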
### Multiple Prompt Sets
Use named managers to handle different types of prompts, each with its own versions:
```python
# Setup managers for different use cases
chat = PromptVersionsManager.setup("chat")
code = PromptVersionsManager.setup("code")

# Set different versions of the same chat prompt
chat.version("formal").set(
    "assist.task",
    "I understand you need assistance with {task}. Let me analyze your requirements: {requirements}"
)

chat.version("casual").set(
    "assist.task",
    "I'll help you with {task}! Looking at your requirements: {requirements}"
)

# Try to modify an existing prompt (will fail without overwrite=True)
try:
    chat.version("formal").set(
        "assist.task",
        "Let me help you with {task}. Your requirements are: {requirements}"
    )
except ValueError:
    print("Cannot overwrite existing prompt without overwrite=True")

# Set different versions of the same code review prompt
code.version("detailed").set(
    "review.code",
    "I'll perform a comprehensive review of your {language} code, analyzing: architecture, performance, security, and best practices."
)

code.version("quick").set(
    "review.code",
    "I'll do a quick review of your {language} code, focusing on critical issues and basic improvements."
)

# Use the same prompt keys with different versions
formal_response = chat.version("formal").t("assist.task",
                                           task="data analysis",
                                           requirements="must be scalable")

casual_response = chat.version("casual").t("assist.task",
                                           task="data analysis",
                                           requirements="must be scalable")

detailed_review = code.version("detailed").t("review.code", language="Python")
quick_review = code.version("quick").t("review.code", language="Python")
```
Your `prompts` directory could look like:
```
prompts/
├── chat/
│   ├── formal.json
│   └── casual.json
├── code/
│   ├── detailed.json
│   └── quick.json
└── default/
    └── v1.json
```
## Features ✨
- **Automatic Prompt Creation** - Just use prompts and they're automatically saved
- **Version Control** - Maintain different versions of your prompts
- **Variable Interpolation** - Use placeholders like `{name}` in your prompts
- **Named Managers** - Organize prompts by use case or feature
- **JSON Storage** - Simple, human-readable storage format
- **FastAPI Integration** - Ready-to-use REST endpoints for your prompts
- **Zero Configuration** - Works out of the box with sensible defaults
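The "automatic prompt creation" feature — look a key up and, if absent, store it — boils down to lookup-or-create semantics. A conceptual sketch with a plain dict (this is not the package's code; the real library additionally persists new keys to the version's JSON file on disk):

```python
# Conceptual sketch of lookup-or-create semantics; NOT the package's code.
# The real library also writes newly created keys to a JSON file.
store = {}

def translate(key, default=None):
    # setdefault stores the template on first use, mimicking auto-creation
    return store.setdefault(key, default if default is not None else key)

translate("welcome.user", "Welcome aboard, {name}!")
assert "welcome.user" in store  # the key was created on first lookup
```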
## Quick Start 🏃‍♂️
```python
from prompt_versions_manager import PromptVersionsManager

# Get the default prompt manager
p = PromptVersionsManager()

# Set a prompt with placeholders
p.set(
    "introduce.expert",
    "I am a highly knowledgeable {field} expert with {years} years of experience. I can provide detailed, accurate information about {specialization}."
)

# Use the prompt with different parameters
ml_expert = p.t("introduce.expert",
                field="machine learning",
                years="10",
                specialization="neural networks")

# Set different versions of the same prompt
p.version("technical").set(
    "analyze.code",
    "Perform a technical analysis of this {language} code, focusing on {aspect}. Provide specific recommendations for improvement."
)

p.version("simple").set(
    "analyze.code",
    "Look at this {language} code and suggest simple ways to make it better, especially regarding {aspect}."
)

# Attempting to overwrite without permission will raise an error
try:
    p.version("technical").set(
        "analyze.code",
        "A different analysis prompt"  # This will raise ValueError
    )
except ValueError as e:
    print(e)  # Prompt 'analyze.code' already exists in version 'technical'. Set overwrite=True to update it.

# To update an existing prompt, explicitly set overwrite=True
p.version("technical").set(
    "analyze.code",
    "Perform a technical analysis of this {language} code, focusing on {aspect}. Provide specific recommendations for improvement.",
    overwrite=True  # Now it will update the existing prompt
)

# Use different versions of the same prompt
technical_review = p.version("technical").t("analyze.code",
                                            language="Python",
                                            aspect="performance")

simple_review = p.version("simple").t("analyze.code",
                                      language="Python",
                                      aspect="performance")

# List all versions
versions = p.versions()  # ['technical', 'simple']

# Find all versions of a specific prompt
analysis_versions = p.versions_for("analyze.code")
```
## FastAPI Integration 🌐
```python
from fastapi import FastAPI, Request
from prompt_versions_manager import PromptVersionsManager

app = FastAPI()
chat = PromptVersionsManager("chat")
code = PromptVersionsManager("code")

@app.get("/prompts/{manager}/{version}/{prompt_id}")
async def get_prompt(manager: str, version: str, prompt_id: str, request: Request):
    # FastAPI does not accept **kwargs in route signatures, so read the
    # placeholder values from the query string instead.
    params = dict(request.query_params)
    p = PromptVersionsManager(manager)
    return {"prompt": p.version(version).t(prompt_id, **params)}

@app.get("/versions/{manager}")
async def list_versions(manager: str):
    return {"versions": PromptVersionsManager(manager).versions()}
```
## Configuration 🛠️
Configure through environment variables:
```bash
export PROMPTS_DIR="./prompts" # Base directory for all prompt managers
export PROMPT_VERSION="v1" # Default version
```
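The same settings can also be applied from Python, assuming the package reads these environment variables when a manager is first created — so set them before instantiating any `PromptVersionsManager`:

```python
import os

# Assumption: the package reads PROMPTS_DIR / PROMPT_VERSION at
# initialization, so these must be set before any manager is created.
os.environ["PROMPTS_DIR"] = "./prompts"
os.environ["PROMPT_VERSION"] = "v1"
```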
## Directory Structure 📁
Each named manager gets its own subdirectory with version-specific files:
```
prompts/
├── chat/              # Chat prompts
│   ├── formal.json
│   └── casual.json
├── code/              # Code assistance prompts
│   ├── detailed.json
│   └── quick.json
└── default/
    └── v1.json
```
Example `chat/formal.json`:
```json
{
  "assist.task": "I understand you need assistance with {task}. Let me analyze your requirements: {requirements}",
  "introduce.expert": "I am a highly knowledgeable {field} expert with {years} years of experience. I can provide detailed, accurate information about {specialization}."
}
```
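Because version files are plain JSON mapping key to template, they are easy to inspect or post-process outside the library. A small sketch of parsing such a file's contents and filling its placeholders with `str.format` (an assumption about the interpolation style, based on the `{name}` syntax shown above):

```python
import json

# The kind of payload a version file like chat/formal.json holds.
raw = (
    '{"assist.task": "I understand you need assistance with {task}. '
    'Let me analyze your requirements: {requirements}"}'
)

prompts = json.loads(raw)

# Fill the placeholders the same way the library's t() appears to.
filled = prompts["assist.task"].format(
    task="data analysis",
    requirements="must be scalable",
)
```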
## Contributing 🤝
We love contributions! Feel free to:
1. Fork the repository
2. Create a feature branch
3. Submit a pull request
## License 📄
MIT License - feel free to use this in your projects!
Raw data
{
"_id": null,
"home_page": null,
"name": "prompt-versions-manager",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "llm, prompts, management, versioning, i18n",
"author": null,
"author_email": "Pablo Schaffner <pablo@puntorigen.com>",
"download_url": "https://files.pythonhosted.org/packages/97/9a/1770e33e2bdd2a8ae454ea59114f797ad884aac3540478f0a2b4d9f1633e/prompt_versions_manager-0.0.2.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": "MIT",
"summary": "A delightful Python package for managing LLM prompts with versioning and i18n-style string management",
"version": "0.0.2",
"project_urls": {
"Homepage": "https://github.com/puntorigen/prompt-versions-manager"
},
"split_keywords": [
"llm",
" prompts",
" management",
" versioning",
" i18n"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "4eac11093ea969fefe3e3b3a52e0691fb6926924121b926ae2e9ee861496255f",
"md5": "b3b83dce53e26c52cb70cec86e1c84e4",
"sha256": "28b00a3d5d68902b06ed2025e91e8ac84b73ad9f0168b2ebfdb57030282dc67d"
},
"downloads": -1,
"filename": "prompt_versions_manager-0.0.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "b3b83dce53e26c52cb70cec86e1c84e4",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 9530,
"upload_time": "2025-01-12T12:47:09",
"upload_time_iso_8601": "2025-01-12T12:47:09.604160Z",
"url": "https://files.pythonhosted.org/packages/4e/ac/11093ea969fefe3e3b3a52e0691fb6926924121b926ae2e9ee861496255f/prompt_versions_manager-0.0.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "979a1770e33e2bdd2a8ae454ea59114f797ad884aac3540478f0a2b4d9f1633e",
"md5": "a6838aa42a90780b3e1d9fdda08cfc9d",
"sha256": "e9a1496513c96b6d8601f820365703e2d5c239a523f5da9a59a31847150b345c"
},
"downloads": -1,
"filename": "prompt_versions_manager-0.0.2.tar.gz",
"has_sig": false,
"md5_digest": "a6838aa42a90780b3e1d9fdda08cfc9d",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 10551,
"upload_time": "2025-01-12T12:47:12",
"upload_time_iso_8601": "2025-01-12T12:47:12.279584Z",
"url": "https://files.pythonhosted.org/packages/97/9a/1770e33e2bdd2a8ae454ea59114f797ad884aac3540478f0a2b4d9f1633e/prompt_versions_manager-0.0.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-01-12 12:47:12",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "puntorigen",
"github_project": "prompt-versions-manager",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "prompt-versions-manager"
}