| Name | prompteer |
| Version | 0.2.0 |
| Summary | A lightweight file-based prompt manager for LLM workflows. Simple, scalable, and version-control friendly. |
| upload_time | 2025-10-25 04:32:30 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.7 |
| license | None |
| keywords | prompt, llm, ai, prompt-engineering, prompt-management |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# prompteer
A lightweight file-based prompt manager for LLM workflows. Simple, scalable, and version-control friendly.
## Features
- **File-based prompt management** - Store prompts as markdown files
- **Intuitive dot notation API** - Access prompts naturally: `prompts.chat.system()`
- **Dynamic routing** - Next.js-style `[param]` directories for flexible prompt selection
- **Version control friendly** - Track prompt changes with Git
- **Zero configuration** - Start using immediately
- **IDE autocomplete support** - Full type hints with generated stubs
- **Lightweight** - Minimal dependencies (only PyYAML)
- **Python 3.7+** - Wide compatibility
## Installation
```bash
pip install prompteer
```
## Quick Start
### 1. Initialize Your Prompt Directory
Use the `init` command to create a prompt directory with example prompts:
```bash
prompteer init
```
This creates a `prompts/` directory with:
- Basic chat prompts
- Dynamic routing examples
- Variable type demonstrations
Or initialize in a custom directory:
```bash
prompteer init my-prompts
```
### 2. Or Create Your Own Structure
```
my-project/
├── prompts/
│   ├── greeting/
│   │   └── hello.md
│   └── chat/
│       └── system.md
└── main.py
```
### 3. Write Prompts with Variables
**`prompts/chat/system.md`:**
```markdown
---
description: System message for chat
role: AI role description
personality: AI personality traits
---
You are a {role}.
Your personality is {personality}.
Please be helpful, accurate, and respectful in all interactions.
```
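Under the hood, rendering a prompt like this only requires splitting the `---`-delimited frontmatter from the body and substituting the `{variable}` placeholders. The following is a minimal, illustrative sketch of that idea, not prompteer's actual implementation:

```python
# Minimal sketch of frontmatter splitting plus variable substitution.
# Illustrative only -- this is not prompteer's own code.
def render_prompt(text: str, **variables: object) -> str:
    """Strip a '---'-delimited frontmatter block, then format the body."""
    if text.startswith("---"):
        # Frontmatter is everything between the first two '---' markers.
        _, _, rest = text.partition("---")
        _, _, body = rest.partition("---")
    else:
        body = text
    return body.strip().format(**variables)

source = """---
description: System message for chat
role: AI role description
---
You are a {role}."""

print(render_prompt(source, role="helpful assistant"))
# -> You are a helpful assistant.
```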
### 4. Use in Your Code
```python
from pathlib import Path
from prompteer import create_prompts
# Option 1: Relative to current working directory
prompts = create_prompts("./prompts")
# Option 2: Relative to your script file (recommended for packages/libraries)
PROMPTS_DIR = Path(__file__).parent / "prompts"
prompts = create_prompts(PROMPTS_DIR)
# Use with variables
system_message = prompts.chat.system(
    role="helpful assistant",
    personality="friendly and patient"
)
print(system_message)
# Output:
# You are a helpful assistant.
# Your personality is friendly and patient.
# Please be helpful, accurate, and respectful in all interactions.
```
**Path Resolution:**
- Relative paths (e.g., `"./prompts"`) are resolved from the current working directory
- For packages/libraries, use `Path(__file__).parent / "prompts"` to ensure it works regardless of where the code is run from
- Absolute paths always work but are less portable
## Type Hints & IDE Autocomplete
Generate type stubs for perfect IDE autocomplete:
```bash
prompteer generate-types ./prompts -o prompts.pyi
```
Now your IDE will provide:
- ✅ Autocomplete for all prompt paths
- ✅ Parameter suggestions
- ✅ Type checking
- ✅ Documentation tooltips
```python
from prompteer import create_prompts
prompts = create_prompts("./prompts")
# Full IDE autocomplete support!
prompts.chat.system(role="...", personality="...")
```
### Watch Mode
Automatically regenerate types when prompts change:
```bash
prompteer generate-types ./prompts --watch
```
## Variable Types
Specify types in your prompt frontmatter:
```markdown
---
description: My prompt
name(str): User's name
age(int): User's age
score(float): User's score
active(bool): Is user active
count(number): Can be int or float
data(any): Any type
---
Hello {name}, you are {age} years old!
```
Supported types:
- `str` (default)
- `int`
- `float`
- `bool`
- `number` (int or float)
- `any`
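A declaration such as `age(int): User's age` can be parsed with a small regular expression that separates the variable name from its optional type annotation. A hedged sketch of that parsing rule (hypothetical helper, not prompteer's implementation):

```python
import re
from typing import Tuple

# Illustrative parser for typed variable declarations like "age(int)".
# Mirrors the frontmatter convention above; not prompteer's own code.
DECL = re.compile(r"^(?P<name>\w+)(?:\((?P<type>str|int|float|bool|number|any)\))?$")

def parse_declaration(key: str) -> Tuple[str, str]:
    """Return (variable_name, type_name); the type defaults to 'str'."""
    match = DECL.match(key.strip())
    if match is None:
        raise ValueError(f"invalid variable declaration: {key!r}")
    return match.group("name"), match.group("type") or "str"

print(parse_declaration("age(int)"))  # -> ('age', 'int')
print(parse_declaration("name"))      # -> ('name', 'str')
```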
## Dynamic Routing
Create flexible prompts that adapt based on runtime parameters, similar to Next.js dynamic routes.
### Basic Example
**File Structure:**
```
prompts/
└── question/
    └── [type]/          # Dynamic parameter: type
        ├── basic/       # type="basic"
        │   └── user.md
        ├── advanced/    # type="advanced"
        │   └── user.md
        └── default.md   # Fallback when no match
```
**Usage:**
```python
from prompteer import create_prompts
prompts = create_prompts("./prompts")
# Select different prompt versions
basic = prompts.question.user(type="basic", name="Alice")
advanced = prompts.question.user(type="advanced", name="Bob", context="Python expert")
# Fallback to default.md if type not found
fallback = prompts.question.user(type="expert") # Uses default.md
```
### How It Works
1. `[type]` directory = dynamic parameter
2. `basic/`, `advanced/` = possible values for the parameter
3. `default.md` = fallback when value doesn't match any directory
4. If no default.md exists, raises `PromptNotFoundError`
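The resolution rule can be sketched with plain `pathlib`: try the subdirectory matching the runtime value, then fall back to `default.md`. This is a hypothetical helper illustrating the rule, not prompteer's implementation:

```python
from pathlib import Path
import tempfile

# Sketch of the [param] fallback rule: use the subdirectory matching the
# runtime value, else default.md, else raise. Hypothetical helper only.
def resolve(dynamic_dir: Path, value: str, filename: str = "user.md") -> Path:
    candidate = dynamic_dir / value / filename
    if candidate.exists():
        return candidate
    fallback = dynamic_dir / "default.md"
    if fallback.exists():
        return fallback
    raise FileNotFoundError(f"no prompt for {value!r} and no default.md")

# Build a throwaway prompts tree matching the structure above.
root = Path(tempfile.mkdtemp()) / "[type]"
(root / "basic").mkdir(parents=True)
(root / "basic" / "user.md").write_text("basic prompt")
(root / "default.md").write_text("default prompt")

print(resolve(root, "basic").read_text())   # -> basic prompt
print(resolve(root, "expert").read_text())  # -> default prompt
```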
### Type Hints with Dynamic Routing
Generate type stubs to get IDE autocomplete for available values:
```bash
prompteer generate-types ./prompts -o prompts.pyi
```
Your generated type stub will include `Literal` types:
```python
def user(
    self,
    type: Literal["basic", "advanced"],  # Autocomplete with available values!
    name: str = "",
    **kwargs: Any
) -> str: ...
```
## Real-World Example
### Prompt File Structure
```
prompts/
├── code-review/
│   └── review-request.md
├── translation/
│   └── translate.md
└── chat/
    ├── system.md
    └── user-query.md
```
### Using with LLM APIs
```python
from prompteer import create_prompts
import openai
prompts = create_prompts("./prompts")
# Prepare system message
system_msg = prompts.chat.system(
    role="Python expert",
    personality="concise and technical"
)

# Prepare user query
user_msg = prompts.chat.userQuery(
    question="How do I handle exceptions in Python?",
    context="I'm a beginner learning best practices."
)

# Send to OpenAI
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg}
    ]
)
```
## CLI Commands
### Initialize Project
Create a new prompts directory with example prompts:
```bash
# Create in default 'prompts/' directory
prompteer init
# Create in custom directory
prompteer init my-prompts
# Overwrite existing directory
prompteer init prompts --force
```
The `init` command creates:
- Basic chat prompts with variables
- Dynamic routing examples (`[type]` directories)
- Sample prompts demonstrating all features
### Generate Type Stubs
```bash
# Default command - can omit 'generate-types'
prompteer ./prompts -o prompts.pyi
# Or explicitly use generate-types
prompteer generate-types ./prompts -o prompts.pyi
# Watch mode - auto-regenerate on changes
prompteer ./prompts --watch
# Specify encoding
prompteer ./prompts --encoding utf-8
```
### Help
```bash
prompteer --help
prompteer generate-types --help
```
## Advanced Usage
### Dynamic Prompt Selection
```python
from prompteer import create_prompts
prompts = create_prompts("./prompts")
# Select prompts dynamically
prompt_type = "code_review"
if prompt_type == "code_review":
    result = prompts.codeReview.reviewRequest(
        language="Python",
        code="def hello(): print('hi')",
        focus_areas="style and best practices"
    )
```
### Error Handling
```python
from prompteer import create_prompts, PromptNotFoundError
try:
    prompts = create_prompts("./prompts")
    result = prompts.nonexistent.prompt()
except PromptNotFoundError as e:
    print(f"Prompt not found: {e}")
```
### Multiple Prompt Directories
```python
from prompteer import create_prompts
# Different prompt sets for different purposes
chat_prompts = create_prompts("./prompts/chat")
review_prompts = create_prompts("./prompts/reviews")
system_msg = chat_prompts.system(role="assistant")
review_msg = review_prompts.codeReview(language="Python")
```
## Why prompteer?
**Before prompteer:**
```python
# Prompts scattered in code
system_prompt = """You are a helpful assistant.
Your personality is friendly.
Please be respectful."""
# Hard to maintain, version, and reuse
```
**With prompteer:**
```python
# Prompts organized in files
# Easy to version control
# Reusable across projects
# Type-safe with autocomplete
prompts = create_prompts("./prompts")
system_prompt = prompts.chat.system(
    role="helpful assistant",
    personality="friendly"
)
```
## File Naming Convention
- **Directories**: Use `kebab-case` → becomes `camelCase` in Python
  - `code-review/` → `prompts.codeReview`
- **Files**: Use `kebab-case.md` → becomes `camelCase()` method
  - `user-query.md` → `prompts.chat.userQuery()`
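The name mapping above is a straightforward kebab-case to camelCase conversion. A minimal sketch of the rule (illustrative, not prompteer's own code):

```python
# Illustrative kebab-case -> camelCase conversion matching the convention above.
# Not prompteer's own code.
def kebab_to_camel(name: str) -> str:
    head, *rest = name.split("-")
    return head + "".join(part.capitalize() for part in rest)

print(kebab_to_camel("code-review"))  # -> codeReview
print(kebab_to_camel("user-query"))   # -> userQuery
```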
## Requirements
- Python 3.7+
- PyYAML >= 5.1
Optional:
- watchdog (for `--watch` mode)
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
MIT License - see [LICENSE](LICENSE) file for details.
---
## For AI Agents & LLM Tools
### Quick Integration Guide
**Installation from GitHub (before PyPI release):**
```bash
pip install git+https://github.com/ibare/prompteer.git
```
**Installation from PyPI (when available):**
```bash
pip install prompteer
```
### Essential Usage Pattern
```python
from pathlib import Path
from prompteer import create_prompts
# 1. Load prompts from directory
# Option A: Relative to current working directory
prompts = create_prompts("./prompts")
# Option B: Relative to script file (recommended for packages)
PROMPTS_DIR = Path(__file__).parent / "prompts"
prompts = create_prompts(PROMPTS_DIR)
# 2. Access prompts with dot notation
result = prompts.chat.system(
    role="helpful assistant",
    personality="friendly and patient"
)
# 3. Use the rendered prompt
print(result)
```
**Important - Path Resolution:**
- Relative paths are resolved from current working directory (CWD)
- For library/package usage, use `Path(__file__).parent / "prompts"` pattern
- This ensures prompts are found regardless of where the host application runs from
### Prompt File Format
Create markdown files with YAML frontmatter:
```markdown
---
description: System message prompt
role: AI role description
personality: AI personality traits
---
You are a {role}.
Your personality is {personality}.
```
### File Structure Convention
```
prompts/
├── chat/
│   ├── system.md        → prompts.chat.system()
│   └── user-query.md    → prompts.chat.userQuery()
└── code-review/
    └── review.md        → prompts.codeReview.review()
```
**Key Convention**: `kebab-case` files/directories → `camelCase` Python methods
### Dynamic Routing (v0.2.0+)
Use `[param]` directories for runtime prompt selection:
```
prompts/
└── question/
    └── [type]/          # Dynamic parameter
        ├── basic/
        │   └── user.md
        ├── advanced/
        │   └── user.md
        └── default.md   # Fallback
```
```python
prompts = create_prompts("./prompts")
# Select different versions based on runtime parameter
basic = prompts.question.user(type="basic", name="Alice")
advanced = prompts.question.user(type="advanced", name="Bob", context="expert")
# Automatic fallback to default.md if value doesn't match
fallback = prompts.question.user(type="expert") # Uses default.md
```
**Type safety with Literal types:**
```python
# Generated type stub includes available values
def user(
    self,
    type: Literal["basic", "advanced"],  # IDE autocomplete!
    name: str = "",
    **kwargs: Any
) -> str: ...
```
### Type Hints (Optional)
```bash
# Generate type stubs for IDE autocomplete
prompteer generate-types ./prompts -o prompts.pyi
```
### Key Implementation Files
- `src/prompteer/core.py` - Main `Prompteer` class and `create_prompts()` function
- `src/prompteer/proxy.py` - Dynamic attribute access via `__getattr__`
- `src/prompteer/template.py` - Variable substitution engine
- `src/prompteer/metadata.py` - YAML frontmatter parsing
- `src/prompteer/type_generator.py` - Type stub generation
### Common Patterns
**Dynamic prompt selection:**
```python
prompts = create_prompts("./prompts")
# Select prompt based on runtime condition
if task_type == "code_review":
    prompt = prompts.codeReview.reviewRequest(language="Python", code=code)
elif task_type == "translation":
    prompt = prompts.translation.translate(source="EN", target="KO", text=text)
```
**Error handling:**
```python
from prompteer import create_prompts, PromptNotFoundError
try:
    prompts = create_prompts("./prompts")
    result = prompts.some.prompt()
except PromptNotFoundError as e:
    print(f"Prompt not found: {e}")
```
### Supported Variable Types
In YAML frontmatter:
- `name: description` - defaults to `str`
- `age(int): description` - integer
- `score(float): description` - float
- `active(bool): description` - boolean
- `count(number): description` - int or float
- `data(any): description` - any type
### Testing
Examples available in `examples/` directory:
- `examples/basic_usage.py` - Basic features
- `examples/llm_integration.py` - LLM API integration
- `examples/advanced_usage.py` - Advanced patterns
- `examples/dynamic_routing.py` - Dynamic routing examples
---
## Links
- **GitHub**: https://github.com/ibare/prompteer
- **PyPI**: https://pypi.org/project/prompteer/
- **Documentation**: See [examples/](examples/) directory
- **Issues**: https://github.com/ibare/prompteer/issues
## Raw data
{
"_id": null,
"home_page": null,
"name": "prompteer",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": null,
"keywords": "prompt, llm, ai, prompt-engineering, prompt-management",
"author": null,
"author_email": "Mintae Kim <ibare77@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/ca/c0/298bde76e175d81fb99b93cfa187371f798f1d89cbae73cb0f9f58faf321/prompteer-0.2.0.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": null,
"summary": "A lightweight file-based prompt manager for LLM workflows. Simple, scalable, and version-control friendly.",
"version": "0.2.0",
"project_urls": {
"Bug Tracker": "https://github.com/ibare/prompteer/issues",
"Documentation": "https://github.com/ibare/prompteer#readme",
"Homepage": "https://github.com/ibare/prompteer",
"Repository": "https://github.com/ibare/prompteer"
},
"split_keywords": [
"prompt",
" llm",
" ai",
" prompt-engineering",
" prompt-management"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "215b4c4cb44265b5626965cf401f21f5b069713df63d6eb6b2ced7b14d429f01",
"md5": "756bf6ee6cffaa796669e7b6d120fd2f",
"sha256": "9e2b7120d03a0df2db79dec7b80f606ea3aa567fe6b7c9ef6ca23577d3d89341"
},
"downloads": -1,
"filename": "prompteer-0.2.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "756bf6ee6cffaa796669e7b6d120fd2f",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.7",
"size": 25356,
"upload_time": "2025-10-25T04:32:28",
"upload_time_iso_8601": "2025-10-25T04:32:28.723413Z",
"url": "https://files.pythonhosted.org/packages/21/5b/4c4cb44265b5626965cf401f21f5b069713df63d6eb6b2ced7b14d429f01/prompteer-0.2.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "cac0298bde76e175d81fb99b93cfa187371f798f1d89cbae73cb0f9f58faf321",
"md5": "5e2c243935a3655e0d8dc10e2247c0b3",
"sha256": "f8bbdef975e73ac06b6efa9a317bb26afe4ac4208db5cc294f119c2f7cce2b7e"
},
"downloads": -1,
"filename": "prompteer-0.2.0.tar.gz",
"has_sig": false,
"md5_digest": "5e2c243935a3655e0d8dc10e2247c0b3",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 41259,
"upload_time": "2025-10-25T04:32:30",
"upload_time_iso_8601": "2025-10-25T04:32:30.261270Z",
"url": "https://files.pythonhosted.org/packages/ca/c0/298bde76e175d81fb99b93cfa187371f798f1d89cbae73cb0f9f58faf321/prompteer-0.2.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-25 04:32:30",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "ibare",
"github_project": "prompteer",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "prompteer"
}