| Name | promptly-llm |
| Version | 0.2.0 |
| download | |
| home_page | None |
| Summary | A lightweight library for LLM prompt management, observability/tracing, and optimization |
| upload_time | 2025-10-24 03:34:06 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.9 |
| license | None |
| keywords | llm, prompt, ai, machine-learning, evaluation |
| VCS | [GitHub](https://github.com/orgpromptly/promptly) |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | |
# promptly
A lightweight, developer-friendly library for LLM prompt management, observability/tracing, and optimization.
Python is currently the only supported language.
## Features
- **Prompt Templates**: Jinja2-based templating system for dynamic prompts (see the sketch after this list)
- **Multi-Provider Support**: OpenAI, Anthropic, Google AI (Gemini), and extensible client architecture
- **Built-in Tracing**: Comprehensive observability for prompt execution
- **Genetic Optimization**: LLM-powered genetic algorithms for automated prompt improvement
- **Async Support**: Full async/await support for high-performance applications
- **CLI Interface**: Command-line tools for prompt management
- **Type Safety**: Full type hints and Pydantic models
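To make the templating bullet concrete, the sketch below renders the kind of dynamic prompt a Jinja2-based template system supports. It deliberately uses the `jinja2` library directly rather than promptly's `PromptTemplate` wrapper, so everything shown is standard Jinja2 syntax:

```python
from jinja2 import Template

# A dynamic prompt with a conditional and a loop -- the kind of
# structure Jinja2-based prompt templates can express.
prompt = Template(
    "Summarize the following{% if audience %} for {{ audience }}{% endif %}:\n"
    "{% for doc in documents %}- {{ doc }}\n{% endfor %}"
)

print(prompt.render(audience="executives", documents=["Q3 report", "Churn analysis"]))
```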
## Installation
```bash
# Install UV if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install promptly
uv pip install promptly

# With development dependencies
uv pip install promptly[dev]

# With CLI tools
uv pip install promptly[cli]

# With UI components
uv pip install promptly[ui]
```
## Quick Start
```python
import asyncio
from promptly import PromptRunner, OpenAIClient, PromptTemplate
async def main():
    # Initialize client
    client = OpenAIClient(api_key="your-api-key")

    # Create a prompt template
    template = PromptTemplate(
        name="greeting",
        template="Hello {{ name }}, how are you today?",
        variables=["name"]
    )

    # Create runner with tracing
    runner = PromptRunner(client)

    # Execute prompt
    response = await runner.run(
        template=template,
        variables={"name": "Alice"},
        model="gpt-3.5-turbo"
    )

    print(response.content)

asyncio.run(main())
```
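Because `PromptRunner.run` is a coroutine, fanning a template out over many inputs only takes `asyncio.gather`. A minimal sketch reusing the Quick Start's documented names; the batching pattern itself is plain asyncio, not a promptly-specific API:

```python
import asyncio
from promptly import PromptRunner, OpenAIClient, PromptTemplate

async def main():
    client = OpenAIClient(api_key="your-api-key")
    template = PromptTemplate(
        name="greeting",
        template="Hello {{ name }}, how are you today?",
        variables=["name"]
    )
    runner = PromptRunner(client)

    # Launch one run per name and await them all concurrently.
    names = ["Alice", "Bob", "Carol"]
    responses = await asyncio.gather(*(
        runner.run(template=template, variables={"name": n}, model="gpt-3.5-turbo")
        for n in names
    ))
    for response in responses:
        print(response.content)

asyncio.run(main())
```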
## Prompt Optimization
Promptly includes an advanced genetic algorithm optimizer that uses LLMs to automatically improve your prompts through iterative evaluation and mutation. The optimizer can work with test cases for accuracy-based optimization or without test cases for general quality improvement.
**Quick Example:**
```bash
# Optimize with test cases
promptly optimize \
    --base-prompt "Answer this question: {{question}}" \
    --test-cases my_tests.json \
    --population-size 10 \
    --generations 5

# Quality-based optimization (no test cases needed)
promptly optimize \
    --base-prompt "Write a {{genre}} story about {{character}}" \
    --population-size 8 \
    --generations 4
```
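To make "iterative evaluation and mutation" concrete, here is a self-contained toy version of the loop a genetic prompt optimizer runs. This is a conceptual sketch, not promptly's implementation: `score_prompt` and `mutate_prompt` are hypothetical stand-ins for the LLM-backed evaluation and mutation steps.

```python
import random

def score_prompt(prompt: str) -> float:
    """Hypothetical fitness function; in a real optimizer an LLM grades
    outputs against test cases or a general quality rubric."""
    return random.random()

def mutate_prompt(prompt: str) -> str:
    """Hypothetical mutation; in a real optimizer an LLM rewrites the
    prompt (rephrasing, adding constraints, reordering instructions)."""
    return prompt + " Be concise."

def optimize(base_prompt: str, population_size: int = 8, generations: int = 4) -> str:
    # Seed the population with the base prompt plus mutated copies.
    population = [base_prompt] + [
        mutate_prompt(base_prompt) for _ in range(population_size - 1)
    ]
    for _ in range(generations):
        # Selection: score every candidate and keep the fitter half.
        ranked = sorted(population, key=score_prompt, reverse=True)
        survivors = ranked[: population_size // 2]
        # Mutation: refill the population from randomly chosen survivors.
        population = survivors + [
            mutate_prompt(random.choice(survivors))
            for _ in range(population_size - len(survivors))
        ]
    return max(population, key=score_prompt)

print(optimize("Answer this question: {{question}}"))
```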
For complete documentation on optimization features, configuration options, and examples, see **[OPTIMIZER_README.md](OPTIMIZER_README.md)**.
## CLI Usage
```bash
# Run a simple prompt
promptly run "What is the capital of France?" --model="gpt-3.5-turbo"

# Run with tracing
promptly run "Explain quantum computing" --trace

# View traces
promptly trace
```
## Development
For developers who want to contribute to or extend promptly:
- **[Developer Quick Start](DEVELOPER_QUICKSTART.md)** - Complete development guide
## License
MIT License - see LICENSE file for details.
## Raw data

```json
{
"_id": null,
"home_page": null,
"name": "promptly-llm",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": "llm, prompt, ai, machine-learning, evaluation",
"author": null,
"author_email": "Tucker Leach <leachtucker@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/4e/33/0d27cd6bffdc234b2329c096da2b7bef07e0286de01b20adc66e85c236b6/promptly_llm-0.2.0.tar.gz",
"platform": null,
"description": "# promptly\n\nA lightweight, developer-friendly library for LLM prompt management, observability/tracing, and optimization.\nCurrently with support for Python.\n\n## Features\n\n- **Prompt Templates**: Jinja2-based templating system for dynamic prompts\n- **Multi-Provider Support**: OpenAI, Anthropic, Google AI (Gemini), and extensible client architecture\n- **Built-in Tracing**: Comprehensive observability for prompt execution\n- **Genetic Optimization**: LLM-powered genetic algorithms for automated prompt improvement\n- **Async Support**: Full async/await support for high-performance applications\n- **CLI Interface**: Command-line tools for prompt management\n- **Type Safety**: Full type hints and Pydantic models\n\n## Installation\n\n\n```bash\n# Install UV if you haven't already\ncurl -LsSf https://astral.sh/uv/install.sh | sh\n\n# Install promptly\nuv pip install promptly\n\n# With development dependencies\nuv pip install promptly[dev]\n\n# With CLI tools\nuv pip install promptly[cli]\n\n# With UI components\nuv pip install promptly[ui]\n```\n\n## Quick Start\n\n```python\nimport asyncio\nfrom promptly import PromptRunner, OpenAIClient, PromptTemplate\n\nasync def main():\n # Initialize client\n client = OpenAIClient(api_key=\"your-api-key\")\n\n # Create a prompt template\n template = PromptTemplate(\n name=\"greeting\",\n template=\"Hello {{ name }}, how are you today?\",\n variables=[\"name\"]\n )\n\n # Create runner with tracing\n runner = PromptRunner(client)\n\n # Execute prompt\n response = await runner.run(\n template=template,\n variables={\"name\": \"Alice\"},\n model=\"gpt-3.5-turbo\"\n )\n\n print(response.content)\n\nasyncio.run(main())\n```\n\n## Prompt Optimization\n\nPromptly includes an advanced genetic algorithm optimizer that uses LLMs to automatically improve your prompts through iterative evaluation and mutation. The optimizer can work with test cases for accuracy-based optimization or without test cases for general quality improvement.\n\n**Quick Example:**\n```bash\n# Optimize with test cases\npromptly optimize \\\n --base-prompt \"Answer this question: {{question}}\" \\\n --test-cases my_tests.json \\\n --population-size 10 \\\n --generations 5\n\n# Quality-based optimization (no test cases needed)\npromptly optimize \\\n --base-prompt \"Write a {{genre}} story about {{character}}\" \\\n --population-size 8 \\\n --generations 4\n```\n\nFor complete documentation on optimization features, configuration options, and examples, see **[OPTIMIZER_README.md](OPTIMIZER_README.md)**.\n\n## CLI Usage\n\n```bash\n# Run a simple prompt\npromptly run \"What is the capital of France?\" --model=\"gpt-3.5-turbo\"\n\n# Run with tracing\npromptly run \"Explain quantum computing\" --trace\n\n# View traces\npromptly trace\n```\n\n## Development\n\nFor developers who want to contribute to or extend promptly:\n\n- **[Developer Quick Start.md](DEVELOPER_QUICKSTART.md)** - Complete development guide\n\n## License\n\nMIT License - see LICENSE file for details.\n",
"bugtrack_url": null,
"license": null,
"summary": "A lightweight library for LLM prompt management, observability/tracing, and optimization",
"version": "0.2.0",
"project_urls": {
"Changelog": "https://github.com/orgpromptly/promptly/blob/main/CHANGELOG.md",
"Documentation": "https://github.com/orgpromptly/promptly#readme",
"Homepage": "https://github.com/orgpromptly/promptly",
"Issues": "https://github.com/orgpromptly/promptly/issues",
"Repository": "https://github.com/orgpromptly/promptly"
},
"split_keywords": [
"llm",
" prompt",
" ai",
" machine-learning",
" evaluation"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "33478a22a7d585fbd9fdbda00d061170bc4e4a9e5b33d24c8d3f7b872636a8c7",
"md5": "afe0ba53ce60cf3ab8b6101f28564fc1",
"sha256": "716c6cf8ea23b5072a110f20bae3d2ef77def51e2252b1591f35abb4a158f261"
},
"downloads": -1,
"filename": "promptly_llm-0.2.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "afe0ba53ce60cf3ab8b6101f28564fc1",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 34568,
"upload_time": "2025-10-24T03:34:05",
"upload_time_iso_8601": "2025-10-24T03:34:05.151039Z",
"url": "https://files.pythonhosted.org/packages/33/47/8a22a7d585fbd9fdbda00d061170bc4e4a9e5b33d24c8d3f7b872636a8c7/promptly_llm-0.2.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "4e330d27cd6bffdc234b2329c096da2b7bef07e0286de01b20adc66e85c236b6",
"md5": "9c02fb10ba4eac1b1d83dd9e621befa5",
"sha256": "60d97845dccb382a995df45c162fb1553a502abaa9921186921ad5a6c8a1ce9e"
},
"downloads": -1,
"filename": "promptly_llm-0.2.0.tar.gz",
"has_sig": false,
"md5_digest": "9c02fb10ba4eac1b1d83dd9e621befa5",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 34760,
"upload_time": "2025-10-24T03:34:06",
"upload_time_iso_8601": "2025-10-24T03:34:06.646068Z",
"url": "https://files.pythonhosted.org/packages/4e/33/0d27cd6bffdc234b2329c096da2b7bef07e0286de01b20adc66e85c236b6/promptly_llm-0.2.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-24 03:34:06",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "orgpromptly",
"github_project": "promptly",
"travis_ci": false,
"coveralls": true,
"github_actions": true,
"lcname": "promptly-llm"
}
```