# prympt 1.0.2

- **Name:** prympt
- **Version:** 1.0.2
- **Summary:** A Python Package for LLM Prompting and Interfacing
- **Uploaded:** 2025-02-18 10:59:33
- **Requires Python:** >=3.10
- **Keywords:** llm, prompt, ai
# Prympt: A Python Package for LLM Prompting and Interfacing

Prympt is an open source Python package designed to simplify and standardize interactions with Large Language Models (LLMs). It encapsulates typical boilerplate functionality such as templating, prompt combination, and structured output handling—all in a lightweight package.

Prympt is provided as free software under the MIT license. Feedback and contributions to improve it are welcome!

---

## Overview

Prympt helps to:
- **Compose dynamic prompts:** Use [Jinja2](https://jinja.palletsprojects.com/) syntax to easily substitute variables and iterate over collections.
- **Combine prompts:** Seamlessly merge multiple prompt templates using the `+` operator.
- **Define structured outputs:** Specify expected output formats (e.g., type) so that the responses from LLMs can be automatically verified and parsed.
- **Robust error handling:** Automatically retry and recover from common LLM response errors or malformed outputs.
- **Interface with multiple LLMs:** By default, Prympt integrates with [LiteLLM](https://github.com/BerriAI/litellm), but you can easily switch to other LLM providers.

---

## Features

- **Enhanced Jinja2 Templating:** Extends base Jinja2 capabilities with custom functionality to combine multiple templates, ensuring prompts are modular and reusable.
- **Structured Output Definitions:** Annotate prompts with expected outputs. Prympt can automatically verify that the LLM responses match annotated outputs.
- **Type Enforcement:** Define expected types for outputs (e.g., `int`, `float`). Prympt will validate the responses, raise exceptions, and retry queries if the output does not conform.
- **Error Recovery:** Built-in mechanisms to retry LLM queries when provided outputs are not as expected. This makes the tool particularly robust for working with LLMs that might occasionally return malformed data.
- **Flexible LLM Integration:** Whether you use OpenAI, DeepSeek, or another provider, Prympt offers a default interface and the option to supply your own LLM completion function.

---

## Installation

Install Prympt from PyPI using pip:

    pip install prympt

### Environment Configuration

Set up your environment by defining the necessary API keys. You can add these to a `.env` file or set them directly in your environment.

- **For OpenAI:**

      OPENAI_API_KEY=your_openai_api_key_here

- **For DeepSeek:**

      DEEPSEEK_API_KEY=your_deepseek_api_key_here
      LLM_MODEL=deepseek/deepseek-chat

See [LiteLLM providers](https://docs.litellm.ai/docs/providers/) for further info on configuring Prympt with other LLM service providers.
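
If your setup does not load `.env` files automatically, the third-party `python-dotenv` package (an assumption here, not a prympt dependency) can populate the environment before you query:

    import os

    from dotenv import load_dotenv  # pip install python-dotenv

    load_dotenv()  # reads key=value pairs from a local .env file into os.environ
    assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"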

---

## Basic Usage

### Importing and Using the Prompt Class

Prympt’s main entry point is the `Prompt` class. Here’s a simple example that uses it to generate a poem:

    from prympt import Prompt

    model_params = {
        "model": "gpt-4o",
        "temperature": 1.0,
        "max_tokens": 5000,
    }

    response = Prompt("Can you produce a short poem?").query(**model_params)

The response can be printed as a regular string, although it is a Python object of type `Response`:

    print(response)
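
Since `print()` implicitly calls `str()` on its argument, you can also capture the reply as plain text:

    poem = str(response)  # plain-string copy of the LLM reply
    print(f"The model wrote:\n{poem}")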

By default, the `query()` function uses LiteLLM to interact with the chosen LLM. It also does several other things, such as parsing the LLM's response for return values (see below).

If you prefer to use your own way to interact with the LLM, you can supply a custom completion function to `query()`:

    def custom_llm_completion(prompt: str, *args, **kwargs) -> str:
        # Replace with your own LLM API call
        message = llm(prompt)
        return message

    response = Prompt("Can you produce a short poem?").query(llm_completion=custom_llm_completion, **model_params)
    print(response)
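
As a more concrete sketch, a completion function wrapping the official `openai` v1 SDK might look like this (the parameter forwarding is illustrative; prympt's exact calling contract for custom completion functions, beyond receiving the prompt string, is an assumption here):

    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    def openai_completion(prompt: str, *args, **kwargs) -> str:
        # Forward the most common model parameters if the caller passed them
        completion = client.chat.completions.create(
            model=kwargs.get("model", "gpt-4o"),
            temperature=kwargs.get("temperature", 1.0),
            messages=[{"role": "user", "content": prompt}],
        )
        return completion.choices[0].message.content

    response = Prompt("Can you produce a short poem?").query(llm_completion=openai_completion, **model_params)
    print(response)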

---

## Jinja2 Substitutions

Prympt supports full Jinja2 templating for dynamic prompt generation:

    sms_prompt = Prompt("Hi {{ name }}, your appointment is at {{ time }}.")
    print(sms_prompt(name="Alice", time="2 PM"))

More advanced substitutions are also possible, such as Jinja2 iteration:

    order_prompt = Prompt("""
    Your order includes:
    {% for item in items %}
    - {{ item }}
    {% endfor %}
    """)
    print(order_prompt(items=["Laptop", "Mouse", "Keyboard"]))
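
Because rendering is standard Jinja2, conditionals work the same way:

    status_prompt = Prompt("""
    {% if urgent %}
    Mark this message as high priority.
    {% endif %}
    Draft a reply to: {{ message }}
    """)
    print(status_prompt(urgent=True, message="Where is my order?"))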

---

## Combining Prompts

Prompts can be concatenated using the `+` operator to build more complex interactions.

    greeting = Prompt("Dear {{ customer_name }},\n")
    body = Prompt("We are pleased to inform you that your order (Order #{{ order_number }}) has been shipped and is expected to arrive by {{ delivery_date }}.\n")
    closing = Prompt("Thank you for choosing {{ company_name }}.\nBest regards,\n{{ company_name }} Support Team")

    combined_email_prompt = greeting + body + closing

    print(combined_email_prompt(
        customer_name="Alice Johnson",
        order_number="987654",
        delivery_date="2025-03-25",
        company_name="TechStore"
    ))

---

## Return Values

Prompts can be annotated with expected return values:

    prompt = Prompt("What is the meaning of life, the universe, and everything?")
    response = prompt.returns(name="meaning", type="int").query(**model_params)
    print(response.meaning) # Expected output: 42

Returned values are automatically parsed and attached as member variables to the response. This approach makes it simple to extract and use them.

The call to `query()` automatically raises an error (or retries, if the `retries` parameter is set to 1 or greater; see below) when the values provided by the LLM do not match the expected number, names, or types.

---

### Multiple Return Values

Prympt supports prompts with multiple expected return values:

    prompt = Prompt("""
    Summarize the following news article:  {{news_body}} 
    Also, provide a sentiment score (scale from -1 to 1) for the news article.
    """).returns("summary", "A concise summary of the news article").returns(name="sentiment", type="float")

    news_body = "Aliens attack Earth right after world peace achieved"
    combined_response = prompt(news_body=news_body).query(**model_params)
    print(combined_response.summary)    # Expected output: A brief summary of the news article
    print(combined_response.sentiment)  # Expected output: A sentiment score between -1 and 1

You can also specify the expected outputs as a list of `Output` objects in the Prompt constructor:

    from prympt import Output

    prompt = Prompt("""
    Summarize the following news article:  {{news_body}} 
    Also, provide a sentiment score (scale from -1 to 1) for the news article.
    """, returns=[
        Output("summary", "A concise summary of the news article"),
        Output(name="sentiment", type="float")
    ])

    news_body = "Aliens attack Earth right after world peace achieved"
    response = prompt(news_body=news_body).query(**model_params)
    print(response.summary)    # Expected output: A brief summary of the news article
    print(response.sentiment)  # Expected output: A sentiment score between -1 and 1

---

## Error Control

### Automatic LLM Query Error Recovery

Prympt includes an automatic retry mechanism for queries. You can specify how many times to retry when the LLM response does not match the expected output structure:

    prompt = Prompt("Generate a Python function that prints the weekday for any given date").returns("python", "python code goes here")
    response = prompt.query(retries=5, **model_params)  # Default number of retries is 3
    print(response)
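
Following the attribute convention from the Return Values section, the generated snippet should also be accessible by name:

    print(response.python)  # just the returned code, parsed out of the reply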

### Warnings

Prympt will issue warnings in cases such as:
- Errors during Jinja2 template rendering (e.g., undefined variables or incorrect syntax).
- Transient errors during `Prompt.query()` when retries are in progress.
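
If these surface through Python's standard warnings machinery (an assumption; the README does not specify the mechanism), the stdlib `warnings` module can filter them:

    import warnings

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")  # let retries proceed quietly
        response = prompt.query(**model_params)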

### Exceptions

Prympt defines a hierarchy of exceptions for granular error handling when retries fail:

- **MalformedOutput:** Raised by `Prompt.returns()` and the `Output` constructor when:
  - The output name is invalid (it must be a valid Python identifier, i.e., match `[A-Za-z_][A-Za-z0-9_]*`).
  - The specified type cannot be parsed (must be a valid Python type, e.g., `int`, `float`).
  - The LLM provides a value that cannot be converted to the expected type.
- **ConcatenationError:** Raised when attempting to add a prompt to an unsupported type.
- **ResponseError:** Raised by `Prompt.query()` when the LLM response does not match the expected output structure (e.g., incorrect number, name, or type of outputs).

All of these custom exceptions inherit from a common base class, `PromptError`.
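
A sketch of granular handling built on this hierarchy (assuming the exception classes are importable from the package root, like `Prompt` and `Output` above):

    from prympt import Prompt, PromptError, ResponseError

    prompt = Prompt("Rate this review: {{ text }}").returns(name="score", type="float")

    try:
        response = prompt(text="Great product, fast shipping!").query(**model_params)
        print(response.score)
    except ResponseError as e:
        print(f"LLM output did not match the expected structure: {e}")
    except PromptError as e:
        print(f"Other prympt error: {e}")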

---

## Development

### Setting Up the Development Environment

Install Prympt along with its development dependencies:

    pip install "prympt[dev]"

### Code Formatting and Linting

Use the following commands to ensure your code adheres to project standards:

    black .
    isort .
    ruff check . --fix
    mypy .

### Running Tests

Execute the test suite with:

    pytest


            
