proctor-ai

Name: proctor-ai
Version: 0.1.4
Home page: https://github.com/svngoku/proctor
Summary: A comprehensive Python package for structured prompt engineering techniques
Upload time: 2025-07-19 18:33:04
Requires Python: >=3.8
License: MIT
Keywords: prompt-engineering, llm, ai, nlp, gpt, openai, prompt-templates
# Proctor AI: A Python Library for Prompt Engineering Techniques

<p align="center">
  <img src="assets/proctor.png" alt="Proctor Logo" width="200"/>
</p>

[![PyPI version](https://badge.fury.io/py/proctor-ai.svg)](https://badge.fury.io/py/proctor-ai)
[![CI](https://github.com/svngoku/proctor/actions/workflows/python-package.yml/badge.svg)](https://github.com/svngoku/proctor/actions/workflows/python-package.yml)

`proctor-ai` is a comprehensive Python package designed to implement and explore a variety of text-based prompt engineering techniques. It provides a structured way to apply different prompting strategies to interact with Large Language Models (LLMs), using [LiteLLM](https://github.com/BerriAI/litellm) and [OpenRouter](https://openrouter.ai/) as the default backend.

The library is based on the hierarchical structure of prompting techniques outlined in the initial project documentation (`docs/protoc.md`).

## Features

*   **Hierarchical Technique Implementation:** Organizes prompting techniques into categories:
    *   Zero-Shot (e.g., `EmotionPrompting`, `RolePrompting`, `SelfAsk`)
    *   Few-Shot (e.g., `ExampleGeneration`, `KNN`)
    *   Thought Generation (e.g., `ChainOfThought`, `ZeroShotCoT`, `FewShotCoT`)
    *   Decomposition (e.g., `DECOMP`)
    *   Self-Criticism (e.g., `ChainOfVerification`)
    *   Ensembling (e.g., `SelfConsistency`)
*   **Base Classes:** Provides `PromptTechnique` as an extensible base class for creating custom techniques.
*   **Composability:** Allows combining multiple techniques sequentially using `CompositeTechnique`.
*   **LLM Backend:** Uses [LiteLLM](https://github.com/BerriAI/litellm) to interact with various LLM APIs, configured for [OpenRouter](https://openrouter.ai/) by default.
*   **Configuration:** Easily configure API keys and models via environment variables or a `.env` file.
*   **Utilities:** Includes helper functions like `dedent_prompt`.
*   **Logging:** Integrated logging using `rich` for clear, colorized console output showing inputs, prompts, and responses.
*   **Advanced KNN:** Optional implementation of KNN technique with proper text embeddings and semantic similarity (requires additional dependencies).
*   **Error Handling:** Robust error handling with automatic retries for transient API errors.
*   **Performance Optimization:** Caching mechanisms for technique instances and embeddings to improve performance.
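The `PromptTechnique` base class and its `generate_prompt`/`execute` methods appear throughout the examples below. As a rough, self-contained sketch of how such an extensible base class can be subclassed (the class names and method bodies here are illustrative stand-ins, not the library's actual implementation):

```python
class SketchTechnique:
    """Illustrative stand-in for an extensible technique base class."""

    def generate_prompt(self, input_text: str, **kwargs) -> str:
        raise NotImplementedError

    def execute(self, input_text: str, **kwargs) -> str:
        # A real implementation would send the generated prompt to an LLM
        # backend here; this sketch just returns the prompt itself.
        return self.generate_prompt(input_text, **kwargs)


class SketchRolePrompting(SketchTechnique):
    """Prepends a role instruction, in the spirit of RolePrompting."""

    def generate_prompt(self, input_text: str, role: str = "helpful assistant", **kwargs) -> str:
        return f"You are a {role}.\n{input_text}"


prompt = SketchRolePrompting().generate_prompt(
    "Summarize the French Revolution.", role="historian"
)
print(prompt)
```

A real subclass would route `execute` through the LiteLLM backend instead of echoing the prompt, but the subclassing pattern is the same.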

## Installation

1.  **Clone the repository (if developing):**
    ```bash
    git clone https://github.com/svngoku/proctor.git
    cd proctor
    ```

2.  **Create and activate a virtual environment:**
    ```bash
    # Using venv (requires Python 3)
    python3 -m venv .venv
    source .venv/bin/activate # Use activate.fish for fish shell
    
    # Or using uv
    # uv venv
    # source .venv/bin/activate
    ```

3.  **Install the package:**
    *   **For usage:**
        ```bash
        # Install from PyPI
        pip install proctor-ai
        
        # Or using uv
        uv pip install proctor-ai
        
        # Or install directly from GitHub
        pip install git+https://github.com/svngoku/proctor.git
        ```
    *   **For development (from the cloned repo root):**
        ```bash
        # Use uv for basic installation
        uv pip install -e .
        
        # With advanced KNN features
        uv pip install -e ".[knn]"
        
        # With development tools (pytest, ruff, etc.)
        uv pip install -e ".[dev]"
        
        # With all features and tools
        uv pip install -e ".[all]"
        
        # Or use pip
        pip install -e ".[all]"
        ```
        This installs the package in editable mode with your chosen optional dependencies.

## Configuration

The library requires an OpenRouter API key to function.

1.  **Create a `.env` file** in the project root (i.e., `proctor/.env`).
2.  **Add your API key:**
    ```dotenv
    # proctor/.env
    OPENROUTER_API_KEY="YOUR_OPENROUTER_API_KEY_HERE"
    
    # Optional: Specify a default model to override the default in config.py
    # OPENROUTER_MODEL="mistralai/mistral-7b-instruct"
    ```
    The library uses `python-dotenv` to automatically load these variables when the example scripts are run or when the library is imported.
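As a sketch of how these variables are typically resolved at import time (the variable names follow this README; the library's actual `config.py` may resolve defaults differently, and the fallback model shown is an assumption):

```python
import os

# Required for LLM calls; None if neither the environment nor .env provides it.
OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")

# Optional override, with an assumed fallback model for illustration.
OPENROUTER_MODEL = os.getenv("OPENROUTER_MODEL", "mistralai/mistral-7b-instruct")

if not OPENROUTER_API_KEY:
    print("OPENROUTER_API_KEY is not set; LLM calls will fail.")
print(f"Model: {OPENROUTER_MODEL}")
```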

## Usage

See the `examples/` directory (`proctor/examples/`) for detailed usage patterns.

**Basic Example:**

```python
import os
from dotenv import load_dotenv
from proctor import get_technique, list_techniques

# Load API key from .env file in the current working directory
# Ensure your .env file is in the directory from where you run the script
load_dotenv()

# List available techniques
print("Available techniques:", list_techniques())

# Get a specific technique instance
technique_name = "zero_shot_cot"
cot_technique = get_technique(technique_name)

if cot_technique:
    problem = "Explain the theory of relativity in simple terms."
    
    # Generate the prompt (useful for inspection)
    prompt = cot_technique.generate_prompt(problem)
    print(f"\n--- Generated {technique_name} Prompt ---")
    print(prompt)
    print("--- End Prompt ---")

    # Execute the technique (calls the LLM via LiteLLM/OpenRouter)
    # Check that a real API key is set before executing
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if api_key and api_key != "YOUR_OPENROUTER_API_KEY_HERE":
        print(f"\n--- Executing {technique_name} ---")
        response = cot_technique.execute(problem)
        print("\n--- LLM Response ---")
        print(response)
        print("--- End Response ---")
    else:
        print("\nSkipping LLM execution: OPENROUTER_API_KEY is not set or is still the placeholder from the .env file.")
else:
    print(f"Technique '{technique_name}' not found.")

```

**Using Composite Techniques:**

```python
from proctor import CompositeTechnique, RolePrompting, ChainOfThought

# Define a composite technique
expert_cot = CompositeTechnique(
    name="Expert Chain-of-Thought",
    identifier="custom-expert-cot",
    techniques=[
        RolePrompting(),   # First, set the role
        ChainOfThought(),  # Then, apply structured CoT
    ]
)

problem = "Plan a three-day trip to Kyoto, Japan, focusing on historical sites."

# Generate the combined prompt
prompt = expert_cot.generate_prompt(problem, role="experienced travel planner")
print(prompt)

# Execute (requires API key)
# response = expert_cot.execute(problem, role="experienced travel planner")
# print(response)
```

## Logging

The library uses Python's `logging` module configured with `rich` to provide colorized output in the console. When techniques are executed, you will see:

*   The technique being executed (Magenta)
*   Input text (Cyan)
*   System prompt, if used (Yellow)
*   Generated prompt (Blue)
*   LLM response (Green)

Set the logging level via environment variable if needed (e.g., `LOG_LEVEL=DEBUG`).
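A minimal stdlib sketch of honoring `LOG_LEVEL` (the library itself routes output through `rich`, which is not required for this illustration):

```python
import logging
import os

# Read the desired level from the environment, defaulting to INFO;
# unknown names also fall back to INFO.
level_name = os.getenv("LOG_LEVEL", "INFO").upper()
logging.basicConfig(level=getattr(logging, level_name, logging.INFO))

logger = logging.getLogger("proctor")
logger.info("Logging configured at %s", level_name)
```

Running with `LOG_LEVEL=DEBUG python script.py` would then surface debug-level messages as well.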

## Quick Start

After installation, here's a simple example to get started:

```python
from dotenv import load_dotenv
from proctor import ZeroShotCoT

# Load your API key
load_dotenv()

# Create a technique instance
cot = ZeroShotCoT()

# Use it with any problem
problem = "What are the benefits of renewable energy?"
response = cot.execute(problem)
print(response)
```

## Development

1.  Clone the repository.
2.  Set up a virtual environment (see Installation).
3.  Install in development mode: `make install-dev` or `uv pip install -e ".[dev,all]"`.

### Using the Makefile

The project includes a comprehensive Makefile for development tasks:

```bash
# Install dependencies
make install-dev

# Run linting and formatting
make lint

# Run tests
make test

# Run core tests (excluding optional features)
make test-core

# Run tests with coverage
make test-cov

# Check code style without fixing
make check

# Clean build artifacts
make clean

# Build package
make build

# Deploy to Test PyPI
make deploy-test-permissive

# Deploy to Production PyPI
make deploy-prod-permissive

# See all available commands
make help
```

### Manual Commands

If you prefer running commands directly:

```bash
# Running Tests
pytest tests/ -v

# Linting
ruff check proctor/ tests/ examples/
ruff format proctor/ tests/ examples/

# Building
python -m build
```

## Deployment

The project uses automated CI/CD via GitHub Actions:

- **Pull Requests**: Run tests and linting
- **Push to main/master**: Deploy to Test PyPI automatically
- **Tagged releases**: Deploy to Production PyPI automatically

### Creating a Release

1. Update the version in `pyproject.toml`:
   ```bash
   make version-bump-patch  # or version-bump-minor, version-bump-major
   ```

2. Commit and push the version change:
   ```bash
   git add pyproject.toml
   git commit -m "Bump version to X.Y.Z"
   git push
   ```

3. Create and push a tag:
   ```bash
   git tag vX.Y.Z  # Replace with your version
   git push origin vX.Y.Z
   ```

4. GitHub Actions will automatically:
   - Run tests
   - Build the package
   - Deploy to PyPI

### Manual Deployment

For manual deployment using the Makefile:

```bash
# Deploy to Test PyPI
make deploy-test-permissive

# Deploy to Production PyPI
make deploy-prod-permissive
```

## Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run the tests (`make test-core`)
5. Run the linter (`make lint`)
6. Commit your changes (`git commit -m 'Add amazing feature'`)
7. Push to the branch (`git push origin feature/amazing-feature`)
8. Open a Pull Request

### Development Setup

```bash
# Clone the repo
git clone https://github.com/svngoku/proctor.git
cd proctor

# Set up development environment
make install-dev

# Run tests to ensure everything works
make test-core

# See all available commands
make help
```

## License

This project is licensed under the MIT License - see the `LICENSE` file for details.

            
