nimble-llm-caller

Name: nimble-llm-caller
Version: 0.2.0
Summary: A robust, multi-model LLM calling package with intelligent context management, file processing, and advanced prompt handling
Author email: Nimble Books LLC <info@nimblebooks.com>
Upload time: 2025-08-16 08:06:26
Requires Python: >=3.9
Keywords: llm, ai, openai, anthropic, google, prompt, batch, document, context-management, file-processing, token-estimation, model-upshifting
Homepage: https://github.com/nimblebooks/nimble-llm-caller
Documentation: https://nimble-llm-caller.readthedocs.io
Issues: https://github.com/nimblebooks/nimble-llm-caller/issues
# Nimble LLM Caller

A robust, multi-model LLM calling package with advanced prompt management, retry logic, and document assembly capabilities.

## Features

- **Multi-Model Support**: Call multiple LLM providers (OpenAI, Anthropic, Google, etc.) through LiteLLM
- **Batch Processing**: Submit multiple prompts to multiple models efficiently
- **Robust JSON Parsing**: Multiple fallback strategies for parsing LLM responses
- **Retry Logic**: Exponential backoff with jitter for handling rate limits and transient errors
- **Prompt Management**: JSON-based prompt templates with variable substitution
- **Document Assembly**: Built-in formatters for text, markdown, and LaTeX output
- **Reprompting Support**: Use results from previous calls as context for new prompts
- **Secret Management**: Secure handling of API keys via environment variables
- **Comprehensive Logging**: Detailed logging for debugging and monitoring
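
The retry strategy named above, exponential backoff with jitter, can be sketched in a few lines of plain Python. This is an illustration of the general technique, not the package's actual implementation:

```python
import random
import time

def retry_with_backoff(call, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Retry `call` on failure with exponential backoff plus jitter.

    Illustrative sketch of the strategy described above; the package's
    internal retry logic may differ in its details.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            # Exponential backoff (1s, 2s, 4s, ...) capped at max_delay,
            # with uniform jitter so concurrent clients don't retry in sync.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay * random.uniform(0.5, 1.5))
```

Jitter matters under rate limits: without it, clients that fail together retry together and hit the limit again.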

## Installation

### From PyPI
```bash
pip install nimble-llm-caller
```

### Development Installation
```bash
# Clone the repository
git clone https://github.com/fredzannarbor/nimble-llm-caller.git
cd nimble-llm-caller

# Install in development mode
pip install -e .

# Install with development dependencies
pip install -e .[dev]

# Run setup script
python setup_dev.py setup
```

### Verify Installation
```bash
# Run the test CLI
python examples/cli_test.py

# Run specific tests
python examples/cli_test.py --test install
```

## Quick Start

### 1. Set up API Keys
```bash
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
```
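
Because the package reads keys from the environment, a quick pre-flight check can save a confusing failure mid-run. A small helper for that (hypothetical convenience code, not part of the package):

```python
import os

def missing_api_keys(required=("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY")):
    """Return the names of required API key variables not set in the environment.

    Hypothetical helper, not part of nimble-llm-caller; only keys for the
    providers you actually call need to be set.
    """
    return [name for name in required if not os.environ.get(name)]
```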

### 2. Basic Usage
```python
from nimble_llm_caller import LLMContentGenerator

# Initialize with your prompts file
generator = LLMContentGenerator("examples/sample_prompts.json")

# Simple single prompt call
result = generator.call_single(
    prompt_key="summarize_text",
    model="gpt-4o",
    substitutions={"text": "Your text here"}
)

print(f"Result: {result.content}")
```

### 3. Batch Processing
```python
# Batch processing multiple prompts
results = generator.call_batch(
    prompt_keys=["summarize_text", "extract_keywords", "generate_title"],
    models=["gpt-4o", "claude-3-sonnet"],
    shared_substitutions={"content": "Your content here"}
)

print(f"Success rate: {results.success_rate:.1f}%")
```
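
The `success_rate` printed above is simply successful calls as a percentage of total calls. The batch result object computes this for you; as an illustration only:

```python
def success_rate(outcomes):
    """Percentage of truthy values in an iterable of per-call success flags.

    Illustrative helper; the batch result object exposes this directly.
    """
    outcomes = list(outcomes)
    if not outcomes:
        return 0.0
    return 100.0 * sum(bool(o) for o in outcomes) / len(outcomes)
```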

### 4. Document Assembly
```python
# Assemble results into a document
document = generator.assemble_document(
    results, 
    format="markdown",
    output_filename="report.md"
)

print(f"Document created: {document.word_count} words")
```

## Configuration

Set your API keys in environment variables:

```bash
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
```

## Prompt Format

Prompts are stored in JSON files with this structure:

```json
{
  "prompt_keys": ["summarize_text", "extract_keywords"],
  "summarize_text": {
    "messages": [
      {
        "role": "system",
        "content": "You are a professional summarizer."
      },
      {
        "role": "user", 
        "content": "Summarize this text: {text}"
      }
    ],
    "params": {
      "temperature": 0.3,
      "max_tokens": 1000
    }
  }
}
```
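
Placeholders such as `{text}` in message content are filled from the substitutions you pass in. A minimal sketch of that rendering step, using the template above (hypothetical helper; the package performs this internally):

```python
import json

def render_messages(prompts, key, substitutions):
    """Fill {placeholder} fields in a prompt template's messages.

    Hypothetical helper illustrating variable substitution; not the
    package's API.
    """
    return [
        {"role": m["role"], "content": m["content"].format(**substitutions)}
        for m in prompts[key]["messages"]
    ]

prompts = json.loads("""
{
  "summarize_text": {
    "messages": [
      {"role": "system", "content": "You are a professional summarizer."},
      {"role": "user", "content": "Summarize this text: {text}"}
    ],
    "params": {"temperature": 0.3, "max_tokens": 1000}
  }
}
""")

messages = render_messages(prompts, "summarize_text", {"text": "LLMs are neat."})
```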

## Advanced Usage

See the [documentation](docs/) for advanced features including:
- Custom retry strategies
- Document templates
- Reprompting workflows
- Error handling
- Performance optimization

            
