llm_batch_helper

- Name: llm_batch_helper
- Version: 0.2.0
- Home page: https://github.com/TianyiPeng/LLM_batch_helper
- Summary: A Python package that enables batch submission of prompts to LLM APIs, with built-in async capabilities and response caching.
- Upload time: 2025-08-23 15:33:46
- Author: Tianyi Peng
- Requires Python: <4.0,>=3.11
- License: MIT
- Keywords: llm, openai, together, openrouter, batch, async, ai, nlp, api
# LLM Batch Helper

[![PyPI version](https://badge.fury.io/py/llm_batch_helper.svg)](https://badge.fury.io/py/llm_batch_helper)
[![Downloads](https://pepy.tech/badge/llm_batch_helper)](https://pepy.tech/project/llm_batch_helper)
[![Downloads/Month](https://pepy.tech/badge/llm_batch_helper/month)](https://pepy.tech/project/llm_batch_helper)
[![Documentation Status](https://readthedocs.org/projects/llm-batch-helper/badge/?version=latest)](https://llm-batch-helper.readthedocs.io/en/latest/?badge=latest)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

A Python package that enables batch submission of prompts to LLM APIs, with built-in async capabilities, response caching, prompt verification, and more. This package is designed to streamline applications like LLM simulation, LLM-as-a-judge, and other batch processing scenarios.

📖 **[Complete Documentation](https://llm-batch-helper.readthedocs.io/)** | 🚀 **[Quick Start Guide](https://llm-batch-helper.readthedocs.io/en/latest/quickstart.html)**

## Why we designed this package

Calling LLM APIs has become increasingly common, but several pain points exist in practice:

1. **Efficient Batch Processing**: How do you run LLM calls in batches efficiently? Our async implementation is 3X-100X faster than multi-thread/multi-process approaches.

2. **API Reliability**: LLM APIs can be unstable, so we need robust retry mechanisms when calls get interrupted.

3. **Long-Running Simulations**: During long-running LLM simulations, computers can crash and APIs can fail. Can we cache LLM API calls to avoid repeating completed work?

4. **Output Validation**: LLM outputs often must follow a specific format. If a response doesn't meet the requirements, we need to validate it and retry.

This package is designed to solve exactly these pain points with async processing, intelligent caching, and comprehensive error handling. If you need additional features, please open an issue.

## Features

- **Async Processing**: Submit multiple prompts concurrently for faster processing
- **Response Caching**: Automatically cache responses to avoid redundant API calls
- **Multiple Input Formats**: Support for both file-based and list-based prompts
- **Provider Support**: Works with OpenAI, Together.ai, and OpenRouter APIs
- **Retry Logic**: Built-in retry mechanism with exponential backoff
- **Verification Callbacks**: Custom verification for response quality
- **Progress Tracking**: Real-time progress bars for batch operations

## Installation

### For Users (Recommended)

```bash
# Install from PyPI
pip install llm_batch_helper
```

### For Development

```bash
# Clone the repository
git clone https://github.com/TianyiPeng/LLM_batch_helper.git
cd llm_batch_helper

# Install with Poetry
poetry install

# Activate the virtual environment
poetry shell
```

## Quick Start

### 1. Set up environment variables

**Option A: Environment Variables**
```bash
# For OpenAI
export OPENAI_API_KEY="your-openai-api-key"

# For Together.ai
export TOGETHER_API_KEY="your-together-api-key"
```

**Option B: .env File (Recommended for Development)**
```python
# In your script, before importing llm_batch_helper
from dotenv import load_dotenv
load_dotenv()  # Load from .env file

# Then use the package normally
from llm_batch_helper import LLMConfig, process_prompts_batch
```

Create a `.env` file in your project:
```
OPENAI_API_KEY=your-openai-api-key
TOGETHER_API_KEY=your-together-api-key
```

### 2. Interactive Tutorial (Recommended)

Check out the comprehensive Jupyter notebook [tutorial](https://github.com/TianyiPeng/LLM_batch_helper/blob/main/tutorials/llm_batch_helper_tutorial.ipynb).

The tutorial covers all features with interactive examples!

### 3. Basic usage

```python
import asyncio
from dotenv import load_dotenv  # Optional: for .env file support
from llm_batch_helper import LLMConfig, process_prompts_batch

# Optional: Load environment variables from .env file
load_dotenv()

async def main():
    # Create configuration
    config = LLMConfig(
        model_name="gpt-4o-mini",
        temperature=0.7,
        max_completion_tokens=100,  # or use max_tokens for backward compatibility
        max_concurrent_requests=30  # number of concurrent requests via asyncio
    )
    
    # Process prompts
    prompts = [
        "What is the capital of France?",
        "What is 2+2?",
        "Who wrote 'Hamlet'?"
    ]
    
    results = await process_prompts_batch(
        config=config,
        provider="openai",
        prompts=prompts,
        cache_dir="cache"
    )
    
    # Print results
    for prompt_id, response in results.items():
        print(f"{prompt_id}: {response['response_text']}")

if __name__ == "__main__":
    asyncio.run(main())
```

## Usage Examples

### File-based Prompts

```python
import asyncio
from llm_batch_helper import LLMConfig, process_prompts_batch

async def process_files():
    config = LLMConfig(
        model_name="gpt-4o-mini",
        temperature=0.7,
        max_completion_tokens=200
    )
    
    # Process all .txt files in a directory
    results = await process_prompts_batch(
        config=config,
        provider="openai",
        input_dir="prompts",  # Directory containing .txt files
        cache_dir="cache",
        force=False  # Use cached responses if available
    )
    
    return results

asyncio.run(process_files())
```
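
If you don't yet have a prompts directory, the following sketch creates one. The file names are arbitrary examples, and each `.txt` file is assumed to hold a single prompt, matching the `input_dir` usage above:

```python
from pathlib import Path

# Sketch: create a "prompts" directory with a few sample .txt files.
# File names are arbitrary; each file is assumed to contain one prompt.
prompts_dir = Path("prompts")
prompts_dir.mkdir(exist_ok=True)

samples = {
    "capital_france.txt": "What is the capital of France?",
    "simple_math.txt": "What is 2+2?",
    "hamlet_author.txt": "Who wrote 'Hamlet'?",
}
for filename, prompt in samples.items():
    (prompts_dir / filename).write_text(prompt, encoding="utf-8")
```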

### Custom Verification

```python
from llm_batch_helper import LLMConfig

def verify_response(prompt_id, llm_response_data, original_prompt_text, **kwargs):
    """Custom verification callback"""
    response_text = llm_response_data.get("response_text", "")
    
    # Check minimum length
    if len(response_text) < kwargs.get("min_length", 10):
        return False
    
    # Check for specific keywords
    if "error" in response_text.lower():
        return False
    
    return True

config = LLMConfig(
    model_name="gpt-4o-mini",
    temperature=0.7,
    verification_callback=verify_response,
    verification_callback_args={"min_length": 20}
)
```
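
A minimal sketch of wiring the verification callback into a batch run, reusing the `verify_response` function defined above; a response that fails verification is assumed to be retried by the built-in retry logic, up to `max_retries`:

```python
import asyncio

from llm_batch_helper import LLMConfig, process_prompts_batch

# Sketch: combine the verify_response callback above with a batch run.
# A failing verification is assumed to trigger a retry, up to max_retries.
config = LLMConfig(
    model_name="gpt-4o-mini",
    temperature=0.7,
    max_retries=5,
    verification_callback=verify_response,          # defined above
    verification_callback_args={"min_length": 20},
)

async def run_verified_batch():
    return await process_prompts_batch(
        config=config,
        provider="openai",
        prompts=["Summarize the plot of 'Hamlet' in two sentences."],
        cache_dir="cache",
    )

results = asyncio.run(run_verified_batch())
```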



## API Reference

### LLMConfig

Configuration class for LLM requests.

```python
LLMConfig(
    model_name: str,
    temperature: float = 0.7,
    max_completion_tokens: Optional[int] = None,  # Preferred parameter
    max_tokens: Optional[int] = None,  # Deprecated, kept for backward compatibility
    system_instruction: Optional[str] = None,
    max_retries: int = 10,
    max_concurrent_requests: int = 5,
    verification_callback: Optional[Callable] = None,
    verification_callback_args: Optional[Dict] = None
)
```
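
For example, a config with a system instruction and a tighter retry budget might look like this (a sketch; the values are arbitrary):

```python
from llm_batch_helper import LLMConfig

# Sketch: example values only; adjust to your workload.
config = LLMConfig(
    model_name="gpt-4o-mini",
    temperature=0.2,
    max_completion_tokens=256,
    system_instruction="You are a concise assistant. Answer in one sentence.",
    max_retries=5,
    max_concurrent_requests=10,
)
```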

### process_prompts_batch

Main function for batch processing of prompts.

```python
async def process_prompts_batch(
    config: LLMConfig,
    provider: str,  # "openai", "together", or "openrouter"
    prompts: Optional[List[str]] = None,
    input_dir: Optional[str] = None,
    cache_dir: str = "llm_cache",
    force: bool = False,
    desc: str = "Processing prompts"
) -> Dict[str, Dict[str, Any]]
```
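
For instance, setting `force=True` bypasses any cached responses and regenerates everything (a sketch using the signature above):

```python
import asyncio

from llm_batch_helper import LLMConfig, process_prompts_batch

# Sketch: force=True ignores existing cache entries and re-runs every prompt.
async def rerun_all():
    config = LLMConfig(model_name="gpt-4o-mini")
    return await process_prompts_batch(
        config=config,
        provider="openai",
        prompts=["What is the capital of France?"],
        cache_dir="llm_cache",
        force=True,
        desc="Re-running prompts",
    )

results = asyncio.run(rerun_all())
```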

### LLMCache

Caching functionality for responses.

```python
from llm_batch_helper import LLMCache

cache = LLMCache(cache_dir="my_cache")

# Check for cached response
cached = cache.get_cached_response(prompt_id)

# Save response to cache
cache.save_response(prompt_id, prompt_text, response_data)

# Clear all cached responses
cache.clear_cache()
```

## Project Structure

```
llm_batch_helper/
├── pyproject.toml              # Poetry configuration
├── poetry.lock                 # Locked dependencies
├── README.md                   # This file
├── LICENSE                     # License file
├── llm_batch_helper/          # Main package
│   ├── __init__.py            # Package exports
│   ├── cache.py               # Response caching
│   ├── config.py              # Configuration classes
│   ├── providers.py           # LLM provider implementations
│   ├── input_handlers.py      # Input processing utilities
│   └── exceptions.py          # Custom exceptions
├── examples/                   # Usage examples
│   ├── example.py             # Basic usage example
│   ├── prompts/               # Sample prompt files
│   └── llm_cache/             # Example cache directory
└── tutorials/                 # Interactive tutorials
    └── llm_batch_helper_tutorial.ipynb  # Comprehensive Jupyter notebook tutorial
```

## Supported Models

### OpenAI
- gpt-4o-mini
- gpt-4o
- gpt-4
- gpt-3.5-turbo

### Together.ai
- meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo
- meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
- mistralai/Mixtral-8x7B-Instruct-v0.1
- And many other open-source models
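
For example, a Together.ai model from the list above can be used by swapping the model name and passing `provider="together"` (a sketch; assumes `TOGETHER_API_KEY` is set in your environment):

```python
import asyncio

from llm_batch_helper import LLMConfig, process_prompts_batch

# Sketch: a Together.ai model; requires TOGETHER_API_KEY in the environment.
async def together_example():
    config = LLMConfig(
        model_name="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
        temperature=0.7,
        max_completion_tokens=100,
    )
    return await process_prompts_batch(
        config=config,
        provider="together",
        prompts=["What is 2+2?"],
        cache_dir="cache",
    )

results = asyncio.run(together_example())
```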

## Documentation

📖 **[Complete Documentation](https://llm-batch-helper.readthedocs.io/)** - Comprehensive docs on Read the Docs

### Quick Links:
- [Quick Start Guide](https://llm-batch-helper.readthedocs.io/en/latest/quickstart.html) - Get started quickly
- [API Reference](https://llm-batch-helper.readthedocs.io/en/latest/api.html) - Complete API documentation  
- [Examples](https://llm-batch-helper.readthedocs.io/en/latest/examples.html) - Practical usage examples
- [Tutorials](https://llm-batch-helper.readthedocs.io/en/latest/tutorials.html) - Step-by-step tutorials
- [Provider Guide](https://llm-batch-helper.readthedocs.io/en/latest/providers.html) - OpenAI & Together.ai setup

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Run the test suite
6. Submit a pull request

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Changelog

### v0.1.5
- Added Together.ai provider support
- Support for open-source models (Llama, Mixtral, etc.)
- Enhanced documentation with Read the Docs
- Updated examples and tutorials

### v0.1.0
- Initial release
- Support for OpenAI API
- Async batch processing
- Response caching
- File and list-based input support
- Custom verification callbacks
- Poetry package management

            
