[Tests](https://github.com/parasail-ai/openai-batch/actions/workflows/tests.yml)
[PyPI](https://pypi.org/project/openai-batch/)
# openai-batch
Batch inference is an easy and inexpensive way to process thousands or millions of LLM requests.
The process is:
1. Write inference requests to an input file
2. Start a batch job
3. Wait for it to finish
4. Download the output
This library makes these steps easier. The OpenAI batch protocol is straightforward but involves a lot of boilerplate, which this library automates.
#### Supported Providers
* [OpenAI](https://openai.com/) - ChatGPT, GPT-4o, etc.
* [Parasail](https://parasail.io/) - Most transformer models on HuggingFace, such as Llama, Qwen, LLaVA, etc.
## Direct Library Usage
You can use the library directly in your Python code for full control over the batch processing workflow.
### Basic Usage
```python
import random
from openai_batch import Batch

# Create a batch with random prompts
with Batch() as batch:
    objects = ["cat", "robot", "coffee mug", "spaceship", "banana"]
    for i in range(100):
        batch.add_to_batch(
            model="meta-llama/Meta-Llama-3-8B-Instruct",
            temperature=0.7,
            max_completion_tokens=1000,
            messages=[{"role": "user", "content": f"Tell me a joke about a {random.choice(objects)}"}]
        )

    # Submit, wait for completion, and download results
    result, output_path, error_path = batch.submit_wait_download()
    print(f"Batch completed with status {result.status} and stored in {output_path}")
```
`batch.add_to_batch` accepts the same arguments as `chat.completions.create` in OpenAI's Python library, so any chat completion parameter can be included. Parasail supports most transformer models on HuggingFace (TODO: link), while OpenAI supports all of their serverless models.
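Under the hood, each request becomes one line of the batch input file; the OpenAI Batch API expects JSONL of this shape (the values here are illustrative):
```json
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Tell me a joke about a cat"}], "max_completion_tokens": 1000}}
```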
You can also create embedding batches in a similar way:
```python
with Batch() as batch:
    documents = [
        "The quick brown fox jumps over the lazy dog",
        "Machine learning models can process natural language",
    ]

    for doc in documents:
        batch.add_to_batch(
            model="text-embedding-3-small",  # OpenAI embedding model
            input=doc
        )

    result, output_path, error_path = batch.submit_wait_download()
```
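Embedding results come back in the same JSONL output layout. A minimal sketch for reading the vectors, assuming OpenAI's standard batch output format (`output_path` comes from the block above):
```python
import json

# Each output line pairs a custom_id with an embeddings response body
with open(output_path) as f:
    for line in f:
        record = json.loads(line)
        vector = record["response"]["body"]["data"][0]["embedding"]
        print(record["custom_id"], len(vector), "dimensions")
```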
### Step-by-Step Workflow
For more control, you can break down the process into individual steps:
```python
from openai_batch import Batch
import time

# Create a batch object
batch = Batch(
    submission_input_file="batch_input.jsonl",
    output_file="batch_output.jsonl",
    error_file="batch_errors.jsonl"
)

# Add chat completion requests to the batch
objects = ["cat", "robot", "coffee mug", "spaceship", "banana"]
for i in range(5):
    batch.add_to_batch(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Tell me a joke about a {objects[i]}"}]
    )

# Submit the batch
batch_id = batch.submit()
print(f"Batch submitted with ID: {batch_id}")

# Check status periodically
while True:
    status = batch.status()
    print(f"Batch status: {status.status}")

    if status.status in ["completed", "failed", "expired", "cancelled"]:
        break

    time.sleep(60)  # Check every minute

# Download results once completed
output_path, error_path = batch.download()
print(f"Output saved to: {output_path}")
print(f"Errors saved to: {error_path}")
```
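The downloaded output is JSONL in OpenAI's batch output format: one object per line with a `custom_id` and the full response. A minimal sketch for extracting the completion text, assuming that format:
```python
import json

# Pull the completion text out of each result line
with open("batch_output.jsonl") as f:
    for line in f:
        record = json.loads(line)
        body = record["response"]["body"]
        text = body["choices"][0]["message"]["content"]
        print(f"{record['custom_id']}: {text[:60]}")
```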
### Working with Different Providers
The library automatically selects the appropriate provider based on the model:
```python
from openai_batch import Batch

# OpenAI models automatically use the OpenAI provider
openai_batch = Batch()
openai_batch.add_to_batch(
    model="gpt-4o-mini",  # OpenAI model
    messages=[{"role": "user", "content": "Hello, world!"}]
)

# Other models automatically use the Parasail provider
parasail_batch = Batch()
parasail_batch.add_to_batch(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # Non-OpenAI model
    messages=[{"role": "user", "content": "Hello, world!"}]
)
```
You can also explicitly specify a provider:
```python
from openai_batch import Batch
from openai_batch.providers import get_provider_by_name

# Get a specific provider
provider = get_provider_by_name("parasail")

# Create a batch with this provider
batch = Batch(provider=provider)
batch.add_to_batch(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
```
### Resuming an Existing Batch
```python
from openai_batch import Batch
import time

# Resume an existing batch
batch = Batch(batch_id="batch_abc123")

# Check status in a loop until completed
while True:
    status = batch.status()
    print(f"Batch status: {status.status}")

    if status.status == "completed":
        output_path, error_path = batch.download()
        print(f"Output saved to: {output_path}")
        break
    elif status.status in ["failed", "expired", "cancelled"]:
        print(f"Batch ended with status: {status.status}")
        break

    time.sleep(60)  # Check every minute
```
## Command-Line Utilities
Use `openai_batch.run` to run a batch from an input file on disk:
```bash
python -m openai_batch.run input.jsonl
```
This will start the batch, wait for it to complete, then download the results.
Useful switches:
* `-c` Only create the batch; do not wait for it (see the sketch after this list).
* `--resume` Attach to an existing batch job, wait for it to finish, then download results.
* `--dry-run` Confirm your configuration without making an actual request.
* Full list: `python -m openai_batch.run --help`
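Combining `-c` with `--resume` gives a fire-and-forget workflow. A sketch, assuming `--resume` accepts the batch ID printed at creation (as in the examples below):
```bash
# Create the batch and exit immediately, noting the printed batch ID
python -m openai_batch.run -c input.jsonl

# Later: attach to that batch, wait for it to finish, then download results
python -m openai_batch.run --resume <BATCH_ID>
```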
### OpenAI Example
```bash
export OPENAI_API_KEY="<Your OpenAI API Key>"
# Create an example batch input file
python -m openai_batch.example_prompts | \
python -m openai_batch.create_batch --model 'gpt-4o-mini' > input.jsonl
# Run this batch (resumable with `--resume <BATCH_ID>`)
python -m openai_batch.run input.jsonl
```
### Parasail Example
```bash
export PARASAIL_API_KEY="<Your Parasail API Key>"
# Create an example batch input file
python -m openai_batch.example_prompts | \
python -m openai_batch.create_batch --model 'meta-llama/Meta-Llama-3-8B-Instruct' > input.jsonl
# Run this batch (resumable with `--resume <BATCH_ID>`)
python -m openai_batch.run -p parasail input.jsonl
```
## Resources
* [OpenAI Batch Cookbook](https://cookbook.openai.com/examples/batch_processing)
* [OpenAI Batch API reference](https://platform.openai.com/docs/api-reference/batch)
* [OpenAI Files API reference](https://platform.openai.com/docs/api-reference/files)