openpo

Name: openpo
Version: 0.7.7
Home page: https://github.com/dannylee1020/openpo
Summary: Build high quality synthetic datasets with AI feedback from 200+ LLMs
Upload time: 2024-12-26 22:04:58
Author: Daniel Lee
Requires Python: <4.0,>=3.10.1
License: Apache-2.0
Keywords: llm, finetuning, ai, rlaif, preference tuning, synthetic data generation, synthetic data

# OpenPO 🐼
[![PyPI version](https://img.shields.io/pypi/v/openpo.svg)](https://pypi.org/project/openpo/)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Documentation](https://img.shields.io/badge/docs-docs.openpo.dev-blue)](https://docs.openpo.dev)

OpenPO simplifies building synthetic datasets with AI feedback and state-of-the-art evaluation methods.

| Resources | Notebooks |
|----------|----------|
| Building dataset with OpenPO and PairRM  |📔 [Notebook](https://colab.research.google.com/drive/1G1T-vOTXjIXuRX3h9OlqgnE04-6IpwIf?usp=sharing) |
| Using OpenPO with Prometheus 2 | 📔 [Notebook](https://colab.research.google.com/drive/1dro0jX1MOfSg0srfjA_DZyeWIWKOuJn2?usp=sharing) |
| Evaluating with LLM Judge| 📔 [Notebook](https://colab.research.google.com/drive/1_QrmejW2Ym8yzP5RLJbLpVNA_FsEt2ZG?usp=sharing) |


## Key Features

- 🤖 **Multiple LLM Support**: Collect a diverse set of outputs from 200+ LLMs

- ⚡ **High Performance Inference**: Native vLLM support for optimized inference

- 🚀 **Scalable Processing**: Built-in batch processing capabilities for efficient large-scale data generation

- 📊 **Research-Backed Evaluation Methods**: Support for state-of-the-art evaluation methods for data synthesis

- 💾 **Flexible Storage**: Out-of-the-box storage providers for HuggingFace and S3


## Installation
### Install from PyPI (recommended)
OpenPO uses pip for installation. Run the following commands in the terminal to install OpenPO:

```bash
pip install openpo

# to use vllm
pip install openpo[vllm]

# for running evaluation models
pip install openpo[eval]
```
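
On shells like zsh, square brackets are treated as glob patterns, so quote the extras when installing:

```bash
pip install "openpo[vllm]"
pip install "openpo[eval]"
```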



### Install from source
Clone the repository first, then run the following commands:
```bash
cd openpo
poetry install
```

## Getting Started
Set your environment variables first:

```bash
# for completions
export HF_API_KEY=<your-api-key>
export OPENROUTER_API_KEY=<your-api-key>

# for evaluations
export OPENAI_API_KEY=<your-openai-api-key>
export ANTHROPIC_API_KEY=<your-anthropic-api-key>
```
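
If you prefer not to export keys in your shell, they can also be set from Python before the client is created. This is a minimal sketch using only the standard library; the variable names match the ones above:

```python
import os

# Set provider API keys programmatically before instantiating OpenPO.
os.environ["HF_API_KEY"] = "<your-api-key>"
os.environ["OPENROUTER_API_KEY"] = "<your-api-key>"
```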

### Completion
To get started with collecting LLM responses, simply pass in a list of model names of your choice:

> [!NOTE]
> OpenPO requires the provider name to be prepended to the model identifier.

```python
import os
from openpo import OpenPO

client = OpenPO()

response = client.completion.generate(
    models = [
        "huggingface/Qwen/Qwen2.5-Coder-32B-Instruct",
        "huggingface/mistralai/Mistral-7B-Instruct-v0.3",
        "huggingface/microsoft/Phi-3.5-mini-instruct",
    ],
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": MESSAGE},
    ],
)
```

You can also call models with OpenRouter.

```python
# make request to OpenRouter
client = OpenPO()

response = client.completion.generate(
    models = [
        "openrouter/qwen/qwen-2.5-coder-32b-instruct",
        "openrouter/mistralai/mistral-7b-instruct-v0.3",
        "openrouter/microsoft/phi-3.5-mini-128k-instruct",
    ],
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": MESSAGE},
    ],
)
```

OpenPO takes default model parameters as a dictionary via the `params` argument. Take a look at the documentation for more detail.

```python
response = client.completion.generate(
    models = [
        "huggingface/Qwen/Qwen2.5-Coder-32B-Instruct",
        "huggingface/mistralai/Mistral-7B-Instruct-v0.3",
        "huggingface/microsoft/Phi-3.5-mini-instruct",
    ],
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": MESSAGE},
    ],
    params={
        "max_tokens": 500,
        "temperature": 1.0,
    }
)

```

### Evaluation
OpenPO offers various ways to synthesize your dataset.


#### LLM-as-a-Judge
To use a single judge to evaluate your response data, use `evaluate.eval`:

```python
client = OpenPO()

res = client.evaluate.eval(
    models=['openai/gpt-4o'],
    questions=questions,
    responses=responses,
)
```

To use multiple judges, pass multiple judge models:

```python
res_a, res_b = client.evaluate.eval(
    models=["openai/gpt-4o", "anthropic/claude-sonnet-3-5-latest"],
    questions=questions,
    responses=responses,
)

# get consensus for multi judge responses.
result = client.evaluate.get_consensus(
    eval_A=res_a,
    eval_B=res_b,
)
```
<br>

OpenPO supports batch processing for evaluating large datasets in a cost-effective way.

> [!NOTE]
> Batch processing is an asynchronous operation and could take up to 24 hours (usually completes much faster).

```python
info = client.batch.eval(
    models=["openai/gpt-4o", "anthropic/claude-sonnet-3-5-latest"],
    questions=questions,
    responses=responses,
)

# check status
status = client.batch.check_status(batch_id=info.id)
```
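
Because batch jobs are asynchronous, you typically poll until they finish. Below is a minimal polling sketch; the exact value returned by `check_status` (and its terminal states) is an assumption here, so adapt it to what the documentation specifies:

```python
import time

# Hypothetical polling loop: assumes the value returned by check_status
# can be compared against a terminal state such as "completed".
while True:
    status = client.batch.check_status(batch_id=info.id)
    if str(status) in ("completed", "failed", "cancelled"):
        break
    time.sleep(300)  # batch jobs can take up to 24 hours
```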

For multi-judge with batch processing:

```python
batch_a, batch_b = client.batch.eval(
    models=["openai/gpt-4o", "anthropic/claude-sonnet-3-5-latest"],
    questions=questions,
    responses=responses,
)

# batch_a_result / batch_b_result: outputs retrieved after both batch jobs complete
result = client.batch.get_consensus(
    batch_A=batch_a_result,
    batch_B=batch_b_result,
)
```


#### Pre-trained Models
You can use pre-trained open-source evaluation models. OpenPO currently supports two types of models: `PairRM` and `Prometheus2`.

> [!NOTE]
> Appropriate hardware (a GPU with sufficient memory) is required to run inference with pre-trained models.

To use PairRM to rank responses:

```python
from openpo import PairRM

pairrm = PairRM()
res = pairrm.eval(prompts, responses)
```
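
For reference, here is a minimal sketch of the inputs. The exact expected shapes are an assumption: `prompts` is treated as a list of strings and `responses` as a parallel list of candidate lists, so check the documentation for the precise format.

```python
# Hypothetical example inputs for PairRM ranking.
prompts = ["What is the capital of France?"]
responses = [
    ["Paris is the capital of France.", "The capital of France is Lyon."],
]

res = pairrm.eval(prompts, responses)
```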

To use Prometheus2:

```python
from openpo import Prometheus2

pm = Prometheus2(model="prometheus-eval/prometheus-7b-v2.0")

feedback = pm.eval_relative(
    instructions=instructions,
    responses_A=response_A,
    responses_B=response_B,
    rubric='reasoning',
)
```


### Storing Data
Use the out-of-the-box storage classes to easily upload and download data.

```python
from openpo.storage import HuggingFaceStorage
hf_storage = HuggingFaceStorage()

# push data to repo
preference = {"prompt": "text", "preferred": "response1", "rejected": "response2"}
hf_storage.push_to_repo(repo_id="my-hf-repo", data=preference)

# Load data from repo
data = hf_storage.load_from_repo(path="my-hf-repo")
```


## Contributing
Contributions are what make open source amazingly special! Here's how you can help:

### Development Setup
1. Clone the repository
```bash
git clone https://github.com/yourusername/openpo.git
cd openpo
```

2. Install Poetry (dependency management tool)
```bash
curl -sSL https://install.python-poetry.org | python3 -
```

3. Install dependencies
```bash
poetry install
```

### Development Workflow
1. Create a new branch for your feature
```bash
git checkout -b feature-name
```
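
2. Make your changes, then commit and push your branch (the commit message and `feature-name` below are placeholders)
```bash
git add .
git commit -m "Describe your change"
git push origin feature-name
```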

3. Submit a Pull Request
- Write a clear description of your changes
- Reference any related issues

            
