# OpenPO 🐼
[![PyPI version](https://img.shields.io/pypi/v/openpo.svg)](https://pypi.org/project/openpo/)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Documentation](https://img.shields.io/badge/docs-docs.openpo.dev-blue)](https://docs.openpo.dev)
![Python](https://img.shields.io/badge/python->=3.10.1-blue.svg)
OpenPO simplifies building synthetic datasets for preference tuning from 200+ LLMs.
| Resources | Notebooks |
|----------|----------|
| Building a dataset with OpenPO and PairRM | 📔 [Notebook](https://colab.research.google.com/drive/1G1T-vOTXjIXuRX3h9OlqgnE04-6IpwIf?usp=sharing) |
| Using OpenPO with Prometheus 2 | 📔 [Notebook](https://colab.research.google.com/drive/1dro0jX1MOfSg0srfjA_DZyeWIWKOuJn2?usp=sharing) |
| Evaluating with LLM-as-a-Judge | 📔 [Notebook](https://colab.research.google.com/drive/1_QrmejW2Ym8yzP5RLJbLpVNA_FsEt2ZG?usp=sharing) |
## What is OpenPO?
OpenPO is an open source library that simplifies the process of building synthetic datasets for LLM preference tuning. By collecting outputs from 200+ LLMs and synthesizing them with research-proven methodologies, OpenPO helps developers build better fine-tuned language models with minimal effort.
## Key Features
- 🤖 **Multiple LLM Support**: Collect a diverse set of outputs from 200+ LLMs
- 📊 **Research-Backed Evaluation Methods**: Support for state-of-the-art evaluation methods for data synthesis
- 💾 **Flexible Storage**: Out-of-the-box storage providers for HuggingFace and S3
## Installation
### Install from PyPI (recommended)
OpenPO uses pip for installation. Run the following command in the terminal to install OpenPO:
```bash
pip install openpo
```
### Install from source
Clone the repository first, then run the following command:
```bash
cd openpo
poetry install
```
## Getting Started
Set your environment variables first:
```bash
# for completions
export HF_API_KEY=<your-api-key>
export OPENROUTER_API_KEY=<your-api-key>
# for evaluations
export OPENAI_API_KEY=<your-openai-api-key>
export ANTHROPIC_API_KEY=<your-anthropic-api-key>
```
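If you prefer, the same keys can be set from Python before creating the client (a minimal sketch using only the standard library):
```python
import os

# Set provider API keys programmatically instead of exporting them in the shell.
os.environ["HF_API_KEY"] = "<your-api-key>"
os.environ["OPENROUTER_API_KEY"] = "<your-api-key>"
```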
### Completion
To start collecting LLM responses, simply pass in a list of model names of your choice.
> [!NOTE]
> OpenPO requires the provider name to be prepended to the model identifier (e.g. `huggingface/...`, `openrouter/...`).
```python
from openpo import OpenPO

client = OpenPO()

# PROMPT and MESSAGE are placeholders for your own system prompt
# and user message strings.
response = client.completions(
    models=[
        "huggingface/Qwen/Qwen2.5-Coder-32B-Instruct",
        "huggingface/mistralai/Mistral-7B-Instruct-v0.3",
        "huggingface/microsoft/Phi-3.5-mini-instruct",
    ],
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": MESSAGE},
    ],
)
```
You can also call models with OpenRouter.
```python
# make request to OpenRouter
client = OpenPO()

response = client.completions(
    models=[
        "openrouter/qwen/qwen-2.5-coder-32b-instruct",
        "openrouter/mistralai/mistral-7b-instruct-v0.3",
        "openrouter/microsoft/phi-3.5-mini-128k-instruct",
    ],
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": MESSAGE},
    ],
)
```
OpenPO accepts model parameters as a dictionary via the `params` argument. See the documentation for the full list of supported parameters.
```python
response = client.completions(
    models=[
        "huggingface/Qwen/Qwen2.5-Coder-32B-Instruct",
        "huggingface/mistralai/Mistral-7B-Instruct-v0.3",
        "huggingface/microsoft/Phi-3.5-mini-instruct",
    ],
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": MESSAGE},
    ],
    params={
        "max_tokens": 500,
        "temperature": 1.0,
    },
)
```
### Evaluation
OpenPO offers various ways to synthesize your dataset. To run evaluation, install the extra dependencies:
```bash
pip install "openpo[eval]"
```
#### LLM-as-a-Judge
To evaluate your response data with a single judge, use `eval_single`:
```python
client = OpenPO()

res = client.eval_single(
    model="openai/gpt-4o",
    questions=questions,
    responses=responses,
)
```
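Here, `questions` and `responses` are parallel lists: one question per item and, for each question, the candidate responses to compare. A minimal sketch of what the inputs might look like (the example data is purely illustrative; see the documentation for the exact expected format):
```python
# Hypothetical example inputs: responses[i] holds the candidate
# answers to be judged for questions[i].
questions = ["What is the capital of France?"]
responses = [
    [
        "The capital of France is Paris.",
        "France's capital is Lyon.",
    ]
]
```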
To use multiple judges, use `eval_multi`:
```python
res = client.eval_multi(
    models=["openai/gpt-4o", "anthropic/claude-3-5-sonnet-latest"],
    questions=questions,
    responses=responses,
)
```
#### Pre-trained Models
You can also use pre-trained open source evaluation models. OpenPO currently supports two of them: `PairRM` and `Prometheus2`.
> [!NOTE]
> Appropriate hardware with a GPU and sufficient memory is required to run inference with pre-trained models.
To use PairRM to rank responses:
```python
from openpo import PairRM

# responses holds a list of candidate responses for each prompt.
pairrm = PairRM()
res = pairrm.eval(prompts, responses)
```
To use Prometheus2:
```python
from openpo import Prometheus2
from openpo.resources.provider import VLLM

model = VLLM(model="prometheus-eval/prometheus-7b-v2.0")
pm = Prometheus2(model=model)

# Compare responses_A against responses_B for the same instructions,
# using the "reasoning" rubric.
feedback = pm.eval_relative(
    instructions=instructions,
    responses_A=response_A,
    responses_B=response_B,
    rubric="reasoning",
)
```
### Storing Data
Use the out-of-the-box storage classes to easily upload and download data.
```python
from openpo.storage import HuggingFaceStorage
hf_storage = HuggingFaceStorage()
# push data to repo
preference = {"prompt": "text", "preferred": "response1", "rejected": "response2"}
hf_storage.push_to_repo(repo_id="my-hf-repo", data=preference)
# Load data from repo
data = hf_storage.load_from_repo(path="my-hf-repo")
```
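S3 is also supported as a storage provider. Below is a hypothetical sketch assuming an `S3Storage` class with an interface analogous to `HuggingFaceStorage`; the class name and method signature here are assumptions, so check the documentation for the actual S3 API:
```python
from openpo.storage import S3Storage  # assumed class name; see docs

# Assumed to pick up AWS credentials from the environment.
s3_storage = S3Storage()

# Hypothetical upload call; the actual method name may differ.
preference = {"prompt": "text", "preferred": "response1", "rejected": "response2"}
s3_storage.push_to_s3(bucket="my-bucket", data=preference)
```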
## Contributing
Contributions are what make open source special! Here's how you can help:
### Development Setup
1. Clone the repository
```bash
git clone https://github.com/dannylee1020/openpo.git
cd openpo
```
2. Install Poetry (dependency management tool)
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
3. Install dependencies
```bash
poetry install
```
### Development Workflow
1. Create a new branch for your feature
```bash
git checkout -b feature-name
```
2. Submit a Pull Request
- Write a clear description of your changes
- Reference any related issues