<div align="center">

<p align="center">
  <h1>Verifiers</h1>
</p>

<p>
Environments for LLM Reinforcement Learning
</p>

</div>

## Overview

Verifiers is a library of modular components for creating RL environments and training LLM agents. Verifiers includes an async GRPO implementation built around the `transformers` Trainer, is supported by `prime-rl` for large-scale FSDP training, and can easily be integrated into any RL framework which exposes an OpenAI-compatible inference client. In addition to RL training, Verifiers can be used directly for building LLM evaluations, creating synthetic data pipelines, and implementing agent harnesses.

Full documentation is available [here](https://verifiers.readthedocs.io/en/latest/). 

## Setup

We recommend using `verifiers` along with [uv](https://docs.astral.sh/uv/getting-started/installation/) for dependency management in your own project:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv init # create a fresh project
uv venv # create a virtual environment
source .venv/bin/activate
```

For local (CPU) development and evaluation with API models, do:
```bash
uv add verifiers # uv add 'verifiers[dev]' for Jupyter + testing support
```

For training on GPUs with `vf.GRPOTrainer`, do:
```bash
uv add 'verifiers[all]' && uv pip install flash-attn --no-build-isolation
```

To use the latest `main` branch, do:
```bash
uv add 'verifiers @ git+https://github.com/willccbb/verifiers.git'
```

To use with `prime-rl`, see [here](https://github.com/PrimeIntellect-ai/prime-rl).

To install `verifiers` from source for core library development, do:
```bash
git clone https://github.com/willccbb/verifiers.git
cd verifiers
uv sync --all-extras && uv pip install flash-attn --no-build-isolation
uv run pre-commit install
```

In general, we recommend that you build and train Environments *with* `verifiers`, not *in* `verifiers`. If you find yourself needing to clone and modify the core library in order to implement key functionality for your project, we'd love for you to open an issue so that we can try and streamline the development experience. Our aim is for `verifiers` to be a reliable toolkit to build on top of, and to minimize the "fork proliferation" which often pervades the RL infrastructure ecosystem.

## Environments

Environments in Verifiers are installable Python modules which can specify dependencies in a `pyproject.toml`, and which expose a `load_environment` function for instantiation by downstream applications (e.g. trainers). See `environments/` for examples. 
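
For illustration, a minimal module might expose something like the following (a hedged sketch; the module name and dataset contents are hypothetical, and `vf-init` below generates a fuller template):

```python
# vf_my_task/vf_my_task.py -- hypothetical module layout
import verifiers as vf
from datasets import Dataset

def load_environment(**kwargs) -> vf.Environment:
    # Toy dataset using the 'question'/'answer' schema described below
    dataset = Dataset.from_list([
        {"question": "What is 2 + 2?", "answer": "4"},
    ])

    def exact_match(completion, answer, **_) -> float:
        # 1.0 if the answer string appears in the final assistant message
        response = completion[-1]["content"] if isinstance(completion, list) else completion
        return 1.0 if answer in response else 0.0

    rubric = vf.Rubric(funcs=[exact_match], weights=[1.0])
    return vf.SingleTurnEnv(dataset=dataset, rubric=rubric, **kwargs)
```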

To initialize a blank Environment module template, do:
```bash
vf-init vf-environment-name # -p /path/to/environments (defaults to "./environments")
```
We recommend using the `vf-` prefix for clarity and to avoid conflicts with other dependencies; we prepend it by default if it is not present, though you can pass `--skip-vf-prefix` to override. Names are auto-standardized to use `"-"` in IDs and `"_"` in paths.

To install an Environment module into your project, do:
```bash
vf-install vf-environment-name # -p /path/to/environments (defaults to "./environments") 
```

To install an Environment module from this repo's `environments` folder, do:
```bash
vf-install vf-math-python --from-repo # -b branch_or_commit (defaults to "main")
```

Once an Environment module is installed, you can create an instance of the Environment using `load_environment`, passing any necessary args:
```python
import verifiers as vf
vf_env = vf.load_environment("vf-environment-name", **env_args)
```

To run a quick evaluation of your Environment with an API-based model, do:
```bash
vf-eval vf-environment-name # vf-eval -h for config options; defaults to gpt-4.1-mini, 5 prompts, 3 rollouts for each
```

The core elements of Environments are:
- Datasets: a Hugging Face `Dataset` with a `prompt` column for inputs, and either `answer (str)` or `info (dict)` columns for evaluation
- Rollout logic: interactions between models and the environment (e.g. `env_response` + `is_completed` for any `MultiTurnEnv`)
- Rubrics: an encapsulation for one or more reward functions
- Parsers: optional; an encapsulation for reusable parsing logic

We support both `/v1/chat/completions`-style and `/v1/completions`-style inference via OpenAI clients, though we generally recommend `/v1/chat/completions`-style inference for the vast majority of applications. Both the included `GRPOTrainer` and `prime-rl` support the full set of [SamplingParams](https://docs.vllm.ai/en/v0.6.0/dev/sampling_params.html) exposed by vLLM (via their OpenAI-compatible [server](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html) interface), and leveraging this will often be the appropriate way to implement rollout strategies requiring finer-grained control, such as interrupting and resuming generations for interleaved tool use, or enforcing reasoning budgets.
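
For example, with an OpenAI client pointed at a vLLM server, vLLM-specific sampling parameters can be passed through `extra_body` (a sketch; the base URL, model name, and parameter values are illustrative):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")  # local vLLM server
response = client.chat.completions.create(
    model="my-model",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=256,
    extra_body={"top_k": 20, "repetition_penalty": 1.1},  # vLLM SamplingParams
)
```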

The primary constraint we impose on rollout logic is that token sequences must be *increasing*, i.e. once a token has been added to a model's context in a rollout, it must remain as the rollout progresses. Note that this causes issues with some popular reasoning models such as the Qwen3 and DeepSeek-R1-Distill series; see [Footguns](#footguns) for guidance on adapting these models to support multi-turn rollouts.  
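
One way to check whether a chat template satisfies this property is to verify that each turn's tokenization is a strict prefix of the next (a sketch; the model id is a placeholder):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("your-model-id")  # placeholder
turn1 = [{"role": "user", "content": "Hi"},
         {"role": "assistant", "content": "<think>...</think>Hello!"}]
turn2 = turn1 + [{"role": "user", "content": "And now?"}]

ids1 = tok.apply_chat_template(turn1, tokenize=True)
ids2 = tok.apply_chat_template(turn2, tokenize=True)
# Templates that strip <think> sections on re-encode fail this check,
# which is what breaks multi-turn GRPO-style training.
assert ids2[:len(ids1)] == ids1, "chat template is not increasing"
```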

### SingleTurnEnv

For tasks requiring only a single response from a model for each prompt, you can use `SingleTurnEnv` directly by specifying a Dataset and a Rubric.

```python
from datasets import load_dataset
from openai import OpenAI
import verifiers as vf

dataset = load_dataset("my-account/my-dataset", split="train")

def reward_A(prompt, completion, info) -> float:
    # reward fn, e.g. correctness
    ...

def reward_B(parser, completion) -> float:
    # auxiliary reward fn, e.g. format
    ...

def metric(completion) -> float:
    # non-reward metric, e.g. proper noun count
    ...

rubric = vf.Rubric(funcs=[reward_A, reward_B, metric], weights=[1.0, 0.5, 0.0])

vf_env = vf.SingleTurnEnv(
    dataset=dataset,
    rubric=rubric,
)
results = vf_env.evaluate(client=OpenAI(), model="gpt-4.1-mini", num_examples=100, rollouts_per_example=1)
vf_env.make_dataset(results, push_to_hub=True, hub_name="my-new-environment-eval-results")  # save results to HF hub
```

Datasets should be formatted with columns for:
- `'prompt' (List[ChatMessage])` OR `'question' (str)` fields
	- `ChatMessage` = e.g. `{'role': 'user', 'content': '...'}`
	- if `question` is set instead of `prompt`, you can also pass `system_prompt (str)` and/or `few_shot (List[ChatMessage])`
- `answer (str)` AND/OR `info (dict)`
- `task (str)`: optional, used by `EnvGroup` and `RubricGroup` for orchestrating composition of Environments and Rubrics
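
For instance, a minimal compliant dataset might be constructed like this (contents are hypothetical):

```python
from datasets import Dataset

dataset = Dataset.from_list([
    {
        "question": "What is the capital of France?",  # or a 'prompt' message list
        "answer": "Paris",
        "info": {"difficulty": "easy"},  # optional auxiliary data for rewards
        "task": "geography",             # optional, for EnvGroup/RubricGroup routing
    },
])
```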

The following named attributes are available for use by reward functions in your Rubric:
- `prompt`: sequence of input messages
- `completion`: sequence of messages generated during rollout by model and Environment
- `answer`: primary answer column, optional if `info` is used
- `state`: can be modified during rollout to accumulate any metadata (`state['responses']` includes full OpenAI response objects by default)
- `info`: auxiliary info needed for reward computation (e.g. test cases), optional if `answer` is used
- `task`: tag for task type (used by `EnvGroup` and `RubricGroup`)
- `parser`: the parser object declared, defaults to `vf.Parser()`
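
As an illustration, a reward function can accept any subset of these by name (a sketch assuming chat-style completions):

```python
def correct_answer(completion, answer, **kwargs) -> float:
    # Grade the final assistant message against the 'answer' column
    final = [m for m in completion if m["role"] == "assistant"][-1]["content"]
    return 1.0 if answer.strip() in final else 0.0
```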

For tasks involving LLM judges, you may wish to use `vf.JudgeRubric()` for managing requests to auxiliary models.

### ToolEnv

For many applications involving tool use, you can use `ToolEnv` to leverage models' native tool/function-calling capabilities in an agentic loop. Tools can be specified as generic Python functions (with type hints and docstrings), which will then be passed in JSON schema form to each inference request.

```python
import verifiers as vf

vf_env = vf.ToolEnv(
    dataset=...,  # HF Dataset with 'prompt'/'question' + 'answer'/'info' columns
    rubric=...,   # Rubric object; vf.ToolRubric() can optionally be used for counting tool invocations in each rollout
    tools=[search_tool, read_article_tool, python_tool],  # python functions with type hints + docstrings
    max_turns=10,
)
```
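
For example, a tool like `search_tool` above is just a typed, documented function (a hypothetical stub; the JSON schema sent to the model is derived from the signature and docstring):

```python
def search_tool(query: str, max_results: int = 5) -> str:
    """Search the web and return a newline-separated list of result titles.

    Args:
        query: The search query.
        max_results: Maximum number of results to return.
    """
    ...  # hypothetical: call your search backend here
```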

In cases where your tools require heavy computational resources, we recommend hosting your tools as standalone servers (e.g. MCP servers) and creating lightweight wrapper functions to pass to `ToolEnv`. Parallel tool call support is enabled by default. 
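
A lightweight wrapper in that style might look like the following (a sketch; the sandbox URL and response shape are hypothetical):

```python
import requests

def python_tool(code: str) -> str:
    """Execute Python code in a remote sandbox and return its stdout.

    Args:
        code: The Python source to execute.
    """
    resp = requests.post("http://localhost:8888/exec", json={"code": code}, timeout=30)
    resp.raise_for_status()
    return resp.json()["stdout"]  # heavy lifting happens server-side
```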

For training, or self-hosted endpoints, you'll want to enable auto tool choice in [vLLM](https://docs.vllm.ai/en/stable/features/tool_calling.html#automatic-function-calling) with the appropriate parser. If your model does not support native tool calling, you may find the `XMLParser` abstraction useful for rolling your own tool call parsing on top of `MultiTurnEnv`; see `environments/xml_tool_env` for an example.
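
Conceptually, rolling your own parsing just means extracting a structured call from tagged model output, e.g. (a plain-Python sketch, shown without the `XMLParser` helper):

```python
import json
import re

def parse_tool_call(text: str) -> dict | None:
    """Extract a JSON tool call from a <tool>...</tool> block, if present."""
    match = re.search(r"<tool>(.*?)</tool>", text, re.DOTALL)
    return json.loads(match.group(1)) if match else None

parse_tool_call('Searching. <tool>{"name": "search_tool", "args": {"query": "RL"}}</tool>')
# -> {'name': 'search_tool', 'args': {'query': 'RL'}}
```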

### MultiTurnEnv

Both `SingleTurnEnv` and `ToolEnv` are instances of `MultiTurnEnv`, which exposes an interface for writing custom Environment interaction protocols. The two methods you must override are:

```python
from typing import Tuple

from datasets import Dataset
import verifiers as vf
from verifiers.types import Messages, State

class YourMultiTurnEnv(vf.MultiTurnEnv):
    def __init__(self,
                 dataset: Dataset,
                 rubric: vf.Rubric,
                 max_turns: int,
                 **kwargs):
        super().__init__(dataset=dataset, rubric=rubric, max_turns=max_turns, **kwargs)

    def is_completed(self, messages: Messages, state: State, **kwargs) -> bool:
        # return whether or not a rollout is completed
        ...

    def env_response(self, messages: Messages, state: State, **kwargs) -> Tuple[Messages, State]:
        # return new environment message(s) + updated state
        ...
```
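
As a concrete (hypothetical) illustration, an environment that gives feedback until the model produces the stored answer might look like:

```python
from typing import Tuple

import verifiers as vf
from verifiers.types import Messages, State

class GuessingEnv(vf.MultiTurnEnv):
    """Toy protocol: keep prompting until the model's reply contains the answer."""

    def is_completed(self, messages: Messages, state: State, **kwargs) -> bool:
        # Assumes the target string was stashed in state, e.g. during setup
        return state.get("answer", "") in messages[-1]["content"]

    def env_response(self, messages: Messages, state: State, **kwargs) -> Tuple[Messages, State]:
        state["num_guesses"] = state.get("num_guesses", 0) + 1
        return [{"role": "user", "content": "Not quite, try again."}], state
```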

If your application requires more fine-grained control than is allowed by `MultiTurnEnv`, you may want to inherit from the base `Environment` functionality directly and override the `rollout` method.


## Training


### GRPOTrainer

The included trainer (`vf.GRPOTrainer`) supports running GRPO-style RL training via Accelerate/DeepSpeed, and uses vLLM for inference. It supports both full-parameter finetuning and LoRA, and is optimized for efficiently training dense transformer models on 2-16 GPUs.

```bash
# install environment
vf-install vf-wordle (-p /path/to/environments | --from-repo)

# quick eval
vf-eval vf-wordle -m (model_name in configs/endpoints.py) -n NUM_EXAMPLES -r ROLLOUTS_PER_EXAMPLE

# inference (shell 0)
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5 vf-vllm --model willcb/Qwen3-1.7B-Wordle \
    --data-parallel-size 6 --enforce-eager --disable-log-requests

# training (shell 1)
CUDA_VISIBLE_DEVICES=6,7 accelerate launch --num-processes 2 \
    --config-file configs/zero3.yaml examples/grpo/train_wordle.py --size 1.7B
```

### Troubleshooting 
- Ensure your `wandb` and `huggingface-cli` logins are set up (or set `report_to=None` in `training_args`). You should also have something set as your `OPENAI_API_KEY` in your environment (can be a dummy key for vLLM). 
- If using high max concurrency, increase the number of allowed open sockets (e.g. `ulimit -n 4096`)
- On some setups, inter-GPU communication can [hang](https://github.com/huggingface/trl/issues/2923) or crash during vLLM weight syncing. This can usually be alleviated by setting (or unsetting) `NCCL_P2P_DISABLE=1` in your environment. Try this as your first step if you experience NCCL-related issues.
- If problems persist, please open an [issue](https://github.com/willccbb/verifiers/issues).

### Resource Requirements
`GRPOTrainer` is optimized for setups with at least 2 GPUs, scaling up to multiple nodes. 2-GPU setups with sufficient memory to enable small-scale experimentation can be [rented](https://app.primeintellect.ai/dashboard/create-cluster?image=ubuntu_22_cuda_12) for <$1/hr.

### PRIME-RL
If you do not require LoRA support, you may want to use the `prime-rl` trainer, which natively supports Environments created using `verifiers`, is more optimized for performance and scalability via FSDP, includes a broader set of configuration options and user experience features, and has more battle-tested defaults. Both trainers support asynchronous rollouts, and use a one-step off-policy delay by default for overlapping training and inference. See the `prime-rl` [docs](https://github.com/PrimeIntellect-ai/prime-rl) for usage instructions.

## Further Documentation

See the full [docs](https://verifiers.readthedocs.io/en/latest/) for more info, including:
- Dataset configuration options (system prompts, few-shot examples, eval datasets)
- Parsers (e.g. ThinkParser, XMLParser)
- Advanced Rubric patterns
- Composing Environments (EnvGroup) and Rubrics (RubricGroup)
- Creating and saving rollout datasets using Environments
- More Environment example walkthroughs
- Hardware considerations
- SFT warmup for improving small-model training efficiency
- RL + GRPO best practices
- Common footguns

## Footguns

**Non-Increasing Chat Templates:** The Qwen3 and DeepSeek-R1 model series both remove `<think>` sections from messages when processing inputs, which violates the increasing context requirement for multi-turn GRPO-style training. We provide versions of many of these models with modified chat templates [here](https://huggingface.co/collections/willcb/qwen3-68434f4883925bfdb4570ee5).

## Contributions

Verifiers warmly welcomes community contributions! Please open an issue or PR if you encounter bugs or other pain points during your development, or start a discussion for more open-ended questions.

Please note that the core `verifiers/` library is intended to be a relatively lightweight set of reusable components rather than an exhaustive catalog of RL environments. For *applications* of `verifiers` (e.g. "an Environment for XYZ task"), you are welcome to submit a PR for a self-contained module that lives within `environments/` if it serves as a canonical example of a new pattern. Stay tuned for more info shortly about our plans for supporting community Environment contributions 🙂

## Citation

If you use this code in your research, please cite:

```bibtex
@misc{brown_verifiers_2025,
  author       = {William Brown},
  title        = {{Verifiers}: Reinforcement Learning with LLMs in Verifiable Environments},
  howpublished = {\url{https://github.com/willccbb/verifiers}},
  note         = {Commit abcdefg • accessed DD Mon YYYY},
  year         = {2025}
}
```


## Roadmap

- A community Environments hub for crowdsourcing, sharing, and discovering new RL environments built with `verifiers`
- Default patterns for hosted resources such as code sandboxes, auxiliary models, and MCP servers
- Multimodal input support
- Non-increasing token sequences via REINFORCE


