<div align="center">
# 🎯 HPSv3: Towards Wide-Spectrum Human Preference Score (ICCV 2025)
[Project Page](https://mizzenai.github.io/HPSv3.project/)
[Model (Hugging Face)](https://huggingface.co/MizzenAI/HPSv3)
[Dataset (Hugging Face)](https://huggingface.co/MizzenAI/HPDv3)
**Yuhang Ma**<sup>1,3*</sup>  **Yunhao Shui**<sup>1,4*</sup>  **Xiaoshi Wu**<sup>2</sup>  **Keqiang Sun**<sup>1,2†</sup>  **Hongsheng Li**<sup>2,4,5†</sup>
<sup>1</sup>Mizzen AI   <sup>2</sup>CUHK MMLab   <sup>3</sup>King’s College London   <sup>4</sup>Shanghai Jiaotong University  
<sup>5</sup>Shanghai AI Laboratory   <sup>6</sup>CPII, InnoHK  
<sup>*</sup>Equal Contribution  <sup>†</sup>Equal Advising
</div>
## 📖 Introduction
This is the official implementation for the paper: [HPSv3: Towards Wide-Spectrum Human Preference Score]().
First, we introduce a VLM-based preference model, **HPSv3**, trained on a "wide spectrum" preference dataset, **HPDv3**, with 1.08M text-image pairs and 1.17M annotated pairwise comparisons, covering both state-of-the-art and earlier generative models, as well as high- and low-quality real-world images. Second, we propose a novel reasoning approach for iterative image refinement, **CoHP (Chain-of-Human-Preference)**, which efficiently improves image quality without requiring additional training data.
<p align="center">
<img src="assets/teaser.png" alt="Teaser" width="900"/>
</p>
## ✨ Updates
- **[2025-08-05]** 🎉 We release HPSv3: inference code, training code, CoHP code, and model weights.
## 📑 Table of Contents
1. [🚀 Quick Start](#🚀-quick-start)
2. [🌐 Gradio Demo](#🌐-gradio-demo)
3. [🏋️ Training](#🏋️-training)
4. [📊 Benchmark](#📊-benchmark)
5. [🎯 CoHP (Chain-of-Human-Preference)](#🎯-cohp-chain-of-human-preference)
---
## 🚀 Quick Start
HPSv3 is a state-of-the-art human preference score model for evaluating image quality and prompt alignment. It builds upon the Qwen2-VL architecture to provide accurate assessments of generated images.
### 💻 Installation
<!-- # Method 1: Pypi download and install for inference.
pip install hpsv3 -->
```bash
# Install locally for development or training.
git clone https://github.com/MizzenAI/HPSv3.git
cd HPSv3
conda env create -f environment.yaml
conda activate hpsv3
# Recommended: install flash-attn
pip install flash-attn==2.7.4.post1
pip install -e .
```
### 🛠️ Basic Usage
#### Simple Inference Example
```python
from hpsv3 import HPSv3RewardInferencer
# Initialize the model
inferencer = HPSv3RewardInferencer(device='cuda')
# Evaluate images
image_paths = ["assets/example1.png", "assets/example2.png"]
prompts = [
    "cute chibi anime cartoon fox, smiling wagging tail with a small cartoon heart above sticker",
    "cute chibi anime cartoon fox, smiling wagging tail with a small cartoon heart above sticker"
]
# Get preference scores
rewards = inferencer.reward(image_paths, prompts)
scores = [reward[0].item() for reward in rewards] # Extract mu values
print(f"Image scores: {scores}")
```
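The reward head predicts two values per image, a mean $\mu$ and an uncertainty $\sigma$ (see `output_dim` in the training configuration below); the snippet above keeps only $\mu$. A minimal extension that also reads $\sigma$, assuming each element of `rewards` is laid out as `[mu, sigma]` (an assumption to verify against the actual output), could look like this:

```python
# Assumption: each reward tensor is laid out as [mu, sigma]; verify this
# against the actual HPSv3RewardInferencer output before relying on it.
for path, reward in zip(image_paths, rewards):
    mu, sigma = reward[0].item(), reward[1].item()
    print(f"{path}: mu={mu:.3f}, sigma={sigma:.3f}")
```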
---
## 🌐 Gradio Demo
Launch an interactive web interface to test HPSv3:
```bash
python gradio_demo/demo.py
```
The demo will be available at `http://localhost:7860`:
<p align="center">
<img src="assets/gradio.png" alt="Gradio Demo" width="500"/>
</p>
## 🏋️ Training
### 📁 Dataset
#### Human Preference Dataset v3
Human Preference Dataset v3 (HPD v3) comprises 1.08M text-image pairs and 1.17M annotated pairwise comparisons. To model the wide spectrum of human preferences, we include the newest state-of-the-art generative models and high-quality real photographs while retaining older models and lower-quality real images.
<details close>
<summary>Detailed information on HPD v3</summary>
| Image Source | Type | Num Images | Prompt Source | Split |
|--------------|------|-----------|---------------|-------|
| High Quality Image (HQI) | Real Image | 57759 | VLM Caption | Train & Test |
| MidJourney | - | 331955 | User | Train |
| CogView4 | DiT | 400 | HQI+HPDv2+JourneyDB | Test |
| FLUX.1 dev | DiT | 48927 | HQI+HPDv2+JourneyDB | Train & Test |
| Infinity | Autoregressive | 27061 | HQI+HPDv2+JourneyDB | Train & Test |
| Kolors | DiT | 49705 | HQI+HPDv2+JourneyDB | Train & Test |
| HunyuanDiT | DiT | 46133 | HQI+HPDv2+JourneyDB | Train & Test |
| Stable Diffusion 3 Medium | DiT | 49266 | HQI+HPDv2+JourneyDB | Train & Test |
| Stable Diffusion XL | Diffusion | 49025 | HQI+HPDv2+JourneyDB | Train & Test |
| Pixart Sigma | Diffusion | 400 | HQI+HPDv2+JourneyDB | Test |
| Stable Diffusion 2 | Diffusion | 19124 | HQI+JourneyDB | Train & Test |
| CogView2 | Autoregressive | 3823 | HQI+JourneyDB | Train & Test |
| FuseDream | Diffusion | 468 | HQI+JourneyDB | Train & Test |
| VQ-Diffusion | Diffusion | 18837 | HQI+JourneyDB | Train & Test |
| Glide | Diffusion | 19989 | HQI+JourneyDB | Train & Test |
| Stable Diffusion 1.4 | Diffusion | 18596 | HQI+JourneyDB | Train & Test |
| Stable Diffusion 1.1 | Diffusion | 19043 | HQI+JourneyDB | Train & Test |
| Curated HPDv2 | - | 327763 | - | Train |
</details>
#### Download HPDv3
```
HPDv3 is coming soon! Stay tuned!
```
<!-- ```bash
huggingface-cli download --repo-type dataset MizzenAI/HPDv3 --local-dir /your-local-dataset-path
``` -->
#### Pairwise Training Data Format
**Important note: for simplicity, `path1` always points to the preferred image.**
```json
[
  {
    "prompt": "A beautiful landscape painting",
    "path1": "path/to/better/image.jpg",
    "path2": "path/to/worse/image.jpg",
    "confidence": 0.95
  },
  ...
]
```
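For illustration, a minimal loader that applies the `confidence_threshold` filtering described in the training configuration below might look like this (a sketch; the actual dataset code in `hpsv3` may differ):

```python
import json

def load_pairs(json_path: str, confidence_threshold: float = 0.95):
    """Load pairwise comparisons, keeping only confident annotations.

    Each entry lists the preferred image under "path1" and the
    less-preferred one under "path2".
    """
    with open(json_path) as f:
        pairs = json.load(f)
    return [p for p in pairs if p.get("confidence", 1.0) >= confidence_threshold]

# Example (file name taken from the config table below):
pairs = load_pairs("example_train.json")
print(f"Kept {len(pairs)} confident pairs")
```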
### 🚀 Training Command
```bash
# Install locally for training (see Installation above)
git clone https://github.com/MizzenAI/HPSv3.git
cd HPSv3
conda env create -f environment.yaml
conda activate hpsv3
# Recommended: install flash-attn
pip install flash-attn==2.7.4.post1
pip install -e .
# Train with 7B model
deepspeed hpsv3/train.py --config hpsv3/config/HPSv3_7B.yaml
```
<details close>
<summary>Important Config Arguments</summary>
| Configuration Section | Parameter | Value | Description |
|----------------------|-----------|-------|-------------|
| **Model Configuration** | `rm_head_type` | `"ranknet"` | Type of reward model head architecture |
| | `lora_enable` | `False` | Enable LoRA (Low-Rank Adaptation) for efficient fine-tuning. If `False`, language tower is fully trainable|
| | `vision_lora` | `False` | Apply LoRA specifically to vision components. If `False`, vision tower is fully trainable|
| | `model_name_or_path` | `"Qwen/Qwen2-VL-7B-Instruct"` | Path to the base model checkpoint |
| **Data Configuration** | `confidence_threshold` | `0.95` | Minimum confidence score for training data |
| | `train_json_list` | `[example_train.json]` | List of training data files |
| | `test_json_list` | `[validation_sets]` | List of validation datasets with names |
| | `output_dim` | `2` | Output dimension of the reward head for $\mu$ and $\sigma$|
| | `loss_type` | `"uncertainty"` | Loss function type for training |
</details>
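Assembled from the table above, a schematic config in the spirit of `hpsv3/config/HPSv3_7B.yaml` might look like this (field names come from the table; the nesting and defaults are assumptions, so consult the shipped file for the authoritative layout):

```yaml
# Schematic sketch only; see hpsv3/config/HPSv3_7B.yaml for the real file.
model_name_or_path: "Qwen/Qwen2-VL-7B-Instruct"
rm_head_type: "ranknet"
lora_enable: false        # language tower fully trainable
vision_lora: false        # vision tower fully trainable
output_dim: 2             # reward head predicts mu and sigma
loss_type: "uncertainty"

confidence_threshold: 0.95
train_json_list:
  - example_train.json
test_json_list:
  - validation_sets
```

With `output_dim: 2`, the head predicts a mean $\mu$ and an uncertainty $\sigma$ per image. One standard uncertainty-aware ranking formulation (given here as an illustrative assumption, not necessarily the exact loss used in this repo) treats rewards as Gaussian and minimizes

$$
\mathcal{L} = -\log \Phi\!\left(\frac{\mu_{\text{win}} - \mu_{\text{lose}}}{\sqrt{\sigma_{\text{win}}^{2} + \sigma_{\text{lose}}^{2}}}\right),
$$

where $\Phi$ is the standard normal CDF, so confidently wrong rankings are penalized more heavily than uncertain ones.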
---
## 📊 Benchmark
To evaluate **HPSv3 preference accuracy** or the **human preference score of an image generation model**, follow the detailed instructions in [Evaluation Instructions](evaluate/README.md).
<details open>
<summary> Preference Accuracy of HPSv3 </summary>
| Model | ImageReward | PickScore | HPDv2 | HPDv3 |
|------|-------------|-----------|-------|-------|
| [CLIP ViT-H/14](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) | 57.1 | 60.8 | 65.1 | 48.6 |
| [Aesthetic Score Predictor](https://github.com/christophschuhmann/improved-aesthetic-predictor) | 57.4 | 56.8 | 76.8 | 59.9 |
| [ImageReward](https://github.com/THUDM/ImageReward) | 65.1 | 61.1 | 74.0 | 58.6 |
| [PickScore](https://github.com/yuvalkirstain/PickScore) | 61.6 | <u>70.5</u> | 79.8 | <u>65.6</u> |
| [HPS](https://github.com/tgxs002/align_sd) | 61.2 | 66.7 | 77.6 | 63.8 |
| [HPSv2](https://github.com/tgxs002/HPSv2) | 65.7 | 63.8 | 83.3 | 65.3 |
| [MPS](https://github.com/Kwai-Kolors/MPS) | **67.5** | 63.1 | <u>83.5</u> | 64.3 |
| HPSv3 | <u>66.8</u> | **72.8** | **85.4** | **76.9** |
</details>
<details open>
<summary> Image Generation Benchmark of HPSv3 </summary>
| Model | Overall | Characters | Arts | Design | Architecture | Animals | Natural Scenery | Transportation | Products | Others | Plants | Food | Science |
|------|---------|------------|------|--------|--------------|---------|-----------------|----------------|----------|--------|--------|------|---------|
| Kolors | **10.55** | **11.79** | **10.47** | **9.87** | <u>10.82</u> | **10.60** | 9.89 | <u>10.68</u> | <u>10.93</u> | **10.50** | **10.63** | <u>11.06</u> | <u>9.51</u> |
| Flux-dev | <u>10.43</u> | <u>11.70</u> | <u>10.32</u> | 9.39 | **10.93** | <u>10.38</u> | <u>10.01</u> | **10.84** | **11.24** | <u>10.21</u> | 10.38 | **11.24** | 9.16 |
| Playgroundv2.5 | 10.27 | 11.07 | 9.84 | <u>9.64</u> | 10.45 | <u>10.38</u> | 9.94 | 10.51 | <u>10.62</u> | 10.15 | <u>10.62</u> | 10.84 | 9.39 |
| Infinity | 10.26 | 11.17 | 9.95 | 9.43 | 10.36 | 9.27 | **10.11** | 10.36 | 10.59 | 10.08 | 10.30 | 10.59 | **9.62** |
| CogView4 | 9.61 | 10.72 | 9.86 | 9.33 | 9.88 | 9.16 | 9.45 | 9.69 | 9.86 | 9.45 | 9.49 | 10.16 | 8.97 |
| PixArt-Ξ£ | 9.37 | 10.08 | 9.07 | 8.41 | 9.83 | 8.86 | 8.87 | 9.44 | 9.57 | 9.52 | 9.73 | 10.35 | 8.58 |
| Gemini 2.0 Flash | 9.21 | 9.98 | 8.44 | 7.64 | 10.11 | 9.42 | 9.01 | 9.74 | 9.64 | 9.55 | 10.16 | 7.61 | 9.23 |
| SDXL | 8.20 | 8.67 | 7.63 | 7.53 | 8.57 | 8.18 | 7.76 | 8.65 | 8.85 | 8.32 | 8.43 | 8.78 | 7.29 |
| HunyuanDiT | 8.19 | 7.96 | 8.11 | 8.28 | 8.71 | 7.24 | 7.86 | 8.33 | 8.55 | 8.28 | 8.31 | 8.48 | 8.20 |
| Stable Diffusion 3 Medium | 5.31 | 6.70 | 5.98 | 5.15 | 5.25 | 4.09 | 5.24 | 4.25 | 5.71 | 5.84 | 6.01 | 5.71 | 4.58 |
| SD2 | -0.24 | -0.34 | -0.56 | -1.35 | -0.24 | -0.54 | -0.32 | 1.00 | 1.11 | -0.01 | -0.38 | -0.38 | -0.84 |
</details>
---
## 🎯 CoHP (Chain-of-Human-Preference)
CoHP is our novel reasoning approach for iterative image refinement that efficiently improves image quality without requiring additional training data. It works by generating images with multiple diffusion models, selecting the best one using a reward model, and then iteratively refining it through image-to-image generation.
<p align="center">
<img src="assets/cohp.png" alt="cohp" width="600"/>
</p>
### 🚀 Usage
#### Basic Command
```bash
python hpsv3/cohp/run_cohp.py \
--prompt "A beautiful sunset over mountains" \
--index "sample_001" \
--device "cuda:0" \
--reward_model "hpsv3"
```
#### Parameters
- `--prompt`: Text prompt for image generation (required)
- `--index`: Unique identifier for saving results (required)
- `--device`: GPU device to use (default: 'cuda:1')
- `--reward_model`: Reward model for scoring images
- `hpsv3`: HPSv3 model (default, recommended)
- `hpsv2`: HPSv2 model
- `imagereward`: ImageReward model
- `pickscore`: PickScore model
#### Supported Generation Models
CoHP uses multiple state-of-the-art diffusion models for initial generation: **FLUX.1 dev**, **Kolors**, **Stable Diffusion 3 Medium**, and **Playground v2.5**.
#### How CoHP Works
1. **Multi-Model Generation**: Generates images using all supported models
2. **Reward Scoring**: Evaluates each image using the specified reward model
3. **Best Model Selection**: Chooses the model that produced the highest-scoring image
4. **Iterative Refinement**: Performs 5 rounds of image-to-image generation to improve quality
5. **Adaptive Strength**: Uses strength=0.8 for rounds 1-2, then 0.5 for rounds 3-5 (see the sketch below)
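The pseudocode below condenses these five steps; `generate`, `img2img`, and `score` are hypothetical placeholders for the actual model and reward-model calls in `hpsv3/cohp/run_cohp.py`:

```python
# Sketch of the CoHP loop; generate, img2img, and score are hypothetical
# stand-ins for the real model and reward-model calls in run_cohp.py.
MODELS = ["flux.1-dev", "kolors", "sd3-medium", "playground-v2.5"]

def cohp(prompt, generate, img2img, score):
    # Steps 1-3: generate with every supported model, score each image,
    # and keep the model whose output the reward model prefers.
    candidates = {m: generate(m, prompt) for m in MODELS}
    best_model, image = max(candidates.items(),
                            key=lambda kv: score(kv[1], prompt))
    # Steps 4-5: five image-to-image refinement rounds with adaptive
    # strength: 0.8 for rounds 1-2, then 0.5 for rounds 3-5.
    for round_idx in range(1, 6):
        strength = 0.8 if round_idx <= 2 else 0.5
        image = img2img(best_model, prompt, image, strength=strength)
    return image
```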
---
## 🦾 Results as Reward Model
We apply [DanceGRPO](https://github.com/XueZeyue/DanceGRPO) as the reinforcement learning method. Here are some results.
All experiments use the same settings, with **Stable Diffusion 1.4** as the backbone.
<p align="center">
<img src="assets/rl1.jpg" width="600"/>
</p>
<p align="center">
<img src="assets/rl2.jpg" width="600"/>
</p>
### More Results of HPSv3 as Reward Model (Stable Diffusion 1.4)
<p align="center">
<img src="assets/rl_teaser.jpg" alt="cohp" width="600"/>
</p>
## 📚 Citation
If you find HPSv3 useful in your research, please cite our work:
```bibtex
@inproceedings{hpsv3,
title={HPSv3: Towards Wide-Spectrum Human Preference Score},
author={Ma, Yuhang and Shui, Yunhao and Wu, Xiaoshi and Sun, Keqiang and Li, Hongsheng},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2025}
}
```
---
## 🙏 Acknowledgements
We would like to thank the [VideoAlign](https://github.com/KwaiVGI/VideoAlign) codebase for providing valuable references.
---
## 💬 Support
For questions and support:
- **Issues**: [GitHub Issues](https://github.com/MizzenAI/HPSv3/issues)
- **Email**: yhshui@mizzen.ai & yhma@mizzen.ai