# Hyperion
Hyperion is a modern hyperparameter optimization framework built for the agentic era. Unlike conventional libraries, it orchestrates and reasons about long-running, parallel experiments through an event-driven, agent-based architecture. Experiments are modeled as a dynamic exploration tree, enabling efficient branching, pruning, and adaptation across parallel runs while maintaining a transparent reasoning trace.
<picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/images/dashboard-dark.png">
<img src="docs/images/dashboard-light.png" alt="Hyperion dashboard (lineage graph)">
</picture>
At its core, Hyperion is designed to be model-agnostic and to support any training routine through a flexible interface: if you can wrap your training loop in Python, you can optimize it here. While the project is still in its early days, the aim is to make optimization composable, observable, and scalable, delivering faster results with less manual effort and interpretable outcomes.
## Key Features
- 🎯 **Multiple Search Strategies**: Random, Grid, Beam Search, Bayesian Optimization, Population-Based Training
- 🤖 **Agent Integration**: LLM-driven and rule-based agents for intelligent optimization
- 🌳 **Lineage-Aware Trials**: First-class support for branching search with trial ancestry tracking
- 📊 **Full Observability**: Complete event log with decision rationale and reproducible experiments
- 🚀 **Progressive Scaling**: From in-memory prototypes to distributed execution
- 🔧 **Ergonomic API**: High-level `tune()` API with progressive disclosure to framework internals
## Quick Start
### Basic Optimization
```python
from hyperion import tune, Float, Choice, ObjectiveResult

def objective(ctx, lr: float, batch_size: int) -> ObjectiveResult:
    # Your training code here
    score = train_model(lr=lr, batch_size=batch_size)

    # Report progress during training
    ctx.report(step=1, loss=0.5, accuracy=0.8)

    # Check if we should stop early
    if ctx.should_stop():
        return ObjectiveResult(score=score)

    return ObjectiveResult(score=score)

# Run optimization
result = tune(
    objective=objective,
    space={
        "lr": Float(0.001, 0.1, log=True),  # Log-scale sampling
        "batch_size": Choice([32, 64, 128]),
    },
    strategy="random",
    max_trials=50,
    max_concurrent=4,
    show_progress=True,  # Display live progress
    show_summary=True,   # Show final summary
)

print(f"Best params: {result['best']}")
```
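The snippet above calls a `train_model` function that the README does not define. For trying out the API before wiring in a real training loop, a toy stand-in can serve; the quadratic scoring below is purely illustrative and not part of Hyperion:

```python
import math

def train_model(lr: float, batch_size: int) -> float:
    """Toy stand-in for a real training run (illustrative only).

    Scores peak at lr=0.01 and batch_size=64, giving the search
    strategy a smooth surface to explore.
    """
    # Distance from a made-up optimum, in log-space for the learning rate
    lr_penalty = (math.log10(lr) - math.log10(0.01)) ** 2
    bs_penalty = ((batch_size - 64) / 64) ** 2
    return 1.0 / (1.0 + lr_penalty + bs_penalty)
```

Swapping this stub for an actual training loop is the only change needed to move from a smoke test to a real experiment.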
### Using Different Search Strategies
```python
# Beam Search - tree-based exploration with pruning
result = tune(
    objective=objective,
    space=space,
    strategy="beam_search",
    strategy_kwargs={
        "K": 3,          # Keep top 3 trials per depth
        "width": 2,      # Generate 2 children per parent
        "max_depth": 4,  # Maximum search tree depth
    },
    max_trials=100,
    max_concurrent=4,
)

# Grid Search - exhaustive search over discrete values
from hyperion import Int

result = tune(
    objective=objective,
    space={
        "lr": Choice([0.001, 0.01, 0.1]),
        "batch_size": Choice([32, 64]),
        "layers": Int(2, 4),
    },
    strategy="grid",
    max_concurrent=4,
)

# LLM Agent - AI-powered optimization (provider-agnostic)
result = tune(
    objective=objective,
    space=space,
    strategy="llm_agent",
    strategy_kwargs={
        "llm": ...,  # Provide any LLM via a simple callable (prompt: str) -> str
        "max_history": 20,
    },
    max_trials=50,
    max_concurrent=2,
)
```
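Because the `llm` hook is just a callable mapping a prompt string to a response string, any provider can be adapted without Hyperion-specific glue. A minimal sketch, where `make_llm` and `fake_backend` are hypothetical names and the canned JSON reply is a placeholder, not a real model:

```python
from typing import Callable

def make_llm(model_call: Callable[[str], str]) -> Callable[[str], str]:
    """Adapt any provider's text-generation call to a (prompt) -> str callable."""
    preamble = "You are a hyperparameter tuning assistant.\n"

    def llm(prompt: str) -> str:
        # Prepend a fixed instruction, then delegate to the supplied backend
        return model_call(preamble + prompt)

    return llm

# Deterministic placeholder backend for local testing; a real setup would
# call out to an actual LLM client here instead.
def fake_backend(prompt: str) -> str:
    return '{"lr": 0.01, "batch_size": 64}'

llm = make_llm(fake_backend)
```

The resulting `llm` can then be passed as `strategy_kwargs={"llm": llm, ...}`; replacing `fake_backend` with a real provider call is the only change needed.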
### With Persistent Storage
```python
# Use SQLite to persist experiment data
result = tune(
    objective=objective,
    space=space,
    strategy="random",
    max_trials=100,
    storage="sqlite:///experiments.db",  # Save to database
)
# Results are automatically saved and can be analyzed later
```
## Web UI
Hyperion includes a web dashboard for real-time experiment monitoring and visualization. The dashboard provides:
- **Live Experiment Tracking**: Monitor running experiments with real-time updates via WebSocket
- **Interactive Lineage Graph**: Visualize trial relationships and branching patterns
- **Metrics Dashboard**: Track performance metrics, compare trials, and identify trends
- **Event Timeline**: Audit trail of all decisions and actions with full context
### Running the Dashboard
```bash
# Start both backend and frontend
mise run ui
# Or run them separately:
mise run ui-backend # FastAPI server on port 8000
mise run ui-frontend # React app on port 5173
```
The dashboard automatically connects to your SQLite database and provides both live monitoring (when running in-process) and historical analysis capabilities.
## Architecture
Hyperion follows a layered architecture so you can use as much or as little of it as you like:
1. **Core Layer**: Events, commands, controller, executors, and storage primitives
2. **Framework Layer**: Experiments, policies, search spaces, and callbacks
3. **API Layer**: High-level functions like `tune()` and `optimize()`
4. **Interface Layer**: CLI and optional web UI
This separation keeps the internals composable while letting you choose your level of control. Most users will start at the API and only drop down when necessary.
## Documentation
- **[API Guide](docs/api-guide.md)** - Complete guide to using the high-level API
- **[Framework Guide](docs/framework-guide.md)** - Advanced usage and customization
- **[Examples](examples/)** - Runnable example scripts for various use cases
## Installation
### From PyPI
```bash
# Install base package
pip install hyperion-opt
```
Note: The web UI (backend + React frontend) is currently developed and run from the repository. Installing from PyPI does not include the UI app itself; to run the dashboard, clone the repo and use the commands in the Web UI section above.
### From Source
For development or to use the latest unreleased features:
```bash
# Clone the repository
git clone https://github.com/Subjective/hyperion.git
cd hyperion
# Install in editable mode with development dependencies
mise run install-dev
# Or using pip directly
pip install -e ".[dev]"
```
## Development
This project uses:
- **mise** for environment management
- **uv** for fast package management
- **ruff** for linting and formatting
- **pyright** for type checking
- **pytest** for testing
Common development tasks:
```bash
mise run fix # Fix lint issues and format code
mise run check # Run all checks (lint, type-check, test)
mise run test # Run tests
mise run install-dev # Install with dev dependencies
```
## License
This project is licensed under the [MIT License](LICENSE).
## Status
Hyperion is currently in active development. Expect breaking changes as the framework evolves. Contributions and feedback are welcome.