# Ato: A Tiny Orchestrator
**Configuration, experimentation, and hyperparameter optimization for Python.**
No runtime magic. No launcher. No platform.
Just Python modules you compose.
```bash
pip install ato
```
---
## Design Philosophy
Ato was built on three constraints:
1. **Visibility** — When configs merge from multiple sources, you should see **why** a value was set.
2. **Composability** — Each module (ADict, Scope, SQLTracker, HyperOpt) works independently. Use one, use all, or mix with other tools.
3. **Structural neutrality** — Ato is a layer, not a platform. It has no opinion on your stack.
This isn't minimalism for its own sake.
It's **structural restraint** — interfering only where necessary, staying out of the way everywhere else.
**What Ato provides:**
- **Config composition** with explicit priority, dependency chaining, and merge order debugging
- **Namespace isolation** for multi-team projects (MultiScope)
- **Experiment tracking** in local SQLite with zero setup
- **Hyperparameter search** via Hyperband (or compose with Optuna/Ray Tune)
**What Ato doesn't provide:**
- Web dashboards (use MLflow/W&B)
- Model registry (use MLflow)
- Dataset versioning (use DVC)
- Plugin marketplace
Ato is designed to work **between** tools, not replace them.
---
## Quick Start
### 30-Second Example
```python
from ato.scope import Scope
scope = Scope()
@scope.observe(default=True)
def config(config):
    config.lr = 0.001
    config.batch_size = 32
    config.model = 'resnet50'

@scope
def train(config):
    print(f"Training {config.model} with lr={config.lr}")
    # Your training code here

if __name__ == '__main__':
    train()  # python train.py
    # Override from CLI: python train.py lr=0.01 model=%resnet101%
```
**Key features:**
- `@scope.observe()` defines config sources
- `@scope` injects the merged config
- CLI overrides work automatically
- Priority-based merging with dependency chaining (defaults → named configs → CLI → lazy evaluation)
---
## Table of Contents
- [ADict: Enhanced Dictionary](#adict-enhanced-dictionary)
- [Scope: Configuration Management](#scope-configuration-management)
  - [Config Chaining](#config-chaining)
  - [MultiScope: Namespace Isolation](#multiscope-namespace-isolation)
  - [Config Documentation & Debugging](#configuration-documentation--debugging)
- [SQL Tracker: Experiment Tracking](#sql-tracker-experiment-tracking)
- [Hyperparameter Optimization](#hyperparameter-optimization)
- [Best Practices](#best-practices)
- [Contributing](#contributing)
- [Composability](#composability)
---
## ADict: Enhanced Dictionary
`ADict` is an enhanced dictionary for managing experiment configurations.
### Core Features
| Feature | Description | Why It Matters |
|---------|-------------|----------------|
| **Structural Hashing** | Hash based on keys + types, not values | Track when experiment **structure** changes (not just hyperparameters) |
| **Nested Access** | Dot notation for nested configs | `config.model.lr` instead of `config['model']['lr']` |
| **Format Agnostic** | Load/save JSON, YAML, TOML, XYZ | Work with any config format |
| **Safe Updates** | `update_if_absent()` method | Merge configs without accidental overwrites |
| **Auto-nested** | `ADict.auto()` for lazy creation | `config.a.b.c = 1` just works - no KeyError |
### Examples
#### Structural Hashing
```python
from ato.adict import ADict

# Same structure, different values
config1 = ADict(lr=0.1, epochs=100, model='resnet50')
config2 = ADict(lr=0.01, epochs=200, model='resnet101')
print(config1.get_structural_hash() == config2.get_structural_hash())  # True

# Different structure (epochs is str!)
config3 = ADict(lr=0.1, epochs='100', model='resnet50')
print(config1.get_structural_hash() == config3.get_structural_hash())  # False
```
#### Auto-nested Configs
```python
# ❌ Traditional way
config = ADict()
config.model = ADict()
config.model.backbone = ADict()
config.model.backbone.layers = [64, 128, 256]

# ✅ With ADict.auto()
config = ADict.auto()
config.model.backbone.layers = [64, 128, 256] # Just works!
config.data.augmentation.brightness = 0.2
```
#### Format Agnostic
```python
# Load/save any format
config = ADict.from_file('config.json')
config.dump('config.yaml')

# Safe updates
config.update_if_absent(lr=0.01, scheduler='cosine') # Only adds scheduler
```
---
## Scope: Configuration Management
Scope manages configuration through **priority-based merging** and **CLI integration**.
### Key Concept: Priority Chain
```
Default Configs (priority=0)
      ↓
Named Configs (priority=0+)
      ↓
CLI Arguments (highest priority)
      ↓
Lazy Configs (computed after CLI)
```
### Basic Usage
#### Simple Configuration
```python
from ato.scope import Scope
scope = Scope()
@scope.observe()
def my_config(config):
    config.dataset = 'cifar10'
    config.lr = 0.001
    config.batch_size = 32

@scope
def train(config):
    print(f"Training on {config.dataset}")
    # Your code here

if __name__ == '__main__':
    train()
```
#### Priority-based Merging
```python
@scope.observe(default=True) # Always applied
def defaults(config):
    config.lr = 0.001
    config.epochs = 100

@scope.observe(priority=1)  # Applied after defaults
def high_lr(config):
    config.lr = 0.01

@scope.observe(priority=2)  # Applied last
def long_training(config):
    config.epochs = 300
```
```bash
python train.py # lr=0.001, epochs=100
python train.py high_lr # lr=0.01, epochs=100
python train.py high_lr long_training # lr=0.01, epochs=300
```
#### CLI Configuration
Override any parameter from command line:
```bash
# Simple values
python train.py lr=0.01 batch_size=64

# Nested configs
python train.py model.backbone=%resnet101% model.depth=101

# Lists and complex types
python train.py layers=[64,128,256,512] dropout=0.5

# Combine with named configs
python train.py my_config lr=0.001 batch_size=128
```
**Note**: Wrap strings with `%` (e.g., `%resnet101%`) instead of quotes.
### Config Chaining
Sometimes configs have dependencies on other configs. Use `chain_with` to automatically apply prerequisite configs:
```python
@scope.observe()
def base_setup(config):
    config.project_name = 'my_project'
    config.data_dir = '/data'

@scope.observe()
def gpu_setup(config):
    config.device = 'cuda'
    config.num_gpus = 4

@scope.observe(chain_with='base_setup')  # Automatically applies base_setup first
def advanced_training(config):
    config.distributed = True
    config.mixed_precision = True

@scope.observe(chain_with=['base_setup', 'gpu_setup'])  # Multiple dependencies
def multi_node_training(config):
    config.nodes = 4
    config.world_size = 16
```
```bash
# Calling advanced_training automatically applies base_setup first
python train.py advanced_training
# Results in: base_setup → advanced_training

# Calling multi_node_training applies all dependencies
python train.py multi_node_training
# Results in: base_setup → gpu_setup → multi_node_training
```
**Why this matters:**
- **Explicit dependencies**: No more remembering to call prerequisite configs
- **Composable configs**: Build complex configs from simpler building blocks
- **Prevents errors**: Can't use a config without its dependencies
### Lazy Evaluation
Sometimes you need configs that depend on other values set via CLI:
```python
@scope.observe()
def base_config(config):
    config.model = 'resnet50'
    config.dataset = 'imagenet'

@scope.observe(lazy=True)  # Evaluated AFTER CLI args
def computed_config(config):
    # Adjust based on dataset
    if config.dataset == 'imagenet':
        config.num_classes = 1000
        config.image_size = 224
    elif config.dataset == 'cifar10':
        config.num_classes = 10
        config.image_size = 32
```
```bash
python train.py dataset=%cifar10% computed_config
# Results in: num_classes=10, image_size=32
```
**Python 3.11+ Context Manager**:
```python
@scope.observe()
def my_config(config):
    config.model = 'resnet50'
    config.num_layers = 50

    with Scope.lazy():  # Evaluated after CLI
        if config.model == 'resnet101':
            config.num_layers = 101
```
### MultiScope: Namespace Isolation
Manage completely separate configuration namespaces with independent priority systems.
**Use case**: Different teams own different scopes without key collisions.
```python
from ato.scope import Scope, MultiScope
model_scope = Scope(name='model')
data_scope = Scope(name='data')
scope = MultiScope(model_scope, data_scope)

@model_scope.observe(default=True)
def model_config(model):
    model.backbone = 'resnet50'
    model.lr = 0.1  # Model-specific learning rate

@data_scope.observe(default=True)
def data_config(data):
    data.dataset = 'cifar10'
    data.lr = 0.001  # Data augmentation learning rate (no conflict!)

@scope
def train(model, data):  # Named parameters match scope names
    # Both have 'lr' but in separate namespaces!
    print(f"Model LR: {model.lr}, Data LR: {data.lr}")
```
**Key advantage**: `model.lr` and `data.lr` are completely independent. No need for naming conventions like `model_lr` vs `data_lr`.
**CLI with MultiScope:**
```bash
# Override model scope only
python train.py model.backbone=%resnet101%

# Override data scope only
python train.py data.dataset=%imagenet%

# Override both
python train.py model.backbone=%resnet101% data.dataset=%imagenet%
```
### Configuration Documentation & Debugging
**The `manual` command** visualizes the exact order of configuration application.
```python
@scope.observe(default=True)
def config(config):
    config.lr = 0.001
    config.batch_size = 32
    config.model = 'resnet50'

@scope.manual
def config_docs(config):
    config.lr = 'Learning rate for optimizer'
    config.batch_size = 'Number of samples per batch'
    config.model = 'Model architecture (resnet50, resnet101, etc.)'
```
```bash
python train.py manual
```
**Output:**
```
--------------------------------------------------
[Scope "config"]
(The Applying Order of Views)
config → (CLI Inputs)

(User Manuals)
lr: Learning rate for optimizer
batch_size: Number of samples per batch
model: Model architecture (resnet50, resnet101, etc.)
--------------------------------------------------
```
**Why this matters:**
When debugging "why is this config value not what I expect?", you can see **exactly** which function set it and in what order.
**Complex example:**
```python
@scope.observe(default=True)
def defaults(config):
    config.lr = 0.001

@scope.observe(priority=1)
def experiment_config(config):
    config.lr = 0.01

@scope.observe(priority=2)
def another_config(config):
    config.lr = 0.1

@scope.observe(lazy=True)
def adaptive_lr(config):
    if config.batch_size > 64:
        config.lr = config.lr * 2
```
When you run `python train.py manual`, you see:
```
(The Applying Order of Views)
defaults → experiment_config → another_config → (CLI Inputs) → adaptive_lr
```
Now it's **crystal clear** why `lr=0.1` (from `another_config`) and not `0.01`!
### Config Import/Export
```python
@scope.observe()
def load_external(config):
    # Load from any format
    config.load('experiments/baseline.json')
    config.load('models/resnet.yaml')

    # Export to any format
    config.dump('output/final_config.toml')
```
**OpenMMLab compatibility:**
```python
# Import OpenMMLab configs - handles _base_ inheritance automatically
config.load_mm_config('mmdet_configs/faster_rcnn.py')
```
**Hierarchical composition:**
```python
from ato.adict import ADict
# Load configs from directory structure
config = ADict.compose_hierarchy(
    root='configs',
    config_filename='config',
    select={
        'model': 'resnet50',
        'data': 'imagenet'
    },
    overrides={
        'model.lr': 0.01,
        'data.batch_size': 64
    },
    required=['model.backbone', 'data.dataset'],  # Validation
    on_missing='warn'  # or 'error'
)
```
### Argparse Integration
```python
from ato.scope import Scope
import argparse
scope = Scope(use_external_parser=True)
parser = argparse.ArgumentParser()
parser.add_argument('--gpu', type=int, default=0)
parser.add_argument('--seed', type=int, default=42)

@scope.observe(default=True)
def config(config):
    config.lr = 0.001
    config.batch_size = 32

@scope
def train(config):
    print(f"GPU: {config.gpu}, LR: {config.lr}")

if __name__ == '__main__':
    parser.parse_args()  # Merges argparse with scope
    train()
```
---
## SQL Tracker: Experiment Tracking
Lightweight experiment tracking using SQLite.
### Why SQL Tracker?
- **Zero Setup**: Just a SQLite file, no servers
- **Full History**: Track all runs, metrics, and artifacts
- **Smart Search**: Find similar experiments by config structure
- **Code Versioning**: Track code changes via fingerprints
- **Offline-first**: No network required, sync to cloud tracking later if needed
### Database Schema
```
Project (my_ml_project)
├── Experiment (run_1)
│   ├── config: {...}
│   ├── structural_hash: "abc123..."
│   ├── Metrics: [loss, accuracy, ...]
│   ├── Artifacts: [model.pt, plots/*, ...]
│   └── Fingerprints: [model_forward, train_step, ...]
├── Experiment (run_2)
└── ...
```
### Usage
#### Logging Experiments
```python
from ato.db_routers.sql.manager import SQLLogger
from ato.adict import ADict
# Setup config
config = ADict(
    experiment=ADict(
        project_name='image_classification',
        sql=ADict(db_path='sqlite:///experiments.db')
    ),
    # Your hyperparameters
    lr=0.001,
    batch_size=32,
    model='resnet50'
)

# Create logger
logger = SQLLogger(config)

# Start experiment run
run_id = logger.run(tags=['baseline', 'resnet50', 'cifar10'])

# Training loop
for epoch in range(100):
    # Your training code
    train_loss = train_one_epoch()
    val_acc = validate()

    # Log metrics
    logger.log_metric('train_loss', train_loss, step=epoch)
    logger.log_metric('val_accuracy', val_acc, step=epoch)

# Log artifacts
logger.log_artifact(run_id, 'checkpoints/model_best.pt',
                    data_type='model',
                    metadata={'epoch': best_epoch})

# Finish run
logger.finish(status='completed')
```
#### Querying Experiments
```python
from ato.db_routers.sql.manager import SQLFinder
finder = SQLFinder(config)

# Get all runs in project
runs = finder.get_runs_in_project('image_classification')
for run in runs:
    print(f"Run {run.id}: {run.config.model} - {run.status}")

# Find best performing run
best_run = finder.find_best_run(
    project_name='image_classification',
    metric_key='val_accuracy',
    mode='max'  # or 'min' for loss
)
print(f"Best config: {best_run.config}")

# Find similar experiments (same config structure)
similar = finder.find_similar_runs(run_id=123)
print(f"Found {len(similar)} runs with similar config structure")

# Trace statistics (code fingerprints)
stats = finder.get_trace_statistics('image_classification', trace_id='model_forward')
print(f"Model forward pass has {stats['static_trace_versions']} versions")
```
### Features
| Feature | Description |
|---------|-------------|
| **Structural Hash** | Auto-track config structure changes |
| **Metric Logging** | Time-series metrics with step tracking |
| **Artifact Management** | Track model checkpoints, plots, data files |
| **Fingerprint Tracking** | Version control for code (static & runtime) |
| **Smart Search** | Find similar configs, best runs, statistics |
---
## Hyperparameter Optimization
Built-in **Hyperband** algorithm for efficient hyperparameter search with early stopping.
### How Hyperband Works
Hyperband uses successive halving:
1. Start with many configs, train briefly
2. Keep top performers, discard poor ones
3. Train survivors longer
4. Repeat until one winner remains
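To get a feel for the schedule, here is a rough arithmetic sketch of how a pool of configurations shrinks under a given `halving_rate`. This only illustrates the successive-halving idea; it is not Ato's internal bookkeeping:

```python
# Illustrative only: how a config pool shrinks under successive halving.
# halving_rate=0.3 keeps roughly the top 30% each round; the search stops
# once the pool is at or below num_min_samples.
num_configs = 100
halving_rate = 0.3
num_min_samples = 3

generation = 0
while num_configs > num_min_samples:
    print(f"generation {generation}: {num_configs} configs")
    num_configs = max(int(num_configs * halving_rate), 1)
    generation += 1
print(f"generation {generation}: {num_configs} configs remain")
```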
### Basic Usage
```python
from ato.adict import ADict
from ato.hyperopt.hyperband import HyperBand
from ato.scope import Scope
scope = Scope()
# Define search space
search_spaces = ADict(
    lr=ADict(
        param_type='FLOAT',
        param_range=(1e-5, 1e-1),
        num_samples=20,
        space_type='LOG'  # Logarithmic spacing
    ),
    batch_size=ADict(
        param_type='INTEGER',
        param_range=(16, 128),
        num_samples=5,
        space_type='LOG'
    ),
    model=ADict(
        param_type='CATEGORY',
        categories=['resnet50', 'resnet101', 'efficientnet_b0']
    )
)

# Create Hyperband optimizer
hyperband = HyperBand(
    scope,
    search_spaces,
    halving_rate=0.3,    # Keep top 30% each round
    num_min_samples=3,   # Stop when <= 3 configs remain
    mode='max'           # Maximize metric (use 'min' for loss)
)

@hyperband.main
def train(config):
    # Your training code
    model = create_model(config.model)
    optimizer = Adam(lr=config.lr)

    # Use __num_halved__ for early stopping
    num_epochs = compute_epochs(config.__num_halved__)

    # Train and return metric
    val_acc = train_and_evaluate(model, optimizer, num_epochs)
    return val_acc

if __name__ == '__main__':
    # Run hyperparameter search
    best_result = train()
    print(f"Best config: {best_result.config}")
    print(f"Best metric: {best_result.metric}")
```
### Automatic Step Calculation
```python
hyperband = HyperBand(scope, search_spaces, halving_rate=0.3, num_min_samples=4)
max_steps = 100000
steps_per_generation = hyperband.compute_optimized_initial_training_steps(max_steps)
# Example output: [27, 88, 292, 972, 3240, 10800, 36000, 120000]
# Use in training
@hyperband.main
def train(config):
    generation = config.__num_halved__
    num_steps = steps_per_generation[generation]

    metric = train_for_n_steps(num_steps)
    return metric
```
### Parameter Types
| Type | Description | Example |
|------|-------------|---------|
| `FLOAT` | Continuous values | Learning rate, dropout |
| `INTEGER` | Discrete integers | Batch size, num layers |
| `CATEGORY` | Categorical choices | Model type, optimizer |
Space types:
- `LOG`: Logarithmic spacing (good for learning rates)
- `LINEAR`: Linear spacing (default)
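To see the difference, here is a small numpy sketch of what the two spacings produce over a range. Ato builds its grids internally; this only illustrates why `LOG` suits learning rates and `LINEAR` suits quantities like dropout:

```python
import numpy as np

# LOG: evenly spaced exponents, so each step multiplies by a constant factor.
log_samples = np.logspace(np.log10(1e-5), np.log10(1e-1), num=5)
print(log_samples)     # [1.e-05 1.e-04 1.e-03 1.e-02 1.e-01]

# LINEAR: evenly spaced values, so each step adds a constant amount.
linear_samples = np.linspace(0.0, 0.5, num=5)
print(linear_samples)  # [0.    0.125 0.25  0.375 0.5  ]
```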
### Distributed Search
```python
from ato.hyperopt.hyperband import DistributedHyperBand
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Initialize distributed training
dist.init_process_group(backend='nccl')
rank = dist.get_rank()
world_size = dist.get_world_size()
# Create distributed hyperband
hyperband = DistributedHyperBand(
    scope,
    search_spaces,
    halving_rate=0.3,
    num_min_samples=3,
    mode='max',
    rank=rank,
    world_size=world_size,
    backend='pytorch'
)

@hyperband.main
def train(config):
    # Your distributed training code
    model = create_model(config)
    model = DDP(model, device_ids=[rank])
    metric = train_and_evaluate(model)
    return metric

if __name__ == '__main__':
    result = train()
    if rank == 0:
        print(f"Best config: {result.config}")
```
### Extensible Design
Ato's hyperopt module is built for extensibility:
| Component | Purpose |
|-----------|---------|
| `GridSpaceMixIn` | Parameter sampling logic (reusable) |
| `HyperOpt` | Base optimization class |
| `DistributedMixIn` | Distributed training support (optional) |
**Example: Implement custom search algorithm**
```python
from ato.hyperopt.base import GridSpaceMixIn, HyperOpt
class RandomSearch(GridSpaceMixIn, HyperOpt):
    def main(self, func):
        # Reuse GridSpaceMixIn.prepare_distributions()
        configs = self.prepare_distributions(self.config, self.search_spaces)

        # Implement random sampling
        import random
        random.shuffle(configs)

        results = []
        for config in configs[:10]:  # Sample 10 random configs
            metric = func(config)
            results.append((config, metric))

        return max(results, key=lambda x: x[1])
```
---
## Best Practices
### 1. Project Structure
```
my_project/
├── configs/
│   ├── default.py       # Default config with @scope.observe(default=True)
│   ├── models.py        # Model-specific configs
│   └── datasets.py      # Dataset configs
├── train.py             # Main training script
├── experiments.db       # SQLite experiment tracking
└── experiments/
    ├── run_001/
    │   ├── checkpoints/
    │   └── logs/
    └── run_002/
```
### 2. Config Organization
```python
# configs/default.py
from ato.scope import Scope
from ato.adict import ADict
scope = Scope()
@scope.observe(default=True)
def defaults(config):
    # Data
    config.data = ADict(
        dataset='cifar10',
        batch_size=32,
        num_workers=4
    )

    # Model
    config.model = ADict(
        backbone='resnet50',
        pretrained=True
    )

    # Training
    config.train = ADict(
        lr=0.001,
        epochs=100,
        optimizer='adam'
    )

    # Experiment tracking
    config.experiment = ADict(
        project_name='my_project',
        sql=ADict(db_path='sqlite:///experiments.db')
    )
```
### 3. Combined Workflow
```python
from ato.scope import Scope
from ato.db_routers.sql.manager import SQLLogger
from configs.default import scope
@scope
def train(config):
    # Setup experiment tracking
    logger = SQLLogger(config)
    run_id = logger.run(tags=[config.model.backbone, config.data.dataset])

    try:
        # Training loop
        for epoch in range(config.train.epochs):
            loss = train_epoch()
            acc = validate()

            logger.log_metric('loss', loss, epoch)
            logger.log_metric('accuracy', acc, epoch)

        logger.finish(status='completed')

    except Exception as e:
        logger.finish(status='failed')
        raise e

if __name__ == '__main__':
    train()
```
### 4. Reproducibility Checklist
- ✅ Use structural hashing to track config changes
- ✅ Log all hyperparameters to SQLLogger
- ✅ Tag experiments with meaningful labels
- ✅ Track artifacts (checkpoints, plots)
- ✅ Use lazy configs for derived parameters
- ✅ Document configs with `@scope.manual`
---
## Requirements
- Python >= 3.7
- SQLAlchemy (for SQL Tracker)
- PyYAML, toml (for config serialization)
See `pyproject.toml` for full dependencies.
---
## Contributing
Contributions are welcome! Please feel free to submit issues or pull requests.
### Development Setup
```bash
git clone https://github.com/yourusername/ato.git
cd ato
pip install -e .
```
### Quality Assurance
Ato's design philosophy — **structural neutrality** and **debuggable composition** — extends to our testing practices.
**Release Policy:**
- **All 100+ unit tests must pass before any release**
- No exceptions, no workarounds
- Tests cover every module: ADict, Scope, MultiScope, SQLTracker, HyperBand
**Why this matters:**
When you build on Ato, you're trusting it to stay out of your way. That means zero regressions, predictable behavior, and reliable APIs. Comprehensive test coverage ensures that each component works independently and composes correctly.
Run tests locally:
```bash
python -m pytest unit_tests/
```
---
## Composability
Ato is designed to **compose** with existing tools, not replace them.
### Works Where Other Systems Require Ecosystems
**Config composition:**
- Import OpenMMLab configs: `config.load_mm_config('mmdet_configs/faster_rcnn.py')`
- Load Hydra-style hierarchies: `ADict.compose_hierarchy(root='configs', select={'model': 'resnet50'})`
- Mix with argparse: `Scope(use_external_parser=True)`
**Experiment tracking:**
- Track locally in SQLite (zero setup)
- Sync to MLflow/W&B when you need dashboards
- Or use both: local SQLite + cloud tracking
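For example, a one-way sync from the local SQLite store into MLflow can be a short script. A sketch, assuming you already log runs with `SQLLogger` and have `mlflow` installed; only the config and status fields shown in this README are pushed, and pulling per-run metrics is left out:

```python
import mlflow
from ato.db_routers.sql.manager import SQLFinder

finder = SQLFinder(config)  # the same config object used with SQLLogger
for run in finder.get_runs_in_project('image_classification'):
    with mlflow.start_run(run_name=f'ato-run-{run.id}'):
        # Nested configs may need flattening before logging as params.
        mlflow.log_params(dict(run.config))
        mlflow.set_tag('ato_status', run.status)
```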
**Hyperparameter optimization:**
- Built-in Hyperband
- Or compose with Optuna/Ray Tune — Ato's configs work with any optimizer
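Because Ato configs are plain `ADict` objects, any external optimizer can fill them in. A sketch with Optuna, where `train_and_evaluate` stands in for your own training routine:

```python
import optuna
from ato.adict import ADict

def objective(trial):
    # Build an ordinary Ato config from Optuna's suggestions.
    config = ADict(
        lr=trial.suggest_float('lr', 1e-5, 1e-1, log=True),
        batch_size=trial.suggest_categorical('batch_size', [16, 32, 64, 128]),
        model=trial.suggest_categorical('model', ['resnet50', 'resnet101']),
    )
    return train_and_evaluate(config)  # your training routine

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=20)
print(study.best_params)
```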
### Four Capabilities Other Tools Don't Provide
1. **Config chaining (`chain_with`)** — Explicit dependency management between configs
2. **MultiScope** — True namespace isolation with independent priority systems
3. **`manual` command** — Visualize exact config merge order for debugging
4. **Structural hashing** — Track when experiment **architecture** changes, not just values
### When to Use Ato
**Use Ato when:**
- You want zero boilerplate config management
- You need to debug why a config value isn't what you expect
- You're working on multi-team projects with namespace conflicts
- You want local-first experiment tracking
- You're migrating between config/tracking systems
**Ato works alongside:**
- Hydra (config composition)
- MLflow/W&B (cloud tracking)
- Optuna/Ray Tune (advanced hyperparameter search)
- PyTorch/TensorFlow/JAX (any ML framework)
---
## Roadmap
Ato's design constraint is **structural neutrality** — adding capabilities without creating dependencies.
### Planned: Local Dashboard (Optional Module)
A lightweight HTML dashboard for teams that want visual exploration without committing to cloud platforms:
**What it adds:**
- Metric comparison & trends (read-only view of SQLite data)
- Run history & artifact browsing
- Config diff visualization
- Interactive hyperparameter analysis
**Design constraints:**
- No hard dependency — Ato core works 100% without the dashboard
- Separate process — doesn't block or modify runs
- Zero lock-in — delete it anytime, training code doesn't change
- Composable — use alongside MLflow/W&B
**Guiding principle:** Ato remains a set of **independent, composable tools** — not a platform you commit to.
---
## License
MIT License