### Think training a ResNet-18 on CIFAR-10 is a breeze? 🌬️💨
It might seem simple at first, until you find yourself drowning in boilerplate code:
- Setting up data loaders
- Defining model architectures
- Configuring loss functions
- Choosing and tuning optimizers
- ...and so much more! 🤯
What if you could skip all that hassle?
With this approach, ***you won't have to write a single line of code***. Just define a YAML configuration file:
```yaml
# config.yaml
batch_size: 256
num_workers: 8
epochs: 90
init_lr: 1.e-1
optimizer: SGD
optimizer_cfg:
  momentum: 0.9
  weight_decay: 1.e-4
lr_scheduler: StepLR
lr_scheduler_cfg:
  step_size: 30
  gamma: 0.1
criterion: CrossEntropyLoss
```
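Under the hood, a config-driven trainer typically resolves strings like `SGD` and `StepLR` to constructors by name and passes the `*_cfg` mappings through as keyword arguments. A minimal sketch of that dispatch pattern (the `build_optimizer` helper, the registry, and the stand-in `SGD` class below are illustrative, not wah's actual API):

```python
# Illustrative sketch: resolving config strings to constructors by name.
# `build_optimizer` and the registry are hypothetical, not part of wah's API.

def build_optimizer(cfg, params, registry):
    """Look up the optimizer class named in `cfg` and instantiate it,
    forwarding `optimizer_cfg` entries as keyword arguments."""
    opt_cls = registry[cfg["optimizer"]]
    return opt_cls(params, lr=cfg["init_lr"], **cfg.get("optimizer_cfg", {}))

class SGD:
    """Stand-in for torch.optim.SGD so the sketch runs without PyTorch."""
    def __init__(self, params, lr, momentum=0.0, weight_decay=0.0):
        self.params, self.lr = params, lr
        self.momentum, self.weight_decay = momentum, weight_decay

cfg = {
    "init_lr": 0.1,
    "optimizer": "SGD",
    "optimizer_cfg": {"momentum": 0.9, "weight_decay": 1e-4},
}
opt = build_optimizer(cfg, params=[], registry={"SGD": SGD})
print(opt.lr, opt.momentum)  # 0.1 0.9
```

The same lookup-by-name pattern extends naturally to `lr_scheduler` and `criterion`, which is why a flat YAML file is enough to describe the whole training setup.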
and simply run:
```bash
wah train --dataset cifar10 --dataset-root ./dataset --model resnet18 \
    --cfg-path ./config.yaml --log-root ./logs --device auto
```
### What Happens Next?
This single command will:
✅ Automatically download CIFAR-10 to `./dataset`\
✅ Train a ResNet-18 model on it\
✅ Save checkpoints and TensorBoard logs to `./logs`\
✅ Detect available hardware (CPU/GPU) with multi-GPU support (DDP)
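The `--device auto` behavior can be thought of as a simple fallback chain: use the GPU when PyTorch can see one, otherwise fall back to the CPU. A hedged sketch of that idea (`resolve_device` is hypothetical; wah's real hardware selection, via Lightning, may differ):

```python
# Hypothetical sketch of how a "--device auto" flag might resolve.
# Not wah's actual implementation.
def resolve_device(flag: str) -> str:
    if flag != "auto":
        return flag  # explicit choices like "cpu" or "cuda:0" pass through
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"  # prefer the GPU when PyTorch sees one
    except ImportError:
        pass
    return "cpu"  # safe fallback when no GPU (or no torch) is available

print(resolve_device("auto"))
```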
No tedious setup, no redundant scripting, just efficient, streamlined model training. 🚀
### And that's just the beginning!
You've found more than just a training tool: a powerful, flexible framework designed to accelerate deep learning research.
### Proudly presents:
# WAH

## Install
```commandline
pip install wah
```
### Requirements
You may need to install [**PyTorch**](https://pytorch.org/get-started/locally/)
manually for GPU support.
```text
lightning
matplotlib
numpy
pandas
pillow
requests
timm
torch
torchmetrics
torchvision
tqdm
PyYAML
```
## Structure
### `wah`
- `classification`
    - `datasets`
        - `CIFAR10`
        - `CIFAR100`
        - `compute_mean_and_std`
        - `ImageNet`
        - `load_dataloader`
        - `portion_dataset`
        - `STL10`
    - `models`
        - `FeatureExtractor`
        - `load`
        - `load_state_dict`
        - `replace`
            - `gelu_with_relu`
            - `relu_with_gelu`
            - `bn_with_ln`
            - `ln_with_bn`
        - `summary`
    - `test`
        - `brier_score`
        - `ece`
    - `Trainer`
- `dicts`
- `fun`
    - `RecursionWrapper`
- `lists`
- `mods`
- `path`
- `random`
- `tensor`
- `time`
- `utils`
    - `ArgumentParser`
    - `download`
    - `zips`
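Among the `test` utilities listed above are calibration metrics (`ece`, `brier_score`). For background, expected calibration error bins predictions by confidence and averages the gap between each bin's mean confidence and its accuracy. A self-contained sketch of the standard formulation, independent of wah's actual implementation:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the
    |accuracy - mean confidence| gap per bin, weighted by bin size."""
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        in_bin = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not in_bin:
            continue  # empty bins contribute nothing
        acc = sum(correct[i] for i in in_bin) / len(in_bin)
        conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        ece += (len(in_bin) / total) * abs(acc - conf)
    return ece

# Two predictions at 95% confidence, one right and one wrong:
# both land in the (0.9, 1.0] bin, so ECE = |0.5 - 0.95|,
# approximately 0.45 (up to floating-point rounding).
print(expected_calibration_error([0.95, 0.95], [True, False]))
```

A perfectly calibrated model, whose confidence matches its accuracy in every bin, scores 0; larger values mean the model is over- or under-confident.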