onedl-mim

- Name: onedl-mim
- Version: 0.4.0rc0
- Summary: MIM Installs OpenMMLab packages
- Homepage: https://github.com/vbti-development/onedl-mim
- Documentation: https://onedl-openmim.readthedocs.io
- Requires Python: >=3.10
- License: Apache-2.0
- Keywords: computer vision, deep learning, pytorch, openmmlab
- Uploaded: 2025-08-07 07:53:27
# MIM: MIM Installs OpenMMLab Packages

MIM provides a unified interface for launching and installing OpenMMLab projects and their extensions, and managing the OpenMMLab model zoo.

## Major Features

- **Package Management**

  You can use MIM to manage OpenMMLab codebases and install or uninstall them conveniently.

- **Model Management**

  You can use MIM to manage the OpenMMLab model zoo, e.g., download checkpoints by name or search for checkpoints that meet specific criteria.

- **Unified Entrypoint for Scripts**

  You can execute any script provided by OpenMMLab codebases with unified commands. Training, testing and inference become easier than ever. In addition, the `gridsearch` command provides vanilla hyper-parameter search; the sketch below shows how these pieces fit together.
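
For example, here is a minimal end-to-end sketch chaining these features together, using only commands that are documented in detail in the Command section below:

```bash
# Install a codebase, fetch a checkpoint by config name, then train on a
# single GPU; each command has its own section under "Command" below.
> mim install onedl-mmpretrain
> mim download onedl-mmpretrain --config resnet18_8xb16_cifar10 --dest .
> mim train onedl-mmpretrain resnet18_8xb16_cifar10.py --work-dir tmp --gpus 1
```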

## Changelog

v0.1.1 was released on 13/06/2021.

## Customization

You can use a `.mimrc` file for customization. Currently, we support customizing the default values of each sub-command. Please refer to [customization.md](docs/en/customization.md) for details.
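
As a purely hypothetical illustration (the `~/.mimrc` location and the `is_yes` key below are assumptions, not the documented format; customization.md is authoritative), an INI-style file might set sub-command defaults like this:

```bash
# Hypothetical sketch only: the file location and the key name below are
# assumptions; consult customization.md for the supported options.
cat > ~/.mimrc << 'EOF'
[install]
# assumed key: answer "yes" to confirmation prompts of `mim install`
is_yes = true
EOF
```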

## Build custom projects with MIM

We provide some examples of how to build custom projects based on OpenMMLab codebases and MIM in [MIM-Example](https://github.com/open-mmlab/mim-example).
Without worrying about copying code and scripts from existing codebases, users can focus on developing new components while MIM helps integrate and run the new project.

## Installation

Please refer to [installation.md](docs/en/installation.md) for installation.
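
For most users, installing the released package from PyPI should be sufficient:

```bash
# Install the released package from PyPI (requires Python >= 3.10).
> pip install onedl-mim
```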

## Command

<details>
<summary>1. install</summary>

- command

  ```bash
  # install the latest version of onedl-mmcv
  > mim install onedl-mmcv  # wheel
  # install a specific version
  > mim install onedl-mmcv==2.3.0

  # install the latest version of onedl-mmpretrain
  > mim install onedl-mmpretrain
  # install the master branch
  > mim install git+https://github.com/vbti-development/onedl-mmpretrain.git
  # install a local repo
  > git clone https://github.com/vbti-development/onedl-mmpretrain.git
  > cd onedl-mmpretrain
  > mim install .

  # install an extension based on OpenMMLab
  > mim install git+https://github.com/xxx/onedl-mmpretrain-project.git
  ```

- api

  ```python
  from mim import install

  # install mmcv
  install('onedl-mmcv')

  # installing onedl-mmpretrain will automatically install mmcv if it is not installed
  install('onedl-mmpretrain')

  # install an extension based on OpenMMLab
  install('git+https://github.com/xxx/onedl-mmpretrain-project.git')
  ```

</details>

<details>
<summary>2. uninstall</summary>

- command

  ```bash
  # uninstall mmcv
  > mim uninstall onedl-mmcv

  # uninstall onedl-mmpretrain
  > mim uninstall onedl-mmpretrain
  ```

- api

  ```python
  from mim import uninstall

  # uninstall mmcv
  uninstall('onedl-mmcv')

  # uninstall onedl-mmpretrain
  uninstall('onedl-mmpretrain')
  ```

</details>

<details>
<summary>3. list</summary>

- command

  ```bash
  > mim list          # list installed OpenMMLab packages
  > mim list --all    # include all OpenMMLab packages, not just installed ones
  ```

- api

  ```python
  from mim import list_package

  list_package()      # installed packages
  list_package(True)  # equivalent to `mim list --all`
  ```

</details>

<details>
<summary>4. search</summary>

- command

  ```bash
  # show model information from the locally installed onedl-mmpretrain
  > mim search onedl-mmpretrain
  # search the model zoo of a remote version
  > mim search onedl-mmpretrain==0.23.0 --remote
  # search by config name
  > mim search onedl-mmpretrain --config resnet18_8xb16_cifar10
  # search by model name
  > mim search onedl-mmpretrain --model resnet
  # search by training dataset
  > mim search onedl-mmpretrain --dataset cifar-10
  # list the fields that can be queried
  > mim search onedl-mmpretrain --valid-field
  # filter with conditions, separated by commas or spaces
  > mim search onedl-mmpretrain --condition 'batch_size>45,epochs>100'
  > mim search onedl-mmpretrain --condition 'batch_size>45 epochs>100'
  # range conditions are also supported
  > mim search onedl-mmpretrain --condition '128<batch_size<=256'
  # sort the results by the given fields
  > mim search onedl-mmpretrain --sort batch_size epochs
  # display only the given fields
  > mim search onedl-mmpretrain --field epochs batch_size weight
  # hide the given fields
  > mim search onedl-mmpretrain --exclude-field weight paper
  ```

- api

  ```python
  from mim import get_model_info

  get_model_info('onedl-mmpretrain')
  get_model_info('onedl-mmpretrain==0.23.0', local=False)
  get_model_info('onedl-mmpretrain', models=['resnet'])
  get_model_info('onedl-mmpretrain', training_datasets=['cifar-10'])
  get_model_info('onedl-mmpretrain', filter_conditions='batch_size>45,epochs>100')
  get_model_info('onedl-mmpretrain', filter_conditions='batch_size>45 epochs>100')
  get_model_info('onedl-mmpretrain', filter_conditions='128<batch_size<=256')
  get_model_info('onedl-mmpretrain', sorted_fields=['batch_size', 'epochs'])
  get_model_info('onedl-mmpretrain', shown_fields=['epochs', 'batch_size', 'weight'])
  ```

</details>

<details>
<summary>5. download</summary>

- command

  ```bash
  # download the checkpoint and config of resnet18_8xb16_cifar10
  > mim download onedl-mmpretrain --config resnet18_8xb16_cifar10
  # download into the current directory
  > mim download onedl-mmpretrain --config resnet18_8xb16_cifar10 --dest .
  ```

- api

  ```python
  from mim import download

  download('onedl-mmpretrain', ['resnet18_8xb16_cifar10'])
  download('onedl-mmpretrain', ['resnet18_8xb16_cifar10'], dest_root='.')
  ```

</details>

<details>
<summary>6. train</summary>

- command

  ```bash
  # Train models on a single server with CPU by setting `gpus` to 0 and
  # 'launcher' to 'none' (if applicable). The training script of the
  # corresponding codebase will fail if it doesn't support CPU training.
  > mim train onedl-mmpretrain resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0
  # Train models on a single server with one GPU
  > mim train onedl-mmpretrain resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1
  # Train models on a single server with 4 GPUs and pytorch distributed
  > mim train onedl-mmpretrain resnet101_b16x8_cifar10.py --work-dir tmp --gpus 4 \
      --launcher pytorch
  # Train models on a slurm HPC with one 8-GPU node
  > mim train onedl-mmpretrain resnet101_b16x8_cifar10.py --launcher slurm --gpus 8 \
      --gpus-per-node 8 --partition partition_name --work-dir tmp
  # Print help messages of sub-command train
  > mim train -h
  # Print help messages of sub-command train and the training script of onedl-mmpretrain
  > mim train onedl-mmpretrain -h
  ```

- api

  ```python
  from mim import train

  train(repo='onedl-mmpretrain', config='resnet18_8xb16_cifar10.py', gpus=0,
        other_args=('--work-dir', 'tmp'))
  train(repo='onedl-mmpretrain', config='resnet18_8xb16_cifar10.py', gpus=1,
        other_args=('--work-dir', 'tmp'))
  train(repo='onedl-mmpretrain', config='resnet18_8xb16_cifar10.py', gpus=4,
        launcher='pytorch', other_args=('--work-dir', 'tmp'))
  train(repo='onedl-mmpretrain', config='resnet18_8xb16_cifar10.py', gpus=8,
        launcher='slurm', gpus_per_node=8, partition='partition_name',
        other_args=('--work-dir', 'tmp'))
  ```

</details>

<details>
<summary>7. test</summary>

- command

  ```bash
  # Test models on a single server with 1 GPU, report accuracy
  > mim test onedl-mmpretrain resnet101_b16x8_cifar10.py --checkpoint \
      tmp/epoch_3.pth --gpus 1 --metrics accuracy
  # Test models on a single server with 1 GPU, save predictions
  > mim test onedl-mmpretrain resnet101_b16x8_cifar10.py --checkpoint \
      tmp/epoch_3.pth --gpus 1 --out tmp.pkl
  # Test models on a single server with 4 GPUs, pytorch distributed,
  # report accuracy
  > mim test onedl-mmpretrain resnet101_b16x8_cifar10.py --checkpoint \
      tmp/epoch_3.pth --gpus 4 --launcher pytorch --metrics accuracy
  # Test models on a slurm HPC with one 8-GPU node, report accuracy
  > mim test onedl-mmpretrain resnet101_b16x8_cifar10.py --checkpoint \
      tmp/epoch_3.pth --gpus 8 --metrics accuracy --partition \
      partition_name --gpus-per-node 8 --launcher slurm
  # Print help messages of sub-command test
  > mim test -h
  # Print help messages of sub-command test and the testing script of onedl-mmpretrain
  > mim test onedl-mmpretrain -h
  ```

- api

  ```python
  from mim import test

  test(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py',
       checkpoint='tmp/epoch_3.pth', gpus=1, other_args=('--metrics', 'accuracy'))
  test(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py',
       checkpoint='tmp/epoch_3.pth', gpus=1, other_args=('--out', 'tmp.pkl'))
  test(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py',
       checkpoint='tmp/epoch_3.pth', gpus=4, launcher='pytorch',
       other_args=('--metrics', 'accuracy'))
  test(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py',
       checkpoint='tmp/epoch_3.pth', gpus=8, partition='partition_name',
       launcher='slurm', gpus_per_node=8, other_args=('--metrics', 'accuracy'))
  ```

</details>

<details>
<summary>8. run</summary>

- command

  ```bash
  # Get the FLOPs of a model
  > mim run onedl-mmpretrain get_flops resnet101_b16x8_cifar10.py
  # Publish a model
  > mim run onedl-mmpretrain publish_model input.pth output.pth
  # Train models on a slurm HPC with one GPU
  > srun -p partition --gres=gpu:1 mim run onedl-mmpretrain train \
      resnet101_b16x8_cifar10.py --work-dir tmp
  # Test models on a slurm HPC with one GPU, report accuracy
  > srun -p partition --gres=gpu:1 mim run onedl-mmpretrain test \
      resnet101_b16x8_cifar10.py tmp/epoch_3.pth --metrics accuracy
  # Print help messages of sub-command run
  > mim run -h
  # Print help messages of sub-command run, list all available scripts in
  # codebase onedl-mmpretrain
  > mim run onedl-mmpretrain -h
  # Print help messages of sub-command run, print the help message of
  # training script in onedl-mmpretrain
  > mim run onedl-mmpretrain train -h
  ```

- api

  ```python
  from mim import run

  run(repo='onedl-mmpretrain', command='get_flops',
      other_args=('resnet101_b16x8_cifar10.py',))
  run(repo='onedl-mmpretrain', command='publish_model',
      other_args=('input.pth', 'output.pth'))
  run(repo='onedl-mmpretrain', command='train',
      other_args=('resnet101_b16x8_cifar10.py', '--work-dir', 'tmp'))
  run(repo='onedl-mmpretrain', command='test',
      other_args=('resnet101_b16x8_cifar10.py', 'tmp/epoch_3.pth', '--metrics', 'accuracy'))
  ```

</details>

<details>
<summary>9. gridsearch</summary>

- command

  ```bash
  # Parameter search on a single server with CPU by setting `gpus` to 0 and
  # 'launcher' to 'none' (if applicable). The training script of the
  # corresponding codebase will fail if it doesn't support CPU training.
  > mim gridsearch onedl-mmpretrain resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0 \
      --search-args '--optimizer.lr 1e-2 1e-3'
  # Parameter search on a single server with one GPU, search learning
  # rate
  > mim gridsearch onedl-mmpretrain resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
      --search-args '--optimizer.lr 1e-2 1e-3'
  # Parameter search on a single server with one GPU, search
  # weight_decay
  > mim gridsearch onedl-mmpretrain resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
      --search-args '--optimizer.weight_decay 1e-3 1e-4'
  # Parameter search on a single server with one GPU, search learning
  # rate and weight_decay
  > mim gridsearch onedl-mmpretrain resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
      --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 1e-4'
  # Parameter search on a slurm HPC with one 8-GPU node, search learning
  # rate and weight_decay
  > mim gridsearch onedl-mmpretrain resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
      --partition partition_name --gpus-per-node 8 --launcher slurm \
      --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 1e-4'
  # Parameter search on a slurm HPC with one 8-GPU node, search learning
  # rate and weight_decay, max parallel jobs is 2
  > mim gridsearch onedl-mmpretrain resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
      --partition partition_name --gpus-per-node 8 --launcher slurm \
      --max-jobs 2 \
      --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 1e-4'
  # Print the help message of sub-command gridsearch
  > mim gridsearch -h
  # Print the help message of sub-command gridsearch and the help message of
  # the training script of codebase onedl-mmpretrain
  > mim gridsearch onedl-mmpretrain -h
  ```

- api

  ```python
  from mim import gridsearch

  gridsearch(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py', gpus=0,
             search_args='--optimizer.lr 1e-2 1e-3',
             other_args=('--work-dir', 'tmp'))
  gridsearch(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py', gpus=1,
             search_args='--optimizer.lr 1e-2 1e-3',
             other_args=('--work-dir', 'tmp'))
  gridsearch(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py', gpus=1,
             search_args='--optimizer.weight_decay 1e-3 1e-4',
             other_args=('--work-dir', 'tmp'))
  gridsearch(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py', gpus=1,
             search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                         ' 1e-3 1e-4',
             other_args=('--work-dir', 'tmp'))
  gridsearch(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py', gpus=8,
             partition='partition_name', gpus_per_node=8, launcher='slurm',
             search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                         ' 1e-3 1e-4',
             other_args=('--work-dir', 'tmp'))
  gridsearch(repo='onedl-mmpretrain', config='resnet101_b16x8_cifar10.py', gpus=8,
             partition='partition_name', gpus_per_node=8, launcher='slurm',
             max_workers=2,
             search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                         ' 1e-3 1e-4',
             other_args=('--work-dir', 'tmp'))
  ```

</details>

## Contributing

We appreciate all contributions to improve MIM. Please refer to [CONTRIBUTING.md](https://github.com/vbti-development/onedl-mmcv/blob/master/CONTRIBUTING.md) for the contributing guidelines.

## License

This project is released under the [Apache 2.0 license](LICENSE).

## Projects in VBTI-development

- [MMEngine](https://github.com/vbti-development/onedl-mmengine): OpenMMLab foundational library for training deep learning models.
- [MMCV](https://github.com/vbti-development/onedl-mmcv): OpenMMLab foundational library for computer vision.
- [MMPreTrain](https://github.com/vbti-development/onedl-mmpretrain): OpenMMLab pre-training toolbox and benchmark.
- [MMDetection](https://github.com/vbti-development/onedl-mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMRotate](https://github.com/vbti-development/onedl-mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMSegmentation](https://github.com/vbti-development/onedl-mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMDeploy](https://github.com/vbti-development/onedl-mmdeploy): OpenMMLab model deployment framework.
- [MIM](https://github.com/vbti-development/onedl-mim): MIM installs OpenMMLab packages.

## Projects in OpenMMLab

- [MMagic](https://github.com/open-mmlab/mmagic): Open**MM**Lab **A**dvanced, **G**enerative and **I**ntelligent **C**reation toolbox.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMEval](https://github.com/open-mmlab/mmeval): A unified evaluation library for multiple machine learning libraries.
- [Playground](https://github.com/open-mmlab/playground): A central hub for gathering and showcasing amazing projects built upon OpenMMLab.

            
