dmlcloud

Name: dmlcloud
Version: 0.4
Summary: Distributed torch training using horovod and slurm
Author: Sebastian Hoffmann
Requires Python: >=3.10
License: BSD 3-Clause License (Copyright (c) 2023, Sebastian Hoffmann)
Keywords: pytorch, torch.distributed, slurm, distributed training, deep learning
Repository: https://github.com/sehoffmann/dmlcloud
Requirements: torch, numpy, xarray, progress_table (>=2.2.0), omegaconf, torchmetrics, nvidia-ml-py
Upload time: 2025-02-17 10:34:28

![Dmlcloud Logo](./misc/logo/dmlcloud_color.png)
---------------
[![PyPI Status](https://img.shields.io/pypi/v/dmlcloud)](https://pypi.org/project/dmlcloud/)
[![Documentation Status](https://readthedocs.org/projects/dmlcloud/badge/?version=latest)](https://dmlcloud.readthedocs.io/en/latest/?badge=latest)
[![Test Status](https://img.shields.io/github/actions/workflow/status/sehoffmann/dmlcloud/run_tests.yml?label=tests&logo=github)](https://github.com/sehoffmann/dmlcloud/actions/workflows/run_tests.yml)

A torch library for easy distributed deep learning on HPC clusters. Supports both slurm and MPI. No unnecessary abstractions or overhead. A simple, yet powerful, API.

## Highlights
- Simple, yet powerful, API
- Easy initialization of `torch.distributed`
- Distributed metrics
- Extensive logging and diagnostics
- Wandb support
- Tensorboard support
- A wealth of useful utility functions

## Installation
dmlcloud can be installed directly from PyPI:
```bash
pip install dmlcloud
```

Alternatively, you can install the latest development version directly from GitHub:
```bash
pip install git+https://github.com/sehoffmann/dmlcloud.git
```
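
The examples referenced below live on the repository's `develop` branch; if you want to pin a specific branch or tag instead of the default, pip's VCS URL syntax accepts an `@` suffix. A sketch, using `develop` as the example ref:
```bash
# Install from a specific branch or tag (here: the develop branch)
pip install "git+https://github.com/sehoffmann/dmlcloud.git@develop"
```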

## Documentation

You can find the official documentation at [Read the Docs](https://dmlcloud.readthedocs.io/en/latest/).

## Minimal Example
See [examples/mnist.py](https://github.com/sehoffmann/dmlcloud/blob/develop/examples/mnist.py) for a minimal example of how to train MNIST with multiple GPUs. To run it with 4 GPUs, use:
```bash
dmlrun -n 4 python examples/mnist.py
```
`dmlrun` is a thin wrapper around `torchrun` that makes it easier to prototype on a single node.
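
Because `dmlrun` wraps `torchrun`, the same single-node run can also be launched with `torchrun` directly. The command below is only a rough equivalent using standard `torchrun` flags; the exact arguments `dmlrun` forwards may differ:
```bash
# Roughly equivalent single-node launch using torchrun directly
torchrun --standalone --nproc_per_node=4 examples/mnist.py
```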

## Slurm Support
*dmlcloud* automatically looks for slurm environment variables to initialize `torch.distributed`. On a slurm cluster, you can therefore simply use `srun` from within an sbatch script to train on multiple nodes:

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=4
#SBATCH --cpus-per-task=8
#SBATCH --gpu-bind=none

srun python examples/mnist.py
```
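
Under the hood, this works because Slurm describes the job layout through environment variables. The snippet below is not dmlcloud's actual implementation, just a minimal sketch of the standard pattern for wiring those variables into `torch.distributed.init_process_group`, to illustrate what dmlcloud automates for you:

```python
# Minimal sketch of initializing torch.distributed from Slurm environment
# variables (illustrative only; dmlcloud handles this for you).
import os
import subprocess

import torch
import torch.distributed as dist

rank = int(os.environ["SLURM_PROCID"])         # global rank of this task
world_size = int(os.environ["SLURM_NTASKS"])   # total number of tasks
local_rank = int(os.environ["SLURM_LOCALID"])  # rank of this task on its node

# All ranks must agree on a rendezvous address; a common choice is the first
# host in the job's node list.
hostnames = subprocess.check_output(
    ["scontrol", "show", "hostnames", os.environ["SLURM_JOB_NODELIST"]], text=True
).split()
os.environ.setdefault("MASTER_ADDR", hostnames[0])
os.environ.setdefault("MASTER_PORT", "29500")

torch.cuda.set_device(local_rank)
dist.init_process_group("nccl", rank=rank, world_size=world_size)
```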

## FAQ

### How is dmlcloud different from similar libraries like *pytorch lightning* or *fastai*?

dmlcloud was designed foremost with one underlying principle:
> **No unnecessary abstractions, just help with distributed training**

As a consequence, dmlcloud code is almost identical to a regular pytorch training loop and only requires a few adjustments here and there.
In contrast, other libraries often introduce extensive APIs that can quickly feel overwhelming due to the sheer number of options.

For instance, **the constructor of `lightning.Trainer` has 51 arguments! `dml.Pipeline` has only 2.**

            
