| Field | Value |
| --- | --- |
| Name | ddpw |
| Version | 5.5.1 |
| home_page | None |
| Summary | A lightweight wrapper that scaffolds PyTorch's Distributed (Data) Parallel. |
| upload_time | 2025-08-27 02:30:08 |
| maintainer | Sujal Vijayaraghavan |
| docs_url | None |
| author | Sujal Vijayaraghavan |
| requires_python | >=3.13 |
| license | None |
| keywords | pytorch, distributed compute |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
<h1 align="center">DDPW</h1>
**Distributed Data Parallel Wrapper (DDPW)** is a lightweight Python wrapper
for [PyTorch](https://pytorch.org/) users.

DDPW handles basic logistical tasks such as spawning worker processes on
GPUs/SLURM nodes, setting up inter-process communication, _etc._, and provides
simple default utility methods to move modules to devices and to get dataset
samplers, allowing the user to focus on the main aspects of the task. It is
written in Python 3.13. The [documentation](https://ddpw.projects.sujal.tv)
contains details on how to use this package.
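For context, below is a minimal sketch of the kind of boilerplate such a
wrapper typically takes care of: spawning one process per GPU, initialising the
process group, and tearing it down afterwards. This is plain
`torch.distributed`/`torch.multiprocessing` code, not DDPW's internals; the
worker name, port, and backend are illustrative choices.

```python
# A sketch of the raw PyTorch setup that a wrapper like DDPW abstracts away.
import os

import torch.distributed as dist
import torch.multiprocessing as mp


def _worker(local_rank: int, world_size: int):
    # Every process must agree on the rendezvous address and port.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")

    # Join the process group; NCCL is the usual backend for GPU training.
    dist.init_process_group("nccl", rank=local_rank, world_size=world_size)
    try:
        ...  # the actual task runs here, one process per GPU
    finally:
        dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 4  # e.g., one process per GPU
    mp.spawn(_worker, args=(world_size,), nprocs=world_size)
```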
## Overview
### Installation
[ddpw on PyPI](https://pypi.org/project/ddpw/)
```bash
# with uv
# to install and add to pyproject.toml
uv add [--active] ddpw
# or to simply install
uv pip install ddpw
# with pip
pip install ddpw
```
### Examples
#### With the decorator `wrapper`
```python
from ddpw import Platform, wrapper

platform = Platform(device="gpu", n_cpus=32, ram=64, n_gpus=4, verbose=True)

@wrapper(platform)
def run(*args, **kwargs):
    # global and local ranks, and the process group, are available in
    # kwargs['global_rank'], kwargs['local_rank'], and kwargs['group']
    pass

if __name__ == '__main__':
    run()  # or run(*args, **kwargs) with the task's own arguments
```
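To illustrate what a task body might do with the injected keyword arguments,
here is a hedged sketch that uses `kwargs['local_rank']` to pick a device,
wraps a model in PyTorch's `DistributedDataParallel`, and shards data with a
`DistributedSampler`. The model, dataset, and optimiser are placeholders; only
the three kwargs documented above are assumed.

```python
# A possible task body, assuming the wrapper has already initialised the
# process group and injects global_rank, local_rank, and group as kwargs.
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def run(*args, **kwargs):
    local_rank = kwargs['local_rank']
    device = torch.device(f'cuda:{local_rank}')

    # Move the module to this process's GPU and wrap it for gradient syncing.
    model = DDP(torch.nn.Linear(8, 1).to(device), device_ids=[local_rank])

    # A distributed sampler gives each process a disjoint shard of the data.
    dataset = TensorDataset(torch.randn(128, 8), torch.randn(128, 1))
    loader = DataLoader(dataset, batch_size=16,
                        sampler=DistributedSampler(dataset))

    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for x, y in loader:
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x.to(device)), y.to(device))
        loss.backward()  # DDP averages gradients across processes here
        opt.step()
```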
#### As a callable
```python
from ddpw import Platform, Wrapper

# some task
def run(*args, **kwargs):
    # global and local ranks, and the process group, are available in
    # kwargs['global_rank'], kwargs['local_rank'], and kwargs['group']
    pass

if __name__ == '__main__':
    # platform (e.g., 4 GPUs)
    platform = Platform(device='gpu', n_gpus=4)

    # wrapper
    wrapper = Wrapper(platform=platform)

    # start
    wrapper.start(run)  # or wrapper.start(run, *args, **kwargs)
```
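Both forms launch the same machinery; the decorator suits tasks whose platform
configuration is fixed up front, while the callable form lets the `Platform`
be constructed at runtime, for instance from command-line arguments.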
## Raw data
```json
{
    "_id": null,
    "home_page": null,
    "name": "ddpw",
    "maintainer": "Sujal Vijayaraghavan",
    "docs_url": null,
    "requires_python": ">=3.13",
    "maintainer_email": null,
    "keywords": "pytorch, distributed compute",
    "author": "Sujal Vijayaraghavan",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/06/33/037c9c7efde2c5448f19a1c1a73ba83ab9f0516c1c202c5efe55914754b6/ddpw-5.5.1.tar.gz",
    "platform": null,
    "description": "<h1 align=\"center\">DDPW</h1>\n\n**Distributed Data Parallel Wrapper (DDPW)** is a lightweight Python wrapper\nrelevant to [PyTorch](https://pytorch.org/) users.\n\nDDPW handles basic logistical tasks such as creating threads on GPUs/SLURM\nnodes, setting up inter-process communication, _etc._, and provides simple,\ndefault utility methods to move modules to devices and get dataset samplers,\nallowing the user to focus on the main aspects of the task. It is written in\nPython 3.13. The [documentation](https://ddpw.projects.sujal.tv) contains\ndetails on how to use this package.\n\n## Overview\n\n### Installation\n\n[](https://pypi.org/project/ddpw/)\n\n```bash\n# with uv\n\n# to instal and add to pyroject.toml\nuv add [--active] ddpw\n# or to simply instal\nuv pip install ddpw\n\n# with pip\npip install ddpw\n```\n\n### Examples\n\n#### With the decorator `wrapper`\n\n```python\nfrom ddpw import Platform, wrapper\n\nplatform = Platform(device=\"gpu\", n_cpus=32, ram=64, n_gpus=4, verbose=True)\n\n@wrapper(platform)\ndef run(*args, **kwargs):\n # global and local ranks, and the process group in:\n # kwargs['global_rank'], # kwargs['local_rank'], kwargs['group']\n pass\n\nif __name__ == '__main__':\n run(*args, **kwargs)\n```\n\n#### As a callable\n\n```python\nfrom ddpw import Platform, Wrapper\n\n# some task\ndef run(*args, **kwargs):\n # global and local ranks, and the process group in:\n # kwargs['global_rank'], # kwargs['local_rank'], kwargs['group']\n pass\n\nif __name__ == '__main__':\n # platform (e.g., 4 GPUs)\n platform = Platform(device='gpu', n_gpus=4)\n\n # wrapper\n wrapper = Wrapper(platform=platform)\n\n # start\n wrapper.start(task, *args, **kwargs)\n```\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "A lightweight wrapper that scaffolds PyTorch's Distributed (Data) Parallel.",
    "version": "5.5.1",
    "project_urls": {
        "Documentation": "https://ddpw.projects.sujal.tv",
        "Homepage": "https://ddpw.projects.sujal.tv",
        "Repository": "https://github.com/sujaltv/ddpw"
    },
    "split_keywords": [
        "pytorch",
        " distributed compute"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "85e0add71dfe470bd865ec1423731aac9bc76e700b5b1f9b5743eb01e8d977ee",
                "md5": "d50a97a25e59476e5594a0cae377b6f9",
                "sha256": "5295a02799b782bc47cb242ebdb69937c013a556e7cf1d10b54dc0e920b4242a"
            },
            "downloads": -1,
            "filename": "ddpw-5.5.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "d50a97a25e59476e5594a0cae377b6f9",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.13",
            "size": 12607,
            "upload_time": "2025-08-27T02:30:07",
            "upload_time_iso_8601": "2025-08-27T02:30:07.824741Z",
            "url": "https://files.pythonhosted.org/packages/85/e0/add71dfe470bd865ec1423731aac9bc76e700b5b1f9b5743eb01e8d977ee/ddpw-5.5.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "0633037c9c7efde2c5448f19a1c1a73ba83ab9f0516c1c202c5efe55914754b6",
                "md5": "8e794945a8fb160470c51b82dca378f3",
                "sha256": "7553423ff8958ceee65da541167d95d81341d459fc9061cc257692aa37c471e4"
            },
            "downloads": -1,
            "filename": "ddpw-5.5.1.tar.gz",
            "has_sig": false,
            "md5_digest": "8e794945a8fb160470c51b82dca378f3",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.13",
            "size": 10472,
            "upload_time": "2025-08-27T02:30:08",
            "upload_time_iso_8601": "2025-08-27T02:30:08.994363Z",
            "url": "https://files.pythonhosted.org/packages/06/33/037c9c7efde2c5448f19a1c1a73ba83ab9f0516c1c202c5efe55914754b6/ddpw-5.5.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-08-27 02:30:08",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "sujaltv",
    "github_project": "ddpw",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "ddpw"
}
```