# UCX Communication Module for Distributed
This is the UCX communication backend for Dask Distributed, providing high-performance communication using the UCX (Unified Communication X) framework. It enables efficient GPU-to-GPU communication via NVLink (CUDA IPC), InfiniBand, and various other high-speed interconnects.
## Installation
This package is typically installed as part of the UCXX project build process. It can also be installed separately via conda-forge:
```bash
mamba install -c conda-forge distributed-ucxx
```
Or via PyPI (choose the package matching your CUDA major version):
```bash
pip install distributed-ucxx-cu13 # For CUDA 13.x
pip install distributed-ucxx-cu12 # For CUDA 12.x
```
## Configuration
This package provides its own configuration system that replaces the UCX configuration previously found in the main Distributed package. Configuration can be provided via:
1. **YAML configuration files**: `distributed-ucxx.yaml`
2. **Environment variables**: Using the `DASK_DISTRIBUTED_UCXX__` prefix
3. **Programmatic configuration**: Using Dask's configuration system
### Configuration Schema
The configuration schema is defined in [`distributed-ucxx-schema.yaml`](distributed_ucxx/distributed-ucxx-schema.yaml) and supports various options:
- UCX transport configuration: `tcp`, `nvlink`, `infiniband`, `cuda-copy`, etc.
- RMM configuration: `rmm.pool-size`
- Advanced options: `multi-buffer`, `environment`
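To illustrate how the options above fit together, here is a small, hypothetical validation sketch. The helper and its names are not part of the package; only the option names come from the schema listed above.

```python
# Hypothetical helper (illustration only): check a distributed-ucxx config
# dict against the top-level option names listed above.
KNOWN_TOP_LEVEL = {
    "tcp", "nvlink", "infiniband", "cuda-copy",
    "create-cuda-context", "multi-buffer", "environment", "rmm",
}

def unknown_options(config: dict) -> set:
    """Return any top-level keys not covered by the schema sketch above."""
    return set(config) - KNOWN_TOP_LEVEL

cfg = {"tcp": True, "nvlink": True, "rmm": {"pool-size": "1GB"}}
print(unknown_options(cfg))  # set(): every key is recognized
```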
### Example Configuration
New schema:
```yaml
distributed-ucxx:
  tcp: true
  nvlink: true
  infiniband: false
  cuda-copy: true
  create-cuda-context: true
  multi-buffer: false
  environment:
    log-level: "info"
  rmm:
    pool-size: "1GB"
```
Legacy schema (may be removed in the future):
```yaml
distributed:
  comm:
    ucx:
      tcp: true
      nvlink: true
      infiniband: false
      cuda-copy: true
      create-cuda-context: true
      multi-buffer: false
      environment:
        log-level: "info"
  rmm:
    pool-size: "1GB"
```
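The nested YAML form corresponds one-to-one with the dotted keys used elsewhere in this document (e.g. `distributed-ucxx.rmm.pool-size`). A minimal sketch of that flattening, purely for illustration:

```python
def flatten(d: dict, prefix: str = "") -> dict:
    """Flatten a nested mapping into Dask-style dotted keys."""
    out = {}
    for key, value in d.items():
        dotted = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, dotted))
        else:
            out[dotted] = value
    return out

new_schema = {
    "distributed-ucxx": {
        "tcp": True,
        "rmm": {"pool-size": "1GB"},
    }
}
print(flatten(new_schema))
# {'distributed-ucxx.tcp': True, 'distributed-ucxx.rmm.pool-size': '1GB'}
```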
### Environment Variables
New schema:
```bash
export DASK_DISTRIBUTED_UCXX__TCP=true
export DASK_DISTRIBUTED_UCXX__NVLINK=true
export DASK_DISTRIBUTED_UCXX__RMM__POOL_SIZE=1GB
```
Legacy schema (may be removed in the future):
```bash
export DASK_DISTRIBUTED__COMM__UCX__TCP=true
export DASK_DISTRIBUTED__COMM__UCX__NVLINK=true
export DASK_DISTRIBUTED__RMM__POOL_SIZE=1GB
```
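The variable names follow Dask's usual convention: drop the `DASK_` prefix, treat double underscores as nesting separators, lowercase, and read remaining underscores as hyphens. A rough sketch of that mapping (value parsing, e.g. `true` to a boolean, is omitted):

```python
def env_to_key(name: str) -> str:
    """Sketch of how a DASK_* environment variable maps to a dotted
    config key: strip the DASK_ prefix, split on double underscores
    into nesting levels, lowercase, and turn underscores into hyphens."""
    assert name.startswith("DASK_")
    parts = name[len("DASK_"):].split("__")
    return ".".join(p.lower().replace("_", "-") for p in parts)

print(env_to_key("DASK_DISTRIBUTED_UCXX__RMM__POOL_SIZE"))
# distributed-ucxx.rmm.pool-size
```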
### Python Configuration
New schema:
```python
import dask

dask.config.set({
    "distributed-ucxx.tcp": True,
    "distributed-ucxx.nvlink": True,
    "distributed-ucxx.rmm.pool-size": "1GB",
})
```
Legacy schema (may be removed in the future):
```python
import dask

dask.config.set({
    "distributed.comm.ucx.tcp": True,
    "distributed.comm.ucx.nvlink": True,
    "distributed.rmm.pool-size": "1GB",
})
```
## Usage
The package automatically registers itself as a communication backend for Distributed using the entry point `ucxx`. Once installed, you can use it by specifying the protocol:
```python
from distributed import Client
# Connect using UCXX protocol
client = Client("ucxx://scheduler-address:8786")
```
Or when starting a scheduler/worker:
```bash
dask scheduler --protocol ucxx
dask worker ucxx://scheduler-address:8786
```
## Migration from Distributed
If you're migrating from the legacy UCX configuration in the main Distributed package, update your configuration keys:
- `distributed.comm.ucx.*` is now `distributed-ucxx.*`
- `distributed.rmm.pool-size` is now `distributed-ucxx.rmm.pool-size`
The old configuration schema is still valid for convenience, but may be removed in a future version.
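The two rules above can be captured in a tiny migration helper. This is a hypothetical illustration, not an API provided by the package:

```python
_LEGACY_UCX_PREFIX = "distributed.comm.ucx."

def migrate_key(key: str) -> str:
    """Hypothetical helper: translate a legacy Distributed UCX config key
    into the new distributed-ucxx namespace, per the rules above."""
    if key.startswith(_LEGACY_UCX_PREFIX):
        return "distributed-ucxx." + key[len(_LEGACY_UCX_PREFIX):]
    if key == "distributed.rmm.pool-size":
        return "distributed-ucxx.rmm.pool-size"
    return key  # already in the new schema, or unrelated to UCX

print(migrate_key("distributed.comm.ucx.nvlink"))  # distributed-ucxx.nvlink
print(migrate_key("distributed.rmm.pool-size"))    # distributed-ucxx.rmm.pool-size
```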
## See Also
- [UCXX Project](https://github.com/rapidsai/ucxx)
- [Dask Distributed Documentation](https://distributed.dask.org/)
- [UCX Project Documentation](https://openucx.readthedocs.io/en/master/index.html)