# Lightning ⚡ Intel Habana
[![lightning](https://img.shields.io/badge/-Lightning_2.0+-792ee5?logo=pytorchlightning&logoColor=white)](https://lightning.ai/)
[![PyPI Status](https://badge.fury.io/py/lightning-habana.svg)](https://badge.fury.io/py/lightning-habana)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/lightning-habana)](https://pypi.org/project/lightning-habana/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/lightning-Habana)](https://pepy.tech/project/lightning-habana)
[![Deploy Docs](https://github.com/Lightning-AI/lightning-Habana/actions/workflows/docs-deploy.yml/badge.svg)](https://lightning-ai.github.io/lightning-Habana/)
[![General checks](https://github.com/Lightning-AI/lightning-habana/actions/workflows/ci-checks.yml/badge.svg?event=push)](https://github.com/Lightning-AI/lightning-habana/actions/workflows/ci-checks.yml)
[![Build Status](https://dev.azure.com/Lightning-AI/compatibility/_apis/build/status/Lightning-AI.lightning-Habana?branchName=main)](https://dev.azure.com/Lightning-AI/compatibility/_build/latest?definitionId=45&branchName=main)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/Lightning-AI/lightning-Habana/main.svg)](https://results.pre-commit.ci/latest/github/Lightning-AI/lightning-Habana/main)
[Intel® Gaudi® AI Processor (HPU)](https://habana.ai/) training processors are built on a heterogeneous architecture that combines a cluster of fully programmable Tensor Processing Cores (TPC) with a configurable Matrix Math engine, along with associated development tools and libraries.
The TPC core is a VLIW SIMD processor with an instruction set and hardware tailored to serve training workloads efficiently.
The Gaudi memory architecture includes on-die SRAM and local memories in each TPC.
Gaudi is also the first DL training processor with integrated RDMA over Converged Ethernet (RoCE v2) engines on-chip.
On the software side, the PyTorch Habana bridge interfaces between the framework and SynapseAI software stack to enable the execution of deep learning models on the Habana Gaudi device.
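As a rough illustration of what the bridge enables, here is a minimal sketch. It assumes a machine with the SynapseAI stack installed; the `habana_frameworks` package ships with SynapseAI, not with this plugin:

```python
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device with PyTorch

x = torch.randn(2, 3, device="hpu")  # tensor allocated on the Gaudi device
y = (x @ x.T).relu()  # ops are dispatched through the PyTorch Habana bridge
htcore.mark_step()  # in lazy mode, flushes the accumulated graph for execution
print(y.cpu())
```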
Gaudi offers a significant price-performance benefit, allowing you to run more deep learning training while minimizing expenses.
For more information, check out [Gaudi Architecture](https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Overview.html) and [Gaudi Developer Docs](https://developer.habana.ai).
______________________________________________________________________
## Installing Lightning Habana
To install Lightning Habana, run the following command:
```bash
pip install -U lightning lightning-habana
```
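To sanity-check the installation, you can query the accelerator directly. A small sketch, assuming `is_available()` and `auto_device_count()` from Lightning's standard accelerator interface:

```python
from lightning_habana.pytorch.accelerator import HPUAccelerator

print(HPUAccelerator.is_available())  # True if an HPU is visible to the process
print(HPUAccelerator.auto_device_count())  # number of HPUs found on this machine
```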
______________________________________________________________________
**NOTE**
Use either `lightning` or `pytorch-lightning` when working with the plugin, but not both.
Mixing strategies, plugins, etc. from the two packages is not yet validated.
______________________________________________________________________
## Using PyTorch Lightning with HPU
To enable PyTorch Lightning with the HPU accelerator, pass `accelerator=HPUAccelerator()` to the `Trainer`.
```python
from lightning import Trainer
from lightning_habana.pytorch.accelerator import HPUAccelerator
# Run on one HPU.
trainer = Trainer(accelerator=HPUAccelerator(), devices=1)
# Run on multiple HPUs.
trainer = Trainer(accelerator=HPUAccelerator(), devices=8)
# Choose the number of devices automatically.
trainer = Trainer(accelerator=HPUAccelerator(), devices="auto")
```
The `devices=1` parameter with HPUs enables the Habana accelerator for single-card training using `SingleHPUStrategy`.

The `devices>1` parameter with HPUs enables the Habana accelerator for distributed training. It uses `HPUDDPStrategy`, which builds on the DDP strategy and integrates Habana's collective communication library (HCCL) to support scale-up within a node and scale-out across multiple nodes.
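The strategies can also be passed explicitly, for example to customize their arguments. A sketch, assuming the strategies are exported from `lightning_habana.pytorch.strategies`:

```python
from lightning import Trainer
from lightning_habana.pytorch.accelerator import HPUAccelerator
from lightning_habana.pytorch.strategies import HPUDDPStrategy, SingleHPUStrategy

# Equivalent to devices=1: single-card training.
trainer = Trainer(accelerator=HPUAccelerator(), devices=1, strategy=SingleHPUStrategy())

# Equivalent to devices=8: HCCL-backed DDP across 8 cards.
trainer = Trainer(accelerator=HPUAccelerator(), devices=8, strategy=HPUDDPStrategy())
```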
## Support Matrix
| **SynapseAI** | **1.16.0** |
| --------------------- | --------------------------------------------------- |
| PyTorch | 2.2.2 |
| (PyTorch) Lightning\* | 2.3.x |
| **Lightning Habana** | **1.6.0** |
| DeepSpeed\*\* | Forked from v0.14.0 of the official DeepSpeed repo. |
\* covers both packages [`lightning`](https://pypi.org/project/lightning/) and [`pytorch-lightning`](https://pypi.org/project/pytorch-lightning/)
For more information, check out the [HPU Support Matrix](https://docs.habana.ai/en/latest/Support_Matrix/Support_Matrix.html).