| Field | Value |
|---|---|
| Name | torchruntime |
| Version | 1.4.1 |
| Summary | Meant for app developers. A convenient way to install and configure the appropriate version of PyTorch on the user's computer, based on the OS and GPU manufacturer and model number. |
| Upload time | 2025-01-17 10:54:27 |
| Requires Python | >=3.0 |
| Keywords | torch, ai, ml, llm, installer, runtime |
| Requirements | None recorded |
# torchruntime
[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB)
**torchruntime** is a lightweight package for automatically installing the appropriate variant of PyTorch on a user's computer, based on their OS, GPU manufacturer, and GPU model.
This package is used by [Easy Diffusion](https://github.com/easydiffusion/easydiffusion), but you're welcome to use it as well. It's useful for developers who make PyTorch-based apps that target users with NVIDIA, AMD and Intel graphics cards (as well as CPU-only usage), on Windows, Mac and Linux.
### Why?
It lets you treat PyTorch as a single dependency (like it should be), and lets you assume that each user will get the most-performant variant of PyTorch suitable for their computer's OS and hardware.
It deals with the complexity of the variety of torch builds and configurations required for CUDA, AMD (ROCm, DirectML), Intel (xpu/DirectML/ipex), and CPU-only.
**Compatibility table**: [Click here](#compatibility-table) to see the supported graphics cards and operating systems.
# Installation
Supports Windows, Linux, and Mac.
`pip install torchruntime`
## Usage
### Step 1. Install the appropriate variant of PyTorch
*This command should be run on the user's computer, or while creating platform-specific builds:*
`python -m torchruntime install`
This will install `torch`, `torchvision`, and `torchaudio`, and will decide the variant based on the user's OS, GPU manufacturer and GPU model number. See [customizing packages](#customizing-packages) for more options.
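The selection step can be pictured as a mapping from OS and GPU vendor to a torch build "platform" tag. The sketch below is only an illustration of that idea (the function name `pick_torch_platform` and the simplified mapping are assumptions, not torchruntime's actual implementation, which also inspects the GPU model number):

```python
# Illustrative sketch only -- a simplified stand-in for torchruntime's
# real detection logic. Platform tags follow the compatibility table below.
def pick_torch_platform(os_name: str, gpu_vendor: str) -> str:
    """Return a hypothetical torch platform tag for the given OS/GPU."""
    if os_name == "Darwin":
        return "cpu"  # Apple builds use the default wheel (mps backend)
    if gpu_vendor == "nvidia":
        return "cu124"  # CUDA 12.4 builds (per the compatibility table)
    if gpu_vendor == "amd":
        # ROCm on Linux, DirectML on Windows (per the AMD table below)
        return "rocm6.2" if os_name == "Linux" else "directml"
    if gpu_vendor == "intel":
        return "xpu"
    return "cpu"  # no discrete GPU detected

print(pick_torch_platform("Linux", "amd"))    # rocm6.2
print(pick_torch_platform("Windows", "amd"))  # directml
```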
### Step 2. Initialize torch
This should be run inside your program, to initialize the required environment variables (if any) for the variant of torch being used.
```py
import torchruntime
torchruntime.init_torch()
```
## Customizing packages
By default, `python -m torchruntime install` will install the latest available `torch`, `torchvision` and `torchaudio` suitable for the user's platform.
You can customize the packages to install by including their names:
* For example, to install only `torch` and `torchvision`, you can run `python -m torchruntime install torch torchvision`
* To install specific versions (in pip format), you can run `python -m torchruntime install "torch>2.0" "torchvision==0.20"`
**Note:** If you specify package versions, keep in mind that the version may not be available to *all* users on *all* the torch platforms. For example, a user with Python 3.8 would not be able to install torch 2.5 (or higher), because torch 2.5 dropped support for Python 3.8.
So in general, it's better to avoid specifying a version unless it really matters to you (or you know what you're doing). Instead, allow `torchruntime` to pick the latest possible version for the user.
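Under the hood, an installer like this ultimately composes a `pip install` command pointing at the right torch wheel index, passing your package names and version specs through unchanged. A rough sketch of that step (the helper name `build_pip_command` is an assumption for illustration; the `--index-url` pattern shown is the real URL scheme used by PyTorch's wheel indexes, e.g. `.../whl/cu124`):

```python
import sys

# Hypothetical helper: compose the pip command an installer like
# torchruntime might run for a given torch platform tag.
def build_pip_command(platform_tag, packages=("torch", "torchvision", "torchaudio")):
    cmd = [sys.executable, "-m", "pip", "install"]
    if platform_tag != "default":
        # PyTorch hosts per-accelerator wheel indexes under this URL scheme
        cmd += ["--index-url", f"https://download.pytorch.org/whl/{platform_tag}"]
    cmd += list(packages)  # version specs like "torch>2.0" pass through as-is
    return cmd

print(build_pip_command("cu124", ["torch>2.0", "torchvision==0.20"]))
```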
# Compatibility table
The list of platforms on which `torchruntime` can install a working variant of PyTorch.
**Note:** *This list is based on user feedback (since I don't have all the cards). Please let me know if your card is supported (or not) by opening a pull request or issue or messaging on [Discord](https://discord.com/invite/u9yhsFmEkB) (with supporting logs).*
**CPU-only:**
| OS | Supported?| Notes |
|---|---|---|
| Windows | ✅ Yes | x86_64 |
| Linux | ✅ Yes | x86_64 and aarch64 |
| Mac (M1/M2/M3/M4) | ✅ Yes | arm64. `mps` backend |
| Mac (Intel) | ✅ Yes | x86_64. Stopped after `torch 2.2.2` |
**NVIDIA:**
| Series | Supported? | OS | Notes |
|---|---|---|---|
| 40xx | ✅ Yes | Win/Linux | Uses CUDA 124 |
| 30xx | ✅ Yes | Win/Linux | Uses CUDA 124 |
| 20xx | ✅ Yes | Win/Linux | Uses CUDA 124 |
| 10xx/16xx | ✅ Yes | Win/Linux | Uses CUDA 124. Full-precision required on 16xx series |
**AMD:**
| Series | Supported? | OS | Notes |
|---|---|---|---|
| 7xxx | ✅ Yes | Win/Linux | Navi3/RDNA3 (gfx110x). ROCm 6.2 on Linux. DirectML on Windows |
| 6xxx | ✅ Yes | Win/Linux | Navi2/RDNA2 (gfx103x). ROCm 6.2 on Linux. DirectML on Windows |
| 6xxx on Intel Mac | ✅ Yes | Intel Mac | gfx103x. `mps` backend |
| 5xxx | ✅ Yes | Win/Linux | Navi1/RDNA1 (gfx101x). Full-precision required. DirectML on Windows. Linux only supports up to ROCm 5.2. Waiting for [this](https://github.com/pytorch/pytorch/issues/132570#issuecomment-2313071756) for ROCm 6.2 support. |
| 5xxx on Intel Mac | ❓ Untested (WIP) | Intel Mac | gfx101x. Implemented but need testers, please message on [Discord](https://discord.com/invite/u9yhsFmEkB) |
| 4xxxG/Radeon VII | ✅ Yes | Win/Linux | Vega 20 gfx906. Need testers for Windows, please message on [Discord](https://discord.com/invite/u9yhsFmEkB) |
| 2xxxG/Radeon RX Vega 56 | ⚠️ Partial | Win/Linux | Vega 10 gfx900. ROCm 5.2 on Linux. Implemented but need testers for Windows, please message on [Discord](https://discord.com/invite/u9yhsFmEkB) |
| 5xx/Polaris | ❓ Untested (WIP) | N/A | gfx80x. Implemented but need testers, please message on [Discord](https://discord.com/invite/u9yhsFmEkB) |
**Apple:**
| Series | Supported? | Notes |
|---|---|---|
| M1/M2/M3/M4 | ✅ Yes | `mps` backend |
| AMD 6xxx on Intel Mac | ✅ Yes | gfx103x. `mps` backend |
| AMD 5xxx on Intel Mac | ❓ Untested (WIP) | gfx101x. Implemented but needs testers, please message on [Discord](https://discord.com/invite/u9yhsFmEkB) |
**Intel:**
| Series | Supported? | OS | Notes |
|---|---|---|---|
| Arc | ❓ Untested (WIP) | Win/Linux | Implemented but need testers, please message on [Discord](https://discord.com/invite/u9yhsFmEkB). Backends: 'xpu' or DirectML or [ipex](https://github.com/intel/intel-extension-for-pytorch) |
# FAQ
## Why can't I just run 'pip install torch'?
`pip install torch` installs the CPU-only version of torch, so it won't utilize your GPU's capabilities.
## Why can't I just install torch-for-ROCm directly to support AMD?
Different models of AMD cards require different LLVM targets, and sometimes different ROCm versions. Also, ROCm currently doesn't work on Windows, so AMD on Windows is best served (currently) with DirectML.
Plenty of AMD cards work with ROCm (even when they aren't in the official list of supported cards), but information about these cards (e.g. the LLVM target to use) is pretty scattered.
`torchruntime` deals with this complexity for your convenience.
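Part of that complexity is environment configuration: ROCm builds often need the `HSA_OVERRIDE_GFX_VERSION` environment variable (a real ROCm variable) set so that a card outside the official support list is treated as a nearby supported LLVM target. A hedged sketch of the idea (the lookup table and helper name below are illustrative examples, not torchruntime's actual table):

```python
import os

# Illustrative mapping from an AMD LLVM target to the version string that
# ROCm's HSA_OVERRIDE_GFX_VERSION variable expects. These few entries are
# examples only; a real table would cover many more targets.
GFX_OVERRIDES = {
    "gfx1010": "10.1.0",  # Navi1 / RX 5xxx
    "gfx1030": "10.3.0",  # Navi2 / RX 6xxx
    "gfx1100": "11.0.0",  # Navi3 / RX 7xxx
}

def apply_rocm_override(llvm_target):
    """Set the ROCm gfx override for known targets; do nothing otherwise."""
    override = GFX_OVERRIDES.get(llvm_target)
    if override:
        os.environ["HSA_OVERRIDE_GFX_VERSION"] = override

apply_rocm_override("gfx1030")
print(os.environ.get("HSA_OVERRIDE_GFX_VERSION"))  # 10.3.0
```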
# Contributing
📢 I'm looking for contributions in these specific areas:
- More testing on consumer AMD GPUs.
- More support for older AMD GPUs. Explore: compile and host PyTorch wheels and ROCm builds (on GitHub) for older AMD GPUs (e.g. 580/590/Polaris) with the required patches.
- Intel GPUs.
- Testing on professional AMD GPUs (e.g. the Instinct series).
- An easy-to-run benchmark script (that people can run to check the level of compatibility on their platform).
- Improve [the logic](tests/test_configure.py) for supporting multiple AMD GPUs with different ROCm compatibility. At present, it just picks the latest GPU, which means it doesn't support running workloads on multiple AMD GPUs in parallel.
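The "just picks the latest GPU" behaviour mentioned in the last item can be sketched like this (the function name and the ordering heuristic are assumptions for illustration, not the actual code in `tests/test_configure.py`):

```python
# Illustrative sketch of the current single-GPU selection: when a machine
# has multiple AMD GPUs with different ROCm requirements, keep only the
# newest architecture and configure torch for that one.
def pick_newest_gpu(llvm_targets):
    # gfx identifiers are hex-like (gfx906, gfx90a, gfx1030), so compare
    # them numerically rather than as plain strings ("gfx906" > "gfx1030"
    # lexicographically, which would pick the wrong card)
    return max(llvm_targets, key=lambda t: int(t.removeprefix("gfx"), 16))

print(pick_newest_gpu(["gfx906", "gfx1030"]))  # gfx1030
```

Improving this would mean configuring a ROCm setup that works across all detected targets instead of discarding the older ones.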
Please message on the [Discord community](https://discord.com/invite/u9yhsFmEkB) if you have AMD or Intel GPUs, and would like to help with testing or adding support for them! Thanks!
# Credits
* Code contributors on [Easy Diffusion](https://github.com/easydiffusion/easydiffusion).
* Users on [Easy Diffusion's Discord](https://discord.com/invite/u9yhsFmEkB) who've helped with testing on various GPUs.
* [PCI Database](https://raw.githubusercontent.com/pciutils/pciids/refs/heads/master/pci.ids) automatically generated from the PCI ID Database at http://pci-ids.ucw.cz
# More resources
* [AMD GPU LLVM Architectures](https://web.archive.org/web/20241228163540/https://llvm.org/docs/AMDGPUUsage.html#processors)
* [Status of ROCm support for AMD Navi 1](https://github.com/ROCm/ROCm/issues/2527)
* [Torch support for ROCm 6.2 on AMD Navi 1](https://github.com/pytorch/pytorch/issues/132570#issuecomment-2313071756)
* [ROCmLibs-for-gfx1103-AMD780M-APU](https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU)
* [Pre-compiled torch for AMD gfx803 (and steps to compile)](https://github.com/tsl0922/pytorch-gfx803)
* [Another guide for compiling torch with rocm 6.2 for gfx803](https://github.com/robertrosenbusch/gfx803_rocm62_pt24)