torch-installer-coff33ninja

Name: torch-installer-coff33ninja
Version: 1.0.0
Home page: https://github.com/coff33ninja/torch-installer
Summary: An intelligent, autonomous PyTorch installer that automatically detects your system, GPU, and CUDA configuration
Upload time: 2025-08-07 13:35:43
Author: coff33ninja
Requires Python: >=3.7
Keywords: pytorch, torch, cuda, gpu, installer, machine-learning, deep-learning
Requirements: none recorded
# 🚀 PyTorch Installation Assistant

An intelligent, autonomous PyTorch installer that automatically detects your system, GPU, and CUDA configuration to install the optimal PyTorch setup for your hardware.

## ✨ Features

- **🧠 Intelligent GPU Detection**: Automatically detects NVIDIA, AMD, and Apple Silicon GPUs
- **🎯 Smart CUDA Matching**: Finds the best PyTorch version for your CUDA installation
- **🤖 Autonomous CUDA Installation**: Automatically installs CUDA on Windows using package managers
- **📦 Complete Ecosystem**: Installs torch, torchvision, and torchaudio with version compatibility
- **🔄 Fallback Logic**: Handles older CUDA versions and compatibility issues gracefully
- **🎮 Hardware-Specific Optimization**: Tailored recommendations for different GPU generations
- **🔍 Comprehensive Testing**: Post-install verification with tensor operations
- **📊 Detailed Reporting**: Shows complete system and package information
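
The "comprehensive testing" step above can be sketched roughly like this (a minimal illustration, not the installer's actual code; `verify_torch` is a hypothetical helper):

```python
import importlib.util

def verify_torch() -> bool:
    """Return True if torch imports and a small tensor operation succeeds."""
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    x = torch.ones(2, 2)
    y = x @ x  # 2x2 matmul of ones: every element should equal 2
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return bool((y == 2).all()) and device in ("cuda", "cpu")
```

A real verification would also exercise the detected device (e.g. move a tensor to `cuda` and back), but the import-plus-matmul check already catches broken wheels.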

## 🛠️ Installation

Just download the `torch_installer.py` script; it has no dependencies beyond Python's standard library.

```bash
# Download the script
curl -O https://raw.githubusercontent.com/coff33ninja/torch-installer/main/torch_installer.py

# Or clone the repository
git clone https://github.com/coff33ninja/torch-installer.git
cd torch-installer
```

## 🚀 Quick Start

### Basic Installation
```bash
# Automatic installation with smart detection
python torch_installer.py

# CPU-only installation
python torch_installer.py --cpu-only

# Force specific CUDA version
python torch_installer.py --force-cuda cu121
```

### CUDA Auto-Installation (Windows Only)
```bash
# Auto-install recommended CUDA version
python torch_installer.py --auto-install-cuda

# Install specific CUDA version
python torch_installer.py --auto-install-cuda --cuda-version 12.1

# Dry-run to see what would be installed
python torch_installer.py --auto-install-cuda --dry-run
```

## 📖 Command Reference

### Core Installation Commands

| Command | Description | Example |
|---------|-------------|---------|
| `python torch_installer.py` | Auto-detect and install optimal PyTorch | Basic usage |
| `--cpu-only` | Force CPU-only installation | `python torch_installer.py --cpu-only` |
| `--force-cuda cu121` | Force specific CUDA version | `python torch_installer.py --force-cuda cu121` |
| `--force-reinstall` | Reinstall even if PyTorch exists | `python torch_installer.py --force-reinstall` |

### CUDA Management (Windows)

| Command | Description | Example |
|---------|-------------|---------|
| `--auto-install-cuda` | Automatically install CUDA | `python torch_installer.py --auto-install-cuda` |
| `--cuda-version 12.1` | Specify CUDA version to install | `python torch_installer.py --auto-install-cuda --cuda-version 12.1` |

### Information & Diagnostics

| Command | Description | Example |
|---------|-------------|---------|
| `--gpu-info` | Show GPU and CUDA compatibility | `python torch_installer.py --gpu-info` |
| `--show-versions` | Display installed PyTorch ecosystem | `python torch_installer.py --show-versions` |
| `--show-matching` | Demo CUDA version matching logic | `python torch_installer.py --show-matching` |
| `--list-cuda` | List supported CUDA versions | `python torch_installer.py --list-cuda` |

### Development & Testing

| Command | Description | Example |
|---------|-------------|---------|
| `--dry-run` | Show commands without executing | `python torch_installer.py --dry-run` |
| `--log` | Log all output to timestamped file | `python torch_installer.py --log` |
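
As a rough reconstruction, the command surface in the tables above maps onto a standard `argparse` setup (the real script's flag handling may differ):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of the documented CLI; flag names follow
    # the tables above, but torch_installer.py may define them differently.
    p = argparse.ArgumentParser(prog="torch_installer.py")
    p.add_argument("--cpu-only", action="store_true", help="Force CPU-only installation")
    p.add_argument("--force-cuda", metavar="cuXXX", help="Force specific CUDA wheel tag")
    p.add_argument("--force-reinstall", action="store_true", help="Reinstall even if PyTorch exists")
    p.add_argument("--auto-install-cuda", action="store_true", help="Automatically install CUDA (Windows)")
    p.add_argument("--cuda-version", metavar="X.Y", help="CUDA version to install")
    p.add_argument("--gpu-info", action="store_true")
    p.add_argument("--show-versions", action="store_true")
    p.add_argument("--show-matching", action="store_true")
    p.add_argument("--list-cuda", action="store_true")
    p.add_argument("--dry-run", action="store_true", help="Show commands without executing")
    p.add_argument("--log", action="store_true", help="Log output to a timestamped file")
    return p

args = build_parser().parse_args(["--force-cuda", "cu121", "--dry-run"])
```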

## 🎮 GPU Support Matrix

### NVIDIA GPUs

| GPU Generation | Recommended CUDA | PyTorch Support | Performance |
|----------------|------------------|-----------------|-------------|
| **RTX 40 Series** | CUDA 12.1+ | ✅ Excellent | 🔥🔥🔥🔥🔥 |
| **RTX 30 Series** | CUDA 12.1+ | ✅ Excellent | 🔥🔥🔥🔥🔥 |
| **RTX 20 Series** | CUDA 11.8+ | ✅ Excellent | 🔥🔥🔥🔥 |
| **GTX 16 Series** | CUDA 11.8+ | ✅ Very Good | 🔥🔥🔥🔥 |
| **GTX 10 Series** | CUDA 11.8+ | ✅ Good | 🔥🔥🔥 |
| **GT 700 Series** | CUDA 11.8 | ⚠️ Limited | 🔥🔥 |
| **Older GPUs** | Manual Install | ❌ Not Recommended | 🔥 |

### Other GPUs

| GPU Type | Support | Recommendation |
|----------|---------|----------------|
| **Apple Silicon (M1/M2/M3)** | ✅ MPS Support | Automatic detection |
| **AMD GPUs** | ⚠️ ROCm (Linux only) | Manual ROCm installation |
| **Intel GPUs** | ❌ Not supported | Use CPU-only mode |
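
A heuristic detection sketch consistent with the matrices above (assumed logic, not the installer's actual detection code):

```python
import platform
import shutil

def recommend_backend() -> str:
    """Rough backend recommendation mirroring the support matrix."""
    # NVIDIA: nvidia-smi on PATH is a strong signal that CUDA wheels apply.
    if shutil.which("nvidia-smi"):
        return "cuda"
    # Apple Silicon gets MPS support out of the box.
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mps"
    # AMD (ROCm is a manual, Linux-only install) and Intel fall back to CPU.
    return "cpu"
```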

## 🔧 Usage Examples

### Scenario 1: First-time Installation
```bash
# Let the installer detect everything automatically
python torch_installer.py

# Output example:
# 🚀 PyTorch Installation Assistant
# 🎮 Detected GPU: GeForce RTX 3080
# 🚀 Detected CUDA version: 12.1
# 🎯 Installing PyTorch with CUDA 121 wheels
# ✅ PyTorch installation completed successfully!
```

### Scenario 2: Upgrading CUDA and PyTorch
```bash
# Auto-install newer CUDA version
python torch_installer.py --auto-install-cuda --cuda-version 12.1

# Then reinstall PyTorch
python torch_installer.py --force-reinstall
```

### Scenario 3: Troubleshooting Installation
```bash
# Check current setup
python torch_installer.py --show-versions

# See GPU compatibility
python torch_installer.py --gpu-info

# Test what would be installed
python torch_installer.py --dry-run
```

### Scenario 4: Development Environment
```bash
# Install with logging for debugging
python torch_installer.py --log

# Check CUDA matching logic
python torch_installer.py --show-matching
```

## 🧠 Intelligent Features

### Smart CUDA Version Matching

The installer automatically matches your CUDA version to compatible PyTorch versions:

```
🔍 Detected CUDA: 11.1
📋 Supported versions: ['121', '118', '117', '116', '113']
⚠️ Fallback match: CUDA 111 -> PyTorch cu113 (oldest supported)
✅ Would install: PyTorch 2.0.1 with CUDA 111
📦 Full package set: torch=2.0.1, torchvision=0.15.2, torchaudio=2.0.2
```
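
The fallback behavior shown above can be expressed as a small matching function (a sketch assuming the tag list in the output; `match_cuda` is a hypothetical name):

```python
SUPPORTED = ["121", "118", "117", "116", "113"]  # newest first, as shown above

def match_cuda(detected: str, supported=SUPPORTED) -> str:
    """Map a detected CUDA version like '11.1' to a wheel tag like 'cu113'."""
    tag = int(detected.replace(".", ""))
    candidates = [s for s in supported if int(s) <= tag]
    if candidates:
        # Exact or downgrade match: newest supported build not newer than the toolkit.
        chosen = max(candidates, key=int)
    else:
        # Fallback: detected CUDA predates every supported build, so use the
        # oldest supported tag (e.g. CUDA 11.1 -> cu113, as in the output above).
        chosen = min(supported, key=int)
    return f"cu{chosen}"
```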

### GPU-Specific Recommendations

For older GPUs:
```
💡 GPU ACCELERATION UPGRADE GUIDE (GeForce GT 710):
   ⚠️ Your GeForce GT 710 is an older GPU with limited CUDA support
   💡 Recommended: CUDA 11.8 for optimal compatibility
   🤖 AUTOMATIC INSTALLATION AVAILABLE:
   • Run: python torch_installer.py --auto-install-cuda
```

For modern GPUs:
```
💡 GPU ACCELERATION UPGRADE GUIDE (GeForce RTX 3080):
   🚀 Your GeForce RTX 3080 supports modern CUDA versions
   ✨ Recommended: CUDA 12.1 for best performance
   🤖 AUTOMATIC INSTALLATION AVAILABLE:
   • Run: python torch_installer.py --auto-install-cuda
```

## 🔍 System Information Display

### Complete Ecosystem View
```bash
python torch_installer.py --show-versions

# Output:
# 📊 Installed PyTorch Ecosystem:
#    🔥 PyTorch: 2.8.0+cu121
#    👁️ TorchVision: 0.23.0+cu121
#    🔊 TorchAudio: 2.8.0+cu121
#    🎯 CUDA Support: True
#    🚀 CUDA Version: 12.1
#    🎮 GPU Count: 1
#    🎮 GPU 0: GeForce RTX 3080
```

### GPU Compatibility Analysis
```bash
python torch_installer.py --gpu-info

# Output:
# 🎮 GPU and CUDA Compatibility Information
# 🎮 Detected GPU: GeForce RTX 3080
# 💾 GPU Memory: 10240MB
# 🔍 Detected CUDA: 12.1
# ✅ Latest PyTorch supports your CUDA via cu121
```
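
The GPU name and memory figures above come from `nvidia-smi`; a minimal query helper might look like this (illustrative, not the script's implementation):

```python
import shutil
import subprocess

def gpu_name_and_memory():
    """Return (name, memory_mb) for the first NVIDIA GPU, or None if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()[0]
    name, mem = (s.strip() for s in out.split(","))
    return name, int(mem)
```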

## 🤖 CUDA Auto-Installation (Windows)

### Prerequisites
- Windows 10/11
- NVIDIA GPU with compatible drivers
- Package manager: winget (built into recent Windows) or Chocolatey

### Installation Process
1. **Detection**: Identifies your GPU model and current CUDA version
2. **Recommendation**: Suggests optimal CUDA version for your hardware
3. **Package Manager Check**: Verifies winget or chocolatey availability
4. **Version Matching**: Finds compatible CUDA version in repositories
5. **Installation**: Automatically downloads and installs CUDA
6. **Verification**: Confirms successful installation
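
Steps 3 through 5 above can be sketched with `shutil.which` and `subprocess` (the package IDs and command shapes here are assumptions; check your package manager's catalog before relying on them):

```python
import shutil
import subprocess

def find_package_manager():
    # Step 3: check winget first, then Chocolatey (assumed probe order).
    for tool in ("winget", "choco"):
        if shutil.which(tool):
            return tool
    return None

def install_cuda(version: str, dry_run: bool = True):
    """Build (and optionally run) the install command. Returns the command,
    or None when no package manager is available."""
    tool = find_package_manager()
    if tool is None:
        return None
    if tool == "winget":
        # "Nvidia.CUDA" is an illustrative winget package ID.
        cmd = ["winget", "install", "--id", "Nvidia.CUDA", "--version", version]
    else:
        cmd = ["choco", "install", "cuda", "--version", version, "-y"]
    if not dry_run:
        subprocess.run(cmd, check=True)  # Step 5: actual installation
    return cmd
```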

### Example Output
```bash
python torch_installer.py --auto-install-cuda

# 🤖 CUDA Auto-Installation Mode
# 🎮 Detected GPU: GeForce RTX 3080
# 📋 Current CUDA: 11.8
# 🔧 Attempting to install CUDA 12.1 for GeForce RTX 3080
# 📦 Trying winget (Windows Package Manager)...
# ✅ Found CUDA versions in winget: 13.0, 12.9, 12.1...
# 🔧 Installing CUDA 12.1 via winget...
# ✅ Successfully installed CUDA 12.1
# 🔄 Please restart your command prompt and run the installer again
```

## 🔧 Advanced Configuration

### Environment Variables
- `CUDA_HOME`: Override CUDA installation path detection
- `PYTORCH_CUDA_ALLOC_CONF`: Configure CUDA memory allocation
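
A sketch of how an override like `CUDA_HOME` might be honored (illustrative; the script's actual probing order is not documented here):

```python
import os
from pathlib import Path

def detect_cuda_home():
    """Return the CUDA install path, preferring explicit environment overrides."""
    # CUDA_HOME wins; CUDA_PATH is the conventional Windows equivalent.
    for var in ("CUDA_HOME", "CUDA_PATH"):
        value = os.environ.get(var)
        if value:
            return Path(value)
    # Fall back to a common default location (illustrative, not exhaustive).
    default = Path("/usr/local/cuda")
    return default if default.exists() else None
```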

### Custom Package Managers
The installer supports:
- **winget**: Native Windows package manager (recommended)
- **Chocolatey**: third-party package manager offering additional CUDA versions

### Offline Installation
For air-gapped environments:
1. Download PyTorch wheels manually from https://pytorch.org/get-started/locally/
2. Use `pip install` with local wheel files
3. Run installer with `--show-versions` to verify
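
The offline steps above, as a command sketch (the wheel directory name and the `cu121` download channel are illustrative; pick the channel matching your target CUDA version):

```shell
# On a networked machine, download wheels for your platform:
#   pip download torch torchvision torchaudio \
#       --index-url https://download.pytorch.org/whl/cu121 -d ./wheels
# Copy ./wheels to the air-gapped machine, then install without network access:
if [ -d ./wheels ]; then
    pip install --no-index --find-links ./wheels torch torchvision torchaudio
fi
# Verify the result:
#   python torch_installer.py --show-versions
```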

## 🐛 Troubleshooting

### Common Issues

#### "CUDA not available" after installation
```bash
# Check CUDA installation
nvidia-smi

# Verify PyTorch CUDA support
python -c "import torch; print(torch.cuda.is_available())"

# Reinstall with force
python torch_installer.py --force-reinstall
```

#### Package manager not found (Windows)
```powershell
# Install chocolatey
Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Or update Windows for winget (Windows 10)
# winget is included in Windows 11 by default
```

#### Older CUDA version detected
```bash
# Check what would be installed
python torch_installer.py --show-matching

# Auto-upgrade CUDA (Windows)
python torch_installer.py --auto-install-cuda

# Or force specific PyTorch version
python torch_installer.py --force-cuda cu118
```

### Debug Mode
```bash
# Enable detailed logging
python torch_installer.py --log --dry-run

# Check system compatibility
python torch_installer.py --gpu-info --show-versions
```

## 🔄 Update & Maintenance

### Updating PyTorch
```bash
# Check for updates and reinstall
python torch_installer.py --force-reinstall

# Upgrade to specific version
python torch_installer.py --force-cuda cu121 --force-reinstall
```

### Updating CUDA (Windows)
```bash
# Auto-install latest compatible version
python torch_installer.py --auto-install-cuda

# Install specific version
python torch_installer.py --auto-install-cuda --cuda-version 12.1
```

## 🤝 Contributing

### Reporting Issues
When reporting issues, please include:
```bash
# System information
python torch_installer.py --gpu-info --show-versions --log

# Attach the generated log file
```

### Feature Requests
- GPU support for additional vendors
- Package manager support for other platforms
- Integration with conda/mamba environments

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🙏 Acknowledgments

- **NVIDIA** for CUDA toolkit and GPU drivers
- **PyTorch Team** for the excellent deep learning framework
- **Microsoft** for winget package manager
- **Chocolatey** community for package management on Windows

---

## 📞 Support

For support and questions:
- 📧 Create an issue on GitHub
- 💬 Join the discussion in GitHub Discussions
- 📖 Check the troubleshooting section above

**Happy Deep Learning! 🚀🔥**

            
