# Torch Device Manager
A lightweight PyTorch utility for automatic hardware detection and memory optimization across different devices (CPU, CUDA, MPS).
## Features
- 🔍 **Automatic Device Detection**: Detects the best available hardware (CUDA, Apple Silicon MPS, or CPU); a minimal detection sketch follows this list
- 🧠 **Memory Optimization**: Automatically adjusts batch sizes and gradient accumulation based on available memory
- ⚡ **Mixed Precision Support**: Optional automatic mixed precision with gradient scaling
- 📊 **Memory Monitoring**: Real-time memory usage tracking and logging
- 🛡️ **Fallback Protection**: Graceful fallback to CPU when requested devices aren't available
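For orientation, the detection-plus-fallback behavior described above boils down to logic like the following (a minimal sketch, not the package's exact implementation):
```python
import torch

def detect_device(requested: str = "auto") -> torch.device:
    """Sketch of auto-detection with graceful CPU fallback."""
    if requested == "auto":
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")
    # Fall back to CPU when the requested backend isn't actually present
    if requested == "cuda" and not torch.cuda.is_available():
        return torch.device("cpu")
    if requested == "mps" and not torch.backends.mps.is_available():
        return torch.device("cpu")
    return torch.device(requested)
```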
## Installation
```bash
pip install torch-device-manager
```
## Quick Start
```python
from torch_device_manager import DeviceManager
import torch
# Initialize device manager (auto-detects best device)
device_manager = DeviceManager(device="auto", mixed_precision=True)
# Get the torch device
device = device_manager.get_device()
# Move your model to the optimal device
model = YourModel().to(device)
# Optimize batch size based on available memory
optimal_batch_size, gradient_steps = device_manager.optimize_for_memory(
    model=model,
    batch_size=32
)
print(f"Using device: {device}")
print(f"Optimized batch size: {optimal_batch_size}")
print(f"Gradient accumulation steps: {gradient_steps}")
```
## Usage in Training Scripts
### Basic Integration
```python
import torch
import torch.nn as nn
from torch_device_manager import DeviceManager
def train_model():
    # Initialize device manager
    device_manager = DeviceManager(device="auto", mixed_precision=True)
    device = device_manager.get_device()

    # Setup model, optimizer, and loss (CrossEntropyLoss is just an example)
    model = YourModel().to(device)
    optimizer = torch.optim.Adam(model.parameters())
    criterion = nn.CrossEntropyLoss()

    # Optimize memory usage
    batch_size, gradient_steps = device_manager.optimize_for_memory(model, 32)

    # Training loop (num_epochs and dataloader are assumed to be defined)
    for epoch in range(num_epochs):
        for batch_idx, (data, target) in enumerate(dataloader):
            data, target = data.to(device), target.to(device)

            # Use mixed precision if available
            if device_manager.mixed_precision and device_manager.scaler:
                with torch.autocast(device_type="cuda"):
                    output = model(data)
                    # Average over the accumulation window so the effective
                    # gradient matches the originally requested batch size
                    loss = criterion(output, target) / gradient_steps

                device_manager.scaler.scale(loss).backward()

                if (batch_idx + 1) % gradient_steps == 0:
                    device_manager.scaler.step(optimizer)
                    device_manager.scaler.update()
                    optimizer.zero_grad()
            else:
                output = model(data)
                loss = criterion(output, target) / gradient_steps
                loss.backward()

                if (batch_idx + 1) % gradient_steps == 0:
                    optimizer.step()
                    optimizer.zero_grad()

        # Log memory usage once per epoch
        device_manager.log_memory_usage()
```
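The pairing of the reduced batch size with `gradient_steps` is what keeps training behavior stable: because the loop above averages the loss over the accumulation window, the optimizer still sees the originally requested effective batch size. With hypothetical numbers:
```python
requested_batch = 32
# Suppose memory pressure halved the batch twice (32 -> 16 -> 8) and
# optimize_for_memory compensated with 4 accumulation steps
optimal_batch_size, gradient_steps = 8, 4
assert optimal_batch_size * gradient_steps == requested_batch
```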
### Advanced Usage
```python
from torch_device_manager import DeviceManager
# Force specific device
device_manager = DeviceManager(device="cuda", mixed_precision=False)
# Check memory info
memory_info = device_manager.get_memory_info()
print(f"Available memory: {memory_info}")
# Manual memory optimization
if memory_info.get("free_gb", 0) < 2.0:
    print("Low memory detected, reducing batch size")
    batch_size = 4
```
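To sanity-check those numbers independently of this package, you can query PyTorch's own memory counters (standard PyTorch calls, not part of `torch-device-manager`):
```python
import torch

if torch.cuda.is_available():
    # Free and total device memory, in bytes
    free_b, total_b = torch.cuda.mem_get_info()
    print(f"CUDA free: {free_b / 1e9:.1f} GB of {total_b / 1e9:.1f} GB")
elif torch.backends.mps.is_available():
    # MPS exposes allocation counters rather than a free-memory query
    print(f"MPS allocated: {torch.mps.current_allocated_memory() / 1e9:.2f} GB")
```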
## API Reference
### DeviceManager
#### Constructor
- `device` (str, default="auto"): Device to use ("auto", "cuda", "mps", "cpu")
- `mixed_precision` (bool, default=True): Enable mixed precision training
#### Methods
- `get_device()`: Returns the selected `torch.device`
- `get_memory_info()`: Returns a dict of memory statistics (e.g. `free_gb`)
- `log_memory_usage()`: Logs current memory usage
- `optimize_for_memory(model, batch_size)`: Returns a `(batch_size, gradient_steps)` tuple adjusted to fit available memory
## Device Support
- **CUDA**: Full support with memory optimization and mixed precision (see the gating sketch after this list)
- **Apple Silicon (MPS)**: Basic support with conservative memory settings
- **CPU**: Fallback support with optimized batch sizes
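This split mirrors what the underlying PyTorch APIs support. As an illustration only (not this package's actual internals), gradient scaling is a CUDA-only feature, while autocast is also well supported on CPU; a hedged sketch of gating mixed precision per backend:
```python
import torch

def amp_config(device: torch.device, mixed_precision: bool):
    """Illustrative per-backend mixed-precision gating; `amp_config` is a
    hypothetical helper, not part of torch-device-manager's API."""
    # GradScaler only applies to CUDA's float16 autocast path
    scaler = None
    if mixed_precision and device.type == "cuda":
        scaler = torch.cuda.amp.GradScaler()
    # autocast is well supported on "cuda" (float16) and "cpu" (bfloat16);
    # MPS autocast coverage is spottier, so fall back to full precision there
    autocast_enabled = mixed_precision and device.type in ("cuda", "cpu")
    return scaler, autocast_enabled
```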
## Requirements
- Python >= 3.8
- PyTorch >= 2.1.1
## License
MIT License
## Contributing
Contributions are welcome!