# MUFASA
A Python utility module for CPU core management and GPU memory optimization, particularly useful for machine learning workflows.
## Installation
You can install MUFASA directly from PyPI:
```bash
pip install mufasa-polimi
```
## Features
- CPU core detection and optimization for SLURM environments
- Automated GPU memory management and cleanup
- Detailed memory usage reporting
## Usage
### Core Management Functions
```python
from mufasa import getCoreAffinity, setOptimalWorkers
# Get available CPU cores
cpu_count = getCoreAffinity()
print(f"Available CPU cores: {cpu_count}")
# Set optimal number of worker processes
workers = setOptimalWorkers()
print(f"Optimal worker count: {workers}")
```
#### `getCoreAffinity()`
Detects the number of available CPU cores, taking SLURM job allocations into account when running inside a SLURM environment. Returns the minimum of the SLURM-allocated CPU count and the system-available CPU count, or the total system CPU count when not running under SLURM.
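The behavior can be sketched roughly as follows. This is a minimal re-implementation for illustration, assuming the package reads the standard `SLURM_CPUS_PER_TASK` environment variable; the actual variable names and fallbacks in MUFASA may differ:

```python
import os

def get_core_affinity_sketch():
    """Approximate the SLURM-aware core detection described above."""
    system_cpus = os.cpu_count() or 1
    slurm_cpus = os.environ.get("SLURM_CPUS_PER_TASK")
    if slurm_cpus is not None:
        # Inside a SLURM job: never claim more cores than the allocation grants
        return min(int(slurm_cpus), system_cpus)
    # Outside SLURM: fall back to the total system CPU count
    return system_cpus
```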
#### `setOptimalWorkers()`
Similar to `getCoreAffinity()`, but defaults to 1 if no SLURM environment is detected. Useful for setting worker counts in parallel processing scenarios.
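The same idea with the non-SLURM default of 1 could look like this (again a hedged sketch, not the package's actual source):

```python
import os

def set_optimal_workers_sketch():
    """SLURM-aware worker count with a conservative default outside SLURM."""
    slurm_cpus = os.environ.get("SLURM_CPUS_PER_TASK")
    if slurm_cpus is not None:
        return min(int(slurm_cpus), os.cpu_count() or 1)
    # No SLURM allocation detected: default to a single worker
    return 1
```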
### GPU Memory Management
```python
from mufasa import gpuClean
# Basic cleanup
freed_count, freed_memory = gpuClean()
# Detailed cleanup with verbose output
freed_count, freed_memory = gpuClean(
    exclude_vars=['model', 'optimizer'],  # Variables to preserve
    verbose=True                          # Enable detailed reporting
)
```
#### `gpuClean(local_vars=None, exclude_vars=None, verbose=False)`
Automatically detects and frees GPU memory by cleaning up tensor variables.
**Parameters:**
- `local_vars` (dict, optional): Dictionary of local variables to clean. If None, uses the calling frame's locals.
- `exclude_vars` (list, optional): List of variable names to exclude from cleanup.
- `verbose` (bool): Whether to print detailed information about cleaned variables.
**Returns:**
- `tuple`: `(freed_count, freed_memory_mb)`
  - `freed_count`: Number of tensors freed
  - `freed_memory_mb`: Approximate memory freed in MB
**Features:**
- Cleans up PyTorch tensors in local scope
- Handles nested tensors in dictionaries and lists
- Provides detailed memory usage reports when `verbose=True`
- Allows excluding specific variables from cleanup
- Automatically triggers garbage collection and GPU memory cache clearing
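The scope-scanning pattern behind these features can be illustrated without a GPU. The sketch below walks a dictionary of variables, recurses into nested dicts and lists, and drops entries matching a predicate while honoring an exclusion list. The real `gpuClean` checks for `torch.Tensor` objects and frees CUDA memory; here a plain predicate stands in so the example runs anywhere:

```python
import gc

def clean_scope_sketch(local_vars, is_target, exclude_vars=None):
    """Count and drop matching objects in a variable dict, nested containers included."""
    exclude = set(exclude_vars or [])
    freed = 0

    def count_targets(obj):
        # Recurse into containers, analogous to how gpuClean handles nested tensors
        if is_target(obj):
            return 1
        if isinstance(obj, dict):
            return sum(count_targets(v) for v in obj.values())
        if isinstance(obj, (list, tuple)):
            return sum(count_targets(v) for v in obj)
        return 0

    for name in list(local_vars):
        if name in exclude:
            continue
        n = count_targets(local_vars[name])
        if n:
            freed += n
            del local_vars[name]  # drop the reference so it can be collected

    gc.collect()  # mirrors gpuClean's garbage-collection step
    return freed

scope = {"a": "tensor", "keep": "tensor", "nested": {"x": ["tensor", 1]}, "other": 42}
freed = clean_scope_sketch(scope, is_target=lambda o: o == "tensor",
                           exclude_vars=["keep"])
```

After the call, `scope` retains only the excluded and non-matching entries.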
**Example with Verbose Output:**
```python
import torch
from mufasa import gpuClean
# Create some example tensors
tensor1 = torch.randn(1000, 1000).cuda()
tensor2 = torch.randn(2000, 2000).cuda()
# Clean up with detailed output
freed_count, freed_memory = gpuClean(verbose=True)
```
The verbose output includes:
- Table of cleaned tensors with their shapes and sizes
- Total number of tensors freed
- Total memory freed
- Current GPU memory allocation status
- List of excluded variables (if any)
## Notes
- SLURM-specific features require a SLURM environment
- GPU cleaning functions require PyTorch and a CUDA-capable GPU
- Memory sizes are reported in MB or GB depending on the size
- The module uses the `rich` library for formatted console output in verbose mode
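The MB-or-GB reporting mentioned in the notes above can be approximated with a small helper (an illustrative sketch; MUFASA's exact thresholds and formatting may differ):

```python
def format_memory(size_mb):
    """Render a size given in MB, switching to GB at 1024 MB and above."""
    if size_mb >= 1024:
        return f"{size_mb / 1024:.2f} GB"
    return f"{size_mb:.1f} MB"
```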