# GPUtil-fix
> [!IMPORTANT]
> This fork exists because I urgently needed a working version of the GPUtil module compatible with Python 3.12. The original author has not updated the code for several years, and the latest pull request fixing this issue has not been reviewed. Therefore, I created this fork to ensure continued functionality.
>
> A big thank you to [@MagicalTux](https://github.com/MagicalTux) for providing the fix.
>
> You can install this updated version via pip:
>
> ```bash
> pip install GPUtil-fix
> ```
# Original description
`GPUtil` is a Python module for getting the GPU status from NVIDIA GPUs using `nvidia-smi`.
`GPUtil` locates all GPUs on the computer, determines their availability and returns an ordered list of available GPUs.
Availability is based upon the current memory consumption and load of each GPU.
The module is written with GPU selection for Deep Learning in mind, but it is not task/library specific and can be applied to any task where it may be useful to identify available GPUs.
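For example, the core workflow can be as short as a single call (a minimal sketch; see [Usage](#usage) for the full parameter list and defaults):

```python
import GPUtil

# Ids of GPUs currently below the default thresholds (50% load, 50% memory usage)
deviceIDs = GPUtil.getAvailable()
print(deviceIDs)
```
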
**Table of Contents**
1. [Requirements](#requirements)
1. [Installation](#installation)
1. [Usage](#usage)
    1. [Main functions](#main-functions)
    1. [Helper functions](#helper-functions)
1. [Examples](#examples)
    1. [Select first available GPU in Caffe](#select-first-available-gpu-in-caffe)
    1. [Occupy only 1 GPU in TensorFlow](#occupy-only-1-gpu-in-tensorflow)
    1. [Monitor GPU in a separate thread](#monitor-gpu-in-a-separate-thread)
1. [License](#license)
## Requirements
An NVIDIA GPU with the latest NVIDIA driver installed.
GPUtil uses the program `nvidia-smi` to get the GPU status of all available NVIDIA GPUs. `nvidia-smi` should be installed automatically when you install your NVIDIA driver.
Supports both Python 2.X and 3.X.
Python libraries:
* subprocess ([The Python Standard Library](https://docs.python.org/3/library/subprocess.html))
* shutil ([The Python Standard Library](https://docs.python.org/3/library/shutil.html))
* math ([The Python Standard Library](https://docs.python.org/3/library/math.html))
* random ([The Python Standard Library](https://docs.python.org/3/library/random.html))
* time ([The Python Standard Library](https://docs.python.org/3/library/time.html))
* os ([The Python Standard Library](https://docs.python.org/3/library/os.html))
* sys ([The Python Standard Library](https://docs.python.org/3/library/sys.html))
* platform ([The Python Standard Library](https://docs.python.org/3/library/platform.html))
Tested on CUDA driver version 390.77 with Python 2.7 and 3.5.
## Installation
1. Open a terminal (Ctrl+Shift+T)
2. Type `pip install gputil`
3. Test the installation
    1. Open a terminal in a folder other than the GPUtil folder
    2. Start a python console by typing `python` in the terminal
    3. In the newly opened python console, type:
    ```python
    import GPUtil
    GPUtil.showUtilization()
    ```
    4. Your output should look something like the following, depending on your number of GPUs and their current usage:
    ```
    ID  GPU  MEM
    --------------
     0   0%   0%
    ```
### Old way of installation
1. Download or clone repository to your computer
2. Add GPUtil folder to ~/.bashrc
    1. Open a new terminal (Press Ctrl+Alt+T)
    2. Open bashrc:
    ```
    gedit ~/.bashrc
    ```
    3. Add your GPUtil folder to the environment variable `PYTHONPATH` (replace `<path_to_gputil>` with your folder path):
    ```
    export PYTHONPATH="$PYTHONPATH:<path_to_gputil>"

    # Example:
    export PYTHONPATH="$PYTHONPATH:/home/anderskm/github/gputil"
    ```
    4. Save ~/.bashrc and close gedit
    5. Restart your terminal
3. Test the installation
    1. Open a terminal in a folder other than the GPUtil folder
    2. Start a python console by typing `python` in the terminal
    3. In the newly opened python console, type:
    ```python
    import GPUtil
    GPUtil.showUtilization()
    ```
    4. Your output should look something like the following, depending on your number of GPUs and their current usage:
    ```
    ID  GPU  MEM
    --------------
     0   0%   0%
    ```
## Usage
To include `GPUtil` in your Python code, all you have to do is import it at the beginning of your script:
```python
import GPUtil
```
Once imported, all functions are available. The functions, along with a short description of their inputs, outputs and functionality, can be found in the following two sections.
### Main functions
```python
deviceIDs = GPUtil.getAvailable(order = 'first', limit = 1, maxLoad = 0.5, maxMemory = 0.5, includeNan=False, excludeID=[], excludeUUID=[])
```
Returns a list of ids of available GPUs. Availability is determined based on current memory usage and load. The order, maximum number of devices, their maximum load and maximum memory consumption are determined by the input arguments. A short usage sketch follows the parameter list.
* Inputs
    * `order` - Determines the order in which the available GPU device ids are returned. `order` should be specified as one of the following strings:
        * `'first'` - orders available GPU device ids by ascending id (**default**)
        * `'last'` - orders available GPU device ids by descending id
        * `'random'` - orders the available GPU device ids randomly
        * `'load'` - orders the available GPU device ids by ascending load
        * `'memory'` - orders the available GPU device ids by ascending memory usage
    * `limit` - limits the number of GPU device ids returned to the specified number. Must be a positive integer. (**default = 1**)
    * `maxLoad` - Maximum current relative load for a GPU to be considered available. GPUs with a load larger than `maxLoad` are not returned. (**default = 0.5**)
    * `maxMemory` - Maximum current relative memory usage for a GPU to be considered available. GPUs with a current memory usage larger than `maxMemory` are not returned. (**default = 0.5**)
    * `includeNan` - True/false flag indicating whether to include GPUs where either load or memory usage is NaN (indicating usage could not be retrieved). (**default = False**)
    * `excludeID` - List of IDs which should be excluded from the list of available GPUs. See `GPU` class description. (**default = []**)
    * `excludeUUID` - Same as `excludeID`, except it uses the UUID. (**default = []**)
* Outputs
    * deviceIDs - list of all available GPU device ids. A GPU is considered available if the current load and memory usage are less than `maxLoad` and `maxMemory`, respectively. The list is ordered according to `order`. The maximum number of returned device ids is limited by `limit`.
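For instance, a minimal sketch of requesting up to two of the least-loaded GPUs (the thresholds shown are illustrative, not required values):

```python
import GPUtil

# Up to 2 device ids, least-loaded first; allow at most 70% load and 50% memory usage
deviceIDs = GPUtil.getAvailable(order='load', limit=2, maxLoad=0.7, maxMemory=0.5)
print(deviceIDs)  # e.g. [1, 0]; an empty list means no GPU met the thresholds
```
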
```python
deviceID = GPUtil.getFirstAvailable(order = 'first', maxLoad=0.5, maxMemory=0.5, attempts=1, interval=900, verbose=False)
```
Returns the first available GPU. Availability is determined based on current memory usage and load, and the ordering is determined by the specified order.
If no available GPU is found, an error is thrown.
When using the default values, it is the same as `getAvailable(order = 'first', limit = 1, maxLoad = 0.5, maxMemory = 0.5)`. An example follows the parameter list.

* Inputs
    * `order` - See the description for `GPUtil.getAvailable(...)`
    * `maxLoad` - Maximum current relative load for a GPU to be considered available. GPUs with a load larger than `maxLoad` are not returned. (**default = 0.5**)
    * `maxMemory` - Maximum current relative memory usage for a GPU to be considered available. GPUs with a current memory usage larger than `maxMemory` are not returned. (**default = 0.5**)
    * `attempts` - Number of attempts the function should make before giving up on finding an available GPU. (**default = 1**)
    * `interval` - Interval in seconds between each attempt to find an available GPU. (**default = 900** --> 15 mins)
    * `verbose` - If `True`, prints the attempt number before each attempt and the GPU id if an available GPU is found.
    * `includeNan` - See the description for `GPUtil.getAvailable(...)`. (**default = False**)
    * `excludeID` - See the description for `GPUtil.getAvailable(...)`. (**default = []**)
    * `excludeUUID` - See the description for `GPUtil.getAvailable(...)`. (**default = []**)
* Outputs
    * deviceID - list with 1 element containing the first available GPU device id. A GPU is considered available if the current load and memory usage are less than `maxLoad` and `maxMemory`, respectively. The order and limit are fixed to `'first'` and `1`, respectively.
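A minimal sketch of waiting for a GPU to free up (the attempt count and interval below are illustrative):

```python
import GPUtil

# Try up to 3 times, 60 seconds apart; an error is thrown if no GPU becomes available
deviceID = GPUtil.getFirstAvailable(order='memory', attempts=3, interval=60, verbose=True)
print(deviceID[0])  # the returned list holds exactly one device id
```
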
```python
GPUtil.showUtilization(all=False, attrList=None, useOldCode=False)
```
Prints the current status (id, memory usage, uuid, load) of all GPUs (a short example follows the list below).
* Inputs
    * `all` - True/false flag indicating if all info on the GPUs should be shown. Overwrites `attrList`.
    * `attrList` - List of lists of `GPU` attributes to display. See code for more information/example.
    * `useOldCode` - True/false flag indicating if the old code to display GPU utilization should be used.
* Outputs
    * _None_
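For example (a minimal sketch; `all=True` prints the extended attribute view):

```python
import GPUtil

GPUtil.showUtilization()          # compact view: id, load and memory usage
GPUtil.showUtilization(all=True)  # extended view with all GPU attributes
```
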
### Helper functions
```python
class GPU
```
Helper class handling the attributes of each GPU. Quoted descriptions are copied from the corresponding descriptions given by `nvidia-smi`.
* Attributes for each `GPU`
    * `id` - "Zero based index of the GPU. Can change at each boot."
    * `uuid` - "This value is the globally unique immutable alphanumeric identifier of the GPU. It does not correspond to any physical label on the board. Does not change across reboots."
    * `load` - Relative GPU load. 0 to 1 (100%, full load). "Percent of time over the past sample period during which one or more kernels was executing on the GPU. The sample period may be between 1 second and 1/6 second depending on the product."
    * `memoryUtil` - Relative memory usage from 0 to 1 (100%, full usage). "Percent of time over the past sample period during which global (device) memory was being read or written. The sample period may be between 1 second and 1/6 second depending on the product."
    * `memoryTotal` - "Total installed GPU memory."
    * `memoryUsed` - "Total GPU memory allocated by active contexts."
    * `memoryFree` - "Total free GPU memory."
    * `driver` - "The version of the installed NVIDIA display driver."
    * `name` - "The official product name of the GPU."
    * `serial` - This number matches the serial number physically printed on each board. It is a globally unique immutable alphanumeric value.
    * `display_mode` - "A flag that indicates whether a physical display (e.g. monitor) is currently connected to any of the GPU's connectors. "Enabled" indicates an attached display. "Disabled" indicates otherwise."
    * `display_active` - "A flag that indicates whether a display is initialized on the GPU's (e.g. memory is allocated on the device for display). Display can be active even when no monitor is physically attached. "Enabled" indicates an active display. "Disabled" indicates otherwise."
```python
GPUs = GPUtil.getGPUs()
```
* Inputs
    * _None_
* Outputs
    * `GPUs` - list of all GPUs. Each `GPU` corresponds to one GPU in the computer and contains a device id, relative load and relative memory usage.
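A minimal sketch of inspecting the documented `GPU` attributes:

```python
import GPUtil

for gpu in GPUtil.getGPUs():
    # Attributes as listed for the GPU helper class above
    print(gpu.id, gpu.name, gpu.load, gpu.memoryUsed, gpu.memoryTotal)
```
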
```python
GPUavailability = GPUtil.getAvailability(GPUs, maxLoad = 0.5, maxMemory = 0.5, includeNan=False, excludeID=[], excludeUUID=[])
```
Given a list of `GPUs` (see `GPUtil.getGPUs()`), returns an equally sized list of ones and zeros indicating which of the corresponding GPUs are available; a usage sketch follows the parameter list.
* Inputs
    * `GPUs` - List of `GPUs`. See `GPUtil.getGPUs()`
    * `maxLoad` - Maximum current relative load for a GPU to be considered available. GPUs with a load larger than `maxLoad` are not returned. (**default = 0.5**)
    * `maxMemory` - Maximum current relative memory usage for a GPU to be considered available. GPUs with a current memory usage larger than `maxMemory` are not returned. (**default = 0.5**)
    * `includeNan` - See the description for `GPUtil.getAvailable(...)`. (**default = False**)
    * `excludeID` - See the description for `GPUtil.getAvailable(...)`. (**default = []**)
    * `excludeUUID` - See the description for `GPUtil.getAvailable(...)`. (**default = []**)
* Outputs
    * GPUavailability - binary list indicating if the corresponding `GPUs` are available or not. A GPU is considered available if the current load and memory usage are less than `maxLoad` and `maxMemory`, respectively.
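For example, a minimal sketch combining `getGPUs()` and `getAvailability(...)` (the thresholds shown are the defaults):

```python
import GPUtil

GPUs = GPUtil.getGPUs()
availability = GPUtil.getAvailability(GPUs, maxLoad=0.5, maxMemory=0.5)
for gpu, available in zip(GPUs, availability):
    print('GPU %d available: %s' % (gpu.id, bool(available)))
```
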
See [demo_GPUtil.py](https://github.com/anderskm/gputil/blob/master/demo_GPUtil.py) for examples and more details.
## Examples
### Select first available GPU in Caffe
In the Deep Learning library [Caffe](http://caffe.berkeleyvision.org/), the user can switch between using the CPU or GPU through its Python interface.
This is done by calling the methods `caffe.set_mode_cpu()` and `caffe.set_mode_gpu()`, respectively.
Below is a minimum working example for selecting the first available GPU with GPUtil to run a Caffe network.
```python
# Import os, caffe and GPUtil
import os
import caffe
import GPUtil

# Set CUDA_DEVICE_ORDER so the IDs assigned by CUDA match those from nvidia-smi
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"

# Get the first available GPU
DEVICE_ID_LIST = GPUtil.getFirstAvailable()
DEVICE_ID = DEVICE_ID_LIST[0] # grab first element from list

# Select GPU mode
caffe.set_mode_gpu()
# Select GPU id
caffe.set_device(DEVICE_ID)

# Initialize your network here
```
**Note:** At the time of writing this example, the Caffe Python wrapper only supports 1 GPU, although the underlying code supports multiple GPUs.
Calling Caffe directly from the terminal allows for using multiple GPUs.
### Occupy only 1 GPU in TensorFlow
By default, [TensorFlow](https://www.tensorflow.org/) will occupy all available GPUs when using a GPU as a device (e.g. `tf.device('/gpu:0')`).
By setting the environment variable `CUDA_VISIBLE_DEVICES`, the user can mask which GPUs should be visible to TensorFlow via CUDA (see [CUDA_VISIBLE_DEVICES - Masking GPUs](http://acceleware.com/blog/cudavisibledevices-masking-gpus)). Using GPUtil, `CUDA_VISIBLE_DEVICES` can be set programmatically based on the available GPUs.
Below is a minimum working example of how to occupy only 1 GPU in TensorFlow using GPUtil.
To run the code, copy it into a new python file (e.g. `demo_tensorflow_gputil.py`) and run it (e.g. enter `python demo_tensorflow_gputil.py` in a terminal).
**Note:** Even if you set the device you run your code on to a CPU, TensorFlow will occupy all available GPUs. To avoid this, all GPUs can be hidden from TensorFlow with `os.environ["CUDA_VISIBLE_DEVICES"] = ''`.
```python
# Import os to set the environment variable CUDA_VISIBLE_DEVICES
import os
import tensorflow as tf
import GPUtil

# Set CUDA_DEVICE_ORDER so the IDs assigned by CUDA match those from nvidia-smi
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"

# Get the first available GPU
DEVICE_ID_LIST = GPUtil.getFirstAvailable()
DEVICE_ID = DEVICE_ID_LIST[0] # grab first element from list

# Set CUDA_VISIBLE_DEVICES to mask out all other GPUs than the first available device id
os.environ["CUDA_VISIBLE_DEVICES"] = str(DEVICE_ID)

# Since all other GPUs are masked out, the first available GPU will now be identified as GPU:0
device = '/gpu:0'
print('Device ID (unmasked): ' + str(DEVICE_ID))
print('Device ID (masked): ' + str(0))

# Run a minimum working example on the selected GPU
# Start a session
with tf.Session() as sess:
    # Select the device
    with tf.device(device):
        # Declare two numbers and add them together in TensorFlow
        a = tf.constant(12)
        b = tf.constant(30)
        result = sess.run(a+b)
        print('a+b=' + str(result))
```
Your output should look something like the code block below. Notice how only one of the GPUs is found and created as a TensorFlow device.
```
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
Device: /gpu:0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:02:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:02:00.0)
a+b=42
```
Comment out the `os.environ["CUDA_VISIBLE_DEVICES"] = str(DEVICE_ID)` line and compare the two outputs.
Depending on your number of GPUs, your output should look something like the code block below.
Notice how all 4 GPUs are found and created as TensorFlow devices, whereas only 1 GPU was found and created when `CUDA_VISIBLE_DEVICES` was set.
```
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
Device: /gpu:0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:02:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x2c8e400
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 1 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:03:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x2c92040
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 2 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:83:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x2c95d90
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 3 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:84:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 0 and 2
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 0 and 3
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 1 and 2
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 1 and 3
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 2 and 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 2 and 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 3 and 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 3 and 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 1 2 3
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y Y N N
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 1: Y Y N N
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 2: N N Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 3: N N Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:02:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:1) -> (device: 1, name: TITAN X (Pascal), pci bus id: 0000:03:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:2) -> (device: 2, name: TITAN X (Pascal), pci bus id: 0000:83:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:3) -> (device: 3, name: TITAN X (Pascal), pci bus id: 0000:84:00.0)
a+b=42
```
### Monitor GPU in a separate thread
If using GPUtil to monitor GPUs during training, it may show 0% utilization. A way around this is to use a separate monitoring thread.
```python
import GPUtil
from threading import Thread
import time

class Monitor(Thread):
    def __init__(self, delay):
        super(Monitor, self).__init__()
        self.stopped = False
        self.delay = delay # Time between calls to GPUtil
        self.start()

    def run(self):
        while not self.stopped:
            GPUtil.showUtilization()
            time.sleep(self.delay)

    def stop(self):
        self.stopped = True

# Instantiate monitor with a 10-second delay between updates
monitor = Monitor(10)

# Train, etc.

# Close monitor
monitor.stop()
```
## License
See [LICENSE](https://github.com/anderskm/gputil/blob/master/LICENSE.txt)