nvitop

- **Version**: 1.3.2
- **Summary**: An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.
- **Upload time**: 2023-12-17 11:36:52
- **Requires Python**: >=3.7
- **License**: Apache License, Version 2.0 (Apache-2.0) & GNU General Public License, Version 3 (GPL-3.0)
- **Keywords**: nvidia, nvidia-smi, nvml, cuda, gpu, top, monitoring

# nvitop

<!-- markdownlint-disable html -->

![Python 3.7+](https://img.shields.io/badge/Python-3.7%2B-brightgreen)
[![PyPI](https://img.shields.io/pypi/v/nvitop?label=pypi&logo=pypi)](https://pypi.org/project/nvitop)
[![conda-forge](https://img.shields.io/conda/vn/conda-forge/nvitop?label=conda&logo=condaforge)](https://anaconda.org/conda-forge/nvitop)
[![Documentation Status](https://img.shields.io/readthedocs/nvitop?label=docs&logo=readthedocs)](https://nvitop.readthedocs.io)
[![Downloads](https://static.pepy.tech/personalized-badge/nvitop?period=total&left_color=grey&right_color=blue&left_text=downloads)](https://pepy.tech/project/nvitop)
[![GitHub Repo Stars](https://img.shields.io/github/stars/XuehaiPan/nvitop?label=stars&logo=github&color=brightgreen)](https://github.com/XuehaiPan/nvitop/stargazers)
[![License](https://img.shields.io/github/license/XuehaiPan/nvitop?label=license&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCIgd2lkdGg9IjI0IiBoZWlnaHQ9IjI0IiBmaWxsPSIjZmZmZmZmIj48cGF0aCBmaWxsLXJ1bGU9ImV2ZW5vZGQiIGQ9Ik0xMi43NSAyLjc1YS43NS43NSAwIDAwLTEuNSAwVjQuNUg5LjI3NmExLjc1IDEuNzUgMCAwMC0uOTg1LjMwM0w2LjU5NiA1Ljk1N0EuMjUuMjUgMCAwMTYuNDU1IDZIMi4zNTNhLjc1Ljc1IDAgMTAwIDEuNUgzLjkzTC41NjMgMTUuMThhLjc2Mi43NjIgMCAwMC4yMS44OGMuMDguMDY0LjE2MS4xMjUuMzA5LjIyMS4xODYuMTIxLjQ1Mi4yNzguNzkyLjQzMy42OC4zMTEgMS42NjIuNjIgMi44NzYuNjJhNi45MTkgNi45MTkgMCAwMDIuODc2LS42MmMuMzQtLjE1NS42MDYtLjMxMi43OTItLjQzMy4xNS0uMDk3LjIzLS4xNTguMzEtLjIyM2EuNzUuNzUgMCAwMC4yMDktLjg3OEw1LjU2OSA3LjVoLjg4NmMuMzUxIDAgLjY5NC0uMTA2Ljk4NC0uMzAzbDEuNjk2LTEuMTU0QS4yNS4yNSAwIDAxOS4yNzUgNmgxLjk3NXYxNC41SDYuNzYzYS43NS43NSAwIDAwMCAxLjVoMTAuNDc0YS43NS43NSAwIDAwMC0xLjVIMTIuNzVWNmgxLjk3NGMuMDUgMCAuMS4wMTUuMTQuMDQzbDEuNjk3IDEuMTU0Yy4yOS4xOTcuNjMzLjMwMy45ODQuMzAzaC44ODZsLTMuMzY4IDcuNjhhLjc1Ljc1IDAgMDAuMjMuODk2Yy4wMTIuMDA5IDAgMCAuMDAyIDBhMy4xNTQgMy4xNTQgMCAwMC4zMS4yMDZjLjE4NS4xMTIuNDUuMjU2Ljc5LjRhNy4zNDMgNy4zNDMgMCAwMDIuODU1LjU2OCA3LjM0MyA3LjM0MyAwIDAwMi44NTYtLjU2OWMuMzM4LS4xNDMuNjA0LS4yODcuNzktLjM5OWEzLjUgMy41IDAgMDAuMzEtLjIwNi43NS43NSAwIDAwLjIzLS44OTZMMjAuMDcgNy41aDEuNTc4YS43NS43NSAwIDAwMC0xLjVoLTQuMTAyYS4yNS4yNSAwIDAxLS4xNC0uMDQzbC0xLjY5Ny0xLjE1NGExLjc1IDEuNzUgMCAwMC0uOTg0LS4zMDNIMTIuNzVWMi43NXpNMi4xOTMgMTUuMTk4YTUuNDE4IDUuNDE4IDAgMDAyLjU1Ny42MzUgNS40MTggNS40MTggMCAwMDIuNTU3LS42MzVMNC43NSA5LjM2OGwtMi41NTcgNS44M3ptMTQuNTEtLjAyNGMuMDgyLjA0LjE3NC4wODMuMjc1LjEyNi41My4yMjMgMS4zMDUuNDUgMi4yNzIuNDVhNS44NDYgNS44NDYgMCAwMDIuNTQ3LS41NzZMMTkuMjUgOS4zNjdsLTIuNTQ3IDUuODA3eiI+PC9wYXRoPjwvc3ZnPgo=)](#license)

An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management. The full API references are hosted at <https://nvitop.readthedocs.io>.

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/171005261-1aad126e-dc27-4ed3-a89b-7f9c1c998bf7.png" alt="Monitor">
  <br/>
  Monitor mode of <code>nvitop</code>.
  <br/>
  (TERM: GNOME Terminal / OS: Ubuntu 16.04 LTS (over SSH) / Locale: <code>en_US.UTF-8</code>)
</p>

### Table of Contents  <!-- omit in toc --> <!-- markdownlint-disable heading-increment -->

- [Features](#features)
- [Requirements](#requirements)
- [Installation](#installation)
- [Usage](#usage)
  - [Device and Process Status](#device-and-process-status)
  - [Resource Monitor](#resource-monitor)
    - [For Docker Users](#for-docker-users)
    - [For SSH Users](#for-ssh-users)
    - [Command Line Options and Environment Variables](#command-line-options-and-environment-variables)
    - [Keybindings for Monitor Mode](#keybindings-for-monitor-mode)
  - [CUDA Visible Devices Selection Tool](#cuda-visible-devices-selection-tool)
  - [Callback Functions for Machine Learning Frameworks](#callback-functions-for-machine-learning-frameworks)
    - [Callback for TensorFlow (Keras)](#callback-for-tensorflow-keras)
    - [Callback for PyTorch Lightning](#callback-for-pytorch-lightning)
    - [TensorBoard Integration](#tensorboard-integration)
  - [More than a Monitor](#more-than-a-monitor)
    - [Quick Start](#quick-start)
    - [Status Snapshot](#status-snapshot)
    - [Resource Metric Collector](#resource-metric-collector)
    - [Low-level APIs](#low-level-apis)
      - [Device](#device)
      - [Process](#process)
      - [Host (inherited from psutil)](#host-inherited-from-psutil)
- [Screenshots](#screenshots)
- [Changelog](#changelog)
- [License](#license)
  - [Copyright Notice](#copyright-notice)

------

`nvitop` is an interactive NVIDIA device and process monitoring tool. It has a colorful and informative interface that continuously updates the status of the devices and processes. As a resource monitor, it includes many features and options, such as tree-view, environment variable viewing, process filtering, process metrics monitoring, etc. Beyond that, the package also ships a [CUDA device selection tool `nvisel`](#cuda-visible-devices-selection-tool) for deep learning researchers. It also provides handy APIs that allow developers to write their own monitoring tools. Please refer to section [More than a Monitor](#more-than-a-monitor) and the full API references at <https://nvitop.readthedocs.io> for more information.

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/202362811-34f2c01d-97c8-49d2-b19b-0d7da648f2d5.png" alt="Filter">
  <br/>
  Process filtering and a more colorful interface.
</p>

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/202362686-859bf4ad-6237-46ca-b2f7-f547d2f63213.png" alt="Comparison">
  <br/>
  Compare to <code>nvidia-smi</code>.
</p>

------

## Features

- **Informative and fancy output**: show more information than `nvidia-smi` with colorized fancy box drawing.
- **Monitor mode**: can run as a resource monitor, rather than print the results only once.
  - bar charts and history graphs
  - process sorting
  - process filtering
  - send signals to processes with a keystroke
  - tree-view screen for GPU processes and their parent processes
  - environment variable screen
  - help screen
  - mouse support
- **Interactive**: responsive to user input (from keyboard and/or mouse) in monitor mode. (vs. [gpustat](https://github.com/wookayin/gpustat) & [py3nvml](https://github.com/fbcotter/py3nvml))
- **Efficient**:
  - query device status using [*NVML Python bindings*](https://pypi.org/project/nvidia-ml-py) directly, instead of parsing the output of `nvidia-smi`. (vs. [nvidia-htop](https://github.com/peci1/nvidia-htop))
  - support sparse query and cache results with `TTLCache` from [cachetools](https://github.com/tkem/cachetools). (vs. [gpustat](https://github.com/wookayin/gpustat))
  - display information using the `curses` library rather than `print` with ANSI escape codes. (vs. [py3nvml](https://github.com/fbcotter/py3nvml))
  - asynchronously gather information using multi-threading and respond to user input much faster. (vs. [nvtop](https://github.com/Syllo/nvtop))
- **Portable**: work on both Linux and Windows.
  - get host process information using the cross-platform library [psutil](https://github.com/giampaolo/psutil) instead of calling `ps -p <pid>` in a subprocess. (vs. [nvidia-htop](https://github.com/peci1/nvidia-htop) & [py3nvml](https://github.com/fbcotter/py3nvml))
  - written in pure Python, easy to install with `pip`. (vs. [nvtop](https://github.com/Syllo/nvtop))
- **Integrable**: easy to integrate into other applications, more than monitoring. (vs. [nvidia-htop](https://github.com/peci1/nvidia-htop) & [nvtop](https://github.com/Syllo/nvtop))

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/129374533-fe06c01a-630d-4994-b54b-821cccd0d33c.png" alt="Windows">
  <br/>
  <code>nvitop</code> supports Windows!
  <br/>
  (SHELL: PowerShell / TERM: Windows Terminal / OS: Windows 10 / Locale: <code>en-US</code>)
</p>

------

## Requirements

- Python 3.7+
- NVIDIA Management Library (NVML)
- nvidia-ml-py
- psutil
- cachetools
- termcolor
- curses<sup>[*](#curses)</sup> (with `libncursesw`)

**NOTE:** The [NVIDIA Management Library (*NVML*)](https://developer.nvidia.com/nvidia-management-library-nvml) is a C-based programmatic interface for monitoring and managing various states of NVIDIA GPU devices. The runtime version of the NVML library ships with the NVIDIA display driver (available at [Download Drivers | NVIDIA](https://www.nvidia.com/Download/index.aspx)), or can be downloaded as part of the NVIDIA CUDA Toolkit (available at [CUDA Toolkit | NVIDIA Developer](https://developer.nvidia.com/cuda-downloads)). The lists of OS platforms and NVIDIA-GPUs supported by the NVML library can be found in the [NVML API Reference](https://docs.nvidia.com/deploy/nvml-api/nvml-api-reference.html).
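As a quick sanity check, the NVML runtime can be probed from Python through the `nvidia-ml-py` bindings that `nvitop` itself builds on. The helper below is an illustrative sketch (the function name is ours, not part of `nvitop`) that returns `None` instead of raising when the bindings or the driver library are absent:

```python
try:
    import pynvml  # provided by the `nvidia-ml-py` package
except ImportError:  # bindings not installed
    pynvml = None


def probe_driver_version():
    """Return the NVIDIA driver version string, or None if NVML is unavailable."""
    if pynvml is None:
        return None
    try:
        pynvml.nvmlInit()
    except pynvml.NVMLError:  # no driver / no NVML library on this host
        return None
    try:
        version = pynvml.nvmlSystemGetDriverVersion()
        # Older bindings return bytes; newer ones return str.
        return version.decode() if isinstance(version, bytes) else version
    finally:
        pynvml.nvmlShutdown()


print(probe_driver_version())
```

On a machine with a working driver this prints a version string such as `525.x`; otherwise it prints `None`.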

This repository contains a Bash script to install/upgrade the NVIDIA drivers for Ubuntu Linux. For example:

```bash
git clone --depth=1 https://github.com/XuehaiPan/nvitop.git && cd nvitop

# Change to tty3 console (required for desktop users with GUI (tty2))
# Optional for SSH users
sudo chvt 3  # or use keyboard shortcut: Ctrl-LeftAlt-F3

bash install-nvidia-driver.sh --package=nvidia-driver-470  # install the R470 driver from ppa:graphics-drivers
bash install-nvidia-driver.sh --latest                     # install the latest driver from ppa:graphics-drivers
```

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/174480112-e9a35edc-8f42-438e-a103-1d0ce998b381.png" alt="install-nvidia-driver">
  <br/>
  NVIDIA driver installer for Ubuntu Linux.
</p>

Run `bash install-nvidia-driver.sh --help` for more information.

<a name="curses">*</a> The `curses` library is a built-in module of Python on Unix-like systems; on Windows, it is provided by the third-party `windows-curses` package based on PDCurses. `nvitop` may behave inconsistently on different terminal emulators on Windows (e.g., missing mouse support).

------

## Installation

**It is highly recommended to install `nvitop` in an isolated virtual environment.** Simple installation and run via [`pipx`](https://pypa.github.io/pipx):

```bash
pipx run nvitop
```

Install from PyPI ([![PyPI](https://img.shields.io/pypi/v/nvitop?label=pypi&logo=pypi)](https://pypi.org/project/nvitop)):

```bash
pip3 install --upgrade nvitop
```

Install from conda-forge ([![conda-forge](https://img.shields.io/conda/v/conda-forge/nvitop?logo=condaforge)](https://anaconda.org/conda-forge/nvitop)):

```bash
conda install -c conda-forge nvitop
```

Install the latest version from GitHub (![Commit Count](https://img.shields.io/github/commits-since/XuehaiPan/nvitop/v1.3.2)):

```bash
pip3 install --upgrade pip setuptools
pip3 install git+https://github.com/XuehaiPan/nvitop.git#egg=nvitop
```

Or, clone this repo and install manually:

```bash
git clone --depth=1 https://github.com/XuehaiPan/nvitop.git
cd nvitop
pip3 install .
```

**NOTE:** If you encounter the *"nvitop: command not found"* error after installation, please check whether you have added the Python console script path (e.g., `"${HOME}/.local/bin"`) to your `PATH` environment variable. Alternatively, you can use `python3 -m nvitop`.

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/178963038-a5cd4eb5-02a8-4456-966f-d5ff04eb44d8.png" alt="MIG Device Support">
  <br/>
  MIG Device Support.
  <br/>
</p>

------

## Usage

### Device and Process Status

Query the device and process status. The output is similar to `nvidia-smi`, but has been enriched and colorized.

```bash
# Query the status of all devices
$ nvitop -1  # or use `python3 -m nvitop -1`

# Specify query devices (by integer indices)
$ nvitop -1 -o 0 1  # only show <GPU 0> and <GPU 1>

# Only show devices in `CUDA_VISIBLE_DEVICES` (by integer indices or UUID strings)
$ nvitop -1 -ov

# Only show GPU processes with the compute context (type: 'C' or 'C+G')
$ nvitop -1 -c
```

When the `-1` switch is on, the result will be displayed **ONLY ONCE** (same as the default behavior of `nvidia-smi`). This is much faster and has lower resource usage. See [Command Line Options](#command-line-options-and-environment-variables) for more command options.

There is also a CLI tool called `nvisel` that ships with the `nvitop` PyPI package. See [CUDA Visible Devices Selection Tool](#cuda-visible-devices-selection-tool) for more information.

### Resource Monitor

Run as a resource monitor:

```bash
# Monitor mode (when the display mode is omitted, `NVITOP_MONITOR_MODE` will be used)
$ nvitop  # or use `python3 -m nvitop`

# Automatically configure the display mode according to the terminal size
$ nvitop -m auto     # shortcut: `a` key

# Arbitrarily display as `full` mode
$ nvitop -m full     # shortcut: `f` key

# Arbitrarily display as `compact` mode
$ nvitop -m compact  # shortcut: `c` key

# Specify query devices (by integer indices)
$ nvitop -o 0 1  # only show <GPU 0> and <GPU 1>

# Only show devices in `CUDA_VISIBLE_DEVICES` (by integer indices or UUID strings)
$ nvitop -ov

# Only show GPU processes with the compute context (type: 'C' or 'C+G')
$ nvitop -c

# Use ASCII characters only
$ nvitop -U  # useful for terminals without Unicode support

# For light terminals
$ nvitop --light

# For spectrum-like bar charts (requires a terminal with 256-color support)
$ nvitop --colorful
```

You can configure the default monitor mode with the `NVITOP_MONITOR_MODE` environment variable (default `auto` if not set). See [Command Line Options and Environment Variables](#command-line-options-and-environment-variables) for more command options.

In monitor mode, you can use the <kbd>Ctrl-c</kbd> / <kbd>T</kbd> / <kbd>K</kbd> keys to interrupt / terminate / kill a process, and it is recommended to *terminate* or *kill* a process in the **tree-view screen** (shortcut: <kbd>t</kbd>). For normal users, `nvitop` displays other users' processes in low-intensity colors. **System administrators** can use `sudo nvitop` to terminate other users' processes.

To enter the process metrics screen, select a process and press the <kbd>Enter</kbd> / <kbd>Return</kbd> key. `nvitop` dynamically displays the process metrics with live graphs.

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/192108815-37c03705-be44-47d4-9908-6d05175db230.png" alt="Process Metrics Screen">
  <br/>
  Watch metrics for a specific process (shortcut: <kbd>Enter</kbd> / <kbd>Return</kbd>).
</p>

Press <kbd>h</kbd> for help or <kbd>q</kbd> to return to the terminal. See [Keybindings for Monitor Mode](#keybindings-for-monitor-mode) for more shortcuts.

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/192108664-61f1983c-6f62-48e6-87c5-29633d9c409e.png" alt="Help Screen">
  <br/>
  <code>nvitop</code> comes with a help screen (shortcut: <kbd>h</kbd>).
</p>

#### For Docker Users

Build and run the Docker image using [nvidia-docker](https://github.com/NVIDIA/nvidia-docker):

```bash
git clone --depth=1 https://github.com/XuehaiPan/nvitop.git && cd nvitop  # clone this repo first
docker build --tag nvitop:latest .  # build the Docker image
docker run -it --rm --runtime=nvidia --gpus=all --pid=host nvitop:latest  # run the Docker container
```

The [`Dockerfile`](Dockerfile) has an optional build argument `basetag` (default: `450-signed-ubuntu22.04`) for the tag of image [`nvcr.io/nvidia/driver`](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/driver/tags).

**NOTE:** Don't forget to add the `--pid=host` option when running the container.

#### For SSH Users

Run `nvitop` directly in the SSH session instead of a login shell:

```bash
ssh user@host -t nvitop                 # installed by `sudo pip3 install ...`
ssh user@host -t '~/.local/bin/nvitop'  # installed by `pip3 install --user ...`
```

**NOTE:** Users need to add the `-t` option to allocate a pseudo-terminal over the SSH session for monitor mode.

#### Command Line Options and Environment Variables

Type `nvitop --help` for more command options:

```text
usage: nvitop [--help] [--version] [--once | --monitor [{auto,full,compact}]]
              [--interval SEC] [--ascii] [--colorful] [--force-color] [--light]
              [--gpu-util-thresh th1 th2] [--mem-util-thresh th1 th2]
              [--only idx [idx ...]] [--only-visible]
              [--compute] [--only-compute] [--graphics] [--only-graphics]
              [--user [USERNAME ...]] [--pid PID [PID ...]]

An interactive NVIDIA-GPU process viewer.

options:
  --help, -h            Show this help message and exit.
  --version, -V         Show nvitop's version number and exit.
  --once, -1            Report query data only once.
  --monitor [{auto,full,compact}], -m [{auto,full,compact}]
                        Run as a resource monitor. Continuously report query data and handle user inputs.
                        If the argument is omitted, the value from `NVITOP_MONITOR_MODE` will be used.
                        (default fallback mode: auto)
  --interval SEC        Process status update interval in seconds. (default: 2)
  --ascii, --no-unicode, -U
                        Use ASCII characters only, which is useful for terminals without Unicode support.

coloring:
  --colorful            Use gradient colors to get spectrum-like bar charts. This option is only available
                        when the terminal supports 256 colors. You may need to set environment variable
                        `TERM="xterm-256color"`. Note that a terminal multiplexer, such as `tmux`, may
                        override the `TERM` variable.
  --force-color         Force colorize even when `stdout` is not a TTY terminal.
  --light               Tweak visual results for light theme terminals in monitor mode.
                        Set variable `NVITOP_MONITOR_MODE="light"` on light terminals for convenience.
  --gpu-util-thresh th1 th2
                        Thresholds of GPU utilization to determine the load intensity.
                        Coloring rules: light < th1 % <= moderate < th2 % <= heavy.
                        ( 1 <= th1 < th2 <= 99, defaults: 10 75 )
  --mem-util-thresh th1 th2
                        Thresholds of GPU memory percent to determine the load intensity.
                        Coloring rules: light < th1 % <= moderate < th2 % <= heavy.
                        ( 1 <= th1 < th2 <= 99, defaults: 10 80 )

device filtering:
  --only idx [idx ...], -o idx [idx ...]
                        Only show the specified devices, suppressing the `--only-visible` option.
  --only-visible, -ov   Only show devices in the `CUDA_VISIBLE_DEVICES` environment variable.

process filtering:
  --compute, -c         Only show GPU processes with the compute context. (type: 'C' or 'C+G')
  --only-compute, -C    Only show GPU processes exactly with the compute context. (type: 'C' only)
  --graphics, -g        Only show GPU processes with the graphics context. (type: 'G' or 'C+G')
  --only-graphics, -G   Only show GPU processes exactly with the graphics context. (type: 'G' only)
  --user [USERNAME ...], -u [USERNAME ...]
                        Only show processes of the given users (or `$USER` for no argument).
  --pid PID [PID ...], -p PID [PID ...]
                        Only show processes of the given PIDs.
```

`nvitop` can accept the following environment variables for monitor mode:

| Name                                   | Description                                         | Valid Values                                                            | Default Value     |
| -------------------------------------- | --------------------------------------------------- | ----------------------------------------------------------------------- | ----------------- |
| `NVITOP_MONITOR_MODE`                  | The default display mode (a comma-separated string) | `auto` / `full` / `compact`<br>`plain` / `colorful`<br>`dark` / `light` | `auto,plain,dark` |
| `NVITOP_GPU_UTILIZATION_THRESHOLDS`    | Thresholds of GPU utilization                       | `10,75` , `1,99`, ...                                                   | `10,75`           |
| `NVITOP_MEMORY_UTILIZATION_THRESHOLDS` | Thresholds of GPU memory percent                    | `10,80` , `1,99`, ...                                                   | `10,80`           |
| `LOGLEVEL`                             | Log level for log messages                          | `DEBUG` , `INFO`, `WARNING`, ...                                        | `WARNING`         |

For example:

```bash
# Replace the following export statements if you are not using Bash / Zsh
export NVITOP_MONITOR_MODE="full,light"

# Full monitor mode with light terminal tweaks
nvitop
```

For convenience, you can add these environment variables to your shell startup file, e.g.:

```bash
# For Bash
echo 'export NVITOP_MONITOR_MODE="full"' >> ~/.bashrc

# For Zsh
echo 'export NVITOP_MONITOR_MODE="full"' >> ~/.zshrc

# For Fish
echo 'set -gx NVITOP_MONITOR_MODE "full"' >> ~/.config/fish/config.fish

# For PowerShell
'$Env:NVITOP_MONITOR_MODE = "full"' >> $PROFILE.CurrentUserAllHosts
```
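Conceptually, `NVITOP_MONITOR_MODE` is just a comma-separated list of the mode flags listed in the table above. The sketch below (our own illustrative helper, not `nvitop`'s actual parser) shows how such a value could be split and validated:

```python
# Mode flags from the `NVITOP_MONITOR_MODE` table in this README.
VALID_MODES = {'auto', 'full', 'compact', 'plain', 'colorful', 'dark', 'light'}


def parse_monitor_mode(value):
    """Illustrative helper: split a comma-separated mode string and
    keep only the recognized flags (unknown entries are dropped)."""
    modes = [mode.strip().lower() for mode in value.split(',')]
    return [mode for mode in modes if mode in VALID_MODES]


print(parse_monitor_mode('full,light'))          # ['full', 'light']
print(parse_monitor_mode('FULL, colorful,foo'))  # ['full', 'colorful']
```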

#### Keybindings for Monitor Mode

|                                                                        Key | Binding                                                                              |
| -------------------------------------------------------------------------: | :----------------------------------------------------------------------------------- |
|                                                                        `q` | Quit and return to the terminal.                                                     |
|                                                                  `h` / `?` | Go to the help screen.                                                               |
|                                                            `a` / `f` / `c` | Change the display mode to *auto* / *full* / *compact*.                              |
|                                                     `r` / `<C-r>` / `<F5>` | Force refresh the window.                                                            |
|                                                                            |                                                                                      |
| `<Up>` / `<Down>`<br>`<A-k>` / `<A-j>`<br>`<Tab>` / `<S-Tab>`<br>`<Wheel>` | Select and highlight a process.                                                      |
|                   `<Left>` / `<Right>`<br>`<A-h>` / `<A-l>`<br>`<S-Wheel>` | Scroll the host information of processes.                                            |
|                                                                   `<Home>` | Select the first process.                                                            |
|                                                                    `<End>` | Select the last process.                                                             |
|                                                             `<C-a>`<br>`^` | Scroll left to the beginning of the process entry (i.e. beginning of line).          |
|                                                             `<C-e>`<br>`$` | Scroll right to the end of the process entry (i.e. end of line).                     |
|              `<PageUp>` / `<PageDown>`<br/> `<A-K>` / `<A-J>`<br>`[` / `]` | Scroll the entire screen (useful when there are many processes).                     |
|                                                                            |                                                                                      |
|                                                                  `<Space>` | Tag/untag current process.                                                           |
|                                                                    `<Esc>` | Clear process selection.                                                             |
|                                                             `<C-c>`<br>`I` | Send `signal.SIGINT` to the selected process (interrupt).                            |
|                                                                        `T` | Send `signal.SIGTERM` to the selected process (terminate).                           |
|                                                                        `K` | Send `signal.SIGKILL` to the selected process (kill).                                |
|                                                                            |                                                                                      |
|                                                                        `e` | Show process environment.                                                            |
|                                                                        `t` | Toggle tree-view screen.                                                             |
|                                                                  `<Enter>` | Show process metrics.                                                                |
|                                                                            |                                                                                      |
|                                                                  `,` / `.` | Select the sort column.                                                              |
|                                                                        `/` | Reverse the sort order.                                                              |
|                                                                `on` (`oN`) | Sort processes in the natural order, i.e., in ascending (descending) order of `GPU`. |
|                                                                `ou` (`oU`) | Sort processes by `USER` in ascending (descending) order.                            |
|                                                                `op` (`oP`) | Sort processes by `PID` in descending (ascending) order.                             |
|                                                                `og` (`oG`) | Sort processes by `GPU-MEM` in descending (ascending) order.                         |
|                                                                `os` (`oS`) | Sort processes by `%SM` in descending (ascending) order.                             |
|                                                                `oc` (`oC`) | Sort processes by `%CPU` in descending (ascending) order.                            |
|                                                                `om` (`oM`) | Sort processes by `%MEM` in descending (ascending) order.                            |
|                                                                `ot` (`oT`) | Sort processes by `TIME` in descending (ascending) order.                            |

**HINT:** It's recommended to terminate or kill a process in the tree-view screen (shortcut: <kbd>t</kbd>).

------

### CUDA Visible Devices Selection Tool

Automatically select `CUDA_VISIBLE_DEVICES` from the given criteria. Example usage of the CLI tool:

```console
# All devices but sorted
$ nvisel       # or use `python3 -m nvitop.select`
6,5,4,3,2,1,0,7,8

# A simple example to select 4 devices
$ nvisel -n 4  # or use `python3 -m nvitop.select -n 4`
6,5,4,3

# Select available devices that satisfy the given constraints
$ nvisel --min-count 2 --max-count 3 --min-free-memory 5GiB --max-gpu-utilization 60
6,5,4

# Set `CUDA_VISIBLE_DEVICES` environment variable using `nvisel`
$ export CUDA_DEVICE_ORDER="PCI_BUS_ID" CUDA_VISIBLE_DEVICES="$(nvisel -c 1 -f 10GiB)"
CUDA_VISIBLE_DEVICES="6,5,4,3,2,1,0"

# Use UUID strings in `CUDA_VISIBLE_DEVICES` environment variable
$ export CUDA_VISIBLE_DEVICES="$(nvisel -O uuid -c 2 -f 5000M)"
CUDA_VISIBLE_DEVICES="GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794,GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1,GPU-96de99c9-d68f-84c8-424c-7c75e59cc0a0,GPU-2428d171-8684-5b64-830c-435cd972ec4a,GPU-6d2a57c9-7783-44bb-9f53-13f36282830a,GPU-f8e5a624-2c7e-417c-e647-b764d26d4733,GPU-f9ca790e-683e-3d56-00ba-8f654e977e02"

# Pipe output to other shell utilities
$ nvisel --newline -O uuid -C 6 -f 8GiB
GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794
GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1
GPU-96de99c9-d68f-84c8-424c-7c75e59cc0a0
GPU-2428d171-8684-5b64-830c-435cd972ec4a
GPU-6d2a57c9-7783-44bb-9f53-13f36282830a
GPU-f8e5a624-2c7e-417c-e647-b764d26d4733
$ nvisel -0 -O uuid -c 2 -f 4GiB | xargs -0 -I {} nvidia-smi --id={} --query-gpu=index,memory.free --format=csv
CUDA_VISIBLE_DEVICES="GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794,GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1,GPU-96de99c9-d68f-84c8-424c-7c75e59cc0a0,GPU-2428d171-8684-5b64-830c-435cd972ec4a,GPU-6d2a57c9-7783-44bb-9f53-13f36282830a,GPU-f8e5a624-2c7e-417c-e647-b764d26d4733,GPU-f9ca790e-683e-3d56-00ba-8f654e977e02"
index, memory.free [MiB]
6, 11018 MiB
index, memory.free [MiB]
5, 11018 MiB
index, memory.free [MiB]
4, 11018 MiB
index, memory.free [MiB]
3, 11018 MiB
index, memory.free [MiB]
2, 11018 MiB
index, memory.free [MiB]
1, 11018 MiB
index, memory.free [MiB]
0, 11018 MiB

# Normalize the `CUDA_VISIBLE_DEVICES` environment variable (e.g. convert UUIDs to indices or get full UUIDs for an abbreviated form)
$ nvisel -i "GPU-18ef14e9,GPU-849d5a8d" -S
5,6
$ nvisel -i "GPU-18ef14e9,GPU-849d5a8d" -S -O uuid --newline
GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1
GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794
```

You can also integrate `nvisel` into your training script like this:

```python
# Put this at the top of the Python script
import os
from nvitop import select_devices

os.environ['CUDA_VISIBLE_DEVICES'] = ','.join(
    select_devices(format='uuid', min_count=4, min_free_memory='8GiB')
)
```

Type `nvisel --help` for more command options:

```text
usage: nvisel [--help] [--version]
              [--inherit [CUDA_VISIBLE_DEVICES]] [--account-as-free [USERNAME ...]]
              [--min-count N] [--max-count N] [--count N]
              [--min-free-memory SIZE] [--min-total-memory SIZE]
              [--max-gpu-utilization RATE] [--max-memory-utilization RATE]
              [--tolerance TOL]
              [--format FORMAT] [--sep SEP | --newline | --null] [--no-sort]

CUDA visible devices selection tool.

options:
  --help, -h            Show this help message and exit.
  --version, -V         Show nvisel's version number and exit.

constraints:
  --inherit [CUDA_VISIBLE_DEVICES], -i [CUDA_VISIBLE_DEVICES]
                        Inherit the given `CUDA_VISIBLE_DEVICES`. If the argument is omitted, use the
                        value from the environment. This means selecting a subset of the currently
                        CUDA-visible devices.
  --account-as-free [USERNAME ...]
                        Account the used GPU memory of the given users as free memory.
                        If this option is specified but without argument, `$USER` will be used.
  --min-count N, -c N   Minimum number of devices to select. (default: 0)
                        The tool will fail (exit non-zero) if the requested resource is not available.
  --max-count N, -C N   Maximum number of devices to select. (default: all devices)
  --count N, -n N       Override both `--min-count N` and `--max-count N`.
  --min-free-memory SIZE, -f SIZE
                        Minimum free memory of devices to select. (example value: 4GiB)
                        If this constraint is given, check against all devices.
  --min-total-memory SIZE, -t SIZE
                        Minimum total memory of devices to select. (example value: 10GiB)
                        If this constraint is given, check against all devices.
  --max-gpu-utilization RATE, -G RATE
                        Maximum GPU utilization rate of devices to select. (example value: 30)
                        If this constraint is given, check against all devices.
  --max-memory-utilization RATE, -M RATE
                        Maximum memory bandwidth utilization rate of devices to select. (example value: 50)
                        If this constraint is given, check against all devices.
  --tolerance TOL, --tol TOL
                        The constraints tolerance (in percentage). (default: 0, i.e., strict)
                        This option can loosen the constraints if the requested resource is not available.
                        For example, setting `--tolerance=20` will accept a device with only 4GiB of free
                        memory when `--min-free-memory=5GiB` is set.

formatting:
  --format FORMAT, -O FORMAT
                        The output format of the selected device identifiers. (default: index)
                        If any MIG device is found, the output format will fall back to `uuid`.
  --sep SEP, --separator SEP, -s SEP
                        Separator for the output. (default: ',')
  --newline             Use newline character as separator for the output, equivalent to `--sep=$'\n'`.
  --null, -0            Use null character ('\x00') as separator for the output. This option corresponds
                        to the `-0` option of `xargs`.
  --no-sort, -S         Do not sort the devices by memory usage and GPU utilization.
```

------

### Callback Functions for Machine Learning Frameworks

`nvitop` provides two builtin callbacks for [TensorFlow (Keras)](https://www.tensorflow.org) and [PyTorch Lightning](https://pytorchlightning.ai).

#### Callback for [TensorFlow (Keras)](https://www.tensorflow.org)

```python
from tensorflow.python.keras.applications import Xception
from tensorflow.python.keras.utils.multi_gpu_utils import multi_gpu_model
from tensorflow.python.keras.callbacks import TensorBoard
from nvitop.callbacks.keras import GpuStatsLogger
gpus = ['/gpu:0', '/gpu:1']  # or `gpus = [0, 1]` or `gpus = 2`
model = Xception(weights=None)
model = multi_gpu_model(model, gpus)  # optional
model.compile(...)
tb_callback = TensorBoard(log_dir='./logs')  # or `keras.callbacks.CSVLogger`
gpu_stats = GpuStatsLogger(gpus)
model.fit(..., callbacks=[gpu_stats, tb_callback])
```

**NOTE:** Users should assign a `keras.callbacks.TensorBoard` or `keras.callbacks.CSVLogger` callback to the model, and the `GpuStatsLogger` callback should be placed before the `keras.callbacks.TensorBoard` / `keras.callbacks.CSVLogger` callback.

#### Callback for [PyTorch Lightning](https://lightning.ai)

```python
from lightning.pytorch import Trainer
from nvitop.callbacks.lightning import GpuStatsLogger
gpu_stats = GpuStatsLogger()
trainer = Trainer(gpus=[...], logger=True, callbacks=[gpu_stats])
```

**NOTE:** Users should assign a logger to the trainer.

#### [TensorBoard](https://github.com/tensorflow/tensorboard) Integration

Please refer to [Resource Metric Collector](#resource-metric-collector) for an example.

------

### More than a Monitor

`nvitop` can be easily integrated into other applications. You can use `nvitop` to build your own monitoring tools. The full API references are hosted at <https://nvitop.readthedocs.io>.

#### Quick Start

A minimal script to monitor the GPU devices based on APIs from `nvitop`:

```python
from nvitop import Device

devices = Device.all()  # or `Device.cuda.all()` to use CUDA ordinal instead
for device in devices:
    processes = device.processes()  # type: Dict[int, GpuProcess]
    sorted_pids = sorted(processes.keys())

    print(device)
    print(f'  - Fan speed:       {device.fan_speed()}%')
    print(f'  - Temperature:     {device.temperature()}C')
    print(f'  - GPU utilization: {device.gpu_utilization()}%')
    print(f'  - Total memory:    {device.memory_total_human()}')
    print(f'  - Used memory:     {device.memory_used_human()}')
    print(f'  - Free memory:     {device.memory_free_human()}')
    print(f'  - Processes ({len(processes)}): {sorted_pids}')
    for pid in sorted_pids:
        print(f'    - {processes[pid]}')
    print('-' * 120)
```

Another more advanced approach with coloring:

```python
import time

from nvitop import Device, GpuProcess, NA, colored

print(colored(time.strftime('%a %b %d %H:%M:%S %Y'), color='red', attrs=('bold',)))

devices = Device.cuda.all()  # or `Device.all()` to use NVML ordinal instead
separator = False
for device in devices:
    processes = device.processes()  # type: Dict[int, GpuProcess]

    print(colored(str(device), color='green', attrs=('bold',)))
    print(colored('  - Fan speed:       ', color='blue', attrs=('bold',)) + f'{device.fan_speed()}%')
    print(colored('  - Temperature:     ', color='blue', attrs=('bold',)) + f'{device.temperature()}C')
    print(colored('  - GPU utilization: ', color='blue', attrs=('bold',)) + f'{device.gpu_utilization()}%')
    print(colored('  - Total memory:    ', color='blue', attrs=('bold',)) + f'{device.memory_total_human()}')
    print(colored('  - Used memory:     ', color='blue', attrs=('bold',)) + f'{device.memory_used_human()}')
    print(colored('  - Free memory:     ', color='blue', attrs=('bold',)) + f'{device.memory_free_human()}')
    if len(processes) > 0:
        processes = GpuProcess.take_snapshots(processes.values(), failsafe=True)
        processes.sort(key=lambda process: (process.username, process.pid))

        print(colored(f'  - Processes ({len(processes)}):', color='blue', attrs=('bold',)))
        fmt = '    {pid:<5}  {username:<8} {cpu:>5}  {host_memory:>8} {time:>8}  {gpu_memory:>8}  {sm:>3}  {command:<}'.format
        print(colored(fmt(pid='PID', username='USERNAME',
                          cpu='CPU%', host_memory='HOST-MEM', time='TIME',
                          gpu_memory='GPU-MEM', sm='SM%',
                          command='COMMAND'),
                      attrs=('bold',)))
        for snapshot in processes:
            print(fmt(pid=snapshot.pid,
                      username=snapshot.username[:7] + ('+' if len(snapshot.username) > 8 else snapshot.username[7:8]),
                      cpu=snapshot.cpu_percent, host_memory=snapshot.host_memory_human,
                      time=snapshot.running_time_human,
                      gpu_memory=(snapshot.gpu_memory_human if snapshot.gpu_memory_human is not NA else 'WDDM:N/A'),
                      sm=snapshot.gpu_sm_utilization,
                      command=snapshot.command))
    else:
        print(colored('  - No Running Processes', attrs=('bold',)))

    if separator:
        print('-' * 120)
    separator = True
```

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/177041142-fe988d58-6a97-4559-84fd-b51204cf9231.png" alt="Demo">
  <br/>
  An example monitoring script built with APIs from <code>nvitop</code>.
</p>

------

#### Status Snapshot

`nvitop` provides a helper function [`take_snapshots`](https://nvitop.readthedocs.io/en/latest/api/collector.html#nvitop.take_snapshots) to retrieve the status of both GPU devices and GPU processes at once. You can type `help(nvitop.take_snapshots)` in Python REPL for detailed documentation.

```python
In [1]: from nvitop import take_snapshots, Device
   ...: import os
   ...: os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
   ...: os.environ['CUDA_VISIBLE_DEVICES'] = '1,0'  # comma-separated integers or UUID strings

In [2]: take_snapshots()  # equivalent to `take_snapshots(Device.all())`
Out[2]:
SnapshotResult(
    devices=[
        DeviceSnapshot(
            real=Device(index=0, ...),
            ...
        ),
        ...
    ],
    gpu_processes=[
        GpuProcessSnapshot(
            real=GpuProcess(pid=xxxxxx, device=Device(index=0, ...), ...),
            ...
        ),
        ...
    ]
)

In [3]: device_snapshots, gpu_process_snapshots = take_snapshots(Device.all())  # type: Tuple[List[DeviceSnapshot], List[GpuProcessSnapshot]]

In [4]: device_snapshots, _ = take_snapshots(gpu_processes=False)  # ignore process snapshots

In [5]: take_snapshots(Device.cuda.all())  # use CUDA device enumeration
Out[5]:
SnapshotResult(
    devices=[
        CudaDeviceSnapshot(
            real=CudaDevice(cuda_index=0, nvml_index=1, ...),
            ...
        ),
        CudaDeviceSnapshot(
            real=CudaDevice(cuda_index=1, nvml_index=0, ...),
            ...
        ),
    ],
    gpu_processes=[
        GpuProcessSnapshot(
            real=GpuProcess(pid=xxxxxx, device=CudaDevice(cuda_index=0, ...), ...),
            ...
        ),
        ...
    ]
)

In [6]: take_snapshots(Device.cuda(1))  # <CUDA 1> only
Out[6]:
SnapshotResult(
    devices=[
        CudaDeviceSnapshot(
            real=CudaDevice(cuda_index=1, nvml_index=0, ...),
            ...
        )
    ],
    gpu_processes=[
        GpuProcessSnapshot(
            real=GpuProcess(pid=xxxxxx, device=CudaDevice(cuda_index=1, ...), ...),
            ...
        ),
        ...
    ]
)
```

Please refer to section [Low-level APIs](#low-level-apis) for more information.

------

#### Resource Metric Collector

[`ResourceMetricCollector`](https://nvitop.readthedocs.io/en/latest/api/collector.html#nvitop.ResourceMetricCollector) is a class that collects resource metrics for the host, the GPUs, and the processes running on the GPUs. All metrics are collected asynchronously. You can type `help(nvitop.ResourceMetricCollector)` in Python REPL for detailed documentation.

```python
In [1]: from nvitop import ResourceMetricCollector, Device
   ...: import os
   ...: os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
   ...: os.environ['CUDA_VISIBLE_DEVICES'] = '3,2,1,0'  # comma-separated integers or UUID strings

In [2]: collector = ResourceMetricCollector()                                   # log all devices and descendant processes of the current process on the GPUs
In [3]: collector = ResourceMetricCollector(root_pids={1})                      # log all devices and all GPU processes
In [4]: collector = ResourceMetricCollector(devices=Device(0), root_pids={1})   # log <GPU 0> and all GPU processes on <GPU 0>
In [5]: collector = ResourceMetricCollector(devices=Device.cuda.all())          # use the CUDA ordinal

In [6]: with collector(tag='<tag>'):
   ...:     # Do something
   ...:     collector.collect()  # -> Dict[str, float]
# key -> '<tag>/<scope>/<metric (unit)>/<mean/min/max>'
{
    '<tag>/host/cpu_percent (%)/mean': 8.967849777683456,
    '<tag>/host/cpu_percent (%)/min': 6.1,
    '<tag>/host/cpu_percent (%)/max': 28.1,
    ...,
    '<tag>/host/memory_percent (%)/mean': 21.5,
    '<tag>/host/swap_percent (%)/mean': 0.3,
    '<tag>/host/memory_used (GiB)/mean': 91.0136418208109,
    '<tag>/host/load_average (%) (1 min)/mean': 10.251427386878328,
    '<tag>/host/load_average (%) (5 min)/mean': 10.072539414569503,
    '<tag>/host/load_average (%) (15 min)/mean': 11.91126970422139,
    ...,
    '<tag>/cuda:0 (gpu:3)/memory_used (MiB)/mean': 3.875,
    '<tag>/cuda:0 (gpu:3)/memory_free (MiB)/mean': 11015.562499999998,
    '<tag>/cuda:0 (gpu:3)/memory_total (MiB)/mean': 11019.437500000002,
    '<tag>/cuda:0 (gpu:3)/memory_percent (%)/mean': 0.0,
    '<tag>/cuda:0 (gpu:3)/gpu_utilization (%)/mean': 0.0,
    '<tag>/cuda:0 (gpu:3)/memory_utilization (%)/mean': 0.0,
    '<tag>/cuda:0 (gpu:3)/fan_speed (%)/mean': 22.0,
    '<tag>/cuda:0 (gpu:3)/temperature (C)/mean': 25.0,
    '<tag>/cuda:0 (gpu:3)/power_usage (W)/mean': 19.11166264116916,
    ...,
    '<tag>/cuda:1 (gpu:2)/memory_used (MiB)/mean': 8878.875,
    ...,
    '<tag>/cuda:2 (gpu:1)/memory_used (MiB)/mean': 8182.875,
    ...,
    '<tag>/cuda:3 (gpu:0)/memory_used (MiB)/mean': 9286.875,
    ...,
    '<tag>/pid:12345/host/cpu_percent (%)/mean': 151.34342772112265,
    '<tag>/pid:12345/host/host_memory (MiB)/mean': 44749.72373447514,
    '<tag>/pid:12345/host/host_memory_percent (%)/mean': 8.675082352111717,
    '<tag>/pid:12345/host/running_time (min)': 336.23803206741576,
    '<tag>/pid:12345/cuda:1 (gpu:4)/gpu_memory (MiB)/mean': 8861.0,
    '<tag>/pid:12345/cuda:1 (gpu:4)/gpu_memory_percent (%)/mean': 80.4,
    '<tag>/pid:12345/cuda:1 (gpu:4)/gpu_memory_utilization (%)/mean': 6.711118172407917,
    '<tag>/pid:12345/cuda:1 (gpu:4)/gpu_sm_utilization (%)/mean': 48.23283397736476,
    ...,
    '<tag>/duration (s)': 7.247399162035435,
    '<tag>/timestamp': 1655909466.9981883
}
```
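Because the collector returns a flat dict keyed by `'<tag>/<scope>/<metric (unit)>/<stat>'`, downstream code can regroup the metrics with a few lines of plain Python. The helper below is a hypothetical sketch (not part of `nvitop`) that groups entries by their scope component:

```python
def group_by_scope(metrics):
    """Group flat '<tag>/<scope>/<metric (unit)>/<stat>' keys by scope.

    Keys without a scope component (e.g. '<tag>/duration (s)') are skipped.
    """
    grouped = {}
    for key, value in metrics.items():
        parts = key.split('/')
        if len(parts) < 3:
            continue  # e.g. '<tag>/duration (s)' or '<tag>/timestamp'
        scope = parts[1]  # 'host', 'cuda:0 (gpu:3)', 'pid:12345', ...
        grouped.setdefault(scope, {})['/'.join(parts[2:])] = value
    return grouped
```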

The results can be easily logged into [TensorBoard](https://github.com/tensorflow/tensorboard) or a CSV file. For example:

```python
import os

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter

from nvitop import CudaDevice, ResourceMetricCollector
from nvitop.callbacks.tensorboard import add_scalar_dict

# Build networks and prepare datasets
...

# Logger and status collector
writer = SummaryWriter()
collector = ResourceMetricCollector(devices=CudaDevice.all(),  # log all visible CUDA devices and use the CUDA ordinal
                                    root_pids={os.getpid()},   # only log the descendant processes of the current process
                                    interval=1.0)              # snapshot interval for background daemon thread

# Start training
global_step = 0
for epoch in range(num_epoch):
    with collector(tag='train'):
        for batch in train_dataset:
            with collector(tag='batch'):
                metrics = train(net, batch)
                global_step += 1
                add_scalar_dict(writer, 'train', metrics, global_step=global_step)
                add_scalar_dict(writer, 'resources',      # tag='resources/train/batch/...'
                                collector.collect(),
                                global_step=global_step)

        add_scalar_dict(writer, 'resources',              # tag='resources/train/...'
                        collector.collect(),
                        global_step=epoch)

    with collector(tag='validate'):
        metrics = validate(net, validation_dataset)
        add_scalar_dict(writer, 'validate', metrics, global_step=epoch)
        add_scalar_dict(writer, 'resources',              # tag='resources/validate/...'
                        collector.collect(),
                        global_step=epoch)
```

Another example for logging into a CSV file:

```python
import datetime
import time

import pandas as pd

from nvitop import ResourceMetricCollector

collector = ResourceMetricCollector(root_pids={1}, interval=2.0)  # log all devices and all GPU processes
df = pd.DataFrame()

with collector(tag='resources'):
    for _ in range(60):
        # Do something
        time.sleep(60)

        metrics = collector.collect()
        df_metrics = pd.DataFrame.from_records(metrics, index=[len(df)])
        df = pd.concat([df, df_metrics], ignore_index=True)
        # Flush to CSV file ...

df.insert(0, 'time', df['resources/timestamp'].map(datetime.datetime.fromtimestamp))
df.to_csv('results.csv', index=False)
```

You can also daemonize the collector using [`collect_in_background`](https://nvitop.readthedocs.io/en/latest/api/collector.html#nvitop.collect_in_background) or [`ResourceMetricCollector.daemonize`](https://nvitop.readthedocs.io/en/latest/api/collector.html#nvitop.ResourceMetricCollector.daemonize) with callback functions.

```python
from nvitop import Device, ResourceMetricCollector, collect_in_background

logger = ...

def on_collect(metrics):  # will be called periodically
    if logger.is_closed():  # closed manually by user
        return False
    logger.log(metrics)
    return True

def on_stop(collector):  # will be called only once at stop
    if not logger.is_closed():
        logger.close()  # cleanup

# Record metrics to the logger in the background every 5 seconds.
# It will collect 5-second mean/min/max for each metric.
collect_in_background(
    on_collect,
    ResourceMetricCollector(Device.cuda.all()),
    interval=5.0,
    on_stop=on_stop,
)
```

or simply:

```python
ResourceMetricCollector(Device.cuda.all()).daemonize(
    on_collect,
    interval=5.0,
    on_stop=on_stop,
)
```
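The stop-on-`False` contract shown above can be sketched with stdlib threading. This is a simplified stand-in for `collect_in_background`, not the library's actual implementation (for instance, the real `on_stop` receives the collector as an argument):

```python
import threading
import time

def collect_in_background_sketch(on_collect, collect, interval=5.0, on_stop=None):
    """Call `collect()` every `interval` seconds and feed the result to
    `on_collect` until it returns False; then call `on_stop` once.

    A simplified stand-in for `collect_in_background`.
    """
    def loop():
        try:
            while on_collect(collect()):
                time.sleep(interval)
        finally:
            if on_stop is not None:
                on_stop()

    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread
```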

------

#### Low-level APIs

The full API references can be found at <https://nvitop.readthedocs.io>.

##### Device

The [device module](https://nvitop.readthedocs.io/en/latest/api/device.html) provides:

<table class="autosummary longtable docutils align-default">
  <colgroup>
    <col style="width: 10%" />
    <col style="width: 90%" />
  </colgroup>
  <tbody>
    <tr class="row-odd">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.Device" title="nvitop.Device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Device</span></code></a>([index, uuid, bus_id])</p></td>
      <td><p>Live class of the GPU devices, different from the device snapshots.</p></td>
    </tr>
    <tr class="row-even">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.PhysicalDevice" title="nvitop.PhysicalDevice"><code class="xref py py-obj docutils literal notranslate"><span class="pre">PhysicalDevice</span></code></a>([index, uuid, bus_id])</p></td>
      <td><p>Class for physical devices.</p></td>
    </tr>
    <tr class="row-odd">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.MigDevice" title="nvitop.MigDevice"><code class="xref py py-obj docutils literal notranslate"><span class="pre">MigDevice</span></code></a>([index, uuid, bus_id])</p></td>
      <td><p>Class for MIG devices.</p></td>
    </tr>
    <tr class="row-even">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.CudaDevice" title="nvitop.CudaDevice"><code class="xref py py-obj docutils literal notranslate"><span class="pre">CudaDevice</span></code></a>([cuda_index, nvml_index, uuid])</p></td>
      <td><p>Class for devices enumerated over the CUDA ordinal.</p></td>
    </tr>
    <tr class="row-odd">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.CudaMigDevice" title="nvitop.CudaMigDevice"><code class="xref py py-obj docutils literal notranslate"><span class="pre">CudaMigDevice</span></code></a>([cuda_index, nvml_index, uuid])</p></td>
      <td><p>Class for CUDA devices that are MIG devices.</p></td>
    </tr>
    <tr class="row-even">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.parse_cuda_visible_devices" title="nvitop.parse_cuda_visible_devices"><code class="xref py py-obj docutils literal notranslate"><span class="pre">parse_cuda_visible_devices</span></code></a>([...])</p></td>
      <td><p>Parse the given <code class="docutils literal notranslate"><span class="pre">CUDA_VISIBLE_DEVICES</span></code> value into a list of NVML device indices.</p></td>
    </tr>
    <tr class="row-odd">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.normalize_cuda_visible_devices" title="nvitop.normalize_cuda_visible_devices"><code class="xref py py-obj docutils literal notranslate"><span class="pre">normalize_cuda_visible_devices</span></code></a>([...])</p></td>
      <td><p>Parse the given <code class="docutils literal notranslate"><span class="pre">CUDA_VISIBLE_DEVICES</span></code> value and convert it into a comma-separated string of UUIDs.</p></td>
    </tr>
  </tbody>
</table>

```python
In [1]: from nvitop import (
   ...:     host,
   ...:     Device, PhysicalDevice, CudaDevice,
   ...:     parse_cuda_visible_devices, normalize_cuda_visible_devices,
   ...:     HostProcess, GpuProcess,
   ...:     NA,
   ...: )
   ...: import os
   ...: os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
   ...: os.environ['CUDA_VISIBLE_DEVICES'] = '9,8,7,6'  # comma-separated integers or UUID strings

In [2]: Device.driver_version()
Out[2]: '525.60.11'

In [3]: Device.cuda_driver_version()  # the maximum CUDA version supported by the driver (can be different from the CUDA Runtime version)
Out[3]: '12.0'

In [4]: Device.cuda_runtime_version()  # the CUDA Runtime version
Out[4]: '11.8'

In [5]: Device.count()
Out[5]: 10

In [6]: CudaDevice.count()  # or `Device.cuda.count()`
Out[6]: 4

In [7]: all_devices      = Device.all()                 # all devices on board (physical device)
   ...: nvidia0, nvidia1 = Device.from_indices([0, 1])  # from physical device indices
   ...: all_devices
Out[7]: [
    PhysicalDevice(index=0, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    PhysicalDevice(index=1, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    PhysicalDevice(index=2, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    PhysicalDevice(index=3, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    PhysicalDevice(index=4, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    PhysicalDevice(index=5, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    PhysicalDevice(index=6, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    PhysicalDevice(index=7, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    PhysicalDevice(index=8, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    PhysicalDevice(index=9, name="GeForce RTX 2080 Ti", total_memory=11019MiB)
]

In [8]: # NOTE: The function results might be different between calls when the `CUDA_VISIBLE_DEVICES` environment variable has been modified
   ...: cuda_visible_devices = Device.from_cuda_visible_devices()  # from the `CUDA_VISIBLE_DEVICES` environment variable
   ...: cuda0, cuda1         = Device.from_cuda_indices([0, 1])    # from CUDA device indices (might be different from physical device indices if `CUDA_VISIBLE_DEVICES` is set)
   ...: cuda_visible_devices = CudaDevice.all()                    # shortcut to `Device.from_cuda_visible_devices()`
   ...: cuda_visible_devices = Device.cuda.all()                   # `Device.cuda` is aliased to `CudaDevice`
   ...: cuda_visible_devices
Out[8]: [
    CudaDevice(cuda_index=0, nvml_index=9, name="NVIDIA GeForce RTX 2080 Ti", total_memory=11019MiB),
    CudaDevice(cuda_index=1, nvml_index=8, name="NVIDIA GeForce RTX 2080 Ti", total_memory=11019MiB),
    CudaDevice(cuda_index=2, nvml_index=7, name="NVIDIA GeForce RTX 2080 Ti", total_memory=11019MiB),
    CudaDevice(cuda_index=3, nvml_index=6, name="NVIDIA GeForce RTX 2080 Ti", total_memory=11019MiB)
]

In [9]: nvidia0 = Device(0)  # from device index (or `Device(index=0)`)
   ...: nvidia0
Out[9]: PhysicalDevice(index=0, name="GeForce RTX 2080 Ti", total_memory=11019MiB)

In [10]: nvidia1 = Device(uuid='GPU-01234567-89ab-cdef-0123-456789abcdef')  # from UUID string (or just `Device('GPU-xxxxxxxx-...')`)
    ...: nvidia2 = Device(bus_id='00000000:06:00.0')                        # from PCI bus ID
    ...: nvidia1
Out[10]: PhysicalDevice(index=1, name="GeForce RTX 2080 Ti", total_memory=11019MiB)

In [11]: cuda0 = CudaDevice(0)                        # from CUDA device index (equivalent to `CudaDevice(cuda_index=0)`)
    ...: cuda1 = CudaDevice(nvml_index=8)             # from physical device index
    ...: cuda3 = CudaDevice(uuid='GPU-xxxxxxxx-...')  # from UUID string
    ...: cuda4 = Device.cuda(4)                       # `Device.cuda` is aliased to `CudaDevice`
    ...: cuda0
Out[11]:
CudaDevice(cuda_index=0, nvml_index=9, name="NVIDIA GeForce RTX 2080 Ti", total_memory=11019MiB)

In [12]: nvidia0.memory_used()  # in bytes
Out[12]: 9293398016

In [13]: nvidia0.memory_used_human()
Out[13]: '8862MiB'

In [14]: nvidia0.gpu_utilization()  # in percentage
Out[14]: 5

In [15]: nvidia0.processes()  # type: Dict[int, GpuProcess]
Out[15]: {
    52059: GpuProcess(pid=52059, gpu_memory=7885MiB, type=C, device=PhysicalDevice(index=0, name="GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=52059, name='ipython3', status='sleeping', started='14:31:22')),
    53002: GpuProcess(pid=53002, gpu_memory=967MiB, type=C, device=PhysicalDevice(index=0, name="GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=53002, name='python', status='running', started='14:31:59'))
}

In [16]: nvidia1_snapshot = nvidia1.as_snapshot()
    ...: nvidia1_snapshot
Out[16]: PhysicalDeviceSnapshot(
    real=PhysicalDevice(index=1, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    bus_id='00000000:05:00.0',
    compute_mode='Default',
    clock_infos=ClockInfos(graphics=1815, sm=1815, memory=6800, video=1680),  # in MHz
    clock_speed_infos=ClockSpeedInfos(current=ClockInfos(graphics=1815, sm=1815, memory=6800, video=1680), max=ClockInfos(graphics=2100, sm=2100, memory=7000, video=1950)),  # in MHz
    cuda_compute_capability=(7, 5),
    current_driver_model='N/A',
    decoder_utilization=0,              # in percentage
    display_active='Disabled',
    display_mode='Disabled',
    encoder_utilization=0,              # in percentage
    fan_speed=22,                       # in percentage
    gpu_utilization=17,                 # in percentage (NOTE: this is the utilization rate of SMs, i.e. GPU percent)
    index=1,
    max_clock_infos=ClockInfos(graphics=2100, sm=2100, memory=7000, video=1950),  # in MHz
    memory_clock=6800,                  # in MHz
    memory_free=10462232576,            # in bytes
    memory_free_human='9977MiB',
    memory_info=MemoryInfo(total=11554717696, free=10462232576, used=1092485120),  # in bytes
    memory_percent=9.5,                 # in percentage (NOTE: this is the percentage of used GPU memory)
    memory_total=11554717696,           # in bytes
    memory_total_human='11019MiB',
    memory_usage='1041MiB / 11019MiB',
    memory_used=1092485120,             # in bytes
    memory_used_human='1041MiB',
    memory_utilization=7,               # in percentage (NOTE: this is the utilization rate of GPU memory bandwidth)
    mig_mode='N/A',
    name='GeForce RTX 2080 Ti',
    pcie_rx_throughput=1000,            # in KiB/s
    pcie_rx_throughput_human='1000KiB/s',
    pcie_throughput=ThroughputInfo(tx=1000, rx=1000),  # in KiB/s
    pcie_tx_throughput=1000,            # in KiB/s
    pcie_tx_throughput_human='1000KiB/s',
    performance_state='P2',
    persistence_mode='Disabled',
    power_limit=250000,                 # in milliwatts (mW)
    power_status='66W / 250W',          # in watts (W)
    power_usage=66051,                  # in milliwatts (mW)
    sm_clock=1815,                      # in MHz
    temperature=39,                     # in Celsius
    total_volatile_uncorrected_ecc_errors='N/A',
    utilization_rates=UtilizationRates(gpu=17, memory=7, encoder=0, decoder=0),  # in percentage
    uuid='GPU-01234567-89ab-cdef-0123-456789abcdef',
)

In [17]: nvidia1_snapshot.memory_percent  # snapshot uses properties instead of function calls
Out[17]: 9.5

In [18]: nvidia1_snapshot['memory_info']  # snapshot also supports `__getitem__` by string
Out[18]: MemoryInfo(total=11554717696, free=10462232576, used=1092485120)

In [19]: nvidia1_snapshot.bar1_memory_info  # snapshot automatically retrieves missing attributes from `real`
Out[19]: MemoryInfo(total=268435456, free=257622016, used=10813440)
```
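The snapshot semantics shown in `In [17]`–`In [19]` — captured values as plain attributes, `__getitem__` by string, and automatic fallback to the live object — can be sketched as follows. This is a simplified illustration, not `nvitop`'s actual snapshot class:

```python
class SnapshotSketch:
    """Captured values are plain attributes; `obj['name']` works like `obj.name`;
    attributes not captured at snapshot time are fetched (and cached) by calling
    the corresponding method on the live `real` object."""

    def __init__(self, real, **attrs):
        self.real = real
        self.__dict__.update(attrs)

    def __getattr__(self, name):
        # Only reached when `name` was not captured: query the live object
        value = getattr(self.real, name)()
        setattr(self, name, value)  # cache so later accesses are instant
        return value

    def __getitem__(self, name):
        return getattr(self, name)
```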

**NOTE:** Some entry values may be `'N/A'` (type: [`NaType`](https://nvitop.readthedocs.io/en/latest/index.html#nvitop.NaType), a subclass of `str`) when the corresponding resources are not applicable. The [`NA`](https://nvitop.readthedocs.io/en/latest/index.html#nvitop.NA) value supports arithmetic operations. It acts like `math.nan: float`.

```python
>>> from nvitop import NA
>>> NA
'N/A'

>>> 'memory usage: {}'.format(NA)  # NA is an instance of `str`
'memory usage: N/A'
>>> NA.lower()                     # NA is an instance of `str`
'n/a'
>>> NA.ljust(5)                    # NA is an instance of `str`
'N/A  '
>>> NA + 'str'                     # string concatenation if the operand is a string
'N/Astr'

>>> float(NA)                      # explicit conversion to float (`math.nan`)
nan
>>> NA + 1                         # auto-casting to float if the operand is a number
nan
>>> NA * 1024                      # auto-casting to float if the operand is a number
nan
>>> NA / (1024 * 1024)             # auto-casting to float if the operand is a number
nan
```
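The behavior above can be mimicked with a small `str` subclass. This is a minimal sketch for illustration; `nvitop`'s real `NaType` covers many more operators:

```python
import math

class NaTypeSketch(str):
    """An 'N/A' sentinel: a `str` subclass that acts like `math.nan` in
    arithmetic. Illustrative only, not nvitop's actual implementation."""

    def __new__(cls):
        return super().__new__(cls, 'N/A')

    def __float__(self):
        return math.nan

    def __add__(self, other):
        if isinstance(other, str):
            return str.__add__(self, other)  # string concatenation
        return math.nan  # auto-cast to float for numeric operands

    def __mul__(self, other):
        return math.nan

    def __truediv__(self, other):
        return math.nan
```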

You can use `entry != 'N/A'` checks to avoid exceptions. It's also safe to use `float(entry)` on numeric entries: numbers convert normally, while `NaType` is converted to `math.nan`. For example:

```python
memory_used: Union[int, NaType] = device.memory_used()            # memory usage in bytes or `'N/A'`
memory_used_in_mib: float       = float(memory_used) / (1 << 20)  # memory usage in Mebibytes (MiB) or `math.nan`
```

It's safe to compare `NaType` with numbers; `NaType` always compares larger than any number:

```python
devices_by_used_memory = sorted(Device.all(), key=Device.memory_used, reverse=True)  # it's safe to compare `'N/A'` with numbers
devices_by_free_memory = sorted(Device.all(), key=Device.memory_free, reverse=True)  # please add `memory_free != 'N/A'` checks if sort in descending order here
```
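When sorting in descending order, an explicit key keeps `'N/A'` entries from floating to the top. A plain-Python sketch, assuming each reading is a number or the `'N/A'` string:

```python
def free_memory_key(value):
    """Map 'N/A' to -inf so unavailable readings sort last in descending order."""
    return float('-inf') if value == 'N/A' else float(value)

readings = [10_485_760, 'N/A', 5_242_880]  # free memory in bytes, or 'N/A'
by_free_desc = sorted(readings, key=free_memory_key, reverse=True)
```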

See [`nvitop.NaType`](https://nvitop.readthedocs.io/en/latest/apis/index.html#nvitop.NaType) documentation for more details.

##### Process

The [process module](https://nvitop.readthedocs.io/en/latest/api/process.html) provides:

<table class="autosummary longtable docutils align-default">
  <colgroup>
    <col style="width: 10%" />
    <col style="width: 90%" />
  </colgroup>
  <tbody>
    <tr class="row-odd">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/process.html#nvitop.HostProcess" title="nvitop.HostProcess"><code class="xref py py-obj docutils literal notranslate"><span class="pre">HostProcess</span></code></a>([pid])</p></td>
      <td><p>Represents an OS process with the given PID.</p></td>
    </tr>
    <tr class="row-even">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/process.html#nvitop.GpuProcess" title="nvitop.GpuProcess"><code class="xref py py-obj docutils literal notranslate"><span class="pre">GpuProcess</span></code></a>(pid, device[, gpu_memory, ...])</p></td>
      <td><p>Represents a process with the given PID running on the given GPU device.</p></td>
    </tr>
    <tr class="row-odd">
      <td><p><a href="https://nvitop.readthedocs.io/en/latest/api/process.html#nvitop.command_join" title="nvitop.command_join"><code class="xref py py-obj docutils literal notranslate"><span class="pre">command_join</span></code></a>(cmdline)</p></td>
      <td><p>Returns a shell-escaped string from command line arguments.</p></td>
    </tr>
  </tbody>
</table>

```python
In [20]: processes = nvidia1.processes()  # type: Dict[int, GpuProcess]
    ...: processes
Out[20]: {
    23266: GpuProcess(pid=23266, gpu_memory=1031MiB, type=C, device=Device(index=1, name="GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=23266, name='python3', status='running', started='2021-05-10 21:02:40'))
}

In [21]: process = processes[23266]
    ...: process
Out[21]: GpuProcess(pid=23266, gpu_memory=1031MiB, type=C, device=Device(index=1, name="GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=23266, name='python3', status='running', started='2021-05-10 21:02:40'))

In [22]: process.status()  # GpuProcess will automatically inherit attributes from GpuProcess.host
Out[22]: 'running'

In [23]: process.cmdline()  # type: List[str]
Out[23]: ['python3', 'rllib_train.py']

In [24]: process.command()  # type: str
Out[24]: 'python3 rllib_train.py'

In [25]: process.cwd()  # GpuProcess will automatically inherit attributes from GpuProcess.host
Out[25]: '/home/xxxxxx/Projects/xxxxxx'

In [26]: process.gpu_memory_human()
Out[26]: '1031MiB'

In [27]: process.as_snapshot()
Out[27]: GpuProcessSnapshot(
    real=GpuProcess(pid=23266, gpu_memory=1031MiB, type=C, device=PhysicalDevice(index=1, name="GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=23266, name='python3', status='running', started='2021-05-10 21:02:40')),
    cmdline=['python3', 'rllib_train.py'],
    command='python3 rllib_train.py',
    compute_instance_id='N/A',
    cpu_percent=98.5,                       # in percentage
    device=PhysicalDevice(index=1, name="GeForce RTX 2080 Ti", total_memory=11019MiB),
    gpu_encoder_utilization=0,              # in percentage
    gpu_decoder_utilization=0,              # in percentage
    gpu_instance_id='N/A',
    gpu_memory=1081081856,                  # in bytes
    gpu_memory_human='1031MiB',
    gpu_memory_percent=9.4,                 # in percentage (NOTE: this is the percentage of used GPU memory)
    gpu_memory_utilization=5,               # in percentage (NOTE: this is the utilization rate of GPU memory bandwidth)
    gpu_sm_utilization=0,                   # in percentage (NOTE: this is the utilization rate of SMs, i.e. GPU percent)
    host=HostProcessSnapshot(
        real=HostProcess(pid=23266, name='python3', status='running', started='2021-05-10 21:02:40'),
        cmdline=['python3', 'rllib_train.py'],
        command='python3 rllib_train.py',
        cpu_percent=98.5,                   # in percentage
        host_memory=9113627439,             # in bytes
        host_memory_human='8691MiB',
        is_running=True,
        memory_percent=1.6849018430285683,  # in percentage
        name='python3',
        running_time=datetime.timedelta(days=1, seconds=80013, microseconds=470024),
        running_time_human='46:13:33',
        running_time_in_seconds=166413.470024,
        status='running',
        username='panxuehai',
    ),
    host_memory=9113627439,                 # in bytes
    host_memory_human='8691MiB',
    is_running=True,
    memory_percent=1.6849018430285683,      # in percentage (NOTE: this is the percentage of used host memory)
    name='python3',
    pid=23266,
    running_time=datetime.timedelta(days=1, seconds=80013, microseconds=470024),
    running_time_human='46:13:33',
    running_time_in_seconds=166413.470024,
    status='running',
    type='C',                               # 'C' for Compute / 'G' for Graphics / 'C+G' for Both
    username='panxuehai',
)

In [28]: process.uids()  # GpuProcess will automatically inherit attributes from GpuProcess.host
Out[28]: puids(real=1001, effective=1001, saved=1001)

In [29]: process.kill()  # GpuProcess will automatically inherit attributes from GpuProcess.host

In [30]: list(map(Device.processes, all_devices))  # all processes
Out[30]: [
    {
        52059: GpuProcess(pid=52059, gpu_memory=7885MiB, type=C, device=PhysicalDevice(index=0, name="GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=52059, name='ipython3', status='sleeping', started='14:31:22')),
        53002: GpuProcess(pid=53002, gpu_memory=967MiB, type=C, device=PhysicalDevice(index=0, name="GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=53002, name='python', status='running', started='14:31:59'))
    },
    {},
    {},
    {},
    {},
    {},
    {},
    {},
    {
        84748: GpuProcess(pid=84748, gpu_memory=8975MiB, type=C, device=PhysicalDevice(index=8, name="GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=84748, name='python', status='running', started='11:13:38'))
    },
    {
        84748: GpuProcess(pid=84748, gpu_memory=8341MiB, type=C, device=PhysicalDevice(index=9, name="GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=84748, name='python', status='running', started='11:13:38'))
    }
]

In [31]: this = HostProcess(os.getpid())
    ...: this
Out[31]: HostProcess(pid=35783, name='python', status='running', started='19:19:00')

In [32]: this.cmdline()  # type: List[str]
Out[32]: ['python', '-c', 'import IPython; IPython.terminal.ipapp.launch_new_instance()']

In [33]: this.command()  # not simply `' '.join(cmdline)` but quotes are added
Out[33]: 'python -c "import IPython; IPython.terminal.ipapp.launch_new_instance()"'

In [34]: this.memory_info()
Out[34]: pmem(rss=83988480, vms=343543808, shared=12079104, text=8192, lib=0, data=297435136, dirty=0)

In [35]: import cupy as cp
    ...: x = cp.zeros((10000, 1000))
    ...: this = GpuProcess(os.getpid(), cuda0)  # construct from `GpuProcess(pid, device)` explicitly rather than calling `device.processes()`
    ...: this
Out[35]: GpuProcess(pid=35783, gpu_memory=N/A, type=N/A, device=CudaDevice(cuda_index=0, nvml_index=9, name="NVIDIA GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=35783, name='python', status='running', started='19:19:00'))

In [36]: this.update_gpu_status()  # update used GPU memory from new driver queries
Out[36]: 267386880

In [37]: this
Out[37]: GpuProcess(pid=35783, gpu_memory=255MiB, type=C, device=CudaDevice(cuda_index=0, nvml_index=9, name="NVIDIA GeForce RTX 2080 Ti", total_memory=11019MiB), host=HostProcess(pid=35783, name='python', status='running', started='19:19:00'))

In [38]: id(this) == id(GpuProcess(os.getpid(), cuda0))  # IMPORTANT: the instance will be reused while the process is running
Out[38]: True
```
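The `command()` values shown above are shell-quoted rather than a naive `' '.join(cmdline)`. A stdlib approximation of `command_join` using `shlex.join` (Python 3.8+; note that `shlex` uses single quotes, while `nvitop`'s output above shows double quotes):

```python
import shlex

# The cmdline from the transcript above.
cmdline = ['python', '-c', 'import IPython; IPython.terminal.ipapp.launch_new_instance()']

# Arguments containing spaces or shell metacharacters get quoted.
command = shlex.join(cmdline)
print(command)  # python -c 'import IPython; IPython.terminal.ipapp.launch_new_instance()'
```

This matters when you log or re-run a captured command line: an unquoted join would split the `-c` argument at every space.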

##### Host (inherited from [psutil](https://github.com/giampaolo/psutil))

```python
In [39]: host.cpu_count()
Out[39]: 88

In [40]: host.cpu_percent()
Out[40]: 18.5

In [41]: host.cpu_times()
Out[41]: scputimes(user=2346377.62, nice=53321.44, system=579177.52, idle=10323719.85, iowait=28750.22, irq=0.0, softirq=11566.87, steal=0.0, guest=0.0, guest_nice=0.0)

In [42]: host.load_average()
Out[42]: (14.88, 17.8, 19.91)

In [43]: host.virtual_memory()
Out[43]: svmem(total=270352478208, available=192275968000, percent=28.9, used=53350518784, free=88924037120, active=125081112576, inactive=44803993600, buffers=37006450688, cached=91071471616, shared=23820632064, slab=8200687616)

In [44]: host.memory_percent()
Out[44]: 28.9

In [45]: host.swap_memory()
Out[45]: sswap(total=65534947328, used=475136, free=65534472192, percent=0.0, sin=2404139008, sout=4259434496)

In [46]: host.swap_percent()
Out[46]: 0.0
```
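As the heading says, the `host` module is a thin wrapper over [psutil](https://github.com/giampaolo/psutil), which returns the rich named tuples (`scputimes`, `svmem`, `sswap`, ...) shown above. For rough orientation, two of the calls have stdlib-only counterparts:

```python
import os

# Stdlib approximations only -- the real `nvitop.host` calls delegate to
# psutil and return richer named tuples.
print(os.cpu_count())   # ~ host.cpu_count()
print(os.getloadavg())  # ~ host.load_average() (Unix only)
```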

------

## Screenshots

![Screen Recording](https://user-images.githubusercontent.com/16078332/113173772-508dc380-927c-11eb-84c5-b6f496e54c08.gif)

Example output of `nvitop -1`:

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/117765250-41793880-b260-11eb-8a1b-9c32868a46d4.png" alt="Screenshot">
</p>

Example output of `nvitop`:

<table>
  <tr valign="center" align="center">
    <td>Full</td>
    <td>Compact</td>
  </tr>
  <tr valign="top" align="center">
    <td><img src="https://user-images.githubusercontent.com/16078332/117765260-4342fc00-b260-11eb-9198-7bcfdd1db113.png" alt="Full"></td>
    <td><img src="https://user-images.githubusercontent.com/16078332/117765274-476f1980-b260-11eb-9afd-877cca54e0bc.png" alt="Compact"></td>
  </tr>
</table>

Tree-view screen (shortcut: <kbd>t</kbd>) for GPU processes and their ancestors:

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/123914889-7b3e0400-d9b2-11eb-9b71-a48971617c2a.png" alt="Tree-view">
</p>

**NOTE:** The process tree is built in backward order (recursively from each GPU process back to the tree root). Only GPU processes, along with their children and ancestors (parents, grandparents, and so on), will be shown; other running processes are not displayed.
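The backward construction can be sketched with plain data (this is an illustration, not `nvitop`'s actual implementation): starting from each GPU process, walk a hypothetical `pid -> ppid` map up to the root and keep only the visited processes.

```python
def ancestors(pid, parent_of):
    """Walk the pid -> ppid map upward and return the chain of ancestor PIDs."""
    chain = []
    while pid in parent_of:
        pid = parent_of[pid]
        chain.append(pid)
    return chain

parent_of = {23266: 1200, 1200: 1}  # hypothetical process table (pid -> ppid)
gpu_pids = [23266]                  # PIDs that hold a GPU context

shown = set()
for pid in gpu_pids:
    shown.add(pid)
    shown.update(ancestors(pid, parent_of))

print(sorted(shown))  # [1, 1200, 23266] -- only the GPU process and its ancestors
```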

Environment variable screen (shortcut: <kbd>e</kbd>):

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/123914881-7a0cd700-d9b2-11eb-8da1-26f7a3a7c2b6.png" alt="Environment Screen">
</p>

Spectrum-like bar charts (with option <code>--colorful</code>):

<p align="center">
  <img width="100%" src="https://user-images.githubusercontent.com/16078332/182555606-8388e5a5-43a9-4990-90d4-46e45ac448a0.png" alt="Spectrum-like Bar Charts">
  <br/>
</p>

------

## Changelog

See [CHANGELOG.md](https://github.com/XuehaiPan/nvitop/blob/HEAD/CHANGELOG.md).

------

## License

The source code of `nvitop` is dual-licensed under the **Apache License, Version 2.0 (Apache-2.0)** and the **GNU General Public License, Version 3 (GPL-3.0)**. The `nvitop` CLI is released under the **GPL-3.0** license, while the remaining parts of `nvitop` are released under the **Apache-2.0** license. The license files can be found at [LICENSE](https://github.com/XuehaiPan/nvitop/blob/HEAD/LICENSE) (Apache-2.0) and [COPYING](https://github.com/XuehaiPan/nvitop/blob/HEAD/COPYING) (GPL-3.0).

The source code is organized as:

```text
nvitop           (GPL-3.0)
├── __init__.py  (Apache-2.0)
├── version.py   (Apache-2.0)
├── api          (Apache-2.0)
│   ├── LICENSE  (Apache-2.0)
│   └── *        (Apache-2.0)
├── callbacks    (Apache-2.0)
│   ├── LICENSE  (Apache-2.0)
│   └── *        (Apache-2.0)
├── select.py    (Apache-2.0)
├── __main__.py  (GPL-3.0)
├── cli.py       (GPL-3.0)
└── gui          (GPL-3.0)
    ├── COPYING  (GPL-3.0)
    └── *        (GPL-3.0)
```

### Copyright Notice

Please feel free to use `nvitop` as a dependency for your own projects. The following Python import statements are permitted:

```python
import nvitop
import nvitop as alias
import nvitop.api as api
import nvitop.device as device
from nvitop import *
from nvitop.api import *
from nvitop import Device, ResourceMetricCollector
```

The public APIs from `nvitop` are released under the **Apache License, Version 2.0 (Apache-2.0)**. The original license files can be found at [LICENSE](https://github.com/XuehaiPan/nvitop/blob/HEAD/LICENSE), [nvitop/api/LICENSE](https://github.com/XuehaiPan/nvitop/blob/HEAD/nvitop/api/LICENSE), and [nvitop/callbacks/LICENSE](https://github.com/XuehaiPan/nvitop/blob/HEAD/nvitop/callbacks/LICENSE).

The CLI of `nvitop` is released under the **GNU General Public License, Version 3 (GPL-3.0)**. The original license files can be found at [COPYING](https://github.com/XuehaiPan/nvitop/blob/HEAD/COPYING) and [nvitop/gui/COPYING](https://github.com/XuehaiPan/nvitop/blob/HEAD/nvitop/gui/COPYING). If you dynamically load the source code of `nvitop`'s CLI or GUI:

```python
from nvitop import cli
from nvitop import gui
import nvitop.cli
import nvitop.gui
```

your source code should also be released under the GPL-3.0 License.

If you add or modify features of `nvitop`'s CLI, or copy some of its source code into your own project, your source code must also be released under the GPL-3.0 License (as `nvitop` contains some modified source code from [ranger](https://github.com/ranger/ranger), which is licensed under GPL-3.0).

                               |\n|                   `<Left>` / `<Right>`<br>`<A-h>` / `<A-l>`<br>`<S-Wheel>` | Scroll the host information of processes.                                            |\n|                                                                   `<Home>` | Select the first process.                                                            |\n|                                                                    `<End>` | Select the last process.                                                             |\n|                                                             `<C-a>`<br>`^` | Scroll left to the beginning of the process entry (i.e. beginning of line).          |\n|                                                             `<C-e>`<br>`$` | Scroll right to the end of the process entry (i.e. end of line).                     |\n|              `<PageUp>` / `<PageDown>`<br/> `<A-K>` / `<A-J>`<br>`[` / `]` | Scroll the entire screen (for a large number of processes).                          |\n|                                                                            |                                                                                      |\n|                                                                  `<Space>` | Tag/untag current process.                                                           |\n|                                                                    `<Esc>` | Clear process selection.                                                             |\n|                                                             `<C-c>`<br>`I` | Send `signal.SIGINT` to the selected process (interrupt).                            |\n|                                                                        `T` | Send `signal.SIGTERM` to the selected process (terminate).                           
|\n|                                                                        `K` | Send `signal.SIGKILL` to the selected process (kill).                                |\n|                                                                            |                                                                                      |\n|                                                                        `e` | Show process environment.                                                            |\n|                                                                        `t` | Toggle tree-view screen.                                                             |\n|                                                                  `<Enter>` | Show process metrics.                                                                |\n|                                                                            |                                                                                      |\n|                                                                  `,` / `.` | Select the sort column.                                                              |\n|                                                                        `/` | Reverse the sort order.                                                              |\n|                                                                `on` (`oN`) | Sort processes in the natural order, i.e., in ascending (descending) order of `GPU`. |\n|                                                                `ou` (`oU`) | Sort processes by `USER` in ascending (descending) order.                            |\n|                                                                `op` (`oP`) | Sort processes by `PID` in descending (ascending) order.                             |\n|                                                                `og` (`oG`) | Sort processes by `GPU-MEM` in descending (ascending) order.                     
    |\n|                                                                `os` (`oS`) | Sort processes by `%SM` in descending (ascending) order.                             |\n|                                                                `oc` (`oC`) | Sort processes by `%CPU` in descending (ascending) order.                            |\n|                                                                `om` (`oM`) | Sort processes by `%MEM` in descending (ascending) order.                            |\n|                                                                `ot` (`oT`) | Sort processes by `TIME` in descending (ascending) order.                            |\n\n**HINT:** It's recommended to terminate or kill a process in the tree-view screen (shortcut: <kbd>t</kbd>).\n\n------\n\n### CUDA Visible Devices Selection Tool\n\nAutomatically select `CUDA_VISIBLE_DEVICES` from the given criteria. Example usage of the CLI tool:\n\n```console\n# All devices but sorted\n$ nvisel       # or use `python3 -m nvitop.select`\n6,5,4,3,2,1,0,7,8\n\n# A simple example to select 4 devices\n$ nvisel -n 4  # or use `python3 -m nvitop.select -n 4`\n6,5,4,3\n\n# Select available devices that satisfy the given constraints\n$ nvisel --min-count 2 --max-count 3 --min-free-memory 5GiB --max-gpu-utilization 60\n6,5,4\n\n# Set `CUDA_VISIBLE_DEVICES` environment variable using `nvisel`\n$ export CUDA_DEVICE_ORDER=\"PCI_BUS_ID\" CUDA_VISIBLE_DEVICES=\"$(nvisel -c 1 -f 10GiB)\"\nCUDA_VISIBLE_DEVICES=\"6,5,4,3,2,1,0\"\n\n# Use UUID strings in `CUDA_VISIBLE_DEVICES` environment variable\n$ export CUDA_VISIBLE_DEVICES=\"$(nvisel -O uuid -c 2 -f 5000M)\"\nCUDA_VISIBLE_DEVICES=\"GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794,GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1,GPU-96de99c9-d68f-84c8-424c-7c75e59cc0a0,GPU-2428d171-8684-5b64-830c-435cd972ec4a,GPU-6d2a57c9-7783-44bb-9f53-13f36282830a,GPU-f8e5a624-2c7e-417c-e647-b764d26d4733,GPU-f9ca790e-683e-3d56-00ba-8f654e977e02\"\n\n# Pipe output to other shell 
utilities\n$ nvisel --newline -O uuid -C 6 -f 8GiB\nGPU-849d5a8d-610e-eeea-1fd4-81ff44a23794\nGPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1\nGPU-96de99c9-d68f-84c8-424c-7c75e59cc0a0\nGPU-2428d171-8684-5b64-830c-435cd972ec4a\nGPU-6d2a57c9-7783-44bb-9f53-13f36282830a\nGPU-f8e5a624-2c7e-417c-e647-b764d26d4733\n$ nvisel -0 -O uuid -c 2 -f 4GiB | xargs -0 -I {} nvidia-smi --id={} --query-gpu=index,memory.free --format=csv\nindex, memory.free [MiB]\n6, 11018 MiB\nindex, memory.free [MiB]\n5, 11018 MiB\nindex, memory.free [MiB]\n4, 11018 MiB\nindex, memory.free [MiB]\n3, 11018 MiB\nindex, memory.free [MiB]\n2, 11018 MiB\nindex, memory.free [MiB]\n1, 11018 MiB\nindex, memory.free [MiB]\n0, 11018 MiB\n\n# Normalize the `CUDA_VISIBLE_DEVICES` environment variable (e.g. 
convert UUIDs to indices or get full UUIDs for an abbreviated form)\n$ nvisel -i \"GPU-18ef14e9,GPU-849d5a8d\" -S\n5,6\n$ nvisel -i \"GPU-18ef14e9,GPU-849d5a8d\" -S -O uuid --newline\nGPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1\nGPU-849d5a8d-610e-eeea-1fd4-81ff44a23794\n```\n\nYou can also integrate `nvisel` into your training script like this:\n\n```python\n# Put this at the top of the Python script\nimport os\nfrom nvitop import select_devices\n\nos.environ['CUDA_VISIBLE_DEVICES'] = ','.join(\n    select_devices(format='uuid', min_count=4, min_free_memory='8GiB')\n)\n```\n\nType `nvisel --help` for more command options:\n\n```text\nusage: nvisel [--help] [--version]\n              [--inherit [CUDA_VISIBLE_DEVICES]] [--account-as-free [USERNAME ...]]\n              [--min-count N] [--max-count N] [--count N]\n              [--min-free-memory SIZE] [--min-total-memory SIZE]\n              [--max-gpu-utilization RATE] [--max-memory-utilization RATE]\n              [--tolerance TOL]\n              [--format FORMAT] [--sep SEP | --newline | --null] [--no-sort]\n\nCUDA visible devices selection tool.\n\noptions:\n  --help, -h            Show this help message and exit.\n  --version, -V         Show nvisel's version number and exit.\n\nconstraints:\n  --inherit [CUDA_VISIBLE_DEVICES], -i [CUDA_VISIBLE_DEVICES]\n                        Inherit the given `CUDA_VISIBLE_DEVICES`. If the argument is omitted, use the\n                        value from the environment. This means selecting a subset of the currently\n                        CUDA-visible devices.\n  --account-as-free [USERNAME ...]\n                        Account the used GPU memory of the given users as free memory.\n                        If this option is specified but without argument, `$USER` will be used.\n  --min-count N, -c N   Minimum number of devices to select. 
(default: 0)\n                        The tool will fail (exit non-zero) if the requested resource is not available.\n  --max-count N, -C N   Maximum number of devices to select. (default: all devices)\n  --count N, -n N       Override both `--min-count N` and `--max-count N`.\n  --min-free-memory SIZE, -f SIZE\n                        Minimum free memory of devices to select. (example value: 4GiB)\n                        If this constraint is given, check against all devices.\n  --min-total-memory SIZE, -t SIZE\n                        Minimum total memory of devices to select. (example value: 10GiB)\n                        If this constraint is given, check against all devices.\n  --max-gpu-utilization RATE, -G RATE\n                        Maximum GPU utilization rate of devices to select. (example value: 30)\n                        If this constraint is given, check against all devices.\n  --max-memory-utilization RATE, -M RATE\n                        Maximum memory bandwidth utilization rate of devices to select. (example value: 50)\n                        If this constraint is given, check against all devices.\n  --tolerance TOL, --tol TOL\n                        The constraints tolerance (in percentage). (default: 0, i.e., strict)\n                        This option can loosen the constraints if the requested resource is not available.\n                        For example, setting `--tolerance=20` will accept a device with only 4GiB of free\n                        memory when `--min-free-memory=5GiB` is set.\n\nformatting:\n  --format FORMAT, -O FORMAT\n                        The output format of the selected device identifiers. (default: index)\n                        If any MIG device is found, the output format will fall back to `uuid`.\n  --sep SEP, --separator SEP, -s SEP\n                        Separator for the output. 
(default: ',')\n  --newline             Use newline character as separator for the output, equivalent to `--sep=$'\\n'`.\n  --null, -0            Use null character ('\\x00') as separator for the output. This option corresponds\n                        to the `-0` option of `xargs`.\n  --no-sort, -S         Do not sort the devices by memory usage and GPU utilization.\n```\n\n------\n\n### Callback Functions for Machine Learning Frameworks\n\n`nvitop` provides two built-in callbacks for [TensorFlow (Keras)](https://www.tensorflow.org) and [PyTorch Lightning](https://pytorchlightning.ai).\n\n#### Callback for [TensorFlow (Keras)](https://www.tensorflow.org)\n\n```python\nfrom tensorflow.python.keras.utils.multi_gpu_utils import multi_gpu_model\nfrom tensorflow.python.keras.callbacks import TensorBoard\nfrom nvitop.callbacks.keras import GpuStatsLogger\ngpus = ['/gpu:0', '/gpu:1']  # or `gpus = [0, 1]` or `gpus = 2`\nmodel = Xception(weights=None, ..)\nmodel = multi_gpu_model(model, gpus)  # optional\nmodel.compile(..)\ntb_callback = TensorBoard(log_dir='./logs')  # or `keras.callbacks.CSVLogger`\ngpu_stats = GpuStatsLogger(gpus)\nmodel.fit(.., callbacks=[gpu_stats, tb_callback])\n```\n\n**NOTE:** Users should assign a `keras.callbacks.TensorBoard` callback or a `keras.callbacks.CSVLogger` callback to the model. 
The `GpuStatsLogger` callback should be placed before the `keras.callbacks.TensorBoard` / `keras.callbacks.CSVLogger` callback.\n\n#### Callback for [PyTorch Lightning](https://lightning.ai)\n\n```python\nfrom lightning.pytorch import Trainer\nfrom nvitop.callbacks.lightning import GpuStatsLogger\ngpu_stats = GpuStatsLogger()\ntrainer = Trainer(gpus=[..], logger=True, callbacks=[gpu_stats])\n```\n\n**NOTE:** Users should assign a logger to the trainer.\n\n#### [TensorBoard](https://github.com/tensorflow/tensorboard) Integration\n\nPlease refer to [Resource Metric Collector](#resource-metric-collector) for an example.\n\n------\n\n### More than a Monitor\n\n`nvitop` can be easily integrated into other applications. You can use `nvitop` to make your own monitoring tools. The full API references are hosted at <https://nvitop.readthedocs.io>.\n\n#### Quick Start\n\nA minimal script to monitor the GPU devices based on APIs from `nvitop`:\n\n```python\nfrom nvitop import Device\n\ndevices = Device.all()  # or `Device.cuda.all()` to use CUDA ordinal instead\nfor device in devices:\n    processes = device.processes()  # type: Dict[int, GpuProcess]\n    sorted_pids = sorted(processes.keys())\n\n    print(device)\n    print(f'  - Fan speed:       {device.fan_speed()}%')\n    print(f'  - Temperature:     {device.temperature()}C')\n    print(f'  - GPU utilization: {device.gpu_utilization()}%')\n    print(f'  - Total memory:    {device.memory_total_human()}')\n    print(f'  - Used memory:     {device.memory_used_human()}')\n    print(f'  - Free memory:     {device.memory_free_human()}')\n    print(f'  - Processes ({len(processes)}): {sorted_pids}')\n    for pid in sorted_pids:\n        print(f'    - {processes[pid]}')\n    print('-' * 120)\n```\n\nA more advanced approach with coloring:\n\n```python\nimport time\n\nfrom nvitop import Device, GpuProcess, NA, colored\n\nprint(colored(time.strftime('%a %b %d %H:%M:%S %Y'), color='red', attrs=('bold',)))\n\ndevices = 
Device.cuda.all()  # or `Device.all()` to use NVML ordinal instead\nseparator = False\nfor device in devices:\n    processes = device.processes()  # type: Dict[int, GpuProcess]\n\n    print(colored(str(device), color='green', attrs=('bold',)))\n    print(colored('  - Fan speed:       ', color='blue', attrs=('bold',)) + f'{device.fan_speed()}%')\n    print(colored('  - Temperature:     ', color='blue', attrs=('bold',)) + f'{device.temperature()}C')\n    print(colored('  - GPU utilization: ', color='blue', attrs=('bold',)) + f'{device.gpu_utilization()}%')\n    print(colored('  - Total memory:    ', color='blue', attrs=('bold',)) + f'{device.memory_total_human()}')\n    print(colored('  - Used memory:     ', color='blue', attrs=('bold',)) + f'{device.memory_used_human()}')\n    print(colored('  - Free memory:     ', color='blue', attrs=('bold',)) + f'{device.memory_free_human()}')\n    if len(processes) > 0:\n        processes = GpuProcess.take_snapshots(processes.values(), failsafe=True)\n        processes.sort(key=lambda process: (process.username, process.pid))\n\n        print(colored(f'  - Processes ({len(processes)}):', color='blue', attrs=('bold',)))\n        fmt = '    {pid:<5}  {username:<8} {cpu:>5}  {host_memory:>8} {time:>8}  {gpu_memory:>8}  {sm:>3}  {command:<}'.format\n        print(colored(fmt(pid='PID', username='USERNAME',\n                          cpu='CPU%', host_memory='HOST-MEM', time='TIME',\n                          gpu_memory='GPU-MEM', sm='SM%',\n                          command='COMMAND'),\n                      attrs=('bold',)))\n        for snapshot in processes:\n            print(fmt(pid=snapshot.pid,\n                      username=snapshot.username[:7] + ('+' if len(snapshot.username) > 8 else snapshot.username[7:8]),\n                      cpu=snapshot.cpu_percent, host_memory=snapshot.host_memory_human,\n                      time=snapshot.running_time_human,\n                      gpu_memory=(snapshot.gpu_memory_human if 
snapshot.gpu_memory_human is not NA else 'WDDM:N/A'),\n                      sm=snapshot.gpu_sm_utilization,\n                      command=snapshot.command))\n    else:\n        print(colored('  - No Running Processes', attrs=('bold',)))\n\n    if separator:\n        print('-' * 120)\n    separator = True\n```\n\n<p align=\"center\">\n  <img width=\"100%\" src=\"https://user-images.githubusercontent.com/16078332/177041142-fe988d58-6a97-4559-84fd-b51204cf9231.png\" alt=\"Demo\">\n  <br/>\n  An example monitoring script built with APIs from <code>nvitop</code>.\n</p>\n\n------\n\n#### Status Snapshot\n\n`nvitop` provides a helper function [`take_snapshots`](https://nvitop.readthedocs.io/en/latest/api/collector.html#nvitop.take_snapshots) to retrieve the status of both GPU devices and GPU processes at once. You can type `help(nvitop.take_snapshots)` in Python REPL for detailed documentation.\n\n```python\nIn [1]: from nvitop import take_snapshots, Device\n   ...: import os\n   ...: os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'\n   ...: os.environ['CUDA_VISIBLE_DEVICES'] = '1,0'  # comma-separated integers or UUID strings\n\nIn [2]: take_snapshots()  # equivalent to `take_snapshots(Device.all())`\nOut[2]:\nSnapshotResult(\n    devices=[\n        DeviceSnapshot(\n            real=Device(index=0, ...),\n            ...\n        ),\n        ...\n    ],\n    gpu_processes=[\n        GpuProcessSnapshot(\n            real=GpuProcess(pid=xxxxxx, device=Device(index=0, ...), ...),\n            ...\n        ),\n        ...\n    ]\n)\n\nIn [3]: device_snapshots, gpu_process_snapshots = take_snapshots(Device.all())  # type: Tuple[List[DeviceSnapshot], List[GpuProcessSnapshot]]\n\nIn [4]: device_snapshots, _ = take_snapshots(gpu_processes=False)  # ignore process snapshots\n\nIn [5]: take_snapshots(Device.cuda.all())  # use CUDA device enumeration\nOut[5]:\nSnapshotResult(\n    devices=[\n        CudaDeviceSnapshot(\n            real=CudaDevice(cuda_index=0, nvml_index=1, 
...),\n            ...\n        ),\n        CudaDeviceSnapshot(\n            real=CudaDevice(cuda_index=1, nvml_index=0, ...),\n            ...\n        ),\n    ],\n    gpu_processes=[\n        GpuProcessSnapshot(\n            real=GpuProcess(pid=xxxxxx, device=CudaDevice(cuda_index=0, ...), ...),\n            ...\n        ),\n        ...\n    ]\n)\n\nIn [6]: take_snapshots(Device.cuda(1))  # <CUDA 1> only\nOut[6]:\nSnapshotResult(\n    devices=[\n        CudaDeviceSnapshot(\n            real=CudaDevice(cuda_index=1, nvml_index=0, ...),\n            ...\n        )\n    ],\n    gpu_processes=[\n        GpuProcessSnapshot(\n            real=GpuProcess(pid=xxxxxx, device=CudaDevice(cuda_index=1, ...), ...),\n            ...\n        ),\n        ...\n    ]\n)\n```\n\nPlease refer to section [Low-level APIs](#low-level-apis) for more information.\n\n------\n\n#### Resource Metric Collector\n\n[`ResourceMetricCollector`](https://nvitop.readthedocs.io/en/latest/api/collector.html#nvitop.ResourceMetricCollector) is a class that collects resource metrics for host, GPUs and processes running on the GPUs. All metrics will be collected in an asynchronous manner. 
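The idea behind the asynchronous collection can be sketched with a plain background thread that keeps sampling a value and reduces the accumulated samples to mean/min/max when they are collected. This is a simplified, GPU-free illustration of the idea only; the `BackgroundSampler` class and its methods below are made-up names, not part of the nvitop API:

```python
import threading
import time

class BackgroundSampler:
    """Illustrative sketch: sample a value periodically in a daemon thread
    and report mean/min/max of the samples seen since the last collect()."""

    def __init__(self, sample_fn, interval=0.05):
        self._sample_fn = sample_fn
        self._interval = interval
        self._lock = threading.Lock()
        self._samples = []
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            value = self._sample_fn()  # e.g. read a GPU/host metric here
            with self._lock:
                self._samples.append(value)
            self._stop.wait(self._interval)

    def collect(self):
        with self._lock:
            samples, self._samples = self._samples, []  # reset the window
        if not samples:
            return {}
        return {'mean': sum(samples) / len(samples),
                'min': min(samples), 'max': max(samples)}

    def stop(self):
        self._stop.set()
        self._thread.join()

# Demo with a deterministic "metric": a monotonically increasing counter.
counter = iter(range(1, 1000))
sampler = BackgroundSampler(lambda: next(counter), interval=0.01)
time.sleep(0.2)
stats = sampler.collect()
sampler.stop()
print(sorted(stats))  # ['max', 'mean', 'min']
```

In the real collector, the sample function would read device and process metrics instead of a counter, and each metric gets its own mean/min/max reduction.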
You can type `help(nvitop.ResourceMetricCollector)` in Python REPL for detailed documentation.\n\n```python\nIn [1]: from nvitop import ResourceMetricCollector, Device\n   ...: import os\n   ...: os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'\n   ...: os.environ['CUDA_VISIBLE_DEVICES'] = '3,2,1,0'  # comma-separated integers or UUID strings\n\nIn [2]: collector = ResourceMetricCollector()                                   # log all devices and descendant processes of the current process on the GPUs\nIn [3]: collector = ResourceMetricCollector(root_pids={1})                      # log all devices and all GPU processes\nIn [4]: collector = ResourceMetricCollector(devices=Device(0), root_pids={1})   # log <GPU 0> and all GPU processes on <GPU 0>\nIn [5]: collector = ResourceMetricCollector(devices=Device.cuda.all())          # use the CUDA ordinal\n\nIn [6]: with collector(tag='<tag>'):\n   ...:     # Do something\n   ...:     collector.collect()  # -> Dict[str, float]\n# key -> '<tag>/<scope>/<metric (unit)>/<mean/min/max>'\n{\n    '<tag>/host/cpu_percent (%)/mean': 8.967849777683456,\n    '<tag>/host/cpu_percent (%)/min': 6.1,\n    '<tag>/host/cpu_percent (%)/max': 28.1,\n    ...,\n    '<tag>/host/memory_percent (%)/mean': 21.5,\n    '<tag>/host/swap_percent (%)/mean': 0.3,\n    '<tag>/host/memory_used (GiB)/mean': 91.0136418208109,\n    '<tag>/host/load_average (%) (1 min)/mean': 10.251427386878328,\n    '<tag>/host/load_average (%) (5 min)/mean': 10.072539414569503,\n    '<tag>/host/load_average (%) (15 min)/mean': 11.91126970422139,\n    ...,\n    '<tag>/cuda:0 (gpu:3)/memory_used (MiB)/mean': 3.875,\n    '<tag>/cuda:0 (gpu:3)/memory_free (MiB)/mean': 11015.562499999998,\n    '<tag>/cuda:0 (gpu:3)/memory_total (MiB)/mean': 11019.437500000002,\n    '<tag>/cuda:0 (gpu:3)/memory_percent (%)/mean': 0.0,\n    '<tag>/cuda:0 (gpu:3)/gpu_utilization (%)/mean': 0.0,\n    '<tag>/cuda:0 (gpu:3)/memory_utilization (%)/mean': 0.0,\n    '<tag>/cuda:0 (gpu:3)/fan_speed 
(%)/mean': 22.0,\n    '<tag>/cuda:0 (gpu:3)/temperature (C)/mean': 25.0,\n    '<tag>/cuda:0 (gpu:3)/power_usage (W)/mean': 19.11166264116916,\n    ...,\n    '<tag>/cuda:1 (gpu:2)/memory_used (MiB)/mean': 8878.875,\n    ...,\n    '<tag>/cuda:2 (gpu:1)/memory_used (MiB)/mean': 8182.875,\n    ...,\n    '<tag>/cuda:3 (gpu:0)/memory_used (MiB)/mean': 9286.875,\n    ...,\n    '<tag>/pid:12345/host/cpu_percent (%)/mean': 151.34342772112265,\n    '<tag>/pid:12345/host/host_memory (MiB)/mean': 44749.72373447514,\n    '<tag>/pid:12345/host/host_memory_percent (%)/mean': 8.675082352111717,\n    '<tag>/pid:12345/host/running_time (min)': 336.23803206741576,\n    '<tag>/pid:12345/cuda:1 (gpu:2)/gpu_memory (MiB)/mean': 8861.0,\n    '<tag>/pid:12345/cuda:1 (gpu:2)/gpu_memory_percent (%)/mean': 80.4,\n    '<tag>/pid:12345/cuda:1 (gpu:2)/gpu_memory_utilization (%)/mean': 6.711118172407917,\n    '<tag>/pid:12345/cuda:1 (gpu:2)/gpu_sm_utilization (%)/mean': 48.23283397736476,\n    ...,\n    '<tag>/duration (s)': 7.247399162035435,\n    '<tag>/timestamp': 1655909466.9981883\n}\n```\n\nThe results can be easily logged into [TensorBoard](https://github.com/tensorflow/tensorboard) or a CSV file. 
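Each key in the dictionary above follows the pattern `<tag>/<scope>/<metric (unit)>/<mean/min/max>`, while summary keys such as `<tag>/duration (s)` carry no statistic suffix. As a quick sanity check on that layout, here is a minimal sketch of splitting such keys back into their components; the helper name `parse_metric_key` is my own, not part of the nvitop API:

```python
def parse_metric_key(key):
    """Split '<tag>/<scope>/<metric (unit)>/<stat>' into its components.
    Illustrative helper only, not part of nvitop."""
    parts = key.split('/')
    # Only mean/min/max keys carry a statistic suffix.
    stat = parts.pop() if parts[-1] in ('mean', 'min', 'max') else None
    tag, *scope, metric = parts
    return {
        'tag': tag,
        'scope': '/'.join(scope) or None,  # e.g. 'pid:12345/host' keeps its inner '/'
        'metric': metric,
        'stat': stat,
    }

print(parse_metric_key('train/cuda:0 (gpu:3)/memory_used (MiB)/mean'))
# {'tag': 'train', 'scope': 'cuda:0 (gpu:3)', 'metric': 'memory_used (MiB)', 'stat': 'mean'}
print(parse_metric_key('train/duration (s)'))
# {'tag': 'train', 'scope': None, 'metric': 'duration (s)', 'stat': None}
```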
For example:\n\n```python\nimport os\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.utils.tensorboard import SummaryWriter\n\nfrom nvitop import CudaDevice, ResourceMetricCollector\nfrom nvitop.callbacks.tensorboard import add_scalar_dict\n\n# Build networks and prepare datasets\n...\n\n# Logger and status collector\nwriter = SummaryWriter()\ncollector = ResourceMetricCollector(devices=CudaDevice.all(),  # log all visible CUDA devices and use the CUDA ordinal\n                                    root_pids={os.getpid()},   # only log the descendant processes of the current process\n                                    interval=1.0)              # snapshot interval for background daemon thread\n\n# Start training\nglobal_step = 0\nfor epoch in range(num_epoch):\n    with collector(tag='train'):\n        for batch in train_dataset:\n            with collector(tag='batch'):\n                metrics = train(net, batch)\n                global_step += 1\n                add_scalar_dict(writer, 'train', metrics, global_step=global_step)\n                add_scalar_dict(writer, 'resources',      # tag='resources/train/batch/...'\n                                collector.collect(),\n                                global_step=global_step)\n\n        add_scalar_dict(writer, 'resources',              # tag='resources/train/...'\n                        collector.collect(),\n                        global_step=epoch)\n\n    with collector(tag='validate'):\n        metrics = validate(net, validation_dataset)\n        add_scalar_dict(writer, 'validate', metrics, global_step=epoch)\n        add_scalar_dict(writer, 'resources',              # tag='resources/validate/...'\n                        collector.collect(),\n                        global_step=epoch)\n```\n\nAnother example for logging into a CSV file:\n\n```python\nimport datetime\nimport time\n\nimport pandas as pd\n\nfrom nvitop import ResourceMetricCollector\n\ncollector = 
ResourceMetricCollector(root_pids={1}, interval=2.0)  # log all devices and all GPU processes\ndf = pd.DataFrame()\n\nwith collector(tag='resources'):\n    for _ in range(60):\n        # Do something\n        time.sleep(60)\n\n        metrics = collector.collect()\n        df_metrics = pd.DataFrame.from_records(metrics, index=[len(df)])\n        df = pd.concat([df, df_metrics], ignore_index=True)\n        # Flush to CSV file ...\n\ndf.insert(0, 'time', df['resources/timestamp'].map(datetime.datetime.fromtimestamp))\ndf.to_csv('results.csv', index=False)\n```\n\nYou can also daemonize the collector in the background using [`collect_in_background`](https://nvitop.readthedocs.io/en/latest/api/collector.html#nvitop.collect_in_background) or [`ResourceMetricCollector.daemonize`](https://nvitop.readthedocs.io/en/latest/api/collector.html#nvitop.ResourceMetricCollector.daemonize) with callback functions.\n\n```python\nfrom nvitop import Device, ResourceMetricCollector, collect_in_background\n\nlogger = ...\n\ndef on_collect(metrics):  # will be called periodically\n    if logger.is_closed():  # closed manually by user\n        return False\n    logger.log(metrics)\n    return True\n\ndef on_stop(collector):  # will be called only once at stop\n    if not logger.is_closed():\n        logger.close()  # cleanup\n\n# Record metrics to the logger in the background every 5 seconds.\n# It will collect 5-second mean/min/max for each metric.\ncollect_in_background(\n    on_collect,\n    ResourceMetricCollector(Device.cuda.all()),\n    interval=5.0,\n    on_stop=on_stop,\n)\n```\n\nor simply:\n\n```python\nResourceMetricCollector(Device.cuda.all()).daemonize(\n    on_collect,\n    interval=5.0,\n    on_stop=on_stop,\n)\n```\n\n------\n\n#### Low-level APIs\n\nThe full API references can be found at <https://nvitop.readthedocs.io>.\n\n##### Device\n\nThe [device module](https://nvitop.readthedocs.io/en/latest/api/device.html) provides:\n\n<table class=\"autosummary longtable 
docutils align-default\">\n  <colgroup>\n    <col style=\"width: 10%\" />\n    <col style=\"width: 90%\" />\n  </colgroup>\n  <tbody>\n    <tr class=\"row-odd\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.Device\" title=\"nvitop.Device\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">Device</span></code></a>([index, uuid, bus_id])</p></td>\n      <td><p>Live class of the GPU devices, different from the device snapshots.</p></td>\n    </tr>\n    <tr class=\"row-even\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.PhysicalDevice\" title=\"nvitop.PhysicalDevice\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">PhysicalDevice</span></code></a>([index, uuid, bus_id])</p></td>\n      <td><p>Class for physical devices.</p></td>\n    </tr>\n    <tr class=\"row-odd\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.MigDevice\" title=\"nvitop.MigDevice\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">MigDevice</span></code></a>([index, uuid, bus_id])</p></td>\n      <td><p>Class for MIG devices.</p></td>\n    </tr>\n    <tr class=\"row-even\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.CudaDevice\" title=\"nvitop.CudaDevice\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">CudaDevice</span></code></a>([cuda_index, nvml_index, uuid])</p></td>\n      <td><p>Class for devices enumerated over the CUDA ordinal.</p></td>\n    </tr>\n    <tr class=\"row-odd\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.CudaMigDevice\" title=\"nvitop.CudaMigDevice\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">CudaMigDevice</span></code></a>([cuda_index, nvml_index, uuid])</p></td>\n      <td><p>Class for CUDA devices that are 
MIG devices.</p></td>\n    </tr>\n    <tr class=\"row-even\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.parse_cuda_visible_devices\" title=\"nvitop.parse_cuda_visible_devices\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">parse_cuda_visible_devices</span></code></a>([...])</p></td>\n      <td><p>Parse the given <code class=\"docutils literal notranslate\"><span class=\"pre\">CUDA_VISIBLE_DEVICES</span></code> value into a list of NVML device indices.</p></td>\n    </tr>\n    <tr class=\"row-odd\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/device.html#nvitop.normalize_cuda_visible_devices\" title=\"nvitop.normalize_cuda_visible_devices\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">normalize_cuda_visible_devices</span></code></a>([...])</p></td>\n      <td><p>Parse the given <code class=\"docutils literal notranslate\"><span class=\"pre\">CUDA_VISIBLE_DEVICES</span></code> value and convert it into a comma-separated string of UUIDs.</p></td>\n    </tr>\n  </tbody>\n</table>\n\n```python\nIn [1]: from nvitop import (\n   ...:     host,\n   ...:     Device, PhysicalDevice, CudaDevice,\n   ...:     parse_cuda_visible_devices, normalize_cuda_visible_devices,\n   ...:     HostProcess, GpuProcess,\n   ...:     NA,\n   ...: )\n   ...: import os\n   ...: os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'\n   ...: os.environ['CUDA_VISIBLE_DEVICES'] = '9,8,7,6'  # comma-separated integers or UUID strings\n\nIn [2]: Device.driver_version()\nOut[2]: '525.60.11'\n\nIn [3]: Device.cuda_driver_version()  # the maximum CUDA version supported by the driver (can be different from the CUDA Runtime version)\nOut[3]: '12.0'\n\nIn [4]: Device.cuda_runtime_version()  # the CUDA Runtime version\nOut[4]: '11.8'\n\nIn [5]: Device.count()\nOut[5]: 10\n\nIn [6]: CudaDevice.count()  # or `Device.cuda.count()`\nOut[6]: 4\n\nIn [7]: all_devices      = 
Device.all()                 # all devices on board (physical device)\n   ...: nvidia0, nvidia1 = Device.from_indices([0, 1])  # from physical device indices\n   ...: all_devices\nOut[7]: [\n    PhysicalDevice(index=0, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    PhysicalDevice(index=1, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    PhysicalDevice(index=2, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    PhysicalDevice(index=3, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    PhysicalDevice(index=4, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    PhysicalDevice(index=5, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    PhysicalDevice(index=6, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    PhysicalDevice(index=7, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    PhysicalDevice(index=8, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    PhysicalDevice(index=9, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB)\n]\n\nIn [8]: # NOTE: The function results might be different between calls when the `CUDA_VISIBLE_DEVICES` environment variable has been modified\n   ...: cuda_visible_devices = Device.from_cuda_visible_devices()  # from the `CUDA_VISIBLE_DEVICES` environment variable\n   ...: cuda0, cuda1         = Device.from_cuda_indices([0, 1])    # from CUDA device indices (might be different from physical device indices if `CUDA_VISIBLE_DEVICES` is set)\n   ...: cuda_visible_devices = CudaDevice.all()                    # shortcut to `Device.from_cuda_visible_devices()`\n   ...: cuda_visible_devices = Device.cuda.all()                   # `Device.cuda` is aliased to `CudaDevice`\n   ...: cuda_visible_devices\nOut[8]: [\n    CudaDevice(cuda_index=0, nvml_index=9, name=\"NVIDIA GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    CudaDevice(cuda_index=1, nvml_index=8, name=\"NVIDIA GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    CudaDevice(cuda_index=2, nvml_index=7, 
name=\"NVIDIA GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    CudaDevice(cuda_index=3, nvml_index=6, name=\"NVIDIA GeForce RTX 2080 Ti\", total_memory=11019MiB)\n]\n\nIn [9]: nvidia0 = Device(0)  # from device index (or `Device(index=0)`)\n   ...: nvidia0\nOut[9]: PhysicalDevice(index=0, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB)\n\nIn [10]: nvidia1 = Device(uuid='GPU-01234567-89ab-cdef-0123-456789abcdef')  # from UUID string (or just `Device('GPU-xxxxxxxx-...')`)\n    ...: nvidia2 = Device(bus_id='00000000:06:00.0')                        # from PCI bus ID\n    ...: nvidia1\nOut[10]: PhysicalDevice(index=1, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB)\n\nIn [11]: cuda0 = CudaDevice(0)                        # from CUDA device index (equivalent to `CudaDevice(cuda_index=0)`)\n    ...: cuda1 = CudaDevice(nvml_index=8)             # from physical device index\n    ...: cuda3 = CudaDevice(uuid='GPU-xxxxxxxx-...')  # from UUID string\n    ...: cuda4 = Device.cuda(4)                       # `Device.cuda` is aliased to `CudaDevice`\n    ...: cuda0\nOut[11]:\nCudaDevice(cuda_index=0, nvml_index=9, name=\"NVIDIA GeForce RTX 2080 Ti\", total_memory=11019MiB)\n\nIn [12]: nvidia0.memory_used()  # in bytes\nOut[12]: 9293398016\n\nIn [13]: nvidia0.memory_used_human()\nOut[13]: '8862MiB'\n\nIn [14]: nvidia0.gpu_utilization()  # in percentage\nOut[14]: 5\n\nIn [15]: nvidia0.processes()  # type: Dict[int, GpuProcess]\nOut[15]: {\n    52059: GpuProcess(pid=52059, gpu_memory=7885MiB, type=C, device=PhysicalDevice(index=0, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=52059, name='ipython3', status='sleeping', started='14:31:22')),\n    53002: GpuProcess(pid=53002, gpu_memory=967MiB, type=C, device=PhysicalDevice(index=0, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=53002, name='python', status='running', started='14:31:59'))\n}\n\nIn [16]: nvidia1_snapshot = nvidia1.as_snapshot()\n    ...: 
nvidia1_snapshot\nOut[16]: PhysicalDeviceSnapshot(\n    real=PhysicalDevice(index=1, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    bus_id='00000000:05:00.0',\n    compute_mode='Default',\n    clock_infos=ClockInfos(graphics=1815, sm=1815, memory=6800, video=1680),  # in MHz\n    clock_speed_infos=ClockSpeedInfos(current=ClockInfos(graphics=1815, sm=1815, memory=6800, video=1680), max=ClockInfos(graphics=2100, sm=2100, memory=7000, video=1950)),  # in MHz\n    cuda_compute_capability=(7, 5),\n    current_driver_model='N/A',\n    decoder_utilization=0,              # in percentage\n    display_active='Disabled',\n    display_mode='Disabled',\n    encoder_utilization=0,              # in percentage\n    fan_speed=22,                       # in percentage\n    gpu_utilization=17,                 # in percentage (NOTE: this is the utilization rate of SMs, i.e. GPU percent)\n    index=1,\n    max_clock_infos=ClockInfos(graphics=2100, sm=2100, memory=7000, video=1950),  # in MHz\n    memory_clock=6800,                  # in MHz\n    memory_free=10462232576,            # in bytes\n    memory_free_human='9977MiB',\n    memory_info=MemoryInfo(total=11554717696, free=10462232576, used=1092485120),  # in bytes\n    memory_percent=9.5,                 # in percentage (NOTE: this is the percentage of used GPU memory)\n    memory_total=11554717696,           # in bytes\n    memory_total_human='11019MiB',\n    memory_usage='1041MiB / 11019MiB',\n    memory_used=1092485120,             # in bytes\n    memory_used_human='1041MiB',\n    memory_utilization=7,               # in percentage (NOTE: this is the utilization rate of GPU memory bandwidth)\n    mig_mode='N/A',\n    name='GeForce RTX 2080 Ti',\n    pcie_rx_throughput=1000,            # in KiB/s\n    pcie_rx_throughput_human='1000KiB/s',\n    pcie_throughput=ThroughputInfo(tx=1000, rx=1000),  # in KiB/s\n    pcie_tx_throughput=1000,            # in KiB/s\n    pcie_tx_throughput_human='1000KiB/s',\n    
performance_state='P2',\n    persistence_mode='Disabled',\n    power_limit=250000,                 # in milliwatts (mW)\n    power_status='66W / 250W',          # in watts (W)\n    power_usage=66051,                  # in milliwatts (mW)\n    sm_clock=1815,                      # in MHz\n    temperature=39,                     # in Celsius\n    total_volatile_uncorrected_ecc_errors='N/A',\n    utilization_rates=UtilizationRates(gpu=17, memory=7, encoder=0, decoder=0),  # in percentage\n    uuid='GPU-01234567-89ab-cdef-0123-456789abcdef',\n)\n\nIn [17]: nvidia1_snapshot.memory_percent  # snapshot uses properties instead of function calls\nOut[17]: 9.5\n\nIn [18]: nvidia1_snapshot['memory_info']  # snapshot also supports `__getitem__` by string\nOut[18]: MemoryInfo(total=11554717696, free=10462232576, used=1092485120)\n\nIn [19]: nvidia1_snapshot.bar1_memory_info  # snapshot will automatically retrieve not presented attributes from `real`\nOut[19]: MemoryInfo(total=268435456, free=257622016, used=10813440)\n```\n\n**NOTE:** Some entry values may be `'N/A'` (type: [`NaType`](https://nvitop.readthedocs.io/en/latest/index.html#nvitop.NaType), a subclass of `str`) when the corresponding resources are not applicable. The [`NA`](https://nvitop.readthedocs.io/en/latest/index.html#nvitop.NA) value supports arithmetic operations. 
It acts like `math.nan: float`.\n\n```python\n>>> from nvitop import NA\n>>> NA\n'N/A'\n\n>>> 'memory usage: {}'.format(NA)  # NA is an instance of `str`\n'memory usage: N/A'\n>>> NA.lower()                     # NA is an instance of `str`\n'n/a'\n>>> NA.ljust(5)                    # NA is an instance of `str`\n'N/A  '\n>>> NA + 'str'                     # string concatenation if the operand is a string\n'N/Astr'\n\n>>> float(NA)                      # explicit conversion to float (`math.nan`)\nnan\n>>> NA + 1                         # auto-casting to float if the operand is a number\nnan\n>>> NA * 1024                      # auto-casting to float if the operand is a number\nnan\n>>> NA / (1024 * 1024)             # auto-casting to float if the operand is a number\nnan\n```\n\nYou can use `entry != 'N/A'` conditions to avoid exceptions. It's safe to use `float(entry)` for numbers, while `NaType` will be converted to `math.nan`. For example:\n\n```python\nmemory_used: Union[int, NaType] = device.memory_used()            # memory usage in bytes or `'N/A'`\nmemory_used_in_mib: float       = float(memory_used) / (1 << 20)  # memory usage in Mebibytes (MiB) or `math.nan`\n```\n\nIt's safe to compare `NaType` with numbers, but `NaType` is always larger than any number:\n\n```python\ndevices_by_used_memory = sorted(Device.all(), key=Device.memory_used, reverse=True)  # it's safe to compare `'N/A'` with numbers\ndevices_by_free_memory = sorted(Device.all(), key=Device.memory_free, reverse=True)  # please add `memory_free != 'N/A'` checks if sorting in descending order here\n```\n\nSee [`nvitop.NaType`](https://nvitop.readthedocs.io/en/latest/apis/index.html#nvitop.NaType) documentation for more details.\n\n##### Process\n\nThe [process module](https://nvitop.readthedocs.io/en/latest/api/process.html) provides:\n\n<table class=\"autosummary longtable docutils align-default\">\n  <colgroup>\n    <col style=\"width: 10%\" />\n    <col style=\"width: 90%\" />\n  </colgroup>\n  
<tbody>\n    <tr class=\"row-odd\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/process.html#nvitop.HostProcess\" title=\"nvitop.HostProcess\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">HostProcess</span></code></a>([pid])</p></td>\n      <td><p>Represents an OS process with the given PID.</p></td>\n    </tr>\n    <tr class=\"row-even\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/process.html#nvitop.GpuProcess\" title=\"nvitop.GpuProcess\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">GpuProcess</span></code></a>(pid, device[, gpu_memory, ...])</p></td>\n      <td><p>Represents a process with the given PID running on the given GPU device.</p></td>\n    </tr>\n    <tr class=\"row-odd\">\n      <td><p><a href=\"https://nvitop.readthedocs.io/en/latest/api/process.html#nvitop.command_join\" title=\"nvitop.command_join\"><code class=\"xref py py-obj docutils literal notranslate\"><span class=\"pre\">command_join</span></code></a>(cmdline)</p></td>\n      <td><p>Returns a shell-escaped string from command line arguments.</p></td>\n    </tr>\n  </tbody>\n</table>\n\n```python\nIn [20]: processes = nvidia1.processes()  # type: Dict[int, GpuProcess]\n    ...: processes\nOut[20]: {\n    23266: GpuProcess(pid=23266, gpu_memory=1031MiB, type=C, device=Device(index=1, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=23266, name='python3', status='running', started='2021-05-10 21:02:40'))\n}\n\nIn [21]: process = processes[23266]\n    ...: process\nOut[21]: GpuProcess(pid=23266, gpu_memory=1031MiB, type=C, device=Device(index=1, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=23266, name='python3', status='running', started='2021-05-10 21:02:40'))\n\nIn [22]: process.status()  # GpuProcess will automatically inherit attributes from GpuProcess.host\nOut[22]: 'running'\n\nIn [23]: process.cmdline()  # type: 
List[str]\nOut[23]: ['python3', 'rllib_train.py']\n\nIn [24]: process.command()  # type: str\nOut[24]: 'python3 rllib_train.py'\n\nIn [25]: process.cwd()  # GpuProcess will automatically inherit attributes from GpuProcess.host\nOut[25]: '/home/xxxxxx/Projects/xxxxxx'\n\nIn [26]: process.gpu_memory_human()\nOut[26]: '1031MiB'\n\nIn [27]: process.as_snapshot()\nOut[27]: GpuProcessSnapshot(\n    real=GpuProcess(pid=23266, gpu_memory=1031MiB, type=C, device=PhysicalDevice(index=1, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=23266, name='python3', status='running', started='2021-05-10 21:02:40')),\n    cmdline=['python3', 'rllib_train.py'],\n    command='python3 rllib_train.py',\n    compute_instance_id='N/A',\n    cpu_percent=98.5,                       # in percentage\n    device=PhysicalDevice(index=1, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB),\n    gpu_encoder_utilization=0,              # in percentage\n    gpu_decoder_utilization=0,              # in percentage\n    gpu_instance_id='N/A',\n    gpu_memory=1081081856,                  # in bytes\n    gpu_memory_human='1031MiB',\n    gpu_memory_percent=9.4,                 # in percentage (NOTE: this is the percentage of used GPU memory)\n    gpu_memory_utilization=5,               # in percentage (NOTE: this is the utilization rate of GPU memory bandwidth)\n    gpu_sm_utilization=0,                   # in percentage (NOTE: this is the utilization rate of SMs, i.e. 
GPU percent)\n    host=HostProcessSnapshot(\n        real=HostProcess(pid=23266, name='python3', status='running', started='2021-05-10 21:02:40'),\n        cmdline=['python3', 'rllib_train.py'],\n        command='python3 rllib_train.py',\n        cpu_percent=98.5,                   # in percentage\n        host_memory=9113627439,             # in bytes\n        host_memory_human='8691MiB',\n        is_running=True,\n        memory_percent=1.6849018430285683,  # in percentage\n        name='python3',\n        running_time=datetime.timedelta(days=1, seconds=80013, microseconds=470024),\n        running_time_human='46:13:33',\n        running_time_in_seconds=166413.470024,\n        status='running',\n        username='panxuehai',\n    ),\n    host_memory=9113627439,                 # in bytes\n    host_memory_human='8691MiB',\n    is_running=True,\n    memory_percent=1.6849018430285683,      # in percentage (NOTE: this is the percentage of used host memory)\n    name='python3',\n    pid=23266,\n    running_time=datetime.timedelta(days=1, seconds=80013, microseconds=470024),\n    running_time_human='46:13:33',\n    running_time_in_seconds=166413.470024,\n    status='running',\n    type='C',                               # 'C' for Compute / 'G' for Graphics / 'C+G' for Both\n    username='panxuehai',\n)\n\nIn [28]: process.uids()  # GpuProcess will automatically inherit attributes from GpuProcess.host\nOut[28]: puids(real=1001, effective=1001, saved=1001)\n\nIn [29]: process.kill()  # GpuProcess will automatically inherit attributes from GpuProcess.host\n\nIn [30]: list(map(Device.processes, all_devices))  # all processes\nOut[30]: [\n    {\n        52059: GpuProcess(pid=52059, gpu_memory=7885MiB, type=C, device=PhysicalDevice(index=0, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=52059, name='ipython3', status='sleeping', started='14:31:22')),\n        53002: GpuProcess(pid=53002, gpu_memory=967MiB, type=C, device=PhysicalDevice(index=0, 
name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=53002, name='python', status='running', started='14:31:59'))\n    },\n    {},\n    {},\n    {},\n    {},\n    {},\n    {},\n    {},\n    {\n        84748: GpuProcess(pid=84748, gpu_memory=8975MiB, type=C, device=PhysicalDevice(index=8, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=84748, name='python', status='running', started='11:13:38'))\n    },\n    {\n        84748: GpuProcess(pid=84748, gpu_memory=8341MiB, type=C, device=PhysicalDevice(index=9, name=\"GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=84748, name='python', status='running', started='11:13:38'))\n    }\n]\n\nIn [31]: this = HostProcess(os.getpid())\n    ...: this\nOut[31]: HostProcess(pid=35783, name='python', status='running', started='19:19:00')\n\nIn [32]: this.cmdline()  # type: List[str]\nOut[32]: ['python', '-c', 'import IPython; IPython.terminal.ipapp.launch_new_instance()']\n\nIn [33]: this.command()  # not simply `' '.join(cmdline)` but quotes are added\nOut[33]: 'python -c \"import IPython; IPython.terminal.ipapp.launch_new_instance()\"'\n\nIn [34]: this.memory_info()\nOut[34]: pmem(rss=83988480, vms=343543808, shared=12079104, text=8192, lib=0, data=297435136, dirty=0)\n\nIn [35]: import cupy as cp\n    ...: x = cp.zeros((10000, 1000))\n    ...: this = GpuProcess(os.getpid(), cuda0)  # construct from `GpuProcess(pid, device)` explicitly rather than calling `device.processes()`\n    ...: this\nOut[35]: GpuProcess(pid=35783, gpu_memory=N/A, type=N/A, device=CudaDevice(cuda_index=0, nvml_index=9, name=\"NVIDIA GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=35783, name='python', status='running', started='19:19:00'))\n\nIn [36]: this.update_gpu_status()  # update used GPU memory from new driver queries\nOut[36]: 267386880\n\nIn [37]: this\nOut[37]: GpuProcess(pid=35783, gpu_memory=255MiB, type=C, device=CudaDevice(cuda_index=0, nvml_index=9, 
name=\"NVIDIA GeForce RTX 2080 Ti\", total_memory=11019MiB), host=HostProcess(pid=35783, name='python', status='running', started='19:19:00'))\n\nIn [38]: id(this) == id(GpuProcess(os.getpid(), cuda0))  # IMPORTANT: the instance will be reused while the process is running\nOut[38]: True\n```\n\n##### Host (inherited from [psutil](https://github.com/giampaolo/psutil))\n\n```python\nIn [39]: host.cpu_count()\nOut[39]: 88\n\nIn [40]: host.cpu_percent()\nOut[40]: 18.5\n\nIn [41]: host.cpu_times()\nOut[41]: scputimes(user=2346377.62, nice=53321.44, system=579177.52, idle=10323719.85, iowait=28750.22, irq=0.0, softirq=11566.87, steal=0.0, guest=0.0, guest_nice=0.0)\n\nIn [42]: host.load_average()\nOut[42]: (14.88, 17.8, 19.91)\n\nIn [43]: host.virtual_memory()\nOut[43]: svmem(total=270352478208, available=192275968000, percent=28.9, used=53350518784, free=88924037120, active=125081112576, inactive=44803993600, buffers=37006450688, cached=91071471616, shared=23820632064, slab=8200687616)\n\nIn [44]: host.memory_percent()\nOut[44]: 28.9\n\nIn [45]: host.swap_memory()\nOut[45]: sswap(total=65534947328, used=475136, free=65534472192, percent=0.0, sin=2404139008, sout=4259434496)\n\nIn [46]: host.swap_percent()\nOut[46]: 0.0\n```\n\n------\n\n## Screenshots\n\n![Screen Recording](https://user-images.githubusercontent.com/16078332/113173772-508dc380-927c-11eb-84c5-b6f496e54c08.gif)\n\nExample output of `nvitop -1`:\n\n<p align=\"center\">\n  <img width=\"100%\" src=\"https://user-images.githubusercontent.com/16078332/117765250-41793880-b260-11eb-8a1b-9c32868a46d4.png\" alt=\"Screenshot\">\n</p>\n\nExample output of `nvitop`:\n\n<table>\n  <tr valign=\"center\" align=\"center\">\n    <td>Full</td>\n    <td>Compact</td>\n  </tr>\n  <tr valign=\"top\" align=\"center\">\n    <td><img src=\"https://user-images.githubusercontent.com/16078332/117765260-4342fc00-b260-11eb-9198-7bcfdd1db113.png\" alt=\"Full\"></td>\n    <td><img 
src=\"https://user-images.githubusercontent.com/16078332/117765274-476f1980-b260-11eb-9afd-877cca54e0bc.png\" alt=\"Compact\"></td>\n  </tr>\n</table>\n\nTree-view screen (shortcut: <kbd>t</kbd>) for GPU processes and their ancestors:\n\n<p align=\"center\">\n  <img width=\"100%\" src=\"https://user-images.githubusercontent.com/16078332/123914889-7b3e0400-d9b2-11eb-9b71-a48971617c2a.png\" alt=\"Tree-view\">\n</p>\n\n**NOTE:** The process tree is built in backward order (recursively back to the tree root). Only GPU processes along with their children and ancestors (parents and grandparents ...) will be shown. Not all running processes will be displayed.\n\nEnvironment variable screen (shortcut: <kbd>e</kbd>):\n\n<p align=\"center\">\n  <img width=\"100%\" src=\"https://user-images.githubusercontent.com/16078332/123914881-7a0cd700-d9b2-11eb-8da1-26f7a3a7c2b6.png\" alt=\"Environment Screen\">\n</p>\n\nSpectrum-like bar charts (with option <code>--colorful</code>):\n\n<p align=\"center\">\n  <img width=\"100%\" src=\"https://user-images.githubusercontent.com/16078332/182555606-8388e5a5-43a9-4990-90d4-46e45ac448a0.png\" alt=\"Spectrum-like Bar Charts\">\n  <br/>\n</p>\n\n------\n\n## Changelog\n\nSee [CHANGELOG.md](https://github.com/XuehaiPan/nvitop/blob/HEAD/CHANGELOG.md).\n\n------\n\n## License\n\nThe source code of `nvitop` is dual-licensed by the **Apache License, Version 2.0 (Apache-2.0)** and **GNU General Public License, Version 3 (GPL-3.0)**. The `nvitop` CLI is released under the **GPL-3.0** license while the remaining part of `nvitop` is released under the **Apache-2.0** license. 
The license files can be found at [LICENSE](https://github.com/XuehaiPan/nvitop/blob/HEAD/LICENSE) (Apache-2.0) and [COPYING](https://github.com/XuehaiPan/nvitop/blob/HEAD/COPYING) (GPL-3.0).\n\nThe source code is organized as:\n\n```text\nnvitop           (GPL-3.0)\n\u251c\u2500\u2500 __init__.py  (Apache-2.0)\n\u251c\u2500\u2500 version.py   (Apache-2.0)\n\u251c\u2500\u2500 api          (Apache-2.0)\n\u2502   \u251c\u2500\u2500 LICENSE  (Apache-2.0)\n\u2502   \u2514\u2500\u2500 *        (Apache-2.0)\n\u251c\u2500\u2500 callbacks    (Apache-2.0)\n\u2502   \u251c\u2500\u2500 LICENSE  (Apache-2.0)\n\u2502   \u2514\u2500\u2500 *        (Apache-2.0)\n\u251c\u2500\u2500 select.py    (Apache-2.0)\n\u251c\u2500\u2500 __main__.py  (GPL-3.0)\n\u251c\u2500\u2500 cli.py       (GPL-3.0)\n\u2514\u2500\u2500 gui          (GPL-3.0)\n    \u251c\u2500\u2500 COPYING  (GPL-3.0)\n    \u2514\u2500\u2500 *        (GPL-3.0)\n```\n\n### Copyright Notice\n\nPlease feel free to use `nvitop` as a dependency for your own projects. The following Python import statements are permitted:\n\n```python\nimport nvitop\nimport nvitop as alias\nimport nvitop.api as api\nimport nvitop.device as device\nfrom nvitop import *\nfrom nvitop.api import *\nfrom nvitop import Device, ResourceMetricCollector\n```\n\nThe public APIs from `nvitop` are released under the **Apache License, Version 2.0 (Apache-2.0)**. The original license files can be found at [LICENSE](https://github.com/XuehaiPan/nvitop/blob/HEAD/LICENSE), [nvitop/api/LICENSE](https://github.com/XuehaiPan/nvitop/blob/HEAD/nvitop/api/LICENSE), and [nvitop/callbacks/LICENSE](https://github.com/XuehaiPan/nvitop/blob/HEAD/nvitop/callbacks/LICENSE).\n\nThe CLI of `nvitop` is released under the **GNU General Public License, Version 3 (GPL-3.0)**. The original license files can be found at [COPYING](https://github.com/XuehaiPan/nvitop/blob/HEAD/COPYING) and [nvitop/gui/COPYING](https://github.com/XuehaiPan/nvitop/blob/HEAD/nvitop/gui/COPYING). 
If you dynamically load the source code of `nvitop`'s CLI or GUI:\n\n```python\nfrom nvitop import cli\nfrom nvitop import gui\nimport nvitop.cli\nimport nvitop.gui\n```\n\nyour source code should also be released under the GPL-3.0 License.\n\nIf you want to add or modify some features of `nvitop`'s CLI, or copy some source code of `nvitop`'s CLI into your own code, the source code should also be released under the GPL-3.0 License (as `nvitop` contains some modified source code from [ranger](https://github.com/ranger/ranger) under the GPL-3.0 License).\n",
    "bugtrack_url": null,
    "license": "Apache License, Version 2.0 (Apache-2.0) & GNU General Public License, Version 3 (GPL-3.0)",
    "summary": "An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.",
    "version": "1.3.2",
    "project_urls": {
        "Bug Report": "https://github.com/XuehaiPan/nvitop/issues",
        "Documentation": "https://nvitop.readthedocs.io",
        "Homepage": "https://github.com/XuehaiPan/nvitop",
        "Repository": "https://github.com/XuehaiPan/nvitop"
    },
    "split_keywords": [
        "nvidia",
        "nvidia-smi",
        "nvidia",
        "nvml",
        "cuda",
        "gpu",
        "top",
        "monitoring"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e204264d0d041abd8b52c17eeec0ef7a2b8d34944c8d53da16a5d8d18b406d56",
                "md5": "23e77ac62bd8b5d7708182c82c643d17",
                "sha256": "be61c3375c99c2d871bf87f46e7969c89da730593c14734120de4f16203022e8"
            },
            "downloads": -1,
            "filename": "nvitop-1.3.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "23e77ac62bd8b5d7708182c82c643d17",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 215353,
            "upload_time": "2023-12-17T11:36:48",
            "upload_time_iso_8601": "2023-12-17T11:36:48.877876Z",
            "url": "https://files.pythonhosted.org/packages/e2/04/264d0d041abd8b52c17eeec0ef7a2b8d34944c8d53da16a5d8d18b406d56/nvitop-1.3.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "82c8a99ef6649e42aaf22953e3ba97cadd735652ab4512c8fc212ea6f6bdee47",
                "md5": "80e28b8efcd328f7f3c5ea87f9ba3772",
                "sha256": "9ea401dfca6b268cf30c041e428f461aab31e4bc5e17bc8e923568e16c9cb1f1"
            },
            "downloads": -1,
            "filename": "nvitop-1.3.2.tar.gz",
            "has_sig": false,
            "md5_digest": "80e28b8efcd328f7f3c5ea87f9ba3772",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 227359,
            "upload_time": "2023-12-17T11:36:52",
            "upload_time_iso_8601": "2023-12-17T11:36:52.858860Z",
            "url": "https://files.pythonhosted.org/packages/82/c8/a99ef6649e42aaf22953e3ba97cadd735652ab4512c8fc212ea6f6bdee47/nvitop-1.3.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-12-17 11:36:52",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "XuehaiPan",
    "github_project": "nvitop",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "nvitop"
}
        