nvgpu 0.10.0

- Home page: https://github.com/rossumai/nvgpu
- Summary: NVIDIA GPU tools
- Author: Bohumir Zamecnik, Rossum
- License: MIT
- Uploaded: 2023-03-30 03:17:01

# `nvgpu` - NVIDIA GPU tools

`nvgpu` provides information about GPUs and their availability for computation.

Often we want to train an ML model on one of the GPUs installed on a multi-GPU
machine. Since TensorFlow by default allocates all GPU memory, only one such
process can use a GPU at a time. Unfortunately, `nvidia-smi` provides only a
text interface with information about the GPUs. This package wraps it with an
easier-to-use CLI and Python interface.

It's a quick-and-dirty solution that calls `nvidia-smi` and parses its output.
One or more GPUs can be picked as available for computation based on relative
memory usage, i.e. it is OK with Xorg taking a few MB.
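
For illustration, selection along these lines can be done by querying `nvidia-smi` directly. This is only a sketch of the idea, not nvgpu's actual code; the function name and threshold value are arbitrary choices:

```python
# Sketch: pick GPUs whose relative memory usage is below a threshold by
# parsing machine-readable `nvidia-smi` output (not nvgpu's implementation).
import subprocess

def pick_free_gpus(max_used_fraction=0.05):
    """Return indices of GPUs using at most `max_used_fraction` of their memory."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        encoding="utf-8")
    free = []
    for line in out.strip().splitlines():
        index, used, total = (field.strip() for field in line.split(","))
        if int(used) / int(total) <= max_used_fraction:
            free.append(index)
    return free

print(pick_free_gpus())  # e.g. ['0', '6']
```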

In addition, there is a fancy table of GPUs with more information obtained via
the Python bindings to NVML.
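
As a rough idea of what the NVML route looks like, here is a small sketch using the `pynvml` bindings; the exact fields that nvgpu shows may differ:

```python
# Sketch: query per-GPU details via the NVML Python bindings (pynvml).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"{i}  {name}  {util.gpu} %  {temp} C  "
              f"{mem.used // 2**20}/{mem.total // 2**20} MiB")
finally:
    pynvml.nvmlShutdown()
```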

For easier monitoring of multiple machines, it is possible to deploy agents
(which provide the GPU information as JSON over a REST API) and show the
aggregated status in a web application.

## Installing

For a user:

```bash
pip install nvgpu
```

or system-wide:

```bash
sudo -H pip install nvgpu
```

## Usage examples

Command-line interface:

```bash
# grab all available GPUs
CUDA_VISIBLE_DEVICES=$(nvgpu available)

# grab at most one available GPU
CUDA_VISIBLE_DEVICES=$(nvgpu available -l 1)
```
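
The selected devices can then be passed straight to a training run (`train.py` is just a placeholder for your own script):

```bash
# train on the single most suitable free GPU
CUDA_VISIBLE_DEVICES=$(nvgpu available -l 1) python train.py
```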

Print a pretty colored table of devices, their availability, users, and processes:

```
$ nvgpu list
    status    type                 util.      temp.    MHz  users    since    pids    cmd
--  --------  -------------------  -------  -------  -----  -------  ---------------  ------  --------
 0  [ ]       GeForce GTX 1070      0 %          44    139                          
 1  [~]       GeForce GTX 1080 Ti   0 %          44    139  alice    2 days ago       19028   jupyter
 2  [~]       GeForce GTX 1080 Ti   0 %          44    139  bob      14 hours ago     8479    jupyter
 3  [~]       GeForce GTX 1070     46 %          54   1506  bob      7 days ago       20883   train.py
 4  [~]       GeForce GTX 1070     35 %          64   1480  bob      7 days ago       26228   evaluate.py
 5  [!]       GeForce GTX 1080 Ti   0 %          44    139  ?                         9305
 6  [ ]       GeForce GTX 1080 Ti   0 %          44    139
```

Or use the shortcut:

```
$ nvl
```

Python API:

```python
import nvgpu

nvgpu.available_gpus()
# ['0', '2']

nvgpu.gpu_info()
[{'index': '0',
  'mem_total': 8119,
  'mem_used': 7881,
  'mem_used_percent': 97.06860450794433,
  'type': 'GeForce GTX 1070',
  'uuid': 'GPU-3aa99ee6-4a9f-470e-3798-70aaed942689'},
 {'index': '1',
  'mem_total': 11178,
  'mem_used': 10795,
  'mem_used_percent': 96.57362676686348,
  'type': 'GeForce GTX 1080 Ti',
  'uuid': 'GPU-60410ded-5218-7b06-9c7a-124b77a22447'},
 {'index': '2',
  'mem_total': 11178,
  'mem_used': 10789,
  'mem_used_percent': 96.51994990159241,
  'type': 'GeForce GTX 1080 Ti',
  'uuid': 'GPU-d0a77bd4-cc70-ca82-54d6-4e2018cfdca6'},
  ...
]
```
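
A common pattern is to restrict the process to the free GPUs before the deep-learning framework initializes; a minimal sketch using the `available_gpus()` call shown above:

```python
# Sketch: expose only currently free GPUs to this process, so the framework
# imported afterwards can only see (and allocate memory on) those devices.
import os
import nvgpu

free = nvgpu.available_gpus()
if not free:
    raise RuntimeError("no free GPU available")
os.environ["CUDA_VISIBLE_DEVICES"] = free[0]  # take just one GPU

# import tensorflow / torch only *after* CUDA_VISIBLE_DEVICES is set
```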

## Web application with agents

There are multiple nodes. Agents collect info from their GPUs and provide it as
JSON via a REST API. The master gathers the info from the other nodes and
displays it on an HTML page. By default, agents also display their own status.
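
Conceptually, the master just polls the agents and merges their JSON. The sketch below is only an illustration, not nvgpu's implementation; the agent URLs and the endpoint path are placeholders that depend on your cluster and on `nvgpu.webapp`:

```python
# Illustrative sketch of the aggregation step; AGENT_URLS and ENDPOINT
# are assumptions, not nvgpu's actual configuration or route.
import requests

AGENT_URLS = ["http://node02:1080", "http://node03:1080"]
ENDPOINT = "/gpu_status"  # placeholder path

def gather_status():
    status = {}
    for base in AGENT_URLS:
        try:
            resp = requests.get(base + ENDPOINT, timeout=5)
            resp.raise_for_status()
            status[base] = resp.json()
        except requests.RequestException as exc:
            status[base] = {"error": str(exc)}
    return status

if __name__ == "__main__":
    print(gather_status())
```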

### Agent

```bash
FLASK_APP=nvgpu.webapp flask run --host 0.0.0.0 --port 1080
```

### Master

List the agents in a config file. An agent is specified either as a URL to a
remote machine or as `'self'` for direct access to the local machine. Remove
`'self'` if the machine itself does not have any GPU. The default is
`AGENTS = ['self']`, so that agents also display their own status. Set
`AGENTS = []` to avoid this.

```
# nvgpu_master.cfg
AGENTS = [
         'self', # node01 - master - direct access without using HTTP
         'http://node02:1080',
         'http://node03:1080',
         'http://node04:1080',
]
```

```bash
NVGPU_CLUSTER_CFG=/path/to/nvgpu_master.cfg FLASK_APP=nvgpu.webapp flask run --host 0.0.0.0 --port 1080
```

Open the master in the web browser: http://node01:1080.

## Installing as a service

On Ubuntu with `systemd`, the agents/master can be installed as a service that
runs automatically on system start.

```bash
# create an unprivileged system user
sudo useradd -r nvgpu
```

Copy [nvgpu-agent.service](nvgpu-agent.service) to:

```bash
sudo vi /etc/systemd/system/nvgpu-agent.service
```
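
The unit file looks roughly like the sketch below; the actual [nvgpu-agent.service](nvgpu-agent.service) shipped in the repository may differ, and the `flask` path may need adjusting for your system:

```
# /etc/systemd/system/nvgpu-agent.service (illustrative sketch only)
[Unit]
Description=nvgpu GPU status agent
After=network.target

[Service]
User=nvgpu
Environment=FLASK_APP=nvgpu.webapp
Environment=NVGPU_CLUSTER_CFG=/etc/nvgpu.conf
ExecStart=/usr/local/bin/flask run --host 0.0.0.0 --port 1080
Restart=on-failure

[Install]
WantedBy=multi-user.target
```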

Set the agents in the configuration file for the master:

```bash
sudo vi /etc/nvgpu.conf
```

```python
AGENTS = [
         # direct access without using HTTP
         'self',
         'http://node01:1080',
         'http://node02:1080',
         'http://node03:1080',
         'http://node04:1080',
]
```

Set up and start the service:

```bash
# enable for automatic startup at boot
sudo systemctl enable nvgpu-agent.service
# start
sudo systemctl start nvgpu-agent.service 
# check the status
sudo systemctl status nvgpu-agent.service
```

```bash
# check the service
open http://localhost:1080
```

## Author

- Bohumír Zámečník, [Rossum, Ltd.](https://rossum.ai/)
- License: MIT

## TODO

- order GPUs by priority (decreasing power, decreasing free memory)

            
