Python bindings to the NVIDIA Management Library
================================================

- Package: pynvml 11.4.1
- Author: NVIDIA Corporation
- Home page: http://www.nvidia.com/
- License: BSD
- Requires: Python >=3.6
- Uploaded: 2021-12-08 21:28:57

Provides a Python interface to GPU management and monitoring functions.

This is a wrapper around the NVML library.
For information about the NVML library, see the NVML developer page
http://developer.nvidia.com/nvidia-management-library-nvml

As of version 11.0.0, the NVML-wrappers used in pynvml are identical
to those published through [nvidia-ml-py](https://pypi.org/project/nvidia-ml-py/).

Note that this file can be run with `python -m doctest -v README.txt`,
although the results are system-dependent.

Requires
--------
Python 3.6 or later (the bindings use the standard `ctypes` module).

Installation
------------

    pip install pynvml

or, from a source checkout:

    pip install .

Usage
-----

You can use the lower-level NVML bindings directly:

```python
>>> from pynvml import *
>>> nvmlInit()
>>> print("Driver Version:", nvmlSystemGetDriverVersion())
Driver Version: 410.00
>>> deviceCount = nvmlDeviceGetCount()
>>> for i in range(deviceCount):
...     handle = nvmlDeviceGetHandleByIndex(i)
...     print("Device", i, ":", nvmlDeviceGetName(handle))
...
Device 0 : Tesla V100

>>> nvmlShutdown()
```
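
In a longer-running application it is common to pair `nvmlInit()` with
`nvmlShutdown()` in a `try`/`finally` block. Below is a minimal monitoring
sketch along those lines; the utilization and memory calls are standard NVML
wrappers, but the exact return types (for example `bytes` vs `str` for device
names) can vary between pynvml versions.

```python
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetCount, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetMemoryInfo, nvmlDeviceGetUtilizationRates, NVMLError,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        mem = nvmlDeviceGetMemoryInfo(handle)         # .total/.free/.used in bytes
        util = nvmlDeviceGetUtilizationRates(handle)  # .gpu/.memory in percent
        print(f"GPU {i}: {util.gpu}% busy, "
              f"{mem.used / 1024**2:.0f}/{mem.total / 1024**2:.0f} MiB used")
except NVMLError as error:
    print("NVML query failed:", error)
finally:
    nvmlShutdown()
```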

Or the higher-level `nvidia_smi` API (the second snippet below prints the list of supported query strings):

```python
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')
```

```python
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
print(nvsmi.DeviceQuery('--help-query-gpu'), end='\n')
```
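
`DeviceQuery` returns a nested dictionary. The sketch below reads the free and
total framebuffer memory back out of the result, assuming the usual
`'gpu'`/`'fb_memory_usage'` key layout; the exact keys and units can differ
between driver and pynvml versions.

```python
from pynvml.smi import nvidia_smi

nvsmi = nvidia_smi.getInstance()
result = nvsmi.DeviceQuery('memory.free, memory.total')

# 'gpu' is expected to hold one entry per device.
for i, gpu in enumerate(result.get('gpu', [])):
    fb = gpu.get('fb_memory_usage', {})
    print(f"GPU {i}: {fb.get('free')} / {fb.get('total')} {fb.get('unit', 'MiB')} free")
```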

Functions
---------
Python methods wrap NVML functions, implemented in a C shared library.
Each function is used in the same way as its C counterpart, with the following exceptions:

- Instead of returning error codes, failing calls raise Python exceptions
  (see the error-handling sketch after this list).

    ```python
    >>> try:
    ...     nvmlDeviceGetCount()
    ... except NVMLError as error:
    ...     print(error)
    ...
    Uninitialized
    ```

- C function output parameters are returned from the corresponding
  Python function left to right.

    ```c
    nvmlReturn_t nvmlDeviceGetEccMode(nvmlDevice_t device,
                                      nvmlEnableState_t *current,
                                      nvmlEnableState_t *pending);
    ```

    ```python
    >>> nvmlInit()
    >>> handle = nvmlDeviceGetHandleByIndex(0)
    >>> (current, pending) = nvmlDeviceGetEccMode(handle)
    ```

- C structs are converted into Python classes.

    ```c
    nvmlReturn_t DECLDIR nvmlDeviceGetMemoryInfo(nvmlDevice_t device,
                                                 nvmlMemory_t *memory);
    typedef struct nvmlMemory_st {
        unsigned long long total;
        unsigned long long free;
        unsigned long long used;
    } nvmlMemory_t;
    ```

    ```python
    >>> info = nvmlDeviceGetMemoryInfo(handle)
    >>> print "Total memory:", info.total
    Total memory: 5636292608
    >>> print "Free memory:", info.free
    Free memory: 5578420224
    >>> print "Used memory:", info.used
    Used memory: 57872384
    ```

- Python handles string buffer creation.

    ```c
    nvmlReturn_t nvmlSystemGetDriverVersion(char* version,
                                            unsigned int length);
    ```

    ```python
    >>> version = nvmlSystemGetDriverVersion()
    >>> nvmlShutdown()
    ```
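
Individual NVML error codes are also exposed, both as `NVML_ERROR_*` constants
and (in recent pynvml versions) as `NVMLError` subclasses such as
`NVMLError_NotSupported`. A hedged sketch of handling one specific failure,
assuming those names are available in your pynvml version:

```python
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetTotalEnergyConsumption, NVMLError, NVML_ERROR_NOT_SUPPORTED,
)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)
    # Not every GPU/driver exposes this counter, so the call may fail.
    energy_mj = nvmlDeviceGetTotalEnergyConsumption(handle)
    print("Total energy consumption (mJ):", energy_mj)
except NVMLError as error:
    if error.value == NVML_ERROR_NOT_SUPPORTED:
        print("Energy counter not supported on this GPU")
    else:
        raise
finally:
    nvmlShutdown()
```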

For usage information see the NVML documentation.

Variables
---------

All meaningful NVML constants and enums are exposed in Python.

The NVML_VALUE_NOT_AVAILABLE constant is not used. Instead, None is mapped to the corresponding field.
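
For example, constants such as `NVML_TEMPERATURE_GPU` are passed directly to
the wrappers. A minimal sketch, assuming device 0 exposes a GPU temperature
sensor:

```python
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetTemperature, NVML_TEMPERATURE_GPU,
)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)
    # NVML_TEMPERATURE_GPU selects the on-die GPU temperature sensor.
    temp_c = nvmlDeviceGetTemperature(handle, NVML_TEMPERATURE_GPU)
    print("GPU 0 temperature (C):", temp_c)
finally:
    nvmlShutdown()
```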

NVML Permissions
----------------

Many of the `pynvml` wrappers assume that the underlying NVIDIA Management Library (NVML) API can be used without admin/root privileges.  However, it is certainly possible for the system permissions to prevent pynvml from querying GPU performance counters. For example:

```
$ nvidia-smi nvlink -g 0
GPU 0: Tesla V100-SXM2-32GB (UUID: GPU-96ab329d-7a1f-73a8-a9b7-18b4b2855f92)
NVML: Unable to get the NvLink link utilization counter control for link 0: Insufficient Permissions
```

A simple way to check the permissions status is to look for `RmProfilingAdminOnly` in the driver `params` file (Note that `RmProfilingAdminOnly == 1` means that admin/sudo access is required):

```
$ cat /proc/driver/nvidia/params | grep RmProfilingAdminOnly
RmProfilingAdminOnly: 1
```
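
The same check can be scripted from Python. The sketch below simply parses the
`params` file shown above; the path and key name are taken from that example,
and the file may be absent when the NVIDIA kernel driver is not loaded.

```python
def profiling_requires_admin(params_path="/proc/driver/nvidia/params"):
    """Return True if RmProfilingAdminOnly is set to 1 in the driver params."""
    try:
        with open(params_path) as f:
            for line in f:
                key, _, value = line.partition(":")
                if key.strip() == "RmProfilingAdminOnly":
                    return value.strip() == "1"
    except OSError:
        pass  # params file unavailable; no restriction detectable this way
    return False

print("Profiling requires admin:", profiling_requires_admin())
```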

For more information on setting/unsetting the relevant admin privileges, see [these notes](https://developer.nvidia.com/nvidia-development-tools-solutions-ERR_NVGPUCTRPERM-permission-issue-performance-counters) on resolving `ERR_NVGPUCTRPERM` errors.


Release Notes
-------------

-   Version 2.285.0
    - Added new functions for NVML 2.285.  See NVML documentation for more information.
    - Ported to support Python 3.0 and Python 2.0 syntax.
    - Added nvidia_smi.py tool as a sample app.
-   Version 3.295.0
    - Added new functions for NVML 3.295.  See NVML documentation for more information.
    - Updated nvidia_smi.py tool
      - Includes additional error handling
-   Version 4.304.0
    - Added new functions for NVML 4.304.  See NVML documentation for more information.
    - Updated nvidia_smi.py tool
-   Version 4.304.3
    - Fixed nvmlUnitGetDeviceCount bug
-   Version 5.319.0
    - Added new functions for NVML 5.319.  See NVML documentation for more information.
-   Version 6.340.0
    - Added new functions for NVML 6.340.  See NVML documentation for more information.
-   Version 7.346.0
    - Added new functions for NVML 7.346.  See NVML documentation for more information.
-   Version 7.352.0
    - Added new functions for NVML 7.352.  See NVML documentation for more information.
-   Version 8.0.0
    - Refactored code into an nvidia_smi singleton class
    - Added DeviceQuery, which returns a dictionary of (name, value) pairs
    - Added filter parameters on DeviceQuery to match the query API in nvidia-smi
    - Added filter parameters on XmlDeviceQuery to match the query API in nvidia-smi
    - Added integer enumeration for filter strings to reduce overhead for performance monitoring
    - Added loop(filter) method with async and callback support
-   Version 8.0.1
    - Restructured directories into two packages (pynvml and nvidia_smi)
    - Added initial tests for both packages
    - Some name-convention cleanup in pynvml
-   Version 8.0.2
    - Added NVLink function wrappers for pynvml module
-   Version 8.0.3
    - Added versioneer
    - Fixed nvmlDeviceGetNvLinkUtilizationCounter bug
-   Version 8.0.4
    - Added nvmlDeviceGetTotalEnergyConsumption
    - Added notes about NVML permissions
    - Fixed version-check testing
-   Version 11.0.0
    - Updated nvml.py to CUDA 11
    - Updated smi.py DeviceQuery to R460
    - Aligned nvml.py with latest nvidia-ml-py deployment
-   Version 11.4.0
    - Updated nvml.py to CUDA 11.4
    - Updated smi.py NVML_BRAND_NAMES
    - Aligned nvml.py with latest nvidia-ml-py deployment (11.495.46)
-   Version 11.4.1
    - Fixed comma bugs in nvml.py



            
