clore-pynvml
============

Name: clore-pynvml
Version: 11.5.4
Home page: http://www.nvidia.com/
Summary: Python Bindings for the NVIDIA Management Library
Upload time: 2024-05-23 13:26:28
Author: NVIDIA Corporation
Requires Python: >=3.6
License: BSD

Python bindings to the NVIDIA Management Library
================================================

Provides a Python interface to GPU management and monitoring functions.

This is a wrapper around the NVML library.
For information about the NVML library, see the NVML developer page
http://developer.nvidia.com/nvidia-management-library-nvml

As of version 11.0.0, the NVML-wrappers used in pynvml are identical
to those published through [nvidia-ml-py](https://pypi.org/project/nvidia-ml-py/).

Note that this file can be run with 'python -m doctest -v README.txt',
although the results are system dependent.

Requires
--------
Python 3.6 or later (the distribution declares `requires_python >= 3.6`); the bindings use the standard-library ctypes module.

Installation
------------

From PyPI:

    pip install clore-pynvml

or, from a source checkout:

    pip install .

Usage
-----

You can use the lower-level nvml bindings:

```python
>>> from clore_pynvml import *
>>> nvmlInit()
>>> print("Driver Version:", nvmlSystemGetDriverVersion())
Driver Version: 410.00
>>> deviceCount = nvmlDeviceGetCount()
>>> for i in range(deviceCount):
...     handle = nvmlDeviceGetHandleByIndex(i)
...     print("Device", i, ":", nvmlDeviceGetName(handle))
...
Device 0 : Tesla V100

>>> nvmlShutdown()
```
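
The same init/handle pattern drives other per-device queries. A minimal sketch using the utilization and temperature wrappers; which queries a given board supports, and the values returned, are system dependent:

```python
from clore_pynvml import *

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)
    util = nvmlDeviceGetUtilizationRates(handle)  # struct with .gpu and .memory percentages
    temp = nvmlDeviceGetTemperature(handle, NVML_TEMPERATURE_GPU)  # core temperature in C
    print("GPU util: %d%%  mem util: %d%%  temp: %d C" % (util.gpu, util.memory, temp))
finally:
    nvmlShutdown()
```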

Or the higher-level nvidia_smi API:

```python
from clore_pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')
```

```python
from clore_pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
print(nvsmi.DeviceQuery('--help-query-gpu'))
```
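
DeviceQuery returns a plain dictionary. A hedged sketch of walking the memory query above; the key layout (`gpu`, `fb_memory_usage`) is an assumption based on one observed output shape and may differ between driver and package versions:

```python
from clore_pynvml.smi import nvidia_smi

nvsmi = nvidia_smi.getInstance()
result = nvsmi.DeviceQuery('memory.free, memory.total')

# Assumed shape: {'gpu': [{'fb_memory_usage': {'total': ..., 'free': ..., 'unit': 'MiB'}}, ...]}
for idx, gpu in enumerate(result.get('gpu', [])):
    mem = gpu.get('fb_memory_usage', {})
    print("GPU %d: %s / %s %s free" % (idx, mem.get('free'), mem.get('total'), mem.get('unit')))
```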

Functions
---------
Python functions wrap NVML functions, which are implemented in a C shared library.
Each wrapper is used like its C counterpart, with the following exceptions (a
combined sketch follows this list):

- Instead of returning error codes, failing calls raise Python exceptions.

    ```python
    >>> try:
    ...     nvmlDeviceGetCount()
    ... except NVMLError as error:
    ...     print(error)
    ...
    Uninitialized
    ```

- C function output parameters are returned from the corresponding
  Python function left to right.

    ```c
    nvmlReturn_t nvmlDeviceGetEccMode(nvmlDevice_t device,
                                      nvmlEnableState_t *current,
                                      nvmlEnableState_t *pending);
    ```

    ```python
    >>> nvmlInit()
    >>> handle = nvmlDeviceGetHandleByIndex(0)
    >>> (current, pending) = nvmlDeviceGetEccMode(handle)
    ```

- C structs are converted into Python classes.

    ```c
    nvmlReturn_t DECLDIR nvmlDeviceGetMemoryInfo(nvmlDevice_t device,
                                                 nvmlMemory_t *memory);
    typedef struct nvmlMemory_st {
        unsigned long long total;
        unsigned long long free;
        unsigned long long used;
    } nvmlMemory_t;
    ```

    ```python
    >>> info = nvmlDeviceGetMemoryInfo(handle)
    >>> print("Total memory:", info.total)
    Total memory: 5636292608
    >>> print("Free memory:", info.free)
    Free memory: 5578420224
    >>> print("Used memory:", info.used)
    Used memory: 57872384
    ```

- Python handles string buffer creation.

    ```c
    nvmlReturn_t nvmlSystemGetDriverVersion(char* version,
                                            unsigned int length);
    ```

    ```python
    >>> version = nvmlSystemGetDriverVersion()
    >>> nvmlShutdown()
    ```
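
Taken together, these conventions keep a defensive query loop short. The sketch below combines them; comparing `err.value` against `NVML_ERROR_NOT_SUPPORTED` is one reasonable way to skip queries a given board does not implement, and the exact output is system dependent:

```python
from clore_pynvml import *

nvmlInit()
try:
    print("Driver:", nvmlSystemGetDriverVersion())      # string buffer handled for us
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        info = nvmlDeviceGetMemoryInfo(handle)           # C struct -> Python object
        print("GPU %d: %d / %d bytes used" % (i, info.used, info.total))
        try:
            current, pending = nvmlDeviceGetEccMode(handle)  # two output params -> tuple
            print("  ECC current/pending:", current, pending)
        except NVMLError as err:
            # Not every board supports ECC; skip rather than fail.
            if err.value != NVML_ERROR_NOT_SUPPORTED:
                raise
finally:
    nvmlShutdown()
```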

For usage information see the NVML documentation.

Variables
---------

All meaningful NVML constants and enums are exposed in Python.

The NVML_VALUE_NOT_AVAILABLE constant is not used; instead, None is returned for any field whose value is not available.
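
Enum values, for example, are passed straight to the wrappers. A small sketch using the clock-type constants, assuming an initialized session and at least one visible device:

```python
from clore_pynvml import *

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)
    # NVML_CLOCK_GRAPHICS, NVML_CLOCK_SM and NVML_CLOCK_MEM mirror the nvmlClockType_t enum
    print("Graphics clock (MHz):", nvmlDeviceGetClockInfo(handle, NVML_CLOCK_GRAPHICS))
    print("Memory clock (MHz):", nvmlDeviceGetClockInfo(handle, NVML_CLOCK_MEM))
finally:
    nvmlShutdown()
```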

NVML Permissions
----------------

Many of the `pynvml` wrappers assume that the underlying NVIDIA Management Library (NVML) API can be used without admin/root privileges. However, system permissions can prevent pynvml from querying GPU performance counters. For example:

```
$ nvidia-smi nvlink -g 0
GPU 0: Tesla V100-SXM2-32GB (UUID: GPU-96ab329d-7a1f-73a8-a9b7-18b4b2855f92)
NVML: Unable to get the NvLink link utilization counter control for link 0: Insufficient Permissions
```

A simple way to check the permission status is to look for `RmProfilingAdminOnly` in the driver `params` file (`RmProfilingAdminOnly == 1` means that admin/sudo access is required):

```
$ grep RmProfilingAdminOnly /proc/driver/nvidia/params
RmProfilingAdminOnly: 1
```

For more information on setting/unsetting the relevant admin privileges, see [these notes](https://developer.nvidia.com/nvidia-development-tools-solutions-ERR_NVGPUCTRPERM-permission-issue-performance-counters) on resolving `ERR_NVGPUCTRPERM` errors.
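
The check can also be scripted rather than read by hand. A hedged sketch that parses the `params` file shown above; the path applies to Linux with the proprietary driver, and the helper name `profiling_admin_only` is illustrative:

```python
# Illustrative helper: report whether profiling counters require admin access.
def profiling_admin_only(params_path="/proc/driver/nvidia/params"):
    try:
        with open(params_path) as f:
            for line in f:
                if line.startswith("RmProfilingAdminOnly"):
                    return line.split(":", 1)[1].strip() == "1"
    except OSError:
        pass  # file missing: driver not loaded, or not Linux
    return None  # flag not found / unknown

print("Profiling admin-only:", profiling_admin_only())
```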


Release Notes
-------------

-   Version 2.285.0
    - Added new functions for NVML 2.285.  See NVML documentation for more information.
    - Ported to support Python 3.0 and Python 2.0 syntax.
    - Added nvidia_smi.py tool as a sample app.
-   Version 3.295.0
    - Added new functions for NVML 3.295.  See NVML documentation for more information.
    - Updated nvidia_smi.py tool
      - Includes additional error handling
-   Version 4.304.0
    - Added new functions for NVML 4.304.  See NVML documentation for more information.
    - Updated nvidia_smi.py tool
-   Version 4.304.3
    - Fixed nvmlUnitGetDeviceCount bug
-   Version 5.319.0
    - Added new functions for NVML 5.319.  See NVML documentation for more information.
-   Version 6.340.0
    - Added new functions for NVML 6.340.  See NVML documentation for more information.
-   Version 7.346.0
    - Added new functions for NVML 7.346.  See NVML documentation for more information.
-   Version 7.352.0
    - Added new functions for NVML 7.352.  See NVML documentation for more information.
-   Version 8.0.0
    - Refactored code into an nvidia_smi singleton class
    - Added DeviceQuery, which returns a dictionary of (name, value) pairs.
    - Added filter parameters on DeviceQuery to match the query API in nvidia-smi
    - Added filter parameters on XmlDeviceQuery to match the query API in nvidia-smi
    - Added integer enumeration for filter strings to reduce overhead for performance monitoring.
    - Added a loop(filter) method with async and callback support
-   Version 8.0.1
    - Restructured directories into two packages (pynvml and nvidia_smi)
    - Added initial tests for both packages
    - Cleaned up some naming conventions in pynvml
-   Version 8.0.2
    - Added NVLink function wrappers for pynvml module
-   Version 8.0.3
    - Added versioneer
    - Fixed nvmlDeviceGetNvLinkUtilizationCounter bug
-   Version 8.0.4
    - Added nvmlDeviceGetTotalEnergyConsumption
    - Added notes about NVML permissions
    - Fixed version-check testing
-   Version 11.0.0
    - Updated nvml.py to CUDA 11
    - Updated smi.py DeviceQuery to R460
    - Aligned nvml.py with latest nvidia-ml-py deployment
-   Version 11.4.0
    - Updated nvml.py to CUDA 11.4
    - Updated smi.py NVML_BRAND_NAMES
    - Aligned nvml.py with latest nvidia-ml-py deployment (11.495.46)
-   Version 11.4.1
    - Fixed comma bugs in nvml.py
-   Version 11.5.0
    - Updated nvml.py to support CUDA 11.5 and CUDA 12
    - Aligned with latest nvidia-ml-py deployment (11.525.84)
-   Version 11.5.4 CLORE
    - Removed versioneer
    - Fixed nvmlDeviceGetGpcClkMinMaxVfOffset and nvmlDeviceGetMemClkMinMaxVfOffset

            
