nvidia-dali-tf-plugin-nightly-cuda120


Name: nvidia-dali-tf-plugin-nightly-cuda120
Version: 1.43.0.dev20240919
Home page: https://github.com/NVIDIA/dali
Summary: NVIDIA DALI nightly TensorFlow plugin for CUDA 12.0. Git SHA: 94f02ad69abe149f345684ef2aba3e13d246881a
Upload time: 2024-09-19 17:44:33
Maintainer: None
Docs URL: None
Author: NVIDIA Corporation
Requires Python: <3.13,>=3.8
License: Apache License 2.0
Keywords: None
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
TensorFlow plugin for NVIDIA DALI
=================================

The TensorFlow plugin enables the use of DALI with TensorFlow.

The NVIDIA Data Loading Library (DALI) is a library for data loading and
pre-processing to accelerate deep learning applications. It provides a
collection of highly optimized building blocks for loading and processing
image, video, and audio data. It can be used as a portable drop-in replacement
for built-in data loaders and data iterators in popular deep learning frameworks.
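
For illustration, a minimal sketch of such a pipeline, wrapped for TensorFlow
through this plugin, might look as follows (the data directory, image size, and
batch size are placeholders chosen for the example, not anything shipped with
this package):

.. code-block:: python

    import tensorflow as tf
    from nvidia.dali import pipeline_def, fn, types
    import nvidia.dali.plugin.tf as dali_tf

    BATCH = 32

    @pipeline_def(batch_size=BATCH, num_threads=4, device_id=0)
    def image_pipeline():
        # Read JPEGs and labels from disk, decode on the GPU ("mixed"),
        # then resize and normalize to a fixed training resolution.
        jpegs, labels = fn.readers.file(file_root="/data/images",
                                        random_shuffle=True, name="Reader")
        images = fn.decoders.image(jpegs, device="mixed")
        images = fn.resize(images, resize_x=224, resize_y=224)
        images = fn.crop_mirror_normalize(images, dtype=types.FLOAT,
                                          output_layout="HWC")
        return images, labels.gpu()

    # Expose the DALI pipeline as a regular tf.data.Dataset.
    # Labels from the file reader carry a trailing unit dimension.
    with tf.device("/gpu:0"):
        dataset = dali_tf.DALIDataset(
            pipeline=image_pipeline(),
            batch_size=BATCH,
            output_shapes=((BATCH, 224, 224, 3), (BATCH, 1)),
            output_dtypes=(tf.float32, tf.int32),
            device_id=0)

    # `dataset` can now be consumed like any other tf.data.Dataset,
    # e.g. passed to model.fit().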

Deep learning applications require complex, multi-stage data processing pipelines
that include loading, decoding, cropping, resizing, and many other augmentations.
These data processing pipelines, which are currently executed on the CPU, have become a
bottleneck, limiting the performance and scalability of training and inference.

DALI addresses the problem of the CPU bottleneck by offloading data preprocessing to the
GPU. Additionally, DALI relies on its own execution engine, built to maximize the throughput
of the input pipeline. Features such as prefetching, parallel execution, and batch processing
are handled transparently for the user.
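
As a sketch of where these features surface in user code, the batch size,
worker-thread count, and prefetch depth are ordinary pipeline arguments (the
values below are illustrative only):

.. code-block:: python

    from nvidia.dali import pipeline_def, fn

    @pipeline_def(batch_size=64,            # batch processing
                  num_threads=8,            # parallel CPU execution
                  device_id=0,
                  prefetch_queue_depth=2)   # batches prepared ahead of the model
    def prefetching_pipeline():
        jpegs, labels = fn.readers.file(file_root="/data/images")
        return fn.decoders.image(jpegs, device="mixed"), labels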

In addition, each deep learning framework has its own data pre-processing
implementation, which creates challenges for the portability of training and
inference workflows and for code maintainability. Data processing pipelines
implemented with DALI are portable because they can easily be retargeted to
TensorFlow, PyTorch, MXNet, and PaddlePaddle, as sketched below.
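
As a sketch of what retargeting looks like, the ``image_pipeline`` definition
from the TensorFlow example above could be reused unchanged with DALI's PyTorch
plugin (assuming that plugin is installed; the output names are illustrative):

.. code-block:: python

    from nvidia.dali.plugin.pytorch import DALIGenericIterator

    pipe = image_pipeline()  # same pipeline definition as in the TensorFlow example
    pipe.build()
    train_loader = DALIGenericIterator(pipe, output_map=["images", "labels"],
                                       reader_name="Reader")

    for data in train_loader:
        images, labels = data[0]["images"], data[0]["labels"]
        # ... feed a PyTorch model here ...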

For more details, please check the
`latest DALI Documentation <https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html>`_.

.. image:: https://raw.githubusercontent.com/NVIDIA/DALI/main/dali.png
    :width: 800
    :align: center
    :alt: DALI Diagram

Raw data

{
    "_id": null,
    "home_page": "https://github.com/NVIDIA/dali",
    "name": "nvidia-dali-tf-plugin-nightly-cuda120",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.13,>=3.8",
    "maintainer_email": null,
    "keywords": null,
    "author": "NVIDIA Corporation",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/cb/ac/9dc3cbf6bec9572babd417d49751a1b429f8c243a782fc9a3fad125baade/nvidia_dali_tf_plugin_nightly_cuda120-1.43.0.dev20240919.tar.gz",
    "platform": null,
    "description": "TensorFlow plugin for NVIDIA DALI\n=================================\n\nThe TensorFlow plugin enables usage of DALI with TensorFlow.\n\nThe NVIDIA Data Loading Library (DALI) is a library for data loading and\npre-processing to accelerate deep learning applications. It provides a\ncollection of highly optimized building blocks for loading and processing\nimage, video and audio data. It can be used as a portable drop-in replacement\nfor built in data loaders and data iterators in popular deep learning frameworks.\n\nDeep learning applications require complex, multi-stage data processing pipelines\nthat include loading, decoding, cropping, resizing, and many other augmentations.\nThese data processing pipelines, which are currently executed on the CPU, have become a\nbottleneck, limiting the performance and scalability of training and inference.\n\nDALI addresses the problem of the CPU bottleneck by offloading data preprocessing to the\nGPU. Additionally, DALI relies on its own execution engine, built to maximize the throughput\nof the input pipeline. Features such as prefetching, parallel execution, and batch processing\nare handled transparently for the user.\n\nIn addition, the deep learning frameworks have multiple data pre-processing implementations,\nresulting in challenges such as portability of training and inference workflows, and code\nmaintainability. Data processing pipelines implemented using DALI are portable because they\ncan easily be retargeted to TensorFlow, PyTorch, MXNet and PaddlePaddle.\n\nFor more details please check the\n`latest DALI Documentation <https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html>`_.\n\n.. image:: https://raw.githubusercontent.com/NVIDIA/DALI/main/dali.png\n    :width: 800\n    :align: center\n    :alt: DALI Diagram\n\n\n\n",
    "bugtrack_url": null,
    "license": "Apache License 2.0",
    "summary": "NVIDIA DALI nightly  TensorFlow plugin for CUDA 12.0. Git SHA: 94f02ad69abe149f345684ef2aba3e13d246881a",
    "version": "1.43.0.dev20240919",
    "project_urls": {
        "Homepage": "https://github.com/NVIDIA/dali"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "cbac9dc3cbf6bec9572babd417d49751a1b429f8c243a782fc9a3fad125baade",
                "md5": "8c439e6cd9434acaf3e6aaa0d52c1347",
                "sha256": "1b7bf93ce2d19dc5b872a595b6d47f6f93a3b41791750a0e3db6781c610053d0"
            },
            "downloads": -1,
            "filename": "nvidia_dali_tf_plugin_nightly_cuda120-1.43.0.dev20240919.tar.gz",
            "has_sig": false,
            "md5_digest": "8c439e6cd9434acaf3e6aaa0d52c1347",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.13,>=3.8",
            "size": 1491,
            "upload_time": "2024-09-19T17:44:33",
            "upload_time_iso_8601": "2024-09-19T17:44:33.329236Z",
            "url": "https://files.pythonhosted.org/packages/cb/ac/9dc3cbf6bec9572babd417d49751a1b429f8c243a782fc9a3fad125baade/nvidia_dali_tf_plugin_nightly_cuda120-1.43.0.dev20240919.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-09-19 17:44:33",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "NVIDIA",
    "github_project": "dali",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "nvidia-dali-tf-plugin-nightly-cuda120"
}
        