nvidia-dali-weekly-cuda120


Name: nvidia-dali-weekly-cuda120
Version: 1.43.0.dev20241006
home_page: https://github.com/NVIDIA/dali
Summary: NVIDIA DALI weekly for CUDA 12.0. Git SHA: 2d9d526fa2909f0758336f39a48bae07e9bb2159
upload_time: 2024-10-07 07:26:17
maintainer: None
docs_url: None
author: NVIDIA Corporation
requires_python: <3.13,>=3.8
license: Apache License 2.0
keywords: None
requirements: No requirements were recorded.
Travis-CI: No Travis.
coveralls test coverage: No coveralls.
NVIDIA DALI
===========

The NVIDIA Data Loading Library (DALI) is a library for data loading and
pre-processing to accelerate deep learning applications. It provides a
collection of highly optimized building blocks for loading and processing
image, video and audio data. It can be used as a portable drop-in replacement
for built-in data loaders and data iterators in popular deep learning frameworks.
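
As a rough sketch of what composing these building blocks looks like with the
Python ``fn`` API (the directory path, image size, and batch settings below are
illustrative placeholders, not anything prescribed by this package):

.. code-block:: python

    from nvidia.dali import pipeline_def, fn, types

    @pipeline_def(batch_size=32, num_threads=4, device_id=0)
    def image_pipeline():
        # Read encoded JPEGs and labels from a directory tree (one subdirectory per class).
        jpegs, labels = fn.readers.file(
            file_root="/path/to/images", random_shuffle=True, name="Reader")
        # "mixed" decoding starts on the CPU and finishes on the GPU.
        images = fn.decoders.image(jpegs, device="mixed")
        # The remaining processing steps run entirely on the GPU.
        images = fn.resize(images, resize_x=224, resize_y=224)
        images = fn.crop_mirror_normalize(images, dtype=types.FLOAT, output_layout="CHW")
        return images, labels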

Deep learning applications require complex, multi-stage data processing pipelines
that include loading, decoding, cropping, resizing, and many other augmentations.
These data processing pipelines, which are currently executed on the CPU, have become a
bottleneck, limiting the performance and scalability of training and inference.

DALI addresses the problem of the CPU bottleneck by offloading data preprocessing to the
GPU. Additionally, DALI relies on its own execution engine, built to maximize the throughput
of the input pipeline. Features such as prefetching, parallel execution, and batch processing
are handled transparently for the user.
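
Continuing the sketch above (the values are illustrative): instantiating the
decorated function yields a pipeline object whose execution engine handles
batching and prefetching, and each ``run()`` call returns a whole pre-processed
batch that already resides on the GPU.

.. code-block:: python

    # prefetch_queue_depth controls how many batches DALI prepares ahead of the consumer.
    pipe = image_pipeline(prefetch_queue_depth=2)
    pipe.build()

    images, labels = pipe.run()  # one full batch per call
    # `images` is a TensorListGPU: decoding, resizing, and normalization ran on the GPU.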

In addition, deep learning frameworks each ship their own data pre-processing
implementations, which hurts the portability of training and inference workflows and
makes the code harder to maintain. Data processing pipelines implemented using DALI are
portable because they can easily be retargeted to TensorFlow, PyTorch, MXNet, and PaddlePaddle.
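
For instance, a pipeline like the one sketched above can be wrapped for PyTorch
with DALI's plugin iterator (a hedged example: ``pipe`` refers to the pipeline
built in the previous sketch, and analogous plugins exist for the other
frameworks):

.. code-block:: python

    from nvidia.dali.plugin.pytorch import DALIGenericIterator, LastBatchPolicy

    train_loader = DALIGenericIterator(
        pipelines=[pipe],              # pipeline built in the previous sketch
        output_map=["data", "label"],  # names for the two pipeline outputs
        reader_name="Reader",          # matches the name= given to fn.readers.file
        last_batch_policy=LastBatchPolicy.PARTIAL,
    )

    for batch in train_loader:
        images = batch[0]["data"]   # torch tensor already resident on the GPU
        labels = batch[0]["label"]
        # ...training step...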

For more details, please check the
`latest DALI Documentation <https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html>`_.

.. image:: https://raw.githubusercontent.com/NVIDIA/DALI/main/dali.png
    :width: 800
    :align: center
    :alt: DALI Diagram


            

Raw data

{
    "_id": null,
    "home_page": "https://github.com/NVIDIA/dali",
    "name": "nvidia-dali-weekly-cuda120",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.13,>=3.8",
    "maintainer_email": null,
    "keywords": null,
    "author": "NVIDIA Corporation",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/41/c3/a97a88b37a568c7982b42bd6630fd08ca649f0975a4e3ed8051d6bf84317/nvidia_dali_weekly_cuda120-1.43.0.dev20241006.tar.gz",
    "platform": null,
    "description": "NVIDIA DALI\n===========\n\nThe NVIDIA Data Loading Library (DALI) is a library for data loading and\npre-processing to accelerate deep learning applications. It provides a\ncollection of highly optimized building blocks for loading and processing\nimage, video and audio data. It can be used as a portable drop-in replacement\nfor built in data loaders and data iterators in popular deep learning frameworks.\n\nDeep learning applications require complex, multi-stage data processing pipelines\nthat include loading, decoding, cropping, resizing, and many other augmentations.\nThese data processing pipelines, which are currently executed on the CPU, have become a\nbottleneck, limiting the performance and scalability of training and inference.\n\nDALI addresses the problem of the CPU bottleneck by offloading data preprocessing to the\nGPU. Additionally, DALI relies on its own execution engine, built to maximize the throughput\nof the input pipeline. Features such as prefetching, parallel execution, and batch processing\nare handled transparently for the user.\n\nIn addition, the deep learning frameworks have multiple data pre-processing implementations,\nresulting in challenges such as portability of training and inference workflows, and code\nmaintainability. Data processing pipelines implemented using DALI are portable because they\ncan easily be retargeted to TensorFlow, PyTorch, MXNet and PaddlePaddle.\n\nFor more details please check the\n`latest DALI Documentation <https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html>`_.\n\n.. image:: https://raw.githubusercontent.com/NVIDIA/DALI/main/dali.png\n    :width: 800\n    :align: center\n    :alt: DALI Diagram\n\n",
    "bugtrack_url": null,
    "license": "Apache License 2.0",
    "summary": "NVIDIA DALI weekly  for CUDA 12.0. Git SHA: 2d9d526fa2909f0758336f39a48bae07e9bb2159",
    "version": "1.43.0.dev20241006",
    "project_urls": {
        "Homepage": "https://github.com/NVIDIA/dali"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "41c3a97a88b37a568c7982b42bd6630fd08ca649f0975a4e3ed8051d6bf84317",
                "md5": "17e3f0a0c7534ddfdca1d1d0aaf02349",
                "sha256": "053f9f800aa86262d750989cdecb782522a14cfe54ddc73fd082fa786205f01d"
            },
            "downloads": -1,
            "filename": "nvidia_dali_weekly_cuda120-1.43.0.dev20241006.tar.gz",
            "has_sig": false,
            "md5_digest": "17e3f0a0c7534ddfdca1d1d0aaf02349",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.13,>=3.8",
            "size": 1507,
            "upload_time": "2024-10-07T07:26:17",
            "upload_time_iso_8601": "2024-10-07T07:26:17.932234Z",
            "url": "https://files.pythonhosted.org/packages/41/c3/a97a88b37a568c7982b42bd6630fd08ca649f0975a4e3ed8051d6bf84317/nvidia_dali_weekly_cuda120-1.43.0.dev20241006.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-10-07 07:26:17",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "NVIDIA",
    "github_project": "dali",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "nvidia-dali-weekly-cuda120"
}
        