nvidia-dali-weekly-cuda120


Name: nvidia-dali-weekly-cuda120
Version: 1.38.0.dev20240428
Home page: https://github.com/NVIDIA/dali
Summary: NVIDIA DALI weekly for CUDA 12.0. Git SHA: 82983535cd65dc1ba11018b4b35dbae6e2c305d5
Upload time: 2024-04-29 10:06:15
Maintainer: None
Docs URL: None
Author: NVIDIA Corporation
Requires Python: <3.13,>=3.8
License: Apache License 2.0
Keywords: None
Requirements: No requirements were recorded.
Travis CI: No Travis.
Coveralls test coverage: No coveralls.
NVIDIA DALI
===========

The NVIDIA Data Loading Library (DALI) is a library for data loading and
pre-processing to accelerate deep learning applications. It provides a
collection of highly optimized building blocks for loading and processing
image, video and audio data. It can be used as a portable drop-in replacement
for built-in data loaders and data iterators in popular deep learning frameworks.
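
For example, a minimal image pipeline can be assembled from DALI's ``fn``
operators; the ``./images`` directory and the batch, thread and size values
below are placeholders, not recommended settings:

.. code-block:: python

    from nvidia.dali import pipeline_def
    import nvidia.dali.fn as fn

    @pipeline_def(batch_size=32, num_threads=4, device_id=0)
    def image_pipeline():
        # "./images" is a placeholder path with one subdirectory per class
        encoded, labels = fn.readers.file(file_root="./images", random_shuffle=True)
        images = fn.decoders.image(encoded, device="mixed")  # decode on the GPU
        images = fn.resize(images, resize_x=224, resize_y=224)
        return images, labels

    pipe = image_pipeline()
    pipe.build()
    images, labels = pipe.run()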

Deep learning applications require complex, multi-stage data processing pipelines
that include loading, decoding, cropping, resizing, and many other augmentations.
These data processing pipelines, which are currently executed on the CPU, have become a
bottleneck, limiting the performance and scalability of training and inference.

DALI addresses the problem of the CPU bottleneck by offloading data preprocessing to the
GPU. Additionally, DALI relies on its own execution engine, built to maximize the throughput
of the input pipeline. Features such as prefetching, parallel execution, and batch processing
are handled transparently for the user.
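
These behaviors are exposed as ordinary pipeline arguments and operator device
placement; in the following sketch the queue depth, thread count and data path
are arbitrary placeholder values:

.. code-block:: python

    from nvidia.dali import pipeline_def
    import nvidia.dali.fn as fn

    # prefetch_queue_depth keeps batches prepared ahead of the consumer,
    # num_threads controls CPU-side parallelism, and device="mixed" moves
    # decoding (and everything downstream) onto the GPU.
    @pipeline_def(batch_size=64, num_threads=8, device_id=0, prefetch_queue_depth=2)
    def gpu_pipeline():
        encoded, labels = fn.readers.file(file_root="./train")  # placeholder path
        images = fn.decoders.image(encoded, device="mixed")
        images = fn.random_resized_crop(images, size=(224, 224))
        return images, labels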

In addition, the deep learning frameworks have multiple data pre-processing implementations,
resulting in challenges such as portability of training and inference workflows, and code
maintainability. Data processing pipelines implemented using DALI are portable because they
can easily be retargeted to TensorFlow, PyTorch, MXNet and PaddlePaddle.
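
For instance, a pipeline defined once can be consumed from PyTorch through
DALI's framework plugin; the ``./images`` path below is a placeholder, and
analogous iterator plugins exist for the other frameworks:

.. code-block:: python

    from nvidia.dali import pipeline_def
    import nvidia.dali.fn as fn
    from nvidia.dali.plugin.pytorch import DALIGenericIterator

    @pipeline_def(batch_size=32, num_threads=4, device_id=0)
    def train_pipeline():
        # "./images" is a placeholder path; the reader is named so the
        # iterator can query its size
        encoded, labels = fn.readers.file(
            file_root="./images", random_shuffle=True, name="Reader")
        images = fn.decoders.image(encoded, device="mixed")
        images = fn.resize(images, resize_x=224, resize_y=224)
        return images, labels.gpu()

    pipe = train_pipeline()
    pipe.build()

    # Yields per-pipeline dicts of torch tensors already resident on the GPU.
    loader = DALIGenericIterator(pipe, ["data", "label"], reader_name="Reader")
    for batch in loader:
        images, labels = batch[0]["data"], batch[0]["label"]
        # ... run the training step here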

For more details, please check the
`latest DALI Documentation <https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html>`_.

.. image:: https://raw.githubusercontent.com/NVIDIA/DALI/main/dali.png
    :width: 800
    :align: center
    :alt: DALI Diagram


            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/NVIDIA/dali",
    "name": "nvidia-dali-weekly-cuda120",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.13,>=3.8",
    "maintainer_email": null,
    "keywords": null,
    "author": "NVIDIA Corporation",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/be/f1/a8f941c754e25b7fb2947810456041daedff1dd46df1345c191b6a32c098/nvidia_dali_weekly_cuda120-1.38.0.dev20240428.tar.gz",
    "platform": null,
    "description": "NVIDIA DALI\n===========\n\nThe NVIDIA Data Loading Library (DALI) is a library for data loading and\npre-processing to accelerate deep learning applications. It provides a\ncollection of highly optimized building blocks for loading and processing\nimage, video and audio data. It can be used as a portable drop-in replacement\nfor built in data loaders and data iterators in popular deep learning frameworks.\n\nDeep learning applications require complex, multi-stage data processing pipelines\nthat include loading, decoding, cropping, resizing, and many other augmentations.\nThese data processing pipelines, which are currently executed on the CPU, have become a\nbottleneck, limiting the performance and scalability of training and inference.\n\nDALI addresses the problem of the CPU bottleneck by offloading data preprocessing to the\nGPU. Additionally, DALI relies on its own execution engine, built to maximize the throughput\nof the input pipeline. Features such as prefetching, parallel execution, and batch processing\nare handled transparently for the user.\n\nIn addition, the deep learning frameworks have multiple data pre-processing implementations,\nresulting in challenges such as portability of training and inference workflows, and code\nmaintainability. Data processing pipelines implemented using DALI are portable because they\ncan easily be retargeted to TensorFlow, PyTorch, MXNet and PaddlePaddle.\n\nFor more details please check the\n`latest DALI Documentation <https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html>`_.\n\n.. image:: https://raw.githubusercontent.com/NVIDIA/DALI/main/dali.png\n    :width: 800\n    :align: center\n    :alt: DALI Diagram\n\n",
    "bugtrack_url": null,
    "license": "Apache License 2.0",
    "summary": "NVIDIA DALI weekly  for CUDA 12.0. Git SHA: 82983535cd65dc1ba11018b4b35dbae6e2c305d5",
    "version": "1.38.0.dev20240428",
    "project_urls": {
        "Homepage": "https://github.com/NVIDIA/dali"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "bef1a8f941c754e25b7fb2947810456041daedff1dd46df1345c191b6a32c098",
                "md5": "c320a43b00cdde7dccf58f328a8f9966",
                "sha256": "30209fb30ca3d8985bd5b69614bb5d01afd00e69a2e8d9df77078abeaa141407"
            },
            "downloads": -1,
            "filename": "nvidia_dali_weekly_cuda120-1.38.0.dev20240428.tar.gz",
            "has_sig": false,
            "md5_digest": "c320a43b00cdde7dccf58f328a8f9966",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.13,>=3.8",
            "size": 1498,
            "upload_time": "2024-04-29T10:06:15",
            "upload_time_iso_8601": "2024-04-29T10:06:15.496177Z",
            "url": "https://files.pythonhosted.org/packages/be/f1/a8f941c754e25b7fb2947810456041daedff1dd46df1345c191b6a32c098/nvidia_dali_weekly_cuda120-1.38.0.dev20240428.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-29 10:06:15",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "NVIDIA",
    "github_project": "dali",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "nvidia-dali-weekly-cuda120"
}
        