PyCUDA: Pythonic Access to CUDA, with Arrays and Algorithms
=============================================================

.. image:: https://gitlab.tiker.net/inducer/pycuda/badges/main/pipeline.svg
    :alt: Gitlab Build Status
    :target: https://gitlab.tiker.net/inducer/pycuda/commits/main
.. image:: https://badge.fury.io/py/pycuda.png
    :target: https://pypi.org/project/pycuda
.. image:: https://zenodo.org/badge/1575319.svg
    :alt: Zenodo DOI for latest release
    :target: https://zenodo.org/badge/latestdoi/1575319

PyCUDA lets you access `Nvidia <https://nvidia.com>`_'s `CUDA
<https://nvidia.com/cuda/>`_ parallel computation API from Python.
Several wrappers of the CUDA API already exist, so what's so special
about PyCUDA?

* Object cleanup tied to lifetime of objects. This idiom, often called
  `RAII <https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization>`_
  in C++, makes it much easier to write correct, leak- and
  crash-free code. PyCUDA knows about dependencies, too, so (for
  example) it won't detach from a context before all memory
  allocated in it is also freed.

* Convenience. Abstractions like pycuda.compiler.SourceModule and
  pycuda.gpuarray.GPUArray make CUDA programming even more
  convenient than with Nvidia's C-based runtime (see the short
  sketch after this list).

* Completeness. PyCUDA puts the full power of CUDA's driver API at
  your disposal, if you wish. It also includes code for
  interoperability with OpenGL.

* Automatic Error Checking. All CUDA errors are automatically
  translated into Python exceptions.

* Speed. PyCUDA's base layer is written in C++, so all the niceties
  above are virtually free.

* Helpful `Documentation <https://documen.tician.de/pycuda>`_.
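
As a quick taste of the convenience point above, here is a minimal,
hedged sketch (assuming a working CUDA toolkit, driver, and GPU; the
kernel and variable names are illustrative, not part of PyCUDA's API):

.. code:: python

    import numpy as np

    import pycuda.autoinit                    # create a context on the first available device
    import pycuda.driver as drv
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule

    # Compile CUDA C at runtime; SourceModule drives the compiler for you.
    mod = SourceModule("""
    __global__ void multiply_them(float *dest, float *a, float *b)
    {
        const int i = threadIdx.x;
        dest[i] = a[i] * b[i];
    }
    """)
    multiply_them = mod.get_function("multiply_them")

    a = np.random.randn(400).astype(np.float32)
    b = np.random.randn(400).astype(np.float32)
    dest = np.zeros_like(a)

    # drv.In/drv.Out handle the host<->device copies around the launch.
    multiply_them(drv.Out(dest), drv.In(a), drv.In(b),
                  block=(400, 1, 1), grid=(1, 1))
    assert np.allclose(dest, a * b)

    # The array layer gives a numpy-like interface to data on the GPU.
    a_gpu = gpuarray.to_gpu(a)
    assert np.allclose((2 * a_gpu).get(), 2 * a)

Because CUDA errors are translated into Python exceptions, a failed
compile or kernel launch in a sketch like this raises an exception
(for example, a subclass of pycuda.driver.Error) rather than returning
an error code that is easy to ignore.
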
Relatedly, like-minded computing goodness for `OpenCL <https://www.khronos.org/registry/OpenCL/>`_
is provided by PyCUDA's sister project `PyOpenCL <https://pypi.org/project/pyopencl>`_.