PyCUDA: Pythonic Access to CUDA, with Arrays and Algorithms
=============================================================

.. image:: https://gitlab.tiker.net/inducer/pycuda/badges/main/pipeline.svg
    :alt: Gitlab Build Status
    :target: https://gitlab.tiker.net/inducer/pycuda/commits/main
.. image:: https://badge.fury.io/py/pycuda.png
    :target: https://pypi.org/project/pycuda
.. image:: https://zenodo.org/badge/1575319.svg
    :alt: Zenodo DOI for latest release
    :target: https://zenodo.org/badge/latestdoi/1575319

PyCUDA lets you access `Nvidia <https://nvidia.com>`_'s `CUDA
<https://nvidia.com/cuda/>`_ parallel computation API from Python.
Several wrappers of the CUDA API already exist, so what's so special
about PyCUDA?

* Object cleanup tied to lifetime of objects. This idiom, often called
  `RAII <https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization>`_
  in C++, makes it much easier to write correct, leak- and crash-free
  code. PyCUDA knows about dependencies, too, so (for example) it won't
  detach from a context before all memory allocated in it is also freed.

* Convenience. Abstractions like ``pycuda.driver.SourceModule`` and
  ``pycuda.gpuarray.GPUArray`` make CUDA programming even more
  convenient than with Nvidia's C-based runtime.

* Completeness. PyCUDA puts the full power of CUDA's driver API at
  your disposal, if you wish. It also includes code for
  interoperability with OpenGL.

* Automatic Error Checking. All CUDA errors are automatically
  translated into Python exceptions.

* Speed. PyCUDA's base layer is written in C++, so all the niceties
  above are virtually free.

* Helpful `Documentation <https://documen.tician.de/pycuda>`_.
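
The points above can be seen together in a small sketch, in the spirit
of PyCUDA's tutorial: a kernel is compiled at runtime with
``SourceModule``, data moves through a NumPy-like ``GPUArray``, and
cleanup happens automatically when objects go out of scope. This
requires an Nvidia GPU, the CUDA toolkit, and PyCUDA installed, so
treat it as illustrative rather than a guaranteed-runnable snippet:

.. code-block:: python

    import numpy as np

    import pycuda.autoinit  # creates a CUDA context, torn down at exit
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule

    # Compile CUDA C at runtime; compile errors surface as Python exceptions.
    mod = SourceModule("""
    __global__ void double_them(float *a)
    {
        int idx = threadIdx.x + blockIdx.x * blockDim.x;
        a[idx] *= 2.0f;
    }
    """)
    double_them = mod.get_function("double_them")

    # GPUArray mirrors the NumPy interface; its device memory is freed
    # automatically when the object is garbage-collected (the RAII idiom).
    a = gpuarray.to_gpu(np.random.randn(400).astype(np.float32))
    double_them(a, block=(400, 1, 1), grid=(1, 1))
    result = a.get()  # copy back to a NumPy array on the host

Note that launch misconfigurations and other CUDA errors raise
``pycuda.driver.Error`` subclasses rather than failing silently.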
Relatedly, like-minded computing goodness for `OpenCL <https://www.khronos.org/registry/OpenCL/>`_
is provided by PyCUDA's sister project `PyOpenCL <https://pypi.org/project/pyopencl>`_.