###############################################################
cuTENSOR: A High-Performance CUDA Library For Tensor Primitives
###############################################################
`cuTENSOR <https://developer.nvidia.com/cutensor>`_ is a high-performance CUDA library for tensor primitives.

Key Features
============

* Extensive mixed-precision support:

  * FP64 inputs with FP32 compute.
  * FP32 inputs with FP16, BF16, or TF32 compute.
  * Complex-times-real operations.
  * Conjugate (without transpose) support.

* Support for up to 64-dimensional tensors.
* Arbitrary data layouts.
* Trivially serializable data structures.
* Main computational routines:

  * Direct (i.e., transpose-free) tensor contractions:

    * Support for just-in-time compilation of dedicated kernels.

  * Tensor reductions (including partial reductions).
  * Element-wise tensor operations:

    * Support for various activation functions.
    * Support for padding of the output tensor.
    * Arbitrary tensor permutations.
    * Conversion between different data types.

Documentation
=============

Please refer to https://docs.nvidia.com/cuda/cutensor/index.html for the cuTENSOR documentation.

Installation
============

The cuTENSOR wheel can be installed as follows:

.. code-block:: bash

    pip install cutensor-cuXX

where ``XX`` is the CUDA major version (currently CUDA 12 and 13 are supported).

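For example, to match a CUDA 12 or CUDA 13 toolkit the command becomes one of the following:

.. code-block:: bash

    # Pick the package whose suffix matches your CUDA major version.
    pip install cutensor-cu12   # CUDA 12
    pip install cutensor-cu13   # CUDA 13
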
The package ``cutensor`` (without the ``-cuXX`` suffix) is deprecated. If you have
``cutensor`` installed, please remove it prior to installing ``cutensor-cuXX``.
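
As a minimal migration sketch (assuming a CUDA 13 system; substitute ``cutensor-cu12`` on CUDA 12), the switch from the deprecated package can look like this; the final ``pip show`` step is only an optional sanity check:

.. code-block:: bash

    # Remove the deprecated, suffix-less package if it is present.
    pip uninstall -y cutensor

    # Install the wheel matching the local CUDA major version.
    pip install cutensor-cu13

    # Optional sanity check: confirm the installed package and version.
    pip show cutensor-cu13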