Name | torchCompactRadius |
Version | 0.5.5 |
home_page | https://github.com/wi-re/torchCompactRadius |
Summary | Compact Hashing based radius search for pyTorch using C++/CUDA backends. |
upload_time | 2025-07-29 12:30:44 |
maintainer | None |
docs_url | None |
author | Rene Winchenbach |
requires_python | None |
license | MIT License
Copyright (c) 2024 Rene Winchenbach
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
|
keywords | sph, radius, pytorch |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# pyTorch Compact Radius
This repository contains an implementation of a compact-hashing-based neighborhood search for 1D, 2D and 3D data for pyTorch using a C++/CUDA backend. This code is designed for large-scale problems, e.g., point clouds with $\gg 10^3$ points, as they occur in SPH simulations. For smaller problems, other libraries such as [torch-cluster](https://github.com/rusty1s/pytorch_cluster) might be a more appropriate fit.
Requirements:
> pyTorch >= 2.0
The module is either built just-in-time (this is what you get when you install it via pip directly) or pre-built for a variety of systems via conda or our website. Note that on macOS an external clang compiler installed via Homebrew is required for OpenMP support.
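A minimal sketch of such a macOS setup, assuming Homebrew is already installed (the package names and paths below are assumptions, not part of this package's documentation; the paths shown are for Apple Silicon and differ on Intel machines):
```bash
# Assumed setup: install an OpenMP-capable clang toolchain via Homebrew
brew install llvm libomp
# Point the build at the Homebrew clang (Apple Silicon path; adjust for your system)
export CC=/opt/homebrew/opt/llvm/bin/clang
export CXX=/opt/homebrew/opt/llvm/bin/clang++
```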
## Installation
__Anaconda__:
```bash
conda install pytorch pyfluids::torch-compact-radius torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia
```
__pip__:
```bash
pip install torchCompactRadius -f https://fluids.dev/torchCompactRadius/wheels/torch-2.5.0+{cuTag}/
```
Note, if you are using Google Colab (or similar) you can run
```py
import torch
!pip install torchCompactRadius -f https://fluids.dev/torchCompactRadius/wheels/torch-{version}/
```
Alternatively, you can install the JIT-compiled version available on PyPI. Note that if you install the latter, it makes sense to limit which CUDA architectures the code is compiled for before importing torchCompactRadius:
```py
import os
import torch
# Restrict just-in-time compilation to the architecture of the local GPU
os.environ['TORCH_CUDA_ARCH_LIST'] = f'{torch.cuda.get_device_properties(0).major}.{torch.cuda.get_device_properties(0).minor}'
import torchCompactRadius
```
## Usage and Example
__This has changed from previous versions__
This package provides two primary functions, `radius` and `radiusSearch`. `radius` is designed as a drop-in replacement for torch_cluster's `radius` function, whereas `radiusSearch` is the preferred interface. __Important:__ `radius` and `radiusSearch` return index pairs in flipped order!
To call `radiusSearch`, we use a set of NamedTuples to make the calling conventions less error-prone; these are:
```py
class DomainDescription(NamedTuple):
    min: torch.Tensor
    max: torch.Tensor
    periodicity: Union[bool, torch.Tensor]
    dim: int

class PointCloud(NamedTuple):
    positions: torch.Tensor
    supports: Optional[torch.Tensor] = None

class SparseCOO(NamedTuple):
    row: torch.Tensor
    col: torch.Tensor
    numRows: torch.Tensor
    numCols: torch.Tensor

class SparseCSR(NamedTuple):
    indices: torch.Tensor
    indptr: torch.Tensor
    rowEntries: torch.Tensor
    numRows: torch.Tensor
    numCols: torch.Tensor
```
Based on these we can then construct an input set:
```py
# Assumed for this example: a target device and whether the domain is periodic
device = 'cuda' if torch.cuda.is_available() else 'cpu'
periodic = True

dim = 2
targetNumNeighbors = 32
nx = 32

minDomain = torch.tensor([-1] * dim, dtype = torch.float32, device = device)
maxDomain = torch.tensor([ 1] * dim, dtype = torch.float32, device = device)
periodicity = torch.tensor([periodic] * dim, device = device, dtype = torch.bool)

extent = maxDomain - minDomain
shortExtent = torch.min(extent, dim = 0)[0].item()
dx = shortExtent / nx
# volumeToSupport converts a particle volume and a target neighbor count into a support radius h
h = volumeToSupport(dx**dim, targetNumNeighbors, dim)

# Place points on a regular grid spanning the domain
positions = []
for d in range(dim):
    positions.append(torch.linspace(minDomain[d] + dx / 2, maxDomain[d] - dx / 2, int((extent[d] - dx) / dx) + 1, device = device))
grid = torch.meshgrid(*positions, indexing = 'xy')
positions = torch.stack(grid, dim = -1).reshape(-1, dim).to(device)
supports = torch.ones(positions.shape[0], device = device) * h

domainDescription = DomainDescription(minDomain, maxDomain, periodicity, dim)
pointCloudX = PointCloud(positions, supports)
```
We can then call the `radiusSearch` method to compute the neighborhood in COO format:
```py
adjacency = radiusSearch(pointCloudX, domain = domainDescription)
```
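The returned `adjacency` follows the `SparseCOO` layout shown above. As a hedged sketch of how it might be consumed (assuming `row`/`col` index into the query and reference point clouds respectively, and that `positions` and `device` are the variables from the example above), per-point neighbor counts and mean neighbor distances can be derived like this:
```py
# Hedged sketch: adjacency.row[k] and adjacency.col[k] form the k-th neighbor pair.
i, j = adjacency.row, adjacency.col

# Number of neighbors per query point
neighborCounts = torch.bincount(i, minlength = positions.shape[0])

# Mean distance to the neighbors of each point (illustrative only; for periodic
# domains the plain Euclidean distance ignores the wrap-around)
distances = torch.linalg.norm(positions[i] - positions[j], dim = -1)
meanDistance = torch.zeros(positions.shape[0], device = device).index_add_(0, i, distances)
meanDistance = meanDistance / neighborCounts.clamp(min = 1)
```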
The `radiusSearch` method has some further options:
```py
def radiusSearch(
    queryPointCloud: PointCloud,
    referencePointCloud: Optional[PointCloud],
    supportOverride: Optional[float] = None,
    mode: str = 'gather',
    domain: Optional[DomainDescription] = None,
    hashMapLength = 4096,
    algorithm: str = 'naive',
    verbose: bool = False,
    format: str = 'coo',
    returnStructure: bool = False
)
```
- `queryPointCloud` contains the set of points that are related to the other set
- `referencePointCloud` contains the reference set of points, i.e., the points for which relations are queried
- `supportOverride` overrides the cut-off radius for the radius search with a single scalar float, i.e., every point gets an identical cut-off radius; per-point cut-off radii are instead provided through the `supports` field of the `PointCloud` tuples
- `mode` determines how the cut-off radius of point-to-point interactions is computed. Options are (a) `gather`, which uses only the cut-off radius of the query points, (b) `scatter`, which uses only the cut-off radius of the reference points, and (c) `symmetric`, which uses the mean cut-off radius (see the short formalization after this list)
- `domain` is required for periodic neighborhood searches; its `min` and `max` define the coordinates at which positions wrap around, and its `periodicity` indicates whether a periodic search is performed, either as a single bool (applied to all dimensions) or a tensor of bools (one per dimension)
- `hashMapLength` determines the internal length of the hash map used in the compact data structure and should be close to $n_x$
- `verbose` prints additional logging information on the console
- `returnStructure` decides whether the `compact` algorithm should return its data structure for reuse in later searches
- `format` decides whether the adjacency is returned in COO or CSR format
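As a short formalization of the `mode` options (hedged notation: $h_i$ denotes the support of query point $i$, $h_j$ the support of reference point $j$, and $h_{ij}$ the effective pairwise cut-off; the symmetric case follows the "mean cut-off radius" description above):

$$
h_{ij} = \begin{cases} h_i & \text{(gather)} \\ h_j & \text{(scatter)} \\ \frac{1}{2}\left(h_i + h_j\right) & \text{(symmetric)} \end{cases}
$$

A pair $(i, j)$ is then reported as a neighbor pair if the distance between the two points does not exceed $h_{ij}$, where periodic distances are used when a periodic `domain` is given.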
For the `algorithm` argument, the following four options exist (a combined usage sketch follows this list):
- `naive`: This algorithm computes a dense distance matrix of size $n_x \times n_y \times d$ and performs the adjacency computations on this dense representation. This requires significant amounts of memory but is very straightforward and potentially differentiable. Complexity: $\mathcal{O}\left(n^2\right)$
- `cluster`: This is a wrapper around torch_cluster's `radius` search and only available if that package is installed. Note that this algorithm supports neither periodic neighbor searches nor non-uniform cut-off radii, and it is limited to a fixed maximum number of neighbors ($256$). Complexity: $\mathcal{O}\left(n^2\right)$
- `small`: This algorithm is similar to `cluster` in its implementation and computes the all-pairs distances on the fly, i.e., it does not require large intermediate storage; it first computes the number of neighbors per particle and then allocates memory accordingly. As a result, this approach is slower than `cluster` but more versatile. Complexity: $\mathcal{O}\left(n^2\right)$
- `compact`: The primary algorithm of this library. This approach uses compact hashing and a cell-based data structure to compute neighborhoods in $\mathcal{O}\left(n\log n\right)$. The idea is based on [A parallel SPH implementation on multi-core CPUs](https://cg.informatik.uni-freiburg.de/publications/2011_CGF_dataStructuresSPH.pdf) and the GPU approach is based on [Multi-Level Memory Structures for Simulating and Rendering SPH](https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.14090). Note that this implementation is not optimized for adaptive simulations.
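As a combined usage sketch of the options above (the concrete parameter values are illustrative assumptions, not recommendations), a periodic self-search with the `compact` algorithm that returns a CSR adjacency might look like this:
```py
# Illustrative sketch; parameter values are assumptions, not recommendations.
adjacencyCSR = radiusSearch(
    pointCloudX,                          # query point cloud (positions + per-point supports)
    referencePointCloud = pointCloudX,    # search against the same point set
    mode = 'symmetric',                   # mean cut-off radius of query and reference point
    domain = domainDescription,           # required here because the search is periodic
    hashMapLength = positions.shape[0],   # should be close to the number of query points
    algorithm = 'compact',                # the O(n log n) compact-hashing search
    format = 'csr'                        # return the adjacency as SparseCSR instead of SparseCOO
)
```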
## Performance
If you want to evaluate the performance on your system, simply run `scripts/benchmark.py`, which will generate a `Benchmark.png` for various point counts, algorithms, and dimensions.
Compute Performance on GPUs for small scale problems:
3090 | A5000
---|---
<img src="https://github.com/wi-re/torch-compact-radius/blob/main/figures/Benchmark_3090.png?raw=true">| <img src="https://github.com/wi-re/torch-compact-radius/blob/main/figures/Benchmark_A5000.png?raw=true">
CPU performance:
<img src="https://github.com/wi-re/torch-compact-radius/blob/main/figures/Benchmark_CPU.png?raw=true">
Overall GPU based performance for larger scale problems:
<img src="https://github.com/wi-re/torch-compact-radius/blob/main/figures/Overall.png?raw=true">
<!--
## Testing
If you want to check if your version of this library works correctly simply run `python scripts/test.py`. This simple test function runs a variety of configurations and the output will appear like this:
```
periodic = True, reducedSet = True, algorithm = naive device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = True, reducedSet = True, algorithm = small device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = True, reducedSet = True, algorithm = cluster device = cpu ❌❌❌❌❌❌ device = cuda ❌❌❌❌❌❌
periodic = True, reducedSet = True, algorithm = compact device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = True, reducedSet = False, algorithm = naive device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = True, reducedSet = False, algorithm = small device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = True, reducedSet = False, algorithm = cluster device = cpu ❌❌❌❌❌❌ device = cuda ❌❌❌❌❌❌
periodic = True, reducedSet = False, algorithm = compact device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = False, reducedSet = True, algorithm = naive device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = False, reducedSet = True, algorithm = small device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = False, reducedSet = True, algorithm = cluster device = cpu ✅❌❌❌❌❌ device = cuda ✅❌❌❌❌❌
periodic = False, reducedSet = True, algorithm = compact device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = False, reducedSet = False, algorithm = naive device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = False, reducedSet = False, algorithm = small device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
periodic = False, reducedSet = False, algorithm = cluster device = cpu ✅❌❌❌❌❌ device = cuda ✅❌❌❌❌❌
periodic = False, reducedSet = False, algorithm = compact device = cpu ✅✅✅✅✅✅ device = cuda ✅✅✅✅✅✅
```
The `cluster` algorithm failing is due to torch_cluster's implementation lacking support for periodic neighborhood searches as well as searches with non-uniform cut-off radii. -->
## TODO:
> Add AMD support
> Wrap periodic neighborhood searches and non-symmetric neighborhoods around torch_cluster
## Building and Installing
### Pip Version
Simply run
```bash
pip install -e . --no-build-isolation
```
### Anaconda Version
To build the conda version of the code simply run
```bash
./conda/torchCompactRadius/build_conda.sh {pyVersion} {torchVersion} {cudaVersion}
```
e.g., to build the library for Python 3.11, PyTorch 2.5.0 and CUDA 12.1, run `build_conda.sh 3.11 2.5.0 cu121`. After building it like this, you can install the locally built version via
```bash
conda install -c ~/conda-bld/ torch-compact-radius -c pytorch
```
## For development
Use ccache:
```bash
conda install ccache -c conda-forge
```
and then set
```bash
export CMAKE_C_COMPILER_LAUNCHER=ccache
export CMAKE_CXX_COMPILER_LAUNCHER=ccache
export CMAKE_CUDA_COMPILER_LAUNCHER=ccache
```
before calling `setup.py`.