| Field | Value |
| --- | --- |
| Name | pytorch-volumetric |
| Version | 0.5.2 |
| Summary | Volumetric structures such as voxels and SDFs implemented in pytorch |
| Homepage | https://github.com/UM-ARM-Lab/pytorch_volumetric |
| Author email | Sheng Zhong <zhsh@umich.edu> |
| Upload time | 2024-05-06 22:56:14 |
| Requires Python | >=3.6 |
| Keywords | robotics, sdf, voxels, pytorch |
| License | Copyright (c) 2023 University of Michigan ARM Lab Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
## Pytorch Volumetric
- signed distance field (SDF) pytorch implementation with parallelized queries for values and gradients
- voxel grids with automatic expanding range
- unidirectional chamfer distance (points to mesh)
- robot model to SDF with parallelized query over robot configurations and points
## Installation
```shell
pip install pytorch-volumetric
```
For development, clone the repository and run `pip3 install -e .` to install it in editable mode.
For testing, run `pytest` in the root directory.
## Usage
See `tests` for code samples; some are also shown below.
### SDF from mesh
```python
import pytorch_volumetric as pv
# supposing we have an object mesh (most formats supported) - from https://github.com/eleramp/pybullet-object-models
obj = pv.MeshObjectFactory("YcbPowerDrill/textured_simple_reoriented.obj")
sdf = pv.MeshSDF(obj)
```
An `open3d` mesh can be provided via the `mesh=` argument to `MeshObjectFactory`. When doing so, transform parameters
such as scale are ignored.
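`MeshSDF` objects are callable and return a (value, gradient) pair for queried points. As a sketch of that convention, here is an analytic sphere SDF implementing the same interface in pure Python; `sphere_sdf` is a hypothetical stand-in for illustration, not part of the library.

```python
import math

def sphere_sdf(points, center=(0.0, 0.0, 0.0), radius=0.5):
    """Return (values, gradients) for an iterable of 3D points.

    values[i]    - signed distance in meters (negative inside the sphere)
    gradients[i] - unit vector pointing along steepest SDF increase
    """
    values, gradients = [], []
    for p in points:
        d = [p[k] - center[k] for k in range(3)]
        norm = math.sqrt(sum(c * c for c in d))
        values.append(norm - radius)
        if norm > 1e-12:
            gradients.append([c / norm for c in d])
        else:
            gradients.append([0.0, 0.0, 1.0])  # arbitrary direction at the center
    return values, gradients

vals, grads = sphere_sdf([(1.0, 0.0, 0.0), (0.0, 0.0, 0.0)])
# the point on the x-axis is 0.5 outside the surface; the center is 0.5 inside
```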
### Cached SDF
```python
import pytorch_volumetric as pv
obj = pv.MeshObjectFactory("YcbPowerDrill/textured_simple_reoriented.obj")
sdf = pv.MeshSDF(obj)
# caching the SDF via a voxel grid to accelerate queries
cached_sdf = pv.CachedSDF('drill', resolution=0.01, range_per_dim=obj.bounding_box(padding=0.1), gt_sdf=sdf)
```
By default, query points outside the cache will be compared against the object bounding box.
To instead use the ground truth SDF, pass `out_of_bounds_strategy=pv.OutOfBoundsStrategy.LOOKUP_GT_SDF` to
the constructor.
Note that the bounding box comparison will always under-approximate the SDF value, but empirically it is sufficient
for most applications when querying out-of-bounds points. It is **dramatically faster** than using the ground truth SDF.
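The under-approximation follows from containment: the bounding box encloses the object, so its surface is never farther from an outside query point than the object surface is. A minimal analytic check, assuming a sphere inscribed in its axis-aligned bounding box (hypothetical helpers, not library code):

```python
import math

def box_distance(p, half_extents=(0.5, 0.5, 0.5)):
    # distance from p to an axis-aligned box centered at the origin (0 inside)
    outside = [max(abs(p[k]) - half_extents[k], 0.0) for k in range(3)]
    return math.sqrt(sum(c * c for c in outside))

def sphere_distance(p, radius=0.5):
    # true SDF of a sphere inscribed in that box
    return math.sqrt(sum(c * c for c in p)) - radius

p = (1.0, 1.0, 1.0)
# the box estimate is a lower bound on the true distance for outside points
assert box_distance(p) <= sphere_distance(p)
```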
### Composed SDF
Multiple SDFs can be composed together to form a single SDF that is convenient to query. This is useful when your scene
contains multiple objects that you have as separate meshes. Note: the objects should not overlap or
share faces, otherwise the SDF query will produce artifacts when determining whether a point is interior.
```python
import pytorch_volumetric as pv
import pytorch_kinematics as pk
obj = pv.MeshObjectFactory("YcbPowerDrill/textured_simple_reoriented.obj")
# 2 drills in the world
sdf1 = pv.MeshSDF(obj)
sdf2 = pv.MeshSDF(obj)
# need to specify the transform of each SDF frame
tsf1 = pk.Translate(0.1, 0, 0)
tsf2 = pk.Translate(-0.2, 0, 0.2)
sdf = pv.ComposedSDF([sdf1, sdf2], tsf1.stack(tsf2))
```
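Conceptually, a composed SDF evaluates each component SDF with the query point transformed into that component's frame and keeps the minimum (closest surface wins). A pure-Python sketch with translation-only frames; `composed_sdf` and `sphere_sdf` are hypothetical stand-ins, not the library's `ComposedSDF`:

```python
import math

def sphere_sdf(p, radius=0.5):
    # analytic SDF of a sphere at the origin of its own frame
    return math.sqrt(sum(c * c for c in p)) - radius

def composed_sdf(p, translations, component=sphere_sdf):
    # translate the query point into each component frame, keep the closest
    return min(component([p[k] - t[k] for k in range(3)]) for t in translations)

translations = [(0.1, 0.0, 0.0), (-0.2, 0.0, 0.2)]  # like tsf1 and tsf2 above
d = composed_sdf((0.1, 0.0, 0.0), translations)
# the query point sits at the first sphere's center, so d == -0.5
```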
### SDF value and gradient queries
Suppose we have an `ObjectFrameSDF` (such as one created above):
```python
import numpy as np
import pytorch_volumetric as pv
# get points in a grid in the object frame
query_range = np.array([
[-1, 0.5],
[-0.5, 0.5],
[-0.2, 0.8],
])
coords, pts = pv.get_coordinates_and_points_in_grid(0.01, query_range)
# N x 3 points
# we can also query with batched points B x N x 3, B can be any number of batch dimensions
sdf_val, sdf_grad = sdf(pts)
# sdf_val is N, or B x N, the SDF value in meters
# sdf_grad is N x 3 or B x N x 3, the normalized SDF gradient (points along steepest increase in SDF)
```
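The grid generation above takes a resolution and per-dimension ranges and returns per-axis coordinates plus the flattened N x 3 points. A hypothetical pure-Python stand-in (`grid_points` is not the library implementation) sketching that behavior:

```python
def grid_points(resolution, query_range):
    # per-dimension coordinate axes, inclusive of both endpoints
    coords = []
    for lo, hi in query_range:
        n = int(round((hi - lo) / resolution)) + 1
        coords.append([lo + i * resolution for i in range(n)])
    # Cartesian product -> flat list of (x, y, z) query points
    pts = [(x, y, z) for x in coords[0] for y in coords[1] for z in coords[2]]
    return coords, pts

coords, pts = grid_points(0.5, [(-1, 0.5), (-0.5, 0.5), (0.0, 0.5)])
# axes of length 4, 3, and 2 -> 4 * 3 * 2 = 24 points
```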
### Plotting SDF Slice
```python
import pytorch_volumetric as pv
import numpy as np
# supposing we have an object mesh (most formats supported) - from https://github.com/eleramp/pybullet-object-models
obj = pv.MeshObjectFactory("YcbPowerDrill/textured_simple_reoriented.obj")
sdf = pv.MeshSDF(obj)
# need a dimension with no range to slice; here it's y
query_range = np.array([
[-0.15, 0.2],
[0, 0],
[-0.1, 0.2],
])
pv.draw_sdf_slice(sdf, query_range)
```
![drill SDF](https://i.imgur.com/TFaGmx6.png)
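Conceptually, the slice fixes the zero-extent dimension (here y) and samples the SDF on a 2D grid. As a dependency-free sketch, this renders the sign of a hypothetical analytic sphere SDF as text instead of a matplotlib image (none of these helpers are library code):

```python
import math

def sdf(p):
    # hypothetical object: a sphere of radius 0.1 at the origin
    return math.sqrt(sum(c * c for c in p)) - 0.1

y = 0.0  # the sliced dimension has zero extent, as in query_range above
rows = []
for z in [0.2 - 0.05 * i for i in range(9)]:  # z from 0.2 down to -0.2
    row = "".join("#" if sdf((x, y, z)) < 0 else "."
                  for x in [-0.2 + 0.05 * j for j in range(9)])
    rows.append(row)
print("\n".join(rows))  # '#' marks the interior (negative SDF)
```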
### Robot Model to SDF
For many applications such as collision checking, it is useful to have the
SDF of a multi-link robot in certain configurations.
First, we create the robot model (loaded from URDF, SDF, MJCF, ...) with
[pytorch kinematics](https://github.com/UM-ARM-Lab/pytorch_kinematics).
For example, we will use the KUKA 7-DOF arm model from `pybullet_data`:
```python
import os
import torch
import pybullet_data
import pytorch_kinematics as pk
import pytorch_volumetric as pv
urdf = "kuka_iiwa/model.urdf"
search_path = pybullet_data.getDataPath()
full_urdf = os.path.join(search_path, urdf)
chain = pk.build_serial_chain_from_urdf(open(full_urdf).read(), "lbr_iiwa_link_7")
d = "cuda" if torch.cuda.is_available() else "cpu"
chain = chain.to(device=d)
# paths to the link meshes are specified with their relative path inside the URDF
# we need to give them the path prefix as we need their absolute path to load
s = pv.RobotSDF(chain, path_prefix=os.path.join(search_path, "kuka_iiwa"))
```
By default, each link will have a `MeshSDF`. To instead use `CachedSDF` for faster queries:
```python
s = pv.RobotSDF(chain, path_prefix=os.path.join(search_path, "kuka_iiwa"),
link_sdf_cls=pv.cache_link_sdf_factory(resolution=0.02, padding=1.0, device=d))
```
Visualizing the `y=0.02` SDF slice of this robot gives:
![sdf slice](https://i.imgur.com/Putw72A.png)
With surface points corresponding to:
![wireframe](https://i.imgur.com/L3atG9h.png)
![solid](https://i.imgur.com/XiAks7a.png)
Queries on this SDF depend on the joint configuration (all zeros by default).
**Queries are batched across configurations and query points**. For example, suppose we have a batch of
joint configurations to query:
```python
import math

th = torch.tensor([0.0, -math.pi / 4.0, 0.0, math.pi / 2.0, 0.0, math.pi / 4.0, 0.0], device=d)
N = 200
th_perturbation = torch.randn(N - 1, 7, device=d) * 0.1
# N x 7 joint values
th = torch.cat((th.view(1, -1), th_perturbation + th))
```
And also a batch of points to query (same points for each configuration):
```python
import numpy as np

y = 0.02
query_range = np.array([
[-1, 0.5],
[y, y],
[-0.2, 0.8],
])
# M x 3 points
coords, pts = pv.get_coordinates_and_points_in_grid(0.01, query_range, device=s.device)
```
We set the batch of joint configurations and query:
```python
s.set_joint_configuration(th)
# N x M SDF value
# N x M x 3 SDF gradient
sdf_val, sdf_grad = s(pts)
```
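The batching semantics are worth making explicit: N configurations and M query points give an N x M table of SDF values (and N x M x 3 gradients). A dependency-free sketch where each "configuration" is just a sphere center, a hypothetical stand-in for a posed robot, not the library's `RobotSDF`:

```python
import math

def sphere_sdf(p, center, radius=0.5):
    # analytic SDF of a sphere placed at `center`
    return math.sqrt(sum((p[k] - center[k]) ** 2 for k in range(3))) - radius

configs = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]               # N = 2 "configurations"
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]  # M = 3 shared points
# one row of M SDF values per configuration -> N x M result
sdf_val = [[sphere_sdf(p, c) for p in pts] for c in configs]
```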
Queries are reasonably quick. For the 7-DOF KUKA arm (8 links), using `CachedSDF` on an RTX 2080 Ti
with CUDA, we get:
```shell
N=20, M=15251, elapsed: 37.688577ms time per config and point: 0.000124ms
N=200, M=15251, elapsed: 128.645445ms time per config and point: 0.000042ms
```