# chsel

- Version: 0.2.1
- Summary: Constrained pose Hypothesis Set Elimination, official code for pose estimation
- Author: Sheng Zhong <zhsh@umich.edu>
- Homepage: https://github.com/UM-ARM-Lab/chsel
- Requires Python: >=3.8
- License: MIT License (Copyright (c) 2023 University of Michigan ARM Lab)
- Keywords: robotics, pose estimation, registration, pytorch
- Uploaded: 2024-08-14 18:40:02
## CHSEL: Producing Diverse Plausible Pose Estimates from Contact and Free Space Data

![combined demo](https://i.imgur.com/T4DnKDu.gif)

Demo of CHSEL results on a drill with sparse contact data (top) and a mug with dense vision data (bottom).

## Installation
```bash
pip install chsel
```

For development, clone the repository, then run `pip3 install -e .` from its root to install in editable mode.

## Links
- [arxiv](https://arxiv.org/abs/2305.08042)
- [website](https://johnsonzhong.me/projects/chsel/)

## Citation
If you use this work, please cite:

```bibtex
@inproceedings{zhong2023chsel,
  title={CHSEL: Producing Diverse Plausible Pose Estimates from Contact and Free Space Data},
  author={Zhong, Sheng and Fazeli, Nima and Berenson, Dmitry},
  booktitle={Robotics: Science and Systems},
  year={2023}
}
```

To reproduce the results from the paper, see the 
[experiments repository](https://github.com/UM-ARM-Lab/chsel_experiments).

## Intuition
### Cost
* enforce consistency of an object's signed distance field (SDF) against point clouds augmented with volumetric semantics
    * each point is labeled as being in free space, inside the object, or having a known SDF value
* two of these semantics classes encode uncertainty about the observed point's SDF
    * for example, free space only means we know the point has SDF > 0, with no claim about its actual value
    * in many observation scenarios, we do not know the observed point's actual SDF value
* compared to related methods such as SDF2SDF that match an SDF against another SDF, this avoids biasing the estimate with SDF values we have not actually observed
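
To make the cost concrete, here is a minimal sketch of how a per-point semantic consistency cost could be scored, given a tensor `sdf_vals` of SDF values queried at the observed points under a candidate transform. This is an illustration of the idea, not the library's actual cost function:

```python
import torch

def semantic_cost(sdf_vals: torch.Tensor, semantics: list) -> torch.Tensor:
    # illustrative per-point cost; not chsel's implementation
    cost = torch.zeros_like(sdf_vals)
    for i, sem in enumerate(semantics):
        if sem == "free":
            # free space only asserts SDF > 0: penalize violations, not magnitude
            cost[i] = (-sdf_vals[i]).clamp(min=0)
        elif sem == "occupied":
            # occupied asserts SDF < 0
            cost[i] = sdf_vals[i].clamp(min=0)
        else:
            # a float semantics value is a known SDF: penalize deviation from it
            cost[i] = (sdf_vals[i] - sem).abs()
    return cost

# a free-space point with SDF 0.05 incurs no cost; one with SDF -0.02 does
print(semantic_cost(torch.tensor([0.05, -0.02]), ["free", "free"]))
```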

### Optimization
At a high level, our method can be summarized as follows (see the toy sketch after this list):
1. start with an initial set of transforms that ideally covers all local minima
   * for every local minimum, there exists a transform in the initial set that lies in its basin of attraction in the local cost landscape
2. find the bounds of the local minima landscape
    * usually much smaller than the whole search space
    * do this by performing gradient descent independently on each initial transform under our cost
3. search in a dimensionality-reduced feature space
    * the translational component of the transform, so that we consider the best rotation for each translation
4. find good local minima with a fine-tuning search, with the search space reduced to the local minima bounds found earlier to afford higher resolution
    * instead of finding the single best local minimum, we want to evaluate all local minima, and we do this with Quality Diversity (QD) optimization on the feature space
    * we maintain an archive, a grid in feature space, where each cell holds the best solution that maps to it (so the grid is in translation space, and each cell holds a full transform)
    * we populate this archive over the course of QD optimization, which evolutionarily combines the top solutions
5. return the transforms from the best-scoring cells, as many as requested
    * usually the same number as in the input transform set
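
As a toy, runnable illustration of the descent-then-archive idea on a 1-D "translation" space (using a synthetic multi-modal cost and omitting the evolutionary QD updates, so this is not CHSEL's implementation):

```python
import torch

def cost(x):
    return torch.sin(3 * x) + 0.1 * x ** 2  # synthetic cost with several local minima

x = torch.linspace(-4, 4, 50, requires_grad=True)  # initial set covering the basins
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):  # gradient descent independently on each sample
    opt.zero_grad()
    cost(x).sum().backward()
    opt.step()

with torch.no_grad():
    lo, hi = x.min(), x.max()  # bounds of the local-minima landscape
    cells = ((x - lo) / (hi - lo) * 10).long().clamp(0, 9)  # 10-cell archive grid
    archive = {}
    for xi, ci in zip(x.tolist(), cells.tolist()):
        c = cost(torch.tensor(xi)).item()
        # each cell keeps its best (lowest-cost) solution, like the QD archive
        if ci not in archive or c < archive[ci][1]:
            archive[ci] = (xi, c)
    # return the best-scoring cells, as many as requested
    print(sorted(archive.values(), key=lambda s: s[1])[:5])
```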

## Usage
CHSEL registers an observed semantic point cloud against a target object's signed distance field (SDF).
It is agnostic to how the semantic point cloud is obtained; the points can come from cameras or tactile sensors, for example.

Example code is given in `tests/test_wrapper.py`.

First, you need an object-frame SDF, which you can generate from the object's 3D mesh:
```python
import pytorch_volumetric as pv

# supposing we have an object mesh (most formats supported) - from https://github.com/eleramp/pybullet-object-models
obj = pv.MeshObjectFactory("YcbPowerDrill/textured_simple_reoriented.obj")
sdf = pv.MeshSDF(obj)
```
or, for (much) faster queries at the cost of some accuracy and extra memory, a cached SDF:
```python
import pytorch_volumetric as pv

obj = pv.MeshObjectFactory("YcbPowerDrill/textured_simple_reoriented.obj")
sdf = pv.MeshSDF(obj)
# caching the SDF via a voxel grid to accelerate queries
cached_sdf = pv.CachedSDF('drill', resolution=0.01, range_per_dim=obj.bounding_box(padding=0.1), gt_sdf=sdf)
```
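
Either SDF can then be queried in a batch for values and gradients. The callable interface below follows pytorch_volumetric's documented usage; `query_points` is just an arbitrary set of object-frame points for illustration:

```python
import torch

query_points = torch.rand(100, 3) * 0.1  # arbitrary object-frame query points
sdf_val, sdf_grad = cached_sdf(query_points)  # per-point SDF values and gradients
```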

Next you need the semantic point cloud, where each point has a position (numpy array or torch tensor)
and semantics (`chsel.SemanticsClass.FREE`, `chsel.SemanticsClass.OCCUPIED`, or a `float` known SDF value).
This is normally observed through various means, but for this tutorial we will generate it from the SDF (see the sketch below).
Note that you don't need points from every semantics class, but the more information you provide, the better the estimate.
For example, you can use CHSEL even with only known surface points, in which case CHSEL will
behave similarly to ICP.
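
For example, here is a sketch of generating such stand-in observations from the SDF, assuming the batch query interface shown above; the thresholds, counts, and sampling region are arbitrary choices for illustration:

```python
import torch

N = 100
# sample random object-frame points and classify them by their SDF value
pts = (torch.rand(10 * N, 3) - 0.5) * 0.3
sdf_val, _ = sdf(pts)
surface_pts = pts[sdf_val.abs() < 0.005][:N]  # approximately known SDF (~0)
free_pts = pts[sdf_val > 0.01][:N]            # known to be outside the object
occupied_pts = pts[sdf_val < -0.01][:N]       # known to be inside the object
```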

The easiest way to initialize registration (assuming you have observed world-frame points):
```python
import chsel
# can specify None for any of the classes if you don't have them
registration = chsel.CHSEL(sdf, surface_pts=surface_pts, free_pts=free_pts, occupied_pts=None)
```

Alternatively, you can specify the points and semantics directly:
```python
import chsel
...
# pts_free, pts_occupied, pts_sdf are observed world-frame point tensors
sem_free = [chsel.SemanticsClass.FREE for _ in pts_free]
sem_occupied = [chsel.SemanticsClass.OCCUPIED for _ in pts_occupied]
# look up the SDF values of the first N points with known SDF
sem_sdf = sdf_val[known_sdf][:N]
positions = torch.cat((pts_free, pts_occupied, pts_sdf))
semantics = sem_free + sem_occupied + sem_sdf.tolist()
registration = chsel.CHSEL(sdf, positions=positions, semantics=semantics)
```

First, you might want to visualize the object interior and surface points to make sure the resolutions are adequate (requires open3d).
Surface points are in green, interior points are in blue. You can adjust them via the `CHSEL` constructor arguments
`surface_threshold` and `surface_threshold_model_override`; both default to the `resolution` of the registration.
```python
registration.visualize_input(show_model_points=True, show_input_points=False)
```
![model points](https://i.imgur.com/edRkUog.gif)

You can also visualize the observed input points together with the model points.
For them to show up in the same frame, you need the ground truth transform.
Orange points are known free space, red points are on the surface, and dark green points are occupied.
```python
registration.visualize_input(show_model_points=True, show_input_points=True, gt_obj_to_world_tf=gt_tf)
```
![model and input points](https://i.imgur.com/DD8zSkR.gif)

The points can also be visualized together with the object mesh:

![points](https://i.imgur.com/GpQf0w1.gif)


We now apply CHSEL for several iterations, starting from an initial set of random transform guesses:
```python
import torch
import pytorch_kinematics as pk

d = "cuda" if torch.cuda.is_available() else "cpu"  # device to run on
# we want a set of 30 transforms
B = 30

random_init_tsf = pk.Transform3d(pos=torch.randn((B, 3), device=d), rot=pk.random_rotations(B, device=d), device=d)
random_init_tsf = random_init_tsf.get_matrix()

res, all_solutions = registration.register(iterations=15, batch=B, initial_tsf=random_init_tsf)
```
Starting from a random initial guess, it takes some iterations to converge to plausible estimates,
but you will need fewer iterations (potentially just 1) when warm-starting from the previous estimates
in sequential estimation problems.
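
If you only need a single best estimate rather than the whole diverse set, you can, for example, pick the lowest-RMSE solution from the result:

```python
# convert the solution set to world-to-link matrices, then take the lowest-RMSE one
world_to_link = chsel.solution_to_world_to_link_matrix(res)
best = torch.argmin(res.rmse)
best_link_to_world = world_to_link.inverse()[best]  # 4x4 object-to-world transform
```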

We can analyze the results from each iteration and plot the transforms:

```python
import copy
import open3d as o3d

def visualize_transforms(link_to_world):
    # visualize the transformed mesh and points; tf_mesh, pc_free, pc_sdf, and
    # rotate_view are open3d geometries and a view callback set up as in tests/test_wrapper.py
    geo = [tf_mesh, pc_free, pc_sdf]
    for i in range(B):
        tfd_mesh = copy.deepcopy(obj._mesh).transform(link_to_world[i].cpu().numpy())
        # paint tfd_mesh a color corresponding to the index in the batch
        tfd_mesh.paint_uniform_color([i / (B * 2), 0, 1 - i / (B * 2)])
        geo.append(tfd_mesh)
    o3d.visualization.draw_geometries_with_animation_callback(geo, rotate_view)

for i in range(len(registration.res_history)):
    # print the sorted RMSE for each iteration
    print(torch.sort(registration.res_history[i].rmse).values)

    # res.RTs.R, res.RTs.T, res.RTs.s are the similarity transform parameters
    # get B 4x4 homogeneous transform matrices
    world_to_link = chsel.solution_to_world_to_link_matrix(registration.res_history[i])
    link_to_world = world_to_link.inverse()
    visualize_transforms(link_to_world)
```

We gradually refine implausible transforms to plausible ones

![first](https://i.imgur.com/wlnnJJL.gif)

![final](https://i.imgur.com/bxdpgpk.gif)

### Comparison to ICP
We can use [pytorch3d](https://pytorch3d.org/)'s ICP implementation as a comparison. We can see that CHSEL
converges to plausible transforms, while ICP is not able to improve its estimates via refinement
since it does not use any free space information.

```python
import numpy as np
from pytorch3d.ops.points_alignment import iterative_closest_point, SimilarityTransform

pcd = obj._mesh.sample_points_uniformly(number_of_points=500)
model_points_register = torch.tensor(np.asarray(pcd.points), device=d, dtype=torch.float)
initial_tsf = random_init_tsf
initial_tsf = SimilarityTransform(initial_tsf[:, :3, :3],
                                  initial_tsf[:, :3, 3],
                                  torch.ones(B, device=d, dtype=model_points_register.dtype))

# known_sdf_pts represent the partial observation on the object surface
known_sdf_pts = positions[2 * N:]
for i in range(10):
    # note that internally ICP also iterates on its estimate, but refining around the elite estimates may
    # filter out some outliers
    res = iterative_closest_point(known_sdf_pts.repeat(B, 1, 1), model_points_register.repeat(B, 1, 1),
                                  init_transform=initial_tsf,
                                  allow_reflection=False)
    world_to_link = chsel.solution_to_world_to_link_matrix(res, invert_rot_matrix=True)
    visualize_transforms(world_to_link.inverse())
    # refine initial_tsf using elites
    # doesn't get much better
    initial_tsf = chsel.reinitialize_transform_around_elites(world_to_link, res.rmse)
    initial_tsf = SimilarityTransform(initial_tsf[:, :3, :3],
                                      initial_tsf[:, :3, 3],
                                      torch.ones(B, device=d, dtype=model_points_register.dtype))

```

### Plausible Set Estimation
Starting from random initial transform guesses may get stuck in bad local minima (though the method is reasonably robust to this).
If you have access to the ground truth or an approximate transform, you can use it as the initial guess instead.

```python
gt_init = chsel.reinitialize_transform_estimates(B, gt_tf.inverse().get_matrix()[0])
res, all_solutions = registration.register(iterations=15, batch=B, initial_tsf=gt_init)
```

### Updating for Sequential Registration
If we have a sequential registration problem where we collect more semantic points over time, we can
warm-start the registration with the previous results (a fuller loop is sketched after this snippet):
```python
# assuming positions and semantics have been updated
registration = chsel.CHSEL(sdf, positions, semantics, qd_iterations=100, free_voxels_resolution=0.005)
# feed all solutions from the previous iteration
# assuming the object hasn't moved, use the output of the previous iteration as the initial estimate
# this res is from the last iteration of the previous registration
world_to_link = chsel.solution_to_world_to_link_matrix(res)
res, all_solutions = registration.register(iterations=5, batch=B, initial_tsf=world_to_link,
                                           low_cost_transform_set=all_solutions)
```
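
Putting this together, a sequential estimation loop might look like the following sketch, where `get_new_observations` is a hypothetical stand-in for your sensing pipeline:

```python
# hypothetical sequential estimation loop: accumulate observations and
# warm-start each solve from the previous iteration's solutions
all_solutions = None  # assumed acceptable for the first solve, matching the default
world_to_link = random_init_tsf  # the first solve starts from random guesses
for step in range(10):
    new_positions, new_semantics = get_new_observations()  # hypothetical sensor call
    positions = torch.cat((positions, new_positions))
    semantics = semantics + new_semantics
    registration = chsel.CHSEL(sdf, positions, semantics)
    res, all_solutions = registration.register(iterations=5, batch=B,
                                               initial_tsf=world_to_link,
                                               low_cost_transform_set=all_solutions)
    world_to_link = chsel.solution_to_world_to_link_matrix(res)
```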

### SE(2) Constraint
If you know the object lies on a tabletop, for example, you can constrain the optimization to be over
SE(2) instead of SE(3). You do this by specifying the plane via its axis of rotation (the plane normal)
and an offset along that normal:
```python
# table top assumed to face positive z
normal = torch.tensor([0, 0, 1])
offset = torch.tensor([0, 0, 0])
registration = chsel.CHSEL(sdf, positions, semantics, axis_of_rotation=normal, offset_along_normal=offset)
```

### Measures
QD optimization works by binning solutions over the measure space; this is how diversity is enforced.
You can specify the measure space by providing a `chsel.measure.MeasureFunction` object.
Note that this also defines the space the QD optimization is performed in (such as a 9-dimensional continuous
representation of SE(3) transforms, or a 3-dimensional space for SE(2) transforms).

For example, if you are on a tabletop, know the object's position quite well, and want diversity in yaw:

```python
import chsel

normal = torch.tensor([0, 0, 1])
offset = torch.tensor([0, 0, 0])
measure = chsel.SE2AngleMeasure(axis_of_rotation=normal, offset_along_normal=offset)
# pass the measure to the registration (keyword name assumed here; check the chsel.CHSEL signature)
registration = chsel.CHSEL(sdf, positions, semantics, measure=measure,
                           axis_of_rotation=normal, offset_along_normal=offset)
```

            
