spatialkit

Name: spatialkit
Version: 0.3.1
Summary: A spatial computing toolkit for 3D vision and robotics
Upload time: 2025-11-03 12:35:19
Requires Python: >=3.12
License: MIT
Keywords: spatial-computing, 3d-vision, geometry, robotics, pytorch, computer-vision
# spatialkit

**Current Version:** 0.3.1
**Last Updated:** November 1, 2025

## License

`spatialkit` is freely available under the MIT License. For details, please refer to the [LICENSE](LICENSE) file.

## Introduction

`spatialkit` is a personal library that supports research and development in computer vision and robotics. It provides the building blocks commonly needed to develop and test computer vision algorithms, including 3D vision, along with tools for processing and analyzing complex spatial data effectively.

### Key Features

- **Prototyping and Research Test Code**: Provides the test code frequently needed while developing and validating computer vision algorithms.
- **PyTorch Support**: Offers functions and classes that operate on PyTorch tensors for processing and analyzing 3D data.
- **Integration of Major Libraries**: Simplifies usage by wrapping core features of popular libraries such as NumPy, OpenCV, SciPy, and PyTorch.
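
The type-preserving NumPy/PyTorch design mentioned above can be illustrated with a minimal dispatch pattern. This is only a sketch of the idea, not spatialkit's actual implementation; the `scale_points` helper is hypothetical, and the torch branch is optional:

```python
import numpy as np

try:
    import torch
except ImportError:  # torch is optional in this sketch
    torch = None

def scale_points(pts, factor):
    """Scale a (3, N) point array, returning the same array type it was given."""
    if torch is not None and isinstance(pts, torch.Tensor):
        return pts * factor           # torch.Tensor in -> torch.Tensor out
    return np.asarray(pts) * factor   # numpy.ndarray in -> numpy.ndarray out

pts_np = np.ones((3, 4))
scaled = scale_points(pts_np, 2.0)
print(type(scaled).__name__)  # ndarray
```

Dispatching on the input type at the top of each function is one simple way to keep a single API surface over both array libraries.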

### Recommended For

- **Computer Vision and Robotics Beginners**: The code is simpler and easier to follow than that of larger libraries, making it well suited to understanding 3D tasks at the code level.
- **3D Vision Researchers**: PyTorch-based utilities and ready-made test code can shorten the programming effort in 3D vision research, including deep learning.

## Caution

- **Performance and Efficiency**: Some features may be slower than their OpenCV or other library equivalents, so exercise caution in research and development where optimization and speed matter.

## Getting Started (Development Mode)

### Requirements

- **Python Version**: Python >= 3.12
- **Package Manager**: uv (recommended) or pip
- **Dependencies**: Required dependencies are installed automatically

### Installation Using uv (Recommended)

1. Install uv (if needed)
   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

2. Clone the repository
   ```bash
   git clone https://github.com/cshyundev/spatialkit.git
   cd spatialkit
   ```

3. Create virtual environment and install dependencies
   ```bash
   uv venv --python 3.12
   source .venv/bin/activate  # Linux/Mac
   # .venv\Scripts\activate   # Windows
   uv pip install -e .
   ```

### Installation Using pip

1. Clone the repository
   ```bash
   git clone https://github.com/cshyundev/spatialkit.git
   ```
2. Install in development mode
   ```bash
   cd spatialkit
   pip install -e .
   ```
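
After either installation path, you can confirm the package is visible to Python using the standard-library `importlib.metadata`; the version reported is whatever your checkout declares:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(name: str):
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

print(installed_version("spatialkit"))  # e.g. '0.3.1', or None if not installed
```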

## Simple Examples

### Unified NumPy/PyTorch Interface

`spatialkit` provides a unified interface for both NumPy and PyTorch, with geometry classes that preserve input types:

```python
import spatialkit as sp
import numpy as np
import torch

# Create 3D points (3, N) - works with both NumPy and PyTorch
pts_np = np.random.rand(3, 100)
pts_torch = torch.rand(3, 100)

# Create rotation from RPY (Roll-Pitch-Yaw)
rot = sp.Rotation.from_rpy(np.array([0, np.pi/4, 0]))  # 45° pitch

# Apply rotation using multiplication operator - type is preserved
rotated_np = rot * pts_np        # NumPy in → NumPy out
rotated_torch = rot * pts_torch  # Torch in → Torch out

print(type(rotated_np))    # <class 'numpy.ndarray'>
print(type(rotated_torch)) # <class 'torch.Tensor'>

# Create transform (rotation + translation)
tf = sp.Transform(t=np.array([1., 0., 0.]), rot=rot)

# Apply transform using multiplication operator
pts_transformed = tf * pts_np

# Chain transformations
tf_combined = tf * tf.inverse()  # Returns identity transform

print(tf_combined.mat44())
```
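
For reference, the 45° pitch above corresponds to an ordinary rotation matrix, so the same operation can be reproduced with plain NumPy. This sketch assumes spatialkit's RPY convention treats pitch as a right-handed rotation about the y-axis, which is worth verifying against your own data:

```python
import numpy as np

theta = np.pi / 4  # 45 degree pitch: rotation about the y-axis
rot_mat = np.array([
    [ np.cos(theta), 0.0, np.sin(theta)],
    [ 0.0,           1.0, 0.0          ],
    [-np.sin(theta), 0.0, np.cos(theta)],
])

pts = np.random.rand(3, 100)   # (3, N) points, matching the example above
rotated = rot_mat @ pts        # equivalent to `rot * pts_np` under this convention

# Sanity check: a pure y-rotation maps (1, 0, 0) to (cos(theta), 0, -sin(theta))
```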

### Multiple Camera Models with Image Warping

`spatialkit` supports various camera models and enables image warping between different camera models:

| 360° Cubemap | Perspective | Fisheye | Double Sphere (180° FOV) |
|:------------:|:-----------:|:-------:|:------------------------:|
| ![360](assets/cubemap_360.png) | ![Perspective](assets/camera_perspective.png) | ![Fisheye](assets/camera_fisheye.png) | ![DoubleSphere](assets/camera_doublesphere.png) |

```python
import spatialkit as sp
from spatialkit.imgproc.synthesis import transition_camera_view
from spatialkit.vis2d import show_image

equirect_360 = sp.io.read_image("assets/cubemap_360.png")
img_size = [640, 480]

# 1. Equirectangular camera (source 360 image)
equirect_cam = sp.EquirectangularCamera.from_image_size([1024, 512])

# 2. Perspective (Pinhole) camera
K_perspective = [[500, 0, 320], [0, 500, 240], [0, 0, 1]]
perspective_cam = sp.PerspectiveCamera.from_K(K_perspective, img_size)

# 3. Fisheye camera (OpenCV model)
K_fisheye = [[300, 0, 320], [0, 300, 240], [0, 0, 1]]
D_fisheye = [-0.042595202508066574, 0.031307765215775184, -0.04104704724832258, 0.015343014605793324]
fisheye_cam = sp.OpenCVFisheyeCamera.from_K_D(K_fisheye, img_size, D_fisheye)

# 4. Double Sphere camera (180° FOV)
double_sphere_cam = sp.DoubleSphereCamera(
    {
        'image_size': [640, 480],
        'cam_type': 'DOUBLESPHERE',
        'principal_point': [318.86121757059797, 235.7432966284313],
        'focal_length': [122.5533262583915, 121.79271712838818],
        'xi': -0.02235598738719681,
        'alpha': 0.562863934931952,
        'fov_deg': 180.0
    }
)

# Warp image between camera models
perspective_warped = transition_camera_view(equirect_360, equirect_cam, perspective_cam)
fisheye_warped = transition_camera_view(equirect_360, equirect_cam, fisheye_cam)
double_sphere_warped = transition_camera_view(equirect_360, equirect_cam, double_sphere_cam)

# Display results
show_image(equirect_360, title="Equirectangular 360 Image")
show_image(perspective_warped, title="Perspective Camera View")
show_image(fisheye_warped, title="Fisheye Camera View")
show_image(double_sphere_warped, title="Double Sphere Camera View")
```
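
Under the hood, an equirectangular camera maps pixel coordinates to unit rays via longitude and latitude; warping then amounts to projecting each target pixel's ray back into the source image. The following is a minimal NumPy sketch of that standard mapping, not spatialkit's exact implementation, whose axis conventions may differ:

```python
import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit direction vector.

    Convention assumed here: u spans longitude [-pi, pi) left to right,
    v spans latitude [pi/2, -pi/2] from the top row to the bottom row.
    """
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

# Under this convention the image centre looks straight down the +z axis
ray = equirect_pixel_to_ray(512, 256, 1024, 512)
print(np.round(ray, 6))  # [0. 0. 1.]
```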
