.. image:: https://github.com/ivy-dl/vision/blob/master/docs/partial_source/logos/logo.png?raw=true
   :width: 100%



**3D Vision functions with end-to-end support for machine learning developers, written in Ivy.**



.. image:: https://github.com/ivy-dl/ivy-dl.github.io/blob/master/img/externally_linked/logos/supported/frameworks.png?raw=true
   :width: 100%

Contents
--------

* `Overview`_
* `Run Through`_
* `Interactive Demos`_
* `Get Involved`_

Overview
--------

.. _docs: https://ivy-dl.org/vision

**What is Ivy Vision?**

Ivy Vision focuses predominantly on 3D vision, with functions for camera geometry, image projections,
co-ordinate frame transformations, forward warping, inverse warping, optical flow, depth triangulation, voxel grids,
point clouds, signed distance functions, and more. Check out the docs_ for more info!

The library is built on top of the Ivy machine learning framework.
This means all functions simultaneously support:
JAX, TensorFlow, PyTorch, MXNet, and NumPy.
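
As a minimal sketch of what this backend-agnostic design means in practice (hedged: it assumes the chosen backends are installed, and that ``ivy.set_framework`` accepts string identifiers, as in recent 1.x releases):

.. code-block:: python

    import ivy
    import ivy_vision

    # the same ivy_vision call runs under whichever backend is currently set
    for fw_str in ['numpy', 'torch']:  # any installed backends from the list above
        ivy.set_framework(fw_str)
        coords = ivy_vision.create_uniform_pixel_coords_image([4, 4])
        print(type(coords))  # the native array type of the chosen backend
        ivy.unset_framework()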

**Ivy Libraries**

There are a host of derived libraries written in Ivy, in the areas of mechanics, 3D vision, robotics, gym environments,
neural memory, pre-trained models + implementations, and builder tools with trainers, data loaders and more. Click on
the icons below to learn more!



.. image:: https://github.com/ivy-dl/ivy-dl.github.io/blob/master/img/externally_linked/ivy_libraries.png?raw=true
   :width: 100%













**Quick Start**

Ivy Vision can be installed like so: ``pip install ivy-vision``

.. _demos: https://github.com/ivy-dl/vision/tree/master/ivy_vision_demos
.. _interactive: https://github.com/ivy-dl/vision/tree/master/ivy_vision_demos/interactive

To quickly see the different aspects of the library, we suggest you check out the demos_!
We recommend starting with the script ``run_through.py``,
and reading the "Run Through" section below, which explains this script.

For more interactive demos, we suggest you run either
``coords_to_voxel_grid.py`` or ``render_image.py`` in the interactive_ demos folder.

Run Through
-----------

We run through some of the different parts of the library via a simple ongoing example script.
The full script is available in the demos_ folder, as file ``run_through.py``.
First, we select a random backend framework to use for the examples, from the options
``ivy.jax``, ``ivy.tensorflow``, ``ivy.torch``, ``ivy.mxnet`` or ``ivy.numpy``,
and use this to set the Ivy backend framework.

.. code-block:: python

    import ivy
    # numpy and ivy_vision are used throughout the snippets below
    import numpy as np
    import ivy_vision
    from ivy_demo_utils.framework_utils import choose_random_framework
    ivy.set_framework(choose_random_framework())

**Camera Geometry**

To get to grips with some of the basics, we next show how to construct the Ivy containers which represent camera geometry.
The camera intrinsic matrix, extrinsic matrix, full matrix, and all of their inverses are central to most of the
functions in this library.

All of these matrices are contained within the Ivy camera geometry class.

.. code-block:: python

    # intrinsics

    # common intrinsic params
    img_dims = [512, 512]
    pp_offsets = ivy.array([dim / 2 - 0.5 for dim in img_dims], 'float32')
    cam_persp_angles = ivy.array([60 * np.pi / 180] * 2, 'float32')

    # ivy cam intrinsics container
    intrinsics = ivy_vision.persp_angles_and_pp_offsets_to_intrinsics_object(
        cam_persp_angles, pp_offsets, img_dims)

    # extrinsics

    # 3 x 4 inverse extrinsic matrices, loaded from the demo data directory
    # (data_dir is defined in the full run_through.py script)
    cam1_inv_ext_mat = ivy.array(np.load(data_dir + '/cam1_inv_ext_mat.npy'), 'float32')
    cam2_inv_ext_mat = ivy.array(np.load(data_dir + '/cam2_inv_ext_mat.npy'), 'float32')

    # full geometry

    # ivy cam geometry container
    cam1_geom = ivy_vision.inv_ext_mat_and_intrinsics_to_cam_geometry_object(
        cam1_inv_ext_mat, intrinsics)
    cam2_geom = ivy_vision.inv_ext_mat_and_intrinsics_to_cam_geometry_object(
        cam2_inv_ext_mat, intrinsics)
    cam_geoms = [cam1_geom, cam2_geom]
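
For intuition, the calibration matrix held inside the intrinsics container follows the standard pinhole form. Below is a plain NumPy sketch of the presumed construction (an assumption based on the usual pinhole convention, not a copy of the library internals):

.. code-block:: python

    import numpy as np

    img_dims = [512, 512]
    persp_angles = np.array([60 * np.pi / 180] * 2)              # full field-of-view angles
    pp_offsets = np.array([dim / 2 - 0.5 for dim in img_dims])   # principal point

    # pinhole relation: focal length in pixels, f = (dim / 2) / tan(angle / 2)
    focal_lengths = np.array(img_dims) / (2 * np.tan(persp_angles / 2))

    # standard 3 x 3 calibration matrix K
    calib_mat = np.array([[focal_lengths[0], 0., pp_offsets[0]],
                          [0., focal_lengths[1], pp_offsets[1]],
                          [0., 0., 1.]])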

The geometries used in this quick start demo are based upon the scene presented below.

.. image:: https://github.com/ivy-dl/vision/blob/master/docs/partial_source/images/scene.png?raw=true
   :width: 100%

The code sample below demonstrates all of the attributes contained within the Ivy camera geometry class.

.. code-block:: python

    for cam_geom in cam_geoms:

        assert cam_geom.intrinsics.focal_lengths.shape == (2,)
        assert cam_geom.intrinsics.persp_angles.shape == (2,)
        assert cam_geom.intrinsics.pp_offsets.shape == (2,)
        assert cam_geom.intrinsics.calib_mats.shape == (3, 3)
        assert cam_geom.intrinsics.inv_calib_mats.shape == (3, 3)

        assert cam_geom.extrinsics.cam_centers.shape == (3, 1)
        assert cam_geom.extrinsics.Rs.shape == (3, 3)
        assert cam_geom.extrinsics.inv_Rs.shape == (3, 3)
        assert cam_geom.extrinsics.ext_mats_homo.shape == (4, 4)
        assert cam_geom.extrinsics.inv_ext_mats_homo.shape == (4, 4)

        assert cam_geom.full_mats_homo.shape == (4, 4)
        assert cam_geom.inv_full_mats_homo.shape == (4, 4)
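
As a quick illustration of how these attributes are used (a hedged sketch following the conventions above, not taken from the demo script), the homogeneous full matrix maps a homogeneous world point straight to depth-scaled pixel co-ordinates:

.. code-block:: python

    # [x, y, z, 1] world point, as a 4 x 1 column
    world_pt = ivy.expand_dims(ivy.array([0., 0., 1., 1.], 'float32'), -1)

    # world -> depth-scaled pixel co-ordinates, via the top three rows of the full matrix
    ds_pix = ivy.matmul(cam1_geom.full_mats_homo[..., 0:3, :], world_pt)

    # divide out the depth (the last entry) to recover ordinary pixel co-ordinates [u, v]
    pix = ds_pix[0:2, 0] / ds_pix[2, 0]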

**Load Images**

We next load the color and depth images corresponding to the two camera frames.
We also construct the depth-scaled homogeneous pixel co-ordinates for each image,
which is a central representation for the ivy_vision functions.
This representation simplifies projections between frames.

.. code-block:: python

    # load images (cv2 is imported in the full demo script)

    # h x w x 3
    color1 = ivy.array(cv2.imread(data_dir + '/rgb1.png').astype(np.float32) / 255)
    color2 = ivy.array(cv2.imread(data_dir + '/rgb2.png').astype(np.float32) / 255)

    # h x w x 1
    depth1 = ivy.array(np.reshape(np.frombuffer(cv2.imread(
        data_dir + '/depth1.png', -1).tobytes(), np.float32), img_dims + [1]))
    depth2 = ivy.array(np.reshape(np.frombuffer(cv2.imread(
        data_dir + '/depth2.png', -1).tobytes(), np.float32), img_dims + [1]))

    # depth scaled pixel coords

    # h x w x 3
    u_pix_coords = ivy_vision.create_uniform_pixel_coords_image(img_dims)
    ds_pixel_coords1 = u_pix_coords * depth1
    ds_pixel_coords2 = u_pix_coords * depth2
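
To make the representation concrete, ``u_pix_coords`` holds a homogeneous triplet ``[u, v, 1]`` at every pixel, so scaling by depth yields ``[u*d, v*d, d]``. A small NumPy sketch of the presumed layout (an assumption inferred from the usage above):

.. code-block:: python

    import numpy as np

    h, w = 2, 3
    us, vs = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))

    # h x w x 3, with [u, v, 1] in the last channel
    u_pix = np.stack([us, vs, np.ones((h, w), np.float32)], -1)

    depth = np.full((h, w, 1), 2., np.float32)
    ds_pix = u_pix * depth  # [u*d, v*d, d]; the last channel is the depth itself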

The RGB and depth images are presented below.

.. image:: https://github.com/ivy-dl/vision/blob/master/docs/partial_source/images/rgb_and_depth.png?raw=true
   :width: 100%

**Optical Flow and Depth Triangulation**

Now that we have two cameras, their geometries, and their images fully defined,
we can start to apply some of the more interesting vision functions.
We start with some optical flow and depth triangulation functions.

.. code-block:: python

    # required mat formats
    cam1to2_full_mat_homo = ivy.matmul(cam2_geom.full_mats_homo, cam1_geom.inv_full_mats_homo)
    cam1to2_full_mat = cam1to2_full_mat_homo[..., 0:3, :]
    full_mats_homo = ivy.concatenate((ivy.expand_dims(cam1_geom.full_mats_homo, 0),
                                      ivy.expand_dims(cam2_geom.full_mats_homo, 0)), 0)
    full_mats = full_mats_homo[..., 0:3, :]

    # flow
    flow1to2 = ivy_vision.flow_from_depth_and_cam_mats(ds_pixel_coords1, cam1to2_full_mat)

    # depth again
    depth1_from_flow = ivy_vision.depth_from_flow_and_cam_mats(flow1to2, full_mats)
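
Conceptually (a sketch of the geometry, not of the library internals), the flow is the displacement between where each frame-1 pixel re-projects in frame 2 and where it started:

.. code-block:: python

    import numpy as np

    def flow_from_depth_sketch(ds_pixel_coords1, cam1to2_full_mat):
        # append 1 to form homogeneous co-ordinates [u*d, v*d, d, 1]
        h, w = ds_pixel_coords1.shape[0:2]
        homo = np.concatenate([ds_pixel_coords1, np.ones((h, w, 1))], -1)

        # 3 x 4 projection into frame 2, followed by the perspective divide
        proj = np.einsum('ij,hwj->hwi', cam1to2_full_mat, homo)
        pixel_coords2 = proj[..., 0:2] / proj[..., -1:]

        # original pixel co-ordinates in frame 1
        pixel_coords1 = ds_pixel_coords1[..., 0:2] / ds_pixel_coords1[..., -1:]
        return pixel_coords2 - pixel_coords1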

Visualizations of these images are given below.

.. image:: https://github.com/ivy-dl/vision/blob/master/docs/partial_source/images/flow_and_depth.png?raw=true
   :width: 100%

**Inverse and Forward Warping**

Most of the vision functions, including the flow and depth functions above,
make use of image projections,
whereby an image of depth-scaled homogeneous pixel co-ordinates is transformed into
Cartesian co-ordinates relative to the acquiring camera, the world, or another camera,
or transformed directly to pixel co-ordinates in another camera frame.
These projections also allow warping of the color values from one camera to another.
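
As a worked example of one such projection (hedged: based on the standard pinhole relations rather than the library source), the inverse calibration matrix maps depth-scaled pixel co-ordinates to Cartesian co-ordinates in the camera frame:

.. code-block:: python

    import numpy as np

    # depth-scaled homogeneous pixel co-ordinates [u*d, v*d, d] for one pixel,
    # here u = 300, v = 200, d = 2
    ds_pix = np.array([600., 400., 2.])

    # hypothetical 3 x 3 calibration matrix K for the 512 x 512, 60-degree example above
    K = np.array([[443.4, 0., 255.5],
                  [0., 443.4, 255.5],
                  [0., 0., 1.]])

    # camera-frame Cartesian point: X_cam = K^-1 [u*d, v*d, d]^T, with z = d
    cam_pt = np.linalg.inv(K) @ ds_pix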

For inverse warping, we assume depth to be known for the target frame.
We can then determine the pixel projections into the source frame,
and bilinearly interpolate these color values at the pixel projections,
to infer the color image in the target frame.

Treating frame 1 as our target frame,
we can use the previously calculated optical flow from frame 1 to 2, in order
to inverse warp the color data from frame 2 to frame 1, as shown below.


.. code-block:: python

    # inverse warp rendering
    warp = u_pix_coords[..., 0:2] + flow1to2
    color2_warp_to_f1 = ivy.bilinear_resample(color2, warp)

    # frame-1 depth-scaled pixel co-ordinates, projected into frame 2
    ds_pixel_coords1_wrt_f2 = ivy_vision.ds_pixel_to_ds_pixel_coords(ds_pixel_coords1, cam1to2_full_mat)

    # frame-1 depth w.r.t. frame 2
    depth1_wrt_f2 = ds_pixel_coords1_wrt_f2[..., -1:]

    # inverse warp depth
    depth2_warp_to_f1 = ivy.bilinear_resample(depth2, warp)

    # depth validity
    depth_validity = ivy.abs(depth1_wrt_f2 - depth2_warp_to_f1) < 0.01

    # inverse warp rendering with mask
    color2_warp_to_f1_masked = ivy.where(depth_validity, color2_warp_to_f1, ivy.zeros_like(color2_warp_to_f1))
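
For reference, ``ivy.bilinear_resample`` gathers values at the fractional pixel locations held in ``warp``. Below is a self-contained NumPy sketch of plain bilinear sampling (an illustrative stand-in assuming ``[u, v]`` ordering, not the Ivy implementation):

.. code-block:: python

    import numpy as np

    def bilinear_sample_sketch(image, warp):
        # image: h x w x c, warp: h x w x 2 fractional [u, v] locations
        h, w = image.shape[0:2]
        u = np.clip(warp[..., 0], 0, w - 1)
        v = np.clip(warp[..., 1], 0, h - 1)
        u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
        u1, v1 = np.minimum(u0 + 1, w - 1), np.minimum(v0 + 1, h - 1)
        du, dv = (u - u0)[..., None], (v - v0)[..., None]

        # weighted blend of the four surrounding pixels
        return (image[v0, u0] * (1 - du) * (1 - dv) + image[v0, u1] * du * (1 - dv) +
                image[v1, u0] * (1 - du) * dv + image[v1, u1] * du * dv)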

Again, visualizations of these images are given below.
The images represent intermediate steps for the inverse warping of color from frame 2 to frame 1,
which is shown in the bottom right corner.

.. image:: https://github.com/ivy-dl/vision/blob/master/docs/partial_source/images/inverse_warped.png?raw=true
   :width: 100%

For forward warping, we instead assume depth to be known in the source frame.
A common approach is to construct a mesh, and then perform rasterization of the mesh.

The Ivy method ``ivy_vision.render_pixel_coords`` instead takes a simpler approach,
by determining the pixel projections into the target frame,
quantizing these to integer pixel co-ordinates,
and scattering the corresponding color values directly into these integer pixel co-ordinates.

This process in general leads to holes and duplicates in the resultant image,
but when compared to inverse warping,
it has the benefit that the target frame does not need to correspond to a real camera with known depth.
Only the target camera geometry is required, which can be for any hypothetical camera.

We now consider the case of forward warping the color data from camera frame 2 to camera frame 1,
and again render the new color image in target frame 1.

.. code-block:: python

    # forward warp rendering
    ds_pixel_coords1_proj = ivy_vision.ds_pixel_to_ds_pixel_coords(
        ds_pixel_coords2, ivy.inv(cam1to2_full_mat_homo)[..., 0:3, :])
    depth1_proj = ds_pixel_coords1_proj[..., -1:]
    ds_pixel_coords1_proj = ds_pixel_coords1_proj[..., 0:2] / depth1_proj
    features_to_render = ivy.concatenate((depth1_proj, color2), -1)

    # without depth buffer
    f1_forward_warp_no_db, _, _ = ivy_vision.quantize_to_image(
        ivy.reshape(ds_pixel_coords1_proj, (-1, 2)), img_dims, ivy.reshape(features_to_render, (-1, 4)),
        ivy.zeros_like(features_to_render), with_db=False)

    # with depth buffer (the check below disables it under the mxnet backend)
    f1_forward_warp_w_db, _, _ = ivy_vision.quantize_to_image(
        ivy.reshape(ds_pixel_coords1_proj, (-1, 2)), img_dims, ivy.reshape(features_to_render, (-1, 4)),
        ivy.zeros_like(features_to_render), with_db=False if ivy.get_framework() == 'mxnet' else True)
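
Conceptually (a sketch, not the library internals), the depth-buffered variant keeps only the nearest point whenever several points scatter into the same integer pixel:

.. code-block:: python

    import numpy as np

    def scatter_with_depth_buffer_sketch(pixel_coords, features, img_dims):
        # pixel_coords: n x 2 fractional [u, v]; features: n x c, depth in channel 0
        h, w = img_dims
        rendered = np.zeros((h, w, features.shape[-1]), features.dtype)
        z_buffer = np.full((h, w), np.inf)
        uv = np.round(pixel_coords).astype(int)
        for (u, v), feat in zip(uv, features):
            if 0 <= u < w and 0 <= v < h and feat[0] < z_buffer[v, u]:
                z_buffer[v, u] = feat[0]  # keep the nearest (smallest depth) point
                rendered[v, u] = feat
        return rendered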

Again, visualizations of these images are given below.
The images show the forward warping of both depth and color from frame 2 to frame 1,
which are shown with and without depth buffers in the right-hand and central columns respectively.

.. image:: https://github.com/ivy-dl/vision/blob/master/docs/partial_source/images/forward_warped.png?raw=true
   :width: 100%

Interactive Demos
-----------------

In addition to the examples above, we provide two further demo scripts,
which are more visual and interactive, and are each built around a particular function.

Rather than presenting the code here, we show visualizations of the demos.
The scripts for these demos can be found in the interactive_ demos folder.

**Neural Rendering**

The first demo uses the method ``ivy_vision.render_implicit_features_and_depth``
to train a Neural Radiance Field (NeRF) model to encode a lego digger. The NeRF model can then be queried at new camera
poses to render new images from poses unseen during training.



.. image:: https://github.com/ivy-dl/ivy-dl.github.io/blob/master/img/externally_linked/ivy_vision/nerf_demo.png?raw=true
   :width: 100%

**Co-ordinates to Voxel Grid**

The second demo captures depth and color images from a set of cameras,
converts the depth to world-centric co-ordinates,
and uses the method ``ivy_vision.coords_to_voxel_grid`` to
voxelize the depth and color values into a grid, as shown below:



.. image:: https://github.com/ivy-dl/ivy-dl.github.io/blob/master/img/externally_linked/ivy_vision/voxel_grid_demo.png?raw=true
   :width: 100%
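
Conceptually (a hedged sketch of voxelization in plain NumPy, not the ``coords_to_voxel_grid`` internals), world co-ordinates are quantized to integer voxel indices and the per-point features are averaged per cell:

.. code-block:: python

    import numpy as np

    def voxelize_sketch(world_coords, colors, voxel_size, grid_dims):
        # world_coords: n x 3, colors: n x 3; returns the mean color per occupied voxel
        idx = np.floor(world_coords / voxel_size).astype(int)
        idx -= idx.min(0)  # shift so the indices start at zero
        grid = np.zeros(tuple(grid_dims) + (3,))
        counts = np.zeros(tuple(grid_dims))
        for (i, j, k), color in zip(idx, colors):
            if all(0 <= a < d for a, d in zip((i, j, k), grid_dims)):
                grid[i, j, k] += color
                counts[i, j, k] += 1
        occupied = counts > 0
        grid[occupied] /= counts[occupied][..., None]
        return grid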

**Point Rendering**

The final demo again captures depth and color images from a set of cameras,
but this time uses the method ``ivy_vision.quantize_to_image`` to
dynamically forward warp and point render the images into a new target frame, as shown below.
The acquiring cameras all remain static, while the target frame for point rendering moves freely.



.. image:: https://github.com/ivy-dl/ivy-dl.github.io/blob/master/img/externally_linked/ivy_vision/point_render_demo.png?raw=true
   :width: 100%

Get Involved
------------

We hope the functions in this library are useful to a wide range of machine learning developers.
However, there are many more areas of 3D vision which could be covered by this library.

If there are any particular vision functions you feel are missing,
and your needs are not met by the functions currently on offer,
then we are very happy to accept pull requests!

We look forward to working with the community on expanding and improving the Ivy Vision library.

Citation
--------

::

    @article{lenton2021ivy,
      title={Ivy: Unified Machine Learning for Inter-Framework Portability},
      author={Lenton, Daniel and Pardo, Fabio and Falck, Fabian and James, Stephen and Clark, Ronald},
      journal={arXiv preprint arXiv:2102.02886},
      year={2021}
    }


            
