machinevision-toolbox-python

- Version: 0.9.7
- Summary: Python tools for machine vision - education and research
- Author: Dorian Tsai
- Requires Python: >=3.7
- Uploaded: 2024-05-27
- Keywords: machine vision, computer vision, multiview geometry, stereo vision, bundle adjustment, visual servoing, image features, color, blobs, morphology, image segmentation, opencv, open3d
# Machine Vision Toolbox for Python

[![A Python Robotics Package](https://raw.githubusercontent.com/petercorke/robotics-toolbox-python/master/.github/svg/py_collection.min.svg)](https://github.com/petercorke/robotics-toolbox-python)
[![Powered by Spatial Maths](https://raw.githubusercontent.com/petercorke/spatialmath-python/master/.github/svg/sm_powered.min.svg)](https://github.com/petercorke/spatialmath-python)
[![QUT Centre for Robotics Open Source](https://github.com/qcr/qcr.github.io/raw/master/misc/badge.svg)](https://qcr.github.io)

[![PyPI version](https://badge.fury.io/py/machinevision-toolbox-python.svg)](https://badge.fury.io/py/machinevision-toolbox-python)
![Python Version](https://img.shields.io/pypi/pyversions/machinevision-toolbox-python.svg)
[![Powered by OpenCV](https://raw.githubusercontent.com/petercorke/machinevision-toolbox-python/master/.github/svg/opencv_powered.svg)](https://opencv.org)
[![Powered by Open3D](https://raw.githubusercontent.com/petercorke/machinevision-toolbox-python/master/.github/svg/open3d_powered.svg)](https://open3d.org)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

[![Build Status](https://github.com/petercorke/machinevision-toolbox-python/workflows/Test-master/badge.svg?branch=master)](https://github.com/petercorke/machinevision-toolbox-python/actions?query=workflow%3Abuild)
[![Coverage](https://codecov.io/gh/petercorke/machinevision-toolbox-python/branch/master/graph/badge.svg)](https://codecov.io/gh/petercorke/machinevision-toolbox-python)
[![PyPI - Downloads](https://img.shields.io/pypi/dw/machinevision-toolbox-python)](https://pypistats.org/packages/machinevision-toolbox-python)

<!-- [![GitHub stars](https://img.shields.io/github/stars/petercorke/machinevision-toolbox-python.svg?style=social&label=Star)](https://GitHub.com/petercorke/machinevision-toolbox-python/stargazers/) -->

<table style="border:0px">
<tr style="border:0px">
<td style="border:0px">
<img src="https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/VisionToolboxLogo_NoBackgnd@2x.png" width="200"></td>
<td style="border:0px">

A Python implementation of the <a href="https://github.com/petercorke/machinevision-toolbox-matlab">Machine Vision Toolbox for MATLAB<sup>&reg;</sup></a><ul>

<li><a href="https://github.com/petercorke/machinevision-toolbox-python">GitHub repository </a></li>
<li><a href="https://petercorke.github.io/machinevision-toolbox-python/">Documentation</a></li>
<li><a href="https://github.com/petercorke/machinevision-toolbox-python/wiki">Examples and details</a></li>
<li><a href="installation#">Installation</a></li>
</ul>
</td>
</tr>
</table>

## Synopsis

The Machine Vision Toolbox for Python (MVTB-P) provides many functions that are useful in machine vision and vision-based control. The main components are:

- An `Image` object with nearly 200 methods and properties that wrap functions
  from OpenCV, NumPy and SciPy. Methods support monadic, dyadic, filtering, edge detection,
  mathematical morphology and feature extraction (blobs, lines and point/corner features), as well as operator overloading. Images are stored as encapsulated [NumPy](https://numpy.org) arrays
  along with image metadata.
- An object-oriented wrapper of Open3D functions that supports a subset of operations, but allows operator overloading and is compatible with the [Spatial Math Toolbox](https://github.com/petercorke/spatialmath-python).
- A collection of camera projection classes for central (normal perspective), fisheye, catadioptric and spherical cameras.
- Some advanced algorithms such as:
  - multiview geometry: camera calibration, stereo vision, bundle adjustment
  - bag of words

Advantages of this Python Toolbox are that:

- it uses, as much as possible, [OpenCV](https://opencv.org) and [NumPy](https://numpy.org), which are portable, efficient, comprehensive and mature collections of functions for image processing and feature extraction;
- it wraps the OpenCV functions in a consistent way, hiding some of the gnarly details of OpenCV such as conversion to/from float32 and the BGR color order;
- it is similar to the Machine Vision Toolbox for MATLAB.

# Getting going

## Using pip

Install a snapshot from PyPI

```
% pip install machinevision-toolbox-python
```

## From GitHub

Install the current code base from GitHub and use pip to install an editable link to the cloned copy

```
% git clone https://github.com/petercorke/machinevision-toolbox-python.git
% cd machinevision-toolbox-python
% pip install -e .
```

# Examples

### Reading and displaying an image

```python
from machinevisiontoolbox import Image
mona = Image.Read("monalisa.png")
mona.disp()
```

![Mona Lisa image](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/mona.png)

Images can also be returned by iterators that operate over folders, zip files, local cameras, web cameras and video files.

### Simple image processing

The toolbox supports many operations on images such as 2D filtering, edge detection, mathematical morphology, colorspace conversion, padding, cropping, resizing, rotation and warping.

```python
mona.smooth(sigma=5).disp()
```

![Mona Lisa image with smoothing](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/mona_smooth.png)
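Under the hood, `smooth` applies a Gaussian kernel. The effect is easy to verify from first principles with SciPy (a sketch of the idea, not the Toolbox implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# a synthetic image: one bright pixel on a dark background
img = np.zeros((41, 41))
img[20, 20] = 1.0

# Gaussian smoothing spreads the intensity over a neighbourhood:
# the peak drops while the total intensity is preserved
smoothed = gaussian_filter(img, sigma=5)

print(smoothed.max() < img.max())             # True: peak reduced
print(np.isclose(smoothed.sum(), img.sum()))  # True: energy conserved
```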

There are also many functions that operate on pairs of images. All the arithmetic operators are overloaded, and there are methods to combine images in more complex ways. Multiple images can be stacked horizontally, vertically, or tiled in a 2D grid. For example, we could display the original and smoothed images side by side

```python
Image.Hstack([mona, mona.smooth(sigma=5)]).disp()
```

where `Hstack` is a class method that creates a new image by stacking the
images from its argument, an image sequence, horizontally.

![Mona Lisa image with smoothing](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/mona+smooth.png)
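Since images are encapsulated NumPy arrays, horizontal stacking is essentially array concatenation along the column axis, as this plain NumPy sketch shows:

```python
import numpy as np

left = np.zeros((4, 3), dtype=np.uint8)       # dark 4x3 image
right = np.full((4, 5), 255, dtype=np.uint8)  # bright 4x5 image

# images must share the same number of rows to stack horizontally
both = np.hstack([left, right])
print(both.shape)  # (4, 8)
```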

### Binary blobs

A common problem in robotic vision is to extract features from the image that describe the position, size, shape and orientation of objects in the scene. For simple binary scenes, blob features are commonly used.

```python
im = Image.Read("shark2.png")   # read a binary image of two sharks
im.disp()   # display it with the interactive viewing tool
blobs = im.blobs()  # find all the white blobs
print(blobs)

	┌───┬────────┬──────────────┬──────────┬───────┬───────┬─────────────┬────────┬────────┐
	│id │ parent │     centroid │     area │ touch │ perim │ circularity │ orient │ aspect │
	├───┼────────┼──────────────┼──────────┼───────┼───────┼─────────────┼────────┼────────┤
	│ 0 │     -1 │ 371.2, 355.2 │ 7.59e+03 │ False │ 557.6 │       0.341 │  82.9° │  0.976 │
	│ 1 │     -1 │ 171.2, 155.2 │ 7.59e+03 │ False │ 557.6 │       0.341 │  82.9° │  0.976 │
	└───┴────────┴──────────────┴──────────┴───────┴───────┴─────────────┴────────┴────────┘
```

where `blobs` is a list-like object and each element describes a blob in the scene. The element's attributes describe various parameters of the object, and methods can be used to overlay graphics such as bounding boxes and centroids

```python
import matplotlib.pyplot as plt

blobs.plot_box(color="g", linewidth=2)  # put a green bounding box on each blob
blobs.plot_centroid(label=True)  # put a circle+cross on the centroid of each blob
plt.show(block=True)  # display the result
```

![Binary image showing bounding boxes and centroids](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/shark2+boxes.png)
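The tabulated attributes such as `area` and `centroid` come from image moments. For a binary mask they can be computed from first principles in a few lines of NumPy (a sketch; the Toolbox uses OpenCV's connected-component analysis):

```python
import numpy as np

# a tiny binary scene with a single rectangular blob
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:8] = True            # 4 rows x 5 columns of white pixels

v, u = np.nonzero(mask)          # row (v) and column (u) coordinates
area = mask.sum()                # zeroth moment: pixel count
centroid = (float(u.mean()), float(v.mean()))  # first moments / area

print(area)      # 20
print(centroid)  # (5.0, 3.5)
```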

#### Binary blob hierarchy

A more complex image is

```python
im = Image.Read("multiblobs.png")
im.disp()
```

![Binary image with nested blobs](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/multi.png)

and we see that some blobs are contained within other blobs. The results in tabular form are

```python
blobs  = im.blobs()
print(blobs)
	┌───┬────────┬───────────────┬──────────┬───────┬────────┬─────────────┬────────┬────────┐
	│id │ parent │      centroid │     area │ touch │  perim │ circularity │ orient │ aspect │
	├───┼────────┼───────────────┼──────────┼───────┼────────┼─────────────┼────────┼────────┤
	│ 0 │      1 │  898.8, 725.3 │ 1.65e+05 │ False │ 2220.0 │       0.467 │  86.7° │  0.754 │
	│ 1 │      2 │ 1025.0, 813.7 │ 1.06e+05 │ False │ 1387.9 │       0.769 │ -88.9° │  0.739 │
	│ 2 │     -1 │  938.1, 855.2 │ 1.72e+04 │ False │  490.7 │       1.001 │  88.7° │  0.862 │
	│ 3 │     -1 │  988.1, 697.2 │ 1.21e+04 │ False │  412.5 │       0.994 │ -87.8° │  0.809 │
	│ 4 │     -1 │  846.0, 511.7 │ 1.75e+04 │ False │  496.9 │       0.992 │ -90.0° │  0.778 │
	│ 5 │      6 │  291.7, 377.8 │  1.7e+05 │ False │ 1712.6 │       0.810 │ -85.3° │  0.767 │
	│ 6 │     -1 │  312.7, 472.1 │ 1.75e+04 │ False │  495.5 │       0.997 │ -89.9° │  0.777 │
	│ 7 │     -1 │  241.9, 245.0 │ 1.75e+04 │ False │  496.9 │       0.992 │ -90.0° │  0.777 │
	│ 8 │      9 │ 1228.0, 254.3 │ 8.14e+04 │ False │ 1215.2 │       0.771 │ -77.2° │  0.713 │
	│ 9 │     -1 │ 1225.2, 220.0 │ 1.75e+04 │ False │  496.9 │       0.992 │ -90.0° │  0.777 │
	└───┴────────┴───────────────┴──────────┴───────┴────────┴─────────────┴────────┴────────┘
```

We can display a label image, in which the value of each pixel is the label (the `id` attribute) of the blob that
the pixel belongs to

```python
labels = blobs.label_image()
labels.disp(colormap="viridis", ncolors=len(blobs), colorbar=dict(shrink=0.8, aspect=20*0.8))
```

![False color label image](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/multi_labelled.png)
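The label image is the output of connected-component analysis. SciPy's `ndimage.label` gives the same idea in miniature (an illustration; the Toolbox computes labels via OpenCV):

```python
import numpy as np
from scipy.ndimage import label

# two separate white blobs in a binary image
im = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 1]])

labels, n = label(im)   # 4-connectivity by default
print(n)                # 2 distinct blobs
print(labels[0, 0] != labels[3, 3])  # True: each pixel holds its blob's label
```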

We can also think of the blobs as forming a hierarchy; that relationship is reflected in the `parent` and `children` attributes of the blobs.
We can also express it as a directed graph

```python
blobs.dotfile(show=True)
```

![Blob hierarchy as a graph](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/blobs_graph.png)

### Camera modelling

```python
from machinevisiontoolbox import CentralCamera
cam = CentralCamera(f=0.015, rho=10e-6, imagesize=[1280, 1024], pp=[640, 512], name="mycamera")
print(cam)
           Name: mycamera [CentralCamera]
     pixel size: 1e-05 x 1e-05
     image size: 1280 x 1024
           pose: t = 0, 0, 0; rpy/yxz = 0°, 0°, 0°
   principal pt: [     640      512]
   focal length: [   0.015    0.015]
```

and its intrinsic parameter matrix is

```python
print(cam.K)
	[[1.50e+03 0.00e+00 6.40e+02]
	 [0.00e+00 1.50e+03 5.12e+02]
	 [0.00e+00 0.00e+00 1.00e+00]]
```

We can define an arbitrary point in the world

```python
P = [0.3, 0.4, 3.0]
```

and then project it into the camera

```python
p = cam.project(P)
print(p)
	[790. 712.]
```

which is the corresponding image-plane coordinate in pixels. If we shift the camera slightly, the image-plane coordinate will also change

```python
from spatialmath import SE3

p = cam.project(P, T=SE3(0.1, 0, 0))
print(p)
	[740. 712.]
```
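These pixel coordinates follow directly from the pinhole model: with `f = 0.015` and `rho = 10e-6`, the focal length is 1500 pixels, and projection is multiplication by the intrinsic matrix followed by dehomogenization. A standalone NumPy check using the same numbers:

```python
import numpy as np

f, rho = 0.015, 10e-6          # focal length (m) and pixel size (m)
u0, v0 = 640, 512              # principal point (pixels)
K = np.array([[f / rho, 0,       u0],
              [0,       f / rho, v0],
              [0,       0,        1]])

P = np.array([0.3, 0.4, 3.0])  # world point in the camera frame
p = K @ P
print(p[:2] / p[2])            # [790. 712.]

# moving the camera +0.1 m in x moves the point -0.1 m relative to it
P2 = P - np.array([0.1, 0, 0])
p2 = K @ P2
print(p2[:2] / p2[2])          # [740. 712.]
```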

We can define an edge-based cube model and project it into the camera's image plane

```python
from spatialmath import SE3
from machinevisiontoolbox import mkcube

X, Y, Z = mkcube(0.2, pose=SE3(0, 0, 1), edge=True)
cam.plot_wireframe(X, Y, Z)
```

![Perspective camera view of cube](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/cube.png)

<!---or with a fisheye camera

```matlab
>> cam = FishEyeCamera('name', 'fisheye', ...
'projection', 'equiangular', ...
'pixel', 10e-6, ...
'resolution', [1280 1024]);
>> [X,Y,Z] = mkcube(0.2, 'centre', [0.2, 0, 0.3], 'edge');
>> cam.mesh(X, Y, Z);
```
![Fisheye lens camera view](figs/cube_fisheye.png)


### Bundle adjustment
--->

### Color space

Plot the CIE chromaticity space

```python
from machinevisiontoolbox import plot_chromaticity_diagram, plot_spectral_locus

plot_chromaticity_diagram("xy")
plot_spectral_locus("xy")
```

![CIE chromaticity space](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/colorspace.png)

Load the spectrum of sunlight at the Earth's surface and compute the CIE xy chromaticity coordinates

```python
import numpy as np
from machinevisiontoolbox import loadspectrum, lambda2xy, colorname

nm = 1e-9
lam = np.arange(400, 701, 5) * nm  # visible wavelengths, 400-700 nm in 5 nm steps
sun_at_ground = loadspectrum(lam, "solar")
xy = lambda2xy(lam, sun_at_ground)
print(xy)
	[[0.33272798 0.3454013 ]]
print(colorname(xy, "xy"))
	khaki
```
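`lambda2xy` integrates the spectrum against the CIE color-matching functions to obtain tristimulus values XYZ, then normalizes them; the final step is just the ratio below, shown here with approximate D65 white-point values (illustrative numbers, not Toolbox output):

```python
import numpy as np

# xy chromaticity is the tristimulus value normalized by its sum:
# x = X / (X + Y + Z), y = Y / (X + Y + Z)
XYZ = np.array([0.9505, 1.0000, 1.0890])  # approximately the D65 white point
x, y = XYZ[:2] / XYZ.sum()
print(float(x), float(y))  # close to (0.3127, 0.3290), the D65 chromaticity
```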

### Hough transform

```python
im = Image.Read("church.png", mono=True)
edges = im.canny()
h = edges.Hough()
lines = h.lines_p(100, minlinelength=200, maxlinegap=5, seed=0)

im.disp(darken=True)
h.plot_lines(lines, "r--")
```

![Hough transform](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/hough.png)
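The Hough transform has each edge point vote for every line `rho = u*cos(theta) + v*sin(theta)` passing through it; collinear points pile their votes into one accumulator cell. A minimal NumPy sketch of the voting scheme (the Toolbox wraps OpenCV's probabilistic variant):

```python
import numpy as np

points = [(u, 3) for u in range(10)]   # edge points on the line v = 3

thetas = np.deg2rad(np.arange(0, 180, 15))
rho_max = 15
acc = np.zeros((len(thetas), 2 * rho_max + 1), dtype=int)
for u, v in points:
    for i, th in enumerate(thetas):
        rho = int(round(u * np.cos(th) + v * np.sin(th)))
        acc[i, rho + rho_max] += 1   # one vote per (theta, rho) cell

# the strongest cell recovers the line parameters
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(round(np.rad2deg(thetas[i])), j - rho_max)  # 90 3, i.e. the line v = 3
```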

### SIFT features

We load two images and compute a set of SIFT features for each

```python
view1 = Image.Read("eiffel-1.png", mono=True)
view2 = Image.Read("eiffel-2.png", mono=True)
sf1 = view1.SIFT()
sf2 = view2.SIFT()
```

We can match features between images based purely on the similarity of the features, and display the correspondences found

```python
matches = sf1.match(sf2)
print(matches)
813 matches
matches[1:5].table()
┌──┬────────┬──────────┬─────────────────┬────────────────┐
│# │ inlier │ strength │              p1 │             p2 │
├──┼────────┼──────────┼─────────────────┼────────────────┤
│0 │        │     26.4 │ (1118.6, 178.8) │ (952.5, 418.0) │
│1 │        │     28.2 │ (820.6, 519.1)  │ (708.1, 701.6) │
│2 │        │     29.6 │ (801.1, 632.4)  │ (694.1, 800.3) │
│3 │        │     32.4 │ (746.0, 153.1)  │ (644.5, 392.2) │
└──┴────────┴──────────┴─────────────────┴────────────────┘
```

where we have displayed the feature coordinates for four correspondences.
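Descriptor matching is nearest-neighbour search in descriptor space. A toy NumPy version with known ground truth (the Toolbox uses OpenCV's matchers):

```python
import numpy as np

rng = np.random.default_rng(0)
desc1 = rng.normal(size=(5, 8))                  # 5 descriptors from image 1
perm = [3, 0, 4, 1, 2]                           # known correspondence
desc2 = desc1[perm] + 0.01 * rng.normal(size=(5, 8))  # shuffled, noisy copies

# match each image-1 descriptor to its nearest neighbour in image 2
d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
nearest = d.argmin(axis=1)
print(nearest)  # [1 3 4 0 2] -- the inverse of perm, as expected
```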

We can also display the correspondences graphically

```python
matches.subset(100).plot("w")
```

in this case a subset of 100 of the 813 correspondences.

![Feature matching](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/matching.png)

Clearly there are some bad matches here, but we can use RANSAC to estimate the fundamental matrix, and then use the epipolar constraint it implies to classify correspondences as inliers or outliers

```python
F, resid = matches.estimate(CentralCamera.points2F, method="ransac", confidence=0.99, seed=0)
print(F)
array([[1.033e-08, -3.799e-06, 0.002678],
       [3.668e-06, 1.217e-07, -0.004033],
       [-0.00319, 0.003436,        1]])
print(resid)
0.0405

import matplotlib.pyplot as plt

Image.Hstack((view1, view2)).disp()
matches.inliers.subset(100).plot("g", ax=plt.gca())
matches.outliers.subset(100).plot("r", ax=plt.gca())
```

where green lines show correct correspondences (inliers) and red lines show bad correspondences (outliers)

![Feature matching after RANSAC](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/matching_ransac.png)
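The epipolar constraint being enforced here is `p2' F p1 = 0` for corresponding homogeneous image points. It can be checked from first principles for a simple stereo geometry (identity intrinsics, pure x-translation; a sketch, not the Toolbox estimator):

```python
import numpy as np

def skew(t):
    """3x3 matrix such that skew(t) @ x == np.cross(t, x)"""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

t = np.array([1.0, 0.0, 0.0])    # baseline between the two cameras
F = skew(t)                      # fundamental matrix for identity intrinsics

P = np.array([0.5, 0.2, 4.0])                 # a world point seen by both
p1 = np.append(P[:2] / P[2], 1)               # image 1: camera at the origin
p2 = np.append((P[:2] - t[:2]) / P[2], 1)     # image 2: camera shifted by t

print(np.isclose(p2 @ F @ p1, 0))  # True: the epipolar constraint holds
```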

# History

This package can be considered as a Python version of the [Machine Vision
Toolbox for MATLAB](https://github.com/petercorke/machinevision-toolbox-matlab). That Toolbox, now quite old, is a collection of MATLAB
functions and classes that supported the first two editions of the Robotics,
Vision & Control book. It is a somewhat eclectic collection reflecting my
personal interests in photometry, photogrammetry and colorimetry. It
includes over 100 functions spanning operations such as image file reading and
writing, acquisition, display, filtering, blob, point and line feature
extraction, mathematical morphology, homographies, visual Jacobians, camera
calibration and color space conversion.

This Python version differs in using an object to encapsulate the pixel data and
image metadata, rather than a bare array holding only pixel data. The many
functions become methods of the image object, which reduces namespace pollution
and allows sequential operations to be expressed concisely using "dot chaining".

The first version was created by Dorian Tsai during 2020, and was based on the
MATLAB version. That work was funded by an Australian University Teacher of
the Year award (2017) to Peter Corke.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "machinevision-toolbox-python",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": null,
    "keywords": "machine vision, computer vision, multiview geometry, stereo vision, bundle adjustment, visual servoing, image features, color, blobs, morphology, image segmentation, opencv, open3d",
    "author": "Dorian Tsai",
    "author_email": "Peter Corke <rvc@petercorke.com>",
    "download_url": "https://files.pythonhosted.org/packages/a3/98/e2b4aae1dadc2c556d31dce06bd94babaf3fd9cd9bbec50e00bfbac7d2d8/machinevision_toolbox_python-0.9.7.tar.gz",
    "platform": null,
    "description": "# Machine Vision Toolbox for Python\n\n[![A Python Robotics Package](https://raw.githubusercontent.com/petercorke/robotics-toolbox-python/master/.github/svg/py_collection.min.svg)](https://github.com/petercorke/robotics-toolbox-python)\n[![Powered by Spatial Maths](https://raw.githubusercontent.com/petercorke/spatialmath-python/master/.github/svg/sm_powered.min.svg)](https://github.com/petercorke/spatialmath-python)\n[![QUT Centre for Robotics Open Source](https://github.com/qcr/qcr.github.io/raw/master/misc/badge.svg)](https://qcr.github.io)\n\n[![PyPI version](https://badge.fury.io/py/machinevision-toolbox-python.svg)](https://badge.fury.io/py/machinevision-toolbox-python)\n![Python Version](https://img.shields.io/pypi/pyversions/machinevision-toolbox-python.svg)\n[![Powered by OpenCV](https://raw.githubusercontent.com/petercorke/machinevision-toolbox-python/master/.github/svg/opencv_powered.svg)](https://opencv.org)\n[![Powered by Open3D](https://raw.githubusercontent.com/petercorke/machinevision-toolbox-python/master/.github/svg/open3d_powered.svg)](https://open3d.org)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\n[![Build Status](https://github.com/petercorke/machinevision-toolbox-python/workflows/Test-master/badge.svg?branch=master)](https://github.com/petercorke/machinevision-toolbox-python/actions?query=workflow%3Abuild)\n[![Coverage](https://codecov.io/gh/petercorke/machinevision-toolbox-python/branch/master/graph/badge.svg)](https://codecov.io/gh/petercorke/machinevision-toolbox-python)\n[![PyPI - Downloads](https://img.shields.io/pypi/dw/machinevision-toolbox-python)](https://pypistats.org/packages/machinevision-toolbox-python)\n\n<!-- [![GitHub stars](https://img.shields.io/github/stars/petercorke/machinevision-toolbox-python.svg?style=social&label=Star)](https://GitHub.com/petercorke/machinevision-toolbox-python/stargazers/) -->\n\n<table style=\"border:0px\">\n<tr 
style=\"border:0px\">\n<td style=\"border:0px\">\n<img src=\"https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/VisionToolboxLogo_NoBackgnd@2x.png\" width=\"200\"></td>\n<td style=\"border:0px\">\n\nA Python implementation of the <a href=\"https://github.com/petercorke/machinevision-toolbox-matlab\">Machine Vision Toolbox for MATLAB<sup>&reg;</sup></a><ul>\n\n<li><a href=\"https://github.com/petercorke/machinevision-toolbox-python\">GitHub repository </a></li>\n<li><a href=\"https://petercorke.github.io/machinevision-toolbox-python/\">Documentation</a></li>\n<li><a href=\"https://github.com/petercorke/machinevision-toolbox-python/wiki\">Examples and details</a></li>\n<li><a href=\"installation#\">Installation</a></li>\n</ul>\n</td>\n</tr>\n</table>\n\n## Synopsis\n\nThe Machine Vision Toolbox for Python (MVTB-P) provides many functions that are useful in machine vision and vision-based control. The main components are:\n\n- An `Image` object with nearly 200 methods and properties that wrap functions\n  from OpenCV, NumPy and SciPy. Methods support monadic, dyadic, filtering, edge detection,\n  mathematical morphology and feature extraction (blobs, lines and point/corner features), as well as operator overloading. 
Images are stored as encapsulated [NumPy](https://numpy.org) arrays\n  along with image metadata.\n- An object-oriented wrapper of Open3D functions that supports a subset of operations, but allows operator overloading and is compatible with the [Spatial Math Toolbox](https://github.com/petercorke/spatialmath-python).\n- A collection of camera projection classes for central (normal perspective), fisheye, catadioptric and spherical cameras.\n- Some advanced algorithms such as:\n  - multiview geometry: camera calibration, stereo vision, bundle adjustment\n  - bag of words\n\nAdvantages of this Python Toolbox are that:\n\n- it uses, as much as possible, [OpenCV](https://opencv.org) and [NumPy](https://numpy.org) which are portable, efficient, comprehensive and mature collection of functions for image processing and feature extraction;\n- it wraps the OpenCV functions in a consistent way, hiding some of the gnarly details of OpenCV like conversion to/from float32 and the BGR color order.\n- it is has similarity to the Machine Vision Toolbox for MATLAB.\n\n# Getting going\n\n## Using pip\n\nInstall a snapshot from PyPI\n\n```\n% pip install machinevision-toolbox-python\n```\n\n## From GitHub\n\nInstall the current code base from GitHub and pip install a link to that cloned copy\n\n```\n% git clone https://github.com/petercorke/machinevision-toolbox-python.git\n% cd machinevision-toolbox-python\n% pip install -e .\n```\n\n# Examples\n\n### Reading and display an image\n\n```python\nfrom machinevisiontoolbox import Image\nmona = Image.Read(\"monalisa.png\")\nmona.disp()\n```\n\n![Mona Lisa image](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/mona.png)\n\nImages can also be returned by iterators that operate over folders, zip files, local cameras, web cameras and video files.\n\n### Simple image processing\n\nThe toolbox supports many operations on images such as 2D filtering, edge detection, mathematical morphology, colorspace conversion, 
padding, cropping, resizing, rotation and warping.\n\n```python\nmona.smooth(sigma=5).disp()\n```\n\n![Mona Lisa image with smoothing](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/mona_smooth.png)\n\nThere are also many functions that operate on pairs of image. All the arithmetic operators are overloaded, and there are methods to combine images in more complex ways. Multiple images can be stacked horizontal, vertically or tiled in a 2D grid. For example, we could display the original and smoothed images side by side\n\n```python\nImage.Hstack([mona, mona.smooth(sigma=5)]).disp()\n```\n\nwhere `Hstack` is a class method that creates a new image by stacking the\nimages from its argument, an image sequence, horizontally.\n\n![Mona Lisa image with smoothing](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/mona+smooth.png)\n\n### Binary blobs\n\nA common problem in robotic vision is to extract features from the image, to describe the position, size, shape and orientation of objects in the scene. 
For simple binary scenes blob features are commonly used.\n\n```python\nim = Image.Read(\"shark2.png\")   # read a binary image of two sharks\nim.disp();   # display it with interactive viewing tool\nblobs = im.blobs()  # find all the white blobs\nprint(blobs)\n\n\t\u250c\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\t\u2502id \u2502 parent \u2502     centroid \u2502     area \u2502 touch \u2502 perim \u2502 circularity \u2502 orient \u2502 aspect \u2502\n\t\u251c\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\t\u2502 0 \u2502     -1 \u2502 371.2, 355.2 \u2502 7.59e+03 \u2502 False \u2502 557.6 \u2502       0.341 \u2502  82.9\u00b0 \u2502  0.976 \u2502\n\t\u2502 1 \u2502     -1 \u2502 171.2, 155.2 \u2502 7.59e+03 \u2502 False \u2502 557.6 \u2502       0.341 \u2502  82.9\u00b0 \u2502  0.976 
\u2502\n\t\u2514\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\nwhere `blobs` is a list-like object and each element describes a blob in the scene. The element's attributes describe various parameters of the object, and methods can be used to overlay graphics such as bounding boxes and centroids\n\n```python\nblobs.plot_box(color=\"g\", linewidth=2)  # put a green bounding box on each blob\nblobs.plot_centroid(label=True)  # put a circle+cross on the centroid of each blob\nplt.show(block=True)  # display the result\n```\n\n![Binary image showing bounding boxes and centroids](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/shark2+boxes.png)\n\n#### Binary blob hierarchy\n\nA more complex image is\n\n```python\nim = Image.Read(\"multiblobs.png\")\nim.disp()\n```\n\n![Binary image with nested blobs](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/multi.png)\n\nand we see that some blobs are contained within other blobs. 
The results in tabular form\n\n```python\nblobs  = im.blobs()\nprint(blobs)\n\t\u250c\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\t\u2502id \u2502 parent \u2502      centroid \u2502     area \u2502 touch \u2502  perim \u2502 circularity \u2502 orient \u2502 aspect \u2502\n\t\u251c\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\t\u2502 0 \u2502      1 \u2502  898.8, 725.3 \u2502 1.65e+05 \u2502 False \u2502 2220.0 \u2502       0.467 \u2502  86.7\u00b0 \u2502  0.754 \u2502\n\t\u2502 1 \u2502      2 \u2502 1025.0, 813.7 \u2502 1.06e+05 \u2502 False \u2502 1387.9 \u2502       0.769 \u2502 -88.9\u00b0 \u2502  0.739 \u2502\n\t\u2502 2 \u2502     -1 \u2502  938.1, 855.2 \u2502 1.72e+04 \u2502 False \u2502  490.7 \u2502       1.001 \u2502  88.7\u00b0 \u2502  0.862 \u2502\n\t\u2502 3 \u2502     -1 \u2502  988.1, 697.2 \u2502 1.21e+04 \u2502 False \u2502  412.5 \u2502       0.994 \u2502 -87.8\u00b0 \u2502  0.809 \u2502\n\t\u2502 4 \u2502     -1 \u2502  846.0, 511.7 \u2502 1.75e+04 \u2502 False \u2502  496.9 
\u2502       0.992 \u2502 -90.0\u00b0 \u2502  0.778 \u2502\n\t\u2502 5 \u2502      6 \u2502  291.7, 377.8 \u2502  1.7e+05 \u2502 False \u2502 1712.6 \u2502       0.810 \u2502 -85.3\u00b0 \u2502  0.767 \u2502\n\t\u2502 6 \u2502     -1 \u2502  312.7, 472.1 \u2502 1.75e+04 \u2502 False \u2502  495.5 \u2502       0.997 \u2502 -89.9\u00b0 \u2502  0.777 \u2502\n\t\u2502 7 \u2502     -1 \u2502  241.9, 245.0 \u2502 1.75e+04 \u2502 False \u2502  496.9 \u2502       0.992 \u2502 -90.0\u00b0 \u2502  0.777 \u2502\n\t\u2502 8 \u2502      9 \u2502 1228.0, 254.3 \u2502 8.14e+04 \u2502 False \u2502 1215.2 \u2502       0.771 \u2502 -77.2\u00b0 \u2502  0.713 \u2502\n\t\u2502 9 \u2502     -1 \u2502 1225.2, 220.0 \u2502 1.75e+04 \u2502 False \u2502  496.9 \u2502       0.992 \u2502 -90.0\u00b0 \u2502  0.777 \u2502\n\t\u2514\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\nWe can display a label image, where the value of each pixel is the label of the blob that the pixel\nbelongs to, the `id` attribute\n\n```python\nlabels = blobs.label_image()\nlabels.disp(colormap=\"viridis\", ncolors=len(blobs), colorbar=dict(shrink=0.8, aspect=20*0.8))\n```\n\n![False color label image](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/multi_labelled.png)\n\nWe can also think of the blobs forming a hiearchy and that relationship is reflected in the `parent` and `children` attributes of the blobs.\nWe can also express it as a directed graph\n\n```python\nblobs.dotfile(show=True)\n```\n\n![Blob 
hierarchy as a graph](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/blobs_graph.png)\n\n### Camera modelling\n\n```python\nfrom machinevisiontoolbox import CentralCamera\ncam = CentralCamera(f=0.015, rho=10e-6, imagesize=[1280, 1024], pp=[640, 512], name=\"mycamera\")\nprint(cam)\n           Name: mycamera [CentralCamera]\n     pixel size: 1e-05 x 1e-05\n     image size: 1280 x 1024\n           pose: t = 0, 0, 0; rpy/yxz = 0\u00b0, 0\u00b0, 0\u00b0\n   principal pt: [     640      512]\n   focal length: [   0.015    0.015]\n```\n\nand its intrinsic parameters are\n\n```python\nprint(cam.K)\n\t[[1.50e+03 0.00e+00 6.40e+02]\n\t [0.00e+00 1.50e+03 5.12e+02]\n\t [0.00e+00 0.00e+00 1.00e+00]]\n```\n\nWe can define an arbitrary point in the world\n\n```python\nP = [0.3, 0.4, 3.0]\n```\n\nand then project it into the camera\n\n```python\np = cam.project(P)\nprint(p)\n\t[790. 712.]\n```\n\nwhich is the corresponding coordinate in pixels. If we shift the camera slightly the image plane coordinate will also change\n\n```python\np = cam.project(P, T=SE3(0.1, 0, 0) )\nprint(p)\n[740. 
712.]\n```\n\nWe can define an edge-based cube model and project it into the camera's image plane\n\n```python\nfrom spatialmath import SE3\nX, Y, Z = mkcube(0.2, pose=SE3(0, 0, 1), edge=True)\ncam.plot_wireframe(X, Y, Z)\n```\n\n![Perspective camera view of cube](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/cube.png)\n\n<!---or with a fisheye camera\n\n```matlab\n>> cam = FishEyeCamera('name', 'fisheye', ...\n'projection', 'equiangular', ...\n'pixel', 10e-6, ...\n'resolution', [1280 1024]);\n>> [X,Y,Z] = mkcube(0.2, 'centre', [0.2, 0, 0.3], 'edge');\n>> cam.mesh(X, Y, Z);\n```\n![Fisheye lens camera view](figs/cube_fisheye.png)\n\n\n### Bundle adjustment\n--->\n\n### Color space\n\nPlot the CIE chromaticity space\n\n```python\nplot_chromaticity_diagram(\"xy\");\nplot_spectral_locus(\"xy\")\n```\n\n![CIE chromaticity space](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/colorspace.png)\n\nLoad the spectrum of sunlight at the Earth's surface and compute the CIE xy chromaticity coordinates\n\n```python\nnm = 1e-9\nlam = np.linspace(400, 701, 5) * nm # visible light\nsun_at_ground = loadspectrum(lam, \"solar\")\nxy = lambda2xy(lambda, sun_at_ground)\nprint(xy)\n\t[[0.33272798 0.3454013 ]]\nprint(colorname(xy, \"xy\"))\n\tkhaki\n```\n\n### Hough transform\n\n```python\nim = Image.Read(\"church.png\", mono=True)\nedges = im.canny()\nh = edges.Hough()\nlines = h.lines_p(100, minlinelength=200, maxlinegap=5, seed=0)\n\nim.disp(darken=True)\nh.plot_lines(lines, \"r--\")\n```\n\n![Hough transform](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/hough.png)\n\n### SURF features\n\nWe load two images and compute a set of SURF features for each\n\n```python\nview1 = Image.Read(\"eiffel-1.png\", mono=True)\nview2 = Image.Read(\"eiffel-2.png\", mono=True)\nsf1 = view1.SIFT()\nsf2 = view2.SIFT()\n```\n\nWe can match features between images based purely on the similarity of the features, and 
We can match features between images based purely on the similarity of the features, and display the correspondences found

```python
matches = sf1.match(sf2)
print(matches)
813 matches
matches[1:5].table()
┌──┬────────┬──────────┬─────────────────┬────────────────┐
│# │ inlier │ strength │              p1 │             p2 │
├──┼────────┼──────────┼─────────────────┼────────────────┤
│0 │        │     26.4 │ (1118.6, 178.8) │ (952.5, 418.0) │
│1 │        │     28.2 │ (820.6, 519.1)  │ (708.1, 701.6) │
│2 │        │     29.6 │ (801.1, 632.4)  │ (694.1, 800.3) │
│3 │        │     32.4 │ (746.0, 153.1)  │ (644.5, 392.2) │
└──┴────────┴──────────┴─────────────────┴────────────────┘
```

where we have displayed the feature coordinates for four correspondences.

We can also display the correspondences graphically

```python
matches.subset(100).plot("w")
```

in this case, a subset of 100/813 of the correspondences.

![Feature matching](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/matching.png)
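Bad correspondences can be detected because a correct match must satisfy the epipolar constraint: for homogeneous pixel coordinates `p1` and `p2` of the same world point in the two views, `p2.T @ F @ p1 = 0`, where `F` is the fundamental matrix. A self-contained numerical check with a synthetic camera pair (reusing the intrinsics from the camera-modelling example above, with the second camera translated 0.1 m along x; an illustration, not the toolbox's estimator):

```python
import numpy as np

def skew(t):
    # cross-product matrix: skew(t) @ v == np.cross(t, v)
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

K = np.array([[1500.0, 0, 640], [0, 1500.0, 512], [0, 0, 1]])
t = np.array([0.1, 0.0, 0.0])        # baseline: camera 2 shifted along x, no rotation

# fundamental matrix for pure translation and identical intrinsics: F = K^-T [t]x K^-1
Kinv = np.linalg.inv(K)
F = Kinv.T @ skew(t) @ Kinv

P = np.array([0.3, 0.4, 3.0])        # any world point
p1 = K @ P                           # homogeneous pixel coordinates in camera 1
p2 = K @ (P - t)                     # ... and in camera 2
print(p2 @ F @ p1)                   # ~0: the epipolar constraint holds
```

A mismatched pair of points will generally give a residual far from zero, which is exactly what RANSAC exploits in the next step to separate inliers from outliers.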
Clearly there are some bad matches here, but we can use RANSAC and the epipolar constraint to estimate the fundamental matrix and classify the correspondences as inliers or outliers

```python
F, resid = matches.estimate(CentralCamera.points2F, method="ransac", confidence=0.99, seed=0)
print(F)
array([[1.033e-08, -3.799e-06, 0.002678],
       [3.668e-06, 1.217e-07, -0.004033],
       [-0.00319, 0.003436,        1]])
print(resid)
0.0405

Image.Hstack((view1, view2)).disp()
matches.inliers.subset(100).plot("g", ax=plt.gca())
matches.outliers.subset(100).plot("r", ax=plt.gca())
```

where green lines show correct correspondences (inliers) and red lines show bad correspondences (outliers)

![Feature matching after RANSAC](https://github.com/petercorke/machinevision-toolbox-python/raw/master/figs/matching_ransac.png)

# History

This package can be considered a Python version of the Machine Vision Toolbox for MATLAB. That Toolbox, now quite old, is a collection of MATLAB functions and classes that supported the first two editions of the Robotics, Vision & Control book. It is a somewhat eclectic collection reflecting my personal interests in photometry, photogrammetry, and colorimetry. It includes over 100 functions spanning operations such as image file reading and writing, acquisition, display, filtering, blob, point and line feature extraction, mathematical morphology, homographies, visual Jacobians, camera calibration, and color space conversion.

This Python version differs in using an object to encapsulate the pixel data and image metadata, rather than just a native object holding pixel data. 
The many functions become methods of the image object, which reduces namespace pollution and allows sequential operations to be expressed easily using "dot chaining".

The first version was created by Dorian Tsai during 2020, and was based on the MATLAB version. That work was funded by an Australian University Teacher of the Year award (2017) to Peter Corke.
    "bugtrack_url": null,
    "license": null,
    "summary": "Python tools for machine vision - education and research",
    "version": "0.9.7",
    "project_urls": {
        "Bug Tracker": "https://github.com/pypa/sampleproject/issues",
        "Documentation": "https://petercorke.github.io/machinevision-toolbox-python",
        "Homepage": "https://github.com/petercorke/machinevision-toolbox-python",
        "Source": "https://github.com/petercorke/machinevision-toolbox-python"
    },
    "split_keywords": [
        "machine vision",
        " computer vision",
        " multiview geometry",
        " stereo vision",
        " bundle adjustment",
        " visual servoing",
        " image features",
        " color",
        " blobs",
        " morphology",
        " image segmentation",
        " opencv",
        " open3d"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "be7f1b356783ebc7046bd12fffe77ef39009944fa383de7d81d2dccc00ba4cd3",
                "md5": "57414edc44960de8989dc7fee541db00",
                "sha256": "cf8cbf8b492d3b0ad54b6a3acefa181f969f26b2f72ec841ea67e30a4b90650f"
            },
            "downloads": -1,
            "filename": "machinevision_toolbox_python-0.9.7-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "57414edc44960de8989dc7fee541db00",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 238085,
            "upload_time": "2024-05-27T01:57:39",
            "upload_time_iso_8601": "2024-05-27T01:57:39.629750Z",
            "url": "https://files.pythonhosted.org/packages/be/7f/1b356783ebc7046bd12fffe77ef39009944fa383de7d81d2dccc00ba4cd3/machinevision_toolbox_python-0.9.7-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a398e2b4aae1dadc2c556d31dce06bd94babaf3fd9cd9bbec50e00bfbac7d2d8",
                "md5": "1a759d8bff0fe55f00dd46fca8e7236d",
                "sha256": "ad428aba5716d985b7aaeb72e611ef34acb70169e83984f53e9fefe292eed9e9"
            },
            "downloads": -1,
            "filename": "machinevision_toolbox_python-0.9.7.tar.gz",
            "has_sig": false,
            "md5_digest": "1a759d8bff0fe55f00dd46fca8e7236d",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 228996,
            "upload_time": "2024-05-27T01:57:42",
            "upload_time_iso_8601": "2024-05-27T01:57:42.135629Z",
            "url": "https://files.pythonhosted.org/packages/a3/98/e2b4aae1dadc2c556d31dce06bd94babaf3fd9cd9bbec50e00bfbac7d2d8/machinevision_toolbox_python-0.9.7.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-05-27 01:57:42",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "pypa",
    "github_project": "sampleproject",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "tox": true,
    "lcname": "machinevision-toolbox-python"
}
        