| Field | Value |
| --- | --- |
| Name | visiongraph |
| Version | 1.1.0.1 |
| Summary | Visiongraph is a high level computer vision framework. |
| Upload time | 2025-07-31 12:46:35 |
| Requires Python | >=3.10, <3.13 |
<img src="https://github.com/user-attachments/assets/0ed34695-ca0e-47ff-aebb-eb59ff851770" alt="Visiongraph Logo Bright" width="75%">
# Visiongraph
[PyPI](https://pypi.org/project/visiongraph/)

[Documentation](https://cansik.github.io/visiongraph/visiongraph.html#documentation)
Visiongraph is a high-level computer vision framework that provides predefined modules to quickly create and run algorithms on images. It is based on OpenCV and integrates other computer vision frameworks such as [Intel OpenVINO](https://github.com/openvinotoolkit/openvino) and [Google MediaPipe](https://github.com/google-ai-edge/mediapipe).
Here is an example of how to start a webcam capture and display the image:
```python
from visiongraph import vg
vg.create_graph(vg.VideoCaptureInput()).then(vg.ImagePreview()).open()
```
Get started with `visiongraph` by reading the **[documentation](https://cansik.github.io/visiongraph/visiongraph.html#documentation)**.
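The `create_graph(...).then(...).open()` call above chains processing stages into a pipeline. The fluent pattern behind it can be sketched in plain Python (illustrative only; `Graph`, `Node`, and `Invert` here are toy classes, not visiongraph's actual API):

```python
class Node:
    """A pipeline stage: receives a frame, returns a (possibly modified) frame."""

    def process(self, frame):
        return frame


class Invert(Node):
    """Toy stage that inverts 8-bit pixel values."""

    def process(self, frame):
        return [255 - v for v in frame]


class Graph:
    """Chains stages; `then` returns self so calls can be chained fluently."""

    def __init__(self, *nodes):
        self.nodes = list(nodes)

    def then(self, *nodes):
        self.nodes.extend(nodes)
        return self

    def run(self, frame):
        # pass the frame through every stage in order
        for node in self.nodes:
            frame = node.process(frame)
        return frame


result = Graph(Node()).then(Invert()).run([0, 128, 255])
```

Each stage only needs to implement `process`, which is what makes short one-liner pipelines like the webcam example possible.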
## Installation
Visiongraph supports Python 3.10 and 3.11. Other versions may also work but are not officially supported; usually the blocker is a third-party dependency. For example, [pyrealsense2](https://pypi.org/project/pyrealsense2/#files) does not provide wheel packages for Python `3.12`.
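The package metadata encodes this as the `requires_python` constraint `>=3.10, <3.13`. A small guard can check the running interpreter against that range (a sketch; visiongraph itself may not perform such a check):

```python
import sys


def python_supported(version_info=None) -> bool:
    """True if the interpreter satisfies visiongraph's requires_python
    constraint (>=3.10, <3.13)."""
    major, minor = (version_info or sys.version_info)[:2]
    return (3, 10) <= (major, minor) < (3, 13)


if not python_supported():
    print("Warning: this Python version is outside the supported range.")
```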
To install visiongraph with all dependencies, call [pip](https://pypi.org/project/pip/) like this:
```bash
pip install "visiongraph[all]"
```
It is also possible to install only the extras you need (recommended):
```bash
# example: install only RealSense and OpenVINO support
pip install "visiongraph[realsense, openvino]"
```
Please read more about the extra packages in the [documentation](https://cansik.github.io/visiongraph/visiongraph.html#extras).
### Optional Mediapipe Support
Visiongraph can integrate Google’s [MediaPipe](https://github.com/google/mediapipe) for advanced hand, face and object tracking pipelines. Unfortunately, the official PyPI MediaPipe wheels declare a strict dependency on `numpy<2.0`, which prevents installation alongside NumPy 2.x, even though most functionality works fine with NumPy 2.0 and above. To work around this limitation, we maintain a custom [mediapipe-numpy2](https://github.com/cansik/mediapipe-numpy2) build that removes the `<2.0` pin.
When you install with the `mediapipe` extra, pip will automatically fetch the matching patched wheel for your OS and Python version.
#### Alternative: Use the Official MediaPipe Release
If you’re happy to stick with NumPy <2.0, you can skip our custom package entirely and install the upstream MediaPipe wheel from PyPI:
```bash
pip install visiongraph mediapipe
```
This will install Visiongraph plus the official `mediapipe` package (which requires `numpy<2.0`). Just make sure your environment’s NumPy version is below 2.0 when using this route.
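The conflict boils down to a version comparison: the upstream `numpy<2.0` pin excludes every 2.x release. A minimal check (a sketch using naive numeric tuple comparison; real resolvers follow the full PEP 440 rules):

```python
def satisfies_numpy_pin(installed: str, pin_max: str = "2.0") -> bool:
    """True if `installed` satisfies the upstream `numpy<pin_max` pin.

    Naive comparison: splits on dots and compares integer tuples,
    so pre-release suffixes like '2.0.0rc1' are not handled.
    """
    def key(version: str) -> tuple[int, ...]:
        return tuple(int(part) for part in version.split("."))

    return key(installed) < key(pin_max)
```

Any NumPy 2.x version fails this check, which is why pip refuses to co-install the official MediaPipe wheel with NumPy 2.x.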
## Examples
To demonstrate the possibilities of visiongraph, there are ready-to-run [examples](examples) for you to try out. Here is a list of the current examples:
- [SimpleVisionGraph](examples/SimpleVisionGraph.py) - SSD object detection & tracking of live webcam input with `5` lines of code.
- [VisionGraphExample](examples/VisionGraphExample.py) - A face detection and tracking example with custom events.
- [InputExample](examples/InputExample.py) - A basic input example that determines the center if possible.
- [RealSenseDepthExample](examples/DepthCameraExample.py) - Display the RealSense or Azure Kinect depth map.
- [FaceDetectionExample](examples/FaceDetectionExample.py) - A face detection pipeline example.
- [FindFaceExample](examples/FindFaceExample.py) - A face recognition example to find a target face.
- [CascadeFaceDetectionExample](examples/CascadeFaceDetectionExample.py) - A face detection pipeline that also predicts other feature points of the face.
- [HandDetectionExample](examples/HandDetectionExample.py) - A hand detection pipeline example.
- [PoseEstimationExample](examples/PoseEstimationExample.py) - A pose estimation pipeline which annotates the generic pose keypoints.
- [ProjectedPoseExample](examples/ProjectedPoseExample.py) - Project the pose estimation into 3d space with the RealSense camera.
- [ObjectDetectionExample](examples/ObjectDetectionExample.py) - An object detection & tracking example.
- [InstanceSegmentationExample](examples/InstanceSegmentationExample.py) - Instance segmentation based on the COCO80 dataset.
- [InpaintExample](examples/InpaintExample.py) - A GAN-based inpainting example.
- [MidasDepthExample](examples/MidasDepthExample.py) - Real-time depth prediction with the [midas-small](https://github.com/isl-org/MiDaS) network.
- [RGBDSmoother](examples/RGBDSmoother.py) - Smooth RGB-D depth map videos with a one-euro filter per pixel.
- [FaceMeshVVADExample](examples/FaceMeshVVADExample.py) - Detect voice activity by landmark sequence classification.
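The RGBDSmoother example above applies a one-euro filter per depth pixel. The scalar filter it relies on can be sketched as follows (a minimal sketch following the original one-euro filter formulation; parameter names are from the algorithm, not visiongraph's API):

```python
import math


class OneEuroFilter:
    """Minimal scalar one-euro filter: a low-pass filter whose cutoff
    adapts to signal speed, trading jitter reduction against lag."""

    def __init__(self, freq: float = 30.0, min_cutoff: float = 1.0,
                 beta: float = 0.0, d_cutoff: float = 1.0):
        self.freq = freq              # sampling frequency in Hz
        self.min_cutoff = min_cutoff  # cutoff used when the signal is slow
        self.beta = beta              # speed coefficient (0 = fixed cutoff)
        self.d_cutoff = d_cutoff      # cutoff for the derivative estimate
        self._x_prev = None
        self._dx_prev = 0.0

    def _alpha(self, cutoff: float) -> float:
        # smoothing factor for an exponential filter at the given cutoff
        tau = 1.0 / (2.0 * math.pi * cutoff)
        te = 1.0 / self.freq
        return 1.0 / (1.0 + tau / te)

    def __call__(self, x: float) -> float:
        if self._x_prev is None:  # first sample passes through unchanged
            self._x_prev = x
            return x
        # low-pass the derivative, then adapt the cutoff to the speed
        dx = (x - self._x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self._dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self._x_prev
        self._x_prev, self._dx_prev = x_hat, dx_hat
        return x_hat
```

Smoothing a depth map means keeping one such filter state per pixel, which is why the example filters videos rather than single frames.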
There are even more examples where visiongraph is currently in use:
- [Spout/Syphon RGB-D Example](https://github.com/cansik/spout-rgbd-example) - Share RGB-D images over spout or syphon.
- [WebRTC Input](https://github.com/cansik/visiongraph-webrtc) - A WebRTC input example for visiongraph.
## Development
To develop on visiongraph, it is recommended to clone this repository and install the dependencies as described below. First, install the [uv](https://docs.astral.sh/uv/getting-started/installation/) package manager.
```bash
# in the visiongraph directory install all dependencies
uv sync --all-extras --dev --group docs
```
### Build
To build a new wheel package of visiongraph, run the following commands in the root directory. The wheel and source distribution are placed in `./dist`.
```bash
uv run python setup.py generate_init
uv build
```
### Docs
To generate the documentation, use the following commands.
```bash
# create documentation in "./docs"
uv run python setup.py doc
# launch pdoc webserver
uv run python setup.py doc --launch
```
## Dependencies
Parts of these libraries are directly included and adapted to work with visiongraph.
* [motpy](https://github.com/wmuron/motpy) - simple multi object tracking library (MIT License)
* [motrackers](https://github.com/adipandas/multi-object-tracker) - Multi-object trackers in Python (MIT License)
* [OneEuroFilter-Numpy](https://github.com/HoBeom/OneEuroFilter-Numpy) - One-euro filter implemented in NumPy (MIT License)
Here you can find a list of visiongraph's dependencies and their licenses:
```
depthai MIT License
faiss-cpu MIT License
filterpy MIT License
mediapipe Apache License 2.0
moviepy MIT License
numba BSD License
onnxruntime MIT License
onnxruntime-directml MIT License
onnxruntime-gpu MIT License
opencv-python Apache License 2.0
openvino Apache License 2.0
pyk4a-bundle MIT License
pyopengl BSD License
pyrealsense2 Apache License 2.0
pyrealsense2-macosx Apache License 2.0
requests Apache License 2.0
scipy MIT License
SpoutGL BSD License
syphon-python MIT License
tqdm MIT License
vector BSD License
vidgear Apache License 2.0
wheel MIT License
```
For more information about the dependencies have a look at the [requirements.txt](https://github.com/cansik/visiongraph/blob/main/requirements.txt).
Please note that some models (such as Ultralytics YOLOv8 and YOLOv11) come with specific licenses (AGPLv3). Always check the model license before using the model.
## About
Copyright (c) 2025 Florian Bruggisser
Raw data
{
"_id": null,
"home_page": null,
"name": "visiongraph",
"maintainer": null,
"docs_url": null,
"requires_python": "<3.13,>=3.10",
"maintainer_email": null,
"keywords": null,
"author": null,
"author_email": "Florian Bruggisser <github@broox.ch>",
"download_url": "https://files.pythonhosted.org/packages/f6/f3/68d581da37f75e4e0311ac0d7de60525d2327e940eab475a2ab250254082/visiongraph-1.1.0.1.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": null,
"summary": "Visiongraph is a high level computer vision framework.",
"version": "1.1.0.1",
"project_urls": {
"Documentation": "https://cansik.github.io/visiongraph/",
"Homepage": "https://github.com/cansik/visiongraph",
"Repository": "https://github.com/cansik/visiongraph.git"
},
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "029a5afbefdd1c5d2df50b86238bb693e8a94e65cec590499f2e9794bd126bcd",
"md5": "befa93c20e854ebd2c0a3ef3d798f67b",
"sha256": "21534fcb7a957cd08d07172fa3f005d4bb99aa1850c1be539ffdedb5f0d8c0f1"
},
"downloads": -1,
"filename": "visiongraph-1.1.0.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "befa93c20e854ebd2c0a3ef3d798f67b",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<3.13,>=3.10",
"size": 356330,
"upload_time": "2025-07-31T12:46:33",
"upload_time_iso_8601": "2025-07-31T12:46:33.354708Z",
"url": "https://files.pythonhosted.org/packages/02/9a/5afbefdd1c5d2df50b86238bb693e8a94e65cec590499f2e9794bd126bcd/visiongraph-1.1.0.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "f6f368d581da37f75e4e0311ac0d7de60525d2327e940eab475a2ab250254082",
"md5": "f09797bac262dcd34608ce0aaee76aba",
"sha256": "a5012185275d7778f59398fcd6f9c037abee3a665d222e9e5831cbd20d5d9e7f"
},
"downloads": -1,
"filename": "visiongraph-1.1.0.1.tar.gz",
"has_sig": false,
"md5_digest": "f09797bac262dcd34608ce0aaee76aba",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<3.13,>=3.10",
"size": 228534,
"upload_time": "2025-07-31T12:46:35",
"upload_time_iso_8601": "2025-07-31T12:46:35.177585Z",
"url": "https://files.pythonhosted.org/packages/f6/f3/68d581da37f75e4e0311ac0d7de60525d2327e940eab475a2ab250254082/visiongraph-1.1.0.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-31 12:46:35",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "cansik",
"github_project": "visiongraph",
"github_not_found": true,
"lcname": "visiongraph"
}