# Space Stream [![PyPI](https://img.shields.io/pypi/v/space-stream)](https://pypi.org/project/space-stream/)
Send RGB-D images over spout / syphon with visiongraph.

![Example Map](images/space-stream-ui.jpg)
*Source: Intel® RealSense™ [Sample Data](https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md)*

### Installation
It is recommended to use `Python 3.11` (`Python 3.8`, `Python 3.9` or `Python 3.10` should work too); the package should run on any OS. First create a new [virtualenv](https://docs.python.org/3/library/venv.html) and activate it.
After that, install the package and its dependencies:

```bash
pip install space-stream
```
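
For reference, a complete setup on macOS or Linux might look like the following (the environment name `venv` is just an example; on Windows the activation script is `venv\Scripts\activate`):

```bash
# create and activate a fresh virtual environment
python3 -m venv venv
source venv/bin/activate

# install space-stream and its dependencies
pip install space-stream
```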

#### ZED Camera
To be able to use a ZED Camera, please follow the tutorial on the [ZED Python API](https://www.stereolabs.com/docs/app-development/python/install/) website.

1. Install the [ZED SDK](https://www.stereolabs.com/developers/release/) (together with CUDA)
2. Run the command `python "C:\Program Files (x86)\ZED SDK\get_python_api.py"` inside the virtual Python environment (a quick import check is shown below)
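
To confirm that the API ended up in the active environment, a quick import check can help (the module name `pyzed.sl` comes from the ZED Python API; this check is a suggestion, not part of the official tutorial):

```bash
python -c "import pyzed.sl; print('ZED Python API is available')"
```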

### Usage
Run the `space-stream` command as shown below to start a capturing pipeline (RealSense-based). After that you can open a [spout receiver](https://github.com/leadedge/Spout2/releases) / syphon receiver and check the result there.

```
space-stream --input realsense
```

To use the Azure Kinect, use the `azure` input type:

```
space-stream --input azure
```
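
The name under which the Spout / Syphon stream is published can be set with `--stream-name` (the flag is listed in the help output below; the name used here is only an example):

```
space-stream --input realsense --stream-name RGBDCamera
```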

### Build

```bash
python setup.py distribute
```

### Development
To develop with this project, clone the git repository and install the dependencies from the requirements file:

```bash
pip install -r requirements.txt
```

To call the module directly, use Python's `-m` flag:

```
python -m spacestream
```

#### Depth Codec
By default the depth map is encoded with the `Linear` codec. It is possible to select a different encoding method; be aware that some codecs have an impact on performance. Here is a list of all available codecs:

```
Linear
UniformHue
InverseHue
```

The codecs `UniformHue` and `InverseHue` are implemented according to the Intel whitepaper about [Depth image compression by colorization](https://dev.intelrealsense.com/docs/depth-image-compression-by-colorization-for-intel-realsense-depth-cameras).
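
To select one of these codecs on the command line, pass it to the `--codec` flag (see the help output below); for example:

```
space-stream --input realsense --codec UniformHue
```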

#### Bit Depth
The encoded bit depth depends on the codec used. The `Linear` codec encodes two different bit depths at once: an `8-bit` value in the `red` channel, and a `16-bit` value split across the `green` (MSB) and `blue` (LSB) channels.
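
For illustration, below is a minimal NumPy sketch of the bit packing described above. It follows only this description (normalisation over the configured distance range, `red` as 8-bit value, `green`/`blue` as 16-bit MSB/LSB) and is not the package's actual implementation; the function names and rounding details are assumptions.

```python
import numpy as np

def encode_linear(depth: np.ndarray, min_dist: float, max_dist: float) -> np.ndarray:
    """Pack a metric depth map into an RGB image: 8-bit in red,
    16-bit split into green (MSB) and blue (LSB)."""
    # Normalise depth into [0, 1] over the configured distance range.
    norm = np.clip((depth - min_dist) / (max_dist - min_dist), 0.0, 1.0)

    d16 = np.round(norm * 65535).astype(np.uint16)  # 16-bit code
    red = np.round(norm * 255).astype(np.uint8)     # coarse 8-bit code
    green = (d16 >> 8).astype(np.uint8)             # most significant byte
    blue = (d16 & 0xFF).astype(np.uint8)            # least significant byte
    return np.dstack([red, green, blue])

def decode_linear(rgb: np.ndarray, min_dist: float, max_dist: float) -> np.ndarray:
    """Recover metric depth from the green (MSB) and blue (LSB) channels."""
    d16 = (rgb[..., 1].astype(np.uint16) << 8) | rgb[..., 2].astype(np.uint16)
    return min_dist + (d16 / 65535.0) * (max_dist - min_dist)
```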

#### Distance Range
To define the minimum and maximum distance to encode, use the `--min-distance` and `--max-distance` parameters.
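
For example (the numeric values below are placeholders; pick a range that matches your camera and scene):

```
space-stream --input realsense --min-distance 0.0 --max-distance 6.0
```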

#### Help

```
usage: space-stream [-h] [-c CONFIG] [-s SETTINGS]
                    [--loglevel {critical,error,warning,info,debug}]
                    [--record RECORD]
                    [--codec Linear, UniformHue, InverseHue, RSColorizer]
                    [--min-distance MIN_DISTANCE]
                    [--max-distance MAX_DISTANCE] [--stream-name STREAM_NAME]
                    [--input video-capture,image,realsense,azure,camgear,zed]
                    [--input-size width height] [--input-fps INPUT_FPS]
                    [--input-rotate 90,-90,180] [--input-flip h,v]
                    [--input-mask INPUT_MASK] [--input-crop x y width height]
                    [--raw-input] [--channel CHANNEL]
                    [--input-skip INPUT_SKIP]
                    [--input-backend any,vfw,v4l,v4l2,firewire,fireware,ieee1394,dc1394,cmu1394,qt,unicap,dshow,pvapi,openni,openni_asus,android,xiapi,avfoundation,giganetix,msmf,winrt,intelperc,openni2,openni2_asus,gphoto2,gstreamer,ffmpeg,images,aravis,opencv_mjpeg,intel_mfx,xine]
                    [-src SOURCE] [--input-path INPUT_PATH]
                    [--input-delay INPUT_DELAY] [--exposure EXPOSURE]
                    [--gain GAIN] [--white-balance WHITE_BALANCE] [--depth]
                    [--depth-as-input] [-ir] [--rs-serial RS_SERIAL]
                    [--rs-json RS_JSON] [--rs-play-bag RS_PLAY_BAG]
                    [--rs-record-bag RS_RECORD_BAG] [--rs-disable-emitter]
                    [--rs-bag-offline]
                    [--rs-auto-exposure-limit RS_AUTO_EXPOSURE_LIMIT]
                    [--rs-auto-gain-limit RS_AUTO_GAIN_LIMIT]
                    [--rs-filter decimation,spatial,temporal,hole-filling [decimation,spatial,temporal,hole-filling ...]]
                    [--rs-color-scheme Jet,Classic,WhiteToBlack,BlackToWhite,Bio,Cold,Warm,Quantized,Pattern]
                    [--k4a-align-to-color] [--k4a-align-to-depth]
                    [--k4a-device K4A_DEVICE] [--k4a-depth-clipping min max]
                    [--k4a-ir-clipping min max] [--k4a-play-mkv K4A_PLAY_MKV]
                    [--k4a-record-mkv K4A_RECORD_MKV]
                    [--k4a-depth-mode OFF,NFOV_2X2BINNED,NFOV_UNBINNED,WFOV_2X2BINNED,WFOV_UNBINNED,PASSIVE_IR]
                    [--k4a-passive-ir]
                    [--k4a-color-resolution OFF,RES_720P,RES_1080P,RES_1440P,RES_1536P,RES_2160P,RES_3072P]
                    [--k4a-color-format COLOR_MJPG,COLOR_NV12,COLOR_YUY2,COLOR_BGRA32,DEPTH16,IR16,CUSTOM8,CUSTOM16,CUSTOM]
                    [--k4a-wired-sync-mode STANDALONE,MASTER,SUBORDINATE]
                    [--k4a-subordinate-delay-off-master-usec K4A_SUBORDINATE_DELAY_OFF_MASTER_USEC]
                    [--midas] [--mask]
                    [--segnet mediapipe,mediapipe-light,mediapipe-heavy]
                    [--parallel] [--num-threads NUM_THREADS] [--no-fastmath]
                    [--no-filter] [--no-preview] [--record-crf RECORD_CRF]
                    [--view-pcd] [--view-3d]

RGB-D framebuffer sharing demo for visiongraph.

options:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        Configuration file path.
  -s SETTINGS, --settings SETTINGS
                        Settings file path (json).
  --loglevel {critical,error,warning,info,debug}
                        Provide logging level. Example --loglevel debug,
                        default=warning
  --record RECORD       Record output into recordings folder.
  --codec Linear, UniformHue, InverseHue, RSColorizer
                        Codec how the depth map will be encoded.
  --min-distance MIN_DISTANCE
                        Min distance to perceive by the camera.
  --max-distance MAX_DISTANCE
                        Max distance to perceive by the camera.
  --stream-name STREAM_NAME
                        Spout / Syphon stream name.

input provider:
  --input video-capture,image,realsense,azure,camgear,zed
                        Image input provider, default: video-capture.
  --input-size width height
                        Requested input media size.
  --input-fps INPUT_FPS
                        Requested input media framerate.
  --input-rotate 90,-90,180
                        Rotate input media.
  --input-flip h,v      Flip input media.
  --input-mask INPUT_MASK
                        Path to the input mask.
  --input-crop x y width height
                        Crop input image.
  --raw-input           Skip automatic input conversion to 3-channel image.
  --channel CHANNEL     Input device channel (camera id, video path, image
                        sequence).
  --input-skip INPUT_SKIP
                        If set the input will be skipped to the value in
                        milliseconds.
  --input-backend any,vfw,v4l,v4l2,firewire,fireware,ieee1394,dc1394,cmu1394,qt,unicap,dshow,pvapi,openni,openni_asus,android,xiapi,avfoundation,giganetix,msmf,winrt,intelperc,openni2,openni2_asus,gphoto2,gstreamer,ffmpeg,images,aravis,opencv_mjpeg,intel_mfx,xine
                        VideoCapture API backends identifier., default: any.
  -src SOURCE, --source SOURCE
                        Generic input source for all inputs.
  --input-path INPUT_PATH
                        Path to the input image.
  --input-delay INPUT_DELAY
                        Input delay time (s).
  --exposure EXPOSURE   Exposure value (usec) for depth camera input (disables
                        auto-exposure).
  --gain GAIN           Gain value for depth input (disables auto-exposure).
  --white-balance WHITE_BALANCE
                        White-Balance value for depth input (disables auto-
                        white-balance).
  --depth               Enable RealSense depth stream.
  --depth-as-input      Use colored depth stream as input stream.
  -ir, --infrared       Use infrared as input stream.
  --rs-serial RS_SERIAL
                        RealSense serial number to choose specific device.
  --rs-json RS_JSON     RealSense json configuration to apply.
  --rs-play-bag RS_PLAY_BAG
                        Path to a pre-recorded bag file for playback.
  --rs-record-bag RS_RECORD_BAG
                        Path to a bag file to store the current recording.
  --rs-disable-emitter  Disable RealSense IR emitter.
  --rs-bag-offline      Disable realtime bag playback.
  --rs-auto-exposure-limit RS_AUTO_EXPOSURE_LIMIT
                        Auto exposure limit (ms).
  --rs-auto-gain-limit RS_AUTO_GAIN_LIMIT
                        Auto gain limit (16-248).
  --rs-filter decimation,spatial,temporal,hole-filling [decimation,spatial,temporal,hole-filling ...]
                        RealSense depth filter.
  --rs-color-scheme Jet,Classic,WhiteToBlack,BlackToWhite,Bio,Cold,Warm,Quantized,Pattern
                        Color scheme for depth map, default: WhiteToBlack.
  --k4a-align-to-color  Align azure frames to color frame.
  --k4a-align-to-depth  Align azure frames to depth frame.
  --k4a-device K4A_DEVICE
                        Azure device id.
  --k4a-depth-clipping min max
                        Depth input clipping.
  --k4a-ir-clipping min max
                        Infrared input clipping.
  --k4a-play-mkv K4A_PLAY_MKV
                        Path to a pre-recorded bag file for playback.
  --k4a-record-mkv K4A_RECORD_MKV
                        Path to a mkv file to store the current recording.
  --k4a-depth-mode OFF,NFOV_2X2BINNED,NFOV_UNBINNED,WFOV_2X2BINNED,WFOV_UNBINNED,PASSIVE_IR
                        Azure depth mode, default: NFOV_UNBINNED.
  --k4a-passive-ir      Use passive IR input.
  --k4a-color-resolution OFF,RES_720P,RES_1080P,RES_1440P,RES_1536P,RES_2160P,RES_3072P
                        Azure color resolution (overwrites input-size),
                        default: RES_720P.
  --k4a-color-format COLOR_MJPG,COLOR_NV12,COLOR_YUY2,COLOR_BGRA32,DEPTH16,IR16,CUSTOM8,CUSTOM16,CUSTOM
                        Azure color image format, default: COLOR_BGRA32.
  --k4a-wired-sync-mode STANDALONE,MASTER,SUBORDINATE
                        Synchronization mode when connecting two or more
                        devices together, default: STANDALONE.
  --k4a-subordinate-delay-off-master-usec K4A_SUBORDINATE_DELAY_OFF_MASTER_USEC
                        The external synchronization timing.
  --midas               Use midas for depth capture.

masking:
  --mask                Apply mask by segmentation algorithm.
  --segnet mediapipe,mediapipe-light,mediapipe-heavy
                        Segmentation Network, default: mediapipe.

performance:
  --parallel            Enable parallel for codec operations.
  --num-threads NUM_THREADS
                        Number of threads for parallelization.
  --no-fastmath         Disable fastmath for codec operations.

debug:
  --no-filter           Disable realsense image filter.
  --no-preview          Disable preview to speed.
  --record-crf RECORD_CRF
                        Recording compression rate.
  --view-pcd            Display PCB preview (deprecated, use --view-3d).
  --view-3d             Display PCB preview.

Args that start with '--' can also be set in a config file (specified via -c).
Config file syntax allows: key=value, flag=true, stuff=[a,b,c] (for details,
see syntax at https://goo.gl/R74nmi). In general, command-line values override
config file values which override defaults.
```
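
As a hypothetical illustration of the config-file mechanism described at the end of the help output, a file like the following (name and values chosen purely for this example) could set a few of the options above and be passed via `-c`:

```
# spacestream.cfg - example values only
input = realsense
codec = UniformHue
min-distance = 0.0
max-distance = 6.0
stream-name = RGBDStream
```

```
space-stream -c spacestream.cfg
```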

### About
Copyright (c) 2024 Florian Bruggisser

            
