# Rocket Welder SDK
[NuGet](https://www.nuget.org/packages/RocketWelder.SDK/) · [PyPI](https://pypi.org/project/rocket-welder-sdk/) · [vcpkg registry](https://github.com/modelingevolution/rocket-welder-sdk-vcpkg-registry) · [MIT License](https://opensource.org/licenses/MIT)
**Client libraries for building custom AI/ML video processing containers that integrate with RocketWelder (Neuron) devices.**
## Overview
The Rocket Welder SDK enables AI/ML developers to build custom video processing containers for Neuron industrial vision devices. It provides high-performance, **zero-copy** frame access via shared memory, supporting real-time computer vision, object detection, and AI inference workloads.
**Target Audience**: AI/ML developers building containerized applications for:
- Real-time object detection (YOLO, custom models)
- Computer vision processing
- AI inference on video streams
- Industrial vision applications
## Table of Contents
- [Quick Start](#quick-start)
- [Your First AI Processing Container](#your-first-ai-processing-container)
- [Development Workflow](#development-workflow)
- [Deploying to Neuron Device](#deploying-to-neuron-device)
- [RocketWelder Integration](#rocketwelder-integration)
- [API Reference](#api-reference)
- [Production Best Practices](#production-best-practices)
## Quick Start
### Installation
| Language | Package Manager | Package Name |
|----------|----------------|--------------|
| C++ | vcpkg | rocket-welder-sdk |
| C# | NuGet | RocketWelder.SDK |
| Python | pip | rocket-welder-sdk |
#### Python
```bash
pip install rocket-welder-sdk
```
#### C#
```bash
dotnet add package RocketWelder.SDK
```
#### C++
```bash
vcpkg install rocket-welder-sdk
```
## Your First AI Processing Container
### Starting with Examples
The SDK includes ready-to-use examples in the `/examples` directory:
```
examples/
├── python/
│   ├── simple_client.py       # Timestamp overlay example
│   ├── integration_client.py  # Testing with --exit-after
│   └── Dockerfile             # Ready-to-build container
├── csharp/
│   └── SimpleClient/
│       ├── Program.cs         # Full example with UI controls
│       └── Dockerfile         # Ready-to-build container
└── cpp/
    ├── simple_client.cpp
    └── CMakeLists.txt
```
### Python Example - Simple Timestamp Overlay
```python
#!/usr/bin/env python3
import sys
import time
import cv2
import numpy as np
from datetime import datetime
import rocket_welder_sdk as rw
# Create client - reads CONNECTION_STRING from environment or args
client = rw.Client.from_(sys.argv)
def process_frame(frame: np.ndarray) -> None:
"""Add timestamp overlay to frame - zero copy!"""
timestamp = datetime.now().strftime("%H:%M:%S")
cv2.putText(frame, timestamp, (10, 30),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
# Start processing
client.start(process_frame)
# Keep running
while client.is_running:
time.sleep(0.1)
```
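In production, your container should also shut down cleanly when RocketWelder stops it. A minimal sketch (assuming only the documented `client.stop()` and `client.is_running`) that stops the client on Ctrl+C or `docker stop`:

```python
import signal
import sys
import time

import rocket_welder_sdk as rw

client = rw.Client.from_(sys.argv)

def process_frame(frame):
    pass  # your processing here

def handle_signal(signum, stack):
    # Ends the is_running loop below; Docker sends SIGTERM on `docker stop`.
    client.stop()

signal.signal(signal.SIGINT, handle_signal)
signal.signal(signal.SIGTERM, handle_signal)

client.start(process_frame)
while client.is_running:
    time.sleep(0.1)
```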
### Building Your Container
```bash
# Navigate to examples directory
cd examples/python
# Build Docker image
docker build -t my-ai-app:v1 -f Dockerfile ..
# Test locally with file
docker run --rm \
-e CONNECTION_STRING="file:///data/test.mp4?loop=true" \
-v /path/to/video.mp4:/data/test.mp4:ro \
my-ai-app:v1
```
## Development Workflow
### Step 1: Test Locally with Video File
Start by testing your container locally before deploying to Neuron:
```bash
# Build your container
docker build -t my-ai-app:v1 -f examples/python/Dockerfile .
# Test with a video file
docker run --rm \
-e CONNECTION_STRING="file:///data/test.mp4?loop=true&preview=false" \
-v $(pwd)/examples/test_stream.mp4:/data/test.mp4:ro \
my-ai-app:v1
```
You can also open a live preview window by forwarding X11 to the container:
```bash
# Install x11-apps
sudo apt install x11-apps
# Test with a video file
docker run --rm \
-e CONNECTION_STRING="file:///data/test.mp4?loop=true&preview=true" \
-e DISPLAY=$DISPLAY \
-v /path/to/your/file.mp4:/data/test.mp4:ro \
-v /tmp/.X11-unix:/tmp/.X11-unix \
my-ai-app:v1
```
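For the fastest iteration you can skip Docker entirely and point the SDK at a file from plain Python (a sketch using the `from_connection_string` factory documented in the API reference below; the path is a placeholder):

```python
import time

import cv2
import rocket_welder_sdk as rw

# Placeholder path - substitute your own test clip.
client = rw.Client.from_connection_string("file:///home/user/test.mp4?loop=true&preview=true")

def process_frame(frame):
    cv2.putText(frame, "local test", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)

client.start(process_frame)
while client.is_running:
    time.sleep(0.1)
```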
### Step 2: Test with Live Stream from Neuron
Once your container works locally, test it with a live stream from your Neuron device:
#### Configure RocketWelder Pipeline for Streaming
1. Access RocketWelder UI on your Neuron device (usually `http://neuron-ip:8080`)
2. Open **Pipeline Designer**
3. Click **"Add Element"**
4. Choose your video source (e.g., `pylonsrc` for Basler cameras)
5. Add **caps filter** to specify format: `video/x-raw,width=1920,height=1080,format=GRAY8`
6. Add **jpegenc** element
7. Add **tcpserversink** element with properties:
- `host`: `0.0.0.0`
- `port`: `5000`
8. Start the pipeline
Example pipeline:
```
pylonsrc → video/x-raw,width=1920,height=1080,format=GRAY8 → queue max-size-buffers=1 leaky=upstream → jpegenc → tcpserversink host=0.0.0.0 port=5000 sync=false
```
#### Connect from Your Dev Laptop
```bash
# On your laptop - connect to Neuron's TCP stream
docker run --rm \
-e CONNECTION_STRING="mjpeg+tcp://neuron-ip:5000" \
--network host \
my-ai-app:v1
```
This allows you to:
- Test your AI processing with real camera feeds
- Debug frame processing logic
- Measure performance with actual hardware
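To put numbers on that last point, you can wrap your callback with a simple FPS counter; a sketch (only `Client.from_` and `start` come from the documented API):

```python
import sys
import time

import rocket_welder_sdk as rw

# Run with CONNECTION_STRING=mjpeg+tcp://neuron-ip:5000
client = rw.Client.from_(sys.argv)

frames = 0
t0 = time.time()

def process_frame(frame):
    global frames
    frames += 1
    if frames % 100 == 0:
        print(f"{frames / (time.time() - t0):.1f} FPS, frame shape {frame.shape}")

client.start(process_frame)
while client.is_running:
    time.sleep(0.1)
```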
## Deploying to Neuron Device
### Option 1: Local Docker Registry (Recommended for Development)
This is the fastest workflow for iterative development:
#### Setup Registry on Your Laptop (One-time)
```bash
# Start a local Docker registry
docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
registry:2
# Verify it's running
curl http://localhost:5000/v2/_catalog
```
#### Configure Neuron to Use Your Laptop Registry (One-time)
```bash
# SSH to Neuron device
ssh user@neuron-ip
# Edit Docker daemon config
sudo nano /etc/docker/daemon.json
# Add your laptop's IP to insecure registries:
{
"insecure-registries": ["laptop-ip:5000"]
}
# Restart Docker
sudo systemctl restart docker
```
**Note**: Replace `laptop-ip` with your laptop's actual IP address (e.g., `192.168.1.100`).
To find it: `ip addr show` or `ifconfig`
#### Push Image to Your Registry
```bash
# On your laptop - tag for local registry
docker tag my-ai-app:v1 localhost:5000/my-ai-app:v1
# Push to registry
docker push localhost:5000/my-ai-app:v1
# Verify push
curl http://localhost:5000/v2/my-ai-app/tags/list
```
#### Pull on Neuron Device
```bash
# SSH to Neuron
ssh user@neuron-ip
# Pull from laptop registry
docker pull laptop-ip:5000/my-ai-app:v1
# Verify image
docker images | grep my-ai-app
```
#### Workflow Summary
The iterative development loop:

1. Edit code on your laptop
2. `docker build -t localhost:5000/my-ai-app:v1 .`
3. `docker push localhost:5000/my-ai-app:v1`
4. Configure the image in the RocketWelder UI (once)
5. RocketWelder pulls and runs your container
### Option 2: Export/Import (For One-off Transfers)
Useful when you don't want to set up a registry:
```bash
# On your laptop - save image to tar
docker save my-ai-app:v1 | gzip > my-ai-app-v1.tar.gz
# Transfer to Neuron
scp my-ai-app-v1.tar.gz user@neuron-ip:/tmp/
# SSH to Neuron and load
ssh user@neuron-ip
docker load < /tmp/my-ai-app-v1.tar.gz
# Verify
docker images | grep my-ai-app
```
### Option 3: Azure Container Registry (Production)
For production deployments:
```bash
# Login to ACR (Azure Container Registry)
az acr login --name your-registry
# Tag and push
docker tag my-ai-app:v1 your-registry.azurecr.io/my-ai-app:v1
docker push your-registry.azurecr.io/my-ai-app:v1
# Configure Neuron to use ACR (credentials required)
```
## RocketWelder Integration
### Understanding zerosink vs zerofilter
RocketWelder provides two GStreamer elements for container integration:
| Element | Mode | Use Case |
|---------|------|----------|
| **zerosink** | One-way | RocketWelder → Your Container<br/>Read frames, process, log results |
| **zerofilter** | Duplex | RocketWelder ↔ Your Container<br/>Read frames, modify them, return modified frames |
**Most AI use cases use `zerosink`** (one-way mode):
- Object detection (draw bounding boxes)
- Classification (overlay labels)
- Analytics (count objects, log events)
**Use `zerofilter`** (duplex mode) when:
- You need to modify frames and return them to the pipeline
- Real-time visual effects/filters
- Frame enhancement before encoding
### Configuring Your Container in RocketWelder
#### Step-by-Step UI Configuration
1. **Access RocketWelder UI**
- Navigate to `http://neuron-ip:8080`
- Log in to your Neuron device
2. **Open Pipeline Designer**
- Go to **Pipelines** section
- Create new pipeline or edit existing
3. **Add Video Source**
- Click **"Add Element"**
- Choose your camera source (e.g., `pylonsrc`, `aravissrc`)
- Configure camera properties
4. **Add Format**
- Add caps filter: `video/x-raw,format=RGB`
5. **Add queue**
- max-size-buffers: 1
- leaky: upstream
6. **Add ZeroBuffer Element**
- Click **"Add Element"**
- Select **"zerosink"** (or **"zerofilter"** for duplex mode)
- Scroll down in properties panel on the right
7. **Configure Consumer**
- Toggle **"Enable ZeroBuffer Consumer"** ✓
- Select **"Consumer Mode"** dropdown
- Choose **"Docker Container"** (not Process)
8. **Configure Docker Settings**
- **Image**: Enter your image name
- Local registry: `laptop-ip:5000/my-ai-app`
- ACR: `your-registry.azurecr.io/my-ai-app`
- Loaded image: `my-ai-app`
- **Tag**: `v1` (or your version tag)
- **Environment Variables**: (optional) Add custom env vars if needed
- **Auto-remove**: ✓ (recommended - cleans up container on stop)
9. **Save Pipeline Configuration**
10. **Start Pipeline**
- Click **"Start"** button
- RocketWelder will automatically:
- Pull your Docker image (if not present)
- Create shared memory buffer
- Launch your container with `CONNECTION_STRING` env var
- Start streaming frames
### Automatic Environment Variables
When RocketWelder launches your container, it automatically sets:
```bash
CONNECTION_STRING=shm://zerobuffer-abc123-456?size=20MB&metadata=4KB&mode=oneway
SessionId=def789-012 # For UI controls (if enabled)
EventStore=esdb://host.docker.internal:2113?tls=false # For external controls
```
Your SDK code simply reads `CONNECTION_STRING`:
```python
# Python - automatically reads CONNECTION_STRING from environment
client = rw.Client.from_(sys.argv)
```
```csharp
// C# - automatically reads CONNECTION_STRING
var client = RocketWelderClient.From(args);
```
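When a container misbehaves at startup, it is often worth logging exactly what RocketWelder injected. A minimal sketch (the variable names are the ones listed above; nothing else is assumed):

```python
import os

# Print the RocketWelder-provided environment for debugging.
for name in ("CONNECTION_STRING", "SessionId", "EventStore"):
    print(f"{name}={os.environ.get(name, '<not set>')}")
```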
### Example Pipeline Configurations
#### AI Object Detection Pipeline
```
pylonsrc
→ video/x-raw,width=1920,height=1080,format=GRAY8
→ videoconvert
→ zerosink
└─ Docker: laptop-ip:5000/yolo-detector:v1
```
Your YOLO container receives frames, detects objects, draws bounding boxes.
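For illustration, the detection callback inside such a container might look like the sketch below. It assumes the third-party `ultralytics` package and a `yolov8n.pt` weights file baked into your image; only the callback signature and client factory come from this SDK.

```python
import sys

import cv2
import rocket_welder_sdk as rw
from ultralytics import YOLO  # assumed third-party dependency

model = YOLO("yolov8n.pt")  # hypothetical weights file shipped in your image
client = rw.Client.from_(sys.argv)

def process_frame(frame):
    # YOLO expects 3 channels; GRAY8 pipelines deliver 2-D frames.
    bgr = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR) if frame.ndim == 2 else frame
    results = model(bgr, verbose=False)
    for x1, y1, x2, y2 in results[0].boxes.xyxy.cpu().numpy().astype(int):
        # Draw in-place on the shared-memory frame.
        cv2.rectangle(frame, (x1, y1), (x2, y2), (255, 255, 255), 2)

client.start(process_frame)
```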
#### Dual Output: AI Processing
```
pylonsrc
→ video/x-raw,width=1920,height=1080,format=GRAY8
→ tee name=t
t. → queue → jpegenc → tcpserversink
t. → queue → zerofilter → queue → jpegenc → tcpserversink
└─ Docker: laptop-ip:5000/my-ai-app:v1
```
#### Real-time Frame Enhancement with Live Preview (Duplex Mode)
```
pylonsrc hdr-sequence="5000,5500" hdr-sequence2="19,150" hdr-profile=0
  → video/x-raw,width=1920,height=1080,format=GRAY8
  → queue max-size-buffers=1 leaky=upstream
  → hdr processing-mode=burst num-frames=2
  → sortingbuffer skip-behaviour=hdr
  → queue max-size-buffers=1 leaky=upstream
  → zerofilter
      └─ Docker: laptop-ip:5000/frame-enhancer:v1
  → queue max-size-buffers=1 leaky=upstream
  → jpegenc
  → multipartmux enable-html=true
  → tcpserversink host=0.0.0.0 port=5000 sync=false
```
In duplex mode with `zerofilter`, your container:
1. Receives input frames via shared memory (automatically configured by RocketWelder)
2. Processes them in real-time (e.g., AI enhancement, object detection, overlays)
3. Writes modified frames back to shared memory
4. Modified frames flow back into RocketWelder pipeline for streaming/display
**Pipeline elements explained:**
- `pylonsrc hdr-sequence="5000,5500"`: Configures HDR Profile 0 with 5000μs and 5500μs exposures (cycles automatically via camera sequencer)
- `hdr-sequence2="19,150"`: Configures HDR Profile 1 with 2 exposures for runtime switching
- `hdr-profile=0`: Starts with Profile 0; can be switched at runtime to adapt to changing lighting conditions (requires a separate branch with `histogram`, `dre`, and `pylontarget` elements)
- `hdr processing-mode=burst num-frames=2`: HDR blending element - combines multiple exposures into single HDR frame
- `sortingbuffer skip-behaviour=hdr`: Reorders out-of-order frames from the Pylon camera using HDR metadata (MasterSequence, ExposureSequenceIndex); frame order is detected automatically via `image_number` from the Pylon metadata
- `zerofilter`: Bidirectional shared memory connection to your Docker container
- `jpegenc`: JPEG compression for network streaming
- `multipartmux enable-html=true`: Creates MJPEG stream with CORS headers for browser viewing
- `tcpserversink`: Streams to RocketWelder UI at `http://neuron-ip:5000`
**View live preview:**
Open `http://neuron-ip:5000` in a browser to see the processed video stream with your AI enhancements in real time.
**HDR Profile Switching:**
The dual-profile system allows runtime switching between lighting conditions:
- Profile 0 (2 exposures): Fast cycling for normal conditions
- Profile 1 (2 exposures): Alternative exposure set tuned for challenging lighting
- Switch dynamically via the `hdr-profile` property without stopping the pipeline (requires the separate `histogram`/`dre`/`pylontarget` branch described above)
**Use case examples:**
- **AI object detection**: Draw bounding boxes that appear in RocketWelder preview
- **Real-time enhancement**: AI super-resolution, denoising, stabilization
- **Visual feedback**: Add crosshairs, tracking overlays, status indicators
- **Quality control**: Highlight defects or areas of interest in industrial inspection
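As a concrete instance of the visual-feedback case, a duplex callback that copies the input and draws a centered crosshair could look like this sketch (it follows the duplex signature shown in the API reference below):

```python
import cv2
import numpy as np

def process_frame_duplex(input_frame: np.ndarray, output_frame: np.ndarray) -> None:
    np.copyto(output_frame, input_frame)
    h, w = output_frame.shape[:2]
    cx, cy = w // 2, h // 2
    # White crosshair at the frame center.
    cv2.line(output_frame, (cx - 20, cy), (cx + 20, cy), (255, 255, 255), 1)
    cv2.line(output_frame, (cx, cy - 20), (cx, cy + 20), (255, 255, 255), 1)
```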
## Connection String Format
The SDK uses URI-style connection strings:
```
protocol://[host[:port]]/[path][?param1=value1&param2=value2]
```
### Supported Protocols
#### Shared Memory (Production - Automatic)
```
shm://buffer-name?size=20MB&metadata=4KB&mode=oneway
```
When deployed with RocketWelder, this is set automatically via `CONNECTION_STRING` environment variable.
**Parameters:**
- `size`: Buffer size (default: 20MB, supports: B, KB, MB, GB)
- `metadata`: Metadata size (default: 4KB)
- `mode`: `oneway` (zerosink) or `duplex` (zerofilter)
#### File Protocol (Local Testing)
```
file:///path/to/video.mp4?loop=true&preview=false
```
**Parameters:**
- `loop`: Loop playback (`true`/`false`, default: `false`)
- `preview`: Show preview window (`true`/`false`, default: `false`)
#### MJPEG over TCP (Development/Testing)
```
mjpeg+tcp://neuron-ip:5000
```
Connect to RocketWelder's `tcpserversink` for development testing.
#### MJPEG over HTTP
```
mjpeg+http://camera-ip:8080
```
For network cameras or HTTP streamers.
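Because connection strings are plain URIs, you can sanity-check one with the standard library before handing it to the SDK (a sketch; the SDK performs its own parsing internally):

```python
from urllib.parse import parse_qs, urlparse

def describe(connection_string: str) -> None:
    uri = urlparse(connection_string)
    params = {k: v[0] for k, v in parse_qs(uri.query).items()}
    print(f"protocol={uri.scheme} host={uri.hostname} port={uri.port} "
          f"path={uri.path!r} params={params}")

describe("shm://buffer-name?size=20MB&metadata=4KB&mode=oneway")
# protocol=shm host=buffer-name port=None path='' params={'size': '20MB', ...}
describe("mjpeg+tcp://neuron-ip:5000")
# protocol=mjpeg+tcp host=neuron-ip port=5000 path='' params={}
```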
## API Reference
### Python API
```python
import sys
import time

import cv2
import numpy as np

import rocket_welder_sdk as rw
# Create client (reads CONNECTION_STRING from env or args)
client = rw.Client.from_(sys.argv)
# Or specify connection string directly
client = rw.Client.from_connection_string("shm://buffer-name?size=20MB")
# Process frames - one-way mode
def process_frame(frame: np.ndarray) -> None:
# frame is a numpy array (height, width, channels)
# Modify in-place for zero-copy performance
cv2.putText(frame, "AI Processing", (10, 30),
cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
# Process frames - duplex mode
def process_frame_duplex(input_frame: np.ndarray, output_frame: np.ndarray) -> None:
# Copy input to output and modify
np.copyto(output_frame, input_frame)
# Add AI overlay to output_frame
cv2.putText(output_frame, "Processed", (10, 30),
cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
# Start processing
client.start(process_frame) # or process_frame_duplex for duplex mode
# Keep running
while client.is_running:
time.sleep(0.1)
# Stop
client.stop()
```
### C# API
```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using RocketWelder.SDK;

// Create client (reads CONNECTION_STRING from env or config)
var client = RocketWelderClient.From(args);

// Or specify a connection string directly:
// var client = RocketWelderClient.FromConnectionString("shm://buffer-name?size=20MB");
// Process frames - one-way mode
client.Start((Mat frame) =>
{
// frame is an Emgu.CV.Mat (zero-copy)
CvInvoke.PutText(frame, "AI Processing", new Point(10, 30),
FontFace.HersheySimplex, 1.0, new MCvScalar(0, 255, 0), 2);
});
// Process frames - duplex mode (register this overload or the one-way overload, not both)
client.Start((Mat input, Mat output) =>
{
input.CopyTo(output);
CvInvoke.PutText(output, "Processed", new Point(10, 30),
FontFace.HersheySimplex, 1.0, new MCvScalar(0, 255, 0), 2);
});
```
### C++ API
```cpp
#include <rocket_welder/client.hpp>
#include <opencv2/opencv.hpp>
// Create client (reads CONNECTION_STRING from env or args)
auto client = rocket_welder::Client::from(argc, argv);

// Or specify a connection string directly:
// auto client = rocket_welder::Client::from_connection_string("shm://buffer-name?size=20MB");
// Process frames - one-way mode
client.on_frame([](cv::Mat& frame) {
// frame is a cv::Mat reference (zero-copy)
cv::putText(frame, "AI Processing", cv::Point(10, 30),
cv::FONT_HERSHEY_SIMPLEX, 1.0, cv::Scalar(0, 255, 0), 2);
});
// Process frames - duplex mode (register this callback or the one-way callback, not both)
client.on_frame([](const cv::Mat& input, cv::Mat& output) {
input.copyTo(output);
cv::putText(output, "Processed", cv::Point(10, 30),
cv::FONT_HERSHEY_SIMPLEX, 1.0, cv::Scalar(0, 255, 0), 2);
});
// Start processing
client.start();
```
## Production Best Practices
### Performance Optimization
1. **Zero-Copy Processing**
- Modify frames in-place when possible
- Avoid unnecessary memory allocations in the frame processing loop
- Use OpenCV operations that work directly on the frame buffer
2. **Frame Rate Management**
```python
# Process every Nth frame for expensive AI operations
frame_count = 0
def process_frame(frame):
global frame_count
frame_count += 1
if frame_count % 5 == 0: # Process every 5th frame
run_expensive_ai_model(frame)
```
3. **Logging**
- Use structured logging with appropriate levels
- Avoid logging in the frame processing loop for production
- Log only important events (errors, detections, etc.)
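A minimal setup consistent with those rules, using only the standard-library `logging` module (the `run_model` call is a hypothetical stand-in for your inference code):

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("my_ai_app")

def process_frame(frame):
    detections = run_model(frame)  # hypothetical inference call
    if detections:
        # Log events, not frames: this only fires when something was found.
        logger.info("detected %d objects", len(detections))
```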
### Error Handling
```python
import logging
import sys

import rocket_welder_sdk as rw
logger = logging.getLogger(__name__)
client = rw.Client.from_(sys.argv)
def on_error(sender, error):
logger.error(f"Client error: {error.Exception}")
# Implement recovery logic or graceful shutdown
client.OnError += on_error
```
### Monitoring
```python
import logging
import time

logger = logging.getLogger(__name__)
class FrameStats:
def __init__(self):
self.frame_count = 0
self.start_time = time.time()
def update(self):
self.frame_count += 1
if self.frame_count % 100 == 0:
elapsed = time.time() - self.start_time
fps = self.frame_count / elapsed
logger.info(f"Processed {self.frame_count} frames, {fps:.1f} FPS")
stats = FrameStats()
def process_frame(frame):
stats.update()
# Your processing logic
```
### Docker Best Practices
1. **Use Multi-stage Builds**
```dockerfile
FROM python:3.12-slim AS builder
# Build dependencies
FROM python:3.12-slim
# Copy only runtime artifacts
```
2. **Minimize Image Size**
- Use slim base images
- Remove build tools in final stage
- Clean apt cache: `rm -rf /var/lib/apt/lists/*`
3. **Health Checks**
```dockerfile
HEALTHCHECK --interval=30s --timeout=3s \
CMD pgrep -f my_app.py || exit 1
```
4. **Resource Limits** (in RocketWelder docker-compose or deployment)
```yaml
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
```
## Examples
The `examples/` directory contains complete working examples:
- **python/simple_client.py** - Minimal timestamp overlay
- **python/integration_client.py** - Testing with --exit-after flag
- **python/advanced_client.py** - Full-featured with UI controls
- **csharp/SimpleClient/** - Complete C# example with crosshair controls
- **cpp/simple_client.cpp** - C++ example
## Troubleshooting
### Container Doesn't Start
**Check Docker logs:**
```bash
docker ps -a | grep my-ai-app
docker logs <container-id>
```
**Common issues:**
- Image not found (check `docker images`)
- Insecure registry not configured on Neuron
### Cannot Pull from Laptop Registry
```bash
# On Neuron - test connectivity
ping laptop-ip
# Test registry access
curl http://laptop-ip:5000/v2/_catalog
# Check Docker daemon config
cat /etc/docker/daemon.json
# Restart Docker after config change
sudo systemctl restart docker
```
### SDK Connection Timeout
**Check shared memory buffer exists:**
```bash
# On Neuron device
ls -lh /dev/shm/
# Should see zerobuffer-* files
```
**Check RocketWelder pipeline status:**
- Is pipeline running?
- Is zerosink element configured correctly?
- Check RocketWelder logs for errors
### Low Frame Rate / Performance
1. **Check CPU usage:** `htop` or `docker stats`
2. **Reduce AI model complexity** or process every Nth frame
3. **Profile your code** to find bottlenecks
4. **Use GPU acceleration** if available (NVIDIA runtime)
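For point 3, a lightweight way to find out where the time goes is to time the callback itself before reaching for a full profiler (a standard-library sketch):

```python
import time

def timed(callback, report_every=100):
    """Wrap a frame callback and print its average per-frame latency."""
    state = {"n": 0, "total": 0.0}

    def wrapper(frame):
        t0 = time.perf_counter()
        callback(frame)
        state["total"] += time.perf_counter() - t0
        state["n"] += 1
        if state["n"] % report_every == 0:
            print(f"avg {1000 * state['total'] / state['n']:.2f} ms/frame")

    return wrapper

# Usage: client.start(timed(process_frame))
```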
## Support
- **Issues**: [GitHub Issues](https://github.com/modelingevolution/rocket-welder-sdk/issues)
- **Discussions**: [GitHub Discussions](https://github.com/modelingevolution/rocket-welder-sdk/discussions)
- **Documentation**: [https://docs.rocket-welder.io](https://docs.rocket-welder.io)
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- GStreamer Project for the multimedia framework
- ZeroBuffer contributors for the zero-copy buffer implementation
- OpenCV community for computer vision tools