stream-infer

- Name: stream-infer
- Version: 0.4.2
- Home page: https://github.com/zaigie/stream-infer
- Summary: Video streaming inference framework, integrating image algorithms and models for real-time/offline video structuring
- Upload time: 2024-01-12 06:20:29
- Author / Maintainer: ZaiGie
- Requires Python: >=3.8,<3.12.0
- License: Apache-2.0
- Keywords: machine-learning, deep-learning, vision, ml, dl, ai, streaming framework, deepstream
# <img src="https://github.com/zaigie/stream-infer/blob/main/docs/img/logo_.png?raw=true" alt="Stream Infer" height="60px">

[![PyPI](https://img.shields.io/pypi/v/stream-infer?color=dark-green)](https://pypi.org/project/stream-infer/)
[![PyPI downloads](https://img.shields.io/pypi/dm/stream-infer?color=dark-green)](https://pypi.org/project/stream-infer/)
[![GitHub license](https://img.shields.io/github/license/zaigie/stream-infer?color=orange)](https://github.com/zaigie/stream-infer/blob/main/LICENSE)
[![GitHub commit activity](https://img.shields.io/github/commit-activity/m/zaigie/stream-infer)](https://github.com/zaigie/stream-infer/graphs/commit-activity)

<p align="left">
   <strong>English</strong> | <a href="https://github.com/zaigie/stream-infer/blob/main/README.zh.md">简体中文</a>
</p>

Stream Infer is a Python library designed for streaming inference in video processing applications, enabling the integration of various image algorithms for video structuring. It supports both real-time and offline inference modes.

In short, Stream Infer is hardware- and ML-framework-agnostic, serving as a lightweight alternative to [NVIDIA DeepStream](https://developer.nvidia.com/deepstream-sdk) for cloud or edge IoT devices.

---

Often we want to analyze videos with one or more image algorithms and models, set different frame-capture logic and invocation frequencies for each of them, and ultimately obtain structured inference results.

Sometimes we even need to connect to a real-time camera or a live web stream and feed back inference results according to preset rules.

If you have these requirements, Stream Infer alone can meet all your needs, from development and debugging through to production operation.

![Flow](https://github.com/zaigie/stream-infer/blob/main/docs/img/flow.svg?raw=true)

## Features

- [x] Minimal dependencies, written purely in Python, not limited by hardware architecture
- [x] Supports all Python-based algorithm deployment frameworks
- [x] Run a video inference task in fewer than 10 lines of code
- [x] Supports both offline and real-time inference, switchable by changing a single parameter
  - Offline inference traverses the video frame by frame and runs the preset algorithms serially according to the configured logic to obtain results
  - Real-time inference separates frame fetching from inference, with a delay that grows or shrinks depending on the performance of the processing device
- [x] Offline inference supports recording video files to a local disk
- [x] Supports visual development and debugging on local/remote servers via [streamlit](https://github.com/streamlit/streamlit)
- [x] Supports parameterized dynamic calls, making inference server development easy
- [x] Modules are loosely coupled, with a clear division of responsibilities
- [ ] Recording and streaming under real-time inference

## Installation

```bash
pip install -U stream-infer
```

## Quick Start

All examples depend on YOLOv8, and running them may require additional packages:

```bash
pip install ultralytics
```

The video files used in the examples are available at [sample-videos](https://github.com/intel-iot-devkit/sample-videos)
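
For orientation, here is a minimal offline sketch assembled from the modules documented below. The YOLOv8 wrapper, the weight file, the video path, and the returned data format are illustrative assumptions; the exact APIs are shown in the Modules section and the linked examples.

```python
from ultralytics import YOLO

from stream_infer import Inference, Player
from stream_infer.algo import BaseAlgo
from stream_infer.dispatcher import DevelopDispatcher
from stream_infer.producer import OpenCVProducer

class YOLOv8Algo(BaseAlgo):
    def init(self):
        self.model = YOLO("yolov8n.pt")  # illustrative weight file

    def run(self, frames):
        # Return plain Python data, not tensors (see the BaseAlgo caution below)
        return self.model(frames[0])[0].boxes.data.tolist()

fps = 30
dispatcher = DevelopDispatcher.create(mode="offline", buffer=fps)
inference = Inference(dispatcher)
inference.load_algo(YOLOv8Algo("detect"), frame_count=1, frame_step=fps, interval=1)

producer = OpenCVProducer(1920, 1080)
player = Player(dispatcher, producer, source="./sample.mp4", show_progress=True)  # placeholder path
inference.start(player, fps=fps, mode="offline")
```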

### Offline Inference

https://github.com/zaigie/stream_infer/assets/17232619/32aef0c9-89c7-4bc8-9dd6-25035bee2074

Offline inference processes a **finite-length video or stream** at the speed the computer can handle, performing inference serially while capturing frames.

Since inference invariably takes time, depending on machine performance, the entire process's duration **may be longer or shorter than the video's length**.

Besides debugging during the development phase, offline inference can also be used for video structuring analysis in production environments where real-time processing is not essential, such as:

- Post-meeting video analysis
- Surgical video review
- ...

View and run the demo:

- General operation: [examples/offline_general.py](https://github.com/zaigie/stream_infer/blob/main/examples/offline_general.py)
- Set up handlers to process frames and inference results, and display and record them using cv2: [examples/offline_custom_process_record.py](https://github.com/zaigie/stream_infer/blob/main/examples/offline_custom_process_record.py)

> [!WARNING]
> The `offline_custom_process_record.py` example uses OpenCV GUI features (such as display windows), which require manually installing either `opencv-python` or `opencv-contrib-python`, or simply:
>
> `pip install -U 'stream-infer[desktop]'`

### Real-time Inference

Real-time inference handles a **finite/infinite-length video or stream**, playing at normal speed if finite.

In this mode, a custom-size frame track is maintained, continuously adding the current frame to the track, with the **playback process independent of the inference process**.

Since inference takes time and **playback does not wait for inference results**, there will inevitably be a delay between the inference results and the current scene.

Real-time inference is not suitable for the development phase but is often used in production environments for real-time scenarios like RTMP/RTSP/camera feeds:

- Various live broadcast scenarios
- Real-time monitoring
- Live meetings
- Clinical surgeries
- ...

View and run the demo:

- General Operation: [examples/realtime_general.py](https://github.com/zaigie/stream_infer/blob/main/examples/realtime_general.py)
- Set up the handler and manually print the inference result: [examples/realtime_custom_process.py](https://github.com/zaigie/stream_infer/blob/main/examples/realtime_custom_process.py)

### Dynamic Execution

Leveraging Python's reflection and dynamic import capabilities, Stream Infer supports configuring all the parameters required for an inference task through JSON.

This mode is mainly useful for the development of inference servers, where structured data can be passed in via REST/gRPC or other methods to initiate an inference task.
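
As a rough sketch of what such a configuration might carry, expressed here as a Python dict that a REST/gRPC endpoint could receive. Every field name below is purely illustrative, not the library's actual schema; see the demo linked next for the real structure.

```python
# Hypothetical task payload; all keys are illustrative, not the actual schema.
task_config = {
    "mode": "realtime",
    "fps": 30,
    "producer": {"type": "OpenCVProducer", "width": 1920, "height": 1080},
    "source": "rtsp://127.0.0.1/live",
    "algos": [
        {
            "module": "anywhere_algo",      # module to import dynamically
            "class": "HeadDetectionAlgo",   # BaseAlgo subclass to instantiate
            "frame_count": 1,
            "frame_step": 30,
            "interval": 1,
        }
    ],
}
```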

View and run the demo: [examples/dynamic_app.py](https://github.com/zaigie/stream_infer/blob/main/examples/dynamic_app.py)

### Visualization Development & Debugging

https://github.com/zaigie/stream_infer/assets/17232619/6cbd6858-0292-4759-8d4c-ace154567f8e

Implemented through a visual web application using [streamlit](https://github.com/streamlit/streamlit).

> The current interface text is Chinese.

This mode is primarily used for algorithm development and debugging on local/remote servers, supporting custom frame drawing and data display components.

To run this feature, install the server version:

```bash
pip install -U 'stream-infer[server]'
```

View and run the demo: [examples/streamlit_app.py](https://github.com/zaigie/stream_infer/blob/main/examples/streamlit_app.py)

```bash
streamlit run streamlit_app.py
```

## Modules

Please read the following content in conjunction with [examples](https://github.com/zaigie/stream_infer/blob/main/examples).

### BaseAlgo

Stream Infer encapsulates and abstracts every algorithm as a class with `init()` and `run()` functions; this abstraction is BaseAlgo.

Although Stream Infer provides the framework for streaming inference, the actual algorithm functionality still needs to be developed by you and must **inherit from the BaseAlgo class** so it can be encapsulated and invoked uniformly.

For instance, if you have completed a real-time head detection algorithm, the official invocation method is:

```python
# https://modelscope.cn/models/damo/cv_tinynas_head-detection_damoyolo/summary
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

model_id = 'damo/cv_tinynas_head-detection_damoyolo'
input_location = 'https://modelscope.oss-cn-beijing.aliyuncs.com/test/images/image_detection.jpg'

head_detection = pipeline(Tasks.domain_specific_object_detection, model=model_id)
result = head_detection(input_location)
print("result is : ", result)
```

To use it in Stream Infer, encapsulate it like this:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

from stream_infer.algo import BaseAlgo

class HeadDetectionAlgo(BaseAlgo):
    def init(self):
        self.model_id = 'damo/cv_tinynas_head-detection_damoyolo'
        self.head_detection = pipeline(Tasks.domain_specific_object_detection, model=self.model_id)

    def run(self, frames):
        return self.head_detection(frames[0])
```

This way, you have completed the encapsulation and can call it normally in the future.

> [!CAUTION]
> In many cases, CUDA or MPS is used to accelerate inference. However, when using such acceleration **and you need real-time inference in production after development**:
>
> The `run()` function inherited from `BaseAlgo` **must not return any tensors**! Try to manually convert them to standard Python data formats, such as dicts.
>
> Alternatively, copy the tensor to the CPU before sharing it between processes, as GPU tensors cannot be shared directly across processes during real-time inference.
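
For example, a minimal sketch of returning plain Python data from `run()`. The result keys and the conversion below are illustrative, not the pipeline's documented output schema.

```python
from stream_infer.algo import BaseAlgo
# pipeline and Tasks are imported from modelscope as in the snippet above

class HeadDetectionAlgo(BaseAlgo):
    def init(self):
        self.head_detection = pipeline(
            Tasks.domain_specific_object_detection,
            model="damo/cv_tinynas_head-detection_damoyolo",
        )

    def run(self, frames):
        result = self.head_detection(frames[0])
        # Convert tensors/arrays into plain lists and floats so the result can be
        # pickled and shared across processes during real-time inference.
        return {
            "boxes": [[float(v) for v in box] for box in result["boxes"]],
            "scores": [float(s) for s in result["scores"]],
        }
```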

### Dispatcher

Dispatcher is the central service bridging playback and inference; it caches frames, distributes them for inference, and collects inference timing and result data.

Dispatcher provides functions for adding/getting frames and times. Stream Infer has a built-in [DevelopDispatcher](https://github.com/zaigie/stream_infer/blob/main/stream_infer/dispatcher/develop.py) for manually storing and retrieving inference results.

You don't need to worry about the other functions, but to retrieve the results and conveniently print or store them elsewhere, pay attention to the `collect()` function. Its default implementation is as follows:

```python
def collect(self, position: int, algo_name: str, result):
    logger.debug(f"[{position}] collect {algo_name} result: {result}")
```

Based on this, if you wish to send the results to a REST service, or perform other operations on the data before sending, you can do so by **inheriting the Dispatcher class** and overriding its functions:

**Collect results to Redis**

```python
import redis

from stream_infer.dispatcher import Dispatcher

class RedisDispatcher(Dispatcher):
    def __init__(
        self, buffer: int, host: str = "localhost", port: int = 6379, db: int = 0
    ):
        super().__init__(buffer)
        self.conn = redis.Redis(host=host, port=port, db=db)

    def collect(self, position: int, algo_name: str, result):
        key = f"results:{algo_name}"
        # zadd members must be str/bytes/numbers, so serialize complex results first
        self.conn.zadd(key, {str(result): position})
```

**Request results to REST**

```python
from stream_infer.dispatcher import Dispatcher
import requests
...
class RequestDispatcher(Dispatcher):
    def __init__(self, buffer):
        super().__init__(buffer)
        self.sess = requests.Session()
        ...

    def collect(self, position: int, algo_name: str, result):
        req_data = {
            "position": position
            "algo_name": algo_name
            "result": result
        }
        self.sess.post("http://xxx.com/result/", json=req_data)
```

Then instantiate:

```python
# Offline inference
dispatcher = RequestDispatcher.create(mode="offline", buffer=30)
# Real-time inference
dispatcher = RedisDispatcher.create(mode="realtime", buffer=15, host="localhost", port=6379, db=1)
```

You may have noticed that the dispatcher is instantiated differently for offline and real-time inference. This is because **in real-time inference, playback and inference do not run in the same process** and must share the same dispatcher. Only the `mode` parameter changes; internally, a DispatcherManager proxy is used.

> [!CAUTION]
> For the `buffer` parameter, the default value is 30, which keeps the latest 30 frames of ndarray data in the buffer. **The larger this parameter, the more memory the program occupies!**
>
> It is recommended to set it according to the actual inference intervals, to at least the largest `frame_count * (frame_step if frame_step else 1)` among your loaded algorithms; for example, an algorithm with `frame_count=5` and `frame_step=6` needs a buffer of at least 30 frames.

### Inference

Inference is the core of the framework, implementing functions such as loading algorithms and running inference.

An Inference object requires a Dispatcher object for frame retrieval and sending inference results.

```python
from stream_infer import Inference

...

inference = Inference(dispatcher)
```

When you need to load an algorithm, for example from the [BaseAlgo](#basealgo) section:

```python
from anywhere_algo import HeadDetectionAlgo, AnyOtherAlgo

...

inference = Inference(dispatcher)
inference.load_algo(HeadDetectionAlgo("head"), frame_count=1, frame_step=fps, interval=1)
inference.load_algo(AnyOtherAlgo("other"), 5, 6, 60)
```

Here, we give HeadDetectionAlgo a name to identify the running algorithm (the name is used when collecting results in the Dispatcher and to avoid duplicates).

The parameters for loading an algorithm are the core of the framework's functionality, allowing you to freely implement frame retrieval logic (a short illustration follows this list):

- frame_count: The number of frames the algorithm fetches per call, i.e. the number of frames the `run()` function will receive.
- frame_step: Take 1 frame every `frame_step` frames, up to `frame_count` frames in total; 0 is accepted (when `frame_count` is 1, this parameter only determines the startup delay).
- interval: The call frequency in seconds; for example, `AnyOtherAlgo` above is called only once a minute to save resources when frequent results are not needed.
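
As a concrete reading of these parameters, assuming a 30 fps source (the interpretation in the comments follows the descriptions above and is meant for orientation only):

```python
fps = 30

# Every 1 second, run() receives the single most recent sampled frame.
inference.load_algo(HeadDetectionAlgo("head"), frame_count=1, frame_step=fps, interval=1)

# Every 60 seconds, run() receives 5 frames sampled one per 6 frames,
# i.e. roughly the most recent second of video at 30 fps.
inference.load_algo(AnyOtherAlgo("other"), frame_count=5, frame_step=6, interval=60)
```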

### Producer

Producer loads videos or streams using different methods, such as PyAV, OpenCV, etc., and adjusts or transforms the frames in terms of width, height, and color space, eventually returning each frame as a numpy array.

Instantiating a Producer often requires inputting the frame width, height, and color order required for inference. The default color order is the same as the BGR order returned by `cv2.imread()`.

```python
from stream_infer.producer import PyAVProducer, OpenCVProducer

producer = PyAVProducer(1920, 1080)
producer = OpenCVProducer(1920, 1080)
```

> [!NOTE]
> In most cases `OpenCVProducer` is sufficient and performs well. However, you may still need to use `PyAVProducer` (based on ffmpeg) to load some videos or streams that OpenCV cannot decode

### Player

Player takes a dispatcher, a producer, and the video/stream address, and performs playback and inference.

```python
from stream_infer import Player

...

player = Player(dispatcher, producer, source, show_progress)
```

The `show_progress` parameter defaults to True, in which case tqdm displays a progress bar. When set to False, progress is printed through the logger.

### Run

Simply run the entire script through Inference's `start()`.

```python
inference.start(player, fps=fps, position=0, mode="offline", recording_path="./processed.mp4")
```

- fps: The desired playback frame rate. **If the frame rate of the video source is higher than this number, frames are skipped to force playback at the specified rate**, saving performance to some extent.
- position: A start position in seconds for inference (only available in offline inference; how could you specify a position in real-time inference, right?).
- mode: The default is `realtime`.
- recording_path: When provided, processed frames are recorded into a new video file (offline inference only).

It should be specifically noted that during the inference process, you may need to process the frames or inference results. We provide a `process` decorator and function to facilitate this purpose.

> [!WARNING]
> In a real-time inference environment, because multiple processes are involved, the process procedure cannot be set up with a decorator.

Currently, recorded videos only support the mp4 format. With `OpenCVProducer` the files are encoded as mp4v, while with `PyAVProducer` they are encoded as h264 mp4. We recommend `PyAVProducer` as it offers a better compression rate.

For specific usage, you can refer to the example codes in [examples/offline_custom_process_record.py](https://github.com/zaigie/stream_infer/blob/main/examples/offline_custom_process_record.py) and [examples/realtime_custom_process.py](https://github.com/zaigie/stream_infer/blob/main/examples/realtime_custom_process.py).

## License

Stream Infer is licensed under the [Apache License](LICENSE).


            
