smartoscreid


Name: smartoscreid
Version: 0.0.2
Home page: https://git02.smartosc.com/division-1/computer-vision-project/people_counting
Summary: People counting project
Upload time: 2024-08-22 08:37:45
Maintainer: None
Docs URL: None
Author: CLOUD Team
Requires Python: None
License: MIT
Keywords: person re-identification, deep learning, computer vision
            # <div align="center">People Counting Project</div>

## Overview

The People Counting Project is designed to detect and count the number of people entering and exiting a specified area using computer vision techniques. This project can be used in various settings such as retail stores, offices, and events to monitor foot traffic and gather valuable data.

## Features

- **People Detection**: Detects people in a video feed or image using a deep learning model.
- **Bidirectional Counting**: Tracks and counts people entering and exiting a specific area (a conceptual sketch of line-crossing counting follows this list).
- **Appearance Time Estimation**: Tracks and estimates how long each person appears in the video.
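
The counting is typically done with a virtual line (or region) that tracked detections cross. The snippet below is a conceptual sketch only; every name and number in it is made up for illustration and is not the smartoscreid internals.

```python
# Conceptual sketch of bidirectional line-crossing counting.
# All names and numbers below are illustrative, NOT the smartoscreid internals.

LINE_Y = 300  # hypothetical horizontal counting line (pixel row)

def crossed(prev_y, curr_y):
    """Return 'enter', 'exit', or None depending on how a track crossed the line."""
    if prev_y < LINE_Y <= curr_y:
        return "enter"   # moved downward across the line
    if prev_y >= LINE_Y > curr_y:
        return "exit"    # moved upward across the line
    return None

# Fake per-frame track centers: {track_id: center_y}, one dict per frame.
frames = [{1: 280, 2: 320}, {1: 305, 2: 310}, {1: 340, 2: 290}]

last_y, entered, exited = {}, set(), set()
for detections in frames:
    for track_id, y in detections.items():
        if track_id in last_y:
            event = crossed(last_y[track_id], y)
            if event == "enter":
                entered.add(track_id)
            elif event == "exit":
                exited.add(track_id)
        last_y[track_id] = y

print(f"entered={len(entered)}, exited={len(exited)}")  # entered=1, exited=1
```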

## Installation

### Prerequisites

- Python 3.10 or later
- pip (or pip3)
- A GPU (optional but recommended for faster processing; see the quick environment check below)
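
A quick sanity check of the environment can be run before or after installing the dependencies. The snippet assumes PyTorch is pulled in by `requirements.txt` and degrades gracefully if it is not installed yet.

```python
# Optional environment check; torch is assumed to come from requirements.txt.
import sys

print("Python:", sys.version.split()[0])  # expect 3.10 or later

try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed yet - install the dependencies first.")
```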


### Steps

1. **Clone the repository:**

    ```bash
    git clone https://git02.smartosc.com/division-1/computer-vision-project/people_counting
    ```

2. **Install dependencies:**

    ```bash
    pip install -r requirements.txt
    ```

3. **Download the pre-trained model (if applicable):**
    The model can be downloaded from the Models section below; YOLOv8 weights are also downloaded automatically from the Ultralytics release on first use.
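
    A minimal way to trigger that automatic download, assuming the `ultralytics` package is installed via the dependencies:

    ```python
    # Assumes the ultralytics package was installed via requirements.txt.
    from ultralytics import YOLO

    # First use downloads yolov8m.pt from the Ultralytics assets release
    # if it is not already present locally.
    model = YOLO("yolov8m.pt")
    print(model.names[0])  # COCO class 0 is "person"
    ```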



## Usage

```python 
from smartoscreid import PeopleCounting

# Your model path
model_path = 'smartoscreid/model/yolov8m.pt'

pc = PeopleCounting(model_path)

# List of input videos (one per camera)
videos = ["input/Single1.mp4"]

pc.run(videos)
```

Results will be generated in the `output` folder.
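
For multiple cameras, pass one video path per feed in the list; the file names below are placeholders:

```python
from smartoscreid import PeopleCounting

pc = PeopleCounting("smartoscreid/model/yolov8m.pt")

# Hypothetical file names for two camera feeds.
videos = ["input/camera1.mp4", "input/camera2.mp4"]
pc.run(videos)
```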

## <div align="center">Models</div>

This project uses YOLOv8 in [Detect](https://docs.ultralytics.com/tasks/detect) and [Track](https://docs.ultralytics.com/modes/track) modes; Track mode is available for all Detect, Segment, and Pose models.

All [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.

<details open><summary>Detection (COCO)</summary>

See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/detect/coco/), which include 80 pre-trained classes.

| Model                                                                                | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt) | 640                   | 37.3                 | 80.4                           | 0.99                                | 3.2                | 8.7               |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt) | 640                   | 44.9                 | 128.4                          | 1.20                                | 11.2               | 28.6              |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m.pt) | 640                   | 50.2                 | 234.7                          | 1.83                                | 25.9               | 78.9              |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l.pt) | 640                   | 52.9                 | 375.2                          | 2.39                                | 43.7               | 165.2             |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt) | 640                   | 53.9                 | 479.1                          | 3.53                                | 68.2               | 257.8             |


</details>
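
To experiment with the detector outside of `PeopleCounting`, the Track mode mentioned above can be run directly on the sample video from the Usage section. A minimal sketch, assuming `ultralytics` is installed:

```python
from ultralytics import YOLO

model = YOLO("yolov8m.pt")  # downloaded automatically on first use

# Track only the COCO "person" class (id 0) in the sample video.
results = model.track(source="input/Single1.mp4", classes=[0], persist=True)

for r in results:
    if r.boxes.id is not None:
        print(r.boxes.id.tolist())  # tracker-assigned person IDs for this frame
```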


## <div align="center">Torchreid</div>
Torchreid is a library for deep-learning person re-identification, written in [PyTorch](https://pytorch.org/) and developed for the ICCV'19 paper [Omni-Scale Feature Learning for Person Re-Identification](https://arxiv.org/abs/1905.00953).

It features:

- Multi-GPU training
- Support for both image- and video-reid
- End-to-end training and evaluation
- Incredibly easy preparation of reid datasets
- Multi-dataset training
- Cross-dataset evaluation
- Standard protocol used by most research papers
- Highly extensible (easy to add models, datasets, training methods, etc.)
- Implementations of state-of-the-art deep reid models
- Access to pretrained reid models
- Advanced training techniques
- Visualization tools (tensorboard, ranks, etc.)


Code: https://github.com/KaiyangZhou/deep-person-reid.

Documentation: https://kaiyangzhou.github.io/deep-person-reid/.

How-to instructions: https://kaiyangzhou.github.io/deep-person-reid/user_guide.

Model zoo: https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.

Tech report: https://arxiv.org/abs/1910.10093.

You can find some research projects that are built on top of Torchreid [here](https://github.com/KaiyangZhou/deep-person-reid/tree/master/projects).
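
As a rough illustration of how Torchreid supplies appearance features for re-identification, its documented `FeatureExtractor` can embed person crops. The image paths below are placeholders, and `torchreid` must be installed separately per its documentation:

```python
from torchreid.utils import FeatureExtractor

# Build an OSNet extractor (pass model_path= to load weights from the model zoo).
extractor = FeatureExtractor(model_name="osnet_x1_0", device="cpu")

# Placeholder crop paths; in practice these would be person crops from the detector.
features = extractor(["person_crop_1.jpg", "person_crop_2.jpg"])
print(features.shape)  # (2, 512) feature vectors for osnet_x1_0
```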

## <div align="center">License</div>
This project is licensed under the MIT License - see the LICENSE file for details.

            
