quicktake

- **Name**: quicktake
- **Version**: 0.0.16
- **Summary**: Off-the-shelf computer vision ML models. Yolov5, gender and age determination.
- **Author**: Zach Wolpe
- **Upload time**: 2023-08-25 12:14:03
- **Requires Python**: >=3.8
# QuickTake

Off-the-shelf computer vision ML models. Yolov5, gender and age determination.

The goal of this repository is to provide easy-to-use, abstracted APIs to powerful computer vision models.


## Models

Three models are currently available:

- `Object detection`
- `Gender determination`
- `Age determination`

## Model Engine

The models are built on the following engines:

- `YoloV5`: Object detection. This forms the basis of the other models. Pretrained on `COCO`. Documentation [here](https://pjreddie.com/darknet/yolo/).
- `Gender`: `ResNet18` is used as the model's backbone. Transfer learning is applied to model gender. The additional gender training was done on the [gender classification dataset](https://www.kaggle.com/datasets/cashutosh/gender-classification-dataset), using code adapted from [here](https://github.com/ndb796/Face-Gender-Classification-PyTorch/blob/main/Face_Gender_Classification_using_Transfer_Learning_with_ResNet18.ipynb).

- `Age`: The age model is an implementation of the `SSR-Net` paper: [SSR-Net: A Compact Soft Stagewise Regression Network for Age Estimation](https://www.ijcai.org/proceedings/2018/0150.pdf). The `PyTorch` model was largely derived from [oukohou](https://github.com/oukohou/SSR_Net_Pytorch/blob/master/inference_images.py).

## Getting Started

Install the package with pip:

```bash
pip install quicktake
```


## Usage

Build an instance of the class:

```python
from quicktake import QuickTake

qt = QuickTake()
```

#### Image Input

Each model is designed to handle three types of input:

- `raw pixels (torch.Tensor)`: raw pixels of a single image. Used when streaming video input.
- `image path (str)`: path to an image. Used when processing a single image.
- `image directory (str)`: path to a directory of images. Used when processing a directory of images.
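The three input types above can be normalised into a single stream of frames. The helper below is a sketch of how such dispatch might work, not quicktake's actual implementation; `resolve_inputs` is a hypothetical name:

```python
import os

def resolve_inputs(source):
    """Yield one or more image sources from a tensor, file path, or directory."""
    if not isinstance(source, str):
        # raw pixels (e.g. a torch.Tensor frame from a video stream)
        yield source
    elif os.path.isdir(source):
        # directory of images: yield each contained file path
        for name in sorted(os.listdir(source)):
            yield os.path.join(source, name)
    else:
        # single image path
        yield source
```

A caller can then treat every input uniformly: `for frame in resolve_inputs(source): ...`.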

### Expected Use

`Gender` and `age` determination models are trained on faces. They work fine on a larger image; however, they will fail to make multiple predictions when multiple faces appear in a single image.

The API is currently designed to chain models:

1. `yolo` is used to identify objects.
2. `IF` a person is detected, the `gender` and `age` models are used to make predictions.

This is neatly bundled in the `QuickTake.yolo_loop()` method.
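The two-stage chain can be sketched as follows. The stub detectors below stand in for the real YOLOv5, age, and gender models; all names and return values are illustrative, not quicktake's API:

```python
def detect_objects(frame):
    # Stand-in for YOLOv5: returns (label, box, confidence) tuples.
    return [("person", (10, 10, 50, 80), 0.91), ("dog", (60, 20, 90, 70), 0.80)]

def predict_age(crop):
    return 32          # stand-in for the SSR-Net age model

def predict_gender(crop):
    return "male"      # stand-in for the ResNet18 gender model

def yolo_chain(frame):
    """Run detection first; run age/gender only on detected persons."""
    for label, box, conf in detect_objects(frame):
        age = gender = None
        if label == "person":
            age, gender = predict_age(box), predict_gender(box)
        yield label, box, conf, age, gender
```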

#### Getting Started

Launch a webcam stream:

```python
QL = QuickTake()
QL.launchStream()
```

_Note_: Each model returns the results `results_` as well as the runtime `time_`.
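A `(results_, time_)` pair like this can be produced with a simple timing wrapper. The decorator below is a minimal sketch of the idea, not quicktake's actual code:

```python
import time
from functools import wraps

def timed(fn):
    """Return the function's result alongside its wall-clock runtime in seconds."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        results_ = fn(*args, **kwargs)
        time_ = time.perf_counter() - start
        return results_, time_
    return wrapper

@timed
def predict(frame):
    # stand-in for any of the three models
    return "person"
```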

Run on a single frame:

```python
from IPython.display import display
from PIL import Image
import cv2

# example images
img = './data/random/dave.png'

# to avoid distractions
import warnings
warnings.filterwarnings('ignore')

# init module
from quicktake import QuickTake
qt = QuickTake()

# extract frame from raw image path
frame = qt.read_image(img)
```

We can now fit `qt.age(<frame>)` or `qt.gender(<frame>)` on the frame. Alternatively, we can cycle through the objects detected by `yolo` and, if a person is detected, fit `qt.age()` and `qt.gender()`:

```python
# iterate over yolo detections; age_/gender_ are populated when a person is found
for _label, x0, y0, x1, y1, colour, thickness, results, res_df, age_, gender_ in qt.yolo_loop(frame):
    _label = QuickTake.generate_yolo_label(_label)
    QuickTake.add_block_to_image(frame, _label, x0, y0, x1, y1, colour=colour, thickness=thickness)
```

The result is an image with bounding boxes, labels, confidence (of the yolo prediction), and, if a person is detected, age and gender.
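Composing the overlay text drawn above each box can be imagined as a small formatting step. The function below is illustrative only; the real output format of `QuickTake.generate_yolo_label` may differ:

```python
def compose_label(label, confidence, age=None, gender=None):
    """Build the annotation string drawn above a bounding box."""
    text = f"{label} {confidence:.2f}"
    if age is not None and gender is not None:
        # age/gender are only available when the detection is a person
        text += f" | age: {age} | gender: {gender}"
    return text
```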

![Example output: a person is detected and thus age, gender are estimated](https://github.com/ZachWolpe/QuickTake/blob/main/data/output_frames/result_dav_2.png)

The staged output is also useful:

![Example of the `YoloV5` detection boundaries](https://github.com/ZachWolpe/QuickTake/blob/main/data/output_frames/result_ct_2.png)


For more comprehensive examples, see the _example_ directory.


## Future

I have many more models, deployment methods, and applications in the pipeline.

If you wish to contribute, please email me at _zachcolinwolpe@gmail.com_.

            
