**PyPI metadata: ArtificialVision**

- Name: ArtificialVision
- Version: 0.1.2
- Summary: Artificial Vision Library
- Upload time: 2024-02-25 08:28:08
- Requires Python: >=3.9
- Keywords: hwk060023, vision, ai, ml, dl, cv, artificialvision, artificialintelligence, machinelearning, deeplearning, computervision, python, pypi, package, tutorial
- Requirements: none recorded
# ArtificialVision

<img src="https://lh3.googleusercontent.com/u/0/drive-viewer/AEYmBYSOMvdeaLCLq2djzo1mgZIEd6-Qyll8boR6V7Z1VHYkH2IFJzg8geBFdcxis-KIyVdoawhJTa-mWCLmfImUXQIoCJVv8w=w1762-h1610" width=550> <br/>

[![PyPI version](https://badge.fury.io/py/artificialvision.svg)](https://badge.fury.io/py/artificialvision)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
![GitHub pull requests](https://img.shields.io/github/issues-pr/hwk06023/ArtificialVision)
![GitHub contributors](https://img.shields.io/github/contributors/hwk06023/ArtificialVision)
![GitHub stars](https://img.shields.io/github/stars/hwk06023/ArtificialVision?style=social)

<br/>

**❗️ This package is still under development and has not been officially released ❗️**  <br/>

**Once the version reaches 1.0.0, this notice will be removed and the package will become available.** <br/>

<br/>

## Installation


```bash
pip install artificialvision
```
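
To verify the installation, a quick import check can help (a minimal sketch; the version is read from the installed distribution's metadata rather than assuming the package exports `__version__`):

```python
# Sanity check after installation (sketch).
# importlib.metadata reads the installed distribution's version,
# so this works even if the package itself defines no __version__.
from importlib.metadata import version

import artificialvision  # verifies the import succeeds

print(version("ArtificialVision"))  # e.g. 0.1.2
```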

<br/>

## What is ArtificialVision?

ArtificialVision is a package that makes it easy to obtain results from a variety of machine learning and computer vision techniques.
Its aim is to improve quality and productivity by serving as a convenient tool in research and development experiments. <br/>

This version supports inference only (obtaining the various results). Support for training and fine-tuning will be added in the future. <br/>

<br/>

## Contributing to ArtificialVision (Not yet)

All contributions are welcome! <br/>

Check the [ContributeGuide.md](ContributeGuide.md) for more information. <br/>




### Contributors

<img src = "https://contrib.rocks/image?repo=hwk06023/ArtificialVision"/>

<br/> <br/>






## Methods Tutorial

### Image Classification

**example**  <br/>

```python
from artificialvision import ImgClassification
import cv2 

# Read the image
img = cv2.imread('PATH of Image file')

# Get the classification result
ImgClassification.get_result(img)
```

<br/>

**Currently, only models pretrained on ImageNet are available.** <br/>
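
Note that `cv2.imread` returns images in BGR channel order, while ImageNet-pretrained models typically expect RGB input. In case the library does not convert internally (an assumption; this behavior is not documented here), a minimal preprocessing sketch:

```python
import cv2
from artificialvision import ImgClassification

img = cv2.imread('PATH of Image file')

# cv2 loads BGR; ImageNet models usually expect RGB
# (assumption: the library does not swap channels itself).
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Most ImageNet models take 224x224 inputs; resizing up front is
# harmless if the library also resizes internally.
rgb = cv2.resize(rgb, (224, 224))

result = ImgClassification.get_result(rgb)
print(result)
```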

<br/>


### Object Detection

**example**  <br/>

```python
from artificialvision import ObjDetection
import cv2

''' Image '''
# Read the image
img = cv2.imread('PATH of Image file')

# Get the detection result with the bounding box
ObjDetection.get_result(img)

# Get the bounding box only
ObjDetection.get_result_with_box(img)

''' Video '''
# Read the video
video = cv2.VideoCapture('PATH of Video file')

# Get the detection result with the bounding box (type=1 selects video input)
ObjDetection.get_result(video, type=1)

# Get the bounding box only
ObjDetection.get_result_with_box(video, type=1)
```

**hyperparameters**  <br/>

- `type` : int, default is 0
    - 0 : Image
    - 1 : Video 

<br/> 

**Currently, only image and video inputs are supported.** <br/>
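
If `get_result_with_box` returns bounding boxes as `(x1, y1, x2, y2)` pixel coordinates (an assumption; the return format is not documented here), the boxes can be drawn with plain OpenCV:

```python
import cv2
from artificialvision import ObjDetection

img = cv2.imread('PATH of Image file')

# Assumption: an iterable of (x1, y1, x2, y2) pixel coordinates;
# adapt the unpacking to the actual return type.
boxes = ObjDetection.get_result_with_box(img)

for (x1, y1, x2, y2) in boxes:
    # Draw each box in green with a 2-pixel outline.
    cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)

cv2.imwrite('detections.jpg', img)
```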

<br/>


### Segmentation

**example**  <br/>

```python
from artificialvision import Segmentation
import cv2

''' Image '''
# Read the image
img = cv2.imread('PATH of Image file')

# Get the segmentation result
Segmentation.get_result(img)

# Get only the segment map
Segmentation.get_segment_map(img)

''' Video '''
# Read the video
video = cv2.VideoCapture('PATH of Video file')

# Get the segmentation result (type=1 selects video input)
Segmentation.get_result(video, type=1)

# Get only the segment map
Segmentation.get_segment_map(video, type=1)

''' Webcam (real-time) '''
# Start the webcam (recording)
# When finished, press 'q' to stop and get the result
Segmentation.get_result(type=2)
```

**hyperparameters**  <br/>

- `type` : int, default is 0
    - 0 : Image
    - 1 : Video 
    - 2 : Webcam (real-time)

- `category` : int, default is 0
    - segmentation category
    - 0 : Semantic Segmentation
    - 1 : Instance Segmentation
    - 2 : Panoptic Segmentation

- `detail` : int, default is 0
    - segmentation detail
    - 0 : Segmentation Result (Overlayed Image)
    - 1 : Segmentation Map

- `get_poligon` : bool, default is False
    - If True, get the polygon points of the segmentation result. (Only for instance segmentation)

<br/> 

**Currently, image, video, and webcam (real-time) inputs are supported.** <br/>
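
If `get_segment_map` returns a per-pixel class map with the same height and width as the input (an assumption; the return format is not documented here), a colored overlay can be produced with OpenCV alone:

```python
import cv2
import numpy as np
from artificialvision import Segmentation

img = cv2.imread('PATH of Image file')

# Assumption: an HxW integer array of class ids, aligned with img.
seg_map = Segmentation.get_segment_map(img)

# Normalize class ids to 0-255, colorize, and blend with the image.
scaled = (seg_map.astype(np.float32) / max(int(seg_map.max()), 1) * 255).astype(np.uint8)
color = cv2.applyColorMap(scaled, cv2.COLORMAP_JET)
overlay = cv2.addWeighted(img, 0.6, color, 0.4, 0)
cv2.imwrite('segmentation_overlay.jpg', overlay)
```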

<br/>


### Image Matching

**example**  <br/>
 
```python
from artificialvision import ImgMatching
import cv2 

''' Image '''
# Read the images
img1 = cv2.imread('PATH of Image1 file')
img2 = cv2.imread('PATH of Image2 file')

# Get the matching score
ImgMatching.get_matching_score(img1, img2)

# Get the matching result
ImgMatching.get_matching_result(img1, img2)


''' Video '''
# Read the videos
video1 = cv2.VideoCapture('PATH of Video1 file')
video2 = cv2.VideoCapture('PATH of Video2 file')

# Get the matching score
ImgMatching.get_matching_score(video1, video2, type=1)

# Get the matching result
ImgMatching.get_matching_result(video1, video2, type=1)

''' Mixed '''
# Collect the images for matching (illustrative; img3, ... stand for further images)
img_list = [img1, img2, img3, ...]

# Get the matching score
ImgMatching.get_matching_score(img_list, video1, type=2)

# Get the matching result
ImgMatching.get_matching_result(img_list, video1, type=2)

''' Webcam (real-time) '''
# Start the webcam (recording)
# When finished, press 'q' to stop and get the result
ImgMatching.get_matching_result(img_list, type=3)
```

**hyperparameters**  <br/>

- `type` : int, default is 0
    - 0 : Image
    - 1 : Video 
    - 2 : Mixed
    - 3 : Webcam (real-time)
- `threshold` : float, default is 0.5
    - Threshold for the matching score. If the matching score is below the threshold, the pair is considered a match. Range is 0.0 ~ 1.0. Values within ±0.1 of the default are recommended (see the sketch below).
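
A minimal sketch of applying the threshold by hand, assuming `get_matching_score` returns a float in the 0.0 ~ 1.0 range described above:

```python
from artificialvision import ImgMatching
import cv2

img1 = cv2.imread('PATH of Image1 file')
img2 = cv2.imread('PATH of Image2 file')

score = ImgMatching.get_matching_score(img1, img2)

# Per the semantics above: a score below the threshold counts as a match.
THRESHOLD = 0.5
if score < THRESHOLD:
    print(f'Match (score={score:.3f})')
else:
    print(f'No match (score={score:.3f})')
```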

<br/>

**Currently, only image and video matching are supported.** <br/>

<br/> 

## Format

### Inference Data Format

| Inference data format                  | Type in python                                    | Usage Example                  |
| -------------------------------------- | ------------------------------------------------- | ------------------------------ |
| [Path of the data](#Methods-Tutorial)  | ```str```                                         | '/Path/to/data/file.extension' |
| [List](#Methods-Tutorial)              | ```list```                                        |                                |
| [Numpy Array](#Methods-Tutorial)       | ```numpy.ndarray```                               |                                |
| [Pytorch Tensor](#Methods-Tutorial)    | ```torch.Tensor```                                |                                |
| [Tensorflow Tensor](#Methods-Tutorial) | ```tensorflow.python.framework.ops.EagerTensor``` |                                |
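
Per the table, the same call should accept any of these formats (a sketch; `torch` and `tensorflow` are optional and only needed for their respective tensor types):

```python
import cv2
import torch
import tensorflow as tf
from artificialvision import ImgClassification

# Path of the data (str)
ImgClassification.get_result('/Path/to/data/file.extension')

# Numpy array (e.g. as returned by cv2.imread)
img = cv2.imread('/Path/to/data/file.extension')
ImgClassification.get_result(img)

# List of inputs
ImgClassification.get_result([img, img])

# PyTorch tensor
ImgClassification.get_result(torch.from_numpy(img))

# TensorFlow tensor (EagerTensor)
ImgClassification.get_result(tf.convert_to_tensor(img))
```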

### Inference Model Format

| Inference model format                                                     | `export.py --include` | Model                     |
|:---------------------------------------------------------------------------|:----------------------|:--------------------------|
| [PyTorch](https://pytorch.org/)                                            | -                     | `model.pt`              |
| [TorchScript](https://pytorch.org/docs/stable/jit.html)                    | `torchscript`         | `model.torchscript`     |
| [ONNX](https://onnx.ai/)                                                   | `onnx`                | `model.onnx`            |
| [OpenVINO](https://docs.openvino.ai/latest/index.html)                     | `openvino`            | `model_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt)                          | `engine`              | `model.engine`          |
| [CoreML](https://github.com/apple/coremltools)                             | `coreml`              | `model.mlmodel`         |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model)      | `saved_model`         | `model_saved_model/`    |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb`                  | `model.pb`              |
| [TensorFlow Lite](https://www.tensorflow.org/lite)                         | `tflite`              | `model.tflite`          |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/)         | `edgetpu`             | `model_edgetpu.tflite`  |
| [TensorFlow.js](https://www.tensorflow.org/js)                             | `tfjs`                | `model_web_model/`      |
| [PaddlePaddle](https://github.com/PaddlePaddle)                            | `paddle`              | `model_paddle_model/`   |
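
As an illustration (these loaders belong to the respective runtimes, not the ArtificialVision API), two of the formats above can be consumed directly:

```python
import torch
import onnxruntime as ort

# TorchScript: a self-contained serialized model, loadable without
# the original model class definition.
ts_model = torch.jit.load('model.torchscript')
ts_model.eval()

# ONNX: run through ONNX Runtime.
session = ort.InferenceSession('model.onnx')
input_name = session.get_inputs()[0].name
```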


------

<br/>

**For more information, check the [Official Docs (not yet available)]()**

<br/>




            
