image-annotation

Name: image-annotation
Version: 0.3.3
Home page: https://github.com/hoc1190/image_annotation
Summary: streamlit components for image annotation, with customization
Upload time: 2024-02-13 05:51:19
Author: hirune924
Requires Python: >=3.6
Keywords: python, streamlit, react, javascript
# Streamlit Image Annotation

Streamlit component for image annotation.

[![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://st-image-annotation.streamlit.app/)
[![PyPI](https://img.shields.io/pypi/v/streamlit-image-annotation)](https://pypi.org/project/streamlit-image-annotation/)
![](./image/demo.gif)
# Features
* Easily launch an image annotation tool from a Streamlit app.
* Customize the pre- and post-processing to build your preferred annotation workflow.
* Currently supports classification, detection, and point detection tasks.
* Simple UI that is easy to navigate.

# Install

```sh
pip install streamlit-image-annotation
```

# Example Usage
For other use cases, see the example folder.

```python
from glob import glob
import pandas as pd
import streamlit as st
from streamlit_image_annotation import classification

label_list = ['deer', 'human', 'dog', 'penguin', 'flamingo', 'teddy bear']
image_path_list = glob('image/*.jpg')
if 'result_df' not in st.session_state:
    st.session_state['result_df'] = pd.DataFrame.from_dict({'image': image_path_list, 'label': [0]*len(image_path_list)}).copy()

num_page = st.slider('page', 0, len(image_path_list)-1, 0)
label = classification(image_path_list[num_page], 
                        label_list=label_list, 
                        default_label_index=int(st.session_state['result_df'].loc[num_page, 'label']))

# The component returns the label name; store its index only when it has changed.
if label is not None and label_list.index(label['label']) != st.session_state['result_df'].loc[num_page, 'label']:
    st.session_state['result_df'].loc[num_page, 'label'] = label_list.index(label['label'])
st.table(st.session_state['result_df'])
```
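
In this example, the running results are kept in `st.session_state` so they survive Streamlit's reruns as the page slider changes, and `default_label_index` pre-selects the stored label whenever an already-annotated image is revisited.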

# API

```python
classification(
    image_path: str,
    label_list: List[str],
    default_label_index: Optional[int] = None,
    height: int = 512,
    width: int = 512,
    key: Optional[str] = None
)
```

- **image_path**: Image path.
- **label_list**: List of label candidates.
- **default_label_index**: Initial label index.
- **height**: The maximum height of the displayed image.
- **width**: The maximum width of the displayed image.
- **key**: An optional string to use as the unique key for the widget. Assign a key so the component is not remounted every time the script is rerun.

- **Component Value**: {'label': label_name}

Example: [example code](example/classification.py)
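
For reference, a smaller self-contained sketch that exercises the remaining `classification` parameters; the image path, labels, and widget key below are placeholders for illustration, not files or names shipped with the package.

```python
import streamlit as st
from streamlit_image_annotation import classification

label_list = ['cat', 'dog']  # placeholder labels

value = classification(
    image_path='image/sample.jpg',   # placeholder path
    label_list=label_list,
    default_label_index=0,           # pre-select the first label
    height=512,                      # maximum displayed height
    width=512,                       # maximum displayed width
    key='sample_classification',     # stable key so the widget is not remounted on rerun
)

# The component value is {'label': label_name} once a label is selected.
if value is not None:
    st.write(value['label'])
```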

```python
detection(
    image_path: str,
    label_list: List[str],
    bboxes: Optional[List[List[int, int, int, int]]] = None,
    labels: Optional[List[int]] = None,
    height: int = 512,
    width: int = 512,
    line_width: int = 5,
    key: Optional[str] = None
)
```

- **image_path**: Image path.
- **label_list**: List of label candidates.
- **bboxes**: Initial list of bounding boxes, where each bbox is in the format [x, y, w, h].
- **labels**: List of label indices for each initial bbox.
- **height**: The maximum height of the displayed image.
- **width**: The maximum width of the displayed image.
- **line_width**: The stroke width of the bbox.
- **key**: An optional string to use as the unique key for the widget. Assign a key so the component is not remounted every time the script is rerun.

- **Component Value**: \[{'bbox': [x, y, width, height], 'label_id': label_id, 'label': label_name}, ...\]

Example: [example code](example/detection.py)
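
For completeness, here is a minimal sketch of driving `detection` from `st.session_state`, in the same spirit as the classification example above. It only uses the parameters and return format documented in this section; the image paths and the session-state layout are illustrative assumptions.

```python
from glob import glob

import streamlit as st
from streamlit_image_annotation import detection

label_list = ['deer', 'human', 'dog', 'penguin', 'flamingo', 'teddy bear']
image_path_list = glob('image/*.jpg')

# Keep per-image annotations in session state so they survive reruns.
if 'bbox_results' not in st.session_state:
    st.session_state['bbox_results'] = {
        path: {'bboxes': [], 'labels': []} for path in image_path_list
    }

num_page = st.slider('page', 0, len(image_path_list) - 1, 0)
target = image_path_list[num_page]
previous = st.session_state['bbox_results'][target]

new_labels = detection(
    image_path=target,
    bboxes=previous['bboxes'],   # initial boxes, each in [x, y, w, h] format
    labels=previous['labels'],   # label index for each initial box
    label_list=label_list,
    key=target,                  # per-image key so widget state is not shared across images
)

# The component value is a list of dicts as documented above.
if new_labels is not None:
    st.session_state['bbox_results'][target] = {
        'bboxes': [v['bbox'] for v in new_labels],
        'labels': [v['label_id'] for v in new_labels],
    }
st.json(st.session_state['bbox_results'][target])
```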

```python
pointdet(
    image_path: str,
    label_list: List[str],
    points: Optional[List[List[int, int]]] = None,
    labels: Optional[List[int]] = None,
    height: int = 512,
    width: int = 512,
    point_width: int = 3,
    key: Optional[str] = None
)
```

- **image_path**: Image path.
- **label_list**: List of label candidates.
- **points**: Initial list of points, where each point is in the format [x, y].
- **labels**: List of label indices for each initial point.
- **height**: The maximum height of the displayed image.
- **width**: The maximum width of the displayed image.
- **point_width**: The width of each point marker.
- **key**: An optional string to use as the unique key for the widget. Assign a key so the component is not remounted every time the script is rerun.

- **Component Value**: \[{'bbox': [x, y], 'label_id': label_id, 'label': label_name}, ...\]

Example: [example code](example/pointdet.py)
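
Likewise, a minimal sketch for `pointdet` using only the parameters documented above; the initial points, labels, and image paths are made-up values for illustration.

```python
from glob import glob

import streamlit as st
from streamlit_image_annotation import pointdet

label_list = ['head', 'tail']  # placeholder labels
image_path_list = glob('image/*.jpg')

num_page = st.slider('page', 0, len(image_path_list) - 1, 0)

value = pointdet(
    image_path=image_path_list[num_page],
    label_list=label_list,
    points=[[100, 100], [150, 200]],  # initial points, each in [x, y] format
    labels=[0, 1],                    # label index for each initial point
    point_width=3,
    key=image_path_list[num_page],    # per-image key
)

# Inspect the returned annotations; the dict keys follow the
# Component Value description above.
if value is not None:
    st.json(value)
```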

# Future Work
* Addition of a component for segmentation tasks.

# Development
## setup
```bash
git clone https://github.com/hirune924/Streamlit-Image-Annotation.git
cd Streamlit-Image-Annotation/
export PYTHONPATH=$PWD
```
and set `IS_RELEASE = False` in `Streamlit-Image-Annotation/__init__.py`.


## start frontend
```bash
cd Streamlit-Image-Annotation/streamlit_image_annotation/Detection
yarn
yarn start
```

## start streamlit
```bash
cd Streamlit-Image-Annotation/
streamlit run streamlit_image_annotation/Detection/__init__.py
```

## build
```bash
cd Streamlit-Image-Annotation/Classification/frontend
yarn build
cd ../../Detection/frontend
yarn build
cd ../../Point/frontend
yarn build
```
and set `IS_RELEASE = True` in `Streamlit-Image-Annotation/__init__.py`.

## make wheel
```bash
python setup.py sdist bdist_wheel
```
## upload
Upload to TestPyPI first and verify the install:
```bash
python3 -m twine upload --repository testpypi dist/*
python -m pip install --index-url https://test.pypi.org/simple/ --no-deps streamlit-image-annotation
```
Then upload to PyPI:
```bash
twine upload dist/*
```

            
