xllabelme

- Name: xllabelme
- Version: 5.1.7
- Home page: https://github.com/XLPRUtils/xllabelme
- Summary: Image Polygonal Annotation with Python
- Upload time: 2023-05-17 06:52:28
- Author: code4101, Kentaro Wada
- License: GPLv3
- Keywords: image annotation, machine learning
- Requirements: none recorded
<h1 align="center">
  <img src="https://github.com/wkentaro/labelme/blob/main/labelme/icons/icon.png?raw=true"><br/>labelme
</h1>

<h4 align="center">
  Image Polygonal Annotation with Python
</h4>

<div align="center">
  <a href="https://pypi.python.org/pypi/labelme"><img src="https://img.shields.io/pypi/v/labelme.svg"></a>
  <a href="https://pypi.org/project/labelme"><img src="https://img.shields.io/pypi/pyversions/labelme.svg"></a>
  <a href="https://github.com/wkentaro/labelme/actions"><img src="https://github.com/wkentaro/labelme/workflows/ci/badge.svg?branch=main&event=push"></a>
  <a href="https://hub.docker.com/r/wkentaro/labelme"><img src="https://img.shields.io/docker/cloud/build/wkentaro/labelme"></a>
</div>

<div align="center">
  <a href="https://github.com/wkentaro/labelme#installation"><b>Installation</b></a> |
  <a href="https://github.com/wkentaro/labelme#usage"><b>Usage</b></a> |
  <a href="https://github.com/wkentaro/labelme/tree/main/examples/tutorial#tutorial-single-image-example"><b>Tutorial</b></a> |
  <a href="https://github.com/wkentaro/labelme/tree/main/examples"><b>Examples</b></a> |
  <a href="https://www.youtube.com/playlist?list=PLI6LvFw0iflh3o33YYnVIfOpaO0hc5Dzw"><b>Youtube FAQ</b></a>
</div>

<br/>

<div align="center">
  <img src="https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/.readme/annotation.jpg?raw=true" width="70%">
</div>

## Description

Labelme is a graphical image annotation tool inspired by <http://labelme.csail.mit.edu>.  
It is written in Python and uses Qt for its graphical interface.

<img src="https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/JPEGImages/2011_000006.jpg?raw=true" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClassPNG/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClassVisualization/2011_000006.jpg" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObjectPNG/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObjectVisualization/2011_000006.jpg" width="19%" />  
<i>VOC dataset example of instance segmentation.</i>

<img src="https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation/.readme/annotation.jpg?raw=true" width="30%" /> <img src="examples/bbox_detection/.readme/annotation.jpg" width="30%" /> <img src="examples/classification/.readme/annotation_cat.jpg" width="35%" />  
<i>Other examples (semantic segmentation, bbox detection, and classification).</i>

<img src="https://user-images.githubusercontent.com/4310419/47907116-85667800-de82-11e8-83d0-b9f4eb33268f.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/4310419/47922172-57972880-deae-11e8-84f8-e4324a7c856a.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/14256482/46932075-92145f00-d080-11e8-8d09-2162070ae57c.png" width="32%" />  
<i>Various primitives (polygon, rectangle, circle, line, and point).</i>


## Features

- [x] Image annotation for polygon, rectangle, circle, line and point. ([tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial))
- [x] Image flag annotation for classification and cleaning. ([#166](https://github.com/wkentaro/labelme/pull/166))
- [x] Video annotation. ([video annotation](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation?raw=true))
- [x] GUI customization (predefined labels / flags, auto-saving, label validation, etc). ([#144](https://github.com/wkentaro/labelme/pull/144))
- [x] Exporting VOC-format dataset for semantic/instance segmentation. ([semantic segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true), [instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true))
- [x] Exporting COCO-format dataset for instance segmentation. ([instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true))



## Requirements

- Ubuntu / macOS / Windows
- Python3
- [PyQt5 / PySide2](http://www.riverbankcomputing.co.uk/software/pyqt/intro)


## Installation

There are several options:

- Platform-agnostic installation: [Anaconda](https://github.com/wkentaro/labelme#anaconda), [Docker](https://github.com/wkentaro/labelme#docker)
- Platform-specific installation: [Ubuntu](https://github.com/wkentaro/labelme#ubuntu), [macOS](https://github.com/wkentaro/labelme#macos), [Windows](https://github.com/wkentaro/labelme#windows)
- Pre-built binaries from [the release section](https://github.com/wkentaro/labelme/releases)

### Anaconda

Install [Anaconda](https://www.continuum.io/downloads), then run:

```bash
# python3
conda create --name=labelme python=3
source activate labelme
# conda install -c conda-forge pyside2
# conda install pyqt
# pip install pyqt5  # pyqt5 can be installed via pip on python3
pip install labelme
# or you can install everything by conda command
# conda install labelme -c conda-forge
```

### Docker

Install [Docker](https://www.docker.com), then run:

```bash
# on macOS
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\" &
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=docker.for.mac.host.internal:0 -v $(pwd):/root/workdir wkentaro/labelme

# on Linux
xhost +
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=:0 -v $(pwd):/root/workdir wkentaro/labelme
```

### Ubuntu

```bash
sudo apt-get install labelme

# or
sudo pip3 install labelme

# or install standalone executable from:
# https://github.com/wkentaro/labelme/releases
```

### macOS

```bash
brew install pyqt  # maybe pyqt5
pip install labelme

# or
brew install wkentaro/labelme/labelme  # command line interface
# brew install --cask wkentaro/labelme/labelme  # app

# or install standalone executable/app from:
# https://github.com/wkentaro/labelme/releases
```

### Windows

Install [Anaconda](https://www.continuum.io/downloads), then in an Anaconda Prompt run:

```bash
conda create --name=labelme python=3
conda activate labelme
pip install labelme

# or install standalone executable/app from:
# https://github.com/wkentaro/labelme/releases
```


## Usage

Run `labelme --help` for details.  
The annotations are saved as a [JSON](http://www.json.org/) file.

```bash
labelme  # just open gui

# tutorial (single image example)
cd examples/tutorial
labelme apc2016_obj3.jpg  # specify image file
labelme apc2016_obj3.jpg -O apc2016_obj3.json  # close the window after saving
labelme apc2016_obj3.jpg --nodata  # store a relative image path in the JSON instead of the image data
labelme apc2016_obj3.jpg \
  --labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball  # specify label list

# semantic segmentation example
cd examples/semantic_segmentation
labelme data_annotated/  # Open directory to annotate all images in it
labelme data_annotated/ --labels labels.txt  # specify label list with a file
```
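Each saved annotation is a plain JSON document, so it can be inspected with the standard library alone. A minimal sketch, assuming the typical labelme layout (`shapes` entries with `label`, `points`, and `shape_type` keys; verify against a file produced by your version):

```python
import json

# Hypothetical annotation in the typical labelme layout; exact keys can
# differ between versions, so check a real output file first.
text = """
{
  "version": "5.1.7",
  "flags": {},
  "shapes": [
    {"label": "ball", "points": [[10.0, 20.0], [30.0, 40.0]],
     "shape_type": "rectangle"}
  ],
  "imagePath": "apc2016_obj3.jpg",
  "imageHeight": 100,
  "imageWidth": 100
}
"""

data = json.loads(text)  # for a file on disk: json.load(open("apc2016_obj3.json"))
for shape in data["shapes"]:
    print(shape["label"], shape["shape_type"], shape["points"])
```

With `--nodata`, only `imagePath` is stored; otherwise the image itself is embedded as base64 in an `imageData` field, which is why such files are much larger.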

For more advanced usage, please refer to the examples:

* [Tutorial (Single Image Example)](https://github.com/wkentaro/labelme/blob/main/examples/tutorial)
* [Semantic Segmentation Example](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true)
* [Instance Segmentation Example](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true)
* [Video Annotation Example](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation?raw=true)

### Command Line Arguments
- `--output` specifies where annotations are written. If the path ends with `.json`, a single annotation is written to that file, so only one image can be annotated. Otherwise the path is treated as a directory, and each annotation is stored there under a name derived from its image.
- The first time you run labelme, it will create a config file in `~/.labelmerc`. You can edit this file and the changes will be applied the next time that you launch labelme. If you would prefer to use a config file from another location, you can specify this file with the `--config` flag.
- Without the `--nosortlabels` flag, labels are listed in alphabetical order; with the flag, they are displayed in the order provided.
- Flags are assigned to an entire image. [Example](https://github.com/wkentaro/labelme/blob/main/examples/classification?raw=true)
- Labels are assigned to a single polygon. [Example](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection?raw=true)
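The `--output` rule above can be sketched as a small helper (a hypothetical function for illustration, not labelme's actual code): a path ending in `.json` names a single annotation file, and anything else is treated as a directory with the filename derived from the image's base name.

```python
import os

def resolve_output(output: str, image_path: str) -> str:
    """Mimic the --output rule: a .json path is a single annotation file;
    any other path is a directory keyed by the image's base name."""
    if output.endswith(".json"):
        return output
    stem = os.path.splitext(os.path.basename(image_path))[0]
    return os.path.join(output, stem + ".json")

print(resolve_output("apc2016_obj3.json", "apc2016_obj3.jpg"))
print(resolve_output("annotations", "images/apc2016_obj3.jpg"))
```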

## FAQ

- **How to convert JSON file to numpy array?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#convert-to-dataset).
- **How to load label PNG file?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#how-to-load-label-png-file).
- **How to get annotations for semantic segmentation?** See [examples/semantic_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true).
- **How to get annotations for instance segmentation?** See [examples/instance_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true).
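The segmentation converters linked above rasterize each polygon into a per-pixel label mask. A dependency-free sketch of the idea using even-odd ray casting (toy code to make the mapping concrete, not the project's implementation, which should be preferred in practice):

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test against a closed polygon."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def shapes_to_mask(height, width, shapes):
    """Rasterize polygon shapes into a binary mask, testing pixel centers."""
    mask = [[0] * width for _ in range(height)]
    for shape in shapes:
        for r in range(height):
            for c in range(width):
                if point_in_polygon(c + 0.5, r + 0.5, shape["points"]):
                    mask[r][c] = 1
    return mask

shapes = [{"label": "box", "points": [[2, 2], [8, 2], [8, 8], [2, 8]]}]
mask = shapes_to_mask(10, 10, shapes)
print(sum(sum(row) for row in mask))  # 36 interior pixels
```

Real pipelines use numpy/PIL-backed rasterization for speed and also map each `label` string to an integer class id; this sketch only shows how polygon points become per-pixel labels.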


## Developing

```bash
git clone https://github.com/wkentaro/labelme.git
cd labelme

# Install anaconda3 and labelme
curl -L https://github.com/wkentaro/dotfiles/raw/main/local/bin/install_anaconda3.sh | bash -s .
source .anaconda3/bin/activate
pip install -e .
```


## How to build standalone executable

The following shows how to build the standalone executable on macOS, Linux, and Windows.

```bash
# Setup conda
conda create --name labelme python=3.9
conda activate labelme

# Build the standalone executable
pip install .
pip install pyinstaller
pyinstaller labelme.spec
dist/labelme --version
```


## How to contribute

Make sure the tests below pass in your environment.  
See `.github/workflows/ci.yml` for more detail.

```bash
pip install -r requirements-dev.txt

flake8 .
black --line-length 79 --check labelme/
MPLBACKEND='agg' pytest -vsx tests/
```


## Acknowledgement

This repo is a fork of [mpitid/pylabelme](https://github.com/mpitid/pylabelme).


            
