# Globox — Object Detection Toolbox
This framework can:
* parse all kinds of object detection datasets (ImageNet, COCO, YOLO, PascalVOC, OpenImage, CVAT, LabelMe, etc.) and show statistics,
* convert them to other formats (ImageNet, COCO, YOLO, PascalVOC, OpenImage, CVAT, LabelMe, etc.),
* and evaluate predictions using standard object detection metrics such as $AP_{[.5:.05:.95]}$, $AP_{50}$, $mAP$, $AR_{1}$, $AR_{10}$, $AR_{100}$.
This framework can be used both as a library in your own code and as a command-line tool. It is designed to be simple to use, fast, and correct.
## Install
You can install the package using pip:
```shell
pip install globox
```
## Use as a Library
### Parse Annotations
The library has three main components:
* `BoundingBox`: represents a bounding box with a label and an optional confidence score
* `Annotation`: represents the bounding box annotations for one image
* `AnnotationSet`: represents annotations for a set of images (a database)
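The relationship between these three classes can be pictured with simplified stand-ins (illustrative only; the real classes expose richer constructors and many more utilities):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BoundingBox:
    # A labeled box; `confidence` is set for detections and None for ground truths.
    label: str
    xmin: float
    ymin: float
    xmax: float
    ymax: float
    confidence: Optional[float] = None

@dataclass
class Annotation:
    # The boxes belonging to a single image.
    image_id: str
    boxes: list[BoundingBox] = field(default_factory=list)

@dataclass
class AnnotationSet:
    # Annotations for a whole dataset, keyed by image id.
    annotations: dict[str, Annotation] = field(default_factory=dict)
```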
The `AnnotationSet` class contains static methods to read different dataset formats:
```python
from globox import AnnotationSet

# COCO
coco = AnnotationSet.from_coco(file_path="path/to/file.json")

# YOLOv5
yolo = AnnotationSet.from_yolo_v5(
    folder="path/to/files/",
    image_folder="path/to/images/"
)

# Pascal VOC
pascal = AnnotationSet.from_pascal_voc(folder="path/to/files/")
```
`Annotation` offers file-level granularity for compatible datasets:
```python
annotation = Annotation.from_labelme(file_path="path/to/file.xml")
```
For more specialized needs, the `BoundingBox` class provides many utilities for parsing bounding boxes in different formats, such as the `create()` method.
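To illustrate the kind of conversion such utilities perform, here is a standalone sketch (not the library's own code) mapping a YOLO-style box, given as normalized center coordinates and size, to absolute corner coordinates:

```python
def yolo_to_ltrb(cx: float, cy: float, w: float, h: float, img_w: int, img_h: int):
    """Convert a normalized YOLO box (cx, cy, w, h) to absolute
    (left, top, right, bottom) pixel coordinates."""
    left = (cx - w / 2) * img_w
    top = (cy - h / 2) * img_h
    right = (cx + w / 2) * img_w
    bottom = (cy + h / 2) * img_h
    return left, top, right, bottom

# A box centered in a 100x100 image, covering half of each dimension:
print(yolo_to_ltrb(0.5, 0.5, 0.5, 0.5, 100, 100))  # → (25.0, 25.0, 75.0, 75.0)
```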
`AnnotationSet`s are set-like objects. They can be combined, and annotations can be added:
```python
gts = coco | yolo
gts.add(annotation)
```
### Inspect Datasets
Iterators and efficient lookup by `image_id` are easy to use:
```python
if annotation in gts:
    print("This annotation is present.")

if "image_123.jpg" in gts.image_ids:
    print("Annotation of image 'image_123.jpg' is present.")

for box in gts.all_boxes:
    print(box.label, box.area, box.is_ground_truth)

for annotation in gts:
    nb_boxes = len(annotation.boxes)
    print(f"{annotation.image_id}: {nb_boxes} boxes")
```
Dataset stats can be printed to the console:
```python
coco_gts.show_stats()
```
```text
         Database Stats
┏━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┓
┃ Label       ┃ Images ┃ Boxes ┃
┡━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━┩
│ aeroplane   │     10 │    15 │
│ bicycle     │      7 │    14 │
│ bird        │      4 │     6 │
│ boat        │      7 │    11 │
│ bottle      │      9 │    13 │
│ bus         │      5 │     6 │
│ car         │      6 │    14 │
│ cat         │      4 │     5 │
│ chair       │      9 │    15 │
│ cow         │      6 │    14 │
│ diningtable │      7 │     7 │
│ dog         │      6 │     8 │
│ horse       │      7 │     7 │
│ motorbike   │      3 │     5 │
│ person      │     41 │    91 │
│ pottedplant │      6 │     7 │
│ sheep       │      4 │    10 │
│ sofa        │     10 │    10 │
│ train       │      5 │     6 │
│ tvmonitor   │      8 │     9 │
├─────────────┼────────┼───────┤
│ Total       │    100 │   273 │
└─────────────┴────────┴───────┘
```
### Convert and Save to Many Formats
Datasets can be converted to and saved in other formats:
```python
# ImageNet
gts.save_imagenet(save_dir="pascalVOC_db/")

# YOLO Darknet
gts.save_yolo_darknet(
    save_dir="yolo_train/",
    label_to_id={"cat": 0, "dog": 1, "racoon": 2}
)

# YOLOv5
gts.save_yolo_v5(
    save_dir="yolo_train/",
    label_to_id={"cat": 0, "dog": 1, "racoon": 2},
)

# CVAT
gts.save_cvat(path="train.xml")
```
### COCO Evaluation
COCO Evaluation is also supported:
```python
from globox import COCOEvaluator

evaluator = COCOEvaluator(
    ground_truths=gts,
    predictions=dets
)

ap = evaluator.ap()
ar_100 = evaluator.ar_100()
ap_75 = evaluator.ap_75()
ap_small = evaluator.ap_small()
...
```
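These metrics all rest on the intersection over union (IoU) between predicted and ground-truth boxes, thresholded at 0.5, 0.75, or averaged over 0.50:0.95. A minimal standalone sketch of the IoU computation (not the library's internal implementation):

```python
def iou(box_a, box_b) -> float:
    """IoU of two boxes given as (xmin, ymin, xmax, ymax) tuples."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle, clamped to zero when the boxes are disjoint.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```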
All standard COCO metrics can be displayed in a pretty-printed table with:
```python
evaluator.show_summary()
```
which outputs:
```text
                        COCO Evaluation
┏━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━┳...┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┓
┃ Label     ┃ AP 50:95 ┃ AP 50  ┃   ┃ AR S   ┃ AR M   ┃ AR L   ┃
┡━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━╇...╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━┩
│ airplane  │    22.7% │  25.2% │   │   nan% │  90.0% │   0.0% │
│ apple     │    46.4% │  57.4% │   │  48.5% │   nan% │   nan% │
│ backpack  │    54.8% │  85.1% │   │ 100.0% │  72.0% │   0.0% │
│ banana    │    73.6% │  96.4% │   │   nan% │ 100.0% │  70.0% │
.           .          .        .   .        .        .        .
.           .          .        .   .        .        .        .
.           .          .        .   .        .        .        .
├───────────┼──────────┼────────┼...┼────────┼────────┼────────┤
│ Total     │    50.3% │  69.7% │   │  65.4% │  60.3% │  55.3% │
└───────────┴──────────┴────────┴...┴────────┴────────┴────────┘
```
The array of results can be saved in CSV format:
```python
evaluator.save_csv("where/to/save/results.csv")
```
Custom evaluations can be achieved with:
```python
evaluation = evaluator.evaluate(
    iou_threshold=0.33,
    max_detections=1_000,
    size_range=(0.0, 10_000)
)

ap = evaluation.ap()
cat_ar = evaluation["cat"].ar
```
Evaluations are cached by `(iou_threshold, max_detections, size_range)` keys. This means that repeated queries to the evaluator are fast!
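The caching strategy amounts to memoizing results under the parameter tuple; a sketch of the pattern (illustrative, not the actual implementation):

```python
class CachedEvaluator:
    """Memoizes evaluation results keyed by the query parameters."""

    def __init__(self):
        self._cache = {}

    def evaluate(self, iou_threshold, max_detections, size_range):
        key = (iou_threshold, max_detections, size_range)
        if key not in self._cache:
            # Stand-in for the expensive box-matching and AP computation.
            self._cache[key] = {"params": key}
        return self._cache[key]
```

Repeating a query with identical parameters returns the cached result directly instead of recomputing it.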
## Use in Command Line
If you only need to use Globox from the command line as a standalone application, you can install the package through [pipx](https://pypa.github.io/pipx/):
```shell
pipx install globox
```
Globox will then be in your shell path and usable from anywhere.
### Usage
Get a summary of annotations for one dataset:
```shell
globox summary /yolo/folder/ --format yolo
```
Convert annotations from one format to another:
```shell
globox convert input/yolo/folder/ output_coco_file_path.json --format yolo --save_fmt coco
```
Evaluate a set of detections with COCO metrics, display them and save them in a CSV file:
```shell
globox evaluate groundtruths/ predictions.json --format yolo --format_dets coco -s results.csv
```
Show the help message for an exhaustive list of options:
```shell
globox summary -h
globox convert -h
globox evaluate -h
```
## Run Tests
Clone the repo with its test data:
```shell
git clone https://github.com/laclouis5/globox --recurse-submodules=tests/globox_test_data
cd globox
```
Install dependencies with [uv](https://github.com/astral-sh/uv):
```shell
uv sync --dev
```
Run the tests:
```shell
uv run pytest tests
```
## Speed Benchmarks
The speed benchmark can be executed with:
```shell
uv run python tests/benchmark.py -n 5
```
The following benchmark was performed using Python 3.11 and `timeit` with 5 iterations on a 2021 MacBook Pro 14" (M1 Pro, 8 cores, 16 GB of RAM). The dataset is COCO 2017 Validation, which comprises 5k images and 36,781 bounding boxes.
Task    | COCO  | CVAT  | OpenImage | LabelMe | PascalVOC | YOLO  | TXT
--------|-------|-------|-----------|---------|-----------|-------|------
Parsing | 0.22s | 0.12s | 0.44s     | 0.60s   | 0.97s     | 1.45s | 1.12s
Saving  | 0.32s | 0.17s | 0.14s     | 1.06s   | 1.08s     | 0.91s | 0.85s
* `AnnotationSet.show_stats()`: 0.02 s
* Evaluation: 0.30 s
## Todo
* [x] Basic data structures and utilities
* [x] Parsers (ImageNet, COCO, YOLO, Pascal, OpenImage, CVAT, LabelMe)
* [x] Parser tests
* [x] Database summary and stats
* [x] Database converters
* [x] Visualization options
* [x] COCO Evaluation
* [x] Tests with a huge load (5k images)
* [x] CLI interface
* [x] Make `image_size` optional and raise an error when required (bbox conversion)
* [x] Make file saving atomic with a temporary to avoid file corruption
* [x] Pip package!
* [ ] PascalVOC Evaluation
* [ ] Parsers for TFRecord and TensorFlow
* [ ] UI interface?
## Acknowledgement
This repo is based on the work of [Rafael Padilla](https://github.com/rafaelpadilla/review_object_detection_metrics).
## Contribution
Feel free to contribute; any help you can offer with this project is most welcome.