odach 0.3.0

Name: odach
Version: 0.3.0
Home page: https://github.com/kentaroy47/ODA-Object-Detection-ttA
Summary: ODAch is a test-time-augmentation tool for PyTorch 2D object detectors with YOLO support.
Upload time: 2025-08-18 04:17:42
Author: Kentaro Yoshioka
Requires Python: >=3.7
License: MIT
Keywords: object-detection, pytorch, tta, test-time-augmentation, yolo, computer-vision
Requirements: torch>=1.7.0, numpy>=1.19.0, numba>=0.50.0, ultralytics>=8.0.0, Pillow>=8.0.0, torchvision>=0.8.0
# ODAch, An Object Detection TTA tool for Pytorch
ODAch is a test-time-augmentation (TTA) tool for 2D object detectors, built for use in Kaggle object detection competitions.

:star: if it helps you! ;)

![](imgs/res.png)

# 🚀 YOLO Integration (New!)

ODAch now supports YOLOv5, YOLOv8, and newer YOLO models from Ultralytics, making it straightforward to apply TTA in modern object detection workflows.

## Quick Start with YOLO

```python
import odach as oda
from ultralytics import YOLO

# Load your YOLO model
model = YOLO('yolov8n.pt')  # or yolov5, yolov6, yolov7, yolov8, yolov9

# Wrap the YOLO model for ODAch
yolo_wrapper = oda.wrap_yolo(model, imsize=640, score_threshold=0.25)

# Define TTA transformations
tta = [oda.HorizontalFlip(), oda.VerticalFlip(), oda.Rotate90Left(), oda.Rotate90Right()]

# Create TTA wrapper
tta_model = oda.TTAWrapper(yolo_wrapper, tta)

# Run inference with TTA
results = tta_model(images)
```

## YOLO Features

- **Multi-version support**: YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9
- **Automatic format conversion**: Handles YOLO output format automatically
- **Batch processing**: Process multiple images efficiently
- **Configurable thresholds**: Adjust confidence and IoU thresholds
- **Seamless integration**: Works with existing ODAch TTA pipeline
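
The "automatic format conversion" above means `wrap_yolo` translates YOLO-style detections into the `boxes, scores, labels` arrays the rest of the pipeline consumes. The actual conversion is internal to ODAch; the following is only a minimal numpy sketch of the idea, with all names and array shapes assumed for illustration:

```python
import numpy as np

def yolo_to_odach(xyxy, conf, cls, score_threshold=0.25):
    """Hypothetical converter: turn YOLO-style outputs into
    boxes/scores/labels arrays. Assumes xyxy is (N, 4) in
    [x1, y1, x2, y2] order, conf and cls are length-N arrays."""
    conf = np.asarray(conf)
    keep = conf >= score_threshold          # drop low-confidence detections
    boxes = np.asarray(xyxy)[keep]
    scores = conf[keep]
    labels = np.asarray(cls)[keep].astype(int)
    return boxes, scores, labels

# Toy detections: one confident box, one below the threshold
xyxy = np.array([[10, 10, 50, 50], [0, 0, 5, 5]])
conf = np.array([0.9, 0.1])
cls = np.array([0.0, 2.0])
boxes, scores, labels = yolo_to_odach(xyxy, conf, cls)
print(len(boxes))  # only the confident detection survives the threshold
```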

## YOLO TTA Example

```python
# Advanced YOLO TTA with multiple scales
tta = [
    oda.HorizontalFlip(), 
    oda.VerticalFlip(), 
    oda.Rotate90Left(), 
    oda.Rotate90Right(),
    oda.Multiply(0.9), 
    oda.Multiply(1.1)
]

# Multi-scale TTA
scale = [0.8, 0.9, 1.0, 1.1, 1.2]

# Create TTA wrapper with scales
tta_model = oda.TTAWrapper(yolo_wrapper, tta, scale)

# Run inference
results = tta_model(images)
```

See `example_yolo_usage.py` and `YOLO_INTEGRATION_README.md` for detailed examples.

---

# Install
`pip install odach`

# Usage
See `Example.ipynb`.

The setup is very simple, similar to [ttach](https://github.com/qubvel/ttach).

## Singlescale TTA
```python
import odach as oda
# Declare TTA variations
tta = [oda.HorizontalFlip(), oda.VerticalFlip(), oda.Rotate90Left(), oda.Multiply(0.9), oda.Multiply(1.1)]

# load image
img = loadimg(impath)
# wrap model and tta
tta_model = oda.TTAWrapper(model, tta)
# Execute TTA!
boxes, scores, labels = tta_model(img)
```
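
Under the hood, each TTA transform must also map the boxes predicted on the augmented image back to the original frame before fusion. This is not ODAch's actual implementation, just a minimal numpy sketch of the horizontal-flip case to show what the wrapper handles for you:

```python
import numpy as np

def hflip_boxes(boxes, img_width):
    """Map [x1, y1, x2, y2] boxes predicted on a horizontally
    flipped image back to the original image frame."""
    boxes = np.asarray(boxes, dtype=float)
    x1 = img_width - boxes[:, 2]   # old right edge becomes new left edge
    x2 = img_width - boxes[:, 0]   # old left edge becomes new right edge
    out = boxes.copy()
    out[:, 0], out[:, 2] = x1, x2
    return out

# A box hugging the left edge maps to the right edge after de-flipping
flipped = hflip_boxes([[0, 10, 20, 30]], img_width=100)
print(flipped)
```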

## Multiscale TTA
```python
import odach as oda
# Declare TTA variations
tta = [oda.HorizontalFlip(), oda.VerticalFlip(), oda.Rotate90Left(), oda.Multiply(0.9), oda.Multiply(1.1)]
# Declare scales to tta
scale = [0.8, 0.9, 1, 1.1, 1.2]

# load image
img = loadimg(impath)
# wrap model and tta
tta_model = oda.TTAWrapper(model, tta, scale)
# Execute TTA!
boxes, scores, labels = tta_model(img)
```

* The fused boxes are also filtered by NMS (weighted boxes fusion, WBF, by default).

* Input images should be square (equal height and width).
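
Since ODAch expects square inputs, one simple way to satisfy this is to zero-pad the image on the bottom and right before running TTA. This helper is not part of ODAch, just a minimal numpy sketch:

```python
import numpy as np

def pad_to_square(img):
    """Zero-pad an HxWxC image on the bottom/right so H == W.
    Also returns the (pad_h, pad_w) amounts so predicted boxes
    can be clipped back to the original extent afterwards."""
    h, w = img.shape[:2]
    side = max(h, w)
    pad_h, pad_w = side - h, side - w
    padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)))
    return padded, (pad_h, pad_w)

img = np.ones((480, 640, 3), dtype=np.uint8)
sq, pads = pad_to_square(img)
print(sq.shape, pads)  # (640, 640, 3) (160, 0)
```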

## Model Output Wrapping
* Wrap your detection model so that its output follows the torchvision Faster R-CNN format, a list of one dict per image:
`[{"boxes": [[x1, y1, x2, y2], ...], "labels": [0, 1, ...], "scores": [1.0, 0.8, ...]}]`

* Example for EfficientDets
https://www.kaggle.com/kyoshioka47/example-of-2d-single-scale-tta-with-odach/

```python
# wrap effdet
oda_effdet = oda.wrap_effdet(effdet)
# Declare TTA variations
tta = [oda.HorizontalFlip(), oda.VerticalFlip(), oda.Rotate90Left()]
# Declare scales to tta
scale = [1]
# wrap model and tta
tta_model = oda.TTAWrapper(oda_effdet, tta, scale)
```
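
For detectors that have no built-in wrapper like `wrap_effdet` or `wrap_yolo`, you can write a small adapter that emits the Faster R-CNN-style dicts described above. The class below is a hypothetical sketch, not part of ODAch; the inner `self.model(img)` call and its return shape are assumptions you would replace with your model's actual API:

```python
import numpy as np

class GenericDetectorWrapper:
    """Hypothetical adapter: call any detector and emit the
    torchvision-FRCNN-style dicts that ODAch expects."""
    def __init__(self, model, score_threshold=0.3):
        self.model = model
        self.score_threshold = score_threshold

    def __call__(self, images):
        outputs = []
        for img in images:
            # Assumed raw output: (boxes, scores, labels) per image
            boxes, scores, labels = self.model(img)
            keep = np.asarray(scores) >= self.score_threshold
            outputs.append({
                "boxes": np.asarray(boxes)[keep],
                "scores": np.asarray(scores)[keep],
                "labels": np.asarray(labels)[keep],
            })
        return outputs

# Dummy model returning one strong and one weak detection
dummy = lambda img: ([[0, 0, 10, 10], [5, 5, 8, 8]], [0.9, 0.1], [1, 2])
wrapped = GenericDetectorWrapper(dummy)
out = wrapped([None])
print(out[0]["labels"])  # only the confident detection remains
```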

# Examples
## YOLO TTA Examples
- `example_yolo_usage.py` - Basic YOLO integration
- `YOLO_INTEGRATION_README.md` - Detailed YOLO usage guide

## Global Wheat Detection
[Example notebook](https://www.kaggle.com/kyoshioka47/example-of-odach)

# Thanks
The NMS and WBF implementations are from https://kaggle.com/zfturbo.

The TTA design is based on https://github.com/qubvel/ttach, https://github.com/andrewekhalel/edafa/tree/master/edafa and https://www.kaggle.com/shonenkov/wbf-over-tta-single-model-efficientdet.
