<div align="center">
<p align="center">
<img src="https://raw.githubusercontent.com/BelloneLab/BANOS/main/content/Logo_BANOS.png" width="20%">
</p>
# Behavior Annotation Score (BANOS)
[GitHub Repo](https://github.com/BelloneLab/BANOS) |
[Installation](#installation) |
[Example Usage](#example)
[![PyPI version](https://badge.fury.io/py/banos.svg)](https://badge.fury.io/py/banos)
[![Downloads](https://static.pepy.tech/badge/banos)](https://pepy.tech/project/banos)
[![View on File Exchange](https://www.mathworks.com/matlabcentral/images/matlab-file-exchange.svg)](https://www.mathworks.com/matlabcentral/fileexchange/157916-banos)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
</div>
## Overview
This library computes the Behavior Annotation Score (BANOS) for behavior annotations in video data, with implementations available in Python and MATLAB.
BANOS is a set of metrics for evaluating algorithmic annotations against a ground truth, integrating accuracy, overlap, temporal precision, and continuity of behavior annotation segments. These qualities are essential for researchers and practitioners in ethology and computer vision.
## Ethological Context and Key Concepts
### Background
In ethology (the science of animal behavior), automatically annotating behaviors from video data poses specific challenges for producing precise, contextually relevant annotations. Traditional metrics often focus on a single aspect of annotation performance, such as accuracy in a narrow sense, and may not fully capture the ethological significance or practical applicability of an algorithm's output. A more comprehensive framework is needed, one that considers the diverse aspects of behavior annotation quality and provides a more complete picture of an algorithm's effectiveness in real-world scenarios.
### Introducing the Behavior Annotation Score (BANOS)
BANOS is a set of metrics tailored to evaluating algorithmic behavior annotations against a ground truth (typically human annotations), integrating multiple facets of accuracy into a single comprehensive assessment.
## BANOS Metrics Formulas
<p align="center">
<img src="https://raw.githubusercontent.com/BelloneLab/BANOS/main/content/Schema_BANOS.png" width="364" height="224">
</p>
BANOS consists of the following metrics, all ranging from 0 (lowest score) to 1 (highest score), each with a specific formula and each offering a distinct perspective on an algorithm's performance:
1. **Detection Accuracy (DA)**
 - Assesses the accuracy of detecting behavioral segments using Precision, Recall, and F1 score.
 - **Precision (P)**: TP / (TP + FP)
 - **Recall (R)**: TP / (TP + FN)
 - **F1 Score**: (2 × P × R) / (P + R)
2. **Segment Overlap (SO)**
 - Assesses the quality of temporal overlap for each annotated segment using temporal Intersection over Union (tIoU).
 - **Temporal Intersection over Union (tIoU)**: Intersection of Predicted and Ground Truth Segments / Union of Predicted and Ground Truth Segments
3. **Temporal Precision (TP)**
 - Assesses the precision of predicted segment start and end times using the absolute differences between predicted and actual segment timings.
 - **Temporal Precision**: 1 / (1 + Absolute Start Time Deviation + Absolute End Time Deviation)
4. **Intra-bout Continuity (IC)**
 - Assesses the consistency of annotation within each segment by counting the number of label switches within the segment.
 - **Intra-bout Label Consistency**: 1 − (Number of Label Switches within Segment / Segment Length)
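The formulas above can be applied directly to binary, frame-wise annotations. The following is an illustrative sketch of the metric definitions only; the helper names are hypothetical and do not reflect the package's internal API:

```python
# Illustrative sketch of the BANOS formulas on binary frame-wise labels
# (1 = behavior present). Helper names are hypothetical.

def segments(labels):
    """Return (start, end) pairs of contiguous runs of 1s, end exclusive."""
    segs, start = [], None
    for i, v in enumerate(labels):
        if v and start is None:
            start = i
        elif not v and start is not None:
            segs.append((start, i))
            start = None
    if start is not None:
        segs.append((start, len(labels)))
    return segs

def tiou(a, b):
    """Temporal Intersection over Union of two (start, end) segments."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union else 0.0

def temporal_precision(pred_seg, gt_seg):
    """1 / (1 + |start deviation| + |end deviation|), in frames."""
    return 1.0 / (1 + abs(pred_seg[0] - gt_seg[0]) + abs(pred_seg[1] - gt_seg[1]))

def intra_bout_continuity(labels, seg):
    """1 - (label switches inside the segment / segment length)."""
    start, end = seg
    switches = sum(labels[i] != labels[i - 1] for i in range(start + 1, end))
    return 1 - switches / (end - start)

pred = [0, 1, 1, 1, 0, 0, 1, 0]
gt   = [0, 0, 1, 1, 1, 0, 0, 0]

# Frame-wise detection accuracy
tp = sum(p and g for p, g in zip(pred, gt))          # 2
fp = sum(p and not g for p, g in zip(pred, gt))      # 2
fn = sum(g and not p for p, g in zip(pred, gt))      # 1
precision = tp / (tp + fp)                           # 0.5
recall = tp / (tp + fn)                              # 2/3
f1 = 2 * precision * recall / (precision + recall)   # 4/7

# Segment-level metrics on the first predicted vs. ground-truth bout
pred_seg, gt_seg = segments(pred)[0], segments(gt)[0]   # (1, 4) and (2, 5)
print(tiou(pred_seg, gt_seg))                 # 0.5
print(temporal_precision(pred_seg, gt_seg))   # 1/3
print(intra_bout_continuity(pred, gt_seg))    # 2/3
```

Note how the four numbers disagree: frame-wise F1 is moderate, while the single-frame offset at both bout boundaries is what drags Temporal Precision down.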
## Installation
Install the BANOS package directly from PyPI:
```bash
pip install BANOS
```
### Python Dependencies
- pandas: installed automatically when you install BANOS from PyPI.
## Usage
Prepare your data as a dictionary whose keys are file names and whose values are tuples of (prediction, ground truth) DataFrames. Each DataFrame should contain binary (0/1) values, with one column per behavior.
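DataFrames of this shape could look like the following sketch (the behavior column names are illustrative, not required by BANOS):

```python
import pandas as pd

# One row per video frame, one binary (0/1) column per behavior.
predictions = pd.DataFrame({
    "grooming": [0, 1, 1, 1, 0],
    "rearing":  [0, 0, 0, 1, 1],
})
ground_truth = pd.DataFrame({
    "grooming": [0, 0, 1, 1, 1],
    "rearing":  [0, 0, 0, 0, 1],
})

# The (prediction, ground truth) pair for one file
data_dict = {"file1": (predictions, ground_truth)}
print(predictions.shape)  # (5, 2)
```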
### Example
```python
# Load predictions and ground truths, then compute BANOS metrics.
import pandas as pd
from banos import preprocess_data, calculate_banos_for_each_file, aggregate_metrics

data_dict = {
    'file1': (pd.read_csv('predictions_file1.csv'), pd.read_csv('groundtruth_file1.csv')),
    'file2': (pd.read_csv('predictions_file2.csv'), pd.read_csv('groundtruth_file2.csv')),
    # ... more files ...
}

# Preprocess the data; dropped_info reports entries removed during preprocessing
preprocessed_data, dropped_info = preprocess_data(data_dict)

# Per-file BANOS metrics, then aggregates per group and overall
banos_metrics = calculate_banos_for_each_file(preprocessed_data)
group_metrics, overall_metrics = aggregate_metrics(banos_metrics)

print("Group Metrics:", group_metrics)
print("Overall Metrics:", overall_metrics)
```
## Contributions and Support
For contributions or support, please open a pull request or issue in the GitHub repository.