[![PyPI version](https://badge.fury.io/py/ivtmetrics.svg)](https://pypi.org/project/ivtmetrics/)
# ivtmetrics
The **ivtmetrics** library provides a Python implementation of metrics for benchmarking surgical action triplet detection and recognition.
## Features at a glance
The following are available with ivtmetrics:
1. **Recognition evaluation**: AP metrics for measuring a model's performance on action triplet recognition.
2. **Detection evaluation**: Intersection-over-Union (IoU) measures of triplet localization with respect to the instruments.
3. **Flexible analysis**: (1) supports switching between frame-wise and video-wise averaging of the AP;
(2) supports disentangling the predictions to obtain filtered performance for the individual triplet components as well as their association performance at various levels.
<a name="installation"></a>
## Installation
### Install via PyPi
To install **ivtmetrics**, use `pip`:
```
pip install ivtmetrics
```
### Install via Conda
```
conda install -c nwoye ivtmetrics
```
Python 3.5–3.9, NumPy, and scikit-learn are required.
<a name="Metrics"></a>
## Metrics
The metrics are aligned with those reported by the [CholecT50](https://www.sciencedirect.com/science/article/abs/pii/S1361841522000846) benchmark.
**ivtmetrics** can be imported as follows:
``` python
import ivtmetrics
```
The library implements both **recognition** and **detection** evaluation.
Internally, a disentangle function filters out the individual triplet components as well as the different levels of triplet association.
### Recognition Metrics
**Recognition ivtmetrics** can be used as follows:
``` python
metric = ivtmetrics.Recognition(num_class)
```
This takes an argument `num_class`, which defaults to `100`.
The following methods are available in the `Recognition` class:
Name | Description
:--- | :---
update(`targets, predictions`)|Takes in a (batch of) vector of predictions and their corresponding ground truth; the vector size must match `num_class` from the class initialization (see the input sketch after the args list below).
video_end()|Call to mark the end of one video sequence.
reset()|Reset the current records. Useful during training; can be called at the beginning of each epoch to avoid overlapping epoch performances.
reset_global()|Reset all records. Useful when switching between training/validation/testing, or at the beginning of a new experiment.
compute_AP(`component, ignore_null`)|Obtain the average precision on the fly. This gives the AP only on the examples seen since the last `reset()` call. Useful for epoch performance during training.
compute_video_AP(`component, ignore_null`)|(RECOMMENDED) Compute video-wise AP performance as used in the CholecT50 benchmarks.
compute_global_AP(`component, ignore_null`)|Compute frame-wise AP performance over all seen samples.
topK(`k, component`)|Obtain top-K performance on action triplet recognition over all seen examples; `k` can be any int between 1 and 99. k = [5, 10, 15, 20] have been used in benchmark papers (see the snippet at the end of this section).
topClass(`k, component`)|Obtain the top K recognized classes on action triplet recognition over all seen examples; `k` can be any int between 1 and 99. k = 10 has been used in benchmark papers.
### args:
- `component` can be any of ('i', 'v', 't', 'iv', 'it', 'ivt'), computing performance for (instrument, verb, target, instrument-verb, instrument-target, instrument-verb-target) respectively. The default is 'ivt', for triplets.
- `ignore_null` (optional, default=False): ignore null triplet classes in the evaluation. This option is enabled in the CholecTriplet2021 challenge.
- The output is a `dict` with keys ("AP", "mAP") for per-class and mean AP respectively.
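To make the expected inputs concrete, the following minimal sketch feeds `update()` a batch of two frames; the targets form a binary multi-hot matrix and the predictions are per-class scores, both with `num_class` columns (all values here are illustrative):
```python
import numpy as np
import ivtmetrics

recognize = ivtmetrics.Recognition(num_class=100)

# Illustrative batch of 2 frames: rows are frames, columns are triplet classes.
targets = np.zeros((2, 100), dtype=int)
targets[0, 3] = 1        # frame 0 contains triplet class 3
targets[1, [7, 42]] = 1  # frame 1 contains triplet classes 7 and 42

predictions = np.random.rand(2, 100)  # per-class confidence scores in [0, 1]

recognize.update(targets, predictions)
print(recognize.compute_AP('ivt')["mAP"])
```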
<a name="recognitionExample"></a>
#### Example usage
```python
import ivtmetrics

recognize = ivtmetrics.Recognition(num_class=100)
network = MyModel(...)  # your model here

# training
for epoch in range(num_epochs):
    recognize.reset()
    for images, labels in dataloader(...):  # your data loader
        predictions = network(images)
        recognize.update(labels, predictions)
    results_i = recognize.compute_AP('i')
    print("instrument per-class AP", results_i["AP"])
    print("instrument mean AP", results_i["mAP"])
    results_ivt = recognize.compute_AP('ivt')
    print("triplet mean AP", results_ivt["mAP"])

# evaluation
recognize.reset_global()
for video in videos:
    for images, labels in dataloader(video, ...):  # your data loader
        predictions = network(images)
        recognize.update(labels, predictions)
    recognize.video_end()

results_i = recognize.compute_video_AP('i')
print("instrument per-class AP", results_i["AP"])
print("instrument mean AP", results_i["mAP"])

results_it = recognize.compute_video_AP('it')
print("instrument-target mean AP", results_it["mAP"])

results_ivt = recognize.compute_video_AP('ivt')
print("triplet mean AP", results_ivt["mAP"])
```
Any `nan` value in the results corresponds to a class with no occurrence in the data sample.
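The `topK` and `topClass` methods listed in the table above follow the same pattern; a minimal sketch (the exact structure of their return values is not detailed here):
```python
# After the update() calls in the example above:
top5 = recognize.topK(5, 'ivt')                # top-5 performance on triplet recognition
top10_classes = recognize.topClass(10, 'ivt')  # 10 best-recognized triplet classes
print("top-5 triplet performance", top5)
print("top-10 recognized classes", top10_classes)
```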
### Detection Metrics
**Detection ivtmetrics** can be used as follows:
```python
metric = ivtmetrics.Detection(num_class, num_tool, threshold=0.5)
```
This takes the arguments `num_class`, which defaults to `100`, and `num_tool`, which defaults to `6`.
The following methods are available in the `Detection` class:
Name | Description
:--- | :---
update(`targets, predictions, format`)|Takes in a (batch of) list/dict of predictions and their corresponding ground truth. Each frame's predictions/ground truth can be given either as a `list of list` or a `list of dict` (more details below).
video_end()|Call to mark the end of one video sequence.
reset()|Reset the current records. Useful during training; can be called at the beginning of each epoch to avoid overlapping epoch performances.
reset_global()|Reset all records. Useful when switching between training/validation/testing, or at the beginning of a new experiment.
compute_AP(`component`)|Obtain the average precision on the fly. This gives the AP only on the examples seen since the last `reset()` call. Useful for epoch performance during training.
compute_video_AP(`component`)|(RECOMMENDED) Compute video-wise AP performance as used in the CholecT50 benchmarks.
compute_global_AP(`component`)|Compute frame-wise AP performance over all seen samples.
### args:
1. **list of list format**: [[tripletID, toolID, toolProbs, x, y, w, h], [tripletID, toolID, toolProbs, x, y, w, h], ...], where:
* `tripletID` = triplet unique identity
* `toolID` = instrument unique identity
* `toolProbs` = instrument detection confidence
* `x` = bounding box x1 coordinate
* `y` = bounding box y1 coordinate
* `w` = width of the box
* `h` = height of the box
* The [x, y, w, h] values are scaled between 0 and 1
2. **list of dict format**: [{"triplet":tripletID, "instrument":[toolID, toolProbs, x, y, w, h]}, {"triplet":tripletID, "instrument":[toolID, toolProbs, x, y, w, h]}, ...].
3. The `format` arg specifies the input format, with value "list" or "dict". Both formats are illustrated in the sketch below.
4. `component` can be any of ('i', 'v', 't', 'iv', 'it', 'ivt'), computing performance for (instrument, verb, target, instrument-verb, instrument-target, instrument-verb-target) respectively. The default is 'ivt', for triplets.
* The output is a `dict` with keys ("AP", "mAP", "Rec", "mRec", "Pre", "mPre") for per-class AP, mean AP, per-class recall, mean recall, per-class precision and mean precision respectively.
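As a concrete illustration of the two input formats, here is one frame's ground truth and predictions sketched both ways (all IDs, confidences, and box values are made up):
```python
import ivtmetrics

detect = ivtmetrics.Detection(num_class=100, num_tool=6)

# One frame's ground truth in "list" format:
# [tripletID, toolID, toolProbs, x, y, w, h], box values scaled between 0 and 1
# (ground-truth confidence set to 1.0 here for illustration)
gt_frame = [
    [ 4, 0, 1.0, 0.10, 0.20, 0.30, 0.25],
    [61, 2, 1.0, 0.55, 0.40, 0.20, 0.30],
]

# Predictions for the same frame, also in "list" format:
pred_frame = [
    [ 4, 0, 0.92, 0.12, 0.18, 0.28, 0.26],
    [23, 1, 0.41, 0.60, 0.35, 0.15, 0.20],
]

# update() takes a batch (list) of such frames:
detect.update([gt_frame], [pred_frame], format="list")

# The same ground truth expressed in "dict" format:
gt_frame_dict = [
    {"triplet":  4, "instrument": [0, 1.0, 0.10, 0.20, 0.30, 0.25]},
    {"triplet": 61, "instrument": [2, 1.0, 0.55, 0.40, 0.20, 0.30]},
]
```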
<a name="detectionExample"></a>
#### Example usage
``` python
import ivtmetrics

detect = ivtmetrics.Detection(num_class=100)
network = MyModel(...)  # your model here

# training
format = "list"
for epoch in range(num_epochs):
    for images, labels in dataloader(...):  # your data loader
        predictions = network(images)
        labels, predictions = formatYourLabels(labels, predictions)
        detect.update(labels, predictions, format=format)

    results_i = detect.compute_AP('i')
    print("instrument per-class AP", results_i["AP"])
    print("instrument mean AP", results_i["mAP"])

    results_ivt = detect.compute_AP('ivt')
    print("triplet mean AP", results_ivt["mAP"])
    detect.reset()

# evaluation
format = "dict"
for video in videos:
    for images, labels in dataloader(video, ...):  # your data loader
        predictions = network(images)
        labels, predictions = formatYourLabels(labels, predictions)
        detect.update(labels, predictions, format=format)
    detect.video_end()

results_ivt = detect.compute_video_AP('ivt')
print("triplet mean AP", results_ivt["mAP"])
print("triplet mean recall", results_ivt["mRec"])
print("triplet mean precision", results_ivt["mPre"])
```
Any `nan` value in the results corresponds to a class with no occurrence in the data sample.
<br />
<a name="Disentangle"></a>
### Disentangle
Although the `Detection()` and `Recognition()` classes use `Disentangle()` internally, it can also be used independently for component filtering as follows:
``` python
filter = ivtmetrics.Disentangle()
```
Afterwards, each component's predictions/labels can be filtered from the main triplet predictions/labels as follows:
``` python
i_labels = filter.extract(inputs=ivt_labels, component="i")
v_preds = filter.extract(inputs=ivt_preds, component="v")
t_preds = filter.extract(inputs=ivt_preds, component="t")
iv_labels = filter.extract(inputs=ivt_labels, component="iv")
it_labels = filter.extract(inputs=ivt_labels, component="it")
```
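Since scikit-learn is already a dependency, the disentangled vectors can, for instance, be scored with it directly. This is only an illustration of what the filtered outputs enable, not part of the ivtmetrics API:
``` python
from sklearn.metrics import average_precision_score

i_preds = filter.extract(inputs=ivt_preds, component="i")

# Mean AP over instrument classes, computed outside ivtmetrics
# (assumes every instrument class occurs at least once in i_labels).
i_map = average_precision_score(i_labels, i_preds, average="macro")
print("instrument mAP (sklearn)", i_map)
```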
### Triplet Association Scores (TAS)
This assesses the quality of the bounding box-triplet ID association, using the following metrics:
- **LM: localize and match**: percentage of triplets localized at the threshold (θ) and matched with the correct triplet IDs.
- **PLM: partially localize and match**: percentage of triplets matched with the correct triplet IDs but whose localization overlap is less than θ.
- **IDS: identity switch**: percentage of triplets localized at θ but with swapped IDs within the frame.
- **IDM: identity miss**: percentage of triplets localized at θ but with incorrect IDs (not swapped).
- **MIL: missed localization**: percentage of triplets matched with the correct triplet IDs but with no matching localization bounding boxes.
- **FP: remaining false positives**: remaining false alarms after all other scores have been considered.
- **FN: remaining false negatives**: remaining missed predictions after all other scores have been considered.
The TAS metrics are computed automatically within the `Detection` class of ivtmetrics.
The results are accessed using the TAS metric acronyms as keys, e.g.:
``` python
import ivtmetrics
detect = ivtmetrics.Detection(num_class=100)
"""
after a series of detect.update() call
"""
results_ivt = detect.compute_video_AP('ivt')
print("triplet matched and localized", results_ivt["ml"])
print("triplet identity switchd", results_ivt["ids"])
print("triplet missed localization", results_ivt["mil"])
```
<a name="Docker"></a>
## Docker
Coming soon...
<a name="Citation"></a>
# Citation
If you use these metrics in your project or research, please consider citing the associated publication:
```
@article{nwoye2022data,
  title={Data Splits and Metrics for Benchmarking Methods on Surgical Action Triplet Datasets},
  author={Nwoye, Chinedu Innocent and Padoy, Nicolas},
  journal={arXiv preprint arXiv:2204.05235},
  year={2022}
}
```
<a name="References"></a>
### References
1. Nwoye, C. I., Yu, T., Gonzalez, C., Seeliger, B., Mascagni, P., Mutter, D., ... & Padoy, N. (2021). Rendezvous: Attention Mechanisms for the Recognition of Surgical Action Triplets in Endoscopic Videos. arXiv preprint arXiv:2109.03223.
2. Nwoye, C. I., Gonzalez, C., Yu, T., Mascagni, P., Mutter, D., Marescaux, J., & Padoy, N. (2020, October). Recognition of instrument-tissue interactions in endoscopic videos via action triplets. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 364-374). Springer, Cham.
3. http://camma.u-strasbg.fr/datasets
4. https://cholectriplet2022.grand-challenge.org
5. https://cholectriplet2021.grand-challenge.org
## License
```
BSD 2-Clause License
Copyright (c) 2022, Research Group CAMMA
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```