# LabGym: quantifying user-defined behaviors
[PyPI](https://pypi.org/project/LabGym/)
[Downloads](https://pepy.tech/project/LabGym)
[Documentation](https://labgym.readthedocs.io/en/latest/?badge=latest)
<p> </p>
<!-- start elevator-pitch -->

<p> </p>
## Identifies social behaviors in multi-individual interactions
<p> </p>
**Distinguishing different social roles of multiple similar-looking interacting individuals**
 
<p> </p>
**Distinguishing different interactive behaviors among multiple animal-object interactions**
 
<p> </p>
**Distinguishing different social roles of animals in the field with unstable recording environments**
 
<p> </p>
## Identifies non-social behaviors
<p> </p>
**Identifying behaviors in diverse species in various recording environments**
 
 
<p> </p>
**Identifying behaviors with no posture changes such as cells 'changing color' and neurons 'firing'**
 
<p> </p>
## Quantifies each user-defined behavior
Computes a range of motion and kinematics parameters for each behavior. The parameters include **count**, **duration**, and **latency** of behavioral incidents, as well as **speed**, **acceleration**, **distance traveled**, and the **intensity** and **vigor** of motions during the behaviors. These parameters are output in spreadsheets.
Also provides visualizations of the analysis results, including annotated videos/images that visually mark each behavior event, and temporal raster plots that show every behavior event of every individual over time.
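As a rough illustration of how parameters such as **distance traveled**, **speed**, and **acceleration** can be derived from tracking output, here is a minimal, hypothetical NumPy sketch (not LabGym's internal code) that computes them from a centroid trajectory:

```python
# Conceptual sketch (not LabGym's internal code): deriving basic kinematics
# from a hypothetical centroid trajectory sampled at a known frame rate.
import numpy as np

fps = 30.0                                   # assumed video frame rate
centroids = np.array([[10.0, 12.0],          # hypothetical x, y positions (pixels)
                      [11.5, 12.4],
                      [13.2, 13.0],
                      [15.1, 13.9]])

steps = np.linalg.norm(np.diff(centroids, axis=0), axis=1)  # per-frame displacement
distance_traveled = steps.sum()              # total path length (pixels)
speed = steps * fps                          # instantaneous speed (pixels per second)
acceleration = np.diff(speed) * fps          # change in speed per second

print(distance_traveled, speed, acceleration)
```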

<p> </p>
An introduction video for a high-level understanding of what LabGym can do and how it works:
[Introduction video](https://youtu.be/YoYhHMPbf_o)
<p> </p>
We have been making a series of tutorial videos to explain every function in LabGym. They are coming soon!
<p> </p>
Cite LabGym:
1. Yujia Hu, Carrie R Ferrario, Alexander D Maitland, Rita B Ionides, Anjesh Ghimire, Brendon Watson, Kenichi Iwasaki, Hope White, Yitao Xi, Jie Zhou, Bing Ye. ***LabGym*: Quantification of user-defined animal behaviors using learning-based holistic assessment.** Cell Reports Methods. 2023 Feb 24;3(3):100415. doi: 10.1016/j.crmeth.2023.100415. [Link](https://www.cell.com/cell-reports-methods/fulltext/S2667-2375(23)00026-7)
2. Kelly Goss, Lezio S. Bueno-Junior, Katherine Stangis, Théo Ardoin, Hanna Carmon, Jie Zhou, Rohan Satapathy, Isabelle Baker, Carolyn E. Jones-Tinsley, Miranda M. Lim, Brendon O. Watson, Cédric Sueur, Carrie R. Ferrario, Geoffrey G. Murphy, Bing Ye, Yujia Hu. **Quantifying social roles in multi-animal videos using subject-aware deep-learning.** bioRxiv. 2024 Jul 10:2024.07.07.602350. doi: 10.1101/2024.07.07.602350. [Link](https://www.biorxiv.org/content/10.1101/2024.07.07.602350v1)
<p> </p>
<!-- end elevator-pitch -->
# How to use LabGym?
## Overview
You can use LabGym through its user interface (no coding knowledge needed) or from the command line. See the [**Extended User Guide**](https://github.com/yujiahu415/LabGym/blob/master/LabGym_extended_user_guide.pdf) for details.
You may also refer to this [**Practical "How To" Guide**](https://github.com/yujiahu415/LabGym/blob/master/LabGym_practical_guide.pdf), which is written in plain language with examples.
<p> </p>
***Hover your mouse cursor over each button in the user interface to see a detailed description of it***.

<p> </p>
LabGym comprises three modules, each tailored to streamline the analysis process. Together, these modules create a cohesive workflow, enabling users to prepare, train, and analyze their behavioral data with accuracy and ease.
1. **'Preprocessing Module'**: This module optimizes video footage for analysis. It can trim videos to focus only on the necessary time windows, crop frames to remove irrelevant regions, enhance contrast to make relevant details more discernible, reduce the frame rate (fps) to speed up processing, or draw colored markers in videos to mark specific locations (a conceptual sketch of these operations appears after this list).
2. **'Training Module'**: Here, you can customize LabGym according to your specific research needs. You can train a Detector in this module to detect the animals or objects of interest in videos/images. You can also train a Categorizer to recognize specific behaviors that you define.
3. **'Analysis Module'**: After customizing LabGym to your needs, you can use this module for automated behavioral analysis of videos/images. It not only outputs comprehensive analysis results but also mines those results for significant findings.
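For intuition, the sketch below illustrates the kinds of operations the **'Preprocessing Module'** performs (trimming to a time window, cropping a region, reducing the frame rate). It is a hypothetical OpenCV example, not LabGym's internal code; the file names, coordinates, and codec are made up, and the module itself does all of this through the user interface:

```python
# Hypothetical sketch of preprocessing operations (trim, crop, reduce fps).
# LabGym's Preprocessing Module performs these through its GUI.
import cv2

cap = cv2.VideoCapture("input.mp4")                      # hypothetical input video
fps_in = cap.get(cv2.CAP_PROP_FPS)
start_s, end_s = 10.0, 40.0                              # keep only this time window
x, y, w, h = 100, 50, 640, 480                           # crop region of interest
keep_every = 2                                           # keep every 2nd frame (halve fps)

writer = cv2.VideoWriter("trimmed.avi",
                         cv2.VideoWriter_fourcc(*"MJPG"),
                         fps_in / keep_every, (w, h))

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    t = frame_idx / fps_in                               # time of this frame in seconds
    if start_s <= t <= end_s and frame_idx % keep_every == 0:
        writer.write(frame[y:y + h, x:x + w])            # write the cropped frame
    frame_idx += 1

cap.release()
writer.release()
```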
<p> </p>
## Usage Step 1: detect animals/objects
LabGym employs two distinct methods for detecting animals or objects in different scenarios.
<p> </p>
### 1. Subtract background
This method is fast and accurate but requires stable illumination and a static background in the videos to be analyzed. It does not require training neural networks, but you need to define a time window during which the animals are in motion for effective background extraction. A shorter time window leads to quicker processing; typically, a duration of 10 to 30 seconds is adequate.
***How to select an appropriate time window for background extraction?***
To determine the optimal time window for background extraction, consider the animal's movement throughout the video. In the 60-second example below, selecting a 20-second window in which the mouse moves frequently and covers different areas is ideal. The following three images show backgrounds extracted using the first, second, and last 20 seconds, respectively. In the first and last 20 seconds, the mouse stays mostly on the left or right side and moves little, so the extracted backgrounds contain traces of the animal, which is not ideal. In the second 20 seconds, the mouse moves around frequently and the extracted background is clean:
   
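For intuition only, here is a minimal sketch of background extraction over a user-chosen time window (LabGym's actual method may differ): taking the per-pixel median across frames in which the animal moves around averages the animal out of the background. The file name and time window below are hypothetical:

```python
# Conceptual sketch of background extraction over a chosen time window
# (not necessarily LabGym's exact algorithm).
import cv2
import numpy as np

cap = cv2.VideoCapture("arena.mp4")          # hypothetical video
fps = cap.get(cv2.CAP_PROP_FPS)
start_s, end_s = 20.0, 40.0                  # e.g. the second 20 s of a 60 s video

frames = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    t = frame_idx / fps
    if start_s <= t <= end_s:
        frames.append(frame)
    frame_idx += 1
cap.release()

# Per-pixel median across the window; a frequently moving animal is averaged away.
background = np.median(np.stack(frames), axis=0).astype(np.uint8)
cv2.imwrite("background.png", background)
```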
<p> </p>
### 2. Use trained Detectors
This method incorporates [Detectron2](https://github.com/facebookresearch/detectron2), offering more versatility but at a slower processing speed compared to the **‘Subtract Background’** method. It excels in differentiating individual animals or objects, even during collisions, which is particularly beneficial for the **'Interactive advanced'** mode. To enhance processing speed, use a GPU or reduce the frame size during analysis. To train a **Detector** in **‘Training Module’**:
1. Click the **‘Generate Image Examples’** button to extract image frames from videos.
2. Annotate the outlines of animals or objects in these images. We recommend [EZannot](https://github.com/yujiahu415/EZannot), which is tailored to LabGym's **Detectors**. It is free, fully private, and provides AI assistance that annotates the entire outline of an object with a single mouse click. It also performs image augmentation that expands an annotated image dataset to 135 times its original size. Alternatively, you may use online annotation tools such as [Roboflow](https://roboflow.com), which makes your data public in its free version. If you use Roboflow, choose 'Instance Segmentation' as the annotation type and export the annotations in 'COCO instance segmentation' format, which generates a ‘*.json’ file. Importantly, when you generate a version of the dataset, do NOT perform any preprocessing steps such as ‘auto orient’ or ‘resize (stretch)’. Instead, apply augmentations that reflect manipulations likely to occur in real scenarios. Note that the free augmentation in Roboflow is limited to 3X, far less than the 135X in [EZannot](https://github.com/yujiahu415/EZannot). More augmentation yields better generalizability of the trained **Detectors** and requires fewer annotated images.
3. Use the **‘Train Detectors’** button to input these annotated images and commence training your **Detectors** (the sketch after this list gives a rough idea of what this step involves under the hood).
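For those curious about what happens under the hood, below is a minimal, hypothetical sketch of training a [Detectron2](https://github.com/facebookresearch/detectron2) instance-segmentation model from a COCO ‘*.json’ annotation file. The dataset name, file paths, and hyperparameters are made-up placeholders, and the **‘Train Detectors’** button handles all of this for you:

```python
# Rough sketch of Detectron2 instance-segmentation training from COCO annotations.
# Dataset names, paths, and hyperparameters below are hypothetical placeholders.
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register the annotated images exported in COCO instance-segmentation format.
register_coco_instances("my_animals_train", {}, "annotations.json", "images/")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")  # pretrained weights
cfg.DATASETS.TRAIN = ("my_animals_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1      # number of annotated categories, e.g. 'mouse'
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 3000

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()                          # trained weights are saved in cfg.OUTPUT_DIR
```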
<p> </p>
## Usage Step 2: identify and quantify behaviors
LabGym is equipped with four distinct modes of behavior identification to suit different scenarios.
<p> </p>
### 1. Interactive advanced
This mode is for analyzing the behavior of every individual in a group of animals or objects, such as a finger 'holding' or 'offering' a peanut, a chipmunk 'taking' or 'loading' a peanut, and a peanut 'being held', 'being taken', or 'being loaded'.

To train a **Categorizer** of this mode, you can sort the behavior examples (**Animation** and **Pattern Image**) according to the behaviors/social roles of the 'main character', which is highlighted in a magenta-color-coded 'spotlight'. In the four pairs of behavior examples below, the behaviors are 'taking the offer', 'being taken', 'being held', and 'offering peanut', respectively.

<p> </p>
### 2. Interactive basic
Optimized for speed, this mode treats the entire interactive group (two or more individuals) as a single entity, reducing processing time compared to the **'Interactive advanced'** mode. It is ideal for scenarios where individual behaviors within the group are uniform, or where the specific actions of each member are not the primary focus of the study, such as 'licking' and 'attempted copulation' (where only the behaviors of the male fly need to be identified).

To train a **Categorizer** of this mode, you can sort the behavior examples (**Animation** and **Pattern Image**) according to the behaviors of the entire interacting group or of the individual of primary interest. In the three pairs of behavior examples below, the behaviors are 'orientating', 'singing while licking', and 'attempted copulation', respectively.

<p> </p>
### 3. Non-interactive
This mode is for identifying solitary behaviors of individuals that are not engaging in interactive activities.

To train a **Categorizer** of this mode, you can sort the behavior examples (**Animation** and **Pattern Image**) according to the behaviors of individuals.

<p> </p>
### 4. Static image
This mode is for identifying solitary behaviors of individuals in static images.
<p> </p>
## [Installation](https://labgym.readthedocs.io/en/latest/installation/index.html)
## [LabGym Zoo (trained models and training examples)](https://github.com/umyelab/LabGym/blob/master/LabGym_Zoo.md)
## [Reporting Issues](https://labgym.readthedocs.io/en/latest/issues.html)
## [Changelog](https://labgym.readthedocs.io/en/latest/changelog.html)
## [Contributing](https://labgym.readthedocs.io/en/latest/contributing/index.html)