LabGym

Name: LabGym
Version: 2.4.4
Summary: Quantify user-defined behaviors.
Upload time: 2024-04-16 02:19:53
Requires Python: <3.11,>=3.9
License: GPL-3.0
Keywords: behavior analysis, behavioral analysis, user defined behaviors

# LabGym: quantifying user-defined behaviors

[![PyPI - Version](https://img.shields.io/pypi/v/LabGym)](https://pypi.org/project/LabGym/)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/LabGym)](https://pypi.org/project/LabGym/)
[![Downloads](https://static.pepy.tech/badge/LabGym)](https://pepy.tech/project/LabGym)
[![Documentation Status](https://readthedocs.org/projects/labgym/badge/?version=latest)](https://labgym.readthedocs.io/en/latest/?badge=latest)

<!-- start elevator-pitch -->
LabGym can:

1. **TRACK** multiple animals / objects without restrictions on recording environments
2. **IDENTIFY** user-defined social or non-social behaviors without restrictions on behavior types / animal species
3. **QUANTIFY** user-defined behaviors by providing quantitative measures for each behavior
4. **MINE** the analysis results to show statistically significant findings

A tutorial video for a high-level understanding of what LabGym can do, how it works, and how to use it:

[![Watch the video](https://img.youtube.com/vi/YoYhHMPbf_o/hqdefault.jpg)](https://youtu.be/YoYhHMPbf_o)


Cite LabGym: <https://www.cell.com/cell-reports-methods/fulltext/S2667-2375(23)00026-7>.

<!-- end elevator-pitch -->

For installation instructions and documentation, please see [https://labgym.readthedocs.io](https://labgym.readthedocs.io).

> [!NOTE]
> We are currently in the process of migrating documentation to the above website. If you can't find information you're looking for there,
> refer to the [extended user guide](./LabGym%20user%20guide_v2.2.pdf) for a more detailed reference on how to use LabGym.

<p>&nbsp;</p>

<!-- start what-can-labgym-do -->
## Identifies user-defined behaviors

LabGym is equipped with three distinct modes of behavior identification to suit different scenarios:

1. **'Interactive advanced'**

   This mode is for analyzing the behavior of each individual in a group of animals or objects. Below are two examples. Left: LabGym can differentiate between a finger 'holding a peanut' vs. 'offering a peanut', discern whether a chipmunk is 'taking the offer' or 'loading peanut', and identify the status of the peanut itself as 'being held', 'being taken', or 'being loaded'. Right: in a group of flies, it can distinguish which fly is 'singing' a courtship song, which one is 'being courted', which one is 'resting', and which one is in 'locomotion'.

   ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Categorizer_chipmunks_1.gif?raw=true)    ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Categorizer_flies_1.gif?raw=true)

2. **'Interactive basic'**

   Optimized for speed, this mode treats the entire interactive group (two or more individuals) as one entity, reducing processing time compared to the **'Interactive advanced'** mode. It is ideal for scenarios where individual behaviors within the group are uniform or when the specific actions of each member are not the primary focus of the study.

   ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Categorizer_chipmunks_2.gif?raw=true)    ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Categorizer_flies_2.gif?raw=true)

3. **'Non-interactive'**

   This mode is for identifying solitary behaviors of individuals that are not engaging in interactive activities. It is suitable for studies where the emphasis is on non-social or independent behaviors.

   ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Categorizer_mice_1.gif?raw=true)    ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Categorizer_mice_2.gif?raw=true)

   ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Categorizer_larvae.gif?raw=true)    ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Categorizer_rats.gif?raw=true)

<p>&nbsp;</p>

## Quantifies user-defined behaviors

LabGym computes a range of motion and kinematics parameters for each behavior defined by users. The parameters include **count**, **duration**, and **latency** of behavioral incidents, as well as **speed**, **acceleration**, **distance traveled**, and the **intensity** and **vigor** of motions during the behaviors. LabGym outputs these parameters in spreadsheets.

![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Analysis_output.jpg?raw=true)
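
As an illustration of how you might consume these spreadsheets downstream, here is a short pandas sketch; the file name and column layout are hypothetical and depend on your own behavior definitions and output settings.

```python
# Hypothetical example of loading one of LabGym's output spreadsheets;
# adjust the path and column names to match your actual analysis output.
import pandas as pd

durations = pd.read_excel("analysis_output/all_durations.xlsx", index_col=0)
print(durations.describe())             # summary statistics per behavior column
print(durations.mean().sort_values())   # mean duration of each behavior
```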

<p>&nbsp;</p>

LabGym also provides visualization of analysis results, including annotated videos that visually mark each behavior event and temporal raster plots that show every behavior event over time. The temporal raster plots below were output by LabGym and show the changes in behavior events of rodents before and after amphetamine treatment (see the detailed explanation in Hu et al. (2023) Cell Reports Methods 3(3): 100415).

![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Quantify%20behavior.jpg?raw=true)
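
For intuition about what a temporal raster plot encodes (one row per animal, one tick per behavior event over time), here is a minimal matplotlib sketch with made-up event times; it is unrelated to LabGym's own plotting code.

```python
# Minimal temporal raster plot with made-up event times (in seconds);
# each row is one animal, each tick one occurrence of the behavior.
import matplotlib.pyplot as plt

events = [
    [3.1, 7.4, 12.0, 18.6],  # animal 1
    [1.2, 5.5, 6.1, 14.3],   # animal 2
    [2.8, 9.9, 16.7],        # animal 3
]
plt.eventplot(events, colors="black", lineoffsets=[1, 2, 3])
plt.xlabel("Time (s)")
plt.ylabel("Animal")
plt.title("Behavior events over time")
plt.show()
```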

<p>&nbsp;</p>

## Mines the analysis results

LabGym outputs diverse spreadsheets that store the behavioral parameters it calculates. Digging through these spreadsheets manually for statistical analysis across experimental groups, behavior types, and parameters is labor-intensive. To address this, LabGym includes a data-mining module that automatically performs statistical tests on every behavioral parameter across the experimental groups of your choice. The data-mining result below displays the details of the comparisons that show statistically significant differences between groups.

![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Results_mining.jpg?raw=true)
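
To make the idea concrete, the snippet below hand-rolls one comparison of the kind the data-mining module automates: testing whether a single behavioral parameter differs between two experimental groups. The data and the choice of test are illustrative assumptions, not LabGym's internal code.

```python
# Hand-rolled version of one comparison the data-mining module automates:
# test whether a behavior parameter differs between two experimental groups.
from scipy.stats import mannwhitneyu

# Hypothetical per-animal values of one parameter (e.g., duration of a behavior, s)
control = [4.1, 3.8, 5.2, 4.7, 3.9]
treatment = [6.3, 5.9, 7.1, 6.8, 6.0]

stat, p = mannwhitneyu(control, treatment, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```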

<p>&nbsp;</p>

<!-- end what-can-labgym-do -->

## Accessible and User-Friendly

LabGym requires no coding: its intuitive user interface makes it easy to use regardless of programming skills. While the software operates efficiently without GPUs, it runs faster on systems equipped with an NVIDIA GPU and the CUDA toolkit (version 11.7) installed.
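
Since the Detectors are built on Detectron2 (which uses PyTorch), a quick way to confirm that your NVIDIA GPU is visible before starting an analysis is a check like the following; LabGym itself needs no coding, so this is purely a diagnostic sketch.

```python
# Diagnostic sketch: check whether CUDA is visible to PyTorch.
import torch

if torch.cuda.is_available():
    print("CUDA available:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device found; analysis will run on the CPU.")
```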

<p>&nbsp;</p>

# How to use LabGym?

Extended user guide: [LabGym user guide v2.2](https://github.com/yujiahu415/LabGym/blob/master/LabGym%20user%20guide_v2.2.pdf).

***Hover your mouse cursor over each button in the user interface to see a detailed description of it.***

<p>&nbsp;</p>

![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/User%20interface.jpg?raw=true)

<p>&nbsp;</p>

LabGym comprises three modules, each tailored to streamline the analysis process. Together, these modules create a cohesive workflow, enabling users to prepare, train, and analyze their behavioral data with accuracy and ease.

1. **'Preprocessing Module'**: This module optimizes video footage for analysis. It can enhance video contrast to make relevant details more discernible, trim videos to the necessary time windows, and crop frames to remove irrelevant regions (see the sketch after this list for roughly what these operations amount to).

2. **'Training Module'**: Here, you customize LabGym to your specific research needs. You can train a Detector to detect the animals or objects of interest in your videos, and a Categorizer to recognize the behaviors you define.

3. **'Analysis Module'**: After customizing LabGym to your needs, use this module for automated behavioral analysis of videos. It not only outputs comprehensive analysis results but also mines those results for significant findings.
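
For readers curious what the preprocessing operations amount to outside the GUI, here is a rough OpenCV sketch of trimming, cropping, and contrast enhancement; the paths, frame range, and crop box are example values only, and the GUI performs all of this for you.

```python
# Rough, hand-rolled equivalents of the Preprocessing Module's operations
# (paths, time window, crop box, and contrast gain are example values only).
import cv2

cap = cv2.VideoCapture("raw_video.avi")
fps = cap.get(cv2.CAP_PROP_FPS) or 30
out = cv2.VideoWriter("preprocessed.avi",
                      cv2.VideoWriter_fourcc(*"MJPG"), fps, (400, 300))

start_f, end_f = int(10 * fps), int(40 * fps)  # trim to the 10 s - 40 s window
cap.set(cv2.CAP_PROP_POS_FRAMES, start_f)
for _ in range(start_f, end_f):
    ok, frame = cap.read()
    if not ok:
        break
    frame = frame[50:350, 100:500]                          # crop to a 400x300 region
    frame = cv2.convertScaleAbs(frame, alpha=1.5, beta=0)   # boost contrast
    out.write(frame)

cap.release()
out.release()
```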

<p>&nbsp;</p>

## How to teach LabGym to recognize behaviors defined by you

Follow these three steps (three buttons in the **‘Training Module’**):

1. **'Generate Behavior Examples'**: Use this button to input your video files and let LabGym generate behavior examples from them. Each behavior example comprises an **Animation** paired with a **Pattern Image**, spanning a behavior episode of a duration you define.

2. **'Sort Behavior Examples'**: Once your behavior examples are generated, use this button to select appropriate examples and sort them by behavior type (see the sketch after this list for what the sorted layout amounts to).

3. **'Train Categorizers'**: Finally, use this button to feed the sorted behavior examples into the system to train a Categorizer. The trained Categorizer is stored in LabGym and can categorize these behaviors automatically in future analyses.
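
Sorting is done entirely in the GUI, but conceptually it amounts to grouping each paired **Animation** and **Pattern Image** into a folder named after the behavior you assign. The sketch below is only a hypothetical illustration of that grouping; the folder layout and file extensions are assumptions, so check the user guide for the exact structure LabGym expects.

```python
# Hypothetical illustration of sorting: move each paired Animation/Pattern Image
# into a folder named after its behavior label. Layout and extensions are assumptions.
import shutil
from pathlib import Path

examples = Path("generated_examples")
sorted_root = Path("sorted_examples")

# Behavior label you assigned to each example while reviewing it
labels = {"example_001": "rearing", "example_002": "grooming"}

for stem, behavior in labels.items():
    target = sorted_root / behavior
    target.mkdir(parents=True, exist_ok=True)
    for suffix in (".avi", ".jpg"):
        src = examples / f"{stem}{suffix}"
        if src.exists():
            shutil.move(str(src), str(target / src.name))
```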

LabGym has three modes of behavior identification, so it offers three modes for behavior examples:

1. **'Interactive advanced'**

   In this mode, each pair of **Animation** and **Pattern Image** highlights an entire interactive group, with a 'spotlight' focusing on the main character. The categorization is based on the main character’s behavior. In the example below, behaviors of the chipmunk ('taking the offer'), the peanuts ('being taken' and 'being held'), and the hand ('offering peanut') can be distinguished.

   ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Chipmunks.gif?raw=true)

2. **'Interactive basic'**

   This mode captures all relevant animals or objects in each pair of **Animation** and **Pattern Image**, which are sorted as one collective unit. In the fly courtship example below, behaviors such as 'orientating', 'singing while licking', and 'attempted copulation' can be categorized based on the male fly's courtship actions while disregarding the female's response, if the research focuses only on male courtship behavior.

   ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Flies.gif?raw=true)

3. **'Non-interactive'**

   Each pair of **Animation** and **Pattern Image** in this mode represents a 'monodrama', focusing solely on individual animals or objects without interaction.

   ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Mice.gif?raw=true)

   ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Larvae.gif?raw=true)

<p>&nbsp;</p>

## How does LabGym detect animals / objects?

LabGym employs two distinct methods for detecting animals or objects in different scenarios.

1. **Subtract background**

   This method is fast and accurate but requires stable illumination and a static background in the videos to be analyzed. It does not require training neural networks, but you need to define a time window during which the animals are in motion for effective background extraction. A shorter time window leads to quicker processing; typically, 10 to 30 seconds is adequate. (A rough code sketch of the idea appears after this list.)

    ***How to select an appropriate time window for background extraction?***

    To determine the optimal time window for background extraction, consider the animal's movement throughout the video. In the 60-second example below, selecting a 20-second window in which the mouse moves frequently and covers different areas is ideal. The following three images are backgrounds extracted using the first, second, and last 20 seconds, respectively. In the first and last 20 seconds, the mouse mostly stays on the left or right side and moves little, so the extracted backgrounds contain a trace of the animal, which is not ideal. In the second 20 seconds, the mouse moves around frequently and the extracted background is clean:

    ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Background_extraction_demo.gif?raw=true)  ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Extracted_background_0-20.jpg?raw=true)  ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Extracted_background_20-40.jpg?raw=true)  ![alt text](https://github.com/yujiahu415/LabGym/blob/master/Examples/Extracted_background_40-60.jpg?raw=true)

2. **Use trained Detectors**

   This method incorporates Detectron2 (https://github.com/facebookresearch/detectron2), offering more versatility but at a slower processing speed compared to the **‘Subtract Background’** method. It excels in differentiating individual animals or objects, even during collisions, which is particularly beneficial for the **'Interactive advanced'** mode. To enhance processing speed, use a GPU or reduce the frame size during analysis. To train a **Detector** in **‘Training Module’**: 

    1. Click the **‘Generate Image Examples’** button to extract image frames from videos.
    2. Use free online annotation tools like Roboflow (https://roboflow.com) to annotate the outlines of animals or objects in these images. For annotation type, choose 'Instance Segmentation', and export the annotations in 'COCO instance segmentation' format, which generates a ‘*.json’ file. In **'Interactive advanced'** mode, focus on images where individuals collide and annotate these boundaries precisely. Exposing the **Detector** to a variety of collision scenarios during training can significantly minimize identity switching in subsequent analyses.
    3. Use the **‘Train Detectors’** button to input these annotated images and commence training your **Detectors**.
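
The code below sketches the idea behind the **'Subtract background'** method: estimate a static background as a pixel-wise median over frames sampled from the chosen time window, then segment animals as pixels that differ from it. It is a simplified illustration, not LabGym's actual implementation; the file name, window, and threshold are example values.

```python
# Simplified sketch of background subtraction via a temporal median
# (illustrative only; not LabGym's actual implementation).
import cv2
import numpy as np

def extract_background(video_path, start_s=20, end_s=40, step=10):
    """Estimate a static background from frames sampled within [start_s, end_s]."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    frames = []
    for i in range(int(start_s * fps), int(end_s * fps), step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    # The pixel-wise median removes the animal if it moves around enough
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

def foreground_mask(frame, background, thresh=30):
    """Binary mask of pixels that differ from the extracted background."""
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return mask
```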

## Installation

Please refer to the installation instructions in the 
[documentation](https://labgym.readthedocs.io/en/latest/installation.html).
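
Once you have followed the installation steps there, a quick post-install sanity check might look like the following; it only assumes a standard pip install of the PyPI distribution into a supported Python (3.9 or 3.10).

```python
# Post-install sanity check: supported Python version and LabGym present.
import sys
from importlib.metadata import version

assert (3, 9) <= sys.version_info[:2] < (3, 11), "LabGym 2.4.4 requires Python >=3.9,<3.11"
print("LabGym", version("LabGym"))  # prints the installed LabGym version
```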

## Reporting Issues

To report an issue with LabGym, refer to the walkthrough in the 
[documentation](https://labgym.readthedocs.io/en/latest/issues.html).

            
