motion-learning-toolbox

Name: motion-learning-toolbox
Version: 1.0.6
Summary: Python library for preprocessing of XR motion tracking data for machine learning applications.
Upload time: 2024-04-23 08:51:41
Requires Python: >=3.9
Keywords: motion_data, preprocessing
License: This work by Christian Rack, Lukas Schach and Marc E. Latoschik is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/.
GitHub repository: https://github.com/cschell/Motion-Learning-Toolbox
Author email: Christian Rack <mail@chrisrack.de>, Lukas Schach <mail.lukas@schach.one>
# Motion Learning Toolbox

The Motion Learning Toolbox is a Python library designed to facilitate the preprocessing of motion tracking data in extended reality (XR) setups. It is particularly useful for researchers and engineers who want to use XR tracking data as input for machine learning models. Originally developed for academic research on identifying XR users by their motions, the toolbox includes a variety of data encoding methods that enhance machine learning model performance.

The library is still under active development and we continue to add and update functionality; feedback and contributions are very welcome!

## Origins

The methods and techniques in this library have been developed, analysed and applied in our papers:

1. ["Comparison of Data Encodings and Machine Learning Architectures for User Identification on Arbitrary Motion Sequences"](https://ieeexplore.ieee.org/document/10024474), 2022, C. Rack, A. Hotho and M. E. Latoschik, IEEE AIVR
2. ["Who Is Alyx? A new Behavioral Biometric Dataset for User Identification in XR"](https://www.frontiersin.org/articles/10.3389/frvir.2023.1272234/full), 2023, C. Rack, T.  Fernando, M. Yalcin, A. Hotho, and M. E. Latoschik, Frontiers in Virtual Reality
3. ["Extensible Motion-based Identification of XR Users using Non-Specific Motion Data"](https://arxiv.org/abs/2302.07517), 2023, C. Rack, K. Kobs, T. Fernando, A. Hotho, M. E. Latoschik, *arXiv e-prints*

## Importance of Data Encoding
The core features of this library target the encoding of tracking data. Identifying users based on their motions usually starts with a raw stream of positional and rotational data, which we term scene-relative (SR) data. While SR data is informative, it includes information that can distort the learning objective of identification models. For instance, SR data captures not only user-specific characteristics but also the user's arbitrary position in the VR scene, which contributes nothing to user identity. To mitigate this, the Motion Learning Toolbox offers additional encodings, such as:

- **Body-Relative (BR) Data**: Transforms the coordinate system to a frame of reference attached to the user's head, thereby filtering out some of the scene-specific noise.
  
- **Body-Relative Velocity (BRV) Data**: Computes the derivative of BR data over time, isolating the velocity component to focus on actual user movement.

- **Body-Relative Acceleration (BRA) Data**: A further derivative to capture the acceleration features for potentially improved model training.

These alternative encodings enhance the ability of machine learning models to learn user-specific characteristics by minimizing the amount of irrelevant information. Besides these data encoding methods, the Motion Learning Toolbox also provides methods to resample and clean up XR tracking data.
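
To make the transformations concrete, here is a minimal conceptual sketch of the BR and BRV ideas, written against the column layout described in the Data Format section below. It is illustrative only: the joint name `right_hand`, the helper functions, and the use of `scipy` are assumptions, and the library's own `to_body_relative` and `to_velocity` are the proper tools.

```python
# Conceptual sketch of BR and BRV encodings, NOT the library's implementation.
import numpy as np
import pandas as pd
from scipy.spatial.transform import Rotation

def body_relative_position(df: pd.DataFrame, joint: str = "right_hand") -> np.ndarray:
    """Express a joint's position in a head-attached frame of reference."""
    head_pos = df[["head_pos_x", "head_pos_y", "head_pos_z"]].to_numpy()
    head_rot = Rotation.from_quat(
        df[["head_rot_x", "head_rot_y", "head_rot_z", "head_rot_w"]].to_numpy()
    )
    joint_pos = df[[f"{joint}_pos_x", f"{joint}_pos_y", f"{joint}_pos_z"]].to_numpy()
    # Rotate the head-to-joint offset into the head's local coordinate system.
    return head_rot.inv().apply(joint_pos - head_pos)

def to_velocity_naive(positions: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """First-order derivative: frame-to-frame differences scaled by frame rate."""
    return np.diff(positions, axis=0) * fps
```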

## Setup Instructions
To get started with Motion Learning Toolbox, follow these steps:

1. Install the library:
    ```bash
    pip install motion-learning-toolbox
    ```

2. Import the library into your Python script:
    ```python
    import motion_learning_toolbox as mlt
    ```

3. Call the preprocessing functions on your data, for example:
    ```python
    mlt.to_body_relative(...)
    ```

## Features

The following methods are explained in detail and demonstrated in [`examples/demo.ipynb`](examples/demo.ipynb); a typical way to chain them is sketched right after the list.

- Data Cleanup
    - `fix_controller_mapping` - during calibration, XR systems might assign the left and right controllers the wrong way around; this method detects such swaps and renames the columns if necessary.
    - `resample` - resamples the recording to a constant frame rate, using linear interpolation for positions and spherical linear interpolation (slerp) for quaternions.
    - `canonicalize_quaternions` - enforces a unique representation for each quaternion (q and -q describe the same rotation), which is desirable for machine learning models.
- Data Encoding
    - `to_body_relative` - encodes scene-relative (SR) data to body-relative (BR) data.
    - `to_velocity` - encodes SR to scene-relative velocity (SRV) data, or BR to body-relative velocity (BRV) data.
    - `to_acceleration` - encodes SR to scene-relative acceleration (SRA) data, or BR to body-relative acceleration (BRA) data.
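
A typical way to chain these functions is sketched below. The function names are taken from the list above, but the call signatures are assumptions made for illustration; consult [`examples/demo.ipynb`](examples/demo.ipynb) for the actual argument lists.

```python
# Hypothetical preprocessing pipeline. The function names exist in the
# library, but the call signatures shown here are assumptions; see
# examples/demo.ipynb for the real argument lists.
import pandas as pd
import motion_learning_toolbox as mlt

df = pd.read_csv("examples/data.csv")

# Cleanup (assumed signatures):
df = mlt.fix_controller_mapping(df)    # swap left/right columns if needed
df = mlt.resample(df)                  # constant frame rate via lerp/slerp
df = mlt.canonicalize_quaternions(df)  # unique sign per quaternion

# Encoding (assumed signatures): SR -> BR -> BRV / BRA
br = mlt.to_body_relative(df)
brv = mlt.to_velocity(br)
bra = mlt.to_acceleration(br)
```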

## Data Format

This library expects input tracking data as a pandas DataFrame. Positional columns must follow the pattern `<joint>_pos_<x/y/z>`; rotations have to be encoded as quaternions in columns following the pattern `<joint>_rot_<x/y/z/w>`. The order of the columns doesn't matter.

In [`examples/data.csv`](examples/data.csv) you will find an example CSV file that yields a compatible DataFrame when loaded with `pd.read_csv("examples/data.csv")`.
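
For reference, a minimal synthetic DataFrame in this format (a single hypothetical `head` joint with made-up values) can be built like this:

```python
# Minimal synthetic DataFrame in the expected column layout;
# joint name and values are made up for illustration.
import pandas as pd

df = pd.DataFrame({
    # positions: <joint>_pos_<x/y/z>
    "head_pos_x": [0.00, 0.01],
    "head_pos_y": [1.70, 1.70],
    "head_pos_z": [0.00, 0.02],
    # rotations as quaternions: <joint>_rot_<x/y/z/w>
    "head_rot_x": [0.0, 0.0],
    "head_rot_y": [0.0, 0.0],
    "head_rot_z": [0.0, 0.0],
    "head_rot_w": [1.0, 1.0],
})
```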

## Usage Examples

In the [`examples`](examples) folder of the repository, you'll find a Jupyter Notebook named [`demo.ipynb`](examples/demo.ipynb) that demonstrates how to use most of the functions provided by this library. The notebook serves as a practical guide and showcases the functionality with real data.

## Contact

We welcome any discussion, ideas and feedback around this library. Feel free to either open an issue on GitHub or directly contact Christian Rack or Lukas Schach.

## Development

### Build and publish library

1. Bump the version in `pyproject.toml`.
2. Build with `python -m build` (make sure `build` is installed, otherwise run `pip install build`).
3. Upload to PyPI with `twine upload -r pypi dist/* --skip-existing`
   - username: `__token__`
   - password: `<api-token: pypi-...>`

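The same steps as a single shell snippet (assuming the version in `pyproject.toml` has already been bumped, and installing `twine` alongside `build`):

```bash
# Build and publish to PyPI; assumes pyproject.toml already carries the new version.
pip install build twine                      # one-time tooling setup
python -m build                              # writes sdist and wheel into dist/
twine upload -r pypi dist/* --skip-existing  # authenticate with __token__ / pypi-... API token
```
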
## License Information

This work by [Christian Rack, Lukas Schach and Marc E. Latoschik](https://hci.uni-wuerzburg.de) is licensed under [CC BY-NC-SA 4.0](http://creativecommons.org/licenses/by-nc-sa/4.0/).

            
