# Multimodal Emotion Expression Capture Amsterdam
[![github license badge](https://img.shields.io/github/license/mexca/mexca)](https://github.com/mexca/mexca)
[![RSD](https://img.shields.io/badge/rsd-mexca-00a3e3.svg)](https://research-software-directory.org/software/mexca)
[![read the docs badge](https://readthedocs.org/projects/pip/badge/)](https://mexca.readthedocs.io/en/latest/index.html)
[![fair-software badge](https://img.shields.io/badge/fair--software.eu-%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8B-yellow)](https://fair-software.eu)
[![workflow scq badge](https://sonarcloud.io/api/project_badges/measure?project=mexca_mexca&metric=alert_status)](https://sonarcloud.io/dashboard?id=mexca_mexca)
[![workflow scc badge](https://sonarcloud.io/api/project_badges/measure?project=mexca_mexca&metric=coverage)](https://sonarcloud.io/dashboard?id=mexca_mexca)
[![build](https://github.com/mexca/mexca/actions/workflows/build.yml/badge.svg)](https://github.com/mexca/mexca/actions/workflows/build.yml)
[![cffconvert](https://github.com/mexca/mexca/actions/workflows/cffconvert.yml/badge.svg)](https://github.com/mexca/mexca/actions/workflows/cffconvert.yml)
[![markdown-link-check](https://github.com/mexca/mexca/actions/workflows/markdown-link-check.yml/badge.svg)](https://github.com/mexca/mexca/actions/workflows/markdown-link-check.yml)
[![DOI](https://zenodo.org/badge/500818250.svg)](https://zenodo.org/badge/latestdoi/500818250)
[![docker hub badge](https://img.shields.io/static/v1?label=Docker%20Hub&message=mexca&color=blue&style=flat&logo=docker)](https://hub.docker.com/u/mexca)
[![docker build badge](https://img.shields.io/github/actions/workflow/status/mexca/mexca/docker.yml?label=Docker%20build&logo=docker)](https://github.com/mexca/mexca/actions/workflows/docker.yml)
[![black code style badge](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
<div align="center">
<img src="mexca_logo.png" width="400">
</div>
mexca is an open-source Python package that aims to capture human emotion expressions from videos in a single pipeline.
Check out our preprint:
Lüken, M., Moodley, K., Viviani, E., Pipal, C., & Schumacher, G. (2024, January 18). MEXCA - A simple and robust pipeline for capturing emotion expressions in faces, vocalization, and speech. *PsyArXiv*. https://doi.org/10.31234/osf.io/56svb
## How To Use mexca
mexca implements the customizable yet easy-to-use Multimodal Emotion eXpression Capture Amsterdam (MEXCA) pipeline for extracting emotion expression features from videos.
It contains building blocks that can be used to extract features for individual modalities (i.e., facial expressions, voice, and dialogue/spoken text).
The blocks can also be integrated into a single pipeline to extract the features from all modalities at once.
In addition to extracting features, mexca can identify the speakers shown in a video by clustering speaker and face representations.
This allows users to compare emotion expressions across speakers, time, and contexts.
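The identity-matching idea above can be sketched in miniature: embeddings of the same face or voice lie close together, so clustering them groups observations by person. The toy k-means below (plain Python, 2-D points) only illustrates that principle; mexca's real components cluster learned neural embeddings, and nothing in this snippet is part of the mexca API.

```python
# Toy illustration of identity clustering: observations of the same
# speaker produce nearby embedding vectors, so clustering the vectors
# assigns a consistent identity label to each observation.
import random


def kmeans(points, k, iters=20, seed=0):
    """Cluster `points` (lists of floats) into k groups; return labels."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels


# Two well-separated "speakers": embeddings near (0, 0) and near (10, 10).
embeddings = [[0.1, 0.2], [0.0, -0.1], [0.2, 0.0], [9.9, 10.1], [10.2, 9.8]]
labels = kmeans(embeddings, k=2)
```

All observations of the same speaker end up with the same label, which is what lets downstream analyses compare expressions across speakers and time.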
Please cite mexca if you use it for scientific or commercial purposes.
<div align="center">
<img src="docs/mexca_flowchart.png" width="600">
</div>
## Installation
mexca can be installed on Windows, macOS, and Linux. We recommend Windows 10, macOS 12.6.x, or Ubuntu. The base package can be installed from PyPI via `pip`:
```console
pip install mexca
```
The dependencies for the additional components can be installed via:
```console
pip install mexca[vid,spe,voi,tra,sen]
```
or:
```console
pip install mexca[all]
```
The extras correspond to the following pipeline components:
* `vid`: FaceExtractor
* `spe`: SpeakerIdentifier
* `voi`: VoiceExtractor
* `tra`: AudioTranscriber
* `sen`: SentimentExtractor
For details on the requirements and installation procedure, see the [Quick Installation](https://mexca.readthedocs.io/en/latest/quick_installation.html) and [Installation Details](https://mexca.readthedocs.io/en/latest/installation_details.html) sections of our documentation.
## Getting Started
If you would like to learn how to use mexca, take a look at our [demo](https://github.com/mexca/mexca/tree/main/examples) notebook and the [Getting Started](https://mexca.readthedocs.io/en/latest/getting_started.html) section of our documentation.
## Examples and Recipes
In the `examples/` folder, we currently provide two Jupyter notebooks (and a short demo):
- [example_custom_pipeline_components](https://github.com/mexca/mexca/blob/main/examples/example_custom_pipeline_components.ipynb) shows how the standard MEXCA pipeline can be customized and extended
- [example_emotion_feature_extraction](https://github.com/mexca/mexca/blob/main/examples/example_emotion_feature_extraction.ipynb) shows how to apply the MEXCA pipeline to a video and conduct a basic analysis of the extracted features
The `recipes/` folder contains two Python scripts that can easily be reused in a new project:
- [recipe_postprocess_features](https://github.com/mexca/mexca/blob/main/recipes/recipe_postprocess_features.py) applies a standard postprocessing routine to extracted features
- [recipe_standard_pipeline](https://github.com/mexca/mexca/blob/main/recipes/recipe_standard_pipeline.py) applies the standard MEXCA pipeline to a list of videos
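The batch pattern used by recipe_standard_pipeline can be sketched as follows. Here `apply_pipeline` is a hypothetical stand-in for the real mexca pipeline call (see the recipe for the actual API); the sketch only shows the loop structure of processing many videos while tolerating individual failures.

```python
# Sketch of a batch loop over videos: process every file, collect
# results, and record failures instead of aborting the whole run.
from pathlib import Path


def apply_pipeline(video: Path) -> dict:
    # Hypothetical placeholder for the mexca pipeline call; the real
    # pipeline returns extracted emotion expression features.
    return {"file": video.name, "ok": True}


def process_videos(paths):
    results, failures = [], []
    for path in map(Path, paths):
        try:
            results.append(apply_pipeline(path))
        except Exception as exc:  # keep processing the remaining videos
            failures.append((path.name, exc))
    return results, failures


results, failures = process_videos(["a.mp4", "b.mp4"])
```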
## Components
The pipeline components are described [here](https://mexca.readthedocs.io/en/latest/components.html).
## Documentation
The documentation of mexca can be found on [Read the Docs](https://mexca.readthedocs.io/en/latest/index.html).
## Contributing
If you want to contribute to the development of mexca,
have a look at the [contribution guidelines](CONTRIBUTING.md).
## License
The code is licensed under the Apache 2.0 License. This means that mexca can be used, modified and redistributed for free, even for commercial purposes.
## Credits
mexca is being developed by the [Netherlands eScience Center](https://www.esciencecenter.nl/) in collaboration with the [Hot Politics Lab](http://www.hotpolitics.eu/) at the University of Amsterdam.
This package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the [NLeSC/python-template](https://github.com/NLeSC/python-template).