oceanai

- Name: oceanai
- Version: 1.0.0a43
- Summary: OCEAN-AI
- Home page: https://github.com/DmitryRyumin/oceanai
- Author: Elena Ryumina, Dmitry Ryumin, Alexey Karpov
- Maintainer: Elena Ryumina, Dmitry Ryumin
- License: BSD License
- Requires Python: <4, >=3.9
- Keywords: ocean-ai, machinelearning, statistics, computervision, artificialintelligence, preprocessing
- Upload time: 2024-11-07 07:46:46
            # [OCEAN-AI](https://oceanai.readthedocs.io/en/latest/)

<p align="center">
    <img src="https://raw.githubusercontent.com/aimclub/OCEANAI/main/docs/source/_static/logo.svg" alt="Logo" width="40%">
</p>

---

[![SAI](./docs/source/_static/badges/SAI_badge_flat.svg)](https://sai.itmo.ru/)
[![ITMO](./docs/source/_static/badges/ITMO_badge_flat.svg)](https://en.itmo.ru/en/)

![PyPI](https://img.shields.io/pypi/v/oceanai)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/oceanai)
![PyPI - Implementation](https://img.shields.io/pypi/implementation/oceanai)
![GitHub repo size](https://img.shields.io/github/repo-size/dmitryryumin/oceanai)
![PyPI - Status](https://img.shields.io/pypi/status/oceanai)
![PyPI - License](https://img.shields.io/pypi/l/oceanai)
![GitHub top language](https://img.shields.io/github/languages/top/dmitryryumin/oceanai)
![Documentation Status](https://readthedocs.org/projects/oceanai/badge/?version=latest)
[![App](https://img.shields.io/badge/🤗-DEMO--OCEANAI-FFD21F.svg)](https://huggingface.co/spaces/ElenaRyumina/OCEANAI)

---

| [Documentation in Russian](https://oceanai.readthedocs.io/ru/latest/index.html) |
|---------------------------------------------------------------------------------|

---

<h4 align="center"><span style="color:#EC256F;">Description</span></h4>

---

> **[OCEAN-AI](https://oceanai.readthedocs.io/en/latest/)** is an open-source library of algorithms for intelligent analysis of human behavior based on multimodal data, aimed at automatic personality traits (PT) assessment. The library evaluates five PT: **O**penness to experience, **C**onscientiousness, **E**xtraversion, **A**greeableness, Non-**N**euroticism.

<p align="center">
    <img src="https://raw.githubusercontent.com/aimclub/OCEANAI/main/docs/source/_static/Pipeline_OCEANAI.en.svg" alt="Pipeline">
</p>
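The five traits are typically reported as continuous scores, one per trait. As a minimal, self-contained sketch (this is not the library's API; the function, names, and score values are invented for illustration), a single prediction can be paired with the trait names like this:

```python
# Minimal sketch: pairing Big Five (OCEAN) trait scores with trait names.
# The trait names follow the acronym used above; the scores are made up.
TRAITS = (
    "Openness to experience",
    "Conscientiousness",
    "Extraversion",
    "Agreeableness",
    "Non-Neuroticism",
)

def label_scores(scores):
    """Pair raw model outputs (one value per trait) with trait names."""
    if len(scores) != len(TRAITS):
        raise ValueError("expected exactly five trait scores")
    return dict(zip(TRAITS, scores))

prediction = label_scores([0.62, 0.57, 0.48, 0.71, 0.55])
print(prediction["Agreeableness"])  # -> 0.71
```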

---

**[OCEAN-AI](https://oceanai.readthedocs.io/en/latest/)** includes four main algorithms:

1. Audio Information Analysis Algorithm (AIA).
2. Video Information Analysis Algorithm (VIA).
3. Text Information Analysis Algorithm (TIA).
4. Multimodal Information Fusion Algorithm (MIF).

The AIA, VIA and TIA algorithms implement functions of strong artificial intelligence (AI) by combining acoustic, visual and linguistic features built on different principles (hand-crafted and deep features); that is, they follow a composite (hybrid) AI approach. These algorithms carry out the necessary pre-processing of audio, video and text information, compute visual, acoustic and linguistic features, and output personality trait predictions based on them.
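The hybrid idea can be pictured schematically for a single modality: one branch computes simple hand-crafted descriptors, another produces a "deep" embedding, and the per-branch trait predictions are averaged. The sketch below is a toy, self-contained illustration of that structure; every function and weight in it is invented, and the real models are neural networks:

```python
import statistics

def hand_crafted_features(signal):
    # Simple hand-crafted descriptors: mean, std, min, max of the raw signal.
    return [statistics.fmean(signal), statistics.pstdev(signal),
            min(signal), max(signal)]

def deep_features(signal):
    # Stand-in for a learned embedding: normalized strided absolute sums.
    total = sum(abs(x) for x in signal) or 1.0
    return [sum(abs(x) for x in signal[i::4]) / total for i in range(4)]

def predict_traits(features, weights):
    # A linear "model" per trait, clamped to [0, 1]; purely illustrative.
    return [max(0.0, min(1.0, sum(w * f for w, f in zip(row, features))))
            for row in weights]

signal = [0.1, -0.2, 0.3, 0.05, -0.1, 0.2, 0.15, -0.05]
hc = hand_crafted_features(signal)
nn = deep_features(signal)
# Average the five trait scores from the two branches:
w_hc = [[0.5, 0.2, 0.1, 0.1]] * 5
w_nn = [[0.25] * 4] * 5
scores = [(a + b) / 2 for a, b in zip(predict_traits(hc, w_hc),
                                      predict_traits(nn, w_nn))]
```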

The MIF algorithm combines the three information analysis algorithms (AIA, VIA and TIA): it performs feature-level fusion of the features obtained by them.
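Feature-level fusion can be pictured as concatenating the per-modality feature vectors into one joint representation before a single model consumes them. The sketch below is a toy illustration of that idea, not the MIF implementation:

```python
def fuse_features(audio_feats, video_feats, text_feats):
    """Feature-level (early) fusion: concatenate per-modality vectors
    into one joint representation for a downstream trait predictor."""
    return list(audio_feats) + list(video_feats) + list(text_feats)

# Toy vectors standing in for AIA, VIA and TIA outputs:
fused = fuse_features([0.2, 0.4], [0.1, 0.9, 0.3], [0.7])
print(len(fused))  # -> 6
```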

In addition to the main task of unimodal and multimodal personality traits assessment, the features implemented in **[OCEAN-AI](https://oceanai.readthedocs.io/en/latest/)** allow researchers to address other human behavior analysis problems, for example affective state recognition.

To install the library, see the **[Installation and Update](https://oceanai.readthedocs.io/en/latest/user_guide/installation.html#id2)** guide.

To work with audio information, see **[Audio information processing](https://oceanai.readthedocs.io/en/latest/user_guide/samples/audio.html)**.

To work with video information, see **[Video information processing](https://oceanai.readthedocs.io/en/latest/user_guide/samples/video.html)**.

To work with text information, see **[Text information processing](https://oceanai.readthedocs.io/en/latest/user_guide/samples/text.html)**.

To work with multimodal information, see **[Multimodal information processing](https://oceanai.readthedocs.io/en/latest/user_guide/samples/multimodal.html)**.

The library can be used to solve the following practical tasks:

1. **[Ranking of potential candidates by professional responsibilities](https://oceanai.readthedocs.io/en/latest/user_guide/notebooks/Pipeline_practical_task_1.html)**.
2. **[Predicting consumer preferences for industrial goods](https://oceanai.readthedocs.io/en/latest/user_guide/notebooks/Pipeline_practical_task_2.html)**.
3. **[Forming effective work teams](https://oceanai.readthedocs.io/ru/latest/user_guide/notebooks/Pipeline_practical_task_3.html)**.

**[OCEAN-AI](https://oceanai.readthedocs.io/en/latest/)** uses modern open-source libraries for audio, video and text processing: **[librosa](https://librosa.org/)**, **[openSMILE](https://audeering.github.io/opensmile-python/)**, **[OpenCV](https://pypi.org/project/opencv-python/)**, **[MediaPipe](https://google.github.io/mediapipe/getting_started/python)**, **[Transformers](https://pypi.org/project/transformers)**.

**[OCEAN-AI](https://oceanai.readthedocs.io/en/latest/)** is written in the **[Python programming language](https://www.python.org/)**. The neural network models are implemented and trained using the open-source **[PyTorch](https://pytorch.org/)** library.

---

## Research data

The **[OCEAN-AI](https://oceanai.readthedocs.io/en/latest/)** library was tested on two corpora:

1) The publicly available, large-scale **[First Impressions V2 corpus](https://chalearnlap.cvc.uab.cat/dataset/24/description/)**.
2) The first publicly available Russian-language **[Multimodal Personality Traits Assessment (MuPTA) corpus](https://hci.nw.ru/en/pages/mupta-corpus)**.

---

| [Development team](https://oceanai.readthedocs.io/en/latest/about.html) |
|-------------------------------------------------------------------------|

---

## Certificate of state registration of a computer program

**[Library of algorithms for intelligent analysis of human behavior based on multimodal data, providing human's personality traits assessment to perform professional duties (OCEAN-AI)](https://new.fips.ru/registers-doc-view/fips_servlet?DB=EVM&DocNumber=2023613724&TypeFile=html)**

## Certificate of state registration of a database

**[MuPTA - Multimodal Personality Traits Assessment Corpus](https://new.fips.ru/registers-doc-view/fips_servlet?DB=DB&DocNumber=2023624011&TypeFile=html)**

---

## Publications

### Journals

```bibtex
@article{ryumina24_prl,
    author = {Ryumina, Elena and Markitantov, Maxim and Ryumin, Dmitry and Karpov, Alexey},
    title = {{Gated Siamese Fusion Network based on Multimodal Deep and Hand-Crafted Features for Personality Traits Assessment}},
    volume = {185},
    pages = {45--51},
    journal = {Pattern Recognition Letters},
    year = {2024},
    issn = {0167-8655},
    doi = {10.1016/j.patrec.2024.07.004},
    url = {https://www.sciencedirect.com/science/article/pii/S0167865524002071},
}
```

```bibtex
@article{ryumina24_eswa,
    author = {Elena Ryumina and Maxim Markitantov and Dmitry Ryumin and Alexey Karpov},
    title = {OCEAN-AI Framework with EmoFormer Cross-Hemiface Attention Approach for Personality Traits Assessment},
    journal = {Expert Systems with Applications},
    volume = {239},
    pages = {122441},
    year = {2024},
    doi = {10.1016/j.eswa.2023.122441},
}
```

```bibtex
@article{ryumina22_neurocomputing,
    author = {Elena Ryumina and Denis Dresvyanskiy and Alexey Karpov},
    title = {In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study},
    journal = {Neurocomputing},
    volume = {514},
    pages = {435--450},
    year = {2022},
    doi = {10.1016/j.neucom.2022.10.013},
}
```

### Conferences

```bibtex
@inproceedings{ryumina24_interspeech,
    author = {Elena Ryumina and Dmitry Ryumin and Alexey Karpov},
    title = {OCEAN-AI: Open Multimodal Framework for Personality Traits Assessment and HR-Processes Automatization},
    year = {2024},
    booktitle = {INTERSPEECH},
    pages = {3630--3631},
    url = {https://www.isca-archive.org/interspeech_2024/ryumina24_interspeech.html},
}
```

```bibtex
@inproceedings{ryumina23_interspeech,
    author = {Elena Ryumina and Dmitry Ryumin and Maxim Markitantov and Heysem Kaya and Alexey Karpov},
    title = {Multimodal Personality Traits Assessment (MuPTA) Corpus: The Impact of Spontaneous and Read Speech},
    year = {2023},
    booktitle = {INTERSPEECH},
    pages = {4049--4053},
    doi = {10.21437/Interspeech.2023-1686},
}
```

---

## Supported by

The study is supported by the [Research Center Strong Artificial Intelligence in Industry](https://sai.itmo.ru/)
of [ITMO University](https://en.itmo.ru/) as part of the center's program plan: development and testing of an experimental prototype of a library of strong AI algorithms for hybrid decision-making, based on the interaction between the AI and the decision maker and on models of the decision maker's professional behavior and cognitive processes in poorly formalized tasks with high uncertainty.

            
