personalitylinmult

* Name: personalitylinmult
* Version: 1.0.0
* Summary: PersonalityLinMulT: Transformer-based Big Five Automatic Personality Perception.
* Upload time: 2024-11-05 14:08:23
* Requires Python: >=3.11
* Keywords: app, bigfive, linear-complexity attention, multimodal, personality, transformer
* Requirements: none recorded
# PersonalityLinMulT
[![License](https://img.shields.io/badge/license-MIT-yellow.svg)](LICENSE)
[![python](https://img.shields.io/badge/Python-3.11-3776AB.svg?style=flat&logo=python&logoColor=white)](https://www.python.org)
[![pytorch](https://img.shields.io/badge/PyTorch-2.4.1-EE4C2C.svg?style=flat&logo=pytorch)](https://pytorch.org)

LinMulT is trained for Big Five personality trait estimation using the First Impressions V2 dataset and sentiment estimation using the MOSI and MOSEI datasets.
* paper: **Multimodal Sentiment and Personality Perception Under Speech: A Comparison of Transformer-based Architectures** ([pdf](https://proceedings.mlr.press/v173/fodor22a/fodor22a.pdf), [website](https://proceedings.mlr.press/v173/fodor22a.html))

# Setup
### Install package from PyPI for inference
```
pip install personalitylinmult
```

### Install package for training
```
git clone https://github.com/fodorad/PersonalityLinMulT
cd PersonalityLinMulT
pip install -e .[all]
pip install -U -r requirements.txt
```

#### Supported extras:
| extras tag | description |
| --- | --- |
| train | dependencies for feature extraction, training the model from scratch, and visualization |
| all | extends the 'train' dependencies for development; currently identical to the 'train' tag |


# Related projects

### exordium
Collection of preprocessing functions and deep learning methods. This repository contains revised code for fine landmark detection (face, eye region, iris, and pupil landmarks), head pose estimation, and eye feature calculation.
* code: https://github.com/fodorad/exordium

### (2022) LinMulT
General-purpose Multimodal Transformer with Linear Complexity Attention Mechanism. This base model is further modified and trained for various tasks and datasets.
* code: https://github.com/fodorad/LinMulT
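
To give a sense of what "linear-complexity attention" means here, below is a minimal numpy sketch of kernel-feature-map linear attention in the style of Katharopoulos et al. (2020). This is an illustration of the general technique, not LinMulT's actual implementation; all names (`linear_attention`, `phi`) are hypothetical.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    # Hypothetical sketch: kernel feature map phi(x) = elu(x) + 1 keeps
    # all values positive, so no softmax is needed.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)
    # Associativity trick: Qp @ (Kp.T @ V) equals (Qp @ Kp.T) @ V,
    # but costs O(n * d^2) instead of O(n^2 * d) in sequence length n.
    KV = Kp.T @ V                    # (d, d_v), independent of n
    Z = Qp @ Kp.sum(axis=0)          # (n,) row-wise normalizer
    return (Qp @ KV) / (Z[:, None] + eps)

n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # prints (8, 4)
```

The output matches ordinary (unnormalized-kernel) attention computed with the quadratic `n x n` weight matrix; only the evaluation order changes.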

### (2023) BlinkLinMulT
LinMulT is trained for blink presence detection and eye state recognition tasks.
Our results demonstrate comparable or superior performance to state-of-the-art models on two tasks, using seven public benchmark databases.
* paper: BlinkLinMulT: Transformer-based Eye Blink Detection ([pdf](https://adamfodor.com/pdf/2023_Fodor_Adam_MDPI_BlinkLinMulT.pdf), [website](https://www.mdpi.com/2313-433X/9/10/196))
* code: https://github.com/fodorad/BlinkLinMulT


# Citation - BibTex
If you find our research helpful or influential, please consider citing:

### (2023) BlinkLinMulT for blink presence detection and eye state recognition
```
@Article{fodor2023blinklinmult,
  title = {BlinkLinMulT: Transformer-Based Eye Blink Detection},
  author = {Fodor, Ádám and Fenech, Kristian and Lőrincz, András},
  journal = {Journal of Imaging},
  volume = {9},
  year = {2023},
  number = {10},
  article-number = {196},
  url = {https://www.mdpi.com/2313-433X/9/10/196},
  PubMedID = {37888303},
  ISSN = {2313-433X},
  DOI = {10.3390/jimaging9100196}
}
```

### (2022) LinMulT for personality trait and sentiment estimation
```
@InProceedings{pmlr-v173-fodor22a,
  title = {Multimodal Sentiment and Personality Perception Under Speech: A Comparison of Transformer-based Architectures},
  author = {Fodor, {\'A}d{\'a}m and Saboundji, Rachid R. and Jacques Junior, Julio C. S. and Escalera, Sergio and Gallardo-Pujol, David and L{\H{o}}rincz, Andr{\'a}s},
  booktitle = {Understanding Social Behavior in Dyadic and Small Group Interactions},
  pages = {218--241},
  year = {2022},
  editor = {Palmero, Cristina and Jacques Junior, Julio C. S. and Clapés, Albert and Guyon, Isabelle and Tu, Wei-Wei and Moeslund, Thomas B. and Escalera, Sergio},
  volume = {173},
  series = {Proceedings of Machine Learning Research},
  month = {16 Oct},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v173/fodor22a/fodor22a.pdf},
  url = {https://proceedings.mlr.press/v173/fodor22a.html}
}
```

# Contact
* Ádám Fodor (foauaai@inf.elte.hu)
            
