linmult

Name: linmult
Version: 1.3.6
Summary: General-purpose Multimodal Transformer with Linear Complexity Attention Mechanism.
Author: fodorad <foauaai@inf.elte.hu>
Requires Python: >=3.10
Keywords: linear-complexity attention, multimodal, transformer
Upload time: 2024-10-11 11:02:31
Source: https://github.com/fodorad/linmult
Requirements: none recorded
# LinMulT
[![License](https://img.shields.io/badge/license-MIT-yellow.svg)](LICENSE)
[![python](https://img.shields.io/badge/Python-3.10-3776AB.svg?style=flat&logo=python&logoColor=white)](https://www.python.org)
[![pytorch](https://img.shields.io/badge/PyTorch-2.0.1-EE4C2C.svg?style=flat&logo=pytorch)](https://pytorch.org)

General-purpose Multimodal Transformer with Linear Complexity Attention Mechanism.
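As a quick orientation, here is a minimal usage sketch. The `LinMulT` import path, constructor arguments, and call signature below are assumptions for illustration only, not the package's confirmed API; consult the repository for the actual interface.

```python
# Hypothetical usage sketch -- names and arguments are assumptions,
# not the confirmed linmult API.
import torch
from linmult import LinMulT  # assumed import path

# Two unaligned input modalities, e.g. audio and video features:
# (batch, sequence length, feature dimension); lengths and dims may differ.
audio = torch.randn(8, 300, 40)
video = torch.randn(8, 150, 512)

# A multimodal transformer of this kind typically takes per-modality
# input dimensions and an output dimension, then fuses the sequences.
model = LinMulT(input_dims=[40, 512], output_dim=5)  # hypothetical signature
prediction = model([audio, video])                   # e.g. (8, 5) estimates
```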

# Setup
### Install package from PyPI
```
pip install linmult
```

### Install package for development
```
git clone https://github.com/fodorad/LinMulT
cd LinMulT
pip install -e .
pip install -U -r requirements.txt
python -m unittest
```

# Similar projects using LinMulT

### (2023) BlinkLinMulT
LinMulT is trained for blink presence detection and eye state recognition tasks.
Our results demonstrate performance comparable or superior to state-of-the-art models on both tasks, evaluated across seven public benchmark databases.
* paper: BlinkLinMulT: Transformer-based Eye Blink Detection ([pdf](https://adamfodor.com/pdf/2023_Fodor_Adam_MDPI_BlinkLinMulT.pdf), [website](https://www.mdpi.com/2313-433X/9/10/196))
* code: https://github.com/fodorad/BlinkLinMulT

### (2022) PersonalityLinMulT
LinMulT is trained for Big Five personality trait estimation using the First Impressions V2 dataset and sentiment estimation using the MOSI and MOSEI datasets.
* paper: Multimodal Sentiment and Personality Perception Under Speech: A Comparison of Transformer-based Architectures ([pdf](https://proceedings.mlr.press/v173/fodor22a/fodor22a.pdf), [website](https://proceedings.mlr.press/v173/fodor22a.html))
* code: https://github.com/fodorad/PersonalityLinMulT


# Citation - BibTeX
If you find our research helpful or influential, please consider citing it:

### (2023) LinMulT for blink presence detection and eye state recognition:
```
@article{blinklinmult-fodor23,
  title = {BlinkLinMulT: Transformer-based Eye Blink Detection},
  author = {Fodor, {\'A}d{\'a}m and Fenech, Kristian and L{\H{o}}rincz, Andr{\'a}s},
  journal = {...},
  pages = {1--19},
  year = {2023}
}
```

### (2022) LinMulT for personality trait and sentiment estimation:
```
@InProceedings{pmlr-v173-fodor22a,
  title = {Multimodal Sentiment and Personality Perception Under Speech: A Comparison of Transformer-based Architectures},
  author = {Fodor, {\'A}d{\'a}m and Saboundji, Rachid R. and Jacques Junior, Julio C. S. and Escalera, Sergio and Gallardo-Pujol, David and L{\H{o}}rincz, Andr{\'a}s},
  booktitle = {Understanding Social Behavior in Dyadic and Small Group Interactions},
  pages = {218--241},
  year = {2022},
  editor = {Palmero, Cristina and Jacques Junior, Julio C. S. and Clapés, Albert and Guyon, Isabelle and Tu, Wei-Wei and Moeslund, Thomas B. and Escalera, Sergio},
  volume = {173},
  series = {Proceedings of Machine Learning Research},
  month = {16 Oct},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v173/fodor22a/fodor22a.pdf},
  url = {https://proceedings.mlr.press/v173/fodor22a.html}
}
```

# Acknowledgement
The code is inspired by the following two works:

### Multimodal Transformer:
* paper: Multimodal Transformer for Unaligned Multimodal Language Sequences ([1906.00295](https://arxiv.org/pdf/1906.00295.pdf))
* code: https://github.com/yaohungt/Multimodal-Transformer
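The core idea in MulT is crossmodal attention: one modality supplies the queries while another supplies the keys and values, so information flows from the source sequence into the target sequence without requiring alignment between them. A minimal single-head sketch of that mechanism in standard scaled dot-product form (written here for illustration, not taken from the LinMulT codebase):

```python
import math
import torch

def crossmodal_attention(x_target: torch.Tensor,
                         x_source: torch.Tensor,
                         w_q: torch.Tensor,
                         w_k: torch.Tensor,
                         w_v: torch.Tensor) -> torch.Tensor:
    """Target modality attends to source modality (single head).

    x_target: (batch, len_t, d_model) -- supplies the queries.
    x_source: (batch, len_s, d_model) -- supplies keys and values.
    w_q, w_k, w_v: (d_model, d_head) projection matrices.
    """
    q = x_target @ w_q                                 # (batch, len_t, d_head)
    k = x_source @ w_k                                 # (batch, len_s, d_head)
    v = x_source @ w_v                                 # (batch, len_s, d_head)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return scores.softmax(dim=-1) @ v                  # (batch, len_t, d_head)
```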

### Linear Attention:
* paper: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention ([2006.16236](https://arxiv.org/pdf/2006.16236.pdf))
* code: https://github.com/idiap/fast-transformers
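The trick in this paper is to replace the softmax kernel with a feature map phi (elu(x) + 1 in the paper), so attention factorizes as phi(Q) (phi(K)^T V): the small phi(K)^T V matrix is computed once, reducing the cost from O(N²) to O(N) in the sequence length. A minimal non-causal sketch:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention (Katharopoulos et al., 2020).

    q, k, v: (batch, seq, d_head). Linear in sequence length because
    phi(K)^T V is a (d_head, d_head) matrix computed once, instead of
    an explicit (seq, seq) attention map.
    """
    phi_q = F.elu(q) + 1                      # feature map phi(x) = elu(x) + 1
    phi_k = F.elu(k) + 1
    kv = phi_k.transpose(-2, -1) @ v          # (batch, d_head, d_head)
    normalizer = phi_q @ phi_k.sum(dim=-2).unsqueeze(-1)  # (batch, seq, 1)
    return (phi_q @ kv) / (normalizer + eps)  # (batch, seq, d_head)
```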

# Contact
* Ádám Fodor (foauaai@inf.elte.hu) [[website](https://adamfodor.com)]
            
