# EEG-Classifiers-Ensemble: Real-Time Data Classification System

DISCLAIMER: This project is still under development. Code and citations are not finished.


## Table of Contents
1. [Processing Methods](https://github.com/AlmaCuevas/voting_system_platform/tree/main#processing-methods)
2. [Datasets used](https://github.com/AlmaCuevas/voting_system_platform/tree/main#datasets-used)
3. [References](https://github.com/AlmaCuevas/voting_system_platform/tree/main#references)

## Processing Methods

### LSTM
Abdulghani, Walters, and Abed [2], Agarwal and Kumar [3], and Kumar and Scheme [4] use Long Short-Term Memory (LSTM)
networks to classify speech imagery. An LSTM, a type of recurrent neural network, can learn long-term dependencies
between the discrete steps of a time series. The code used here was obtained from GitHub [5].
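
As an illustration only (the architectures in [2-4] and the code from [5] differ in their details), here is a minimal PyTorch sketch of an LSTM classifier over EEG epochs; the channel count, window length, and number of classes are assumed values:

```python
# Minimal sketch of an LSTM classifier for EEG epochs shaped
# (batch, time_steps, channels). Hyperparameters are illustrative.
import torch
import torch.nn as nn

class EEGLSTMClassifier(nn.Module):
    def __init__(self, n_channels: int, hidden_size: int = 64, n_classes: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):               # x: (batch, time_steps, n_channels)
        _, (h_n, _) = self.lstm(x)      # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])       # logits: (batch, n_classes)

# Example: 8 epochs of 2-second windows at 256 Hz from 16 channels.
model = EEGLSTMClassifier(n_channels=16)
logits = model(torch.randn(8, 512, 16))
```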

### GRU

### DiffE

### ShallowFBCSPNet

### Spatial Features

#### XDAWN+RG
The XDAWN spatial filter and Riemannian Geometry classifier (RG) algorithm [7] achieved an accuracy of 0.836. Riemannian
Geometry represents data as symmetric positive definite (SPD) covariance matrices and maps them onto a specific geometric
space. It can be computationally intensive for high-dimensional data, so dimensionality reduction techniques such as
XDAWN spatial filtering are used alongside it. The approach is based on the idea that EEG signals from tasks such as the
P300 speller, and from different mental states, exhibit a degree of invariance that covariance matrices can capture. Due
to its logarithmic nature, the Riemannian distance is robust to noise. This method can potentially reduce or eliminate
the calibration phase, which is especially useful when limited training data is available.
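
A minimal sketch of such a pipeline using pyriemann [7] and scikit-learn; the filter count, the final classifier, and the synthetic data shapes below are illustrative assumptions, not the configuration used in this repository:

```python
# XDAWN spatial filtering + Riemannian geometry classification sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from pyriemann.estimation import XdawnCovariances
from pyriemann.tangentspace import TangentSpace

# X: EEG epochs (n_epochs, n_channels, n_samples); y: class labels.
X = np.random.randn(40, 16, 512)
y = np.repeat([0, 1], 20)

clf = make_pipeline(
    XdawnCovariances(nfilter=4),      # XDAWN filters + SPD covariance matrices
    TangentSpace(metric="riemann"),   # map SPD matrices to a Euclidean tangent space
    LogisticRegression(),
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```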

#### Common Spatial Patterns

#### Covariances

### Time Features

The extracted time features are organized into columns with descriptive names, facilitating feature selection. The
resulting feature table serves as the input to the classifiers, enabling the analysis of EEG signals.
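
For example, a table of this shape could be built with pandas; the column-naming scheme below is hypothetical:

```python
# Build a per-epoch feature table with descriptive column names.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
epochs = rng.standard_normal((10, 8, 512))  # (epochs, channels, samples)

features = pd.DataFrame({
    f"{stat}_ch{ch}": vals
    for ch in range(epochs.shape[1])
    for stat, vals in [
        ("mean", epochs[:, ch].mean(axis=1)),
        ("std", epochs[:, ch].std(axis=1)),
    ]
})
print(features.columns[:4])  # mean_ch0, std_ch0, mean_ch1, std_ch1
```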

#### EEGExtract
EEGExtract is a feature-extraction library for processing EEG data. Here, the input is segmented into several frequency
bands before being fed to the extraction process; for each band, EEGExtract computes a set of features, including
entropy, mobility, complexity, ratio, and the Lyapunov exponent.
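
The following is not the EEGExtract API itself, but a hedged sketch of the banding step and two of the named features (Hjorth mobility and complexity) using scipy, with an assumed sampling rate and band layout:

```python
# Band-pass each channel into standard EEG bands, then compute Hjorth
# mobility and complexity per band.
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
FS = 256  # assumed sampling rate, Hz

def bandpass(x, lo, hi, fs=FS):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def hjorth(x):
    dx = np.diff(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(np.diff(dx)) / np.var(dx)) / mobility
    return mobility, complexity

signal = np.random.randn(FS * 2)  # one 2-second single-channel window
for name, (lo, hi) in BANDS.items():
    m, c = hjorth(bandpass(signal, lo, hi))
    print(f"{name}: mobility={m:.3f} complexity={c:.3f}")
```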

#### Statistical variables

##### Mean
The average value of the signal.
##### Skewness
A measure of the asymmetry of the probability distribution of the signal values.
##### Kurtosis
A measure of the “tailedness” of the probability distribution of the signal values.
##### Standard Deviation (Std)
A measure of the amount of variation or dispersion of the signal values.
##### Variance
The square of the standard deviation, representing the spread of the signal values.
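
These five statistics can be computed per channel with numpy and scipy.stats, for example:

```python
# Compute the five statistical features above for one epoch of one channel.
import numpy as np
from scipy.stats import skew, kurtosis

x = np.random.randn(512)  # one epoch of one channel

stats = {
    "mean": np.mean(x),
    "skewness": skew(x),      # asymmetry of the value distribution
    "kurtosis": kurtosis(x),  # tailedness (Fisher definition: normal -> 0)
    "std": np.std(x),
    "variance": np.var(x),    # square of the standard deviation
}
print(stats)
```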

## Datasets used
### [Aguilera](https://data.mendeley.com/datasets/57g8z63tmy/1) [15]
### [Nieto](https://openneuro.org/datasets/ds003626/versions/2.1.2) [16]

The authors' code for loading the dataset is embedded in this repository. The original repository is [Inner_Speech_Dataset](https://github.com/N-Nieto/Inner_Speech_Dataset).

Ten Argentinian participants took part in this experimental setup, and data from 136 channels were recorded, with 128 dedicated to EEG readings and 8 to muscle activity. The experiment focused on eliciting four specific commands from the subjects, namely "arriba," "abajo," "derecha," and "izquierda," corresponding to "up," "down," "right," and "left." To explore inner speech processes, each participant was instructed to engage in a mental exercise of repeatedly imagining their own voice uttering the respective word.

### [Coretto](https://drive.google.com/file/d/0By7apHbIp8ENZVBLRFVlSFhzbHc/view?resourcekey=0-JVHv2UiRsxim41Wioro0EA) [17]
The Coretto dataset consists of 15 Argentinian subjects, native Spanish speakers with an average age of 25 years. The subjects repeated words, including vowels and directional words, 50 times each at a sampling frequency of 1024 Hz. The words were presented visually, and the recordings were single takes. The dataset used the Geschwind-Wernicke model, focusing on specific electrode locations to minimize myoelectric noise interference during speech.

### Torres (Data available on request from the original authors) [18]
This dataset comprises the EEG signals of 27 right-handed subjects performing internal pronunciation of words without emitting sounds or making facial movements. It focuses on the recognition of five Spanish words corresponding to the English words "up," "down," "left," "right," and "select," with which a computer cursor could be controlled. Unlike the other datasets, this one is not open access and was kindly made available through an inter-institutional agreement.

### [2020 International BCI Competition](https://osf.io/pq7vb/) [19]


# References

1. Miao, Z., Zhang, X., Zhao, M. & Ming, D. LMDA-Net: a lightweight multi-dimensional attention network for general EEG-based brain-computer interface paradigms and interpretability. 10.48550/ARXIV.2303.16407 (2023).
2. Abdulghani, M. M., Walters, W. L. & Abed, K. H. Imagined speech classification using EEG and deep learning. Bioengineering 10, 649, 10.3390/bioengineering10060649 (2023).
3. Agarwal, P. & Kumar, S. Electroencephalography-based imagined speech recognition using deep long short-term memory network. ETRI J. 44, 672–685, 10.4218/etrij.2021-0118 (2022).
4. Kumar, P. & Scheme, E. A deep spatio-temporal model for EEG-based imagined speech recognition. ICASSP 2021 - 2021 IEEE Int. Conf. on Acoust. Speech Signal Process. (ICASSP), 10.1109/icassp39728.2021.9413989 (2021).
5. Brunner, C., Leeb, R., Müller-Putz, G. R., Schlögl, A. & Pfurtscheller, G. Bigproject.
6. Nouri, M., Moradi, F., Ghaemi, H. & Motie Nasrabadi, A. Towards real-world BCI: CCSPNet, a compact subject-independent motor imagery framework. Digit. Signal Process. 133, 103816, 10.1016/j.dsp.2022.103816 (2023).
7. Barachant, A. et al. pyriemann/pyriemann: v0.5, 10.5281/zenodo.8059038 (2023).
8. Saba-Sadiya, S., Chantland, E., Alhanai, T., Liu, T. & Ghassemi, M. M. Unsupervised EEG artifact detection and correction. Front. Digit. Health 2, 57 (2020).
9. Kim, S., Lee, Y.-E., Lee, S.-H. & Lee, S.-W. Diff-E: diffusion-based learning for decoding imagined speech EEG. arXiv:2307.14389 (2023).
10. Liu, X. et al. TCACNet.
11. Liu, X. et al. TCACNet: temporal and channel attention convolutional network for motor imagery classification of EEG-based BCI. Inf. Process. & Manag. 59, 103001, 10.1016/j.ipm.2022.103001 (2022).
12. Lawhern, V. J. et al. EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 15, 056013 (2018).
13. Schirrmeister, R. T. et al. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 38, 5391–5420, 10.1002/hbm.23730 (2017).
14. Army Research Laboratory (ARL) EEGModels project.
15. Aguilera-Rodríguez, E. Imagined speech datasets applying traditional and gamified acquisition paradigms, 10.17632/57G8Z63TMY.1 (2024).
16. Nieto, N., Peterson, V., Rufiner, H., Kamienkowski, J. & Spies, R. "Inner Speech", 10.18112/openneuro.ds003626.v2.1.2 (2022).
17. Coretto, G. A. P., Gareis, I. E. & Rufiner, H. L. Open access database of EEG signals recorded during imagined speech. In Symposium on Medical Information Processing and Analysis (2017).
18. Torres-García, A. A., Reyes-García, C. A., Villaseñor-Pineda, L. & Ramírez-Cortes, J. M. Análisis de señales electroencefalográficas para la clasificación de habla imaginada [Analysis of EEG signals for imagined speech classification]. Rev. Mex. Ing. Biomédica 34, 23–39 (2013). ISSN: 0188-9532.
19. BCI Committee. 2020 International BCI Competition. Open Sci. Framew. 10.17605/OSF.IO/PQ7VB (2022).

            
