EEG-Classifiers-Ensemble

Name: EEG-Classifiers-Ensemble
Version: 0.1.5
Home page: https://github.com/AlmaCuevas/Ensemble-EEG
Summary: Tools for processing and classifying EEG tasks in real-time.
Upload time: 2024-09-22 16:05:01
Author: Alma Cuevas
Requires Python: >=3.10, <4
License: MIT License, Copyright (c) 2024 AlmaCuevas
Keywords: EEG, classification, ERP, feature extraction, transforms
Requirements: pyriemann, keras, matplotlib, matplotlib-inline, mne, numpy, pandas, scikit-learn, scipy, termcolor, tqdm, wget, skorch, braindecode, ema-pytorch, einops, emd, brainflow, keras-preprocessing, autoreject, PyWavelets, dit, statsmodels, librosa, pyinform, antropy, pre-commit, poetry, Flake8-pyproject, torch, torchsummary, torchvision, tensorflow, moabb
# EEG-Classifiers-Ensemble: Real-Time Data Classification System

DISCLAIMER: This project is still under development. Code and citations are not finished.


## Table of Contents
1. [Processing Methods](#processing-methods)
2. [Datasets used](#datasets-used)
3. [References](#references)

## Processing Methods

### LSTM
Abdulghani, Walters, and Abed [2], Agarwal and Kumar [3], and Kumar and Scheme [4] use Long Short-Term Memory (LSTM)
networks to classify speech imagery. An LSTM is a recurrent neural network that can learn long-term dependencies
between the discrete steps of a time series. The code used here was obtained from the GitHub repository of [5].
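
As a rough illustration of this kind of model (not the exact architecture from [2-5]), here is a minimal Keras sketch of an LSTM classifier over EEG epochs; the shapes, layer sizes, and hyperparameters are placeholder assumptions:

```python
# Minimal sketch of an LSTM EEG classifier (illustrative only; the cited
# papers use their own architectures and training setups).
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

n_trials, n_times, n_channels, n_classes = 100, 256, 8, 4  # assumed shapes
X = np.random.randn(n_trials, n_times, n_channels)  # placeholder EEG epochs
y = np.random.randint(0, n_classes, size=n_trials)  # placeholder labels

model = Sequential([
    LSTM(64, input_shape=(n_times, n_channels)),  # learns temporal dependencies
    Dropout(0.3),
    Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=16, validation_split=0.2)
```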

### GRU

### DiffE

### ShallowFBCSPNet

### Spatial Features

#### XDAWN+RG
The XDAWN spatial filter plus Riemannian Geometry classifier (RG) algorithm [7] achieved an accuracy of 0.836. Riemannian
Geometry represents data as symmetric positive definite covariance matrices and maps them onto a specific geometric space.
Because it can be computationally intensive for high-dimensional data, dimensionality-reduction techniques such as
XDAWN spatial filters are used alongside it. The approach rests on the idea that signals from tasks like the P300 speller,
and from different mental states, exhibit a degree of invariance that covariance matrices can capture. Due to its
logarithmic nature, the Riemannian distance is robust to noise. This method can potentially reduce or eliminate the
calibration phase, which is especially valuable when training data is limited.
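
A minimal sketch of such a pipeline with pyriemann (the version pinned by this package is 0.5) and scikit-learn; the epoch shape, number of XDAWN filters, and final classifier are illustrative assumptions, not the exact configuration used here:

```python
# Sketch of an XDAWN + Riemannian Geometry classification pipeline.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from pyriemann.estimation import XdawnCovariances
from pyriemann.tangentspace import TangentSpace

n_trials, n_channels, n_times = 120, 16, 256        # assumed epoch shape
X = np.random.randn(n_trials, n_channels, n_times)  # placeholder epochs
y = np.random.randint(0, 2, size=n_trials)          # placeholder labels

clf = make_pipeline(
    XdawnCovariances(nfilter=4),     # XDAWN filtering + covariance estimation
    TangentSpace(metric="riemann"),  # map SPD matrices to the tangent space
    LogisticRegression(),
)
clf.fit(X[:100], y[:100])
print(clf.score(X[100:], y[100:]))
```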

#### Common Spatial Patterns

#### Covariances

### Time Features

These features are organized into columns with descriptive names, which facilitates feature selection. The
resulting feature table serves as the input to the classifiers that analyze the EEG signals.

#### EEGExtract
EEGExtract is a feature-extraction library for EEG data [8]. Here, the input is segmented into several frequency
bands before being fed to the extraction process. For each frequency band, EEGExtract computes a set of features,
including entropy, mobility, complexity, ratio, and the Lyapunov exponent.
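
The band-segmentation step that precedes feature extraction can be sketched with generic scipy filtering (an assumed pre-processing sketch, not EEGExtract's own API; the band edges and sampling rate are placeholders):

```python
# Sketch: splitting an EEG epoch into frequency bands before feature extraction.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0  # assumed sampling rate in Hz
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}

def bandpass(data, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass along the last (time) axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

epoch = np.random.randn(8, 512)  # placeholder: (channels, samples)
band_signals = {name: bandpass(epoch, lo, hi, fs)
                for name, (lo, hi) in bands.items()}
# Each band-limited signal is then passed to the feature extractor
# (entropy, mobility, complexity, etc.).
```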

#### Statistical variables

##### Mean
The average value of the signal.
##### Skewness
A measure of the asymmetry of the probability distribution of the signal values.
##### Kurtosis
A measure of the “tailedness” of the probability distribution of the signal values.
##### Standard Deviation (Std)
A measure of the amount of variation or dispersion of the signal values.
##### Variance
The square of the standard deviation, representing the spread of the signal values.
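
A hedged sketch of how these statistical variables can be computed per channel and arranged into the named-column feature table described above (the `chN_*` column names are illustrative assumptions):

```python
# Sketch: per-channel statistical features collected into a pandas table.
import numpy as np
import pandas as pd
from scipy.stats import skew, kurtosis

epochs = np.random.randn(50, 8, 512)  # placeholder: (trials, channels, samples)

rows = []
for trial in epochs:
    features = {}
    for ch, signal in enumerate(trial):
        features[f"ch{ch}_mean"] = signal.mean()
        features[f"ch{ch}_skew"] = skew(signal)
        features[f"ch{ch}_kurtosis"] = kurtosis(signal)
        features[f"ch{ch}_std"] = signal.std()
        features[f"ch{ch}_var"] = signal.var()
    rows.append(features)

feature_table = pd.DataFrame(rows)  # descriptive columns ease feature selection
print(feature_table.shape)          # (trials, channels * 5 features)
```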

## Datasets used
### [Aguilera](https://data.mendeley.com/datasets/57g8z63tmy/1) [15]
### [Nieto](https://openneuro.org/datasets/ds003626/versions/2.1.2) [16]

The authors' dataset-loading code is embedded in this repository; the original is [Inner_Speech_Dataset](https://github.com/N-Nieto/Inner_Speech_Dataset).

Ten Argentinian participants took part in this experimental setup, with data recorded from 136 channels, 128 dedicated to EEG and 8 to muscle activity. The experiment focused on eliciting four specific commands from the subjects, namely “arriba,” “abajo,” “derecha,” and “izquierda,” corresponding to “up,” “down,” “right,” and “left.” To explore inner speech processes, each participant was instructed to perform a mental exercise of repeatedly imagining their own voice uttering the respective word.

### [Coretto](https://drive.google.com/file/d/0By7apHbIp8ENZVBLRFVlSFhzbHc/view?resourcekey=0-JVHv2UiRsxim41Wioro0EA) [17]
The Coretto dataset consists of 15 Argentinian subjects, all native Spanish speakers, with an average age of 25 years. The subjects repeated words, including vowels and directional words, 50 times each at a sampling frequency of 1024 Hz. The words were presented visually, and the recordings were single takes. The dataset used the Geschwind-Wernicke model, focusing on specific electrode locations to minimize myoelectric noise interference during speech.

### Torres (Data available on request from the original authors) [18]
This dataset comprises the EEG signals of 27 right-handed subjects performing internal pronunciation of words without emitting sounds or making facial movements. It focuses on the recognition of five Spanish words corresponding to the English words “up,” “down,” “left,” “right,” and “select,” with which a computer cursor could be controlled. Unlike the other datasets, this one is not open-access; it was kindly made available through an interinstitutional agreement.

### [2020 International BCI Competition](https://osf.io/pq7vb/) [19]


# References

1. Miao, Z., Zhang, X., Zhao, M. & Ming, D. LMDA-Net: a lightweight multi-dimensional attention network for general EEG-based brain-computer interface paradigms and interpretability. doi:10.48550/ARXIV.2303.16407 (2023).
2. Abdulghani, M. M., Walters, W. L. & Abed, K. H. Imagined speech classification using EEG and deep learning. Bioengineering 10, 649, doi:10.3390/bioengineering10060649 (2023).
3. Agarwal, P. & Kumar, S. Electroencephalography-based imagined speech recognition using deep long short-term memory network. ETRI J. 44, 672–685, doi:10.4218/etrij.2021-0118 (2022).
4. Kumar, P. & Scheme, E. A deep spatio-temporal model for EEG-based imagined speech recognition. ICASSP 2021 - 2021 IEEE Int. Conf. on Acoust. Speech Signal Process. (ICASSP), doi:10.1109/icassp39728.2021.9413989 (2021).
5. C. Brunner, G. R. M.-P. A. S. G. P., R. Leeb. Bigproject.
6. Nouri, M., Moradi, F., Ghaemi, H. & Motie Nasrabadi, A. Towards real-world BCI: CCSPNet, a compact subject-independent motor imagery framework. Digit. Signal Process. 133, 103816, doi:10.1016/j.dsp.2022.103816 (2023).
7. Barachant, A. et al. pyriemann/pyriemann: v0.5, doi:10.5281/zenodo.8059038 (2023).
8. Saba-Sadiya, S., Chantland, E., Alhanai, T., Liu, T. & Ghassemi, M. M. Unsupervised EEG artifact detection and correction. Front. Digit. Heal. 2, 57 (2020).
9. Kim, S., Lee, Y.-E., Lee, S.-H. & Lee, S.-W. Diff-E: Diffusion-based learning for decoding imagined speech EEG. arXiv:2307.14389 (2023).
10. Liu, X. et al. TCACNet.
11. Liu, X. et al. TCACNet: Temporal and channel attention convolutional network for motor imagery classification of EEG-based BCI. Inf. Process. & Manag. 59, 103001, doi:10.1016/j.ipm.2022.103001 (2022).
12. Lawhern, V. J. et al. EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 15, 056013 (2018).
13. Tibor, S. R. et al. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 38, 5391–5420, doi:10.1002/hbm.23730 (2017).
14. Army Research Laboratory (ARL) EEGModels project.
15. Aguilera-Rodríguez, E. Imagined speech datasets applying traditional and gamified acquisition paradigms, doi:10.17632/57G8Z63TMY.1 (2024).
16. Nieto, N., Peterson, V., Rufiner, H., Kamienkowski, J. & Spies, R. "Inner speech", doi:10.18112/openneuro.ds003626.v2.1.2 (2022).
17. Coretto, G. A. P., Gareis, I. E. & Rufiner, H. L. Open access database of EEG signals recorded during imagined speech. In Symposium on Medical Information Processing and Analysis (2017).
18. Torres-García, A. A., Reyes-García, C. A., Villaseñor-Pineda, L. & Ramírez-Cortes, J. M. Análisis de Señales Electroencefalográficas para la Clasificación de Habla Imaginada [Analysis of EEG signals for imagined speech classification]. Revista Mexicana de Ingeniería Biomédica 34, 23–39 (2013). ISSN: 0188-9532.
19. Committee, B. 2020 International BCI Competition. Open Sci. Framew., doi:10.17605/OSF.IO/PQ7VB (2022).

            
