| Field | Value |
| --- | --- |
| Name | PyRCN |
| Version | 0.0.18 |
| Home page | https://github.com/TUD-STKS/PyRCN |
| Summary | A scikit-learn-compatible framework for Reservoir Computing in Python |
| Author | Peter Steiner |
| Requires Python | >=3.9 |
| Upload time | 2024-07-16 15:26:54 |
# PyRCN
**A Python 3 framework for Reservoir Computing with a [scikit-learn](https://scikit-learn.org/stable/)-compatible API.**
[PyPI version](https://badge.fury.io/py/PyRCN)
[Documentation status](https://pyrcn.readthedocs.io/en/latest/?badge=latest)
PyRCN ("Python Reservoir Computing Networks") is a light-weight and transparent Python 3 framework for Reservoir Computing and is based on widely used scientific Python packages, such as numpy or scipy.
The API is fully scikit-learn-compatible, so that users of scikit-learn do not need to refactor their code in order to use the estimators implemented by this framework.
Scikit-learn's built-in parameter optimization methods and example datasets can also be used in the usual way.
PyRCN is used by the [Chair of Speech Technology and Cognitive Systems, Institute for Acoustics and Speech Communications, Technische Universität Dresden, Dresden, Germany](https://tu-dresden.de/ing/elektrotechnik/ias/stks?set_language=en)
and [IDLab (Internet and Data Lab), Ghent University, Ghent, Belgium](https://www.ugent.be/ea/idlab/en).
Currently, it implements Echo State Networks (ESNs) by Herbert Jaeger and Extreme Learning Machines (ELMs) by Guang-Bin Huang in different flavors, e.g. as Classifiers and Regressors. It is under active development and is being extended in several directions:
- Interaction with [sktime](http://sktime.org/)
- Interaction with [hmmlearn](https://hmmlearn.readthedocs.io/en/stable/)
- More towards future work: Related architectures, such as Liquid State Machines (LSMs) and Perturbative Neural Networks (PNNs)
PyRCN has successfully been used for several tasks:
- Music Information Retrieval (MIR)
  - Multipitch tracking
  - Onset detection
  - $f_{0}$ analysis of spoken language
  - GCI detection in raw audio signals
- Time Series Prediction
  - Mackey-Glass benchmark test
  - Stock price prediction
- Ongoing research tasks:
  - Beat tracking in music signals
  - Pattern recognition in sensor data
  - Phoneme recognition
  - Unsupervised pre-training of RCNs and optimization of ESNs
Please see the [References section](#references) for more information. For code examples, see [Getting started](#getting-started).
## Installation
### Prerequisites
PyRCN is developed using Python 3.9 or newer. It depends on the following packages:
- [numpy>=1.18.1](https://numpy.org/)
- [scikit-learn>=1.4](https://scikit-learn.org/stable/)
- [joblib>=0.13.2](https://joblib.readthedocs.io)
- [pandas>=1.0.0](https://pandas.pydata.org/)
- [matplotlib](https://matplotlib.org/)
- [seaborn](https://seaborn.pydata.org/)
### Installation from PyPI
The easiest and recommended way to install ``PyRCN`` is to use ``pip`` from [PyPI](https://pypi.org):
```
pip install pyrcn
```
### Installation from source
If you plan to contribute to ``PyRCN``, you can also install the package from source.
Clone the Git repository:
```
git clone https://github.com/TUD-STKS/PyRCN.git
```
Install the package using ``setup.py``:
```
python setup.py install --user
```
## Official documentation
See [the official PyRCN documentation](https://pyrcn.readthedocs.io/en/latest/?badge=latest)
to learn more about the main features of PyRCN, its API and the installation process.
## Package structure
The package is structured in the following way:
- `doc`
- This folder includes the package documentation.
- `examples`
- This folder includes example code as Jupyter Notebooks and Python scripts.
- `images`
- This folder includes the logos used in `README.md`.
- `pyrcn`
- This folder includes the actual Python package.
## Getting Started
PyRCN currently includes variants of Echo State Networks (ESNs) and Extreme Learning Machines (ELMs), both as Regressors and Classifiers.
Basic example for the ESNClassifier:
```python
from sklearn.model_selection import train_test_split
from pyrcn.datasets import load_digits
from pyrcn.echo_state_network import ESNClassifier
X, y = load_digits(return_X_y=True, as_sequence=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = ESNClassifier()
clf.fit(X=X_train, y=y_train)
y_pred_classes = clf.predict(X=X_test) # output is the predicted class for each input example
y_pred_proba = clf.predict_proba(X=X_test) # output is the class probabilities for each input example
```
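Since the API is scikit-learn-compatible, scikit-learn's built-in model selection tools can be applied to PyRCN estimators directly. The following is a minimal sketch of a grid search over the classifier above; the parameter name `hidden_layer_size` is used for illustration only — inspect `ESNClassifier().get_params()` for the hyper-parameters actually exposed by the installed PyRCN version.

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from pyrcn.datasets import load_digits
from pyrcn.echo_state_network import ESNClassifier

X, y = load_digits(return_X_y=True, as_sequence=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Hypothetical parameter grid for illustration; the available names can be
# listed with ESNClassifier().get_params().
param_grid = {"hidden_layer_size": [50, 200]}

search = GridSearchCV(ESNClassifier(), param_grid=param_grid, cv=3)
search.fit(X_train, y_train)

print(search.best_params_)           # best hyper-parameter combination found
print(search.score(X_test, y_test))  # accuracy of the refitted best estimator
```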
Basic example for the ESNRegressor:
```python
from pyrcn.datasets import mackey_glass
from pyrcn.echo_state_network import ESNRegressor
X, y = mackey_glass(n_timesteps=20000)
reg = ESNRegressor()
reg.fit(X=X[:8000], y=y[:8000])
y_pred = reg.predict(X[8000:]) # output is the prediction for each input example
```
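Because the estimators follow the scikit-learn interface, the fitted regressor can also be evaluated with scikit-learn metrics and persisted with `joblib` (already a PyRCN dependency). A minimal sketch building on the regressor example above:

```python
import joblib
from sklearn.metrics import mean_squared_error
from pyrcn.datasets import mackey_glass
from pyrcn.echo_state_network import ESNRegressor

X, y = mackey_glass(n_timesteps=20000)
reg = ESNRegressor()
reg.fit(X=X[:8000], y=y[:8000])

# Evaluate the prediction on the held-out part of the time series.
y_pred = reg.predict(X[8000:])
print(mean_squared_error(y[8000:], y_pred))

# Save and restore the fitted model like any scikit-learn estimator.
joblib.dump(reg, "esn_regressor.joblib")
reg_restored = joblib.load("esn_regressor.joblib")
```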
An extensive introduction to getting started with PyRCN is included in the [examples](https://github.com/TUD-STKS/PyRCN/blob/main/examples) directory.
The notebook [digits](https://github.com/TUD-STKS/PyRCN/blob/main/examples/digits.ipynb) or its corresponding [Python script](https://github.com/TUD-STKS/PyRCN/blob/main/examples/digits.py) shows how to set up an ESN for a small hand-written digit recognition experiment.
Launch the digits notebook on Binder:
[Launch on Binder](https://mybinder.org/v2/gh/TUD-STKS/PyRCN/main?filepath=examples%2Fdigits.ipynb)
The notebook [PyRCN_Intro](https://github.com/TUD-STKS/PyRCN/blob/main/examples/PyRCN_Intro.ipynb) or its corresponding [Python script](https://github.com/TUD-STKS/PyRCN/blob/main/examples/PyRCN_Intro.py) shows how to construct different RCNs from building blocks.
[Launch on Binder](https://mybinder.org/v2/gh/TUD-STKS/PyRCN/main?filepath=examples%2FPyRCN_Intro.ipynb)
The notebook [Impulse responses](https://github.com/TUD-STKS/PyRCN/blob/main/examples/esn_impulse_responses.ipynb) is an interactive tool to demonstrate the impact of different hyper-parameters on the impulse responses of an ESN.
[Launch on Binder](https://mybinder.org/v2/gh/TUD-STKS/PyRCN/main?filepath=examples%2Fesn_impulse_responses.ipynb)
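For a quick, script-based look at how an individual hyper-parameter influences an ESN, the standard scikit-learn `clone`/`set_params` mechanism can be used as well. The sketch below is illustrative only: `spectral_radius` is used as a hypothetical parameter name, and `score()` is assumed to be provided through the scikit-learn regressor interface — check `get_params()` for the names exposed by the installed PyRCN version.

```python
from sklearn.base import clone
from pyrcn.datasets import mackey_glass
from pyrcn.echo_state_network import ESNRegressor

X, y = mackey_glass(n_timesteps=20000)

base = ESNRegressor()
print(sorted(base.get_params().keys()))  # list the tunable hyper-parameters

# "spectral_radius" is a hypothetical name used for illustration; replace it
# with a parameter actually reported by get_params() above.
for value in (0.5, 0.9, 1.2):
    reg = clone(base).set_params(spectral_radius=value)
    reg.fit(X=X[:8000], y=y[:8000])
    print(value, reg.score(X[8000:], y[8000:]))
```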
For more advanced examples, please have a look at our [Automatic Music Transcription Repository](https://github.com/TUD-STKS/Automatic-Music-Transcription), in which we provide an entire feature extraction, training and test pipeline for multipitch tracking and for note onset detection using PyRCN. This is currently being transferred to this repository.
## Citation
If you use PyRCN, please cite the following publication:
```latex
@misc{steiner2021pyrcn,
title={PyRCN: A Toolbox for Exploration and Application of Reservoir Computing Networks},
author={Peter Steiner and Azarakhsh Jalalvand and Simon Stone and Peter Birkholz},
year={2021},
eprint={2103.04807},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
## References
[Unsupervised Pretraining of Echo State Networks for Onset Detection](https://link.springer.com/chapter/10.1007/978-3-030-85099-9_12)
```latex
@InProceedings{src:Steiner-21e,
author="Peter Steiner and Azarakhsh Jalalvand and Peter Birkholz",
editor="Igor Farka{\v{s}} and Paolo Masulli and Sebastian Otte and Stefan Wermter",
title="{U}nsupervised {P}retraining of {E}cho {S}tate {N}etworks for {O}nset {D}etection",
booktitle="Artificial Neural Networks and Machine Learning -- ICANN 2021",
year="2021",
publisher="Springer International Publishing",
address="Cham",
pages="59--70",
isbn="978-3-030-86383-8"
}
```
[Improved Acoustic Modeling for Automatic Piano Music Transcription Using Echo State Networks](https://link.springer.com/chapter/10.1007/978-3-030-85099-9_12)
```latex
@InProceedings{src:Steiner-21d,
author="Peter Steiner and Azarakhsh Jalalvand and Peter Birkholz",
editor="Ignacio Rojas and Gonzalo Joya and Andreu Catala",
title="{I}mproved {A}coustic {M}odeling for {A}utomatic {P}iano {M}usic {T}ranscription {U}sing {E}cho {S}tate {N}etworks",
booktitle="Advances in Computational Intelligence",
year="2021",
publisher="Springer International Publishing",
address="Cham",
pages="143--154",
isbn="978-3-030-85099-9"
}
```
Glottal Closure Instant Detection using Echo State Networks
- [Paper](http://www.essv.de/pdf/2021_161_168.pdf)
- [Repository](https://github.com/TUD-STKS/gci_estimation)
- Reference
```latex
@InProceedings{src:Steiner-21c,
title = {Glottal Closure Instant Detection using Echo State Networks},
author = {Peter Steiner and Ian S. Howard and Peter Birkholz},
year = {2021},
pages = {161--168},
keywords = {Oral},
booktitle = {Studientexte zur Sprachkommunikation: Elektronische Sprachsignalverarbeitung 2021},
editor = {Stefan Hillmann and Benjamin Weiss and Thilo Michael and Sebastian Möller},
publisher = {TUDpress, Dresden},
isbn = {978-3-95908-227-3}
}
```
Cluster-based Input Weight Initialization for Echo State Networks
```latex
@article{Steiner2022cluster,
author = {Steiner, Peter and Jalalvand, Azarakhsh and Birkholz, Peter},
doi = {10.1109/TNNLS.2022.3145565},
issn = {2162-2388},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
keywords = {},
month = {},
number = {},
pages = {1--12},
title = {Cluster-based Input Weight Initialization for Echo State Networks},
volume = {},
year = {2022},
}
```
PyRCN: A Toolbox for Exploration and Application of Reservoir Computing Networks
```latex
@article{Steiner2022pyrcn,
title = {PyRCN: A Toolbox for Exploration and Application of Reservoir Computing Networks},
journal = {Engineering Applications of Artificial Intelligence},
volume = {113},
pages = {104964},
year = {2022},
issn = {0952-1976},
doi = {10.1016/j.engappai.2022.104964},
url = {https://www.sciencedirect.com/science/article/pii/S0952197622001713},
author = {Peter Steiner and Azarakhsh Jalalvand and Simon Stone and Peter Birkholz},
}
```
[Feature Engineering and Stacked ESNs for Musical Onset Detection](https://ieeexplore.ieee.org/abstract/document/9413205)
```latex
@INPROCEEDINGS{src:Steiner-21b,
author={Peter Steiner and Azarakhsh Jalalvand and Simon Stone and Peter Birkholz},
booktitle={2020 25th International Conference on Pattern Recognition (ICPR)},
title={{F}eature {E}ngineering and {S}tacked {E}cho {S}tate {N}etworks for {M}usical {O}nset {D}etection},
year={2021},
volume={},
number={},
pages={9537--9544},
doi={10.1109/ICPR48806.2021.9413205}
}
```
[Multipitch tracking in music signals using Echo State Networks](https://ieeexplore.ieee.org/abstract/document/9287638)
```latex
@INPROCEEDINGS{src:Steiner-21a,
author={Peter Steiner and Simon Stone and Peter Birkholz and Azarakhsh Jalalvand},
booktitle={2020 28th European Signal Processing Conference (EUSIPCO)},
title={{M}ultipitch tracking in music signals using {E}cho {S}tate {N}etworks},
year={2021},
volume={},
number={},
pages={126--130},
keywords={},
doi={10.23919/Eusipco47968.2020.9287638},
ISSN={2076-1465},
month={Jan},
}
```
[Note Onset Detection using Echo State Networks](http://www.essv.de/pdf/2020_157_164.pdf)
```latex
@InProceedings{src:Steiner-20,
title = {Note Onset Detection using Echo State Networks},
author = {Peter Steiner and Simon Stone and Peter Birkholz},
year = {2020},
pages = {157--164},
keywords = {Poster},
booktitle = {Studientexte zur Sprachkommunikation: Elektronische Sprachsignalverarbeitung 2020},
editor = {Ronald Böck and Ingo Siegert and Andreas Wendemuth},
publisher = {TUDpress, Dresden},
isbn = {978-3-959081-93-1}
}
```
[Multiple-F0 Estimation using Echo State Networks](https://www.music-ir.org/mirex/abstracts/2019/SBJ1.pdf)
```latex
@inproceedings{src:Steiner-19,
title={{M}ultiple-{F}0 {E}stimation using {E}cho {S}tate {N}etworks},
booktitle={{MIREX}},
author={Peter Steiner and Azarakhsh Jalalvand and Peter Birkholz},
year={2019},
url = {https://www.music-ir.org/mirex/abstracts/2019/SBJ1.pdf}
}
```
## Acknowledgments
This research was funded by the European Social Fund (Application number: 100327771) and co-financed by tax funds based on the budget approved by the members of the Saxon State Parliament, and by Ghent University.