==================
Welcome to RETURNN
==================
`GitHub repository <https://github.com/rwth-i6/returnn>`__.
`RETURNN paper 2016 <https://arxiv.org/abs/1608.00895>`_,
`RETURNN paper 2018 <https://arxiv.org/abs/1805.05225>`_.
RETURNN, the RWTH extensible training framework for universal recurrent neural networks,
is a Theano/TensorFlow-based implementation of modern recurrent neural network architectures.
It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.
The high-level features and goals of RETURNN are:

* **Simplicity**

  * Writing a config / code is simple and straightforward (setting up an experiment, defining a model)
  * Debugging in case of problems is simple
  * Reading a config / code is simple (the defined model, training, and decoding all become clear)

* **Flexibility**

  * Allow for many different kinds of experiments / models

* **Efficiency**

  * Training speed
  * Decoding speed

All items are important for research; decoding speed is especially important for production.
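To illustrate the config style, here is a minimal, hypothetical training config sketch. A RETURNN config is a plain Python file of global assignments; the specific layer classes, loss name, and option values below are illustrative assumptions, not prescriptions from this text.

```python
# Hypothetical minimal RETURNN-style config sketch.
# A config is an ordinary Python file of global assignments;
# the layer options here are illustrative, not authoritative.
use_tensorflow = True
task = "train"

# The network is a dict mapping layer names to layer options.
network = {
    "hidden": {"class": "linear", "activation": "tanh", "n_out": 100},
    "output": {"class": "softmax", "loss": "ce", "from": ["hidden"]},
}

batch_size = 1000
learning_rate = 0.01
num_epochs = 20
```

Because the config is just Python, reading it back tells you the model, the training hyperparameters, and the loss in one place.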
See our `Interspeech 2020 tutorial "Efficient and Flexible Implementation of Machine Learning for ASR and MT" video <https://www.youtube.com/watch?v=wPKdYqSOlAY>`__
(`slides <https://www-i6.informatik.rwth-aachen.de/publications/download/1154/Zeyer--2020.pdf>`__)
for an introduction to the core concepts.
More specific features include:
- Mini-batch training of feed-forward neural networks
- Sequence-chunking based batch training for recurrent neural networks
- Long short-term memory recurrent neural networks,
  including our own fast CUDA kernel
- Multidimensional LSTM (GPU only, there is no CPU version)
- Memory management for large data sets
- Work distribution across multiple devices
- Flexible and fast architecture which allows all kinds of encoder-attention-decoder models
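Sequence-chunking based batch training can be sketched as splitting each long training sequence into overlapping fixed-size windows, so that sequences of very different lengths still fill uniform mini-batches. The function below is a simplified illustration; the parameter names are assumptions, not RETURNN's actual API.

```python
# Simplified sketch of sequence chunking for batched RNN training.
# "chunk_size" and "chunk_step" are illustrative names; overlapping
# chunks arise whenever chunk_step < chunk_size.
def chunk_sequence(seq, chunk_size=4, chunk_step=2):
    """Split one sequence into (possibly overlapping) fixed-size chunks."""
    chunks = []
    last_start = max(len(seq) - chunk_size, 0)
    for start in range(0, last_start + 1, chunk_step):
        chunks.append(seq[start:start + chunk_size])
    return chunks

chunk_sequence(list(range(10)), chunk_size=4, chunk_step=2)
# → [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

The chunks all have the same length, so they can be stacked into a dense mini-batch regardless of the original sequence lengths.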
See `documentation <https://returnn.readthedocs.io/>`__.
See `basic usage <https://returnn.readthedocs.io/en/latest/basic_usage.html>`__
and `technological overview <https://returnn.readthedocs.io/en/latest/tech_overview.html>`__.
`Here is the video recording of a RETURNN overview talk <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.recording.cut.mp4>`_
(`slides <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.returnn-overview.session1.handout.v1.pdf>`__,
`exercise sheet <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.exercise_sheet.pdf>`__;
hosted by eBay).
There are `many example demos <https://github.com/rwth-i6/returnn/blob/master/demos/>`_
which work on artificially generated data,
i.e. they should work as-is.
There are `some real-world examples <https://github.com/rwth-i6/returnn-experiments>`_
such as setups for speech recognition on the Switchboard or LibriSpeech corpus.
Some benchmark setups against other frameworks
can be found `here <https://github.com/rwth-i6/returnn-benchmarks>`_.
The results are in the `RETURNN paper 2016 <https://arxiv.org/abs/1608.00895>`_.
Performance benchmarks of our LSTM kernel vs CuDNN and other TensorFlow kernels
are in `TensorFlow LSTM benchmark <https://returnn.readthedocs.io/en/latest/tf_lstm_benchmark.html>`__.
There is also `a wiki <https://github.com/rwth-i6/returnn/wiki>`_.
Questions can also be asked on
`StackOverflow using the RETURNN tag <https://stackoverflow.com/questions/tagged/returnn>`_.
.. image:: https://github.com/rwth-i6/returnn/workflows/CI/badge.svg
    :target: https://github.com/rwth-i6/returnn/actions
Dependencies
============
The pip dependencies are listed in ``requirements.txt`` and ``requirements-dev``;
some parts of the code require additional dependencies (e.g. ``librosa``, ``resampy``) on demand.
RETURNN supports Python >= 3.8. Bumps to the minimum Python version are listed in `CHANGELOG.md <https://github.com/rwth-i6/returnn/blob/master/CHANGELOG.md>`__.
TensorFlow-based setups require TensorFlow >= 2.2.
PyTorch-based setups require Torch >= 1.0.