==================
Welcome to RETURNN
==================

`GitHub repository <https://github.com/rwth-i6/returnn>`__.
`RETURNN paper 2016 <https://arxiv.org/abs/1608.00895>`_,
`RETURNN paper 2018 <https://arxiv.org/abs/1805.05225>`_.

RETURNN, the RWTH extensible training framework for universal recurrent neural networks,
is a Theano/TensorFlow-based implementation of modern recurrent neural network architectures.
It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.

The high-level features and goals of RETURNN are:

* **Simplicity**

  * Writing a config / code is simple and straightforward (setting up an experiment, defining a model)
  * Debugging in case of problems is simple
  * Reading a config / code is simple (the defined model, training, and decoding all become clear)

* **Flexibility**

  * Allows for many different kinds of experiments / models

* **Efficiency**

  * Training speed
  * Decoding speed

All of these are important for research; decoding speed is especially important for production.

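To make the "simple config" point concrete: a RETURNN config is ordinary Python, and the model is described declaratively as a dict of layers. The exact keys and layer options below are a sketch for illustration, not taken from this page; consult the documentation for the authoritative reference.

```python
# Sketch of a RETURNN-style config (illustrative keys and layer options;
# see the RETURNN documentation for the real reference).
task = "train"
batch_size = 5000   # maximum number of frames per mini-batch
num_epochs = 100

# The network is a plain dict mapping layer names to layer definitions.
network = {
    "hidden": {"class": "linear", "activation": "relu", "n_out": 512, "from": "data"},
    "output": {"class": "softmax", "loss": "ce", "from": "hidden"},
}
```

Training is then started by passing such a config file to RETURNN's main entry point (``rnn.py``).
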
See our `Interspeech 2020 tutorial "Efficient and Flexible Implementation of Machine Learning for ASR and MT" video <https://www.youtube.com/watch?v=wPKdYqSOlAY>`__
(`slides <https://www-i6.informatik.rwth-aachen.de/publications/download/1154/Zeyer--2020.pdf>`__)
for an introduction to the core concepts.

More specific features include:

- Mini-batch training of feed-forward neural networks
- Sequence-chunking based batch training for recurrent neural networks
- Long short-term memory recurrent neural networks,
  including our own fast CUDA kernel
- Multidimensional LSTM (GPU only, there is no CPU version)
- Memory management for large data sets
- Work distribution across multiple devices
- Flexible and fast architecture which allows all kinds of encoder-attention-decoder models

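The sequence-chunking item above can be illustrated with a small sketch (not RETURNN's actual implementation): long sequences are cut into fixed-size, overlapping chunks so that a mini-batch can be filled with equally sized pieces.

```python
# Illustrative sketch of sequence chunking for batch training:
# a long sequence is cut into chunks of a fixed size, advancing by a
# fixed step, so overlapping chunks cover the whole sequence.
def chunk_sequence(seq, chunk_size, chunk_step):
    """Cut seq into chunks of length chunk_size, advancing by chunk_step."""
    chunks = []
    for start in range(0, len(seq), chunk_step):
        chunks.append(seq[start:start + chunk_size])
        if start + chunk_size >= len(seq):
            break  # the last chunk already covers the end of the sequence
    return chunks

frames = list(range(10))  # stand-in for a 10-frame feature sequence
print(chunk_sequence(frames, chunk_size=4, chunk_step=2))
# → [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Chunking trades exact sequence boundaries for uniform batch shapes, which is what makes fast batched training of recurrent networks practical.
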
See `documentation <https://returnn.readthedocs.io/>`__.
See `basic usage <https://returnn.readthedocs.io/en/latest/basic_usage.html>`__
and `technological overview <https://returnn.readthedocs.io/en/latest/tech_overview.html>`__.

`Here is the video recording of a RETURNN overview talk <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.recording.cut.mp4>`_
(`slides <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.returnn-overview.session1.handout.v1.pdf>`__,
`exercise sheet <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.exercise_sheet.pdf>`__;
hosted by eBay).

There are `many example demos <https://github.com/rwth-i6/returnn/blob/master/demos/>`_
which work on artificially generated data,
i.e. they should work as-is.

There are `some real-world examples <https://github.com/rwth-i6/returnn-experiments>`_
such as setups for speech recognition on the Switchboard or LibriSpeech corpus.

Some benchmark setups against other frameworks
can be found `here <https://github.com/rwth-i6/returnn-benchmarks>`_.
The results are in the `RETURNN paper 2016 <https://arxiv.org/abs/1608.00895>`_.
Performance benchmarks of our LSTM kernel vs. cuDNN and other TensorFlow kernels
are in the `TensorFlow LSTM benchmark <https://returnn.readthedocs.io/en/latest/tf_lstm_benchmark.html>`__.

There is also `a wiki <https://github.com/rwth-i6/returnn/wiki>`_.
Questions can also be asked on
`StackOverflow using the RETURNN tag <https://stackoverflow.com/questions/tagged/returnn>`_.

.. image:: https://github.com/rwth-i6/returnn/workflows/CI/badge.svg
    :target: https://github.com/rwth-i6/returnn/actions

Dependencies
============

The pip dependencies are listed in ``requirements.txt`` and ``requirements-dev``;
some parts of the code may require additional optional dependencies (e.g. ``librosa``, ``resampy``) on demand.

RETURNN supports Python >= 3.8. Bumps to the minimum Python version are listed in `CHANGELOG.md <https://github.com/rwth-i6/returnn/blob/master/CHANGELOG.md>`__.

TensorFlow-based setups require TensorFlow >= 2.2.

PyTorch-based setups require PyTorch >= 1.0.