keras-lmu
=========

* Version: 0.7.0
* Home page: https://www.nengo.ai/keras-lmu
* Summary: Keras implementation of Legendre Memory Units
* Upload time: 2023-07-20 21:54:41
* Author: Applied Brain Research
* Requires Python: >=3.8
* License: Free for non-commercial use
KerasLMU: Recurrent neural networks using Legendre Memory Units
---------------------------------------------------------------

`Paper <https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks.pdf>`_

This is a Keras-based implementation of the
Legendre Memory Unit (LMU). The LMU is a novel memory cell for recurrent neural
networks that dynamically maintains information across long windows of time using
relatively few resources. It has been shown to perform as well as standard LSTM or
other RNN-based models on a variety of tasks, generally with fewer internal parameters
(see `this paper
<https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks.pdf>`_
for more details). On the Permuted Sequential MNIST (psMNIST) task in particular, it
has been demonstrated to outperform the previous state-of-the-art results. See the note
below for instructions on how to access this model.

The LMU is mathematically derived to orthogonalize its continuous-time history – doing
so by solving *d* coupled ordinary differential equations (ODEs), whose phase space
linearly maps onto sliding windows of time via the Legendre polynomials up to degree
*d* − 1 (the example for *d* = 12 is shown below).

.. image:: https://i.imgur.com/Uvl6tj5.png
   :target: https://i.imgur.com/Uvl6tj5.png
   :alt: Legendre polynomials
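
For intuition, the memory's continuous-time state-space matrices can be written down
directly from the formulas in the paper. The following NumPy sketch is illustrative
only (the package computes and discretizes its own matrices internally); it constructs
**A** and **B** for a memory of order *d* and window length θ:

.. code-block:: python

   import numpy as np

   def lmu_state_space(d, theta):
       """Continuous-time (A, B) of the LMU memory, per the paper (illustrative sketch)."""
       q = np.arange(d, dtype=np.float64)
       r = (2 * q + 1)[:, None] / theta  # per-row scaling (2i + 1) / theta
       i, j = np.meshgrid(q, q, indexing="ij")
       # a_ij = -1 for i < j, (-1)^(i-j+1) for i >= j, scaled by (2i + 1) / theta
       A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r
       # b_i = (-1)^i (2i + 1) / theta
       B = ((-1.0) ** q)[:, None] * r
       return A, B  # shapes: (d, d) and (d, 1)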

A single LMU cell expresses the following computational graph, which takes in an input
signal, **x**, and couples an optimal linear memory, **m**, with a nonlinear hidden
state, **h**. By default, this coupling is trained via backpropagation, while the
dynamics of the memory remain fixed.

.. image:: https://i.imgur.com/IJGUVg6.png
   :target: https://i.imgur.com/IJGUVg6.png
   :alt: Computational graph

The discretized **A** and **B** matrices are initialized according to the LMU's
mathematical derivation with respect to some chosen window length, **θ**.
Backpropagation can be used to learn this time-scale, or fine-tune **A** and **B**,
if necessary.

Both the kernels, **W**, and the encoders, **e**, are learned. Intuitively, the kernels
learn to compute nonlinear functions across the memory, while the encoders learn to
project the relevant information into the memory (see `paper
<https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks.pdf>`_ for details).
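
As a quick orientation, an LMU layer can be dropped into a Keras model like any other
recurrent layer. The parameter names below follow the release notes later in this
document; the specific sizes (``order=256``, ``theta=784``, 212 hidden units) are just
plausible psMNIST-style settings, not prescribed values, and the exact call signature
should be checked against the API documentation.

.. code-block:: python

   import tensorflow as tf
   import keras_lmu

   # Minimal sketch: a sequence classifier built around a single LMU layer.
   inputs = tf.keras.Input((784, 1))  # e.g. a flattened 28x28 image as a sequence
   lmus = keras_lmu.LMU(
       memory_d=1,                                      # dimensionality of the memory
       order=256,                                       # Legendre degree d from above
       theta=784,                                       # sliding window length
       hidden_cell=tf.keras.layers.SimpleRNNCell(212),  # nonlinear hidden state h
   )(inputs)
   outputs = tf.keras.layers.Dense(10)(lmus)
   model = tf.keras.Model(inputs=inputs, outputs=outputs)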

.. note::

   The ``paper`` branch in the ``lmu`` GitHub repository includes a pre-trained
   Keras/TensorFlow model, located at ``models/psMNIST-standard.hdf5``, which obtains
   a psMNIST result of **97.15%**. Note that the network is using fewer internal
   state-variables and neurons than there are pixels in the input sequence.
   To reproduce the results from `this paper
   <https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks.pdf>`_,
   run the notebooks in the ``experiments`` directory within the ``paper`` branch.

Nengo Examples
--------------

* `LMUs in Nengo (with online learning)
  <https://www.nengo.ai/nengo/examples/learning/lmu.html>`_
* `Spiking LMUs in Nengo Loihi (with online learning)
  <https://www.nengo.ai/nengo-loihi/examples/lmu.html>`_
* `LMUs in NengoDL (reproducing SotA on psMNIST)
  <https://www.nengo.ai/nengo-dl/examples/lmu.html>`_

Citation
--------

.. code-block::

   @inproceedings{voelker2019lmu,
     title={Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks},
     author={Aaron R. Voelker and Ivana Kaji\'c and Chris Eliasmith},
     booktitle={Advances in Neural Information Processing Systems},
     pages={15544--15553},
     year={2019}
   }

***************
Release history
***************

.. Changelog entries should follow this format:

   version (release date)
   ======================

   **section**

   - One-line description of change (link to Github issue/PR)

.. Changes should be organized in one of several sections:

   - Added
   - Changed
   - Deprecated
   - Removed
   - Fixed

0.7.0 (July 20, 2023)
=====================

*Compatible with TensorFlow 2.4 - 2.13*

**Changed**

- Minimum supported Python version is now 3.8 (3.7 reached end of life in June 2023).
  (`#54`_)

.. _#54: https://github.com/nengo/keras-lmu/pull/54

0.6.0 (May 5, 2023)
===================

*Compatible with TensorFlow 2.4 - 2.11*

**Changed**

- ``LMUFeedforward`` can now be used with unknown sequence lengths, and ``LMU`` will
  use ``LMUFeedforward`` for unknown sequence lengths (as long as the other conditions
  are met, as before). (`#52`_)
- Allow ``input_to_hidden=True`` with ``hidden_cell=None``. This will act as a skip
  connection (see the sketch at the end of this entry). (`#52`_)
- Changed order of LMU states so that the LMU memory state always comes first, and
  any states from the hidden cell come afterwards. (`#52`_)

**Fixed**

- Fixed errors when setting non-default dtype on LMU layers. (`#52`_)

.. _#52: https://github.com/nengo/keras-lmu/pull/52
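
A hedged sketch of the two behaviours above; the argument names match the notes in
this entry, but the layer sizes and the ``return_sequences`` flag are assumptions to
be checked against the API documentation.

.. code-block:: python

   import tensorflow as tf
   import keras_lmu

   # Feedforward LMU over sequences of unknown length. With hidden_cell=None and
   # input_to_hidden=True, the input is passed through alongside the memory
   # output (a skip connection), per the 0.6.0 notes above.
   inputs = tf.keras.Input((None, 4))  # sequence length left unspecified
   feats = keras_lmu.LMUFeedforward(
       memory_d=4,
       order=16,
       theta=64,
       hidden_cell=None,
       input_to_hidden=True,
       return_sequences=True,  # assumption: return the full output sequence
   )(inputs)
   model = tf.keras.Model(inputs, feats)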

0.5.0 (January 26, 2023)
========================

*Compatible with TensorFlow 2.4 - 2.11*

**Added**

- Layers are registered with the Keras serialization system (no longer need to
  be passed as ``custom_objects``). (`#49`_)

.. _#49: https://github.com/nengo/keras-lmu/pull/49

0.4.2 (May 17, 2022)
====================

*Compatible with TensorFlow 2.1 - 2.9*

**Added**

- Added support for TensorFlow 2.9. (`#48`_)

.. _#48: https://github.com/nengo/keras-lmu/pull/48

0.4.1 (February 10, 2022)
=========================

*Compatible with TensorFlow 2.1 - 2.8*

**Added**

- Added support for TensorFlow 2.8. (`#46`_)
- Allow for optional bias on the memory component with the ``use_bias`` flag. (`#44`_)
- Added regularizer support for kernel, recurrent kernel, and bias. (`#44`_)

.. _#44: https://github.com/nengo/keras-lmu/pull/44
.. _#46: https://github.com/nengo/keras-lmu/pull/46

0.4.0 (August 16, 2021)
=======================

*Compatible with TensorFlow 2.1 - 2.7*

**Added**

- Setting ``kernel_initializer=None`` now removes the dense input kernel. (`#40`_)
- The ``keras_lmu.LMUFFT`` layer now supports ``memory_d > 1``. ``keras_lmu.LMU`` now
  uses this implementation for all values of ``memory_d`` when feedforward conditions
  are satisfied (no hidden-to-memory or memory-to-memory connections,
  and the sequence length is not ``None``). (`#40`_)
- Added ``trainable_theta`` option, which allows the ``theta`` parameter to be
  learned during training. (`#41`_)
- Added ``discretizer`` option, which controls the method used to solve for the ``A``
  and ``B`` LMU matrices. This is mainly useful in combination with
  ``trainable_theta=True``, where setting ``discretizer="euler"`` may improve
  training speed (possibly at the cost of some accuracy); see the sketch at the end
  of this entry. (`#41`_)
- The ``keras_lmu.LMUFFT`` layer can now use raw convolution internally (as opposed to
  FFT-based convolution). The new ``conv_mode`` option exposes this. The new
  ``truncate_ir`` option allows truncating the impulse response when running with a
  raw convolution mode, for efficiency. Whether FFT-based or raw convolution is faster
  depends on the specific model, hardware, and amount of truncation. (`#42`_)

**Changed**

- The ``A`` and ``B`` matrices are now stored as constants instead of non-trainable
  variables. This can improve the training/inference speed, but it means that saved
  weights from previous versions will be incompatible. (`#41`_)
- Renamed ``keras_lmu.LMUFFT`` to ``keras_lmu.LMUFeedforward``. (`#42`_)

**Fixed**

- Fixed dropout support in TensorFlow 2.6. (`#42`_)

.. _#40: https://github.com/nengo/keras-lmu/pull/40
.. _#41: https://github.com/nengo/keras-lmu/pull/41
.. _#42: https://github.com/nengo/keras-lmu/pull/42
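
A hedged sketch of the ``trainable_theta``/``discretizer`` combination described
above; the layer sizes are arbitrary placeholders rather than recommended values.

.. code-block:: python

   import tensorflow as tf
   import keras_lmu

   # Let theta be learned during training, and use the cheaper Euler
   # discretization so that re-solving for A and B stays fast as theta changes.
   lmu_layer = keras_lmu.LMU(
       memory_d=1,
       order=64,
       theta=1000,  # initial window length; becomes trainable below
       hidden_cell=tf.keras.layers.SimpleRNNCell(100),
       trainable_theta=True,
       discretizer="euler",
   )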

0.3.1 (November 16, 2020)
=========================

**Changed**

- Raise a validation error if ``hidden_to_memory`` or ``input_to_hidden`` are True
  when ``hidden_cell=None``. (`#26`_)

**Fixed**

- Fixed a bug with the autoswapping in ``keras_lmu.LMU`` during training. (`#28`_)
- Fixed a bug where dropout mask was not being reset properly in the hidden cell.
  (`#29`_)

.. _#26: https://github.com/nengo/keras-lmu/pull/26
.. _#28: https://github.com/nengo/keras-lmu/pull/28
.. _#29: https://github.com/nengo/keras-lmu/pull/29


0.3.0 (November 6, 2020)
========================

**Changed**

- Renamed module from ``lmu`` to ``keras_lmu`` (so it will now be imported via
  ``import keras_lmu``), renamed package from ``lmu`` to
  ``keras-lmu`` (so it will now be installed via ``pip install keras-lmu``), and
  changed any references to "NengoLMU" to "KerasLMU" (since this implementation is
  based in the Keras framework rather than Nengo). In the future the ``lmu`` namespace
  will be used as a meta-package to encapsulate LMU implementations in different
  frameworks. (`#24`_)

.. _#24: https://github.com/abr/lmu/pull/24

0.2.0 (November 2, 2020)
========================

**Added**

- Added documentation for package description, installation, usage, API, examples,
  and project information. (`#20`_)
- Added LMU FFT cell variant and auto-switching LMU class. (`#21`_)
- LMUs can now be used with any Keras RNN cell (e.g. LSTMs or GRUs) through the
  ``hidden_cell`` parameter. This can take an RNN cell (like
  ``tf.keras.layers.SimpleRNNCell`` or ``tf.keras.layers.LSTMCell``) or a feedforward
  layer (like ``tf.keras.layers.Dense``) or ``None`` (to create a memory-only LMU).
  The output of the LMU memory component will be fed to the ``hidden_cell``.
  (`#22`_)
- Added ``hidden_to_memory``, ``memory_to_memory``, and ``input_to_hidden`` parameters
  to ``LMUCell``, which can be used to enable/disable connections between components
  of the LMU. They default to disabled. (`#22`_)
- LMUs can now be used with multi-dimensional memory components. This is controlled
  through a new ``memory_d`` parameter of ``LMUCell``. (`#22`_)
- Added ``dropout`` parameter to ``LMUCell`` (which applies dropout to the input)
  and ``recurrent_dropout`` (which applies dropout to the ``memory_to_memory``
  connection, if it is enabled). Note that dropout can be added in the hidden
  component through the ``hidden_cell`` object. (`#22`_)

**Changed**

- Renamed ``lmu.lmu`` module to ``lmu.layers``. (`#22`_)
- Combined the ``*_encoders_initializer`` parameters of ``LMUCell`` into a single
  ``kernel_initializer`` parameter. (`#22`_)
- Combined the ``*_kernel_initializer`` parameters of ``LMUCell`` into a single
  ``recurrent_kernel_initializer`` parameter. (`#22`_)

**Removed**

- Removed ``Legendre``, ``InputScaled``, ``LMUCellODE``, and ``LMUCellGating``
  classes. (`#22`_)
- Removed the ``method``, ``realizer``, and ``factory`` arguments from ``LMUCell``
  (they will take on the same default values as before, they just cannot be changed).
  (`#22`_)
- Removed the ``trainable_*`` arguments from ``LMUCell``. This functionality is
  largely redundant with the new options for enabling/disabling internal LMU
  connections. Previously these arguments were mainly used to, e.g., set a connection
  to zero and then disable learning, which can now be done more efficiently by
  disabling the connection entirely. (`#22`_)
- Removed the ``units`` and ``hidden_activation`` parameters of ``LMUCell`` (these are
  now specified directly in the ``hidden_cell``). (`#22`_)
- Removed the dependency on ``nengolib``. (`#22`_)
- Dropped support for Python 3.5, which reached its end of life in September 2020.
  (`#22`_)

.. _#20: https://github.com/abr/lmu/pull/20
.. _#21: https://github.com/abr/lmu/pull/21
.. _#22: https://github.com/abr/lmu/pull/22

0.1.0 (June 22, 2020)
=====================

Initial release of KerasLMU 0.1.0! Supports Python 3.5+.

The API is considered unstable; parts are likely to change in the future.

Thanks to all of the contributors for making this possible!

            
