torch-kf


Name: torch-kf
Version: 0.0.4
Home page: https://github.com/raphaelreme/torch-kf
Summary: Kalman Filter implementation with PyTorch
Upload time: 2024-05-06 16:54:54
Maintainer: None
Docs URL: None
Author: Raphael Reme
Requires Python: >=3.7
License: MIT
Keywords: kalman filter, statistics, pytorch
# torch-kf

[![Lint and Test](https://github.com/raphaelreme/torch-kf/actions/workflows/tests.yml/badge.svg)](https://github.com/raphaelreme/torch-kf/actions/workflows/tests.yml)

PyTorch implementation of Kalman filters. It supports filtering batches of signals and runs on GPU or multiple CPUs (through PyTorch).

This is based on rlabbe's [filterpy](https://github.com/rlabbe/filterpy) and his [interactive book](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/) on Kalman filters. Currently, only the traditional Kalman filter and RTS smoothing are implemented.

This implementation is designed for use cases with multiple signals to filter. By construction, the computations of a single Kalman filter are sequential and cannot be parallelized, and they usually involve quite small matrices (for physics-based systems, the state is usually restricted to fewer than 10 dimensions), which cannot benefit from GPU or multi-CPU parallelization. This is no longer true when there are multiple signals to filter in parallel (or multiple filters to run in parallel), which happens quite often.

`torch-kf` natively supports batched computations of Kalman filters (no need to loop over your batch of signals). Moreover, thanks to PyTorch, it automatically distributes the computations across your CPUs and can also run on GPU. It is therefore much faster (**up to 1000 times faster**) when batches of signals are involved. If you have fewer than 10 signals to filter, [filterpy](https://github.com/rlabbe/filterpy) will still be faster (up to 10 times faster for a single signal), because PyTorch has a huge overhead when small matrices are involved.

This implementation is quite simple, but not very user-friendly for people unfamiliar with PyTorch (or NumPy) broadcasting rules. We highly recommend that you read about broadcasting before trying to use this library; a minimal illustration is given below.
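
As a quick illustration of the broadcasting rules involved (plain PyTorch, no torch-kf API), a single `(4, 4)` matrix broadcasts against a whole batch of `(100, 4, 1)` column-vector states:

```python
import torch

F = torch.eye(4)                 # A single (4, 4) transition matrix
states = torch.randn(100, 4, 1)  # A batch of 100 column-vector states

# torch.matmul broadcasts the (4, 4) matrix over the leading batch dimension,
# so all 100 signals are propagated at once, without a Python loop.
predicted = F @ states
print(predicted.shape)  # torch.Size([100, 4, 1])
```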

> [!WARNING]
> torch-kf runs in float32 by default and is implemented with the fastest, but sadly not the most numerically stable, scheme.
> We have not faced any real issue yet, but be aware that this may become one for some use-cases.
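
If numerical stability ever becomes an issue, one possible workaround is to build the filter in float64. This is only a sketch under the assumption that `KalmanFilter` adopts the dtype of the tensors it is given; check the actual behaviour before relying on it:

```python
import torch
from torch_kf import KalmanFilter

# Assumption: the filter keeps the dtype of the matrices it receives,
# so building them in float64 runs the whole filter in double precision.
F = torch.eye(4, dtype=torch.float64)
H = torch.eye(2, 4, dtype=torch.float64)  # Measure the first two state dimensions
Q = torch.eye(4, dtype=torch.float64) * 1.5**2
R = torch.eye(2, dtype=torch.float64) * 3**2

kf = KalmanFilter(F, H, Q, R)
```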

## Install

### Pip

```bash
$ pip install torch-kf
```

### Conda

Not yet available



## Getting started

```python

import torch
from torch_kf import KalmanFilter, GaussianState

# Some noisy_data to filter
# 1000 timesteps, 100 signals, 2 measured dimensions, plus a trailing dimension
# so that each measurement is a column vector (required for correct matmul)
noisy_data = torch.randn(1000, 100, 2, 1)

# Create a Kalman Filter (for instance a constant velocity filter) (See example or rlabbe's book)
F = torch.tensor([  # x_{t+1} = x_{t} + v_{t} * dt     (dt = 1)
    [1, 0, 1, 0.],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])
Q = torch.eye(4) * 1.5**2  # 1.5 std on both pos and velocity (see rlabbe's book or the sketch after this block for a better Q)
H = torch.tensor([  # Only x and y are measured
    [1, 0, 0, 0.],
    [0, 1, 0, 0],
])
R = torch.eye(2) * 3**2

kf = KalmanFilter(F, H, Q, R)

# Create an initial belief for each signal
# For instance let's start from 0 pos and 0 vel with a huge uncertainty
state = GaussianState(
    torch.zeros(100, 4, 1),  # Shape (100, 4, 1)
    torch.eye(4)[None].expand(100, 4, 4) * 150**2,  # Shape (100, 4, 4)
)

# And let's filter and save our signals all at once
# Store the state (x, y, dx, dy) for each element in the batch and each time
filtered_data = torch.empty((1000, 100, 4, 1))

for t, measure in enumerate(noisy_data):  # Update first and then predict in this case
    # Update with measure at time t
    state = kf.update(state, measure)

    # Save state at time t
    filtered_data[t] = state.mean

    # Predict for t + 1
    state = kf.predict(state)

# Alternatively you can use the already implemented filter method:
states = kf.filter(state, noisy_data, update_first=True, return_all=True)
# states.mean: (1000, 100, 4, 1)
# states.covariance: (1000, 100, 4, 4)

# And optionally smooth the data using RTS smoothing (not online: all data must already be available and collected)
smoothed = kf.rts_smooth(states)
# smoothed.mean: (1000, 100, 4, 1)
# smoothed.covariance: (1000, 100, 4, 4)

```
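
The identity `Q` used above is a rough choice. A common alternative (from rlabbe's book) is the discrete white-noise acceleration model, where a single acceleration noise is integrated over the timestep. Below is a sketch for the `(x, y, dx, dy)` state used above; the `constant_velocity_q` helper is ours, not part of `torch-kf`.

```python
import torch

def constant_velocity_q(dt: float, sigma_a: float) -> torch.Tensor:
    """Discrete white-noise process covariance for an (x, y, dx, dy) constant velocity state."""
    # 2x2 block for a single [position, velocity] pair
    block = torch.tensor([
        [dt**4 / 4, dt**3 / 2],
        [dt**3 / 2, dt**2],
    ]) * sigma_a**2

    q = torch.zeros(4, 4)
    for pos, vel in ((0, 2), (1, 3)):  # (x, dx) then (y, dy)
        q[pos, pos], q[pos, vel] = block[0, 0], block[0, 1]
        q[vel, pos], q[vel, vel] = block[1, 0], block[1, 1]
    return q

Q = constant_velocity_q(dt=1.0, sigma_a=1.5)  # Drop-in replacement for the Q above
```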

## Examples

We provide simple examples of constant velocity Kalman filters (1D, 2D, ...) in the `example` folder, using batches of signals.

For instance, if the system is a sinusoidal function measured with noise, we can filter and smooth the data with Kalman filters. Here is such an example with `nan` measurements in the middle of the filtering process:
![Sinusoidal position](images/sinusoidal_pos.png)
![Sinusoidal velocity](images/sinusoidal_vel.png)

We also benchmark our implementation to check when it is faster than filterpy. On a laptop with fairly good CPUs and a somewhat dated GPU, we typically obtain the following performance:

![Computational time](images/computational_time.png)

One can see that both the CPU and GPU versions have a large overhead when the batch is small, but they can lead to a 200x (500x for GPU) speed-up or more when numerous signals are filtered together.
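
A minimal sketch for reproducing this kind of measurement on your own machine, using only the `KalmanFilter.filter` API from the Getting started section (the batch sizes and timing loop are ours, purely illustrative):

```python
import time

import torch
from torch_kf import KalmanFilter, GaussianState

# Same constant velocity filter as in the Getting started section
F = torch.tensor([[1, 0, 1, 0.], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]])
H = torch.tensor([[1, 0, 0, 0.], [0, 1, 0, 0]])
kf = KalmanFilter(F, H, torch.eye(4) * 1.5**2, torch.eye(2) * 3**2)

for batch in (1, 100, 1_000):  # Small batches are dominated by PyTorch overhead
    noisy_data = torch.randn(1000, batch, 2, 1)
    state = GaussianState(
        torch.zeros(batch, 4, 1),
        torch.eye(4)[None].expand(batch, 4, 4) * 150**2,
    )

    start = time.perf_counter()
    kf.filter(state, noisy_data, update_first=True, return_all=True)
    print(f"batch={batch}: {time.perf_counter() - start:.3f}s for 1000 timesteps")
```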


## Contribute

Please feel free to open a PR or an issue at any time.

Many variants of Kalman filtering/smoothing are still missing, and the documentation is pretty poor. In comparison, [filterpy](https://github.com/rlabbe/filterpy) is a much more complete library and may give some ideas of what is missing.

<!-- ## Cite Us

This library was initially developed for multiple particle tracking in biology. If you find this library useful and use it in your own research, please cite us:
 -->


            
