# :eye: LENS - Locational Encoding with Neuromorphic Systems
![PyTorch](https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?style=for-the-badge&logo=PyTorch&logoColor=white)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=flat-square)](./LICENSE)
[![QUT Centre for Robotics](https://img.shields.io/badge/collection-QUT%20Robotics-%23043d71?style=flat-square)](https://qcr.ai)
[![stars](https://img.shields.io/github/stars/AdamDHines/LENS.svg?style=flat-square)](https://github.com/AdamDHines/LENS/stargazers)
[![Downloads](https://static.pepy.tech/badge/lens-vpr)](https://pepy.tech/project/lens-vpr)
[![Conda Version](https://img.shields.io/conda/vn/conda-forge/lens-vpr.svg)](https://anaconda.org/conda-forge/lens-vpr)
![PyPI - Version](https://img.shields.io/pypi/v/lens-vpr)
[![GitHub repo size](https://img.shields.io/github/repo-size/AdamDHines/LENS.svg?style=flat-square)](./README.md)
This repository contains code for **LENS** - **L**ocational **E**ncoding with **N**euromorphic **S**ystems. LENS combines neuromorphic algorithms, sensors, and hardware to perform accurate, real-time robotic localization using visual place recognition (VPR). LENS can be used with the SynSense Speck2fDevKit board, which houses a [SPECK<sup>TM</sup>](https://www.synsense.ai/products/speck-2/) dynamic vision sensor and neuromorphic processor for online VPR.
## License and citation
This repository is licensed under the [MIT License](./LICENSE). If you use our code, please cite our arXiv paper:
```bibtex
@misc{hines2024lens,
  title={A compact neuromorphic system for ultra energy-efficient, on-device robot localization},
  author={Adam D. Hines and Michael Milford and Tobias Fischer},
  year={2024},
  eprint={2408.16754},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2408.16754},
}
```
## Installation and setup
To run LENS, please download this repository and install the required dependencies.
### Get the code
Get the code by cloning the repository.
```console
git clone git@github.com:AdamDHines/LENS.git
cd LENS
```
### Install dependencies
All dependencies can be installed from our [conda-forge package](https://anaconda.org/conda-forge/lens-vpr), [PyPI package](https://pypi.org/project/lens-vpr/), or the local `requirements.txt`. For the conda-forge package, we recommend using [micromamba](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html) or [miniforge](https://github.com/conda-forge/miniforge). Please ensure your Python version is <= 3.11.
#### conda package
```console
# Create a new environment and install packages
micromamba create -n lens-vpr -c conda-forge lens-vpr
# samna package is not available on conda-forge, so pip install it
micromamba activate lens-vpr
pip install samna
```
#### pip
```console
# Install from our PyPI package
pip install lens-vpr
# Install from local requirements.txt
pip install -r requirements.txt
```
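As a quick sanity check after either installation route, the core dependencies should import cleanly. The snippet below is a minimal check of that, not part of the LENS CLI:

```python
# Minimal post-install sanity check: torch and sinabs are core LENS
# dependencies (see requirements.txt); samna is the pip-installed
# SynSense package needed for the Speck2fDevKit.
import torch
import sinabs
import samna

print("torch", torch.__version__)
```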
## Quick start
Get started using our pretrained models and datasets to evaluate the system. For a full guide on training and evaluating your own datasets, please visit our [Wiki](https://github.com/AdamDHines/LENS/wiki).
### Run the inferencing model
To run a simulated event stream, you can try our pre-trained model and datasets. Using the `--sim_mat` and `--matching` flags will display a similarity matrix and perform Recall@N matching against a ground-truth matrix.
```console
python main.py --sim_mat --matching
```
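For context on what the `--matching` evaluation reports, Recall@N is the fraction of query places whose N most similar references contain a true match. The sketch below illustrates that computation over a similarity matrix and a boolean ground-truth matrix; the variable names are hypothetical and this is not the code used in `main.py`:

```python
import numpy as np

def recall_at_n(similarity, ground_truth, n=1):
    """Fraction of queries whose top-N most similar references
    include at least one ground-truth match.

    similarity:   (num_queries, num_references) float array
    ground_truth: (num_queries, num_references) boolean array
    """
    # Indices of the N highest-similarity references for each query
    top_n = np.argsort(-similarity, axis=1)[:, :n]
    hits = [ground_truth[q, top_n[q]].any() for q in range(similarity.shape[0])]
    return float(np.mean(hits))

# Toy example: 3 queries against 4 reference places
sim = np.random.rand(3, 4)
gt = np.zeros((3, 4), dtype=bool)
gt[0, 1] = gt[1, 2] = gt[2, 0] = True
print(recall_at_n(sim, gt, n=1))
```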
### Train a new model
New models can be trained by passing the `--train_model` flag. Try training a new model with our provided reference dataset.
```console
# Train a new model
python main.py --train_model
```
### Optimize network hyperparameters
For new models on custom datasets, you can optimize your network hyperparameters using [Weights & Biases](https://wandb.ai/site) through our convenient `optimizer.py` script.
```console
# Optimize network hyperparameters
python optimizer.py
```
For more details, please visit the [Wiki](https://github.com/AdamDHines/LENS/wiki/Setting-up-and-using-the-optimizer).
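As background, a typical Weights & Biases hyperparameter sweep follows the pattern sketched below. The parameter names (`threshold`, `learning_rate`) and the metric are placeholders, not LENS's actual search space; `optimizer.py` and the Wiki define the real configuration:

```python
import wandb

# Hypothetical sweep configuration; parameter names are placeholders.
sweep_config = {
    "method": "bayes",
    "metric": {"name": "recall_at_1", "goal": "maximize"},
    "parameters": {
        "threshold": {"min": 0.1, "max": 1.0},
        "learning_rate": {"min": 1e-4, "max": 1e-1},
    },
}

def train():
    with wandb.init() as run:
        config = wandb.config
        # ... train and evaluate a model using config.threshold etc. ...
        wandb.log({"recall_at_1": 0.0})  # replace with the real metric

sweep_id = wandb.sweep(sweep_config, project="lens-vpr")
wandb.agent(sweep_id, function=train, count=20)
```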
### Deployment on neuromorphic hardware
If you have a SynSense Speck2fDevKit, you can try out LENS using our pre-trained model and datasets by deploying simulated event streams on-chip.
```console
# Generate a time-based simulation of event streams with pre-recorded data
python main.py --simulated_speck --sim_mat --matching
```
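Conceptually, the simulated event stream replays pre-recorded DVS events as if the sensor were producing them live. One common way to consume such a recording is to bin events into fixed-duration frames, as in the sketch below; the event layout and binning here are illustrative assumptions, not the internals of `--simulated_speck`:

```python
import numpy as np

def events_to_frames(events, sensor_shape=(128, 128), window_us=33_000):
    """Bin DVS events into fixed-duration count frames.

    events: (N, 4) array of (timestamp_us, x, y, polarity);
            an illustrative layout, not the LENS recording format.
    """
    t = events[:, 0]
    bins = ((t - t.min()) // window_us).astype(int)
    frames = np.zeros((int(bins.max()) + 1, *sensor_shape), dtype=np.int32)
    for b, x, y in zip(bins, events[:, 1].astype(int), events[:, 2].astype(int)):
        frames[b, y, x] += 1  # accumulate event counts per pixel
    return frames
```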
Additionally, models can be deployed onto the Speck2fDevKit for low-latency, energy-efficient VPR with sequence matching in real time. Use the `--event_driven` flag to start the online inferencing system.
```console
# Run the online inferencing model
python main.py --event_driven
```
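Sequence matching aggregates per-frame similarities over short windows of consecutive observations, making the place decision more robust than single-frame matching. A minimal diagonal-summation version, assuming query and reference traverses at roughly equal speed, is sketched below; the on-chip implementation will differ:

```python
import numpy as np

def sequence_match(similarity, seq_len=4):
    """Aggregate similarity along length-`seq_len` diagonals.

    similarity: (num_queries, num_references) array, higher = more similar.
    Returns sequence scores and the best-matching reference index
    for each query window.
    """
    q, r = similarity.shape
    scores = np.zeros((q - seq_len + 1, r - seq_len + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            # Sum similarities along the diagonal starting at (i, j)
            scores[i, j] = sum(similarity[i + k, j + k] for k in range(seq_len))
    return scores, scores.argmax(axis=1)
```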
For more details on deployment to the Speck2fDevKit, please visit the [Wiki](https://github.com/AdamDHines/LENS/wiki/Deploying-to-Speck2fDevKit).
## Issues, bugs, and feature requests
If you encounter problems whilst running the code or if you have a suggestion for a feature or improvement, please report it as an [issue](https://github.com/AdamDHines/LENS/issues).