# :eye: LENS - Locational Encoding with Neuromorphic Systems
![PyTorch](https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?style=for-the-badge&logo=PyTorch&logoColor=white)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=flat-square)](./LICENSE)
[![QUT Centre for Robotics](https://img.shields.io/badge/collection-QUT%20Robotics-%23043d71?style=flat-square)](https://qcr.ai)
[![stars](https://img.shields.io/github/stars/AdamDHines/LENS.svg?style=flat-square)](https://github.com/AdamDHines/LENS/stargazers)
[![Downloads](https://static.pepy.tech/badge/lens-vpr)](https://pepy.tech/project/lens-vpr)
[![Conda Version](https://img.shields.io/conda/vn/conda-forge/lens-vpr.svg)](https://anaconda.org/conda-forge/lens-vpr)
![PyPI - Version](https://img.shields.io/pypi/v/lens-vpr)
[![GitHub repo size](https://img.shields.io/github/repo-size/AdamDHines/LENS.svg?style=flat-square)](./README.md)
This repository contains code for **LENS** - **L**ocational **E**ncoding with **N**euromorphic **S**ystems. LENS combines neuromorphic algorithms, sensors, and hardware to perform accurate, real-time robotic localization using visual place recognition (VPR). LENS can be used with the SynSense Speck2fDevKit board, which houses a [SPECK<sup>TM</sup>](https://www.synsense.ai/products/speck-2/) dynamic vision sensor and neuromorphic processor for online VPR.
## License and citation
This repository is licensed under the [MIT License](./LICENSE). If you use our code, please cite our arXiv paper:
```
@misc{hines2024lens,
      title={A compact neuromorphic system for ultra energy-efficient, on-device robot localization},
      author={Adam D. Hines and Michael Milford and Tobias Fischer},
      year={2024},
      eprint={2408.16754},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2408.16754},
}
```
## Installation and setup
To run LENS, please download this repository and install the required dependencies.
### Get the code
Get the code by cloning the repository.
```console
git clone git@github.com:AdamDHines/LENS.git
cd LENS
```
### Install dependencies
All dependencies can be installed from our [conda-forge package](https://anaconda.org/conda-forge/lens-vpr), our [PyPI package](https://pypi.org/project/lens-vpr/), or the local `requirements.txt`. For the conda-forge package, we recommend using [micromamba](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html) or [miniforge](https://github.com/conda-forge/miniforge). Please ensure your Python version is 3.11 or earlier.
#### conda package
```console
# Create a new environment and install packages
micromamba create -n lens-vpr -c conda-forge lens-vpr
# samna package is not available on conda-forge, so pip install it
micromamba activate lens-vpr
pip install samna
```
#### pip
```console
# Install from our PyPI package
pip install lens-vpr
# Install from local requirements.txt
pip install -r requirements.txt
```
## Quick start
Get started using our pretrained models and datasets to evaluate the system. For a full guide on training and evaluating your own datasets, please visit our [Wiki](https://github.com/AdamDHines/LENS/wiki).
### Run the inferencing model
To run a simulated event stream, you can try our pre-trained model and datasets. Passing the `--sim_mat` and `--matching` flags will display a similarity matrix and perform Recall@N matching against a ground truth matrix.
```console
python main.py --sim_mat --matching
```
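For intuition, Recall@N counts a query as correct if any of its ground-truth matches appears among the N most similar reference places. Below is a minimal sketch of the metric, assuming an illustrative similarity matrix `S` and binary ground-truth matrix `GT`; these names are for illustration only and are not LENS internals.
```python
import numpy as np

def recall_at_n(S: np.ndarray, GT: np.ndarray, n: int) -> float:
    """Fraction of queries whose ground-truth match is among the top-n candidates.

    S:  (num_queries, num_references) similarity matrix
    GT: (num_queries, num_references) binary ground-truth matrix
    """
    hits = 0
    for q in range(S.shape[0]):
        top_n = np.argsort(S[q])[::-1][:n]  # n most similar reference indices
        hits += bool(GT[q, top_n].any())    # hit if any candidate is a true match
    return hits / S.shape[0]

# Toy example: 3 queries against 4 reference places
S = np.array([[0.9, 0.1, 0.2, 0.3],
              [0.2, 0.8, 0.1, 0.4],
              [0.3, 0.2, 0.1, 0.7]])
GT = np.eye(3, 4)
print(recall_at_n(S, GT, n=1))  # ~0.67: query 2's true match falls outside the top 1
```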
### Train a new model
New models can be trained by passing the `--train_model` flag. Try training a new model with our provided reference dataset.
```console
# Train a new model
python main.py --train_model
```
### Optimize network hyperparameters
For new models on custom datasets, you can optimize your network hyperparameters using [Weights & Biases](https://wandb.ai/site) through our convenient `optimizer.py` script.
```console
# Optimize network hyperparameters
python optimizer.py
```
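For a sense of what the sweep involves, the sketch below shows the generic Weights & Biases pattern: define a search space, then let an agent sample configurations and log a target metric. The parameter names and the `train_and_evaluate` objective here are illustrative assumptions, not the actual LENS hyperparameters; those are defined in `optimizer.py`.
```python
import wandb

# Illustrative search space; the real LENS hyperparameters live in optimizer.py
sweep_config = {
    "method": "bayes",
    "metric": {"name": "recall_at_1", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "epochs": {"values": [2, 4, 8]},
    },
}

def train_and_evaluate():
    """Hypothetical objective: train with one sampled config and log the metric."""
    with wandb.init() as run:
        config = wandb.config
        # ... train and evaluate using config.learning_rate, config.epochs ...
        recall_at_1 = 0.0  # placeholder for the real evaluation result
        run.log({"recall_at_1": recall_at_1})

sweep_id = wandb.sweep(sweep_config, project="lens-vpr-sweeps")
wandb.agent(sweep_id, function=train_and_evaluate, count=10)
```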
For more details, please visit the [Wiki](https://github.com/AdamDHines/LENS/wiki/Setting-up-and-using-the-optimizer).
### Deployment on neuromorphic hardware
If you have a SynSense Speck2fDevKit, you can try out LENS using our pre-trained model and datasets by deploying simulated event streams on-chip.
```console
# Generate a time-based simulation of event streams with pre-recorded data
python main.py --simulated_speck --sim_mat --matching
```
Additionally, models can be deployed onto the Speck2fDevKit for low-latency, energy-efficient VPR with sequence matching in real time. Use the `--event_driven` flag to start the online inferencing system.
```console
# Run the online inferencing model
python main.py --event_driven
```
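For reference, deployment with the [sinabs](https://sinabs.readthedocs.io) library (a LENS dependency) typically wraps a spiking network in a `DynapcnnNetwork` and ports it to the devkit. The sketch below is a toy example under that assumption; the network architecture and device handling that LENS actually uses live in this repository, not here.
```python
import torch.nn as nn
from sinabs.layers import IAFSqueeze
from sinabs.backend.dynapcnn import DynapcnnNetwork

# Toy spiking convolutional network; LENS defines its own architecture
snn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, stride=2, bias=False),
    IAFSqueeze(batch_size=1),
    nn.Conv2d(8, 16, kernel_size=3, stride=2, bias=False),
    IAFSqueeze(batch_size=1),
    nn.Flatten(),
    nn.Linear(16 * 31 * 31, 10, bias=False),
    IAFSqueeze(batch_size=1),
)

# Map the network onto the chip's convolutional cores (128x128 DVS input)
dynapcnn = DynapcnnNetwork(snn, input_shape=(1, 128, 128), dvs_input=True)
dynapcnn.to(device="speck2fdevkit:0")  # requires attached hardware and samna
```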
For more details on deployment to the Speck2fDevKit, please visit the [Wiki](https://github.com/AdamDHines/LENS/wiki/Deploying-to-Speck2fDevKit).
## Issues, bugs, and feature requests
If you encounter problems whilst running the code, or if you have a suggestion for a feature or improvement, please report it as an [issue](https://github.com/AdamDHines/LENS/issues).