# VPRTempo - A Temporally Encoded Spiking Neural Network for Visual Place Recognition

[License](https://creativecommons.org/licenses/by-nc-sa/4.0/)
[QUT Centre for Robotics](https://qcr.ai)
[GitHub stars](https://github.com/QVPR/VPRTempo/stargazers)
[PyPI downloads](https://pepy.tech/project/vprtempo)
[conda-forge package](https://anaconda.org/conda-forge/vprtempo)

This repository contains code for [VPRTempo](https://vprtempo.github.io), a spiking neural network that uses temporal encoding to perform visual place recognition tasks. The network is based on [BLiTNet](https://arxiv.org/pdf/2208.01204.pdf) and adapted to the [VPRSNN](https://github.com/QVPR/VPRSNN) framework.
<p style="width: 50%; display: block; margin-left: auto; margin-right: auto">
<img src="./assets/vprtempo_example.gif" alt="VPRTempo method diagram"/>
</p>
VPRTempo is built on a [torch.nn](https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html) framework and employs custom learning rules based on the temporal codes of spikes to train its layer weights.
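For intuition, the sketch below shows a generic latency code in PyTorch, where brighter pixels spike earlier; it is purely illustrative and does not reproduce VPRTempo's exact encoding or learning rule.
```python
import torch

def intensity_to_spike_times(image: torch.Tensor, t_max: float = 1.0) -> torch.Tensor:
    """Generic latency coding: brighter pixels fire earlier (illustrative only)."""
    image = image.clamp(0.0, 1.0)   # assume intensities normalized to [0, 1]
    return (1.0 - image) * t_max    # spike time in [0, t_max]

# Example: a flattened 28x28 "place" image -> one spike time per pixel
spike_times = intensity_to_spike_times(torch.rand(28 * 28))
```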
In this repository, we provide two networks:
- `VPRTempo`: Our base network architecture to perform visual place recognition (fp32)
- `VPRTempoQuant`: A modified base network with [Quantization Aware Training (QAT)](https://pytorch.org/docs/stable/quantization.html) enabled (int8)
To use VPRTempo, please follow the instructions below for installation and usage.
## :star: Update v1.1.8: What's new?
- Provided support for MPS Apple Silicon :green_apple:
- Minor bug fixes in evaluation metrics :bug:
- New auto-downloader for pre-trained models and Nordland image subsets for easier trialling :satellite:
## License & Citation
This repository is licensed under the [MIT License](./LICENSE).
If you use our code, please cite our IEEE ICRA [paper](https://ieeexplore.ieee.org/document/10610918):
```
@inproceedings{hines2024vprtempo,
title={VPRTempo: A Fast Temporally Encoded Spiking Neural Network for Visual Place Recognition},
author={Adam D. Hines and Peter G. Stratton and Michael Milford and Tobias Fischer},
year={2024},
pages={10200-10207},
booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)}
}
```
## Installation and setup
VPRTempo uses [PyTorch](https://pytorch.org/) with the capability for [CUDA](https://developer.nvidia.com/cuda-toolkit) acceleration. Please use one of the following options below to install the required dependencies, and if desired follow the instructions to install CUDA for your hardware and operating system.
### Get the repository
Clone the GitHub repository.
```console
git clone https://github.com/QVPR/VPRTempo.git
cd VPRTempo
```
Once downloaded, please install the required dependencies to run the network through one of the following options:
### Option 1: Pip install
Dependencies for VPRTempo can be installed from our [PyPI package](https://pypi.org/project/VPRTempo/). Please ensure `python --version` is >=3.6 and <3.13.
```console
pip install vprtempo
```
If you wish to enable CUDA, please follow the instructions on the [PyTorch - Get Started](https://pytorch.org/get-started/locally/) page to install the required software versions for your hardware and operating system.
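After installing, you can quickly confirm that PyTorch can see your accelerator (CUDA on NVIDIA GPUs, or MPS on Apple Silicon):
```console
# Prints True if the CUDA build of PyTorch detects a GPU
python -c "import torch; print(torch.cuda.is_available())"
# On Apple Silicon, check the MPS backend instead
python -c "import torch; print(torch.backends.mps.is_available())"
```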
### Option 2: Local requirements install
Alternatively, dependencies can be installed from the provided `requirements.txt` file. Please ensure `python --version` is >=3.6 and <3.13.
```console
pip install -r requirements.txt
```
As above, if you wish to install CUDA please visit [PyTorch - Get Started](https://pytorch.org/get-started/locally/).
### Option 3: Conda install
>**:heavy_exclamation_mark: Recommended:**
> Use [Mambaforge](https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html) instead of conda.
Requirements for VPRTempo may be installed using our [conda-forge package](https://anaconda.org/conda-forge/vprtempo).
```console
# Linux/OS X
conda create -n vprtempo -c conda-forge vprtempo
# Linux CUDA enabled
conda create -n vprtempo -c conda-forge -c pytorch -c nvidia vprtempo pytorch-cuda cudatoolkit
# Windows
conda create -n vprtempo -c pytorch python pytorch torchvision torchaudio cpuonly prettytable tqdm numpy pandas matplotlib requests
# Windows CUDA enabled
conda create -n vprtempo -c pytorch -c nvidia python torchvision torchaudio pytorch-cuda=11.7 cudatoolkit prettytable tqdm numpy pandas matplotlib requests
```
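After creating the environment, activate it and run a quick import check:
```console
conda activate vprtempo
python -c "import torch; print(torch.__version__)"
```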
## Datasets
VPRTempo was designed to be simple to train and test on a variety of datasets. See the information below on running the Nordland and Oxford RobotCar experiments and on organizing custom datasets.
Please note that while we trained on 3,300 places for Nordland and 450 for Oxford RobotCar, we only evaluated 2,700 and 360 places, respectively, ignoring the first 20% (see [Sect. 4B Datasets](https://ieeexplore.ieee.org/document/10610918)).
### Nordland
VPRTempo was developed and tested using the [Nordland](https://nrkbeta.no/2013/01/15/nordlandsbanen-minute-by-minute-season-by-season/) traversal dataset. To download the full dataset, please visit [this repository](https://huggingface.co/datasets/Somayeh-h/Nordland?row=0).
To simplify first usage, we have set the defaults in `main.py` to train and test on a small subset of Nordland data with pre-trained models, automatically downloaded on first usage (see Pre-trained models, below).
For convenience, all data should be organised in the `./dataset` folder in the following way in order to train the network on multiple traversals of the same location.
```
--dataset
|--summer
|--spring
|--fall
|--winter
```
To replicate the results in our paper, please run the following.
```console
# Train the Nordland model
python main.py --train_new_model --database_places 3300 --database_dirs spring,fall --skip 0 --max_module 1100 --dataset nordland --dims 28,28 --patches 7 --filter 7
# Test the Nordland model
python main.py --database_places 3300 --database_dirs spring,fall --skip 7999 --dataset nordland --dims 28,28 --patches 7 --filter 7 --query_dir summer --query_places 2700 --sim_mat --max_module 1100
```
### Oxford RobotCar
To train and test on Oxford RobotCar, you will first need to [register an account](https://mrgdatashare.robots.ox.ac.uk/register/) to gain access to the dataset. We use three traverses (sun 2015-08-12-15-04-18, dusk 2014-11-21-16-07-03, and rain 2015-10-29-12-18-17) recorded from the `stereo_left` camera, which can be downloaded using the [RobotCarDataset-Scraper](https://github.com/mttgdd/RobotCarDataset-Scraper) as follows:
```console
# Copy orc_list.txt from this repo into the RobotCarDataset-Scraper repo
python scrape_mrgdatashare.py --choice_sensors stereo_left --choice_runs_file orc_list.txt --downloads_dir ~/VPRTempo/vprtempo/dataset/orc --datasets_file datasets.csv --username USERNAME --password PASSWORD
```
Next, use our helper script `process_orc.py` to demosaic and denoise the downloaded images. You'll need to download the [robotcar-dataset-sdk](https://github.com/ori-mrg/robotcar-dataset-sdk) repository and place `process_orc.py` into its `python` directory. Modify the `base_path` variable in `process_orc.py` to point to the location of your downloaded images.
```console
# Navigate to python directory, ensure process_orc.py and orc.csv are in this directory
cd ~/robotcar-dataset-sdk/python
# Run the demosaic and denoise
python process_orc.py
```
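For reference, the demosaic-and-denoise step can be approximated with OpenCV as sketched below; this is an illustrative sketch, not the contents of `process_orc.py`, and the Bayer pattern, file extension, and in-place overwrite are assumptions you may need to adjust.
```python
import glob
import os

import cv2  # OpenCV is assumed to be available for this sketch

base_path = os.path.expanduser("~/VPRTempo/vprtempo/dataset/orc")  # location of the downloaded images

for path in glob.glob(os.path.join(base_path, "**", "*.png"), recursive=True):
    raw = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                       # raw Bayer frame from stereo_left
    rgb = cv2.cvtColor(raw, cv2.COLOR_BayerGB2BGR)                     # demosaic (GBRG pattern assumed)
    clean = cv2.fastNlMeansDenoisingColored(rgb, None, 10, 10, 7, 21)  # non-local means denoising
    cv2.imwrite(path, clean)                                           # overwrite with the processed image
```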
To replicate the results in our paper, please run the following.
```console
# Train the ORC model
python main.py --train_new_model --database_places 450 --database_dirs sun,rain --skip 0 --max_module 450 --dataset orc --dims 28,28 --patches 7 --filter 7
# Test the ORC model
python main.py --database_places 450 --database_dirs sun,rain --skip 630 --dataset orc --dims 28,28 --patches 7 --filter 7 --query_dir dusk --query_places 360 --sim_mat --max_module 450
```
### Custom Datasets
To define your own custom dataset to use with VPRTempo, you will need to follow the conventions for [PyTorch Datasets & Dataloaders](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html). We include a convenience script, `./vprtempo/src/create_data_csv.py`, which generates a .csv file that can be used to load custom datasets for training and inference. Simply set the `dataset_name` variable to the folder containing your images.
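As a rough sketch of what such a .csv index looks like, the snippet below lists a folder of images into a single-column file with pandas; the column name and folder layout here are assumptions, so check `create_data_csv.py` for the exact format the loader expects.
```python
import os

import pandas as pd

dataset_name = "my_dataset"  # hypothetical folder under ./vprtempo/dataset containing your images
image_dir = os.path.join("./vprtempo/dataset", dataset_name)

# One row per image, sorted so that row order matches place order
images = sorted(f for f in os.listdir(image_dir) if f.lower().endswith((".png", ".jpg", ".jpeg")))
pd.DataFrame({"image_name": images}).to_csv(
    os.path.join("./vprtempo/dataset", f"{dataset_name}.csv"), index=False
)
```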
To train a new model with a custom dataset, you can do the following.
```console
# Train new model - requires .csv file generated by create_data_csv.py
python main.py --train_new_model --dataset <your custom database name> --database_dirs <your custom database name>
# Test new model
python main.py --database_dirs <your custom database name> --dataset <your custom query name> --query_dir <your custom query name>
```
If image names are equivalent between the database and query directories, you can use a single .csv file for both, as in the Nordland and Oxford RobotCar examples.
## Usage
Running VPRTempo and VPRTempoQuant is handled by `main.py`, which can be run either from the command terminal or by executing the script directly. See below for more details.
### Prerequisites
* Training and testing data is organized as above (see **Datasets** on how to set up the Nordland dataset)
* The VPRTempo dependencies have been installed and/or the conda environment has been activated
### Pretrained models
We provide two pretrained models, for `VPRTempo` and `VPRTempoQuant`, that have learned a 500-place sequence from two Nordland traversals (Spring & Fall) and can be used for inference on the Summer or Winter traversals. To get the pre-trained models and Nordland images, simply run either inference network (below), which will automatically download and sort the models and images into the VPRTempo folder.
### Run the inference network
The `main.py` script handles running the inference network; there are two options:
#### Command terminal
```console
python main.py
```
<p style="width: 100%; display: block; margin-left: auto; margin-right: auto">
<img src="./assets/main_example.gif" alt="Example of the base VPRTempo networking running"/>
</p>
To run the quantized network, pass the `--quantize` argument. (Please note, MPS is not currently supported with PyTorch QAT.)
```console
python main.py --quantize
```
<p style="width: 100%; display: block; margin-left: auto; margin-right: auto">
<img src="./assets/mainquant_example.gif" alt="Example of the quantized VPRTempo networking running"/>
</p>
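For background, PyTorch QAT in eager mode follows the pattern sketched below; this is a generic illustration of the workflow linked above, not the VPRTempoQuant implementation.
```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # fp32 -> int8 entry point after conversion
        self.fc = nn.Linear(784, 500)
        self.dequant = tq.DeQuantStub()  # int8 -> fp32 exit point

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = TinyNet().train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")  # x86 backend; use "qnnpack" on ARM
tq.prepare_qat(model, inplace=True)                   # insert fake-quantization observers
# ... run the usual fp32 training loop here ...
model_int8 = tq.convert(model.eval())                 # produce the int8 model for inference
```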
### Train new network
If you do not wish to use the pretrained models, or you would like to train your own, pass the `--train_new_model` flag to `main.py`. Note that if a pretrained model already exists, you will be prompted to confirm whether you would like to retrain it.
```console
# For VPRTempo
python main.py --train_new_model
# For VPRTempoQuant
python main.py --train_new_model --quantize
```
<p style="width: 100%; display: block; margin-left: auto; margin-right: auto">
<img src="./assets/train_example.gif" alt="Example of the training VPRTempo networking running"/>
</p>
## Tutorials
We provide a series of Jupyter Notebook [tutorials](https://github.com/QVPR/VPRTempo/tree/main/tutorials) that go through the basic operations and logic for VPRTempo and VPRTempoQuant.
## Issues, bugs, and feature requests
If you encounter problems whilst running the code or if you have a suggestion for a feature or improvement, please report it as an [issue](https://github.com/QVPR/VPRTempo/issues).