patchnetvlad

Name: patchnetvlad
Version: 0.1.8
Home page: https://github.com/QVPR/Patch-NetVLAD
Summary: Patch-NetVLAD: An open-source Python implementation of the CVPR2021 paper
Upload time: 2024-03-13 00:16:42
Author: Stephen Hausler, Sourav Garg, Ming Xu, Michael Milford and Tobias Fischer
Requires Python: >=3.6
License: MIT
Keywords: python, place recognition, image retrieval, computer vision, robotics
            # Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=flat-square)](./LICENSE)
[![stars](https://img.shields.io/github/stars/QVPR/Patch-NetVLAD.svg?style=flat-square)](https://github.com/QVPR/Patch-NetVLAD/stargazers)
[![GitHub issues](https://img.shields.io/github/issues/QVPR/Patch-NetVLAD.svg?style=flat-square)](https://github.com/QVPR/Patch-NetVLAD/issues)
[![GitHub closed issues](https://img.shields.io/github/issues-closed-raw/QVPR/Patch-NetVLAD?style=flat-square)](https://github.com/QVPR/Patch-NetVLAD/issues?q=is%3Aissue+is%3Aclosed)
[![GitHub repo size](https://img.shields.io/github/repo-size/QVPR/Patch-NetVLAD.svg?style=flat-square)](./README.md)
[![QUT Centre for Robotics](https://img.shields.io/badge/collection-QUT%20Robotics-%23043d71?style=flat-square)](https://qcr.github.io/collection/vpr_overview/)

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/patch-netvlad-multi-scale-fusion-of-locally/visual-localization-on-extended-cmu-seasons&style=flat-square)](https://paperswithcode.com/sota/visual-localization-on-extended-cmu-seasons?p=patch-netvlad-multi-scale-fusion-of-locally)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/patch-netvlad-multi-scale-fusion-of-locally/visual-place-recognition-on-mapillary-val&style=flat-square)](https://paperswithcode.com/sota/visual-place-recognition-on-mapillary-val?p=patch-netvlad-multi-scale-fusion-of-locally)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/patch-netvlad-multi-scale-fusion-of-locally/visual-place-recognition-on-nordland&style=flat-square)](https://paperswithcode.com/sota/visual-place-recognition-on-nordland?p=patch-netvlad-multi-scale-fusion-of-locally)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/patch-netvlad-multi-scale-fusion-of-locally/visual-place-recognition-on-pittsburgh-30k&style=flat-square)](https://paperswithcode.com/sota/visual-place-recognition-on-pittsburgh-30k?p=patch-netvlad-multi-scale-fusion-of-locally)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/patch-netvlad-multi-scale-fusion-of-locally/visual-localization-on-robotcar-seasons-v2&style=flat-square)](https://paperswithcode.com/sota/visual-localization-on-robotcar-seasons-v2?p=patch-netvlad-multi-scale-fusion-of-locally)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/patch-netvlad-multi-scale-fusion-of-locally/visual-place-recognition-on-tokyo247&style=flat-square)](https://paperswithcode.com/sota/visual-place-recognition-on-tokyo247?p=patch-netvlad-multi-scale-fusion-of-locally)

This repository contains code for the CVPR2021 paper "Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition".

The article can be found on [arXiv](https://arxiv.org/abs/2103.01486) and the [official proceedings](https://openaccess.thecvf.com/content/CVPR2021/html/Hausler_Patch-NetVLAD_Multi-Scale_Fusion_of_Locally-Global_Descriptors_for_Place_Recognition_CVPR_2021_paper.html).

<p style="width: 50%; display: block; margin-left: auto; margin-right: auto">
  <img src="./assets/patch_netvlad_method_diagram.png" alt="Patch-NetVLAD method diagram"/>
</p>

## License + attribution/citation

When using code within this repository, please refer to the following [paper](https://openaccess.thecvf.com/content/CVPR2021/html/Hausler_Patch-NetVLAD_Multi-Scale_Fusion_of_Locally-Global_Descriptors_for_Place_Recognition_CVPR_2021_paper.html) in your publications:
```
@inproceedings{hausler2021patchnetvlad,
  title={Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition},
  author={Hausler, Stephen and Garg, Sourav and Xu, Ming and Milford, Michael and Fischer, Tobias},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14141--14152},
  year={2021}
}
```

The code is licensed under the [MIT License](./LICENSE).

## Installation
We recommend using conda (or better: mamba) to install all dependencies. If you have not yet installed conda/mamba, please download and install [`mambaforge`](https://github.com/conda-forge/miniforge).

```bash
# On Linux:
conda create -n patchnetvlad python numpy pytorch-gpu torchvision natsort tqdm opencv pillow scikit-learn faiss matplotlib-base -c conda-forge
# On MacOS (x86 Intel processor):
conda create -n patchnetvlad python numpy pytorch torchvision natsort tqdm opencv pillow scikit-learn faiss matplotlib-base -c conda-forge
# On MacOS (ARM M1/M2 processor):
conda create -n patchnetvlad python numpy pytorch torchvision natsort tqdm opencv pillow scikit-learn faiss matplotlib-base -c conda-forge -c tobiasrobotics
# On Windows:
conda create -n patchnetvlad python numpy natsort tqdm opencv pillow scikit-learn faiss matplotlib-base -c conda-forge
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

conda activate patchnetvlad
```

We provide several pre-trained models and configuration files. The pre-trained models will be downloaded automatically into the `pretrained_models` folder the first time feature extraction is performed.

<details>
  <summary>Alternatively, you can manually download the pre-trained models into a folder of your choice; click to expand if you want to do so.</summary>

  We recommend downloading the models into the `pretrained_models` folder (which is set up in the config files within the `configs` directory):

  ```bash
  # Note: the pre-trained models will be downloaded automatically the first time feature extraction is performed
  # the steps below are optional!

  # You can use the download script which automatically downloads the models:
  python ./download_models.py

  # Manual download:
  cd pretrained_models
  wget -O mapillary_WPCA128.pth.tar https://huggingface.co/TobiasRobotics/Patch-NetVLAD/resolve/main/mapillary_WPCA128.pth.tar?download=true
  wget -O mapillary_WPCA512.pth.tar https://huggingface.co/TobiasRobotics/Patch-NetVLAD/resolve/main/mapillary_WPCA512.pth.tar?download=true
  wget -O mapillary_WPCA4096.pth.tar https://huggingface.co/TobiasRobotics/Patch-NetVLAD/resolve/main/mapillary_WPCA4096.pth.tar?download=true
  wget -O pittsburgh_WPCA128.pth.tar https://huggingface.co/TobiasRobotics/Patch-NetVLAD/resolve/main/pitts_WPCA128.pth.tar?download=true
  wget -O pittsburgh_WPCA512.pth.tar https://huggingface.co/TobiasRobotics/Patch-NetVLAD/resolve/main/pitts_WPCA512.pth.tar?download=true
  wget -O pittsburgh_WPCA4096.pth.tar https://huggingface.co/TobiasRobotics/Patch-NetVLAD/resolve/main/pitts_WPCA4096.pth.tar?download=true
  ```
</details>
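
The checkpoints are regular PyTorch files, so a download can be sanity-checked before use. The sketch below assumes the `.pth.tar` file is a dict with a `state_dict` entry (a common convention, not confirmed here); adjust based on the printed structure:

```python
# Hedged sketch: inspect a downloaded Patch-NetVLAD checkpoint.
# Assumes the .pth.tar file is a dict containing a 'state_dict' entry;
# adjust the key names if the printed structure differs.
import torch

ckpt = torch.load('pretrained_models/mapillary_WPCA512.pth.tar',
                  map_location='cpu')
print(type(ckpt), list(ckpt.keys()) if isinstance(ckpt, dict) else 'not a dict')
state_dict = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt
print(len(state_dict), 'tensors; sample keys:', list(state_dict)[:5])
```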

If you want to use the shortcuts `patchnetvlad-match-two`, `patchnetvlad-feature-match` and `patchnetvlad-feature-extract`, and to use Patch-NetVLAD in a modular way, you also need to run:
```bash
pip3 install --no-deps -e .
```
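
As a rough illustration of the modular use, the sketch below builds a model from the repository's generic model helpers. The module path and function signatures (`get_backend`, `get_model`) are assumptions based on the repository layout; consult the source for the exact API:

```python
# Hedged sketch: build a (Patch-)NetVLAD model after `pip3 install --no-deps -e .`.
# The import path and signatures below are assumptions; check the repository source.
import configparser
from patchnetvlad.models.models_generic import get_backend, get_model  # assumed path

config = configparser.ConfigParser()
config.read('patchnetvlad/configs/performance.ini')

encoder_dim, encoder = get_backend()  # assumed: returns (feature dim, backbone)
model = get_model(encoder, encoder_dim, config['global_params'])
print(model)
```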


## Quick start

### Feature extraction
Replace `performance.ini` with `speed.ini` or `storage.ini` if desired, and adapt the dataset paths. Examples are given for the Pittsburgh30k dataset; simply replace `pitts30k` with `tokyo247` or `nordland` for those datasets.

```bash
python feature_extract.py \
  --config_path patchnetvlad/configs/performance.ini \
  --dataset_file_path=pitts30k_imageNames_index.txt \
  --dataset_root_dir=/path/to/your/pitts/dataset \
  --output_features_dir patchnetvlad/output_features/pitts30k_index
```

Repeat for the query images by replacing `_index` with `_query`. Note that you have to adapt `dataset_root_dir`.

### Feature matching (dataset)
```bash
python feature_match.py \
  --config_path patchnetvlad/configs/performance.ini \
  --dataset_root_dir=/path/to/your/pitts/dataset \
  --query_file_path=pitts30k_imageNames_query.txt \
  --index_file_path=pitts30k_imageNames_index.txt \
  --query_input_features_dir patchnetvlad/output_features/pitts30k_query \
  --index_input_features_dir patchnetvlad/output_features/pitts30k_index \
  --ground_truth_path patchnetvlad/dataset_gt_files/pitts30k_test.npz \
  --result_save_folder patchnetvlad/results/pitts30k
```

Note that providing `ground_truth_path` is optional.

This will create up to three output files in the folder specified by `result_save_folder` (a sketch for parsing the prediction files follows the list):
- `recalls.txt` with plain text recall values (only if `ground_truth_path` is specified)
- `NetVLAD_predictions.txt` with the top 100 reference images for each query image, obtained using "vanilla" NetVLAD, in [Kapture format](https://github.com/naver/kapture)
- `PatchNetVLAD_predictions.txt` with the top 100 reference images from above, re-ranked by Patch-NetVLAD, again in [Kapture format](https://github.com/naver/kapture)
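
If you want to post-process the predictions yourself, the files are plain text. The sketch below assumes the usual Kapture image-pairs layout of comma-separated `query, reference, score` lines with `#` comment headers; adjust the parsing if your files differ:

```python
# Hedged sketch: read Patch-NetVLAD predictions, assuming Kapture-style
# "query, reference, score" lines preceded by '#' comment headers.
from collections import defaultdict

top_matches = defaultdict(list)  # query image -> [(reference image, score), ...]
with open('patchnetvlad/results/pitts30k/PatchNetVLAD_predictions.txt') as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        query, reference, score = [field.strip() for field in line.split(',')]
        top_matches[query].append((reference, float(score)))

for query, refs in list(top_matches.items())[:3]:
    print(query, '->', refs[0])  # best-ranked reference for this query
```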

### Feature matching (two files)
```bash
python match_two.py \
--config_path patchnetvlad/configs/performance.ini \
--first_im_path=patchnetvlad/example_images/tokyo_query.jpg \
--second_im_path=patchnetvlad/example_images/tokyo_db.png
```

We provide the `match_two.py` script which computes the Patch-NetVLAD features for two given images and then determines the local feature matching between these images. While we provide example images, any image pair can be used.

The script prints a score, where a higher score indicates more similar images and a lower score indicates dissimilar images. It also outputs a matching figure showing the patch correspondences (after RANSAC) between the two images; the figure is saved as `results/patchMatchings.png`.

### Training
```bash
python train.py \
--config_path patchnetvlad/configs/train.ini \
--cache_path=/path/to/your/desired/cache/folder \
--save_path=/path/to/your/desired/checkpoint/save/folder \
--dataset_root_dir=/path/to/your/mapillary/dataset
```

Before training, request, download and unzip the [Mapillary Street-level Sequences dataset](https://github.com/mapillary/mapillary_sls).
The provided script trains a new network from scratch; to resume training, add `--resume_path` and set it to the full path (including filename and extension) of an existing checkpoint file. Note that to resume from one of our provided models, you first have to remove the WPCA layers (a sketch is given below).
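
As a hedged illustration of that WPCA removal, the snippet below drops any parameters whose names contain `WPCA` from the checkpoint's `state_dict` and re-saves it. The checkpoint layout and the `WPCA` key naming are assumptions; print the keys of your file first to confirm:

```python
# Hedged sketch: strip WPCA parameters from a provided checkpoint so it can be
# used with --resume_path. Assumes a dict checkpoint with a 'state_dict' entry
# and PCA parameters whose key names contain 'WPCA'; verify against your file.
import torch

ckpt = torch.load('pretrained_models/mapillary_WPCA512.pth.tar', map_location='cpu')
ckpt['state_dict'] = {k: v for k, v in ckpt['state_dict'].items() if 'WPCA' not in k}
torch.save(ckpt, 'pretrained_models/mapillary_noWPCA.pth.tar')
```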

After training a model, PCA can be added using `add_pca.py`:
```bash
python add_pca.py \
--config_path patchnetvlad/configs/train.ini \
--resume_path=full/path/with/extension/to/your/saved/checkpoint \
--dataset_root_dir=/path/to/your/mapillary/dataset
```

This adds an additional checkpoint file to the same folder as `--resume_path`, now including a WPCA layer.

## FAQ
![Patch-NetVLAD qualitative results](./assets/patch_netvlad_qualitative_results.jpg)

### How to Create New Ground Truth Files

We provide three ready-to-go ground truth files in the `dataset_gt_files` folder; for evaluation on other datasets you will need to create your own `.npz` ground truth files.
Each `.npz` file stores three variables: `utmQ` (a numpy array of floats), `utmDb` (a numpy array of floats) and `posDistThr` (a scalar numpy float).

The i-th element of `utmQ` and `utmDb` must correspond to the i-th row of the respective image list file. `posDistThr` is the ground truth tolerance (typically in meters); a consumption sketch follows below.
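
To make the expected contents concrete, here is a sketch of how such a file can be consumed: load the arrays and treat any database image within `posDistThr` of a query's UTM position as a ground-truth positive (this radius-based interpretation is an assumption consistent with the description above):

```python
# Hedged sketch: interpret a ground truth .npz file by finding, for each query,
# all database images within posDistThr (meters) of its UTM position.
import numpy as np
from sklearn.neighbors import NearestNeighbors

gt = np.load('patchnetvlad/dataset_gt_files/pitts30k_test.npz')
utmQ, utmDb, posDistThr = gt['utmQ'], gt['utmDb'], gt['posDistThr']

knn = NearestNeighbors().fit(utmDb)
positives = knn.radius_neighbors(utmQ, radius=posDistThr, return_distance=False)
print(f'query 0 has {len(positives[0])} ground-truth positives')
```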

The following mock example details the steps required to create a new ground truth file (a complete sketch follows the list):
1. Collect GPS data for your query and database traverses and convert it to UTM format. Ensure the data is sampled at the same rate as your images.
2. Choose a suitable `posDistThr` value.
3. Save these variables using NumPy, for example:
`np.savez('dataset_gt_files/my_dataset.npz', utmQ=my_utmQ, utmDb=my_utmDb, posDistThr=my_posDistThr)`
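
Putting the three steps together, a minimal creation script might look like the following. The third-party `utm` package and the example coordinates are assumptions for illustration only:

```python
# Hedged sketch: build a ground truth .npz from per-image GPS fixes.
# Assumes the third-party `utm` package (pip install utm) and that each GPS
# fix lines up with the corresponding row of the image list files.
import numpy as np
import utm

query_gps = [(-27.4698, 153.0251), (-27.4700, 153.0255)]  # example (lat, lon)
db_gps    = [(-27.4697, 153.0250), (-27.4702, 153.0260)]

def to_utm(gps):
    # Keep only easting/northing; assumes all fixes share one UTM zone.
    return np.array([utm.from_latlon(lat, lon)[:2] for lat, lon in gps])

utmQ, utmDb = to_utm(query_gps), to_utm(db_gps)
posDistThr = np.float64(25.0)  # 25 m tolerance, chosen per dataset

np.savez('dataset_gt_files/my_dataset.npz',
         utmQ=utmQ, utmDb=utmDb, posDistThr=posDistThr)
```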

## Acknowledgements
We would like to thank Gustavo Carneiro, Niko Suenderhauf and Mark Zolotas for their valuable comments in preparing this paper. This work received funding from the Australian Government, via grant AUSMURIB000001 associated with ONR MURI grant N00014-19-1-2571. The authors acknowledge continued support from the Queensland University of Technology (QUT) through the Centre for Robotics.

## Related works
Please check out [this collection](https://qcr.github.io/collection/vpr_overview/) of related works on place recognition.

            
