# AutoFaiss

[![pypi](https://img.shields.io/pypi/v/autofaiss.svg)](https://pypi.python.org/pypi/autofaiss)
[![ci](https://github.com/criteo/autofaiss/workflows/Continuous%20integration/badge.svg)](https://github.com/criteo/autofaiss/actions?query=workflow%3A%22Continuous+integration%22)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/criteo/autofaiss/blob/master/docs/notebooks/autofaiss_getting_started.ipynb)

**Automatically create Faiss KNN indices with optimal similarity-search parameters.**

It selects the best indexing parameters to achieve the highest recall given memory and query-speed constraints.

## Docs, posts, and notebooks

Using [faiss](https://github.com/facebookresearch/faiss) efficient indices, binary search, and heuristics, Autofaiss makes it possible to *automatically* build a large (200 million vectors, 1TB) KNN index in 3 hours, using little memory (15 GB) and with millisecond latency (~10 ms).

Get started by running this [colab notebook](https://colab.research.google.com/github/criteo/autofaiss/blob/master/docs/notebooks/autofaiss_getting_started.ipynb), then check the [full documentation](https://criteo.github.io/autofaiss).  
Get some insights on the automatic index selection function with this [colab notebook](https://colab.research.google.com/github/criteo/autofaiss/blob/master/docs/notebooks/autofaiss_index_selection_demo.ipynb).

Then you can check our [multimodal search example](https://colab.research.google.com/github/criteo/autofaiss/blob/master/docs/notebooks/autofaiss_multimodal_search.ipynb) (using the OpenAI CLIP model).

Read the [medium post](https://medium.com/criteo-engineering/introducing-autofaiss-an-automatic-k-nearest-neighbor-indexing-library-at-scale-c90842005a11) to learn more about it!

## Installation

To install, run `pip install autofaiss`.

It's probably best to create a virtual env:
``` bash
python -m venv .venv/autofaiss_env
source .venv/autofaiss_env/bin/activate
pip install -U pip
pip install autofaiss
```

## Using autofaiss in python

If you want to use autofaiss directly from Python, check the [API documentation](https://criteo.github.io/autofaiss/API/api.html) and the [examples](examples).

In particular, you can use autofaiss with in-memory or on-disk embedding collections:

### Using in-memory numpy arrays

If you only have a few embeddings, you can use autofaiss with in-memory numpy arrays:

```python
from autofaiss import build_index
import numpy as np

# faiss expects float32 vectors
embeddings = np.float32(np.random.rand(100, 512))
index, index_infos = build_index(embeddings, save_on_disk=False)

# retrieve the single nearest neighbor of a random query vector
query = np.float32(np.random.rand(1, 512))
_, I = index.search(query, 1)
print(I)  # index of the nearest neighbor
```

### Using numpy arrays saved as .npy files

If you have many embedding files, it is preferable to save them on disk as .npy files and then use autofaiss like this:

```python
from autofaiss import build_index

# build the index from all .npy files under the "embeddings" directory
build_index(embeddings="embeddings", index_path="my_index_folder/knn.index",
            index_infos_path="my_index_folder/index_infos.json", max_index_memory_usage="4G",
            current_memory_available="4G")
```

## Memory-mapped indices

Faiss makes it possible to use memory-mapped indices. This is useful when you don't need fast search times (above 50 ms is acceptable)
and still want to reduce the memory footprint to the minimum.

We provide the `should_be_memory_mappable` boolean in the `build_index` function to generate memory-mapped indices only.
Note: only IVF indices can be memory-mapped in faiss, so the output index will be an IVF index.
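For example, a minimal sketch reusing the paths from the examples above:

```python
from autofaiss import build_index

# force the selection of an index with an on-disk memory-mapping implementation
build_index(embeddings="embeddings", index_path="my_index_folder/knn.index",
            index_infos_path="my_index_folder/index_infos.json",
            should_be_memory_mappable=True)
```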

To load an index in memory mapping mode, use the following code:
```python
import faiss
index = faiss.read_index("my_index_folder/knn.index", faiss.IO_FLAG_MMAP | faiss.IO_FLAG_READ_ONLY)
```

You can have a look at the [example](examples/memory_mapped_autofaiss.py) to see how to use it.

Technical note: you can create a direct map on IVF indices with `index.make_direct_map()` (or directly from the
`build_index` function by passing the `make_direct_map` boolean). Doing so greatly speeds up
the `.reconstruct()` method, which returns the value of one of your vectors given its rank.
However, this mapping is stored in RAM, so we advise you to create your own direct map in a memory-mapped
numpy array and then call `.reconstruct_from_offset()` with your custom direct map.
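For illustration, here is a minimal sketch of the in-RAM approach, assuming the saved index is an IVF index (as produced with `should_be_memory_mappable=True`):

```python
import faiss

index = faiss.read_index("my_index_folder/knn.index")

# build the id -> vector mapping in RAM (IVF indices only)
index.make_direct_map()

# reconstruct the stored vector of rank 42
vector = index.reconstruct(42)
print(vector.shape)
```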

## Using autofaiss with pyspark

Autofaiss allows you to build indices with Spark for the following two use cases:
- building one big index in a distributed way;
- given a partitioned dataset of embeddings, building one index per partition, in parallel and in a distributed way.

Prerequisites:

1. Install pyspark: `pip install pyspark`.
2. Prepare your embeddings files (partitioned or not).
3. Create a Spark session before calling autofaiss, as in the sketch below. If no Spark session exists, a default session will be created with a minimal configuration.
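A minimal sketch of such a distributed build (paths are illustrative; `distributed` and `temporary_indices_folder` are described in the table below):

```python
from pyspark.sql import SparkSession

from autofaiss import build_index

# create the Spark session that autofaiss will pick up
spark = SparkSession.builder.appName("autofaiss-distributed").getOrCreate()

build_index(
    embeddings="hdfs://root/path/to/embeddings",  # illustrative path
    index_path="my_index_folder/knn.index",
    index_infos_path="my_index_folder/index_infos.json",
    distributed="pyspark",
    temporary_indices_folder="hdfs://root/tmp/distributed_autofaiss_indices",
)
```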

### Creating a big index in a distributed way

See [distributed_autofaiss.md](docs/distributed/distributed_autofaiss.md) for a complete guide.

It is possible to generate an index that would require more memory than what's available. To do so, you can control the number of index splits that will compose your index with `nb_indices_to_keep`.
For example, if `nb_indices_to_keep` is 10 and `index_path` is `knn.index`, the final index will be decomposed into 10 smaller indexes:
 - `knn.index01`
 - `knn.index02`
 - `knn.index03`
 - ...
 - `knn.index10`

A [concrete example](examples/distributed_autofaiss_n_indices.py) shows how to produce N indices and how to use them.
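In addition, here is a hedged sketch of querying such splits with plain faiss; it assumes the splits store consecutive, disjoint chunks of the embeddings, so local ids are shifted by a running offset (the linked example remains the reference):

```python
import glob

import faiss
import numpy as np

query = np.float32(np.random.rand(1, 512))  # illustrative 512-d query
k = 5

results = []
offset = 0  # assumption: splits hold consecutive, disjoint chunks
for path in sorted(glob.glob("knn.index*")):
    index = faiss.read_index(path)
    distances, ids = index.search(query, k)
    results += [(d, i + offset) for d, i in zip(distances[0], ids[0]) if i != -1]
    offset += index.ntotal

# with an inner-product metric, higher scores are better
results.sort(key=lambda r: -r[0])
print(results[:k])
```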

### Creating partitioned indexes

Given a partitioned dataset of embeddings, it is possible to create one index per partition by calling the method `build_partitioned_indexes`.

See this [example](examples/partitioned_indexes.py) that shows how to create partitioned indexes.
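A hedged sketch of a call; the parameter names `partitions` and `output_root_dir` are assumptions made for illustration, not a documented signature (see the linked example for real usage):

```python
from autofaiss import build_partitioned_indexes

# hypothetical call: one index is built per embeddings partition
# (parameter names are assumptions; see examples/partitioned_indexes.py)
metrics = build_partitioned_indexes(
    partitions=["dataset/partition=1", "dataset/partition=2"],
    output_root_dir="partitioned_indexes",
)
```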

## Using the command line

Create embeddings
``` python
import os
import numpy as np
embeddings = np.random.rand(1000, 100)  # 1000 vectors of dimension 100
os.mkdir("embeddings")
np.save("embeddings/part1.npy", embeddings)
os.mkdir("my_index_folder")
```

Generate a KNN index
``` bash
autofaiss build_index --embeddings="embeddings" --index_path="my_index_folder/knn.index" --index_infos_path="my_index_folder/index_infos.json" --metric_type="ip"
```

Try the index
``` python
import faiss
import glob
import numpy as np

my_index = faiss.read_index(glob.glob("my_index_folder/*.index")[0])

query_vector = np.float32(np.random.rand(1, 100))
k = 5
distances, indices = my_index.search(query_vector, k)

print(list(zip(distances[0], indices[0])))
```

## How are indices selected?

To better understand why indices are selected and what their characteristics are, check the [index selection demo](https://colab.research.google.com/github/criteo/autofaiss/blob/master/docs/notebooks/autofaiss_index_selection_demo.ipynb).

## Command quick overview
Quick description of the `autofaiss build_index` command:

- *embeddings* -> Source path of the embeddings in numpy.
- *index_path* -> Destination path of the created index.
- *index_infos_path* -> Destination path of the index infos.
- *save_on_disk* -> Whether to save the index on disk.
- *metric_type* -> Similarity distance for the queries.

- *index_key* -> (optional) Description of the index to build.
- *index_param* -> (optional) Hyperparameters of the index.
- *current_memory_available* -> (optional) Amount of memory available on the machine.
- *use_gpu* -> (optional) Whether to use a GPU (not tested).

## Command details

The `autofaiss build_index` command takes the following parameters:

| Flag available               |  Default     | Description                                                                                                                                                                                                                                               |
|------------------------------|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| --embeddings                 | required     | Directory (or list of directories) containing your .npy embedding files. If there are several files, they are read in lexicographical order. This can be a local path or a path on another filesystem, e.g. `hdfs://root/...` or `s3://...` |
| --index_path                 | required     | Destination path of the faiss index on local machine.                                                                                                                                                                                                     |
| --index_infos_path           | required     | Destination path of the faiss index infos on local machine.                                                                                                                                                                                                     |
| --save_on_disk               | required     | Save the index on the disk.                                                                                                                                                                                                     |
| --file_format                | "npy"        | File format of the embedding files. Can be either `npy` for numpy matrix files or `parquet` for parquet-serialized tables. |
| --embedding_column_name      | "embeddings" | Only necessary when file_format=`parquet`. In this case, this is the name of the column containing the embeddings (one vector per row). |
| --id_columns                 | None         | Can only be used when file_format=`parquet`. In this case, these are the names of the columns containing the ids of the vectors; separate files will be generated that map these ids to positions in the KNN index. |
| --ids_path                   | None         | Only useful when id_columns is not None and file_format=`parquet`. Path (in any filesystem) where the id -> vector-position mapping files will be stored in parquet format. |
| --metric_type                |   "ip"       | (Optional) Similarity function used for queries: "ip" for inner product, "l2" for euclidean distance. |
| --max_index_memory_usage     |  "32GB"      | (Optional) Maximum size in GB of the created index; this bound is strict. |
| --current_memory_available   |  "32GB"      | (Optional) Memory available (in GB) on the machine creating the index; having more memory helps because it reduces swapping between RAM and disk. |
| --max_index_query_time_ms    |    10        | (Optional) Bound on the query time for KNN search; this bound is approximate. |
| --min_nearest_neighbors_to_retrieve |    20        | (Optional) Minimum number of nearest neighbors to retrieve when querying the index. Only used during the index hyperparameter fine-tuning step; it is not taken into account when selecting the indexing algorithm. This parameter takes priority over the max_index_query_time_ms constraint. |
| --index_key                  |   None       | (Optional) If present, the Faiss index will be built using this description string in the index_factory; more details in the [Faiss documentation](https://github.com/facebookresearch/faiss/wiki/The-index-factory). |
| --index_param                |   None       | (Optional) If present, the Faiss index will be set using this description string of hyperparameters; more details in the [Faiss documentation](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning). |
| --use_gpu                    |   False      | (Optional) Experimental: GPU training can be faster, but this feature has not been tested so far. |
| --nb_cores                   |   None       | (Optional) The number of cores to use; all cores are used by default. |
| --make_direct_map            |   False      | (Optional) Create a direct map allowing reconstruction of embeddings. Only needed for IVF indices. Note that this might increase the RAM usage (approximately 8GB for 1 billion embeddings). |
| --should_be_memory_mappable  |   False      | (Optional) Boolean used to force the index to be selected among indices having an on-disk memory-mapping implementation.                                                                                                                                             |
| --distributed                |   None       | (Optional) If "pyspark", create the index using pyspark. Otherwise, the index is created on your local machine.|
| --temporary_indices_folder   |   "hdfs://root/tmp/distributed_autofaiss_indices"       | (Optional) Folder to save the temporary small indices, only used when distributed = "pyspark" |
| --verbose                    |   20         | (Optional) Set verbosity of logging output: DEBUG=10, INFO=20, WARN=30, ERROR=40, CRITICAL=50 |
| --nb_indices_to_keep         |   1          | (Optional) Number of indices to keep at most when distributed is "pyspark". |
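For example, a hypothetical invocation for parquet inputs with an id column (paths and column names are illustrative):

``` bash
autofaiss build_index --embeddings="embeddings" --index_path="my_index_folder/knn.index" --index_infos_path="my_index_folder/index_infos.json" --file_format="parquet" --embedding_column_name="embedding" --id_columns="id" --ids_path="my_index_folder/ids"
```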

## Install from source

First, create a virtual env and install dependencies:
``` bash
python3 -m venv .env
source .env/bin/activate
make install
```


To run a specific test: `python -m pytest -x -s -v tests -k "test_get_optimal_hyperparameters"`