# datasette-faiss
[![PyPI](https://img.shields.io/pypi/v/datasette-faiss.svg)](https://pypi.org/project/datasette-faiss/)
[![Changelog](https://img.shields.io/github/v/release/simonw/datasette-faiss?include_prereleases&label=changelog)](https://github.com/simonw/datasette-faiss/releases)
[![Tests](https://github.com/simonw/datasette-faiss/workflows/Test/badge.svg)](https://github.com/simonw/datasette-faiss/actions?query=workflow%3ATest)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-faiss/blob/main/LICENSE)
Maintain a [FAISS index](https://github.com/facebookresearch/faiss) for specified Datasette tables.
See [Semantic search answers: Q&A against documentation with GPT3 + OpenAI embeddings](https://simonwillison.net/2023/Jan/13/semantic-search-answers/) for background on this project.
## Installation
Install this plugin in the same environment as Datasette.
    datasette install datasette-faiss
## Usage
This plugin creates in-memory FAISS indexes for specified tables on startup, using an `IndexFlatL2` [FAISS index type](https://github.com/facebookresearch/faiss/wiki/Faiss-indexes).
If the tables are modified after the server has started, the indexes will not (yet) pick up those changes.
### Configuration
The tables to be indexed must have `id` and `embedding` columns. The `embedding` column must be a `blob` containing embeddings that are arrays of floating point numbers that have been encoded using the following Python function:
```python
import struct

def encode(vector):
    return struct.pack("f" * len(vector), *vector)
```
You can import that function from this package like so:
```python
from datasette_faiss import encode
```
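For example, encoding a three-dimensional vector produces a 12-byte blob, four bytes per 32-bit float. Here the function is defined inline (identical to the one above) so the snippet runs without the plugin installed:

```python
import struct

def encode(vector):
    # Pack the floats as native-endian 32-bit values
    return struct.pack("f" * len(vector), *vector)

blob = encode([2.4, 4.1, 1.8])
print(len(blob))  # 12 - three 4-byte float32 values
```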
You can specify which tables should have indexes created for them by adding this to `metadata.json`:
```json
{
    "plugins": {
        "datasette-faiss": {
            "tables": [
                ["blog", "embeddings"]
            ]
        }
    }
}
```
Each entry is a two-item array of the database name and the table name.
If you are using `metadata.yml` the configuration should look like this:
```yaml
plugins:
  datasette-faiss:
    tables:
    - ["blog", "embeddings"]
```
### SQL functions
The plugin makes six new SQL functions available within Datasette:
#### faiss_search(database, table, embedding, k)
Returns the `k` nearest neighbors to the `embedding` found in the specified database and table. For example:
```sql
select faiss_search('blog', 'embeddings', (select embedding from embeddings where id = 3), 5)
```
This will return a JSON array of the five IDs of the records in the `embeddings` table in the `blog` database that are closest to the specified embedding. The returned value looks like this:
```json
["1", "1249", "1011", "5", "10"]
```
You can use the SQLite `json_each()` function to turn that into a table-like sequence that you can join against.
Here's an example query that does that:
```sql
with related as (
  select value from json_each(
    faiss_search(
      'blog',
      'embeddings',
      (select embedding from embeddings limit 1),
      5
    )
  )
)
select * from blog_entry, related
where id = value
```
#### faiss_search_with_scores(database, table, embedding, k)
Takes the same arguments as above, but the return value is a JSON array of pairs, each with an ID and a score - something like this:
```json
[
    ["1", 0.0],
    ["1249", 0.21042244136333466],
    ["1011", 0.29391372203826904],
    ["5", 0.29505783319473267],
    ["10", 0.31554925441741943]
]
```
#### faiss_encode(json_vector)
Given a JSON array of floats, returns the binary embedding blob that can be used with the other functions:
```sql
select faiss_encode('[2.4, 4.1, 1.8]')
-- Returns a 12 byte blob
select hex(faiss_encode('[2.4, 4.1, 1.8]'))
-- Returns 9A991940333383406666E63F
```
#### faiss_decode(vector_blob)
The opposite of `faiss_encode()`.
```sql
select faiss_decode(X'9A991940333383406666E63F')
```
Returns:
```json
[2.4000000953674316, 4.099999904632568, 1.7999999523162842]
```
Note that 32-bit floating point precision means the decoded values don't round-trip to exactly the numbers that were encoded.
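That round-trip behavior can be reproduced directly in Python. The `decode` helper below is not part of the plugin's documented API; it simply mirrors the `encode` function shown earlier, assuming 4-byte float32 values:

```python
import struct

def encode(vector):
    return struct.pack("f" * len(vector), *vector)

def decode(blob):
    # Each float32 occupies 4 bytes
    return list(struct.unpack("f" * (len(blob) // 4), blob))

values = decode(encode([2.4, 4.1, 1.8]))
print(values[0])  # close to 2.4, but not exact: float32 precision
```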
#### faiss_agg(id, embedding, compare_embedding, k)
This aggregate function can be used to find the `k` nearest neighbors to `compare_embedding` for each unique value of `id` in the table. For example:
```sql
select faiss_agg(
  id, embedding, (select embedding from embeddings where id = 3), 5
) from embeddings
```
Unlike the `faiss_search()` function, this does not depend on the per-table index that the plugin creates on startup. Instead, a new index is built every time the aggregation function runs.
This means it should only be used on smaller sets of values - above 10,000 or so rows the cost of rebuilding the index on every query is likely to become prohibitive.
The function returns a JSON array of IDs representing the `k` rows with the closest distance scores, like this:
```json
[1324, 344, 5562, 553, 2534]
```
You can use the `json_each()` function to turn that into a table-like sequence that you can join against.
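Conceptually, the aggregate collects every `(id, embedding)` pair it sees, builds a fresh index, and returns the `k` nearest IDs as JSON. Here is a brute-force Python sketch of that behavior using exact squared-L2 distance (the same metric `IndexFlatL2` uses) without the FAISS dependency; `faiss_agg_sketch` and its argument names are illustrative, not part of the plugin:

```python
import json
import struct

def decode(blob):
    return struct.unpack("f" * (len(blob) // 4), blob)

def l2_squared(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def faiss_agg_sketch(rows, compare_blob, k):
    # rows: the (id, embedding_blob) pairs fed to the aggregate, one per row
    target = decode(compare_blob)
    scored = sorted((l2_squared(decode(blob), target), row_id)
                    for row_id, blob in rows)
    return json.dumps([row_id for _, row_id in scored[:k]])
```

A real `IndexFlatL2` performs the same exhaustive comparison, which is why the cost grows linearly with the number of rows aggregated.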
#### faiss_agg_with_scores(id, embedding, compare_embedding, k)
This is similar to the `faiss_agg()` aggregate function but it returns a list of pairs, each with an ID and the corresponding score - something that looks like this (if `k` was 2):
```json
[[2412, 0.25], [1245, 24.25]]
```
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:

    cd datasette-faiss
    python3 -m venv venv
    source venv/bin/activate
Now install the dependencies and test dependencies:
    pip install -e '.[test]'
To run the tests:
    pytest