bm25s

- Name: bm25s
- Version: 0.2.4
- Home page: https://github.com/xhluca/bm25s
- Summary: An ultra-fast implementation of BM25 based on sparse matrices.
- Author: Xing Han Lù
- Requires Python: >=3.8
- Upload time: 2024-11-13 22:47:27

<div align="center">

<h1>BM25-Sparse⚡</h1>

<i>BM25S is an ultrafast implementation of BM25 in pure Python, powered by Scipy sparse matrices</i>

<table>
      <tr>
            <td>
                  <a href="https://github.com/xhluca/bm25s">💻 GitHub</a>
            </td>
            <td>
                  <a href="https://bm25s.github.io">🏠 Homepage</a>
            </td>
            <td>
                  <a href="https://arxiv.org/abs/2407.03618">📝 Technical Report</a>
            </td>
            <td>
                  <a href="https://huggingface.co/blog/xhluca/bm25s">🤗 Blog Post</a>
            </td>
      </tr>
</table>

</div>

Welcome to `bm25s`, a library that implements BM25 in Python, allowing you to rank documents based on a query. BM25 is a widely used ranking function for text retrieval tasks, and is a core component of search services like Elasticsearch.

It is designed to be:
* **Fast**: `bm25s` is implemented in pure Python and leverages Scipy sparse matrices to store eagerly computed scores for all document tokens. This allows extremely fast scoring at query time, improving performance over popular libraries by orders of magnitude (see benchmarks below).
* **Simple**: `bm25s` is designed to be easy to use and understand. You can install it with pip and start using it in minutes. There are no dependencies on Java or Pytorch - all you need is Scipy and Numpy, plus optional lightweight dependencies for stemming.

Below, we compare `bm25s` with Elasticsearch in terms of speedup over `rank-bm25`, the most popular Python implementation of BM25. We measure the throughput in queries per second (QPS) on a few popular datasets from [BEIR](https://github.com/beir-cellar/beir) in a single-threaded setting.

![comparison](assets/comparison.png)

<details>
<summary>Click to show citation</summary><br>

```
@misc{bm25s,
      title={BM25S: Orders of magnitude faster lexical search via eager sparse scoring}, 
      author={Xing Han Lù},
      year={2024},
      eprint={2407.03618},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2407.03618}, 
}
```

</details>

> [!IMPORTANT]
> New in version 0.2.0: We are rolling out support for a numba backend, which gives around [2x speedup for larger datasets](https://github.com/xhluca/bm25-benchmarks?tab=readme-ov-file#queries-per-second)! Learn more about it and share your thoughts in [the version 0.2.0 release thread](https://github.com/xhluca/bm25s/discussions/58).
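
Below is a minimal sketch of opting into the numba backend, assuming a `backend="numba"` constructor argument as described in the release thread linked above; it requires `numba` to be installed, and the thread remains the reference for the exact, up-to-date usage.

```python
import bm25s

corpus = ["a cat purrs", "a dog barks", "a bird sings"]
corpus_tokens = bm25s.tokenize(corpus, stopwords="en")

# Assumption: the `backend` argument introduced with the v0.2.0 numba support
# (see the release thread above); requires `pip install numba`.
retriever = bm25s.BM25(backend="numba")
retriever.index(corpus_tokens)

query_tokens = bm25s.tokenize("which animal purrs?", stopwords="en")
results, scores = retriever.retrieve(query_tokens, corpus=corpus, k=2)
```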

## Installation

You can install `bm25s` with pip:

```bash
pip install bm25s
```

You can also install the recommended (but optional) dependencies for stemming and faster top-k selection:

```bash
# Install all extra dependencies
pip install bm25s[full]

# If you want to use stemming for better results, you can install a stemmer
pip install PyStemmer

# To speed up the top-k selection process, you can install `jax`
pip install jax[cpu]
```

## Quickstart

Here is a simple example of how to use `bm25s`:

```python
import bm25s
import Stemmer  # optional: for stemming

# Create your corpus here
corpus = [
    "a cat is a feline and likes to purr",
    "a dog is the human's best friend and loves to play",
    "a bird is a beautiful animal that can fly",
    "a fish is a creature that lives in water and swims",
]

# optional: create a stemmer
stemmer = Stemmer.Stemmer("english")

# Tokenize the corpus and only keep the ids (faster and saves memory)
corpus_tokens = bm25s.tokenize(corpus, stopwords="en", stemmer=stemmer)

# Create the BM25 model and index the corpus
retriever = bm25s.BM25()
retriever.index(corpus_tokens)

# Query the corpus
query = "does the fish purr like a cat?"
query_tokens = bm25s.tokenize(query, stemmer=stemmer)

# Get top-k results as a tuple of (documents, scores), since we pass `corpus`. Both are arrays of shape (n_queries, k)
results, scores = retriever.retrieve(query_tokens, corpus=corpus, k=2)

for i in range(results.shape[1]):
    doc, score = results[0, i], scores[0, i]
    print(f"Rank {i+1} (score: {score:.2f}): {doc}")

# You can save the arrays to a directory...
retriever.save("animal_index_bm25")

# You can save the corpus along with the model
retriever.save("animal_index_bm25", corpus=corpus)

# ...and load them when you need them
import bm25s
reloaded_retriever = bm25s.BM25.load("animal_index_bm25", load_corpus=True)
# set load_corpus=False if you don't need the corpus
```

For an example that shows how to quickly index a 2M-documents corpus (Natural Questions), check out [`examples/index_nq.py`](examples/index_nq.py).

## Flexibility

`bm25s` provides a flexible API that allows you to customize the BM25 model and the tokenization process. Here are some of the options you can use:

```python
# You can provide a list of queries instead of a single query
queries = ["What is a cat?", "is the bird a dog?"]

# Provide your own stopwords list if you don't like the default one
stopwords = ["a", "the"]

# For stemming, use any function that is callable on each word list
stemmer_fn = lambda lst: [word for word in lst]  # identity function, shown here as a placeholder

# Tokenize the queries
query_token_ids = bm25s.tokenize(queries, stopwords=stopwords, stemmer=stemmer_fn)

# If you want the tokenizer to return strings instead of token ids, you can do this
query_token_strs = bm25s.tokenize(queries, return_ids=False)

# You can use a different corpus for retrieval, e.g., titles instead of full docs
titles = ["About Cat", "About Dog", "About Bird", "About Fish"]

# You can also choose to only return the documents and omit the scores
results = retriever.retrieve(query_token_ids, corpus=titles, k=2, return_as="documents")

# The documents are returned as a numpy array of shape (n_queries, k)
for i in range(results.shape[1]):
    print(f"Rank {i+1}: {results[0, i]}")
```

### Memory Efficient Retrieval

`bm25s` is designed to be memory efficient. You can use the `mmap` option to load the BM25 index as a memory-mapped file, which lets you query the index without loading it fully into memory. This is useful when you have a large index and want to save memory:

```python
# Create a BM25 index
# ...

# let's say you have a large corpus
corpus = [
    "a very long document that is very long and has many words",
    "another long document that is long and has many words",
    # ...
]
# Save the BM25 index to a file
retriever.save("bm25s_very_big_index", corpus=corpus)

# Load the BM25 index as a memory-mapped file, which is memory efficient
# and reduces the overhead of loading the full index into memory
retriever = bm25s.BM25.load("bm25s_very_big_index", mmap=True)
```

For an example of how to retrieve using the `mmap=True` mode, check out [`examples/retrieve_nq.py`](examples/retrieve_nq.py).


## Tokenization

In addition to using the simple function `bm25s.tokenize`, you can also use the `Tokenizer` class to customize the tokenization process. This is useful when you want to use a different tokenizer, or when you want to use a different tokenization process for queries and documents:

```python
from bm25s.tokenization import Tokenizer

corpus = [
      "a cat is a feline and likes to purr",
      "a dog is the human's best friend and loves to play",
      "a bird is a beautiful animal that can fly",
      "a fish is a creature that lives in water and swims",
]

# Pick your favorite stemmer and stopwords, and pass them to the Tokenizer
stemmer = None
stopwords = ["is"]
splitter = lambda x: x.split() # function or regex pattern
# Create a tokenizer
tokenizer = Tokenizer(
      stemmer=stemmer, stopwords=stopwords, splitter=splitter
)

corpus_tokens = tokenizer.tokenize(corpus)

# let's see what the tokens look like
print("tokens:", corpus_tokens)
print("vocab:", tokenizer.get_vocab_dict())

# note: the vocab dict will either be a dict of `word -> id` if you don't have a stemmer, or a dict of `stemmed word -> stem id` if you do.
# You can save the vocab. It's fine to use the same dir as your index if the filenames don't conflict.
tokenizer.save_vocab(save_dir="bm25s_very_big_index")

# loading:
new_tokenizer = Tokenizer(stemmer=stemmer, stopwords=[], splitter=splitter)
new_tokenizer.load_vocab("bm25s_very_big_index")
print("vocab reloaded:", new_tokenizer.get_vocab_dict())

# the same can be done for stopwords
print("stopwords before reload:", new_tokenizer.stopwords)
tokenizer.save_stopwords(save_dir="bm25s_very_big_index")
new_tokenizer.load_stopwords("bm25s_very_big_index")
print("stopwords reloaded:", new_tokenizer.stopwords)
```

You can find advanced examples in [examples/tokenizer_class.py](examples/tokenizer_class.py), including how to:
* Pass a stemmer, stopwords, and splitter function/regex pattern
* Control whether vocabulary is updated by `tokenizer.tokenize` calls or not (by default, it will only be updated during the first call)
* Reset the tokenizer to its initial state with `tokenizer.reset_vocab()`
* Use the tokenizer in generator mode to save memory by `yield`ing one document at a time.
* Pass different outputs of the tokenizer to the `BM25.retrieve` function (a sketch is shown below).
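
As a rough sketch of the last point, reusing only the calls shown earlier and assuming default settings; the exact output formats accepted by `retrieve` are demonstrated in `examples/tokenizer_class.py`:

```python
import bm25s
from bm25s.tokenization import Tokenizer

corpus = [
    "a cat is a feline and likes to purr",
    "a dog is the human's best friend and loves to play",
]

# Build the tokenizer and index as shown above
tokenizer = Tokenizer(stemmer=None, stopwords=["is"], splitter=lambda x: x.split())
corpus_tokens = tokenizer.tokenize(corpus)

retriever = bm25s.BM25()
retriever.index(corpus_tokens)

# Tokenize the queries with the same tokenizer; by default the vocabulary is
# only updated on the first call, so query tokens map onto the indexed vocab.
queries = ["does the cat purr?", "who is the dog's best friend?"]
query_tokens = tokenizer.tokenize(queries)

# Pass the tokenizer's output directly to retrieve()
results, scores = retriever.retrieve(query_tokens, corpus=corpus, k=2)
print(results, scores)
```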



## Variants

You can use the following variants of BM25 in `bm25s` (see [Kamphuis et al. 2020](https://link.springer.com/chapter/10.1007/978-3-030-45442-5_4) for more details; the baseline formulation these variants modify is recalled below the list):

* Original implementation (`method="robertson"`) - we set `idf>=0` to avoid negatives
* ATIRE (`method="atire"`)
* BM25L (`method="bm25l"`)
* BM25+ (`method="bm25+"`)
* Lucene (`method="lucene"`)
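
For orientation, the classic Robertson-style formulation that these variants build on scores a document $d$ for a query $q$ as follows (textbook form, not necessarily the library's exact internal arithmetic):

$$
\text{score}(q, d) = \sum_{t \in q} \mathrm{IDF}(t) \cdot \frac{f(t, d)\,(k_1 + 1)}{f(t, d) + k_1\left(1 - b + b\,\frac{|d|}{\mathrm{avgdl}}\right)}
$$

where $f(t, d)$ is the frequency of term $t$ in document $d$, $|d|$ is the document length, and $\mathrm{avgdl}$ is the average document length; the variants above differ mainly in how $\mathrm{IDF}(t)$ and the length normalization are defined.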

By default, `bm25s` uses `method="lucene"`, which is Lucene's BM25 implementation (exact version). You can change the method by passing the `method` argument to the `BM25` constructor:

```python
# The IR book recommends default values of k1 between 1.2 and 2.0, and b=0.75
retriever = bm25s.BM25(method="robertson", k1=1.5, b=0.75)

# For BM25+ and BM25L, you need a delta parameter (default is 0.5)
retriever = bm25s.BM25(method="bm25+", delta=1.5)

# You can also choose a different "method" for idf, while keeping the default for the rest
# for example, this is equivalent to rank-bm25 when `epsilon=0`
retriever = bm25s.BM25(method="atire", idf_method="robertson")
# and this is equivalent to bm25-pt
retriever = bm25s.BM25(method="atire", idf_method="lucene")
```

## Hugging Face Integration

`bm25s` works naturally with Hugging Face's `huggingface_hub`, allowing you to save to and load from the model hub. This is useful for sharing BM25 indices and using community models.

First, make sure you have a valid [access token for the Hugging Face model hub](https://huggingface.co/settings/tokens). This is needed to save models to the hub, or to load private models. Once you have created it, you can add it to your environment variables (e.g. in your `.bashrc` or `.zshrc`):

```bash
export HF_TOKEN="hf_..."
```

Now, let's install the `huggingface_hub` library:

```bash
pip install huggingface_hub
```

Let's see how to use `BM25HF.save_to_hub` to save a BM25 index to the Hugging Face model hub:

```python
import os
import bm25s
from bm25s.hf import BM25HF

# Create a BM25 index
retriever = BM25HF()
# Create your corpus here
corpus = [
    "a cat is a feline and likes to purr",
    "a dog is the human's best friend and loves to play",
    "a bird is a beautiful animal that can fly",
    "a fish is a creature that lives in water and swims",
]
corpus_tokens = bm25s.tokenize(corpus)
retriever.index(corpus_tokens)

# Set your username and token
user = "your-username"
token = os.environ["HF_TOKEN"]
retriever.save_to_hub(f"{user}/bm25s-animals", token=token, corpus=corpus)
# You can also save it publicly with private=False
```

Then, you can use the following code to load a BM25 index from the Hugging Face model hub:

```python
import bm25s
from bm25s.hf import BM25HF

# Load a BM25 index from the Hugging Face model hub
user = "your-username"
retriever = BM25HF.load_from_hub(f"{user}/bm25s-animals")

# you can specify revision and load_corpus=True if needed
retriever = BM25HF.load_from_hub(
    f"{user}/bm25s-animals", revision="main", load_corpus=True
)

# if you want low memory usage, you can load as a memory map with `mmap=True`
retriever = BM25HF.load_from_hub(
    f"{user}/bm25s-animals", load_corpus=True, mmap=True
)

# Query the corpus
query = "does the fish purr like a cat?"

# Tokenize the query
query_tokens = bm25s.tokenize(query)

# Get top-k results as a tuple of (doc ids, scores). Both are arrays of shape (n_queries, k)
results, scores = retriever.retrieve(query_tokens, k=2)
```

For a complete example, check out:
* [`examples/index_to_hf.py`](examples/index_to_hf.py) for indexing a corpus and uploading it to the Hugging Face Hub
* [`examples/retrieve_from_hf.py`](examples/retrieve_from_hf.py) for loading an index alongside its corpus from the Hugging Face Hub and querying it.

## Comparison

Here are some benchmarks comparing `bm25s` to other popular BM25 implementations. We compare the following implementations:
* `bm25s`: Our implementation of BM25 in pure Python, powered by Scipy sparse matrices.
* `rank-bm25` (`Rank`): A popular Python implementation of BM25.
* `bm25_pt` (`PT`): A Pytorch implementation of BM25.
* `elasticsearch` (`ES`): Elasticsearch with BM25 configurations.

`OOM` means the implementation ran out of memory during the benchmark.

### Throughput (Queries per second)

We compare the throughput of the BM25 implementations on various datasets. The throughput is measured in queries per second (QPS), on a single-threaded Intel Xeon CPU @ 2.70GHz (found on Kaggle). For BM25S, we take the average of 10 runs. Instances exceeding 60 queries/s are in **bold**.

| Dataset          |       BM25S | Elastic |    BM25-PT |  Rank-BM25 |
| :--------------- | ----------: | ------: | ---------: | ---------: |
| arguana          |  **573.91** |   13.67 | **110.51** |          2 |
| climate-fever    |       13.09 |    4.02 |        OOM |       0.03 |
| cqadupstack      |  **170.91** |   13.38 |        OOM |       0.77 |
| dbpedia-entity   |       13.44 |   10.68 |        OOM |       0.11 |
| fever            |       20.19 |    7.45 |        OOM |       0.06 |
| fiqa             |  **507.03** |   16.96 |      20.52 |       4.46 |
| hotpotqa         |       20.88 |    7.11 |        OOM |       0.04 |
| msmarco          |        12.2 |   11.88 |        OOM |       0.07 |
| nfcorpus         | **1196.16** |   45.84 |     256.67 | **224.66** |
| nq               |       41.85 |   12.16 |        OOM |        0.1 |
| quora            |  **183.53** |    21.8 |       6.49 |       1.18 |
| scidocs          |  **767.05** |   17.93 |      41.34 |       9.01 |
| scifact          |  **952.92** |   20.81 |  **184.3** |       47.6 |
| trec-covid       |   **85.64** |    7.34 |       3.73 |       1.48 |
| webis-touche2020 |   **60.59** |   13.53 |        OOM |        1.1 |

More detailed benchmarks can be found in the [bm25-benchmarks repo](https://github.com/xhluca/bm25-benchmarks).

### Disk usage

`bm25s` is designed to be lightweight. The total disk usage of the package is minimal: it only requires wheels for `numpy` (18MB) and `scipy` (37MB), and the package itself is less than 100KB. After installation, the full virtual environment takes more space than `rank-bm25` but less than `pyserini` and `bm25_pt`:

| Package           | Disk Usage |
| ----------------- | ---------- |
| venv (no package) | 45MB       |
| `rank-bm25`       | 99MB       |
| `bm25s` (ours)    | 479MB      |
| `bm25_pt`         | 5346MB     |
| `pyserini`        | 6976MB     |
| `elastic`         | 1183MB     |

<details>
<summary>Show Details</summary>

The disk usage of the virtual environments is calculated using the following command:

```
$ du -s *env-* --block-size=1MB
6976    conda-env-pyserini
5346    venv-bm25-pt
479     venv-bm25s
45      venv-empty
99      venv-rank-bm25
```

For `pyserini`, we use the [recommended installation](https://github.com/castorini/pyserini/blob/master/docs/installation.md) with a conda environment to account for Java dependencies.

</details>

### Optimized RAM usage

`bm25s` can save considerable memory through *memory-mapping*, which allows the index to be stored on disk and loaded on demand.

Using [`examples/index_nq.py`](examples/index_nq.py) to create an index, we can then retrieve with:
* `examples/retrieve_nq.py`: set `mmap=False` in the `main` function to load the index in memory, or `mmap=True` to load it as a memory-mapped file.
* `examples/retrieve_nq_with_batching.py`: takes this a step further by batching the retrieval process, which allows the index to be reloaded after each batch (see *Mmap+Reload* below; a sketch of the pattern follows the tables). This is useful when you have a large index and want to save memory.

We show the following results on the NQ dataset (2M+ documents, 100M+ tokens):

| Method        | Load Index (s) | Retrieval (s) | RAM post-index (GB) | RAM post-retrieve (GB) |
| ------------- | -------------- | ------------- | ------------------- | ---------------------- |
| In-memory     | 8.61           | 21.09         | 4.36                | 4.45                   |
| Memory-mapped | 0.53           | 20.22         | 0.49                | 2.16                   |
| Mmap+Reload   | 0.48           | 20.96         | 0.49                | 0.70                   |

We can see that memory-mapping the index allows for a significant reduction in memory usage, with comparable retrieval times. 

Similarly, for MSMARCO (8M+ documents, 300M+ tokens), we show the following results (running on the validation set), although the retrieval did not complete for the in-memory case:

| Method        | Load Index (s) | Retrieval (s) | RAM post-index (GB) | RAM post-retrieve (GB) |
| ------------- | -------------- | ------------- | ------------------- | ---------------------- |
| In-memory     | 25.71          | 93.66         | 10.21               | 10.34                  |
| Memory-mapped | 1.24           | 90.41         | 1.14                | 4.88                   |
| Mmap+Reload   | 1.17           | 97.89         | 1.14                | 1.38                   |
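
Here is a rough sketch of the *Mmap+Reload* pattern, using only the calls shown earlier in this README; the batch size and index directory are illustrative, and `examples/retrieve_nq_with_batching.py` remains the reference implementation:

```python
import bm25s

index_dir = "bm25s_very_big_index"  # an index saved earlier with retriever.save(...)
queries = ["what is a cat?", "what is a dog?", "what is a bird?", "what is a fish?"]
batch_size = 2  # illustrative; use a much larger batch in practice

all_results = []
for start in range(0, len(queries), batch_size):
    # Reload the memory-mapped index for each batch so memory accumulated
    # during the previous batch can be released (the "Mmap+Reload" rows above)
    retriever = bm25s.BM25.load(index_dir, mmap=True, load_corpus=True)
    batch_tokens = bm25s.tokenize(queries[start:start + batch_size], stopwords="en")
    results, scores = retriever.retrieve(batch_tokens, k=5)
    all_results.append(results)
```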

## Acknowledgement

* The central idea behind the scoring mechanism in this library is originally from [bm25_pt](https://github.com/jxmorris12/bm25_pt), which was a major inspiration to this project.
* The API of the [`BM25` class](https://github.com/xhluca/bm25s/blob/main/bm25s/__init__.py) is also heavily inspired by the design of BM25-pt, as well as that of rank-bm25.
* The multilingual stopwords are sourced from the [NLTK stopwords lists](https://github.com/nltk/nltk/blob/96ee715997e1c8d9148b6d8e1b32f412f31c7ff7/nltk/corpus/__init__.py#L315).
* The numba implementation is inspired by implementations originally proposed by [baguetter](https://github.com/mixedbread-ai/baguetter) and [retriv](https://github.com/AmenRa/retriv).
* The function `bm25s.utils.beir.evaluate` is taken from the [BEIR library](https://github.com/beir-cellar/beir). It follows the same license as the BEIR library, which is Apache 2.0.

## Citation

If you use `bm25s` in your work, please use the following bibtex:

```
@misc{bm25s,
      title={BM25S: Orders of magnitude faster lexical search via eager sparse scoring}, 
      author={Xing Han Lù},
      year={2024},
      eprint={2407.03618},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2407.03618}, 
}
```

            
