SPLADERunner

Name: SPLADERunner
Version: 0.1.6
Home page: https://github.com/PrithivirajDamodaran/SPLADERunner
Summary: Ultralight and fast wrapper for the independent implementation of SPLADE++ models for your search & retrieval pipelines. Models and library created by Prithivi Da; for PRs and collaboration, check out the readme.
Upload time: 2024-08-24 11:39:26
Author: Prithivi Da
Requires Python: >=3.6
License: Apache 2.0
# SPLADERunner

## 1. What is it?

>The title is a tribute to the original Blade Runners: Harrison Ford, and Philip K. Dick, author of "Do Androids Dream of Electric Sheep?"

An ultra-lite & super-fast Python wrapper for the [independent implementation of SPLADE++ models](https://huggingface.co/prithivida/Splade_PP_en_v1) for your search & retrieval pipelines. Based on Naver's paper [From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective](https://arxiv.org/pdf/2205.04733.pdf) and Google's [SparseEmbed](https://storage.googleapis.com/gweb-research2023-media/pubtools/pdf/79f16d3b3b948706d191a7fe6dd02abe516f5564.pdf6).

- ⚡ **Lightweight**:
    - **No Torch or Transformers** needed.
    - **Runs on CPU** for query or passage expansion.
    - **FLOPS- & retrieval-efficient**: refer to the model card for details.

   
## 🚀 Installation:

```shell
pip install spladerunner
```

## Usage:
```python
# One-time initialisation
from spladerunner import Expander
expander = Expander('Splade_PP_en_v1', 128)  # model name, max_seq_len

# Sample passage expansion
sparse_rep = expander.expand("The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science.")

# For Solr, Elasticsearch, or vanilla Lucene stores
sparse_rep = expander.expand("The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science.", outformat="lucene")

print(sparse_rep)
```

(Feel free to skip to section 3 if you are an expert in sparse and dense representations.)

## 2. Why Sparse Representations? 

    
- **Lexical search** with BOW-based sparse vectors is a strong baseline, but it famously suffers from the **vocabulary mismatch** problem, as it can only do exact term matching.

<details>
    
Pros

    ✅ Efficient and cheap.
    ✅ No need to fine-tune models.
    ✅ Interpretable.
    ✅ Exact term matches.

Cons

    ❌ Vocabulary mismatch (need to remember exact terms).

</details>
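To make the vocabulary-mismatch point concrete, here is a toy sketch (not part of SPLADERunner): with exact term matching, a query and a document that mean the same thing but use different words barely overlap.

```python
def bow_overlap(query: str, doc: str) -> int:
    """Count exact term matches between a query and a document (toy BOW view)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

# "doctor"/"physician" and "medicine"/"medication" never match exactly,
# so only "prescribed" contributes despite near-identical meaning.
print(bow_overlap("doctor prescribed medicine",
                  "the physician prescribed medication"))  # -> 1
```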


- **Semantic search**: learned neural / dense retrievers with approximate nearest-neighbour search have shown impressive results, but they have their own drawbacks.
  
<details>
    
Pros

    ✅ Searches the way humans innately think.
    ✅ When fine-tuned, beats sparse retrieval by a long way.
    ✅ Easily extends to multiple modalities.

Cons

    ❌ Suffers token amnesia (misses exact term matches).
    ❌ Resource intensive (both indexing & retrieval).
    ❌ Famously hard to interpret.
    ❌ Needs fine-tuning for OOD data.

</details>

- Getting the pros of both approaches made sense, which gave rise to interest in **learning sparse representations** for queries and documents while retaining some interpretability. These sparse representations also double as implicit or explicit (latent, contextualized) expansion mechanisms for both queries and documents. If you are new to query expansion, [learn more here from Daniel Tunkelang.](https://queryunderstanding.com/query-expansion-2d68d47cf9c8)
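Conceptually, a learned sparse representation is just a term → weight mapping, so query/document relevance reduces to a sparse dot product that any inverted index (Lucene, Elasticsearch, Solr) can compute. A minimal sketch with made-up weights (illustrative only, not SPLADERunner's actual output format):

```python
def sparse_dot(query_rep: dict, doc_rep: dict) -> float:
    """Score a query against a document; only shared terms contribute."""
    return sum(w * doc_rep[t] for t, w in query_rep.items() if t in doc_rep)

# Hypothetical expanded representations: term -> learned weight
query_rep = {"manhattan": 1.2, "project": 0.9, "nuclear": 0.4}
doc_rep = {"manhattan": 1.0, "project": 0.8, "bomb": 0.7, "nuclear": 0.5}
print(sparse_dot(query_rep, doc_rep))  # 1.2*1.0 + 0.9*0.8 + 0.4*0.5 = 2.12
```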



2a. **What do the models learn?**
- The model learns to project its dense representations through an MLM head to produce a distribution over the vocabulary.
  <center><img src="./images/vocproj.png" width=300/></center>
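For the curious: the SPLADE papers aggregate the MLM-head logits into one weight per vocabulary term by max-pooling over input tokens with log-saturation, w_j = max_i log(1 + ReLU(logit_ij)). A toy sketch of that aggregation (illustrative shapes, not SPLADERunner internals):

```python
import math

def splade_weights(mlm_logits):
    """Collapse per-token MLM logits (num_tokens x vocab_size) into one
    non-negative weight per vocabulary term. Negative logits are clipped
    to zero, so most terms get weight exactly 0 -> a sparse vector."""
    vocab_size = len(mlm_logits[0])
    return [
        max(math.log1p(max(tok[j], 0.0)) for tok in mlm_logits)
        for j in range(vocab_size)
    ]

# 2 input tokens, toy vocabulary of 4 terms
logits = [[2.0, -1.0, 0.0, 3.0],
          [0.5, -0.5, 1.0, -2.0]]
print(splade_weights(logits))  # term 1 gets exactly 0.0
```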

## 3. 💸 **Why SPLADERunner?**:
- $-conscious: serverless deployments like Lambda are charged by memory & time per invocation.
- Smaller package size = shorter cold-start times and quicker re-deployments for serverless.
    
## 4. 🎯 **Models**:
- Below is the list of models supported as of now:
    * [`prithivida/Splade_PP_en_v1`](https://huggingface.co/prithivida/Splade_PP_en_v1) (default model)

4a. 💸 **Where and How can you use?**
- [TBD]

4b. **How (and what) to contribute?**
- [TBD]

## 5. **Criticisms of, and competitors to, SPLADE and learned sparse representations:**

- [Wacky Weights in Learned Sparse Representations and the Revenge of Score-at-a-Time Query Evaluation](https://arxiv.org/pdf/2110.11540.pdf)
- [Query2doc: Query Expansion with Large Language Models](https://arxiv.org/pdf/2303.07678.pdf)
  *Note: don't mistake this for docT5query; this is more recent work.*


- *Thanks to [Nils Reimers](https://www.linkedin.com/in/reimersnils/) for*
    - the trolls :-) and timely inputs around evaluation.
- *Props to the Naver folks, the original authors of the paper, for such robust research.*


            
