rag-retrieval

Name: rag-retrieval
Version: 0.2.2
Summary: A unified API for various RAG Retrieval reranker models.
Homepage: https://github.com/NLPJCL/RAG-Retrieval
Author email: A grass in the car <suda.jcli@qq.com>
Upload time: 2024-05-05 13:42:53
Requires Python: >=3.8
License: MIT License, Copyright (c) 2024
Keywords: reranking, retrieval, rag, nlp
Welcome to the Reranker module of the rag_retrieval library. This is a Reranker tutorial that introduces the module's features and the things to watch out for, so that you can use it with confidence.


# Installation

```bash
# To avoid the automatically installed torch being incompatible with your local CUDA,
# it is recommended to manually install a torch build matching your local CUDA version before this step.
pip install rag-retrieval
```

# Features
The Reranker in rag_retrieval supports the following features.


We developed a lightweight Python library, [rag-retrieval](https://pypi.org/project/rag-retrieval/), which provides a unified way to call arbitrary RAG reranking models. It has the following characteristics:

1. Multiple reranking models: supports common open-source rerankers (cross-encoder rerankers, decoder-only LLM rerankers).

2. Long-document friendly: supports two different strategies for handling long documents (max-length truncation and max-score slicing).

3. Easy to extend: to add a new reranking model, users only need to inherit from basereranker and implement the rank and compute_score functions.
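The extension point described in point 3 can be sketched as follows. Note this is an illustrative sketch, not the library's actual code: the base-class name, method signatures, and the toy `KeywordOverlapReranker` below are assumptions; check the library source for the real interface.

```python
from abc import ABC, abstractmethod
from typing import List, Tuple, Union

class BaseReranker(ABC):
    """Hypothetical sketch of the base class a new ranker would inherit."""

    @abstractmethod
    def compute_score(self, sentence_pairs: Union[List[Tuple[str, str]], Tuple[str, str]]):
        """Return a score (or list of scores) for query/doc pairs."""

    @abstractmethod
    def rank(self, query: str, docs: List[str]) -> List[str]:
        """Return docs ordered by relevance to the query."""

class KeywordOverlapReranker(BaseReranker):
    """Toy ranker: score = number of shared lowercase tokens."""

    def compute_score(self, sentence_pairs):
        single = isinstance(sentence_pairs[0], str)  # a single (query, doc) pair?
        pairs = [sentence_pairs] if single else sentence_pairs
        scores = [len(set(q.lower().split()) & set(d.lower().split()))
                  for q, d in pairs]
        return scores[0] if single else scores

    def rank(self, query, docs):
        # Sort docs by their pairwise score, highest first.
        return sorted(docs, key=lambda d: self.compute_score((query, d)), reverse=True)

ranker = KeywordOverlapReranker()
print(ranker.rank("what is panda?", ["hi", "the giant panda is a bear"]))
# → ['the giant panda is a bear', 'hi']
```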



The following section describes how to load a model.


# Loading a model

```python

import os
os.environ['CUDA_VISIBLE_DEVICES']='7'

from rag_retrieval import Reranker

# If the model fails to download automatically, first download it from Hugging Face
# to a local directory and pass the local path here.

ranker = Reranker('BAAI/bge-reranker-base',dtype='fp16',verbose=0)
```

It is recommended to use os.environ['CUDA_VISIBLE_DEVICES'] to select the GPU.

**Reranker parameters**

```python
Reranker(
    model_name: str,
    model_type: Optional[str] = None,
    verbose: int = 1,
    **kwargs
)
```

**Parameter descriptions**


- model_name: the model name or a local path to the model. If the model fails to download automatically, first download it from Hugging Face to a local directory and pass the local path here.
- model_type: the reranker model type. Currently cross-encoder, llm, and colbert are supported. It can be omitted, in which case the code infers the type from the given model_name.
- verbose: whether to print debug information. Printing is on by default; once everything works, set verbose=0 to silence it.
- **kwargs**: model-related options can be passed here, for example:
    - device: the inference device, e.g. 'cpu' or 'cuda'. If unspecified, the following priority applies: GPU if available, otherwise MPS, otherwise NPU, otherwise CPU.
    - dtype: the dtype used to load the model; one of 'fp32', 'fp16', or 'bf16'. Defaults to fp32. Setting fp16 speeds up inference.


# Supported reranker models

## Cross Encoder ranker

For cross-encoder rankers, rag_retrieval's Reranker supports many strong open-source models. In general, any cross encoder built on transformers' **AutoModelForSequenceClassification** architecture can be run through Reranker. Examples:

- **the bge cross-encoder series, e.g. (BAAI/bge-reranker-base, BAAI/bge-reranker-large, BAAI/bge-reranker-v2-m3)**

- **the bce cross-encoder model, e.g. (maidalun1020/bce-reranker-base_v1)**


## LLM ranker

For LLM rankers, rag_retrieval's Reranker supports several strong purpose-built LLM reranking models, and also supports zero-shot reranking with any LLM chat model. Examples:

- **the bge LLM ranker series, e.g. (BAAI/bge-reranker-v2-gemma, BAAI/bge-reranker-v2-minicpm-layerwise, BAAI/bge-reranker-v2-m3)**

- **any LLM chat model, used for zero-shot reranking**


**The following sections describe the core methods of the ranker object returned by Reranker.**

# compute_score
This function computes and returns the relevance score of one or more sentence pairs.


```python
import os
os.environ['CUDA_VISIBLE_DEVICES']='7'

from rag_retrieval import Reranker


ranker = Reranker('BAAI/bge-reranker-base',dtype='fp16',verbose=0)

pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]


scores = ranker.compute_score(pairs)

print(scores)
```

[-8.1484375, 6.18359375]

**compute_score parameters**
```python
    def compute_score(self, 
        sentence_pairs: Union[List[Tuple[str, str]], Tuple[str, str]],
        batch_size: int = 256,
        max_length: int = 512,
        normalize: bool = False,
        enable_tqdm: bool = True,
    ):
```

**Input parameters**
- sentence_pairs: one or more sentence pairs to score.
- batch_size: batch size for a single forward pass; internally, sentence_pairs is split into batches of this size for inference.
- max_length: the maximum total length of a sentence pair; longer inputs are truncated.
- normalize: whether to apply a sigmoid to map the computed scores into the range 0 to 1.
- enable_tqdm: whether to show a tqdm progress bar during inference.

For the BAAI/bge-reranker-v2-minicpm-layerwise model among the LLM rankers, you can also pass cutoff_layers here to specify the number of layers used for inference; other models do not need it.
- cutoff_layers: list = None

**Return value**

If the input is a single sentence pair, a float is returned, representing the pair's score. If the input is multiple pairs, a list of floats is returned, one score per pair.
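As a concrete illustration of what `normalize=True` does, a sigmoid maps the raw scores from the example above into the range 0 to 1. This sketch is independent of the library; it only assumes the normalization is a plain sigmoid, as described:

```python
import math

def sigmoid(x: float) -> float:
    # Map a raw logit-style score into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

raw_scores = [-8.1484375, 6.18359375]  # raw scores from the example above
normalized = [sigmoid(s) for s in raw_scores]
print(normalized)  # ≈ [0.000289, 0.997941]
```

A large negative score maps close to 0 and a large positive score close to 1, which makes scores from different pairs easier to compare against a fixed threshold.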



# rerank
This function scores a query against a list of docs, with support for different long-document handling strategies.

```python

import os
os.environ['CUDA_VISIBLE_DEVICES']='7'

from rag_retrieval import Reranker

ranker = Reranker('BAAI/bge-reranker-base',dtype='fp16',verbose=0)

query='what is panda?'

docs=['hi','The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']

doc_ranked = ranker.rerank(query,docs)
print(doc_ranked)

```
results=[Result(doc_id=1, text='The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.', score=6.18359375, rank=1), Result(doc_id=0, text='hi', score=-8.1484375, rank=2)] query='what is panda?' has_scores=True

**rerank parameters**

```python
    def rerank(self, 
        query: str, 
        docs: Union[List[str], str] = None,
        batch_size: int = 256,
        max_length: int = 512,
        normalize: bool = False,
        long_doc_process_strategy: str="max_score_slice",#['max_score_slice','max_length_truncation']
    ):  
```

**Input parameters**
- query: the query text.
- docs: one doc or a list of docs.
- batch_size: batch size for a single forward pass.
- max_length: the maximum total length of a sentence pair; longer inputs are truncated.
- normalize: whether to apply a sigmoid to map the computed scores into the range 0 to 1.
- long_doc_process_strategy: how long docs are handled; choose between
    - max_score_slice: split a long doc into slices by length, score the query against every slice, and use the maximum slice score as the score of the whole doc.
    - max_length_truncation: truncate query plus doc to max_length before scoring.

For the BAAI/bge-reranker-v2-minicpm-layerwise model among the LLM rankers, you can also pass cutoff_layers here to specify the number of layers used for inference; other models do not need it.
- cutoff_layers: list = None
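The max_score_slice strategy can be sketched independently of the library as follows. Here `score_fn` stands in for a real cross-encoder call, and chunking by word count is a simplification of the library's length-based slicing; both are assumptions for illustration:

```python
from typing import Callable

def max_score_slice(query: str, doc: str,
                    score_fn: Callable[[str, str], float],
                    chunk_words: int = 5) -> float:
    """Split a long doc into chunks, score each chunk against the query,
    and keep the best chunk score as the doc's overall score."""
    words = doc.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)] or [""]
    return max(score_fn(query, c) for c in chunks)

# Toy scoring function: count of shared lowercase tokens.
def overlap(query: str, chunk: str) -> float:
    return len(set(query.lower().split()) & set(chunk.lower().split()))

doc = "hi there " * 3 + "the giant panda is a bear species endemic to China"
print(max_score_slice("is panda a bear", doc, overlap))  # → 2
```

The relevant slice near the end of the doc determines the score, whereas max_length_truncation would only ever see the doc's beginning.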

**Return value**

The return value is a RankedResults object whose main attribute is results: List[Result], a list of Result objects. A Result has the attributes:
- doc_id: Union[int, str]
- text: str
- score: Optional[float] = None
- rank: Optional[int] = None

RankedResults also offers some convenience methods, such as top_k, which returns the top-k Results by score, and get_score_by_docid, which takes a doc's position in the original input order and returns its score.
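A minimal sketch of how those two helpers could behave. The field names follow the Result attributes listed above, but the method implementations here are assumptions, not the library's actual code:

```python
from dataclasses import dataclass
from typing import List, Optional, Union

@dataclass
class Result:
    doc_id: Union[int, str]
    text: str
    score: Optional[float] = None
    rank: Optional[int] = None

@dataclass
class RankedResults:
    results: List[Result]

    def top_k(self, k: int) -> List[Result]:
        # Highest-score Results first.
        return sorted(self.results, key=lambda r: r.score, reverse=True)[:k]

    def get_score_by_docid(self, doc_id: Union[int, str]) -> Optional[float]:
        # doc_id is the doc's position in the original input order.
        for r in self.results:
            if r.doc_id == doc_id:
                return r.score
        return None

# Values taken from the rerank example output above.
ranked = RankedResults([
    Result(doc_id=1, text="The giant panda ...", score=6.18359375, rank=1),
    Result(doc_id=0, text="hi", score=-8.1484375, rank=2),
])
print(ranked.top_k(1)[0].doc_id)     # → 1
print(ranked.get_score_by_docid(0))  # → -8.1484375
```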


            
