[license-image]: https://img.shields.io/badge/License-Apache%202.0-blue.svg
[license-url]: https://opensource.org/licenses/Apache-2.0
[pypi-image]: https://badge.fury.io/py/open-retrievals.svg
[pypi-url]: https://pypi.org/project/open-retrievals
[pepy-image]: https://pepy.tech/badge/retrievals/month
[pepy-url]: https://pepy.tech/project/retrievals
[build-image]: https://github.com/LongxingTan/open-retrievals/actions/workflows/test.yml/badge.svg?branch=master
[build-url]: https://github.com/LongxingTan/open-retrievals/actions/workflows/test.yml?query=branch%3Amaster
[lint-image]: https://github.com/LongxingTan/open-retrievals/actions/workflows/lint.yml/badge.svg?branch=master
[lint-url]: https://github.com/LongxingTan/open-retrievals/actions/workflows/lint.yml?query=branch%3Amaster
[docs-image]: https://readthedocs.org/projects/open-retrievals/badge/?version=latest
[docs-url]: https://open-retrievals.readthedocs.io/en/master/
[coverage-image]: https://codecov.io/gh/longxingtan/open-retrievals/branch/master/graph/badge.svg
[coverage-url]: https://codecov.io/github/longxingtan/open-retrievals?branch=master
[contributing-image]: https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat
[contributing-url]: https://github.com/longxingtan/open-retrievals/blob/master/CONTRIBUTING.md
<h1 align="center">
<img src="./docs/source/_static/logo.svg" width="420" align=center/>
</h1>
<div align="center">
[![LICENSE][license-image]][license-url]
[![PyPI Version][pypi-image]][pypi-url]
[![Build Status][build-image]][build-url]
[![Lint Status][lint-image]][lint-url]
[![Docs Status][docs-image]][docs-url]
[![Code Coverage][coverage-image]][coverage-url]
[![Contributing][contributing-image]][contributing-url]
**[Documentation](https://open-retrievals.readthedocs.io/en/master/)** | **[中文](https://github.com/LongxingTan/open-retrievals/blob/master/README_zh-CN.md)** | **[日本語](https://github.com/LongxingTan/open-retrievals/blob/master/README_ja-JP.md)**
</div>
![structure](./docs/source/_static/structure.png)
**Open-retrievals** unifies text embedding, retrieval, reranking and RAG, and makes it easy, flexible and scalable to fine-tune models.
- Fine-tune embeddings through point-wise, pairwise, listwise and contrastive learning, including LLM-based embedding models.
- Fine-tune rerankers with cross-encoder, ColBERT and LLM architectures.
- Easily build enhanced, modular RAG, integrated with Transformers, LangChain and LlamaIndex.
| Experiment | Model | Original | Finetuned | Demo |
|-------------------------------|------------------------|----------|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **embed** pairwise finetune | bge-base-zh-v1.5 | 0.657 | **0.703** | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/17KXe2lnNRID-HiVvMtzQnONiO74oGs91?usp=sharing) |
| **embed** LLM finetune (LoRA) | e5-mistral-7b-instruct | 0.651 | **0.699** | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1jj1kBQWFcuQ3a7P9ttnl1hgX7H8WA_Za?usp=sharing) |
| **rerank** cross encoder | bge-reranker-base | 0.666 | **0.706** | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QvbUkZtG56SXomGYidwI4RQzwODQrWNm?usp=sharing) |
| **rerank** colbert | bge-m3 | 0.657 | **0.695** | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QVtqhQ080ZMltXoJyODMmvEQYI6oo5kO?usp=sharing) |
| **rerank** LLM (LoRA) | bge-reranker-v2-gemma | 0.637 | **0.706** | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1fzq1iV7-f8hNKFnjMmpVhVxadqPb9IXk?usp=sharing) |
* The evaluation metric is MAP, measured on a 10% sample of the [T2Reranking data](https://huggingface.co/datasets/C-MTEB/T2Reranking); a minimal sketch of the metric follows below.
* Read [more examples](./examples)
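<details><summary> What MAP measures (illustrative sketch) </summary>

MAP (mean average precision) averages, over queries, the precision at each rank where a relevant passage appears. The sketch below is illustrative only: it assumes every relevant passage appears somewhere in the ranked candidate list, as in the reranking setup above, and it is not the exact evaluation script behind the table.

```python
from typing import List


def average_precision(ranked_relevance: List[int]) -> float:
    """AP for one query; ranked_relevance[i] is 1 if the passage ranked i+1 is relevant."""
    hits, precision_sum = 0, 0.0
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(hits, 1)


def mean_average_precision(relevance_per_query: List[List[int]]) -> float:
    return sum(average_precision(r) for r in relevance_per_query) / len(relevance_per_query)


# Toy check: relevant passages ranked (1st, 3rd) for query 1 and (2nd) for query 2
print(round(mean_average_precision([[1, 0, 1, 0], [0, 1, 0, 0]]), 4))  # 0.6667
```
</details>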
## Installation
**With pip**
```shell
pip install transformers
pip install open-retrievals
```
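
The dense retrieval quick-start below builds a FAISS index on disk. If FAISS is not already available in your environment, the CPU build can be installed separately (assumption: the CPU build is sufficient for your setup):

```shell
pip install faiss-cpu
```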
## Quick-start
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1-WBMisdWLeHUKlzJ2DrREXY_kSV8vjP3?usp=sharing)
<details><summary> Embedding from pretrained weights </summary>
```python
from retrievals import AutoModelForEmbedding
sentences = [
'query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
model_name_or_path = 'intfloat/e5-base-v2'
model = AutoModelForEmbedding.from_pretrained(model_name_or_path, pooling_method="mean")
embeddings = model.encode(sentences, normalize_embeddings=True)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
</details>
<details><summary> Index building for dense retrieval search </summary>
```python
from retrievals import AutoModelForEmbedding, AutoModelForRetrieval
sentences = ['A dog is chasing car.', 'A man is playing a guitar.']
model_name_or_path = "sentence-transformers/all-MiniLM-L6-v2"
index_path = './database/faiss/faiss.index'
model = AutoModelForEmbedding.from_pretrained(model_name_or_path, pooling_method='mean')
model.build_index(sentences, index_path=index_path)
query_embed = model.encode("He plays guitar.")
matcher = AutoModelForRetrieval()
dists, indices = matcher.search(query_embed, index_path=index_path)
print(indices)
```
</details>
<details><summary> Rerank using pretrained weights </summary>
```python
from retrievals import AutoModelForRanking
model_name_or_path: str = "BAAI/bge-reranker-base"
rerank_model = AutoModelForRanking.from_pretrained(model_name_or_path)
scores_list = rerank_model.compute_score(["In 1974, I won the championship in Southeast Asia in my first kickboxing match", "In 1982, I defeated the heavy hitter Ryu Long."])
print(scores_list)
```
</details>
<details><summary> RAG with LangChain integration </summary>
```shell
pip install langchain
pip install langchain_community
pip install chromadb
```
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1fJC-8er-a4NRkdJkwWr4On7lGt9rAO4P?usp=sharing)
```python
from retrievals.tools.langchain import LangchainEmbedding, LangchainReranker, LangchainLLM
from retrievals import AutoModelForRanking
from langchain.retrievers import ContextualCompressionRetriever
from langchain_community.vectorstores import Chroma as Vectorstore
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import RetrievalQA
persist_directory = './database/chroma'
embed_model_name_or_path = "sentence-transformers/all-MiniLM-L6-v2"
rerank_model_name_or_path = "BAAI/bge-reranker-base"
llm_model_name_or_path = "microsoft/Phi-3-mini-128k-instruct"
embeddings = LangchainEmbedding(model_name=embed_model_name_or_path)
vectordb = Vectorstore(
persist_directory=persist_directory,
embedding_function=embeddings,
)
retrieval_args = {"search_type": "similarity_score_threshold", "search_kwargs": {"score_threshold": 0.15, "k": 10}}
retriever = vectordb.as_retriever(**retrieval_args)
ranker = AutoModelForRanking.from_pretrained(rerank_model_name_or_path)
reranker = LangchainReranker(model=ranker, top_n=3)
compression_retriever = ContextualCompressionRetriever(
base_compressor=reranker, base_retriever=retriever
)
llm = LangchainLLM(model_name_or_path=llm_model_name_or_path)
RESPONSE_TEMPLATE = """[INST]
<<SYS>>
You are a helpful AI assistant. Use the following pieces of context to answer the user's question.
<</SYS>>
Anything between the following `context` html blocks is retrieved from a knowledge base.
{context}
REMEMBER:
- If you don't know the answer, just say that you don't know, don't try to make up an answer.
- Let's take a deep breath and think step-by-step.
Question: {question}[/INST]
Helpful Answer:
"""
PROMPT = PromptTemplate(template=RESPONSE_TEMPLATE, input_variables=["context", "question"])
qa_chain = RetrievalQA.from_chain_type(
llm,
chain_type='stuff',
retriever=compression_retriever,
chain_type_kwargs={
"verbose": True,
"prompt": PROMPT,
}
)
user_query = 'Introduce this'
response = qa_chain({"query": user_query})
print(response)
```
</details>
## Fine-tuning
<details><summary> Fine-tune embedding </summary>
```python
import torch.nn as nn
from datasets import load_dataset
from transformers import AutoTokenizer, AdamW, get_linear_schedule_with_warmup, TrainingArguments
from retrievals import AutoModelForEmbedding, RetrievalTrainer, PairCollator, TripletCollator
from retrievals.losses import ArcFaceAdaptiveMarginLoss, InfoNCE, SimCSE, TripletLoss
model_name_or_path: str = "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
batch_size: int = 32
epochs: int = 3
train_dataset = load_dataset('shibing624/nli_zh', 'STS-B')['train']
train_dataset = train_dataset.rename_columns({'sentence1': 'query', 'sentence2': 'positive'})
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=False)
model = AutoModelForEmbedding.from_pretrained(model_name_or_path, pooling_method="mean")
model = model.set_train_type('pairwise')
optimizer = AdamW(model.parameters(), lr=5e-5)
num_train_steps = int(len(train_dataset) / batch_size * epochs)
scheduler = get_linear_schedule_with_warmup(
optimizer, num_warmup_steps=0.05 * num_train_steps, num_training_steps=num_train_steps
)
training_arguments = TrainingArguments(
output_dir='./checkpoints',
num_train_epochs=epochs,
per_device_train_batch_size=batch_size,
remove_unused_columns=False,
logging_steps=100,
)
trainer = RetrievalTrainer(
model=model,
args=training_arguments,
train_dataset=train_dataset,
data_collator=PairCollator(tokenizer, query_max_length=32, document_max_length=128),
loss_fn=InfoNCE(nn.CrossEntropyLoss(label_smoothing=0.05)),
)
trainer.optimizer = optimizer
trainer.scheduler = scheduler
trainer.train()
```
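
Once training finishes, the fine-tuned weights can be used like any pretrained embedding model. A minimal sketch, assuming the final weights have been saved to the `./checkpoints` directory used above (for example via `trainer.save_model('./checkpoints')`; depending on the training arguments the trainer may instead write to a `checkpoint-*` subfolder):

```python
from retrievals import AutoModelForEmbedding

# Path is an assumption: point it at wherever the fine-tuned weights were saved.
finetuned_model = AutoModelForEmbedding.from_pretrained('./checkpoints', pooling_method="mean")
embeddings = finetuned_model.encode(['A dog is chasing car.', 'A man is playing a guitar.'], normalize_embeddings=True)
print(embeddings @ embeddings.T)
```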
</details>
<details><summary> Fine-tune LLM embedding </summary>
```python
import torch.nn as nn
from datasets import load_dataset
from transformers import AutoTokenizer, AdamW, get_linear_schedule_with_warmup, TrainingArguments
from retrievals import AutoModelForEmbedding, RetrievalTrainer, PairCollator, TripletCollator
from retrievals.losses import InfoNCE, SimCSE, TripletLoss
def add_instructions(example):
example['query'] = query_instruction + example['query']
example['positive'] = document_instruction + example['positive']
return example
model_name_or_path: str = "Qwen/Qwen2-1.5B-Instruct"
batch_size: int = 8
epochs: int = 3
query_instruction = "Retrieve relevant passages that answer the query\nQuery: "
document_instruction = "Document: "
train_dataset = load_dataset('shibing624/nli_zh', 'STS-B')['train']
train_dataset = train_dataset.rename_columns({'sentence1': 'query', 'sentence2': 'positive'})
train_dataset = train_dataset.map(add_instructions)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=False)
model = AutoModelForEmbedding.from_pretrained(model_name_or_path, pooling_method="last", use_lora=True)
model = model.set_train_type('pairwise', loss_fn=InfoNCE(nn.CrossEntropyLoss(label_smoothing=0.05)))
optimizer = AdamW(model.parameters(), lr=5e-5)
num_train_steps = int(len(train_dataset) / batch_size * epochs)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0.05 * num_train_steps, num_training_steps=num_train_steps)
training_arguments = TrainingArguments(
output_dir='./checkpoints',
num_train_epochs=epochs,
per_device_train_batch_size=batch_size,
remove_unused_columns=False,
logging_steps=100,
)
trainer = RetrievalTrainer(
model=model,
args=training_arguments,
train_dataset=train_dataset,
data_collator=PairCollator(tokenizer, query_max_length=64, document_max_length=128),
)
trainer.optimizer = optimizer
trainer.scheduler = scheduler
trainer.train()
```
</details>
<details><summary> Fine-tune cross-encoder reranking </summary>
```python
from transformers import AutoTokenizer, TrainingArguments, get_cosine_schedule_with_warmup, AdamW
from retrievals import RerankCollator, AutoModelForRanking, RerankTrainer, RerankTrainDataset
model_name_or_path: str = "BAAI/bge-reranker-base"
max_length: int = 128
learning_rate: float = 3e-5
batch_size: int = 4
epochs: int = 3
output_dir: str = "./checkpoints"
train_dataset = RerankTrainDataset("C-MTEB/T2Reranking", positive_key="positive", negative_key="negative", dataset_split='dev')
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=False)
model = AutoModelForRanking.from_pretrained(model_name_or_path)
optimizer = AdamW(model.parameters(), lr=learning_rate)
num_train_steps = int(len(train_dataset) / batch_size * epochs)
scheduler = get_cosine_schedule_with_warmup(
optimizer,
num_warmup_steps=0.05 * num_train_steps,
num_training_steps=num_train_steps,
)
training_args = TrainingArguments(
learning_rate=learning_rate,
per_device_train_batch_size=batch_size,
num_train_epochs=epochs,
output_dir=output_dir,
remove_unused_columns=False,
logging_steps=100,
report_to="none",
)
trainer = RerankTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
data_collator=RerankCollator(tokenizer, max_length=max_length),
)
trainer.optimizer = optimizer
trainer.scheduler = scheduler
trainer.train()
```
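
The fine-tuned cross-encoder can then score query-passage pairs just like the pretrained reranker in the quick-start. A minimal sketch, assuming the weights were saved to the `output_dir` above (for example via `trainer.save_model(output_dir)`):

```python
from retrievals import AutoModelForRanking

# Path is an assumption: point it at wherever the fine-tuned weights were saved.
reranker = AutoModelForRanking.from_pretrained("./checkpoints")
scores = reranker.compute_score(["how much protein should a female eat", "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day."])
print(scores)
```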
</details>
<details><summary> Fine-tune ColBERT reranking </summary>
```python
import os
import transformers
from transformers import (
AdamW,
AutoTokenizer,
TrainingArguments,
get_cosine_schedule_with_warmup,
)
from retrievals import ColBERT, ColBertCollator, RerankTrainer, RetrievalTrainDataset
from retrievals.losses import ColbertLoss
transformers.logging.set_verbosity_error()
os.environ["WANDB_DISABLED"] = "true"
model_name_or_path: str = "BAAI/bge-m3"
learning_rate: float = 5e-6
batch_size: int = 32
epochs: int = 3
colbert_dim: int = 1024
output_dir: str = './checkpoints'
train_dataset = RetrievalTrainDataset('C-MTEB/T2Reranking', positive_key='positive', negative_key='negative', dataset_split='dev')
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=False)
data_collator = ColBertCollator(
tokenizer,
query_max_length=128,
document_max_length=256,
positive_key='positive',
negative_key='negative',
)
model = ColBERT.from_pretrained(
model_name_or_path,
colbert_dim=colbert_dim,
loss_fn=ColbertLoss(use_inbatch_negative=False),
)
optimizer = AdamW(model.parameters(), lr=learning_rate)
num_train_steps = int(len(train_dataset) / batch_size * epochs)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=0.05 * num_train_steps, num_training_steps=num_train_steps)
training_args = TrainingArguments(
learning_rate=learning_rate,
per_device_train_batch_size=batch_size,
num_train_epochs=epochs,
output_dir=output_dir,
remove_unused_columns=False,
logging_steps=100,
)
trainer = RerankTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
data_collator=data_collator,
)
trainer.optimizer = optimizer
trainer.scheduler = scheduler
trainer.train()
```
</details>
<details><summary> Fine-tune LLM reranking </summary>
```python
from transformers import (
AdamW,
AutoTokenizer,
TrainingArguments,
get_cosine_schedule_with_warmup,
)
from retrievals import (
LLMRanker,
LLMRerankCollator,
RerankTrainer,
RetrievalTrainDataset,
)
from retrievals.losses import TokenLoss
model_name_or_path: str = "Qwen/Qwen2-1.5B-Instruct"
max_length: int = 512
learning_rate: float = 3e-5
batch_size: int = 8
epochs: int = 3
task_prompt: str = (
"""Given a query A and a passage B, determine whether the passage contains an answer to the query"""
"""by providing a prediction of either 'Yes' or 'No'."""
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=False)
train_dataset = RetrievalTrainDataset(
data_name_or_path='C-MTEB/T2Reranking',
positive_key='positive',
negative_key='negative',
query_instruction='A: ',
document_instruction='B: ',
dataset_split='dev',
)
data_collator = LLMRerankCollator(tokenizer=tokenizer, max_length=max_length, prompt=task_prompt, add_target_token='Yes')
token_index = tokenizer('Yes', add_special_tokens=False)['input_ids'][-1]
model = LLMRanker.from_pretrained(
model_name_or_path,
causal_lm=True,
use_fp16=True,
loss_fn=TokenLoss(token_index=token_index),
use_lora=True,
)
optimizer = AdamW(model.parameters(), lr=learning_rate)
num_train_steps = int(len(train_dataset) / batch_size * epochs)
scheduler = get_cosine_schedule_with_warmup(
optimizer,
num_warmup_steps=0.05 * num_train_steps,
num_training_steps=num_train_steps,
)
training_args = TrainingArguments(
learning_rate=learning_rate,
per_device_train_batch_size=batch_size,
num_train_epochs=epochs,
output_dir="./checkpoints",
remove_unused_columns=False,
)
trainer = RerankTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
data_collator=data_collator,
)
trainer.optimizer = optimizer
trainer.scheduler = scheduler
trainer.train()
```
</details>
## Reference & Acknowledge
- [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers)
- [luyug/Dense](https://github.com/luyug/Dense)
- [FlagOpen/FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)