text2vec

Name: text2vec
Version: 1.2.9
Home page: https://github.com/shibing624/text2vec
Summary: Text to vector tool, encode text
Upload time: 2023-09-20 03:08:22
Author: XuMing
Requires Python: >=3.6.0
License: Apache License 2.0
Keywords: word embedding, text2vec, Chinese text similarity calculation tool, similarity, word2vec
            [**🇨🇳中文**](https://github.com/shibing624/text2vec/blob/master/README.md) | [**🌐English**](https://github.com/shibing624/text2vec/blob/master/README_EN.md) | [**📖文档/Docs**](https://github.com/shibing624/text2vec/wiki) | [**🤖模型/Models**](https://huggingface.co/shibing624) 

<div align="center">
  <a href="https://github.com/shibing624/text2vec">
    <img src="https://github.com/shibing624/text2vec/blob/master/docs/t2v-logo.png" height="150" alt="Logo">
  </a>
</div>

-----------------

# Text2vec: Text to Vector
[![PyPI version](https://badge.fury.io/py/text2vec.svg)](https://badge.fury.io/py/text2vec)
[![Downloads](https://static.pepy.tech/badge/text2vec)](https://pepy.tech/project/text2vec)
[![Contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg)](CONTRIBUTING.md)
[![License Apache 2.0](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)
[![python_version](https://img.shields.io/badge/Python-3.5%2B-green.svg)](requirements.txt)
[![GitHub issues](https://img.shields.io/github/issues/shibing624/text2vec.svg)](https://github.com/shibing624/text2vec/issues)
[![Wechat Group](http://vlog.sfyc.ltd/wechat_everyday/wxgroup_logo.png?imageView2/0/w/60/h/20)](#Contact)


**Text2vec**: Text to Vector, Get Sentence Embeddings. Vectorize text: represent words, sentences, and paragraphs as vector matrices.

**text2vec** implements several text representation and text similarity models, including Word2Vec, RankBM25, BERT, Sentence-BERT, and CoSENT, and compares their performance on text semantic matching (similarity) tasks.

### News
[2023/09/19] v1.2.8: Added multi-GPU inference (multi-process over multiple GPUs or CPUs) and a command-line tool (CLI) for batch-computing text embeddings without writing code. See [Release-v1.2.8](https://github.com/shibing624/text2vec/releases/tag/1.2.8)

[2023/09/03] v1.2.4: Added FlagEmbedding model training and released the Chinese matching model [shibing624/text2vec-bge-large-chinese](https://huggingface.co/shibing624/text2vec-bge-large-chinese), trained with the supervised CoSENT method on a Chinese matching dataset based on `BAAI/bge-large-zh-noinstruct`. It improves on the base model on Chinese test sets, notably on short-text discrimination. See [Release-v1.2.4](https://github.com/shibing624/text2vec/releases/tag/1.2.4)

[2023/07/17] v1.2.2: Added multi-GPU training and released the multilingual matching model [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual), trained with the CoSENT method on the hand-curated multilingual STS dataset [shibing624/nli-zh-all/text2vec-base-multilingual-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset) based on `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`. It improves on the base model on Chinese and English test sets. See [Release-v1.2.2](https://github.com/shibing624/text2vec/releases/tag/1.2.2)

[2023/06/19] v1.2.1: Replaced the Chinese matching model `shibing624/text2vec-base-chinese-nli` with the new [shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence). Because the CoSENT loss is sensitive to ranking, a high-quality, relevance-ranked STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset) was hand-curated; the new model improves on all evaluation sets. Also released the s2p Chinese matching model [shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase). See [Release-v1.2.1](https://github.com/shibing624/text2vec/releases/tag/1.2.1)

[2023/06/15] v1.2.0: Released the Chinese matching model [shibing624/text2vec-base-chinese-nli](https://huggingface.co/shibing624/text2vec-base-chinese-nli), a CoSENT text matching model based on `nghuyong/ernie-3.0-base-zh` and trained on the full corpus of the Chinese NLI dataset [shibing624/nli_zh](https://huggingface.co/datasets/shibing624/nli_zh). It improves markedly on all evaluation sets. See [Release-v1.2.0](https://github.com/shibing624/text2vec/releases/tag/1.2.0)

[2022/03/12] v1.1.4: Released the Chinese matching model [shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese), a CoSENT matching model trained on the Chinese STS-B training set. See [Release-v1.1.4](https://github.com/shibing624/text2vec/releases/tag/1.1.4)


**Guide**
- [Features](#Features)
- [Evaluation](#Evaluation)
- [Install](#install)
- [Usage](#usage)
- [Contact](#Contact)
- [References](#references)


## Features
### Text representation models
- [Word2Vec](https://github.com/shibing624/text2vec/blob/master/text2vec/word2vec.py): word-vector lookup backed by Tencent AI Lab's large, high-quality Chinese [word embeddings (8M-word light version)](https://pan.baidu.com/s/1La4U4XNFe8s5BJqxPQpeiQ) (file: light_Tencent_AILab_ChineseEmbedding.bin, extraction code: tawe). Sentence embeddings are the average of the word vectors
- [SBERT(Sentence-BERT)](https://github.com/shibing624/text2vec/blob/master/text2vec/sentencebert_model.py): a sentence embedding model that balances accuracy and efficiency. Training fine-tunes BERT with a supervised softmax classification objective; at matching time the cosine of the two sentence vectors is taken directly. This project reproduces Sentence-BERT training and prediction in PyTorch
- [CoSENT(Cosine Sentence)](https://github.com/shibing624/text2vec/blob/master/text2vec/cosent_model.py): CoSENT introduces a ranking loss that brings training closer to prediction; it converges faster and performs better than Sentence-BERT. This project implements CoSENT training and prediction in PyTorch
- [BGE(BAAI general embedding)](https://github.com/shibing624/text2vec/blob/master/text2vec/bge_model.py): BGE is pre-trained with the [RetroMAE](https://github.com/staoxiao/RetroMAE) method ([paper](https://aclanthology.org/2022.emnlp-main.35.pdf)) and then fine-tuned with contrastive learning. This project implements BGE fine-tuning and prediction in PyTorch


For details on text vector representation methods, see the wiki: [文本向量表示方法](https://github.com/shibing624/text2vec/wiki/%E6%96%87%E6%9C%AC%E5%90%91%E9%87%8F%E8%A1%A8%E7%A4%BA%E6%96%B9%E6%B3%95)

## Evaluation

Text matching

#### Results on English matching datasets:


| Arch   | BaseModel                                        | Model                                                                                                                | English-STS-B | 
|:-------|:------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------|:-------------:|
| GloVe  | glove                                           | Avg_word_embeddings_glove_6B_300d                                                                                    |     61.77     |
| BERT   | bert-base-uncased                               | BERT-base-cls                                                                                                        |     20.29     |
| BERT   | bert-base-uncased                               | BERT-base-first_last_avg                                                                                             |     59.04     |
| BERT   | bert-base-uncased                               | BERT-base-first_last_avg-whiten(NLI)                                                                                 |     63.65     |
| SBERT  | sentence-transformers/bert-base-nli-mean-tokens | SBERT-base-nli-cls                                                                                                   |     73.65     |
| SBERT  | sentence-transformers/bert-base-nli-mean-tokens | SBERT-base-nli-first_last_avg                                                                                        |     77.96     |
| CoSENT | bert-base-uncased                               | CoSENT-base-first_last_avg                                                                                           |     69.93     |
| CoSENT | sentence-transformers/bert-base-nli-mean-tokens | CoSENT-base-nli-first_last_avg                                                                                       |     79.68     |
| CoSENT | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual)                |     80.12     |

#### Results on Chinese matching datasets:


| Arch   | BaseModel                    | Model           | ATEC  |  BQ   | LCQMC | PAWSX | STS-B |  Avg  | 
|:-------|:----------------------------|:--------------------|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| SBERT  | bert-base-chinese           | SBERT-bert-base     | 46.36 | 70.36 | 78.72 | 46.86 | 66.41 | 61.74 |
| SBERT  | hfl/chinese-macbert-base    | SBERT-macbert-base  | 47.28 | 68.63 | 79.42 | 55.59 | 64.82 | 63.15 |
| SBERT  | hfl/chinese-roberta-wwm-ext | SBERT-roberta-ext   | 48.29 | 69.99 | 79.22 | 44.10 | 72.42 | 62.80 |
| CoSENT | bert-base-chinese           | CoSENT-bert-base    | 49.74 | 72.38 | 78.69 | 60.00 | 79.27 | 68.01 |
| CoSENT | hfl/chinese-macbert-base    | CoSENT-macbert-base | 50.39 | 72.93 | 79.17 | 60.86 | 79.30 | 68.53 |
| CoSENT | hfl/chinese-roberta-wwm-ext | CoSENT-roberta-ext  | 50.81 | 71.45 | 79.31 | 61.56 | 79.96 | 68.61 |

Notes:
- Evaluation metric: Spearman correlation
- To measure model capability, each result uses only the dataset's own train split for training and is evaluated on its test split; no external data is used
- The `SBERT-macbert-base` model is trained with the SBERT method; run [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py) to train such a model
- The `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` model is trained with SBERT; it is the multilingual version of `paraphrase-MiniLM-L12-v2` and supports Chinese, English, and other languages


### Release Models
- Chinese matching results for the models released by this project:

| Arch       | BaseModel                                                   | Model                                                                                                                                             | ATEC  |  BQ   | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc |    Avg    |  QPS  |
|:-----------|:------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----:|:-----:|:-----:|:-----:|:-----:|:-------:|:-------:|:---------:|:-----:|
| Word2Vec   | word2vec                                                    | [w2v-light-tencent-chinese](https://ai.tencent.com/ailab/nlp/en/download.html)                                                                    | 20.00 | 31.49 | 59.46 | 2.57  | 55.78 |  55.04  |  20.70  |   35.03   | 23769 |
| SBERT      | xlm-roberta-base                                            | [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 18.42 | 38.52 | 63.96 | 10.14 | 78.90 |  63.01  |  52.28  |   46.46   | 3138  |
| CoSENT     | hfl/chinese-macbert-base                                    | [shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese)                                                       | 31.93 | 42.67 | 70.16 | 17.21 | 79.30 |  70.27  |  50.42  |   51.61   | 3008  |
| CoSENT     | hfl/chinese-lert-large                                      | [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese)                                                   | 32.61 | 44.59 | 69.30 | 14.51 | 79.44 |  73.01  |  59.04  |   53.12   | 2092  |
| CoSENT     | nghuyong/ernie-3.0-base-zh                                  | [shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence)                                     | 43.37 | 61.43 | 73.48 | 38.90 | 78.25 |  70.60  |  53.08  |   59.87   | 3089  |
| CoSENT     | nghuyong/ernie-3.0-base-zh                                  | [shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase)                                 | 44.89 | 63.58 | 74.24 | 40.90 | 78.93 |  76.70  |  63.30  | **63.08** | 3066  |
| CoSENT     | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual)                                             | 32.39 | 50.33 | 65.64 | 32.56 | 74.45 |  68.88  |  51.17  |   53.67   | 3138  |
| CoSENT     | BAAI/bge-large-zh-noinstruct                                | [shibing624/text2vec-bge-large-chinese](https://huggingface.co/shibing624/text2vec-bge-large-chinese)                                             | 38.41 | 61.34 | 71.72 | 35.15 | 76.44 |  71.81  |  63.15  |   59.72   |  844  |


Notes:
- Evaluation metric: Spearman correlation
- The `shibing624/text2vec-base-chinese` model is trained with the CoSENT method, based on `hfl/chinese-macbert-base` on the Chinese STS-B data, and performs well on the Chinese STS-B test set. Run [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py) to train such a model; the model files are on the HF model hub. Recommended for general Chinese semantic matching tasks
- The `shibing624/text2vec-base-chinese-sentence` model is trained with the CoSENT method, based on `nghuyong/ernie-3.0-base-zh` on the hand-curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset), and performs well on the Chinese NLI test sets. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train such a model; the model files are on the HF model hub. Recommended for Chinese s2s (sentence vs. sentence) semantic matching tasks
- The `shibing624/text2vec-base-chinese-paraphrase` model is trained with the CoSENT method, based on `nghuyong/ernie-3.0-base-zh` on the hand-curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-paraphrase-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-paraphrase-dataset), which adds s2p (sentence to paraphrase) data relative to [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset) and thus strengthens long-text representation. It reaches SOTA on the Chinese NLI test sets. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train such a model; the model files are on the HF model hub. Recommended for Chinese s2p (sentence vs. paragraph) semantic matching tasks
- The `shibing624/text2vec-base-multilingual` model is trained with the CoSENT method, based on `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` on the hand-curated multilingual STS dataset [shibing624/nli-zh-all/text2vec-base-multilingual-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset), and improves on the base model on Chinese and English test sets. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train such a model; the model files are on the HF model hub. Recommended for multilingual semantic matching tasks
- The `shibing624/text2vec-bge-large-chinese` model is trained with the CoSENT method, based on `BAAI/bge-large-zh-noinstruct` on the hand-curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-paraphrase-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-paraphrase-dataset), and improves on the base model on Chinese test sets, notably on short-text discrimination. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train such a model; the model files are on the HF model hub. Recommended for Chinese s2s (sentence vs. sentence) semantic matching tasks
- `w2v-light-tencent-chinese` is a Word2Vec model over the Tencent word embeddings; it loads on CPU and suits literal Chinese matching tasks and cold-start scenarios with little data
- All the pretrained models can be loaded through transformers, e.g. the MacBERT model via `--model_name hfl/chinese-macbert-base` or the RoBERTa model via `--model_name uer/roberta-medium-wwm-chinese-cluecorpussmall`
- To gauge robustness, the evaluation includes the held-out SOHU test sets, which no model was trained on, to test generalization; to work well out of the box, the released models were trained on all the collected Chinese matching datasets, which are also uploaded to HF datasets ([links below](#dataset))
- Experiments on Chinese matching tasks show the best pooling choices are `EncoderType.FIRST_LAST_AVG` and `EncoderType.MEAN`; the two give nearly identical predictions (see the sketch after this list)
- To reproduce the Chinese matching evaluation, download the Chinese matching datasets into `examples/data` and run [tests/model_spearman.py](https://github.com/shibing624/text2vec/blob/master/tests/model_spearman.py)
- The QPS figures were measured on a Tesla V100 GPU with 32GB of memory
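
A minimal sketch of selecting the pooling strategy, assuming `SentenceModel` accepts an `encoder_type` argument and that the package exports the `EncoderType` enum:

```python
from text2vec import SentenceModel, EncoderType

# FIRST_LAST_AVG and MEAN performed best in the Chinese matching experiments
model = SentenceModel("shibing624/text2vec-base-chinese",
                      encoder_type=EncoderType.FIRST_LAST_AVG)
print(model.encode("如何更换花呗绑定银行卡").shape)  # (768,)
```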

Model training experiment report: [docs/model_report.md](https://github.com/shibing624/text2vec/blob/master/docs/model_report.md)
## Demo

Official Demo: https://www.mulanai.com/product/short_text_sim/

HuggingFace Demo: https://huggingface.co/spaces/shibing624/text2vec

![](docs/hf.png)

Run the example [examples/gradio_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/gradio_demo.py) to see the demo:
```shell
python examples/gradio_demo.py
```

## Install
```shell
pip install torch # conda install pytorch
pip install -U text2vec
```

or

```shell
pip install torch # conda install pytorch
pip install -r requirements.txt

git clone https://github.com/shibing624/text2vec.git
cd text2vec
pip install --no-deps .
```

## Usage

### Text embeddings

Compute text embeddings with a pretrained model:

```python
>>> from text2vec import SentenceModel
>>> m = SentenceModel()
>>> m.encode("如何更换花呗绑定银行卡")
Embedding shape: (768,)
```

example: [examples/computing_embeddings_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/computing_embeddings_demo.py)

```python
import sys

sys.path.append('..')
from text2vec import SentenceModel
from text2vec import Word2Vec


def compute_emb(model):
    # Embed a list of sentences
    sentences = [
        '卡',
        '银行卡',
        '如何更换花呗绑定银行卡',
        '花呗更改绑定银行卡',
        'This framework generates embeddings for each input sentence',
        'Sentences are passed as a list of string.',
        'The quick brown fox jumps over the lazy dog.'
    ]
    sentence_embeddings = model.encode(sentences)
    print(type(sentence_embeddings), sentence_embeddings.shape)

    # The result is a list of sentence embeddings as numpy arrays
    for sentence, embedding in zip(sentences, sentence_embeddings):
        print("Sentence:", sentence)
        print("Embedding shape:", embedding.shape)
        print("Embedding head:", embedding[:10])
        print()


if __name__ == "__main__":
    # Chinese sentence embedding model (CoSENT), recommended for Chinese semantic matching; supports further fine-tuning
    t2v_model = SentenceModel("shibing624/text2vec-base-chinese")
    compute_emb(t2v_model)

    # Multilingual sentence embedding model (CoSENT), recommended for multilingual (incl. Chinese and English) semantic matching; supports further fine-tuning
    sbert_model = SentenceModel("shibing624/text2vec-base-multilingual")
    compute_emb(sbert_model)

    # Chinese word embedding model (word2vec), for literal Chinese matching and cold-start scenarios
    w2v_model = Word2Vec("w2v-light-tencent-chinese")
    compute_emb(w2v_model)

```

output:
```
<class 'numpy.ndarray'> (7, 768)
Sentence: 卡
Embedding shape: (768,)

Sentence: 银行卡
Embedding shape: (768,)
 ... 
```

- The returned `embeddings` is a `numpy.ndarray` with shape `(sentences_size, model_embedding_size)`. Any one of the three models will do; the first is recommended.
- The `shibing624/text2vec-base-chinese` model is a CoSENT model trained on the Chinese STS-B dataset. It is uploaded to the HuggingFace model hub ([shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese)) and is the default model of `text2vec.SentenceModel`. Call it as in the example above, or through the [transformers library](https://github.com/huggingface/transformers) as shown below; the model downloads automatically to `~/.cache/huggingface/transformers`
- `w2v-light-tencent-chinese` is a Word2Vec model loaded with gensim. It looks up word vectors from the Tencent embeddings `Tencent_AILab_ChineseEmbedding.tar.gz`, and the sentence vector is the average of the word vectors (see the sketch after this list). The model downloads automatically to `~/.text2vec/datasets/light_Tencent_AILab_ChineseEmbedding.bin`
- `text2vec` supports multi-GPU inference (computing embeddings): [examples/computing_embeddings_multi_gpu_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/computing_embeddings_multi_gpu_demo.py)
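
For illustration, a minimal sketch of the word-vector averaging that produces the word2vec sentence embedding, assuming gensim for loading the vectors and jieba for tokenization (the `Word2Vec` class wraps equivalent logic):

```python
import os

import jieba
import numpy as np
from gensim.models import KeyedVectors

# path where text2vec stores the light Tencent embeddings
path = os.path.expanduser("~/.text2vec/datasets/light_Tencent_AILab_ChineseEmbedding.bin")
w2v = KeyedVectors.load_word2vec_format(path, binary=True)

def sentence_embedding(text):
    # average the vectors of in-vocabulary tokens; zeros if none match
    vecs = [w2v[w] for w in jieba.cut(text) if w in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

print(sentence_embedding("如何更换花呗绑定银行卡").shape)  # (200,)
```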

#### Usage (HuggingFace Transformers)
Without [text2vec](https://github.com/shibing624/text2vec), you can use the model like this: 

First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.

example: [examples/use_origin_transformers_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/use_origin_transformers_demo.py)

```python
import os
import torch
from transformers import AutoTokenizer, AutoModel

os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('shibing624/text2vec-base-chinese')
model = AutoModel.from_pretrained('shibing624/text2vec-base-chinese')
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```

#### Usage (sentence-transformers)
[sentence-transformers](https://github.com/UKPLab/sentence-transformers) is a popular library to compute dense vector representations for sentences.

Install sentence-transformers:
```shell
pip install -U sentence-transformers
```
Then load model and predict:
```python
from sentence_transformers import SentenceTransformer

m = SentenceTransformer("shibing624/text2vec-base-chinese")
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

sentence_embeddings = m.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```

#### `Word2Vec` word vectors

Two `Word2Vec` word-vector files are provided; pick either:

  - Light Tencent word vectors: [Baidu Pan (code: tawe)](https://pan.baidu.com/s/1La4U4XNFe8s5BJqxPQpeiQ) or [Google Drive](https://drive.google.com/u/0/uc?id=1iQo9tBb2NgFOBxx0fA16AZpSgc-bG_Rp&export=download). A 111MB binary file covering the 143,613 most frequent words, each still a 200-dimensional vector (as in the original). It is downloaded automatically at runtime to `~/.text2vec/datasets/light_Tencent_AILab_ChineseEmbedding.bin`
  - Full official Tencent word vectors: 6.78GB, placed at `~/.text2vec/datasets/Tencent_AILab_ChineseEmbedding.txt`. Tencent embeddings home page: https://ai.tencent.com/ailab/nlp/zh/index.html, download: https://ai.tencent.com/ailab/nlp/en/download.html. See the [Tencent word embeddings wiki](https://github.com/shibing624/text2vec/wiki/%E8%85%BE%E8%AE%AF%E8%AF%8D%E5%90%91%E9%87%8F%E4%BB%8B%E7%BB%8D) for more


### Command line interface (CLI)

Batch-compute text embeddings from the command line.

code: [cli.py](https://github.com/shibing624/text2vec/blob/master/text2vec/cli.py)

```
> text2vec -h                                    
usage: text2vec [-h] --input_file INPUT_FILE [--output_file OUTPUT_FILE] [--model_type MODEL_TYPE] [--model_name MODEL_NAME] [--encoder_type ENCODER_TYPE]
                [--batch_size BATCH_SIZE] [--max_seq_length MAX_SEQ_LENGTH] [--chunk_size CHUNK_SIZE] [--device DEVICE]
                [--show_progress_bar SHOW_PROGRESS_BAR] [--normalize_embeddings NORMALIZE_EMBEDDINGS]

text2vec cli

optional arguments:
  -h, --help            show this help message and exit
  --input_file INPUT_FILE
                        input file path, text file, required
  --output_file OUTPUT_FILE
                        output file path, output csv file, default text_embs.csv
  --model_type MODEL_TYPE
                        model type: sentencemodel, word2vec, default sentencemodel
  --model_name MODEL_NAME
                        model name or path, default shibing624/text2vec-base-chinese
  --encoder_type ENCODER_TYPE
                        encoder type: MEAN, CLS, POOLER, FIRST_LAST_AVG, LAST_AVG, default MEAN
  --batch_size BATCH_SIZE
                        batch size, default 32
  --max_seq_length MAX_SEQ_LENGTH
                        max sequence length, default 256
  --chunk_size CHUNK_SIZE
                        chunk size to save partial results, default 1000
  --device DEVICE       device: cpu, cuda, default None
  --show_progress_bar SHOW_PROGRESS_BAR
                        show progress bar, default True
  --normalize_embeddings NORMALIZE_EMBEDDINGS
                        normalize embeddings, default False
  --multi_gpu MULTI_GPU
                        multi gpu, default False
```

run:

```shell
pip install text2vec -U
text2vec --input_file input.txt --output_file out.csv --batch_size 128 --multi_gpu True
```

> Input file (required): `input.txt`

## Downstream tasks
### 1. Sentence similarity computation

example: [examples/semantic_text_similarity_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/semantic_text_similarity_demo.py)

```python
import sys

sys.path.append('..')
from text2vec import Similarity

# Two lists of sentences
sentences1 = ['如何更换花呗绑定银行卡',
              'The cat sits outside',
              'A man is playing guitar',
              'The new movie is awesome']

sentences2 = ['花呗更改绑定银行卡',
              'The dog plays in the garden',
              'A woman watches TV',
              'The new movie is so great']

sim_model = Similarity()
for i in range(len(sentences1)):
    for j in range(len(sentences2)):
        score = sim_model.get_score(sentences1[i], sentences2[j])
        print("{} \t\t {} \t\t Score: {:.4f}".format(sentences1[i], sentences2[j], score))
```

output:
```shell
如何更换花呗绑定银行卡 		 花呗更改绑定银行卡 		 Score: 0.9477
如何更换花呗绑定银行卡 		 The dog plays in the garden 		 Score: -0.1748
如何更换花呗绑定银行卡 		 A woman watches TV 		 Score: -0.0839
如何更换花呗绑定银行卡 		 The new movie is so great 		 Score: -0.0044
The cat sits outside 		 花呗更改绑定银行卡 		 Score: -0.0097
The cat sits outside 		 The dog plays in the garden 		 Score: 0.1908
The cat sits outside 		 A woman watches TV 		 Score: -0.0203
The cat sits outside 		 The new movie is so great 		 Score: 0.0302
A man is playing guitar 		 花呗更改绑定银行卡 		 Score: -0.0010
A man is playing guitar 		 The dog plays in the garden 		 Score: 0.1062
A man is playing guitar 		 A woman watches TV 		 Score: 0.0055
A man is playing guitar 		 The new movie is so great 		 Score: 0.0097
The new movie is awesome 		 花呗更改绑定银行卡 		 Score: 0.0302
The new movie is awesome 		 The dog plays in the garden 		 Score: -0.0160
The new movie is awesome 		 A woman watches TV 		 Score: 0.1321
The new movie is awesome 		 The new movie is so great 		 Score: 0.9591
```

> The sentence cosine similarity `score` lies in [-1, 1]; higher means more similar.
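
The score is the cosine of the two sentence embeddings; a short sketch computing it directly with the `cos_sim` helper (assuming, as in the semantic search example below, that it accepts single vectors):

```python
from text2vec import SentenceModel, cos_sim

model = SentenceModel()
emb1 = model.encode("如何更换花呗绑定银行卡")
emb2 = model.encode("花呗更改绑定银行卡")
print(float(cos_sim(emb1, emb2)))  # ~0.95, matching the first score above
```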

### 2. Text matching and search

Find the texts in a candidate corpus most similar to a query; commonly used for question matching in QA and similar-text retrieval.


example: [examples/semantic_search_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/semantic_search_demo.py)

```python
import sys

sys.path.append('..')
from text2vec import SentenceModel, cos_sim, semantic_search

embedder = SentenceModel()

# Corpus with example sentences
corpus = [
    '花呗更改绑定银行卡',
    '我什么时候开通了花呗',
    'A man is eating food.',
    'A man is eating a piece of bread.',
    'The girl is carrying a baby.',
    'A man is riding a horse.',
    'A woman is playing violin.',
    'Two men pushed carts through the woods.',
    'A man is riding a white horse on an enclosed ground.',
    'A monkey is playing drums.',
    'A cheetah is running behind its prey.'
]
corpus_embeddings = embedder.encode(corpus)

# Query sentences:
queries = [
    '如何更换花呗绑定银行卡',
    'A man is eating pasta.',
    'Someone in a gorilla costume is playing a set of drums.',
    'A cheetah chases prey on across a field.']

for query in queries:
    query_embedding = embedder.encode(query)
    hits = semantic_search(query_embedding, corpus_embeddings, top_k=5)
    print("\n\n======================\n\n")
    print("Query:", query)
    print("\nTop 5 most similar sentences in corpus:")
    hits = hits[0]  # Get the hits for the first query
    for hit in hits:
        print(corpus[hit['corpus_id']], "(Score: {:.4f})".format(hit['score']))
```
output:
```shell
Query: 如何更换花呗绑定银行卡
Top 5 most similar sentences in corpus:
花呗更改绑定银行卡 (Score: 0.9477)
我什么时候开通了花呗 (Score: 0.3635)
A man is eating food. (Score: 0.0321)
A man is riding a horse. (Score: 0.0228)
Two men pushed carts through the woods. (Score: 0.0090)

======================
Query: A man is eating pasta.
Top 5 most similar sentences in corpus:
A man is eating food. (Score: 0.6734)
A man is eating a piece of bread. (Score: 0.4269)
A man is riding a horse. (Score: 0.2086)
A man is riding a white horse on an enclosed ground. (Score: 0.1020)
A cheetah is running behind its prey. (Score: 0.0566)

======================
Query: Someone in a gorilla costume is playing a set of drums.
Top 5 most similar sentences in corpus:
A monkey is playing drums. (Score: 0.8167)
A cheetah is running behind its prey. (Score: 0.2720)
A woman is playing violin. (Score: 0.1721)
A man is riding a horse. (Score: 0.1291)
A man is riding a white horse on an enclosed ground. (Score: 0.1213)

======================
Query: A cheetah chases prey on across a field.
Top 5 most similar sentences in corpus:
A cheetah is running behind its prey. (Score: 0.9147)
A monkey is playing drums. (Score: 0.2655)
A man is riding a horse. (Score: 0.1933)
A man is riding a white horse on an enclosed ground. (Score: 0.1733)
A man is eating food. (Score: 0.0329)
```

 

## Companion library for downstream tasks
**similarities library [recommended]**

For text similarity computation and text matching/search tasks, we recommend the [similarities library](https://github.com/shibing624/similarities). It is compatible with the Word2Vec, SBERT, and CoSENT semantic matching models released by this project, and also supports literal similarity computation and matching/search algorithms, for both text and images.

Install:
```pip install -U similarities```

Compute sentence similarity:
```python
from similarities import Similarity

m = Similarity()
r = m.similarity('如何更换花呗绑定银行卡', '花呗更改绑定银行卡')
print(f"similarity score: {float(r)}")  # similarity score: 0.855146050453186
```

## Models

### CoSENT model

CoSENT (Cosine Sentence) is a text matching model that improves on Sentence-BERT's sentence-embedding scheme with a cosine ranking loss (CosineRankLoss)
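
For intuition, a minimal PyTorch sketch of the CoSENT ranking loss described in the [CoSENT post](https://kexue.fm/archives/8847); this is an illustration under assumed conventions (scale 20 as the temperature), not the repo's exact implementation:

```python
import torch

def cosent_loss(cos_scores, labels, scale=20.0):
    """cos_scores: (batch,) cosine similarities of sentence pairs;
    labels: (batch,) similarity labels. For every pair (i, j) with
    labels[i] > labels[j], penalize cos_scores[j] exceeding cos_scores[i]:
    loss = log(1 + sum(exp(scale * (cos_j - cos_i)))).
    """
    scores = cos_scores * scale
    diff = scores[None, :] - scores[:, None]   # diff[i, j] = cos_j - cos_i
    mask = labels[:, None] > labels[None, :]   # pairs where i should outrank j
    zero = torch.zeros(1, dtype=scores.dtype, device=scores.device)
    # prepend 0 so the result is log(1 + sum(exp(...)))
    return torch.logsumexp(torch.cat([zero, diff[mask]]), dim=0)
```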


Network structure:

Training:

<img src="docs/cosent_train.png" width="300" />


Inference:

<img src="docs/inference.png" width="300" />

#### CoSENT supervised model
Train and predict with a CoSENT model:

- Train and evaluate a `CoSENT` model on the Chinese STS-B dataset

example: [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py)

```shell
cd examples
python training_sup_text_matching_model.py --model_arch cosent --do_train --do_predict --num_epochs 10 --model_name hfl/chinese-macbert-base --output_dir ./outputs/STS-B-cosent
```

- Train and evaluate a `CoSENT` model on the Ant Financial matching dataset ATEC

The following Chinese matching datasets are supported: 'ATEC', 'STS-B', 'BQ', 'LCQMC', 'PAWSX'; see the HuggingFace dataset [https://huggingface.co/datasets/shibing624/nli_zh](https://huggingface.co/datasets/shibing624/nli_zh) for details
```shell
python training_sup_text_matching_model.py --task_name ATEC --model_arch cosent --do_train --do_predict --num_epochs 10 --model_name hfl/chinese-macbert-base --output_dir ./outputs/ATEC-cosent
```

- Train a model on your own Chinese dataset

example: [examples/training_sup_text_matching_model_mydata.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_mydata.py)

Single-GPU training:
```shell
CUDA_VISIBLE_DEVICES=0 python training_sup_text_matching_model_mydata.py --do_train --do_predict
```

Multi-GPU training:
```shell
CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node 2  training_sup_text_matching_model_mydata.py --do_train --do_predict --output_dir outputs/STS-B-text2vec-macbert-v1 --batch_size 64 --bf16 --data_parallel 
```

For the training set format, see [examples/data/STS-B/STS-B.valid.data](https://github.com/shibing624/text2vec/blob/master/examples/data/STS-B/STS-B.valid.data):

```shell
sentence1   sentence2   label
一个女孩在给她的头发做发型。	一个女孩在梳头。	2
一群男人在海滩上踢足球。	一群男孩在海滩上踢足球。	3
一个女人在测量另一个女人的脚踝。	女人测量另一个女人的脚踝。	5
```

`label` can be a 0/1 label, where 0 means the two sentences are dissimilar and 1 means similar, or a 0-5 score, where a higher score means more similar. Both formats are supported.
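
A quick sketch of loading this tab-separated format with pandas (an assumption; the snippet above shows a header row, so `read_csv` picks up the column names):

```python
import pandas as pd

# pass names=["sentence1", "sentence2", "label"] instead if your file has no header
df = pd.read_csv("examples/data/STS-B/STS-B.valid.data", sep="\t")
print(df.head(3))
```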


- Train and evaluate a `CoSENT` model on the English STS-B dataset

example: [examples/training_sup_text_matching_model_en.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_en.py)

```shell
cd examples
python training_sup_text_matching_model_en.py --model_arch cosent --do_train --do_predict --num_epochs 10 --model_name bert-base-uncased  --output_dir ./outputs/STS-B-en-cosent
```

#### CoSENT unsupervised model
- Train a `CoSENT` model on the English NLI dataset and evaluate it on the STS-B test set

example: [examples/training_unsup_text_matching_model_en.py](https://github.com/shibing624/text2vec/blob/master/examples/training_unsup_text_matching_model_en.py)

```shell
cd examples
python training_unsup_text_matching_model_en.py --model_arch cosent --do_train --do_predict --num_epochs 10 --model_name bert-base-uncased --output_dir ./outputs/STS-B-en-unsup-cosent
```


### Sentence-BERT model

Sentence-BERT text matching model: a representation-based sentence embedding approach

Network structure:

Training:

<img src="docs/sbert_train.png" width="300" />


Inference:

<img src="docs/sbert_inference.png" width="300" />

#### SentenceBERT supervised model
- Train and evaluate an `SBERT` model on the Chinese STS-B dataset

example: [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py)

```shell
cd examples
python training_sup_text_matching_model.py --model_arch sentencebert --do_train --do_predict --num_epochs 10 --model_name hfl/chinese-macbert-base --output_dir ./outputs/STS-B-sbert
```
- Train and evaluate an `SBERT` model on the English STS-B dataset

example: [examples/training_sup_text_matching_model_en.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_en.py)

```shell
cd examples
python training_sup_text_matching_model_en.py --model_arch sentencebert --do_train --do_predict --num_epochs 10 --model_name bert-base-uncased --output_dir ./outputs/STS-B-en-sbert
```

#### SentenceBERT unsupervised model
- Train an `SBERT` model on the English NLI dataset and evaluate it on the STS-B test set

example: [examples/training_unsup_text_matching_model_en.py](https://github.com/shibing624/text2vec/blob/master/examples/training_unsup_text_matching_model_en.py)

```shell
cd examples
python training_unsup_text_matching_model_en.py --model_arch sentencebert --do_train --do_predict --num_epochs 10 --model_name bert-base-uncased --output_dir ./outputs/STS-B-en-unsup-sbert
```

### BERT-Match model
BERT text matching model: the native BERT matching network, an interaction-based sentence matching model

Network structure:

Training and inference:

<img src="docs/bert-fc-train.png" width="300" />

The training script is the same as above: [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py).



### BGE model

#### BGE supervised model
- Train and evaluate a `BGE` model on the Chinese STS-B dataset

example: [examples/training_bge_model_mydata.py](https://github.com/shibing624/text2vec/blob/master/examples/training_bge_model_mydata.py)

```shell
cd examples
python training_bge_model_mydata.py --model_arch bge --do_train --do_predict --num_epochs 4 --output_dir ./outputs/STS-B-bge-v1 --batch_size 4 --save_model_every_epoch --bf16
```

- Build your own BGE training set

BGE fine-tuning uses contrastive learning; each training example is a triple `(query, positive, negative)`

```shell
cd examples/data
python build_zh_bge_dataset.py
python hard_negatives_mine.py
```
1. `build_zh_bge_dataset.py` builds the triplet training set from the Chinese STS-B data, in this format:
```jsonl
{"query":"一个男人正在往锅里倒油。","pos":["一个男人正在往锅里倒油。"],"neg":["亲俄军队进入克里米亚乌克兰海军基地","配有木制家具的优雅餐厅。","马雅瓦蒂要求总统统治查谟和克什米尔","非典还夺去了多伦多地区44人的生命,其中包括两名护士和一名医生。","在一次采访中,身为犯罪学家的希利说,这里和全国各地的许多议员都对死刑抱有戒心。","豚鼠吃胡萝卜。","狗嘴里叼着一根棍子在水中游泳。","拉里·佩奇说Android很重要,不是关键","法国、比利时、德国、瑞典、意大利和英国为印度计划向缅甸出售的先进轻型直升机提供零部件和技术。","巴林赛马会在动乱中进行"]}
```
2. `hard_negatives_mine.py` mines hard negatives using faiss similarity search.
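
A minimal sketch of that mining step, with hypothetical inputs and helper name (the script's actual interface may differ): embed the queries and corpus, search with faiss, and keep near-but-not-top hits as hard negatives.

```python
import faiss
import numpy as np

def mine_hard_negatives(query_embs, corpus_embs, corpus_texts, top_k=10, skip_top=1):
    query_embs = np.ascontiguousarray(query_embs, dtype="float32")
    corpus_embs = np.ascontiguousarray(corpus_embs, dtype="float32")
    # normalize so inner product equals cosine similarity
    faiss.normalize_L2(query_embs)
    faiss.normalize_L2(corpus_embs)
    index = faiss.IndexFlatIP(corpus_embs.shape[1])
    index.add(corpus_embs)
    _, ids = index.search(query_embs, top_k + skip_top)
    # skip the top hit(s), which are usually the positives themselves
    return [[corpus_texts[i] for i in row[skip_top:]] for row in ids]
```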


### Model distillation

Since models trained with text2vec can be loaded by the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library, its [distillation](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/distillation) recipes can be reused here.

1. Dimensionality reduction: following [dimensionality_reduction.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/distillation/dimensionality_reduction.py), apply PCA to the output embeddings. This reduces storage pressure on vector databases such as Milvus and can even slightly improve accuracy (see the sketch after this list).
2. Model distillation: following [model_distillation.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/distillation/model_distillation.py), distill a large teacher model into a student with fewer layers, trading a little accuracy for a large speedup at prediction time.
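
A minimal sketch of step 1 using scikit-learn's PCA on a toy corpus (an assumption for brevity; the linked script instead feeds the PCA components into a sentence-transformers Dense layer):

```python
from sklearn.decomposition import PCA
from text2vec import SentenceModel

model = SentenceModel("shibing624/text2vec-base-chinese")
corpus = ["如何更换花呗绑定银行卡", "花呗更改绑定银行卡",
          "我什么时候开通了花呗", "一个女孩在梳头。"]
embs = model.encode(corpus)            # (4, 768)
# in practice, fit on thousands of sentences and keep ~128-256 components;
# 2 is used here only so this toy corpus runs
pca = PCA(n_components=2).fit(embs)
print(pca.transform(embs).shape)       # (4, 2)
```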

### Model deployment

Two ways to deploy a model as a service: 1) a gRPC service built on Jina (recommended); 2) a plain HTTP service built on FastAPI.

#### Jina service
A client/server setup for high-performance serving: cloud-native with Docker, gRPC/HTTP/WebSocket support, multiple models served concurrently, and multi-GPU processing.

- Install:
```pip install jina```

- Start the service:

example: [examples/jina_server_demo.py](examples/jina_server_demo.py)
```python
from jina import Flow

port = 50001
f = Flow(port=port).add(
    uses='jinahub://Text2vecEncoder',
    uses_with={'model_name': 'shibing624/text2vec-base-chinese'}
)

with f:
    # block to serve requests forever
    f.block()
```

The prediction executor has been uploaded to [JinaHub](https://hub.jina.ai/executor/eq45c9uq), which includes Docker and Kubernetes deployment instructions.

- Call the service:


```python
from jina import Client
from docarray import Document, DocumentArray

port = 50001

c = Client(port=port)

data = ['如何更换花呗绑定银行卡',
        '花呗更改绑定银行卡']
print("data:", data)
print('data embs:')
r = c.post('/', inputs=DocumentArray([Document(text=t) for t in data]))
print(r.embeddings)
```

For batch calls, see the example: [examples/jina_client_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/jina_client_demo.py)


#### FastAPI服务

- Install:
```pip install fastapi uvicorn```

- Start the service:

example: [examples/fastapi_server_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/fastapi_server_demo.py)
```shell
cd examples
python fastapi_server_demo.py
```

- Call the service:
```shell
curl -X 'GET' \
  'http://0.0.0.0:8001/emb?q=hello' \
  -H 'accept: application/json'
```


## Dataset

- Datasets released by this project:

| Dataset                    | Description                                                                | Download Link                                                                                                                                                                                                                                                                                         |
|:---------------------------|:-------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| shibing624/nli-zh-all      | Chinese semantic matching collection: 8.2M high-quality examples merged from NLI, similarity, summarization, QA, and instruction-tuning tasks, converted to the matching format | [https://huggingface.co/datasets/shibing624/nli-zh-all](https://huggingface.co/datasets/shibing624/nli-zh-all)                                                                                                                                                                                        |
| shibing624/snli-zh         | Chinese SNLI and MultiNLI, translated from the English SNLI and MultiNLI  | [https://huggingface.co/datasets/shibing624/snli-zh](https://huggingface.co/datasets/shibing624/snli-zh)                                                                                                                                                                                              |
| shibing624/nli_zh          | Chinese semantic matching collection merging five tasks: ATEC, BQ, LCQMC, PAWSX, and STS-B | [https://huggingface.co/datasets/shibing624/nli_zh](https://huggingface.co/datasets/shibing624/nli_zh) </br> or </br> [Baidu Pan (code: qkt6)](https://pan.baidu.com/s/1d6jSiU1wHQAEMWJi7JJWCQ) </br> or </br> [github](https://github.com/shibing624/text2vec/releases/download/1.1.2/senteval_cn.zip) </br> |
| shibing624/sts-sohu2021    | Chinese semantic matching dataset from the 2021 Sohu campus text matching competition | [https://huggingface.co/datasets/shibing624/sts-sohu2021](https://huggingface.co/datasets/shibing624/sts-sohu2021)                                                                                                                                                                                    |
| ATEC                       | Chinese ATEC dataset, Ant Financial question-question pairs               | [ATEC](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC)                                                                                                                                                                                                                                 |
| BQ                         | Chinese BQ (Bank Question) dataset, banking question-question pairs       | [BQ](http://icrc.hitsz.edu.cn/info/1037/1162.htm)                                                                                                                                                                                                                                                     |
| LCQMC                      | Chinese LCQMC (Large-scale Chinese Question Matching Corpus), question-question pairs | [LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html)                                                                                                                                                                                                                                               |
| PAWSX                      | Chinese PAWS (Paraphrase Adversaries from Word Scrambling) dataset, question-question pairs | [PAWSX](https://arxiv.org/abs/1908.11828)                                                                                                                                                                                                                                                             |
| STS-B                      | Chinese STS-B dataset, translated to Chinese from the English STS-B       | [STS-B](https://github.com/pluto-junzeng/CNSD)                                                                                                                                                                                                                                                        |


Common English matching datasets:

- English matching dataset multi_nli: https://huggingface.co/datasets/multi_nli
- English matching dataset snli: https://huggingface.co/datasets/snli
- https://huggingface.co/datasets/metaeval/cnli
- https://huggingface.co/datasets/mteb/stsbenchmark-sts
- https://huggingface.co/datasets/JeremiahZ/simcse_sup_nli
- https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7


Dataset usage example:
```shell
pip install datasets
```

```python
from datasets import load_dataset

dataset = load_dataset("shibing624/nli_zh", "STS-B") # ATEC or BQ or LCQMC or PAWSX or STS-B
print(dataset)
print(dataset['test'][0])
```

output:
```shell
DatasetDict({
    train: Dataset({
        features: ['sentence1', 'sentence2', 'label'],
        num_rows: 5231
    })
    validation: Dataset({
        features: ['sentence1', 'sentence2', 'label'],
        num_rows: 1458
    })
    test: Dataset({
        features: ['sentence1', 'sentence2', 'label'],
        num_rows: 1361
    })
})
{'sentence1': '一个女孩在给她的头发做发型。', 'sentence2': '一个女孩在梳头。', 'label': 2}
```





## Contact

- Issues (suggestions): [![GitHub issues](https://img.shields.io/github/issues/shibing624/text2vec.svg)](https://github.com/shibing624/text2vec/issues)
- Email: xuming624@qq.com
- WeChat: add *WeChat ID: xuming624, note: name-company-NLP* to join the NLP discussion group.

<img src="docs/wechat.jpeg" width="200" />


## Citation

If you use text2vec in your research, please cite it in the following format:

APA:
```
Xu, M. Text2vec: Text to vector toolkit (Version 1.1.2) [Computer software]. https://github.com/shibing624/text2vec
```

BibTeX:
```latex
@misc{Text2vec,
  author = {Ming Xu},
  title = {Text2vec: Text to vector toolkit},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/shibing624/text2vec}},
}
```

## License


The license is [The Apache License 2.0](LICENSE), free for commercial use. Please include a link to text2vec and the license in your product description.


## Contribute
The project code is still rough. If you improve it, contributions back to this project are welcome. Before submitting a PR, please:

 - Add corresponding unit tests in `tests`
 - Run `python -m pytest -v` and make sure all unit tests pass

Then submit your PR.

## References
- [Representing sentences as vectors (part 1): unsupervised sentence representation learning (sentence embedding)](https://www.cnblogs.com/llhthinker/p/10335164.html)
- [Representing sentences as vectors (part 2): unsupervised sentence representation learning (sentence embedding)](https://www.cnblogs.com/llhthinker/p/10341841.html)
- [A Simple but Tough-to-Beat Baseline for Sentence Embeddings [Sanjeev Arora and Yingyu Liang and Tengyu Ma, 2017]](https://openreview.net/forum?id=SyK00v5xx)
- [A comparison of four methods for computing text similarity [Yves Peirsman]](https://zhuanlan.zhihu.com/p/37104535)
- [Improvements to BM25 and Language Models Examined](http://www.cs.otago.ac.nz/homepages/andrew/papers/2014-2.pdf)
- [CoSENT: a sentence embedding scheme more effective than Sentence-BERT](https://kexue.fm/archives/8847)
- [On text matching and multi-turn retrieval](https://zhuanlan.zhihu.com/p/111769969)
- [Sentence-transformers](https://www.sbert.net/examples/applications/computing-embeddings/README.html)
- [One Embedder, Any Task: Instruction-Finetuned Text Embeddings](https://arxiv.org/abs/2212.09741)
            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/shibing624/text2vec",
    "name": "text2vec",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.6.0",
    "maintainer_email": "",
    "keywords": "word embedding,text2vec,Chinese Text Similarity Calculation Tool,similarity,word2vec",
    "author": "XuMing",
    "author_email": "xuming624@qq.com",
    "download_url": "https://files.pythonhosted.org/packages/53/76/1431fff7d01aad17d6be40e2ac7275173585fe1e87fa4a350535c8d918f0/text2vec-1.2.9.tar.gz",
    "platform": null,
    "description": "[**\ud83c\udde8\ud83c\uddf3\u4e2d\u6587**](https://github.com/shibing624/text2vec/blob/master/README.md) | [**\ud83c\udf10English**](https://github.com/shibing624/text2vec/blob/master/README_EN.md) | [**\ud83d\udcd6\u6587\u6863/Docs**](https://github.com/shibing624/text2vec/wiki) | [**\ud83e\udd16\u6a21\u578b/Models**](https://huggingface.co/shibing624) \n\n<div align=\"center\">\n  <a href=\"https://github.com/shibing624/text2vec\">\n    <img src=\"https://github.com/shibing624/text2vec/blob/master/docs/t2v-logo.png\" height=\"150\" alt=\"Logo\">\n  </a>\n</div>\n\n-----------------\n\n# Text2vec: Text to Vector\n[![PyPI version](https://badge.fury.io/py/text2vec.svg)](https://badge.fury.io/py/text2vec)\n[![Downloads](https://static.pepy.tech/badge/text2vec)](https://pepy.tech/project/text2vec)\n[![Contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg)](CONTRIBUTING.md)\n[![License Apache 2.0](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)\n[![python_version](https://img.shields.io/badge/Python-3.5%2B-green.svg)](requirements.txt)\n[![GitHub issues](https://img.shields.io/github/issues/shibing624/text2vec.svg)](https://github.com/shibing624/text2vec/issues)\n[![Wechat Group](http://vlog.sfyc.ltd/wechat_everyday/wxgroup_logo.png?imageView2/0/w/60/h/20)](#Contact)\n\n\n**Text2vec**: Text to Vector, Get Sentence Embeddings. \u6587\u672c\u5411\u91cf\u5316\uff0c\u628a\u6587\u672c(\u5305\u62ec\u8bcd\u3001\u53e5\u5b50\u3001\u6bb5\u843d)\u8868\u5f81\u4e3a\u5411\u91cf\u77e9\u9635\u3002\n\n**text2vec**\u5b9e\u73b0\u4e86Word2Vec\u3001RankBM25\u3001BERT\u3001Sentence-BERT\u3001CoSENT\u7b49\u591a\u79cd\u6587\u672c\u8868\u5f81\u3001\u6587\u672c\u76f8\u4f3c\u5ea6\u8ba1\u7b97\u6a21\u578b\uff0c\u5e76\u5728\u6587\u672c\u8bed\u4e49\u5339\u914d\uff08\u76f8\u4f3c\u5ea6\u8ba1\u7b97\uff09\u4efb\u52a1\u4e0a\u6bd4\u8f83\u4e86\u5404\u6a21\u578b\u7684\u6548\u679c\u3002\n\n### News\n[2023/09/19] v1.2.8\u7248\u672c: \u652f\u6301\u591a\u5361\u63a8\u7406\uff08\u591a\u8fdb\u7a0b\u5b9e\u73b0\u591aGPU\u548c\u591aCPU\u63a8\u7406\uff09\uff0c\u65b0\u589e\u547d\u4ee4\u884c\u5de5\u5177\uff08CLI\uff09\uff0c\u53ef\u4ee5\u65e0\u9700\u4ee3\u7801\u5f00\u53d1\u6279\u91cf\u83b7\u53d6\u6587\u672c\u5411\u91cf\uff0c\u8be6\u89c1[Release-v1.2.8](https://github.com/shibing624/text2vec/releases/tag/1.2.8)\n\n[2023/09/03] v1.2.4\u7248\u672c: \u652f\u6301FlagEmbedding\u6a21\u578b\u8bad\u7ec3\uff0c\u53d1\u5e03\u4e86\u4e2d\u6587\u5339\u914d\u6a21\u578b[shibing624/text2vec-bge-large-chinese](https://huggingface.co/shibing624/text2vec-bge-large-chinese)\uff0c\u7528CoSENT\u65b9\u6cd5\u76d1\u7763\u8bad\u7ec3\uff0c\u57fa\u4e8e`BAAI/bge-large-zh-noinstruct`\u7528\u4e2d\u6587\u5339\u914d\u6570\u636e\u96c6\u8bad\u7ec3\u5f97\u5230\uff0c\u5e76\u5728\u4e2d\u6587\u6d4b\u8bd5\u96c6\u8bc4\u4f30\u76f8\u5bf9\u4e8e\u539f\u6a21\u578b\u6548\u679c\u6709\u63d0\u5347\uff0c\u77ed\u6587\u672c\u533a\u5206\u5ea6\u4e0a\u63d0\u5347\u660e\u663e\uff0c\u8be6\u89c1[Release-v1.2.4](https://github.com/shibing624/text2vec/releases/tag/1.2.4)\n\n[2023/07/17] v1.2.2\u7248\u672c: 
\u652f\u6301\u591a\u5361\u8bad\u7ec3\uff0c\u53d1\u5e03\u4e86\u591a\u8bed\u8a00\u5339\u914d\u6a21\u578b[shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual)\uff0c\u7528CoSENT\u65b9\u6cd5\u8bad\u7ec3\uff0c\u57fa\u4e8e`sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`\u7528\u4eba\u5de5\u6311\u9009\u540e\u7684\u591a\u8bed\u8a00STS\u6570\u636e\u96c6[shibing624/nli-zh-all/text2vec-base-multilingual-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset)\u8bad\u7ec3\u5f97\u5230\uff0c\u5e76\u5728\u4e2d\u82f1\u6587\u6d4b\u8bd5\u96c6\u8bc4\u4f30\u76f8\u5bf9\u4e8e\u539f\u6a21\u578b\u6548\u679c\u6709\u63d0\u5347\uff0c\u8be6\u89c1[Release-v1.2.2](https://github.com/shibing624/text2vec/releases/tag/1.2.2)\n\n[2023/06/19] v1.2.1\u7248\u672c: \u66f4\u65b0\u4e86\u4e2d\u6587\u5339\u914d\u6a21\u578b`shibing624/text2vec-base-chinese-nli`\u4e3a\u65b0\u7248[shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence)\uff0c\u9488\u5bf9CoSENT\u7684loss\u8ba1\u7b97\u5bf9\u6392\u5e8f\u654f\u611f\u7279\u70b9\uff0c\u4eba\u5de5\u6311\u9009\u5e76\u6574\u7406\u51fa\u9ad8\u8d28\u91cf\u7684\u6709\u76f8\u5173\u6027\u6392\u5e8f\u7684STS\u6570\u636e\u96c6[shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset)\uff0c\u5728\u5404\u8bc4\u4f30\u96c6\u8868\u73b0\u76f8\u5bf9\u4e4b\u524d\u6709\u63d0\u5347\uff1b\u53d1\u5e03\u4e86\u9002\u7528\u4e8es2p\u7684\u4e2d\u6587\u5339\u914d\u6a21\u578b[shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase)\uff0c\u8be6\u89c1[Release-v1.2.1](https://github.com/shibing624/text2vec/releases/tag/1.2.1)\n\n[2023/06/15] v1.2.0\u7248\u672c: \u53d1\u5e03\u4e86\u4e2d\u6587\u5339\u914d\u6a21\u578b[shibing624/text2vec-base-chinese-nli](https://huggingface.co/shibing624/text2vec-base-chinese-nli)\uff0c\u57fa\u4e8e`nghuyong/ernie-3.0-base-zh`\u6a21\u578b\uff0c\u4f7f\u7528\u4e86\u4e2d\u6587NLI\u6570\u636e\u96c6[shibing624/nli_zh](https://huggingface.co/datasets/shibing624/nli_zh)\u5168\u90e8\u8bed\u6599\u8bad\u7ec3\u7684CoSENT\u6587\u672c\u5339\u914d\u6a21\u578b\uff0c\u5728\u5404\u8bc4\u4f30\u96c6\u8868\u73b0\u63d0\u5347\u660e\u663e\uff0c\u8be6\u89c1[Release-v1.2.0](https://github.com/shibing624/text2vec/releases/tag/1.2.0)\n\n[2022/03/12] v1.1.4\u7248\u672c: \u53d1\u5e03\u4e86\u4e2d\u6587\u5339\u914d\u6a21\u578b[shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese)\uff0c\u57fa\u4e8e\u4e2d\u6587STS\u8bad\u7ec3\u96c6\u8bad\u7ec3\u7684CoSENT\u5339\u914d\u6a21\u578b\u3002\u8be6\u89c1[Release-v1.1.4](https://github.com/shibing624/text2vec/releases/tag/1.1.4)\n\n\n**Guide**\n- [Features](#Features)\n- [Evaluation](#Evaluation)\n- [Install](#install)\n- [Usage](#usage)\n- [Contact](#Contact)\n- [References](#references)\n\n\n## Features\n### \u6587\u672c\u5411\u91cf\u8868\u793a\u6a21\u578b\n- [Word2Vec](https://github.com/shibing624/text2vec/blob/master/text2vec/word2vec.py)\uff1a\u901a\u8fc7\u817e\u8bafAI Lab\u5f00\u6e90\u7684\u5927\u89c4\u6a21\u9ad8\u8d28\u91cf\u4e2d\u6587[\u8bcd\u5411\u91cf\u6570\u636e\uff08800\u4e07\u4e2d\u6587\u8bcd\u8f7b\u91cf\u7248\uff09](https://pan.baidu.com/s/1La4U4XNFe8s5BJqxPQpeiQ) (\u6587\u4ef6\u540d\uff1alight_Tencent_AILab_ChineseEmbedding.bin \u5bc6\u7801: 
tawe\uff09\u5b9e\u73b0\u8bcd\u5411\u91cf\u68c0\u7d22\uff0c\u672c\u9879\u76ee\u5b9e\u73b0\u4e86\u53e5\u5b50\uff08\u8bcd\u5411\u91cf\u6c42\u5e73\u5747\uff09\u7684word2vec\u5411\u91cf\u8868\u793a\n- [SBERT(Sentence-BERT)](https://github.com/shibing624/text2vec/blob/master/text2vec/sentencebert_model.py)\uff1a\u6743\u8861\u6027\u80fd\u548c\u6548\u7387\u7684\u53e5\u5411\u91cf\u8868\u793a\u6a21\u578b\uff0c\u8bad\u7ec3\u65f6\u901a\u8fc7\u6709\u76d1\u7763\u8bad\u7ec3BERT\u548csoftmax\u5206\u7c7b\u51fd\u6570\uff0c\u6587\u672c\u5339\u914d\u9884\u6d4b\u65f6\u76f4\u63a5\u53d6\u53e5\u5b50\u5411\u91cf\u505a\u4f59\u5f26\uff0c\u53e5\u5b50\u8868\u5f81\u65b9\u6cd5\uff0c\u672c\u9879\u76ee\u57fa\u4e8ePyTorch\u590d\u73b0\u4e86Sentence-BERT\u6a21\u578b\u7684\u8bad\u7ec3\u548c\u9884\u6d4b\n- [CoSENT(Cosine Sentence)](https://github.com/shibing624/text2vec/blob/master/text2vec/cosent_model.py)\uff1aCoSENT\u6a21\u578b\u63d0\u51fa\u4e86\u4e00\u79cd\u6392\u5e8f\u7684\u635f\u5931\u51fd\u6570\uff0c\u4f7f\u8bad\u7ec3\u8fc7\u7a0b\u66f4\u8d34\u8fd1\u9884\u6d4b\uff0c\u6a21\u578b\u6536\u655b\u901f\u5ea6\u548c\u6548\u679c\u6bd4Sentence-BERT\u66f4\u597d\uff0c\u672c\u9879\u76ee\u57fa\u4e8ePyTorch\u5b9e\u73b0\u4e86CoSENT\u6a21\u578b\u7684\u8bad\u7ec3\u548c\u9884\u6d4b\n- [BGE(BAAI general embedding)](https://github.com/shibing624/text2vec/blob/master/text2vec/bge_model.py)\uff1aBGE\u6a21\u578b\u6309\u7167[retromae](https://github.com/staoxiao/RetroMAE)\u65b9\u6cd5\u8fdb\u884c\u9884\u8bad\u7ec3\uff0c[\u53c2\u8003\u8bba\u6587](https://aclanthology.org/2022.emnlp-main.35.pdf)\uff0c\u518d\u4f7f\u7528\u5bf9\u6bd4\u5b66\u4e60finetune\u5fae\u8c03\u8bad\u7ec3\u6a21\u578b\uff0c\u672c\u9879\u76ee\u57fa\u4e8ePyTorch\u5b9e\u73b0\u4e86BGE\u6a21\u578b\u7684\u5fae\u8c03\u8bad\u7ec3\u548c\u9884\u6d4b\n\n\n\u8be6\u7ec6\u6587\u672c\u5411\u91cf\u8868\u793a\u65b9\u6cd5\u89c1wiki: [\u6587\u672c\u5411\u91cf\u8868\u793a\u65b9\u6cd5](https://github.com/shibing624/text2vec/wiki/%E6%96%87%E6%9C%AC%E5%90%91%E9%87%8F%E8%A1%A8%E7%A4%BA%E6%96%B9%E6%B3%95)\n## Evaluation\n\n\u6587\u672c\u5339\u914d\n\n#### \u82f1\u6587\u5339\u914d\u6570\u636e\u96c6\u7684\u8bc4\u6d4b\u7ed3\u679c\uff1a\n\n\n| Arch   | BaseModel                                        | Model                                                                                                                | English-STS-B | \n|:-------|:------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------|:-------------:|\n| GloVe  | glove                                           | Avg_word_embeddings_glove_6B_300d                                                                                    |     61.77     |\n| BERT   | bert-base-uncased                               | BERT-base-cls                                                                                                        |     20.29     |\n| BERT   | bert-base-uncased                               | BERT-base-first_last_avg                                                                                             |     59.04     |\n| BERT   | bert-base-uncased                               | BERT-base-first_last_avg-whiten(NLI)                                                                                 |     63.65     |\n| SBERT  | sentence-transformers/bert-base-nli-mean-tokens | SBERT-base-nli-cls                                                                                                   |     73.65     |\n| SBERT  | 
sentence-transformers/bert-base-nli-mean-tokens | SBERT-base-nli-first_last_avg                                                                                        |     77.96     |\n| CoSENT | bert-base-uncased                               | CoSENT-base-first_last_avg                                                                                           |     69.93     |\n| CoSENT | sentence-transformers/bert-base-nli-mean-tokens | CoSENT-base-nli-first_last_avg                                                                                       |     79.68     |\n| CoSENT | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual)                |     80.12     |\n\n#### \u4e2d\u6587\u5339\u914d\u6570\u636e\u96c6\u7684\u8bc4\u6d4b\u7ed3\u679c\uff1a\n\n\n| Arch   | BaseModel                    | Model           | ATEC  |  BQ   | LCQMC | PAWSX | STS-B |  Avg  | \n|:-------|:----------------------------|:--------------------|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\n| SBERT  | bert-base-chinese           | SBERT-bert-base     | 46.36 | 70.36 | 78.72 | 46.86 | 66.41 | 61.74 |\n| SBERT  | hfl/chinese-macbert-base    | SBERT-macbert-base  | 47.28 | 68.63 | 79.42 | 55.59 | 64.82 | 63.15 |\n| SBERT  | hfl/chinese-roberta-wwm-ext | SBERT-roberta-ext   | 48.29 | 69.99 | 79.22 | 44.10 | 72.42 | 62.80 |\n| CoSENT | bert-base-chinese           | CoSENT-bert-base    | 49.74 | 72.38 | 78.69 | 60.00 | 79.27 | 68.01 |\n| CoSENT | hfl/chinese-macbert-base    | CoSENT-macbert-base | 50.39 | 72.93 | 79.17 | 60.86 | 79.30 | 68.53 |\n| CoSENT | hfl/chinese-roberta-wwm-ext | CoSENT-roberta-ext  | 50.81 | 71.45 | 79.31 | 61.56 | 79.96 | 68.61 |\n\n\u8bf4\u660e\uff1a\n- \u7ed3\u679c\u8bc4\u6d4b\u6307\u6807\uff1aspearman\u7cfb\u6570\n- \u4e3a\u8bc4\u6d4b\u6a21\u578b\u80fd\u529b\uff0c\u7ed3\u679c\u5747\u53ea\u7528\u8be5\u6570\u636e\u96c6\u7684train\u8bad\u7ec3\uff0c\u5728test\u4e0a\u8bc4\u4f30\u5f97\u5230\u7684\u8868\u73b0\uff0c\u6ca1\u7528\u5916\u90e8\u6570\u636e\n- `SBERT-macbert-base`\u6a21\u578b\uff0c\u662f\u7528SBert\u65b9\u6cd5\u8bad\u7ec3\uff0c\u8fd0\u884c[examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py)\u4ee3\u7801\u53ef\u8bad\u7ec3\u6a21\u578b\n- `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`\u6a21\u578b\u662f\u7528SBert\u8bad\u7ec3\uff0c\u662f`paraphrase-MiniLM-L12-v2`\u6a21\u578b\u7684\u591a\u8bed\u8a00\u7248\u672c\uff0c\u652f\u6301\u4e2d\u6587\u3001\u82f1\u6587\u7b49\n\n\n### Release Models\n- \u672c\u9879\u76eerelease\u6a21\u578b\u7684\u4e2d\u6587\u5339\u914d\u8bc4\u6d4b\u7ed3\u679c\uff1a\n\n| Arch       | BaseModel                                                   | Model                                                                                                                                             | ATEC  |  BQ   | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc |    Avg    |  QPS  |\n|:-----------|:------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----:|:-----:|:-----:|:-----:|:-----:|:-------:|:-------:|:---------:|:-----:|\n| Word2Vec   | word2vec                                                    | [w2v-light-tencent-chinese](https://ai.tencent.com/ailab/nlp/en/download.html)                                
### Release Models
- Chinese matching evaluation results for the models released by this project:

| Arch     | BaseModel                                                   | Model                                                                                                                                             | ATEC  |  BQ   | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc |    Avg    |  QPS  |
|:---------|:------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----:|:-----:|:-----:|:-----:|:-----:|:-------:|:-------:|:---------:|:-----:|
| Word2Vec | word2vec                                                    | [w2v-light-tencent-chinese](https://ai.tencent.com/ailab/nlp/en/download.html)                                                                    | 20.00 | 31.49 | 59.46 | 2.57  | 55.78 |  55.04  |  20.70  |   35.03   | 23769 |
| SBERT    | xlm-roberta-base                                            | [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 18.42 | 38.52 | 63.96 | 10.14 | 78.90 |  63.01  |  52.28  |   46.46   | 3138  |
| CoSENT   | hfl/chinese-macbert-base                                    | [shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese)                                                       | 31.93 | 42.67 | 70.16 | 17.21 | 79.30 |  70.27  |  50.42  |   51.61   | 3008  |
| CoSENT   | hfl/chinese-lert-large                                      | [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese)                                                   | 32.61 | 44.59 | 69.30 | 14.51 | 79.44 |  73.01  |  59.04  |   53.12   | 2092  |
| CoSENT   | nghuyong/ernie-3.0-base-zh                                  | [shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence)                                     | 43.37 | 61.43 | 73.48 | 38.90 | 78.25 |  70.60  |  53.08  |   59.87   | 3089  |
| CoSENT   | nghuyong/ernie-3.0-base-zh                                  | [shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase)                                 | 44.89 | 63.58 | 74.24 | 40.90 | 78.93 |  76.70  |  63.30  | **63.08** | 3066  |
| CoSENT   | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual)                                             | 32.39 | 50.33 | 65.64 | 32.56 | 74.45 |  68.88  |  51.17  |   53.67   | 3138  |
| CoSENT   | BAAI/bge-large-zh-noinstruct                                | [shibing624/text2vec-bge-large-chinese](https://huggingface.co/shibing624/text2vec-bge-large-chinese)                                             | 38.41 | 61.34 | 71.72 | 35.15 | 76.44 |  71.81  |  63.15  |   59.72   |  844  |
Notes:
- Evaluation metric: Spearman coefficient
- The `shibing624/text2vec-base-chinese` model is trained with the CoSENT method on the Chinese STS-B data, based on `hfl/chinese-macbert-base`, and reaches good results on the Chinese STS-B test set. Run [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py) to train it; the weights are on the HF model hub. Recommended for general-purpose Chinese semantic matching
- The `shibing624/text2vec-base-chinese-sentence` model is trained with the CoSENT method, based on `nghuyong/ernie-3.0-base-zh` on the hand-curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset), and performs well across the Chinese NLI test sets. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train it; the weights are on the HF model hub. Recommended for Chinese s2s (sentence vs. sentence) semantic matching
- The `shibing624/text2vec-base-chinese-paraphrase` model is trained with the CoSENT method, based on `nghuyong/ernie-3.0-base-zh` on the hand-curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-paraphrase-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-paraphrase-dataset), which adds s2p (sentence to paraphrase) data relative to [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset) and strengthens long-text representation. It reaches SOTA across the Chinese NLI test sets. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train it; the weights are on the HF model hub. Recommended for Chinese s2p (sentence vs. paragraph) semantic matching
- The `shibing624/text2vec-base-multilingual` model is trained with the CoSENT method, based on `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` on the hand-curated multilingual STS dataset [shibing624/nli-zh-all/text2vec-base-multilingual-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset), and improves on the base model on Chinese and English test sets. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train it; the weights are on the HF model hub. Recommended for multilingual semantic matching
- The `shibing624/text2vec-bge-large-chinese` model is trained with the CoSENT method, based on `BAAI/bge-large-zh-noinstruct` on the hand-curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-paraphrase-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-paraphrase-dataset), and improves on the base model on Chinese test sets, noticeably so on short-text discrimination. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train it; the weights are on the HF model hub. Recommended for Chinese s2s (sentence vs. sentence) semantic matching
- `w2v-light-tencent-chinese` is a Word2Vec model of the Tencent word embeddings, loaded on CPU; suited to literal Chinese matching and cold-start scenarios with little data
- All pretrained base models can be called through transformers, e.g. the MacBERT model with `--model_name hfl/chinese-macbert-base` or a roberta model with `--model_name uer/roberta-medium-wwm-chinese-cluecorpussmall`
- To test robustness, the untrained SOHU test sets were added to measure generalization; to reach good out-of-the-box quality, the collected Chinese matching datasets were used for training, and they are uploaded to HF datasets ([links below](#dataset))
- Experiments on Chinese matching tasks show the best pooling choices are `EncoderType.FIRST_LAST_AVG` and `EncoderType.MEAN`; their predictions differ only marginally
- To reproduce the Chinese matching evaluation results, download the Chinese matching datasets into `examples/data` and run [tests/model_spearman.py](https://github.com/shibing624/text2vec/blob/master/tests/model_spearman.py)
- The QPS numbers were measured on a Tesla V100 GPU with 32GB memory
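For reference, the Spearman metric above can be reproduced in a few lines: encode both sides of each test pair, score each pair with cosine similarity, and rank-correlate the scores with the gold labels. A minimal sketch; the file path and the tab-separated `sentence1	sentence2	label` layout follow the training-data convention shown later in this README and are assumptions here:

```python
import numpy as np
from scipy.stats import spearmanr
from text2vec import SentenceModel, cos_sim

model = SentenceModel("shibing624/text2vec-base-chinese")

# Assumed layout: sentence1<TAB>sentence2<TAB>label, one pair per line
pairs = [line.rstrip("\n").split("\t")
         for line in open("examples/data/STS-B/STS-B.test.data", encoding="utf-8")]
s1, s2, labels = zip(*pairs)

emb1 = model.encode(list(s1))
emb2 = model.encode(list(s2))
# Cosine similarity of each aligned pair, then rank correlation with the labels
scores = [float(cos_sim(a, b)) for a, b in zip(emb1, emb2)]
print("spearman:", spearmanr(scores, np.array(labels, dtype=float)).correlation)
```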
Training experiment report: [experiment report](https://github.com/shibing624/text2vec/blob/master/docs/model_report.md)

## Demo

Official demo: https://www.mulanai.com/product/short_text_sim/

HuggingFace demo: https://huggingface.co/spaces/shibing624/text2vec

![](docs/hf.png)

Run [examples/gradio_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/gradio_demo.py) to see the demo:
```shell
python examples/gradio_demo.py
```

## Install
```shell
pip install torch # conda install pytorch
pip install -U text2vec
```

or

```shell
pip install torch # conda install pytorch
pip install -r requirements.txt

git clone https://github.com/shibing624/text2vec.git
cd text2vec
pip install --no-deps .
```

## Usage

### Text embeddings

Compute text embeddings based on a `pretrained model`:
```zsh
>>> from text2vec import SentenceModel
>>> m = SentenceModel()
>>> m.encode("如何更换花呗绑定银行卡")
Embedding shape: (768,)
```

example: [examples/computing_embeddings_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/computing_embeddings_demo.py)

```python
import sys

sys.path.append('..')
from text2vec import SentenceModel
from text2vec import Word2Vec


def compute_emb(model):
    # Embed a list of sentences
    sentences = [
        '卡',
        '银行卡',
        '如何更换花呗绑定银行卡',
        '花呗更改绑定银行卡',
        'This framework generates embeddings for each input sentence',
        'Sentences are passed as a list of strings.',
        'The quick brown fox jumps over the lazy dog.'
    ]
    sentence_embeddings = model.encode(sentences)
    print(type(sentence_embeddings), sentence_embeddings.shape)

    # The result is a list of sentence embeddings as numpy arrays
    for sentence, embedding in zip(sentences, sentence_embeddings):
        print("Sentence:", sentence)
        print("Embedding shape:", embedding.shape)
        print("Embedding head:", embedding[:10])
        print()


if __name__ == "__main__":
    # Chinese sentence embedding model (CoSENT); recommended for Chinese semantic matching; supports continued fine-tuning
    t2v_model = SentenceModel("shibing624/text2vec-base-chinese")
    compute_emb(t2v_model)

    # Multilingual sentence embedding model (CoSENT); recommended for multilingual (incl. Chinese/English) semantic matching; supports continued fine-tuning
    sbert_model = SentenceModel("shibing624/text2vec-base-multilingual")
    compute_emb(sbert_model)

    # Chinese word embedding model (word2vec); suited to literal Chinese matching and cold start
    w2v_model = Word2Vec("w2v-light-tencent-chinese")
    compute_emb(w2v_model)

```

output:
```
<class 'numpy.ndarray'> (7, 768)
Sentence: 卡
Embedding shape: (768,)

Sentence: 银行卡
Embedding shape: (768,)
 ... 
```
\n```\n\n- \u8fd4\u56de\u503c`embeddings`\u662f`numpy.ndarray`\u7c7b\u578b\uff0cshape\u4e3a`(sentences_size, model_embedding_size)`\uff0c\u4e09\u4e2a\u6a21\u578b\u4efb\u9009\u4e00\u79cd\u5373\u53ef\uff0c\u63a8\u8350\u7528\u7b2c\u4e00\u4e2a\u3002\n- `shibing624/text2vec-base-chinese`\u6a21\u578b\u662fCoSENT\u65b9\u6cd5\u5728\u4e2d\u6587STS-B\u6570\u636e\u96c6\u8bad\u7ec3\u5f97\u5230\u7684\uff0c\u6a21\u578b\u5df2\u7ecf\u4e0a\u4f20\u5230huggingface\u7684\n\u6a21\u578b\u5e93[shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese)\uff0c\n\u662f`text2vec.SentenceModel`\u6307\u5b9a\u7684\u9ed8\u8ba4\u6a21\u578b\uff0c\u53ef\u4ee5\u901a\u8fc7\u4e0a\u9762\u793a\u4f8b\u8c03\u7528\uff0c\u6216\u8005\u5982\u4e0b\u6240\u793a\u7528[transformers\u5e93](https://github.com/huggingface/transformers)\u8c03\u7528\uff0c\n\u6a21\u578b\u81ea\u52a8\u4e0b\u8f7d\u5230\u672c\u673a\u8def\u5f84\uff1a`~/.cache/huggingface/transformers`\n- `w2v-light-tencent-chinese`\u662f\u901a\u8fc7gensim\u52a0\u8f7d\u7684Word2Vec\u6a21\u578b\uff0c\u4f7f\u7528\u817e\u8baf\u8bcd\u5411\u91cf`Tencent_AILab_ChineseEmbedding.tar.gz`\u8ba1\u7b97\u5404\u5b57\u8bcd\u7684\u8bcd\u5411\u91cf\uff0c\u53e5\u5b50\u5411\u91cf\u901a\u8fc7\u5355\u8bcd\u8bcd\n\u5411\u91cf\u53d6\u5e73\u5747\u503c\u5f97\u5230\uff0c\u6a21\u578b\u81ea\u52a8\u4e0b\u8f7d\u5230\u672c\u673a\u8def\u5f84\uff1a`~/.text2vec/datasets/light_Tencent_AILab_ChineseEmbedding.bin`\n- `text2vec`\u652f\u6301\u591a\u5361\u63a8\u7406(\u8ba1\u7b97\u6587\u672c\u5411\u91cf): [examples/computing_embeddings_multi_gpu_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/computing_embeddings_multi_gpu_demo.py)\n\n#### Usage (HuggingFace Transformers)\nWithout [text2vec](https://github.com/shibing624/text2vec), you can use the model like this: \n\nFirst, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.\n\nexample: [examples/use_origin_transformers_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/use_origin_transformers_demo.py)\n\n```python\nimport os\nimport torch\nfrom transformers import AutoTokenizer, AutoModel\n\nos.environ[\"KMP_DUPLICATE_LIB_OK\"] = \"TRUE\"\n\n\n# Mean Pooling - Take attention mask into account for correct averaging\ndef mean_pooling(model_output, attention_mask):\n    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings\n    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()\n    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)\n\n\n# Load model from HuggingFace Hub\ntokenizer = AutoTokenizer.from_pretrained('shibing624/text2vec-base-chinese')\nmodel = AutoModel.from_pretrained('shibing624/text2vec-base-chinese')\nsentences = ['\u5982\u4f55\u66f4\u6362\u82b1\u5457\u7ed1\u5b9a\u94f6\u884c\u5361', '\u82b1\u5457\u66f4\u6539\u7ed1\u5b9a\u94f6\u884c\u5361']\n# Tokenize sentences\nencoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')\n\n# Compute token embeddings\nwith torch.no_grad():\n    model_output = model(**encoded_input)\n# Perform pooling. 
#### Usage (sentence-transformers)
[sentence-transformers](https://github.com/UKPLab/sentence-transformers) is a popular library to compute dense vector representations for sentences.

Install sentence-transformers:
```shell
pip install -U sentence-transformers
```
Then load the model and predict:
```python
from sentence_transformers import SentenceTransformer

m = SentenceTransformer("shibing624/text2vec-base-chinese")
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

sentence_embeddings = m.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```

#### `Word2Vec` word vectors

Two sets of `Word2Vec` word vectors are provided; pick either one:

  - Light Tencent word vectors, [Baidu Pan (code: tawe)](https://pan.baidu.com/s/1La4U4XNFe8s5BJqxPQpeiQ) or [Google Drive](https://drive.google.com/u/0/uc?id=1iQo9tBb2NgFOBxx0fA16AZpSgc-bG_Rp&export=download): a 111MB binary file covering the 143,613 most frequent words, each still a 200-dimensional vector (same as the original). Run the program and it downloads automatically to `~/.text2vec/datasets/light_Tencent_AILab_ChineseEmbedding.bin`
  - Official full Tencent word vectors, 6.78GB, placed at `~/.text2vec/datasets/Tencent_AILab_ChineseEmbedding.txt`. Tencent word-vector homepage: https://ai.tencent.com/ailab/nlp/zh/index.html , download: https://ai.tencent.com/ailab/nlp/en/download.html , more in the [Tencent word-vector introduction (wiki)](https://github.com/shibing624/text2vec/wiki/%E8%85%BE%E8%AE%AF%E8%AF%8D%E5%90%91%E9%87%8F%E4%BB%8B%E7%BB%8D)
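As noted earlier, the Word2Vec sentence vector is simply the average of the sentence's word vectors. A minimal sketch of that computation with gensim; the tokenizer choice (jieba), the local path, and the word2vec-binary loading call are assumptions here, and `text2vec.Word2Vec` handles all of this for you:

```python
import os

import jieba
import numpy as np
from gensim.models import KeyedVectors

# Load the light Tencent vectors from the default download path (assumed)
path = os.path.expanduser("~/.text2vec/datasets/light_Tencent_AILab_ChineseEmbedding.bin")
wv = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")


def sentence_vector(text):
    # Tokenize, look up each in-vocabulary token, then average the word vectors
    tokens = [t for t in jieba.cut(text) if t in wv]
    if not tokens:
        return np.zeros(wv.vector_size, dtype=np.float32)
    return np.mean([wv[t] for t in tokens], axis=0)


print(sentence_vector("如何更换花呗绑定银行卡").shape)  # (200,)
```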
### Command line mode (CLI)

Batch-compute text embeddings for a file of texts.

code: [cli.py](https://github.com/shibing624/text2vec/blob/master/text2vec/cli.py)

```
> text2vec -h                                    
usage: text2vec [-h] --input_file INPUT_FILE [--output_file OUTPUT_FILE] [--model_type MODEL_TYPE] [--model_name MODEL_NAME] [--encoder_type ENCODER_TYPE]
                [--batch_size BATCH_SIZE] [--max_seq_length MAX_SEQ_LENGTH] [--chunk_size CHUNK_SIZE] [--device DEVICE]
                [--show_progress_bar SHOW_PROGRESS_BAR] [--normalize_embeddings NORMALIZE_EMBEDDINGS]

text2vec cli

optional arguments:
  -h, --help            show this help message and exit
  --input_file INPUT_FILE
                        input file path, text file, required
  --output_file OUTPUT_FILE
                        output file path, output csv file, default text_embs.csv
  --model_type MODEL_TYPE
                        model type: sentencemodel, word2vec, default sentencemodel
  --model_name MODEL_NAME
                        model name or path, default shibing624/text2vec-base-chinese
  --encoder_type ENCODER_TYPE
                        encoder type: MEAN, CLS, POOLER, FIRST_LAST_AVG, LAST_AVG, default MEAN
  --batch_size BATCH_SIZE
                        batch size, default 32
  --max_seq_length MAX_SEQ_LENGTH
                        max sequence length, default 256
  --chunk_size CHUNK_SIZE
                        chunk size to save partial results, default 1000
  --device DEVICE       device: cpu, cuda, default None
  --show_progress_bar SHOW_PROGRESS_BAR
                        show progress bar, default True
  --normalize_embeddings NORMALIZE_EMBEDDINGS
                        normalize embeddings, default False
  --multi_gpu MULTI_GPU
                        multi gpu, default False
```

run:

```shell
pip install text2vec -U
text2vec --input_file input.txt --output_file out.csv --batch_size 128 --multi_gpu True
```

> Input file (required): `input.txt`

## Downstream tasks
### 1. Sentence similarity

example: [examples/semantic_text_similarity_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/semantic_text_similarity_demo.py)

```python
import sys

sys.path.append('..')
from text2vec import Similarity

# Two lists of sentences
sentences1 = ['如何更换花呗绑定银行卡',
              'The cat sits outside',
              'A man is playing guitar',
              'The new movie is awesome']

sentences2 = ['花呗更改绑定银行卡',
              'The dog plays in the garden',
              'A woman watches TV',
              'The new movie is so great']

sim_model = Similarity()
for i in range(len(sentences1)):
    for j in range(len(sentences2)):
        score = sim_model.get_score(sentences1[i], sentences2[j])
        print("{} \t\t {} \t\t Score: {:.4f}".format(sentences1[i], sentences2[j], score))
```

output:
```shell
如何更换花呗绑定银行卡 		 花呗更改绑定银行卡 		 Score: 0.9477
如何更换花呗绑定银行卡 		 The dog plays in the garden 		 Score: -0.1748
如何更换花呗绑定银行卡 		 A woman watches TV 		 Score: -0.0839
如何更换花呗绑定银行卡 		 The new movie is so great 		 Score: -0.0044
The cat sits outside 		 花呗更改绑定银行卡 		 Score: -0.0097
The cat sits outside 		 The dog plays in the garden 		 Score: 0.1908
The cat sits outside 		 A woman watches TV 		 Score: -0.0203
The cat sits outside 		 The new movie is so great 		 Score: 0.0302
A man is playing guitar 		 花呗更改绑定银行卡 		 Score: -0.0010
A man is playing guitar 		 The dog plays in the garden 		 Score: 0.1062
A man is playing guitar 		 A woman watches TV 		 Score: 0.0055
A man is playing guitar 		 The new movie is so great 		 Score: 0.0097
The new movie is awesome 		 花呗更改绑定银行卡 		 Score: 0.0302
The new movie is awesome 		 The dog plays in the garden 		 Score: -0.0160
The new movie is awesome 		 A woman watches TV 		 Score: 0.1321
The new movie is awesome 		 The new movie is so great 		 Score: 0.9591
```

> The sentence cosine similarity `score` ranges over [-1, 1]; the higher the value, the more similar.
### 2. Semantic text search

This typically finds the texts most similar to a query within a set of candidate documents; common in QA question matching, semantic retrieval, and similar tasks.

example: [examples/semantic_search_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/semantic_search_demo.py)

```python
import sys

sys.path.append('..')
from text2vec import SentenceModel, cos_sim, semantic_search

embedder = SentenceModel()

# Corpus with example sentences
corpus = [
    '花呗更改绑定银行卡',
    '我什么时候开通了花呗',
    'A man is eating food.',
    'A man is eating a piece of bread.',
    'The girl is carrying a baby.',
    'A man is riding a horse.',
    'A woman is playing violin.',
    'Two men pushed carts through the woods.',
    'A man is riding a white horse on an enclosed ground.',
    'A monkey is playing drums.',
    'A cheetah is running behind its prey.'
]
corpus_embeddings = embedder.encode(corpus)

# Query sentences:
queries = [
    '如何更换花呗绑定银行卡',
    'A man is eating pasta.',
    'Someone in a gorilla costume is playing a set of drums.',
    'A cheetah chases prey on across a field.']

for query in queries:
    query_embedding = embedder.encode(query)
    hits = semantic_search(query_embedding, corpus_embeddings, top_k=5)
    print("\n\n======================\n\n")
    print("Query:", query)
    print("\nTop 5 most similar sentences in corpus:")
    hits = hits[0]  # Get the hits for the first query
    for hit in hits:
        print(corpus[hit['corpus_id']], "(Score: {:.4f})".format(hit['score']))
```
output:
```shell
Query: 如何更换花呗绑定银行卡
Top 5 most similar sentences in corpus:
花呗更改绑定银行卡 (Score: 0.9477)
我什么时候开通了花呗 (Score: 0.3635)
A man is eating food. (Score: 0.0321)
A man is riding a horse. (Score: 0.0228)
Two men pushed carts through the woods. (Score: 0.0090)

======================
Query: A man is eating pasta.
Top 5 most similar sentences in corpus:
A man is eating food. (Score: 0.6734)
A man is eating a piece of bread. (Score: 0.4269)
A man is riding a horse. (Score: 0.2086)
A man is riding a white horse on an enclosed ground. (Score: 0.1020)
A cheetah is running behind its prey. (Score: 0.0566)

======================
Query: Someone in a gorilla costume is playing a set of drums.
Top 5 most similar sentences in corpus:
A monkey is playing drums. (Score: 0.8167)
A cheetah is running behind its prey. (Score: 0.2720)
A woman is playing violin. (Score: 0.1721)
A man is riding a horse. (Score: 0.1291)
A man is riding a white horse on an enclosed ground. (Score: 0.1213)

======================
Query: A cheetah chases prey on across a field.
Top 5 most similar sentences in corpus:
A cheetah is running behind its prey. (Score: 0.9147)
A monkey is playing drums. (Score: 0.2655)
A man is riding a horse. (Score: 0.1933)
A man is riding a white horse on an enclosed ground. (Score: 0.1733)
A man is eating food. (Score: 0.0329)
```
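Under the hood, `semantic_search` ranks by cosine similarity; the `cos_sim` helper imported above exposes that directly and scores all query–corpus pairs in one call. A quick sketch continuing the snippet, assuming `cos_sim` mirrors the sentence-transformers utility of the same name:

```python
# All-pairs cosine similarity: rows are queries, columns are corpus sentences
sim_matrix = cos_sim(embedder.encode(queries), corpus_embeddings)
print(sim_matrix.shape)  # (4, 11) for the lists above

best = sim_matrix.argmax(dim=1)  # index of the top corpus hit per query
for q, idx in zip(queries, best):
    print(q, "->", corpus[int(idx)])
```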
## Libraries for downstream tasks
**similarities library [recommended]**

For text similarity computation and semantic search tasks, the [similarities library](https://github.com/shibing624/similarities) is recommended. It is compatible with the Word2Vec, SBERT, and CoSENT semantic matching models released by this project, also supports literal similarity computation and matching/search algorithms, and handles both text and images.

Install:
```pip install -U similarities```

Sentence similarity:
```python
from similarities import Similarity

m = Similarity()
r = m.similarity('如何更换花呗绑定银行卡', '花呗更改绑定银行卡')
print(f"similarity score: {float(r)}")  # similarity score: 0.855146050453186
```

## Models

### CoSENT model

CoSENT (Cosine Sentence) is a text matching model that improves on Sentence-BERT's sentence-embedding scheme with a cosine ranking loss (CosineRankLoss).

Network structure:

Training:

<img src="docs/cosent_train.png" width="300" />


Inference:

<img src="docs/inference.png" width="300" />
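What CoSENT training optimizes is a ranking loss over cosine scores: any sentence pair with a higher gold similarity should receive a higher cosine score than any pair with a lower one. A compact PyTorch sketch of that loss, following the formulation in the CoSENT post linked under References (the scale of 20 comes from that post; variable names are mine, not the repo's):

```python
import torch


def cosent_loss(cos_scores, labels, scale=20.0):
    """cos_scores: cosine similarity of each sentence pair, shape (batch,);
    labels: gold similarity of each pair, shape (batch,)."""
    # s_j - s_i for every (i, j); penalize where pair i is gold-ranked above pair j
    diff = cos_scores[None, :] - cos_scores[:, None]
    mask = labels[:, None] > labels[None, :]
    logits = scale * diff[mask]
    # loss = log(1 + sum(exp(logits))) == logsumexp with an extra zero term
    logits = torch.cat([logits.new_zeros(1), logits])
    return torch.logsumexp(logits, dim=0)
```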
#### CoSENT supervised model
Train and predict a CoSENT model:

- Train and evaluate `CoSENT` on the Chinese STS-B dataset

example: [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py)

```shell
cd examples
python training_sup_text_matching_model.py --model_arch cosent --do_train --do_predict --num_epochs 10 --model_name hfl/chinese-macbert-base --output_dir ./outputs/STS-B-cosent
```

- Train and evaluate `CoSENT` on the Ant Financial matching dataset ATEC

These Chinese matching datasets are supported: 'ATEC', 'STS-B', 'BQ', 'LCQMC', 'PAWSX'; see HuggingFace datasets [https://huggingface.co/datasets/shibing624/nli_zh](https://huggingface.co/datasets/shibing624/nli_zh)
```shell
python training_sup_text_matching_model.py --task_name ATEC --model_arch cosent --do_train --do_predict --num_epochs 10 --model_name hfl/chinese-macbert-base --output_dir ./outputs/ATEC-cosent
```

- Train on your own Chinese dataset

example: [examples/training_sup_text_matching_model_mydata.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_mydata.py)

Single-GPU training:
```shell
CUDA_VISIBLE_DEVICES=0 python training_sup_text_matching_model_mydata.py --do_train --do_predict
```

Multi-GPU training:
```shell
CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node 2  training_sup_text_matching_model_mydata.py --do_train --do_predict --output_dir outputs/STS-B-text2vec-macbert-v1 --batch_size 64 --bf16 --data_parallel 
```

The training-set format follows [examples/data/STS-B/STS-B.valid.data](https://github.com/shibing624/text2vec/blob/master/examples/data/STS-B/STS-B.valid.data):

```shell
sentence1   sentence2   label
一个女孩在给她的头发做发型。	一个女孩在梳头。	2
一群男人在海滩上踢足球。	一群男孩在海滩上踢足球。	3
一个女人在测量另一个女人的脚踝。	女人测量另一个女人的脚踝。	5
```

`label` can be a 0/1 label, where 0 means the two sentences are dissimilar and 1 means similar, or a 0-5 score, where a higher score means more similar sentences. Both are supported.


- Train and evaluate `CoSENT` on the English STS-B dataset

example: [examples/training_sup_text_matching_model_en.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_en.py)

```shell
cd examples
python training_sup_text_matching_model_en.py --model_arch cosent --do_train --do_predict --num_epochs 10 --model_name bert-base-uncased  --output_dir ./outputs/STS-B-en-cosent
```

#### CoSENT unsupervised model
- Train `CoSENT` on English NLI data and evaluate on the STS-B test set

example: [examples/training_unsup_text_matching_model_en.py](https://github.com/shibing624/text2vec/blob/master/examples/training_unsup_text_matching_model_en.py)

```shell
cd examples
python training_unsup_text_matching_model_en.py --model_arch cosent --do_train --do_predict --num_epochs 10 --model_name bert-base-uncased --output_dir ./outputs/STS-B-en-unsup-cosent
```
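After any of these runs, the trained weights land in `--output_dir`. A sketch of loading them back for inference, assuming `SentenceModel` accepts a local checkpoint directory the same way it accepts a hub name (the path shown matches the Chinese STS-B run above; adjust to yours):

```python
from text2vec import SentenceModel

# Load the checkpoint written by training_sup_text_matching_model.py (assumed path)
model = SentenceModel("./outputs/STS-B-cosent")
print(model.encode("如何更换花呗绑定银行卡").shape)
```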
### Sentence-BERT model

Sentence-BERT is a text matching model with a representational (bi-encoder) sentence-embedding scheme.

Network structure:

Training:

<img src="docs/sbert_train.png" width="300" />


Inference:

<img src="docs/sbert_inference.png" width="300" />

#### SentenceBERT supervised model
- Train and evaluate `SBERT` on the Chinese STS-B dataset

example: [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py)

```shell
cd examples
python training_sup_text_matching_model.py --model_arch sentencebert --do_train --do_predict --num_epochs 10 --model_name hfl/chinese-macbert-base --output_dir ./outputs/STS-B-sbert
```
- Train and evaluate `SBERT` on the English STS-B dataset

example: [examples/training_sup_text_matching_model_en.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_en.py)

```shell
cd examples
python training_sup_text_matching_model_en.py --model_arch sentencebert --do_train --do_predict --num_epochs 10 --model_name bert-base-uncased --output_dir ./outputs/STS-B-en-sbert
```

#### SentenceBERT unsupervised model
- Train `SBERT` on English NLI data and evaluate on the STS-B test set

example: [examples/training_unsup_text_matching_model_en.py](https://github.com/shibing624/text2vec/blob/master/examples/training_unsup_text_matching_model_en.py)

```shell
cd examples
python training_unsup_text_matching_model_en.py --model_arch sentencebert --do_train --do_predict --num_epochs 10 --model_name bert-base-uncased --output_dir ./outputs/STS-B-en-unsup-sbert
```

### BERT-Match model
A BERT text matching model with the native BERT matching network structure; an interaction-based (cross-encoder) matching model.

Network structure:

Training and inference:

<img src="docs/bert-fc-train.png" width="300" />

The training script is the same as above: [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py).



### BGE model

#### BGE supervised model
- Train and evaluate `BGE` on the Chinese STS-B dataset

example: [examples/training_bge_model_mydata.py](https://github.com/shibing624/text2vec/blob/master/examples/training_bge_model_mydata.py)

```shell
cd examples
python training_bge_model_mydata.py --model_arch bge --do_train --do_predict --num_epochs 4 --output_dir ./outputs/STS-B-bge-v1 --batch_size 4 --save_model_every_epoch --bf16
```

- Build your own BGE training set

BGE fine-tuning uses contrastive learning; each input example is a triplet `(query, positive, negative)`.

```shell
cd examples/data
python build_zh_bge_dataset.py
python hard_negatives_mine.py
```
1. `build_zh_bge_dataset.py` builds a triplet training set from the Chinese STS-B, formatted as follows:
```json lines
{"query":"一个男人正在往锅里倒油。","pos":["一个男人正在往锅里倒油。"],"neg":["亲俄军队进入克里米亚乌克兰海军基地","配有木制家具的优雅餐厅。","马雅瓦蒂要求总统统治查谟和克什米尔","非典还夺去了多伦多地区44人的生命，其中包括两名护士和一名医生。","在一次采访中，身为犯罪学家的希利说，这里和全国各地的许多议员都对死刑抱有戒心。","豚鼠吃胡萝卜。","狗嘴里叼着一根棍子在水中游泳。","拉里·佩奇说Android很重要，不是关键","法国、比利时、德国、瑞典、意大利和英国为印度计划向缅甸出售的先进轻型直升机提供零部件和技术。","巴林赛马会在动乱中进行"]}
```
2. `hard_negatives_mine.py` uses faiss similarity search to mine hard negatives, as sketched below.
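The idea behind hard-negative mining is simple: embed the corpus, retrieve each query's nearest neighbours, and keep the high-ranking passages that are *not* the labelled positive. A minimal faiss sketch of that step (names are illustrative, not the script's actual variables):

```python
import faiss


def mine_hard_negatives(query_embs, corpus_embs, positives, top_k=10):
    """query_embs/corpus_embs: L2-normalized float32 arrays; positives: the gold
    corpus index for each query. Returns candidate hard negatives per query."""
    index = faiss.IndexFlatIP(corpus_embs.shape[1])  # inner product == cosine here
    index.add(corpus_embs)
    _, nn = index.search(query_embs, top_k)
    # Anything retrieved near the top that is not the positive is a hard negative
    return [[j for j in nn[i] if j != positives[i]] for i in range(len(nn))]
```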
### Model distillation

Since models trained with text2vec can be loaded with the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library, its [distillation](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/distillation) methods can be reused here.

1. Dimensionality reduction: see [dimensionality_reduction.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/distillation/dimensionality_reduction.py). PCA on the model's output embeddings reduces storage pressure in vector databases such as milvus and can even slightly improve model quality.
2. Model distillation: see [model_distillation.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/distillation/model_distillation.py). Distill a large teacher model into a student model with fewer layers; with an acceptable quality trade-off, this greatly speeds up prediction.
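As an illustration of item 1, the sentence-transformers recipe fits a PCA on a sample of embeddings and bakes the projection into the model as a final Dense layer. A condensed sketch; the target dimension of 128 and the corpus file path are placeholders:

```python
import torch
from sklearn.decomposition import PCA
from sentence_transformers import SentenceTransformer, models

model = SentenceTransformer("shibing624/text2vec-base-chinese")
# A corpus to fit the PCA on; needs at least n_components sentences
# (a few thousand in practice). The file path is a placeholder.
sentences = open("train_sentences.txt", encoding="utf-8").read().splitlines()

embeddings = model.encode(sentences, convert_to_numpy=True)
pca = PCA(n_components=128).fit(embeddings)

# Append the PCA projection to the model as a bias-free Dense layer
dense = models.Dense(in_features=model.get_sentence_embedding_dimension(),
                     out_features=128, bias=False,
                     activation_function=torch.nn.Identity())
dense.linear.weight = torch.nn.Parameter(torch.tensor(pca.components_, dtype=torch.float32))
model.add_module("dense", dense)

print(model.encode("如何更换花呗绑定银行卡").shape)  # now (128,)
```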
### Model deployment

Two ways to deploy a model as a service: 1) a gRPC service built on Jina [recommended]; 2) a plain HTTP service built on FastAPI.

#### Jina service
A high-performance client/server service: docker and cloud-native, gRPC/HTTP/WebSocket, multiple models served concurrently, multi-GPU support.

- Install:
```pip install jina```

- Start the server:

example: [examples/jina_server_demo.py](examples/jina_server_demo.py)
```python
from jina import Flow

port = 50001
f = Flow(port=port).add(
    uses='jinahub://Text2vecEncoder',
    uses_with={'model_name': 'shibing624/text2vec-base-chinese'}
)

with f:
    # backend server forever
    f.block()
```

The model's predict executor has been uploaded to [JinaHub](https://hub.jina.ai/executor/eq45c9uq), which includes docker and k8s deployment instructions.

- Call the service:

```python
from jina import Client
from docarray import Document, DocumentArray

port = 50001

c = Client(port=port)

data = ['如何更换花呗绑定银行卡',
        '花呗更改绑定银行卡']
print("data:", data)
print('data embs:')
r = c.post('/', inputs=DocumentArray([Document(text='如何更换花呗绑定银行卡'), Document(text='花呗更改绑定银行卡')]))
print(r.embeddings)
```

For batch calls, see example: [examples/jina_client_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/jina_client_demo.py)


#### FastAPI service

- Install:
```pip install fastapi uvicorn```

- Start the server:

example: [examples/fastapi_server_demo.py](https://github.com/shibing624/text2vec/blob/master/examples/fastapi_server_demo.py)
```shell
cd examples
python fastapi_server_demo.py
```

- Call the service:
```shell
curl -X 'GET' \
  'http://0.0.0.0:8001/emb?q=hello' \
  -H 'accept: application/json'
```
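The same call from Python, for completeness (plain `requests`; the endpoint and port are the demo's defaults shown above):

```python
import requests

# Same request as the curl call above
r = requests.get("http://0.0.0.0:8001/emb", params={"q": "如何更换花呗绑定银行卡"})
print(r.json())
```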
## Dataset

- Datasets released by this project:

| Dataset                 | Description                                                                                                                                                 | Download Link                                                                                                                                                                                                                                                                                             |
|:------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| shibing624/nli-zh-all   | Chinese semantic matching collection: 8.2M high-quality rows merged from NLI, similarity, summarization, QA, and instruction-tuning tasks, converted to matching format | [https://huggingface.co/datasets/shibing624/nli-zh-all](https://huggingface.co/datasets/shibing624/nli-zh-all)                                                                                                                                                                                            |
| shibing624/snli-zh      | Chinese SNLI and MultiNLI, translated from the English SNLI and MultiNLI                                                                                      | [https://huggingface.co/datasets/shibing624/snli-zh](https://huggingface.co/datasets/shibing624/snli-zh)                                                                                                                                                                                                  |
| shibing624/nli_zh       | Chinese semantic matching collection merging the ATEC, BQ, LCQMC, PAWSX, and STS-B datasets                                                                   | [https://huggingface.co/datasets/shibing624/nli_zh](https://huggingface.co/datasets/shibing624/nli_zh) </br> or </br> [Baidu Pan (code: qkt6)](https://pan.baidu.com/s/1d6jSiU1wHQAEMWJi7JJWCQ) </br> or </br> [github](https://github.com/shibing624/text2vec/releases/download/1.1.2/senteval_cn.zip) </br> |
| shibing624/sts-sohu2021 | Chinese semantic matching dataset from the 2021 SOHU campus text-matching competition                                                                         | [https://huggingface.co/datasets/shibing624/sts-sohu2021](https://huggingface.co/datasets/shibing624/sts-sohu2021)                                                                                                                                                                                        |
| ATEC                    | Chinese ATEC dataset: Ant Financial question-question pairs                                                                                                   | [ATEC](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC)                                                                                                                                                                                                                                     |
| BQ                      | Chinese BQ (Bank Question) dataset: banking question-question pairs                                                                                           | [BQ](http://icrc.hitsz.edu.cn/info/1037/1162.htm)                                                                                                                                                                                                                                                         |
| LCQMC                   | Chinese LCQMC (large-scale Chinese question matching corpus): question-question pairs                                                                         | [LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html)                                                                                                                                                                                                                                                   |
| PAWSX                   | Chinese PAWS (Paraphrase Adversaries from Word Scrambling) dataset: question-question pairs                                                                   | [PAWSX](https://arxiv.org/abs/1908.11828)                                                                                                                                                                                                                                                                 |
| STS-B                   | Chinese STS-B semantic similarity dataset, translated into Chinese from the English STS-B                                                                     | [STS-B](https://github.com/pluto-junzeng/CNSD)                                                                                                                                                                                                                                                            |

Common English matching datasets:

- multi_nli: https://huggingface.co/datasets/multi_nli
- snli: https://huggingface.co/datasets/snli
- https://huggingface.co/datasets/metaeval/cnli
- https://huggingface.co/datasets/mteb/stsbenchmark-sts
- https://huggingface.co/datasets/JeremiahZ/simcse_sup_nli
- https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7


Dataset usage example:
```shell
pip install datasets
```

```python
from datasets import load_dataset

dataset = load_dataset("shibing624/nli_zh", "STS-B") # ATEC or BQ or LCQMC or PAWSX or STS-B
print(dataset)
print(dataset['test'][0])
```

output:
```shell
DatasetDict({
    train: Dataset({
        features: ['sentence1', 'sentence2', 'label'],
        num_rows: 5231
    })
    validation: Dataset({
        features: ['sentence1', 'sentence2', 'label'],
        num_rows: 1458
    })
    test: Dataset({
        features: ['sentence1', 'sentence2', 'label'],
        num_rows: 1361
    })
})
{'sentence1': '一个女孩在给她的头发做发型。', 'sentence2': '一个女孩在梳头。', 'label': 2}
```
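If you want to feed one of these splits into the file-based training scripts above, dumping it to the tab-separated `sentence1	sentence2	label` layout takes one line per split. A sketch (whether the scripts expect a header row is an assumption; none is written here):

```python
from datasets import load_dataset

dataset = load_dataset("shibing624/nli_zh", "STS-B")
# One tab-separated file per split, matching the layout of
# examples/data/STS-B/STS-B.*.data
for split in ("train", "validation", "test"):
    dataset[split].to_pandas().to_csv(f"STS-B.{split}.data", sep="\t",
                                      index=False, header=False)
```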
## Contact

- Issues (suggestions): [![GitHub issues](https://img.shields.io/github/issues/shibing624/text2vec.svg)](https://github.com/shibing624/text2vec/issues)
- Email: xuming: xuming624@qq.com
- WeChat: add my WeChat ID *xuming624*, with the note *name-company-NLP*, to join the NLP discussion group.

<img src="docs/wechat.jpeg" width="200" />


## Citation

If you use text2vec in your research, please cite it as follows:

APA:
```latex
Xu, M. Text2vec: Text to vector toolkit (Version 1.1.2) [Computer software]. https://github.com/shibing624/text2vec
```

BibTeX:
```latex
@misc{Text2vec,
  author = {Ming Xu},
  title = {Text2vec: Text to vector toolkit},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/shibing624/text2vec}},
}
```

## License

Licensed under [The Apache License 2.0](LICENSE), free for commercial use. Please include a link to text2vec and the license in your product documentation.


## Contribute

The project code is still rough. If you have improvements, you are welcome to submit them back to this project; before submitting, please note two points:

 - Add corresponding unit tests in `tests`
 - Run `python -m pytest -v` to make sure all unit tests pass

Then you can submit a PR.

## References
- [Representing sentences as vectors (part 1): unsupervised sentence representation learning (sentence embedding)](https://www.cnblogs.com/llhthinker/p/10335164.html)
- [Representing sentences as vectors (part 2): unsupervised sentence representation learning (sentence embedding)](https://www.cnblogs.com/llhthinker/p/10341841.html)
- [A Simple but Tough-to-Beat Baseline for Sentence Embeddings (Sanjeev Arora, Yingyu Liang, and Tengyu Ma, 2017)](https://openreview.net/forum?id=SyK00v5xx)
- [Four methods of computing text similarity, compared (Yves Peirsman)](https://zhuanlan.zhihu.com/p/37104535)
- [Improvements to BM25 and Language Models Examined](http://www.cs.otago.ac.nz/homepages/andrew/papers/2014-2.pdf)
- [CoSENT: a more effective sentence-embedding scheme than Sentence-BERT](https://kexue.fm/archives/8847)
- [On text matching and multi-turn retrieval](https://zhuanlan.zhihu.com/p/111769969)
- [Sentence-transformers](https://www.sbert.net/examples/applications/computing-embeddings/README.html)
- [One Embedder, Any Task: Instruction-Finetuned Text Embeddings](https://arxiv.org/abs/2212.09741)
    "bugtrack_url": null,
    "license": "Apache License 2.0",
    "summary": "Text to vector Tool, encode text",
    "version": "1.2.9",
    "project_urls": {
        "Homepage": "https://github.com/shibing624/text2vec"
    },
    "split_keywords": [
        "word embedding",
        "text2vec",
        "chinese text similarity calculation tool",
        "similarity",
        "word2vec"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "53761431fff7d01aad17d6be40e2ac7275173585fe1e87fa4a350535c8d918f0",
                "md5": "ef811016f28afc7f3fb99685152ab53e",
                "sha256": "9d10f6611bce223f2e75df556b9e150a7e9377f2a614a9c5d4a7f2ca58bd9510"
            },
            "downloads": -1,
            "filename": "text2vec-1.2.9.tar.gz",
            "has_sig": false,
            "md5_digest": "ef811016f28afc7f3fb99685152ab53e",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.6.0",
            "size": 86033,
            "upload_time": "2023-09-20T03:08:22",
            "upload_time_iso_8601": "2023-09-20T03:08:22.381609Z",
            "url": "https://files.pythonhosted.org/packages/53/76/1431fff7d01aad17d6be40e2ac7275173585fe1e87fa4a350535c8d918f0/text2vec-1.2.9.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-09-20 03:08:22",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "shibing624",
    "github_project": "text2vec",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "text2vec"
}
        
Elapsed time: 0.11928s