keybert

Name: keybert
Version: 0.8.4
Summary: KeyBERT performs keyword extraction with state-of-the-art transformer models.
Home page: https://github.com/MaartenGr/keyBERT
Author: Maarten Grootendorst
Requires Python: >=3.6
License: MIT
Keywords: nlp, bert, keyword, extraction, embeddings
Upload time: 2024-02-15 11:20:13

            [![PyPI - Python](https://img.shields.io/badge/python-3.6%20|%203.7%20|%203.8-blue.svg)](https://pypi.org/project/keybert/)
[![PyPI - License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/MaartenGr/keybert/blob/master/LICENSE)
[![PyPI - PyPi](https://img.shields.io/pypi/v/keyBERT)](https://pypi.org/project/keybert/)
[![Build](https://img.shields.io/github/actions/workflow/status/MaartenGr/keyBERT/testing.yml?branch=master)](https://github.com/MaartenGr/keyBERT/actions)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1OxpgwKqSzODtO3vS7Xe1nEmZMCAIMckX?usp=sharing)

<img src="images/logo.png" width="35%" height="35%" align="right" />

# KeyBERT

KeyBERT is a minimal and easy-to-use keyword extraction technique that leverages BERT embeddings to
create keywords and keyphrases that are most similar to a document.

The corresponding Medium post can be found [here](https://towardsdatascience.com/keyword-extraction-with-bert-724efca412ea).

<a name="toc"/></a>
## Table of Contents  
<!--ts-->  
   1. [About the Project](#about)  
   2. [Getting Started](#gettingstarted)  
        2.1. [Installation](#installation)  
        2.2. [Basic Usage](#usage)  
        2.3. [Max Sum Distance](#maxsum)  
        2.4. [Maximal Marginal Relevance](#maximal)  
        2.5. [Embedding Models](#embeddings)  
   3. [Large Language Models](#llms)  
<!--te-->  


<a name="about"/></a>
## 1. About the Project
[Back to ToC](#toc)

Although many methods are already available for keyword generation
(e.g.,
[Rake](https://github.com/aneesha/RAKE),
[YAKE!](https://github.com/LIAAD/yake), TF-IDF, etc.),
I wanted to create a very basic, but powerful, method for extracting keywords and keyphrases.
This is where **KeyBERT** comes in: it uses BERT embeddings and simple cosine similarity
to find the sub-phrases in a document that are most similar to the document itself.

First, document embeddings are extracted with BERT to get a document-level representation.
Then, word embeddings are extracted for N-gram words/phrases. Finally, we use cosine similarity
to find the words/phrases that are the most similar to the document. The most similar words could
then be identified as the words that best describe the entire document.
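
A minimal sketch of this idea (a simplified illustration, not KeyBERT's actual implementation; it assumes `sentence-transformers` and `scikit-learn` are installed) could look like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Extract candidate 1-gram words from the document
doc = "Supervised learning is the machine learning task of learning a function."
vectorizer = CountVectorizer(ngram_range=(1, 1), stop_words="english").fit([doc])
candidates = vectorizer.get_feature_names_out()

# Embed the document and the candidates, then rank candidates by cosine similarity
doc_embedding = model.encode([doc])
candidate_embeddings = model.encode(list(candidates))
similarities = cosine_similarity(doc_embedding, candidate_embeddings)[0]

# The most similar candidates serve as the keywords
top_n = 5
keywords = [candidates[i] for i in similarities.argsort()[-top_n:][::-1]]
```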

KeyBERT is by no means unique; it was created as a quick and easy method
for creating keywords and keyphrases. Although there are many great
papers and solutions out there that use BERT embeddings
(e.g.,
[1](https://github.com/pranav-ust/BERT-keyphrase-extraction),
[2](https://github.com/ibatra/BERT-Keyword-Extractor),
[3](https://www.preprints.org/manuscript/201908.0073/download/final_file)),
I could not find a BERT-based solution that did not have to be trained from scratch and
that could be used by beginners (**correct me if I'm wrong!**).
Thus, the goal was a `pip install keybert` and at most 3 lines of code in usage.

<a name="gettingstarted"/></a>
## 2. Getting Started
[Back to ToC](#toc)

<a name="installation"/></a>
###  2.1. Installation
Installation can be done using [PyPI](https://pypi.org/project/keybert/):

```
pip install keybert
```

Depending on the transformer and language backends you plan to use, you may want to install one of the following extras:

```
pip install keybert[flair]
pip install keybert[gensim]
pip install keybert[spacy]
pip install keybert[use]
```
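
Note that some shells (e.g., zsh) interpret the square brackets, in which case the extras need quoting:

```
pip install "keybert[flair]"
```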

<a name="usage"/></a>
###  2.2. Basic Usage

A minimal example of keyword extraction looks like this:
```python
from keybert import KeyBERT

doc = """
         Supervised learning is the machine learning task of learning a function that
         maps an input to an output based on example input-output pairs. It infers a
         function from labeled training data consisting of a set of training examples.
         In supervised learning, each example is a pair consisting of an input object
         (typically a vector) and a desired output value (also called the supervisory signal).
         A supervised learning algorithm analyzes the training data and produces an inferred function,
         which can be used for mapping new examples. An optimal scenario will allow for the
         algorithm to correctly determine the class labels for unseen instances. This requires
         the learning algorithm to generalize from the training data to unseen situations in a
         'reasonable' way (see inductive bias).
      """
kw_model = KeyBERT()
keywords = kw_model.extract_keywords(doc)
```
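
The result is a list of `(keyword, score)` tuples, where the score is the cosine similarity between the keyword and the document. By default the top 5 keywords are returned; you can ask for more with the `top_n` parameter:

```python
keywords = kw_model.extract_keywords(doc, top_n=10)
```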

You can use `keyphrase_ngram_range` to control the length of the resulting keywords/keyphrases:

```python
>>> kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 1), stop_words=None)
[('learning', 0.4604),
 ('algorithm', 0.4556),
 ('training', 0.4487),
 ('class', 0.4086),
 ('mapping', 0.3700)]
```

To extract keyphrases, simply set `keyphrase_ngram_range` to (1, 2) or higher depending on the number
of words you would like in the resulting keyphrases:

```python
>>> kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 2), stop_words=None)
[('learning algorithm', 0.6978),
 ('machine learning', 0.6305),
 ('supervised learning', 0.5985),
 ('algorithm analyzes', 0.5860),
 ('learning function', 0.5850)]
```

We can highlight the keywords in the document by simply setting `highlight=True`:

```python
keywords = kw_model.extract_keywords(doc, highlight=True)
```
<img src="images/highlight.png" width="75%" height="75%" />


**NOTE**: For a full overview of all possible transformer models, see [sentence-transformers](https://www.sbert.net/docs/pretrained_models.html).
I would advise `"all-MiniLM-L6-v2"` for English documents and `"paraphrase-multilingual-MiniLM-L12-v2"`
for multilingual documents or documents in any other language.
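
For example, the multilingual model can be passed in directly when creating the model:

```python
from keybert import KeyBERT

# "paraphrase-multilingual-MiniLM-L12-v2" is one of the pretrained sentence-transformers models
kw_model = KeyBERT(model="paraphrase-multilingual-MiniLM-L12-v2")
```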

<a name="maxsum"/></a>
###  2.3. Max Sum Distance

To diversify the results, we take the 2 x top_n words/phrases most similar to the document.
Then, from those 2 x top_n candidates, we take all top_n-sized combinations and extract the combination
whose members are least similar to each other by cosine similarity.

```python
>>> kw_model.extract_keywords(doc, keyphrase_ngram_range=(3, 3), stop_words='english',
                              use_maxsum=True, nr_candidates=20, top_n=5)
[('set training examples', 0.7504),
 ('generalize training data', 0.7727),
 ('requires learning algorithm', 0.5050),
 ('supervised learning algorithm', 0.3779),
 ('learning machine learning', 0.2891)]
```
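
As an illustration of the underlying idea (a simplified sketch, not KeyBERT's exact implementation), the selection step amounts to minimizing the summed pairwise similarity over all `top_n`-sized combinations of the candidates:

```python
import itertools
from sklearn.metrics.pairwise import cosine_similarity

def max_sum_selection(candidates, candidate_embeddings, top_n):
    """candidates: the nr_candidates words/phrases most similar to the document."""
    # Pairwise similarities between all candidates
    similarities = cosine_similarity(candidate_embeddings, candidate_embeddings)

    # Choose the top_n-sized combination with the lowest summed pairwise similarity
    best_combo, lowest_sum = None, float("inf")
    for combo in itertools.combinations(range(len(candidates)), top_n):
        sim_sum = sum(similarities[i][j] for i, j in itertools.combinations(combo, 2))
        if sim_sum < lowest_sum:
            best_combo, lowest_sum = combo, sim_sum
    return [candidates[i] for i in best_combo]
```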


<a name="maximal"/></a>
###  2.4. Maximal Marginal Relevance

To diversify the results, we can also use Maximal Marginal Relevance (MMR) to create
keywords/keyphrases, which is likewise based on cosine similarity. The results
with **high diversity**:

```python
>>> kw_model.extract_keywords(doc, keyphrase_ngram_range=(3, 3), stop_words='english',
                              use_mmr=True, diversity=0.7)
[('algorithm generalize training', 0.7727),
 ('labels unseen instances', 0.1649),
 ('new examples optimal', 0.4185),
 ('determine class labels', 0.4774),
 ('supervised learning algorithm', 0.7502)]
```

The results with **low diversity**:

```python
>>> kw_model.extract_keywords(doc, keyphrase_ngram_range=(3, 3), stop_words='english',
                              use_mmr=True, diversity=0.2)
[('algorithm generalize training', 0.7727),
 ('supervised learning algorithm', 0.7502),
 ('learning machine learning', 0.7577),
 ('learning algorithm analyzes', 0.7587),
 ('learning algorithm generalize', 0.7514)]
```
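
The MMR selection itself is a simple greedy loop; a simplified sketch of the idea (not KeyBERT's exact implementation, assuming NumPy arrays as embeddings) looks like this:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def mmr(doc_embedding, candidate_embeddings, candidates, top_n, diversity):
    # Similarity of each candidate to the document and to each other
    doc_sim = cosine_similarity(candidate_embeddings, doc_embedding.reshape(1, -1))[:, 0]
    cand_sim = cosine_similarity(candidate_embeddings, candidate_embeddings)

    # Start with the candidate most similar to the document
    selected = [int(np.argmax(doc_sim))]
    remaining = [i for i in range(len(candidates)) if i not in selected]

    for _ in range(top_n - 1):
        # Trade off relevance to the document against redundancy with what is already selected
        scores = [(1 - diversity) * doc_sim[i] - diversity * cand_sim[i, selected].max()
                  for i in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return [candidates[i] for i in selected]
```

A higher `diversity` puts more weight on the redundancy penalty, which is why the high-diversity results above share fewer words with each other.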


<a name="embeddings"/></a>
###  2.5. Embedding Models
KeyBERT supports many embedding models that can be used to embed the documents and words:

* Sentence-Transformers
* Flair
* Spacy
* Gensim
* USE

Click [here](https://maartengr.github.io/KeyBERT/guides/embeddings.html) for a full overview of all supported embedding models.

**Sentence-Transformers**  
You can select any model from `sentence-transformers` [here](https://www.sbert.net/docs/pretrained_models.html)
and pass it through KeyBERT with `model`:

```python
from keybert import KeyBERT
kw_model = KeyBERT(model='all-MiniLM-L6-v2')
```

Or select a SentenceTransformer model with your own parameters:

```python
from keybert import KeyBERT
from sentence_transformers import SentenceTransformer

sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
kw_model = KeyBERT(model=sentence_model)
```

**Flair**  
[Flair](https://github.com/flairNLP/flair) allows you to choose almost any embedding model that
is publicly available. Flair can be used as follows:

```python
from keybert import KeyBERT
from flair.embeddings import TransformerDocumentEmbeddings

roberta = TransformerDocumentEmbeddings('roberta-base')
kw_model = KeyBERT(model=roberta)
```

You can select any 🤗 transformers model [here](https://huggingface.co/models).

<a name="llms"/></a>
## 3. Large Language Models
[Back to ToC](#toc)

With `KeyLLM` you can now perform keyword extraction with Large Language Models (LLMs). You can find the full documentation [here](https://maartengr.github.io/KeyBERT/guides/keyllm.html), but the two most common ways of using this new method are shown below. Make sure to install the OpenAI package through `pip install openai` before you start.

First, we can ask OpenAI directly to extract keywords:

```python
import openai
from keybert.llm import OpenAI
from keybert import KeyLLM

# Create your LLM
client = openai.OpenAI(api_key=MY_API_KEY)
llm = OpenAI(client)

# Load it in KeyLLM
kw_model = KeyLLM(llm)
```
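
With the model loaded, keywords can be extracted directly from a list of documents (here, `MY_DOCUMENTS` stands in for your own list of texts):

```python
keywords = kw_model.extract_keywords(MY_DOCUMENTS)
```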

This will query any ChatGPT model and ask it to extract keywords from text.

Second, we can find documents that are likely to have the same keywords and only extract keywords once for each such group.
This is much more efficient than asking for the keywords of every single document, since many documents are likely to
share the exact same keywords. Doing so is straightforward:

```python
import openai
from keybert.llm import OpenAI
from keybert import KeyLLM
from sentence_transformers import SentenceTransformer

# Extract embeddings
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(MY_DOCUMENTS, convert_to_tensor=True)

# Create your LLM
client = openai.OpenAI(api_key=MY_API_KEY)
llm = OpenAI(client)

# Load it in KeyLLM
kw_model = KeyLLM(llm)

# Extract keywords
keywords = kw_model.extract_keywords(MY_DOCUMENTS, embeddings=embeddings, threshold=.75)
```

You can use the `threshold` parameter to decide how similar documents need to be in order to receive the same keywords.
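
Conceptually, the grouping boils down to clustering documents whose embeddings are at least `threshold`-similar; a rough sketch of that idea (not KeyLLM's actual implementation, and assuming `embeddings` is a NumPy array) could look like this:

```python
from sklearn.metrics.pairwise import cosine_similarity

def group_documents(embeddings, threshold=0.75):
    # Greedily assign each document to the first existing group it is similar enough to
    groups = []  # each group is a list of document indices
    for i in range(len(embeddings)):
        for group in groups:
            if cosine_similarity(embeddings[i:i + 1],
                                 embeddings[group[0]:group[0] + 1])[0, 0] >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups
```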

## Citation
To cite KeyBERT in your work, please use the following BibTeX reference:

```bibtex
@misc{grootendorst2020keybert,
  author       = {Maarten Grootendorst},
  title        = {KeyBERT: Minimal keyword extraction with BERT.},
  year         = 2020,
  publisher    = {Zenodo},
  version      = {v0.3.0},
  doi          = {10.5281/zenodo.4461265},
  url          = {https://doi.org/10.5281/zenodo.4461265}
}
```

## References
Below, you can find several resources that were used in the creation of KeyBERT.
Most importantly, these are amazing resources for building impressive keyword extraction models:

**Papers**:
* Sharma, P., & Li, Y. (2019). [Self-Supervised Contextual Keyword and Keyphrase Retrieval with Self-Labelling.](https://www.preprints.org/manuscript/201908.0073/download/final_file)

**GitHub Repos**:
* https://github.com/thunlp/BERT-KPE
* https://github.com/ibatra/BERT-Keyword-Extractor
* https://github.com/pranav-ust/BERT-keyphrase-extraction
* https://github.com/swisscom/ai-research-keyphrase-extraction

**MMR**:
The selection of keywords/keyphrases was modeled after:
* https://github.com/swisscom/ai-research-keyphrase-extraction

**NOTE**: If you find a paper or GitHub repo that has an easy-to-use implementation
of BERT embeddings for keyword/keyphrase extraction, let me know! I'll make sure to
add a reference to this repo.

            
