BERTSimilar

Name: BERTSimilar
Version: 0.2.3
Home page: https://github.com/rdpahalavan/BERTSimilarWords
Summary: Get Similar Words and Embeddings using BERT Models
Upload time: 2023-07-23 02:46:01
Author: Pahalavan R D
Requires Python: >=3.7.0
License: Apache License 2.0
Keywords: BERT, NLP
Requirements: none recorded
# BERTSimilar

## Get Similar Words and Embeddings using BERT Models

BERTSimilar finds similar words and their embeddings using BERT models. It uses the **bert-base-cased** model by default and cosine similarity to find the words closest to the given words.

BERT generates contextual word embeddings, so the embedding for the same word differs based on its context. For example, the word **Apple** in *"Apple is a good fruit"* and *"Apple is a good phone"* has a different embedding in each sentence. Generating contextual embeddings for the entire English vocabulary is time-consuming and resource-intensive, so this library requires the vocabulary for generating word embeddings to be provided beforehand.
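The cosine-similarity measure used here can be sketched in plain Python (the embedding values below are made up for illustration; real BERT embeddings have hundreds of dimensions):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: a.b / (|a| |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical toy embeddings for "Apple" in two different contexts
apple_fruit = [0.9, 0.1, 0.2]
apple_phone = [0.1, 0.8, 0.3]
print(cosine_similarity(apple_fruit, apple_phone))  # noticeably below 1.0
```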

Vocabularies used to generate word embeddings can be given in two ways:

* [Using Wikipedia Pages](#using-wikipedia-pages)
* [Using Text Files](#using-text-files) (.docx and .txt)

## Install and Import

Install the Python package using
```
pip install BERTSimilar
```

Import the module using
```python
>>> from BERTSimilar import SimilarWords
```

## Providing the Vocabulary

Provide the text (as paragraphs) so that the BERT model can generate word embeddings for all the words present in it.

<h3 id="using-wikipedia-pages">Using Wikipedia Pages</h3>

1) Using Wikipedia page names as a list (the content of the pages will be taken as input and processed)

```python
>>> wikipedia_pages = ['Apple', 'Apple Inc.']
>>> similar = SimilarWords().load_dataset(wikipedia_page_list=wikipedia_pages)

# To get the Wikipedia pages used,
>>> similar.wikipedia_dataset_info
{'Apple': 'https://en.wikipedia.org/wiki/Apple',
 'Apple Inc.': 'https://en.wikipedia.org/wiki/Apple_Inc.'}
```

2) Using Wikipedia search query as a string (the content of the pages related to the query will be taken as input and processed)

```python
# Get 5 Wikipedia pages based on the query
>>> similar = SimilarWords().load_dataset(wikipedia_query='Apple', wikipedia_query_limit=5)

# To get the Wikipedia pages used (duplicate pages are ignored),
>>> similar.wikipedia_dataset_info
{'Apple': 'https://en.wikipedia.org/wiki/Apple',
 'Apple Inc.': 'https://en.wikipedia.org/wiki/Apple_Inc.',
 'Apples to Apples': 'https://en.wikipedia.org/wiki/Apples_to_Apples',
 'MacOS': 'https://en.wikipedia.org/wiki/MacOS'}
```

3) Using Wikipedia search queries as a list (the content of the pages related to each query will be taken as input and processed)

```python
# Get 5 Wikipedia pages based on each query
>>> similar = SimilarWords().load_dataset(wikipedia_query=['Apple', 'Banana'], wikipedia_query_limit=5)

# To get the Wikipedia pages used (duplicate pages are ignored),
>>> similar.wikipedia_dataset_info
{'Apple': 'https://en.wikipedia.org/wiki/Apple',
 'Apple Inc.': 'https://en.wikipedia.org/wiki/Apple_Inc.',
 'Apples to Apples': 'https://en.wikipedia.org/wiki/Apples_to_Apples',
 'MacOS': 'https://en.wikipedia.org/wiki/MacOS',
 'Banana': 'https://en.wikipedia.org/wiki/Banana',
 'Cooking banana': 'https://en.wikipedia.org/wiki/Cooking_banana',
 'Banana republic': 'https://en.wikipedia.org/wiki/Banana_republic',
 'Banana ketchup': 'https://en.wikipedia.org/wiki/Banana_ketchup'}
```

<h3 id="using-text-files">Using Text Files</h3>

Supported file extensions are .docx and .txt (for other file types, please convert them to a supported format).

1) Using a single text file (the content of the file will be taken as input and processed)

```python
>>> similar = SimilarWords().load_dataset(dataset_path='Book_1.docx')
```

2) Using multiple text files (the contents of each file will be taken as input and processed)

```python
>>> similar = SimilarWords().load_dataset(dataset_path=['Book_1.docx','Book_1.txt'])
```

### SimilarWords() Parameters

You can pass these parameters to customize the initialization.

- **model** - the BERT model to use (default: bert-base-cased)
- **max_heading_length** - the maximum heading length in words; anything longer is considered a paragraph (default: 10)
- **max_document_length** - the maximum paragraph length in words; longer paragraphs are split into multiple paragraphs (default: 300)
- **exclude_stopwords** - by default, all stopwords are excluded; to keep specific stopwords, pass them as a list of strings (default: None)
- **embeddings_scaler** - a scaler used to standardize the embeddings (default: None)
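As a standalone illustration of what standardizing embeddings means (column-wise zero-mean/unit-variance scaling; the parameter itself presumably expects a scaler object such as scikit-learn's `StandardScaler`, so this toy function is only a sketch of the idea):

```python
from math import sqrt

def standardize(vectors):
    """Column-wise standardization: subtract the mean, divide by the standard
    deviation. A toy sketch of what a StandardScaler-style object does."""
    cols = list(zip(*vectors))
    means = [sum(c) / len(c) for c in cols]
    stds = [sqrt(sum((x - m) ** 2 for x in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(row, means, stds)]
            for row in vectors]

print(standardize([[1.0, 2.0], [3.0, 4.0]]))  # [[-1.0, -1.0], [1.0, 1.0]]
```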

## Find Similar Words

Similar words are generated using the `find_similar_words` method. It computes the cosine similarity between an embedding representing the input (the average of the input word embeddings, conditioned on the given context) and every word in the given vocabulary. The method returns the similar words together with the embedding used to select them. Its parameters are:

- **input_words** - the input words (list of strings)
- **input_context** - the input context (string) (optional) (default: None)
- **input_embedding** - an embedding can be given in place of input words and context (numpy array) (default: None)
- **output_words_ngram** - the n-gram size expected in the output (integer) (optional) (default: 1)
  - 1 yields output like *{'apple', 'car'}*
  - 2 yields output like *{'apple cake', 'modern car'}*
  - and so on, up to a maximum of 10
  - 0 combines all n-grams, like *{'Apple', 'Apple laptop', 'red color car'}*
- **max_output_words** - the maximum number of output words to generate (integer) (optional) (default: 10)
- **pos_to_exclude** - words are ignored in the output if they contain any of these part-of-speech tags (list of strings) (optional) (default: None)
  - with ['VBN'], the word *"used car"* is ignored because *"used"* is a past-participle verb (VBN)
  - available POS tags are listed in the [Attributes](#similarwords-attributes) section
- **context_similarity_factor** - tunes the context-matching process used to find the paragraphs most relevant to the given input words (float) (optional) (default: 0.25)
  - possible values range from 0 to 1
  - values closer to 0 perform strict context matching; values closer to 1 perform lenient matching
- **output_filter_factor** - used to ignore output words that are too similar to the input words (float) (optional) (default: 0.5)
  - possible values range from 0 to 1
  - values closer to 0 perform a strict comparison; values closer to 1 a lenient one
- **single_word_split** - whether to split n-gram input words into single words (boolean) (optional) (default: True)
  - if True, the input *"Apple phones"* is split into *"Apple"* and *"phones"* and processed separately
- **uncased_lemmatization** - whether to lowercase and lemmatize the input (boolean) (optional) (default: True)
  - if True, the input *"Apple phones"* is converted to *"apple phone"* before processing

## Demo

### Example 1

```python
>>> from BERTSimilar import SimilarWords
>>> similar = SimilarWords().load_dataset(wikipedia_query='Apple', wikipedia_query_limit=5)

>>> words, embedding = similar.find_similar_words(input_context='company',input_words=['Apple'])
>>> words
{'iPhone': 0.7655301993367924,
 'Microsoft': 0.7644559773925612,
 'Samsung': 0.7483747939272186,
 'Nokia': 0.7418908483628721,
 'Macintosh': 0.7415292245659537,
 'iOS': 0.7409453358937249,
 'AppleCare': 0.7381210698272941,
 'iPadOS': 0.7112217377139232,
 'iTunes': 0.7007508157223745,
 'macOS': 0.69984740983893}

>>> words, embedding = similar.find_similar_words(input_context='fruit',input_words=['Apple'])
>>> words
{'applejack': 0.8045216200651304,
 'Trees': 0.7926505935113519,
 'trees': 0.7806807879003239,
 'berries': 0.7689437435792672,
 'seeds': 0.7540070238557037,
 'peaches': 0.7381803534675645,
 'Orange': 0.733131237417253,
 'orchards': 0.7296196594053761,
 'juice': 0.7247635163014543,
 'nuts': 0.724424004884171}
```

### Example 2

```python
>>> from BERTSimilar import SimilarWords
>>> similar = SimilarWords().load_dataset(wikipedia_query='Tesla', wikipedia_query_limit=10)

>>> words, embedding = similar.find_similar_words(input_context='Tesla Motors', input_words=['CEO'], output_words_ngram=5, max_output_words=5)
>>> words
{'Chief Executive Elon Musk handing': 0.7596588355056113,
 '2018 CEO Elon Musk briefly': 0.751011374230985,
 'August 2018 CEO Elon Musk': 0.7492089016517951,
 '2021 CEO Elon Musk revealed': 0.7470401856896459,
 'SEC questioned Tesla CFO Zach': 0.738144930474394}

>>> words, embedding = similar.find_similar_words(input_words=['Nikola Tesla'], output_words_ngram=0, max_output_words=5)
>>> words
{'Tesla Nikola Tesla Corner': 0.9203870154998232,
 'IEEEThe Nikola Tesla Memorial': 0.8932847992637643,
 'electrical engineer Nikola Tesla': 0.8811208719958945,
 'Serbian American inventor Nikola Tesla': 0.8766566716046287,
 'Nikola Tesla Technical Museum': 0.8759513407776292}
```

<h2 id="similarwords-attributes">SimilarWords() Attributes</h2>

These attributes can be used to get values or modify default values of the SimilarWords class.

To get the value of the attributes,

```python
>>> similar = SimilarWords().load_dataset(dataset_path='Book_1.docx')

# This will return all the words
>>> similar.bert_words

# This will return the embeddings for all the words
>>> similar.bert_vectors
```

To change the values of the attributes,

```python
>>> similar = SimilarWords()
>>> similar.max_ngram
10

>>> similar.max_ngram = 12
>>> similar = similar.load_dataset(dataset_path='Book_1.docx')
>>> similar.max_ngram
12
```

- **tokenizer** - to get the BERT tokenizer
- **model** - to get the BERT model
- **bert_words** - to get all words
- **bert_vectors** - to get the embeddings of all words
- **bert_words_ngram** - to get the n-gram words
  - bert_words_ngram[0] gives unigram words
  - bert_words_ngram[1] gives bigram words
  - bert_words_ngram[n-1] gives n-gram words
- **bert_vectors_ngram** - to get the BERT word embeddings for the n-gram words
  - bert_vectors_ngram[0] gives word embeddings of the unigram words
  - bert_vectors_ngram[1] gives word embeddings of the bigram words
  - bert_vectors_ngram[n-1] gives word embeddings of the n-gram words
- **bert_words_all** - to get all n-gram words as a flattened list
- **bert_vectors_all** - to get all embeddings as a flattened list
- **document_list** - to get the paragraphs
- **max_ngram** - maximum n-gram words to generate
  - default: 10 (10-gram words)
- **punctuations** - to get the punctuations to be removed from the dataset
  - default: '''!"#$%&\'()*+,-./:—;<=>−?–@[\\]^_`{|}~'''
- **doc_regex** - the regular expression to be used to process the text files
  - default: "[\([][0-9]+[\])]|[”“‘’‛‟]|\d+\s"
- **stop_words** - the stop words to be ignored in the output (can be modified)
- **max_heading_length** - paragraphs with fewer words than this are treated as headings
  - default: 10
- **max_document_length** - paragraphs with more words than this are split into multiple paragraphs
  - default: 300
- **pos_tags_info()** - to get the POS tags and information to be used in the `find_similar_words` method
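As a standalone illustration of how the default `doc_regex` pattern behaves (applied here with `re.sub` directly; this is not the library's internal pipeline):

```python
import re

# The default doc_regex: strips bracketed citation numbers like [1],
# curly quotation marks, and digit runs followed by whitespace.
doc_regex = r"[\([][0-9]+[\])]|[”“‘’‛‟]|\d+\s"

text = 'Apples [1] are a “good” fruit'
print(re.sub(doc_regex, '', text))
```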



            
