[![Tests](https://github.com/MartinoMensio/spacy-universal-sentence-encoder/actions/workflows/tests.yml/badge.svg)](https://github.com/MartinoMensio/spacy-universal-sentence-encoder/actions/workflows/tests.yml)
[![Downloads](https://static.pepy.tech/badge/spacy-universal-sentence-encoder)](https://pepy.tech/project/spacy-universal-sentence-encoder)
[![Current Release Version](https://img.shields.io/github/release/MartinoMensio/spacy-universal-sentence-encoder.svg?&logo=github)](https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases)
[![pypi Version](https://img.shields.io/pypi/v/spacy-universal-sentence-encoder.svg?&logo=pypi&logoColor=white)](https://pypi.org/project/spacy-universal-sentence-encoder/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

# Spacy - Universal Sentence Encoder

Make use of Google's Universal Sentence Encoder directly within SpaCy.
This library lets you embed [Docs](https://spacy.io/api/doc), [Spans](https://spacy.io/api/span) and [Tokens](https://spacy.io/api/token) from the [Universal Sentence Encoder family available on TensorFlow Hub](https://tfhub.dev/google/collections/universal-sentence-encoder/1).

For using sentence-BERT in spaCy, see https://github.com/MartinoMensio/spacy-sentence-bert

## Motivation
There are many reasons not to always use BERT. For example, you may want embeddings that are tuned specifically for another task (e.g. sentence similarity). See this very useful blog article:
https://blog.floydhub.com/when-the-best-nlp-model-is-not-the-best-choice/

The Universal Sentence Encoder is trained on tasks that are better suited to identifying sentence similarity; see the [Google AI blog](https://ai.googleblog.com/2018/05/advances-in-semantic-textual-similarity.html) and the [paper](https://arxiv.org/abs/1803.11175).

This library uses [spaCy's `user_hooks`](https://spacy.io/usage/processing-pipelines#custom-components-user-hooks) to compute the vectors with an external model, in this case a simple wrapper around the models available on TensorFlow Hub.
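
For illustration only, the sketch below shows the general `user_hooks` mechanism with a toy component that overrides how `.vector` is computed; this library does the same thing but delegates to a TensorFlow Hub model instead of the dummy function used here:

```python
import numpy
import spacy
from spacy.language import Language
from spacy.tokens import Doc

@Language.component('toy_vector_hook')
def toy_vector_hook(doc: Doc) -> Doc:
    # a dummy "external encoder": returns a constant 512-dimensional vector
    def external_vector(obj):
        return numpy.ones((512,), dtype='float32')
    # override how vectors are computed for the Doc, its Spans and its Tokens
    doc.user_hooks['vector'] = external_vector
    doc.user_span_hooks['vector'] = external_vector
    doc.user_token_hooks['vector'] = external_vector
    return doc

nlp = spacy.blank('en')
nlp.add_pipe('toy_vector_hook')
print(nlp('hello world').vector.shape)  # (512,)
```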

## Install

You can install this library from:
- GitHub: `pip install git+https://github.com/MartinoMensio/spacy-universal-sentence-encoder.git`
- PyPI: `pip install spacy-universal-sentence-encoder`

Compatibility:
- python:
  - 3.6: compatible but not actively tested
  - 3.7/3.8/3.9/3.10: compatible and actively tested
  - 3.11: compatible, but relies on an RC version of [tensorflow-text](https://pypi.org/project/tensorflow-text/) for the multilingual models
- tensorflow>=2.4.0,<3.0.0
- spacy>=3.0.0,<4.0.0 (SpaCy v3 API changed a lot from v2)

To use the multilingual models, you need to install the extra named `multi` with the command `pip install spacy-universal-sentence-encoder[multi]`. This installs the dependency `tensorflow-text`, which is required to run the multilingual models. Note that `tensorflow-text` is currently only available as an RC version for Python 3.11.

Alternatively, you can install the following standalone pre-packaged models with pip. Each model can be installed independently:

| model name | source | pip package |
|------------|--------|---|
| en_use_md  | https://tfhub.dev/google/universal-sentence-encoder | `pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.6/en_use_md-0.4.6.tar.gz#en_use_md-0.4.6` |
| en_use_lg  | https://tfhub.dev/google/universal-sentence-encoder-large | `pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.6/en_use_lg-0.4.6.tar.gz#en_use_lg-0.4.6` |
| xx_use_md  | https://tfhub.dev/google/universal-sentence-encoder-multilingual | `pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.6/xx_use_md-0.4.6.tar.gz#xx_use_md-0.4.6` |
| xx_use_lg  | https://tfhub.dev/google/universal-sentence-encoder-multilingual-large | `pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.6/xx_use_lg-0.4.6.tar.gz#xx_use_lg-0.4.6` |

In addition, [CMLM models](https://openreview.net/pdf?id=WDVD4lUCTzU) are also available:

| model name | source | pip package |
|------------|--------|---|
| en_use_cmlm_md  | https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base | `pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.6/en_use_cmlm_md-0.4.6.tar.gz#en_use_cmlm_md-0.4.6` |
| en_use_cmlm_lg  | https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-large | `pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.6/en_use_cmlm_lg-0.4.6.tar.gz#en_use_cmlm_lg-0.4.6` |
| xx_use_cmlm  | https://tfhub.dev/google/universal-sentence-encoder-cmlm/multilingual-base | `pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.6/xx_use_cmlm-0.4.6.tar.gz#xx_use_cmlm-0.4.6` |
| xx_use_cmlm_br  | https://tfhub.dev/google/universal-sentence-encoder-cmlm/multilingual-base-br | `pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.6/xx_use_cmlm_br-0.4.6.tar.gz#xx_use_cmlm_br-0.4.6` |

## Usage

### Loading the model

If you installed one of the standalone model packages (see the tables above), you can use the usual spaCy API to load it:

```python
import spacy
nlp = spacy.load('en_use_md')
```

Otherwise you need to load the model in the following way:

```python
import spacy_universal_sentence_encoder
nlp = spacy_universal_sentence_encoder.load_model('xx_use_lg')
```

The third option is to add the model to your existing spaCy pipeline:

```python
import spacy
# this is your nlp object that can be any spaCy model
nlp = spacy.load('en_core_web_sm')

# add the pipeline stage (will be mapped to the most adequate model from the table above, en_use_md)
nlp.add_pipe('universal_sentence_encoder')
```

With all three options, the first time you load a given Universal Sentence Encoder model it is downloaded from TF Hub (see the section below to use an already downloaded model, or to change the location of the model files).

The last option (using `nlp.add_pipe`) can be customised with the following configurations:

- `use_model_url`: allows using a specific TFHub URL
- `preprocessor_url`: for TFHub models that need specific preprocessing with another TFHub model (e.g., see the documentation of the [CMLM models](https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base))
- `model_name`: loads a specific model instead of mapping the current (language, size) pair to one of the options in the tables above
- `enable_cache`: default `True`; enables an internal cache to avoid embedding the same text (doc/span/token) twice. This makes the computation faster (when enough duplicates are embedded) but increases memory usage, because all extracted embeddings are kept in the cache
- `debug`: default `False`; when `True`, shows debugging information.

To use these configurations, pass a dict as an additional argument when adding the pipe stage, for example:

```python
nlp.add_pipe('universal_sentence_encoder', config={'enable_cache': False})
```
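
For instance, the following sketch bypasses the default (language, size) mapping by selecting a model by name and turning on debug output; the model name comes from the tables above, and any of the listed names should work the same way:

```python
import spacy

nlp = spacy.load('en_core_web_sm')
# select a specific model instead of relying on the default mapping,
# and print debugging information while the component runs
nlp.add_pipe('universal_sentence_encoder',
             config={'model_name': 'en_use_lg', 'debug': True})

doc = nlp('A short test sentence.')
print(doc.vector.shape)
```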

### Use the embeddings

After adding the component to the pipeline, you can use the embeddings through the various properties and methods of Docs, Spans and Tokens:

```python
# load as before
import spacy
nlp = spacy.load('en_core_web_lg')
nlp.add_pipe('universal_sentence_encoder')

# get two documents
doc_1 = nlp('Hi there, how are you?')
doc_2 = nlp('Hello there, how are you doing today?')
# Inspect the shape of the Doc, Span and Token vectors
print(doc_1.vector.shape) # the full document representation
print(doc_1[3], doc_1[3].vector.shape) # the word "how"
print(doc_1[3:6], doc_1[3:6].vector.shape) # the span "how are you"

# or use the similarity method that is based on the vectors, on Doc, Span or Token
print(doc_1.similarity(doc_2[0:7]))
```
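
The `similarity` call above is based on these vectors. As a rough sanity check, the following sketch computes the cosine similarity of the two full-document vectors directly with numpy; the result should closely match the vector-based `similarity` output:

```python
import numpy as np

# cosine similarity between the two full-document vectors
v1, v2 = doc_1.vector, doc_2.vector
print(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```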

## Common issues

Here you can find the most common issues with possible solutions.

### Using a pre-downloaded model

If you want to use a model that you have already downloaded from TensorFlow Hub, belonging to the [Universal Sentence Encoder family](https://tfhub.dev/google/collections/universal-sentence-encoder/1), you can use it by doing the following:

- locate the full path of the folder where you have downloaded and extracted the model. Let's suppose the location is `$HOME/tfhub_models`
- rename the folder of the extracted model (the one directly containing the folder `variables` and the file `saved_model.pb`) to the sha1 hash of the TFHub model URL ([source](https://medium.com/@xianbao.qian/how-to-run-tf-hub-locally-without-internet-connection-4506b850a915)). The URL-to-sha1 mapping is the following:
  - [`en_use_md`](https://tfhub.dev/google/universal-sentence-encoder/4): `063d866c06683311b44b4992fd46003be952409c`
  - [`en_use_lg`](https://tfhub.dev/google/universal-sentence-encoder-large/5): `c9fe785512ca4a1b179831acb18a0c6bfba603dd`
  - [`xx_use_md`](https://tfhub.dev/google/universal-sentence-encoder-multilingual/3): `26c892ffbc8d7b032f5a95f316e2841ed4f1608c`
  - [`xx_use_lg`](https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3): `97e68b633b7cf018904eb965602b92c9f3ad14c9`
- set the environment variable `TFHUB_CACHE_DIR` to the location containing the renamed folder, for example `$HOME/tfhub_models` (set it before trying to download the model: `export TFHUB_CACHE_DIR=$HOME/tfhub_models`)
- Now load your model: it should detect that the model was already downloaded (a sketch of the whole procedure follows this list)
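
A minimal sketch of the same steps in Python, assuming (as in the article linked above) that TF Hub names its cache folders with the sha1 hash of the model URL; the paths are examples only:

```python
import hashlib
import os

# expected cache folder name: sha1 of the TFHub model URL
model_url = 'https://tfhub.dev/google/universal-sentence-encoder/4'
print(hashlib.sha1(model_url.encode('utf8')).hexdigest())
# if this matches the en_use_md hash listed above, the renamed folder is correct

# point TF Hub at the directory containing the renamed folder *before* loading the model
os.environ['TFHUB_CACHE_DIR'] = os.path.expanduser('~/tfhub_models')

import spacy_universal_sentence_encoder
nlp = spacy_universal_sentence_encoder.load_model('en_use_md')  # should reuse the local copy
```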

### Serialisation

When serialising and deserialising spaCy objects, spaCy does not restore `user_hooks` after deserialisation, so a plain call to `from_bytes` will not use the TensorFlow vectors and the resulting similarities won't be meaningful. For this reason the suggested solution (a round trip is sketched below) is:

- serialise with `bytes = doc.to_bytes()` normally
- deserialise with `spacy_universal_sentence_encoder.doc_from_bytes(nlp, bytes)` which will also restore the user hooks
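
A minimal round trip using the two calls above (assuming one of the models from the tables is installed or has already been downloaded):

```python
import spacy_universal_sentence_encoder

nlp = spacy_universal_sentence_encoder.load_model('en_use_md')
doc = nlp('Serialise me, please.')

# serialise normally
data = doc.to_bytes()

# deserialise with the helper so that the user hooks (and therefore the vectors) are restored
restored = spacy_universal_sentence_encoder.doc_from_bytes(nlp, data)
print(restored.vector.shape)
```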

### Multiprocessing

This library, relying on TensorFlow, is not fork-safe. This means that if you use this library across multiple processes (e.g. with a `multiprocessing.pool.Pool`), your processes will deadlock.
The solutions are:
- use a thread-based environment (e.g. `multiprocessing.pool.ThreadPool`), as in the sketch after this list
- only use this library inside the created processes (first create the processes, then import and use the library inside each of them)
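
A minimal sketch of the thread-based option; the texts and the pool size are placeholders:

```python
from multiprocessing.pool import ThreadPool

import spacy_universal_sentence_encoder

nlp = spacy_universal_sentence_encoder.load_model('en_use_md')
texts = ['first text', 'second text', 'third text']

def embed(text):
    return nlp(text).vector

# threads share the parent process, so the TensorFlow runtime is never forked
with ThreadPool(4) as pool:
    vectors = pool.map(embed, texts)
print(len(vectors))
```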

### Using `nlp.pipe` with multiple processes

spaCy does not restore user hooks (`UserWarning: [W109]`); therefore, if you use `nlp.pipe` with multiple processes, you won't be able to use `.vector` on `doc`, `span` and `token`. I am developing a workaround.
<!-- Use `._.vector` instead. -->




## Utils

To build and upload
```bash
# change version
VERSION=0.4.6
# change version references everywhere
# update locally installed package
pip install -r requirements.txt
# build the standalone models (8)
./build_models.sh
# build the archive at dist/spacy_universal_sentence_encoder-${VERSION}.tar.gz
python setup.py sdist
# upload to pypi
twine upload dist/spacy_universal_sentence_encoder-${VERSION}.tar.gz
# upload language packages to github
```

            
