sentence-transformers

- Name: sentence-transformers
- Version: 2.6.1
- Home page: https://www.SBERT.net
- Summary: Multilingual text embeddings
- Upload time: 2024-03-26 08:53:22
- Author: Nils Reimers
- Requires Python: >=3.8.0
- License: Apache License 2.0
- Keywords: transformer networks, BERT, XLNet, sentence embedding, PyTorch, NLP, deep learning
            <!--- BADGES: START --->
[![GitHub - License](https://img.shields.io/github/license/UKPLab/sentence-transformers?logo=github&style=flat&color=green)][#github-license]
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/sentence-transformers?logo=pypi&style=flat&color=blue)][#pypi-package]
[![PyPI - Package Version](https://img.shields.io/pypi/v/sentence-transformers?logo=pypi&style=flat&color=orange)][#pypi-package]
[![Conda - Platform](https://img.shields.io/conda/pn/conda-forge/sentence-transformers?logo=anaconda&style=flat)][#conda-forge-package]
[![Conda (channel only)](https://img.shields.io/conda/vn/conda-forge/sentence-transformers?logo=anaconda&style=flat&color=orange)][#conda-forge-package]
[![Docs - GitHub.io](https://img.shields.io/static/v1?logo=github&style=flat&color=pink&label=docs&message=sentence-transformers)][#docs-package]
<!--- 
[![PyPI - Downloads](https://img.shields.io/pypi/dm/sentence-transformers?logo=pypi&style=flat&color=green)][#pypi-package]
[![Conda](https://img.shields.io/conda/dn/conda-forge/sentence-transformers?logo=anaconda)][#conda-forge-package] 
--->

[#github-license]: https://github.com/UKPLab/sentence-transformers/blob/master/LICENSE
[#pypi-package]: https://pypi.org/project/sentence-transformers/
[#conda-forge-package]: https://anaconda.org/conda-forge/sentence-transformers
[#docs-package]: https://www.sbert.net/
<!--- BADGES: END --->

# Sentence Transformers: Multilingual Sentence, Paragraph, and Image Embeddings using BERT & Co.

This framework provides an easy method to compute dense vector representations for **sentences**, **paragraphs**, and **images**. The models are based on transformer networks like BERT / RoBERTa / XLM-RoBERTa and achieve state-of-the-art performance on various tasks. Text is embedded in a vector space such that similar texts are close together and can be found efficiently using cosine similarity.

We provide an increasing number of **[state-of-the-art pretrained models](https://www.sbert.net/docs/pretrained_models.html)** for more than 100 languages, fine-tuned for various use cases.

Further, this framework allows easy **[fine-tuning of custom embedding models](https://www.sbert.net/docs/training/overview.html)** to achieve maximal performance on your specific task.

For the **full documentation**, see **[www.SBERT.net](https://www.sbert.net)**.

The following publications are integrated in this framework:

- [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084) (EMNLP 2019)
- [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813) (EMNLP 2020)
- [Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks](https://arxiv.org/abs/2010.08240) (NAACL 2021)
- [The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes](https://arxiv.org/abs/2012.14210) (arXiv 2020)
- [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979) (arXiv 2021)
- [BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models](https://arxiv.org/abs/2104.08663) (arXiv 2021)
- [Matryoshka Representation Learning](https://arxiv.org/abs/2205.13147) (arXiv 2022)

## Installation

We recommend **Python 3.8** or higher, **[PyTorch 1.11.0](https://pytorch.org/get-started/locally/)** or higher and **[transformers v4.32.0](https://github.com/huggingface/transformers)** or higher. The code does **not** work with Python 2.7.

**Install with pip**

Install *sentence-transformers* with `pip`:

```
pip install -U sentence-transformers
```

**Install with conda**

You can install *sentence-transformers* with `conda`:

```
conda install -c conda-forge sentence-transformers
```

**Install from source**

Alternatively, you can clone the latest version from the [repository](https://github.com/UKPLab/sentence-transformers) and install it directly from source:

```
git clone https://github.com/UKPLab/sentence-transformers
cd sentence-transformers
pip install -e .
```

**PyTorch with CUDA**

If you want to use a GPU / CUDA, you must install PyTorch with a matching CUDA version. Follow
[PyTorch - Get Started](https://pytorch.org/get-started/locally/) for details on how to install PyTorch.
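
Once PyTorch is installed, you can check whether a CUDA device is actually visible to it (a quick diagnostic, independent of sentence-transformers):

````python
import torch

# True only if PyTorch was built with CUDA support and a GPU is detected
print(torch.cuda.is_available())
````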

## Getting Started

See [Quickstart](https://www.sbert.net/docs/quickstart.html) in our documentation.

[This example](https://github.com/UKPLab/sentence-transformers/tree/master/examples/applications/computing-embeddings/computing_embeddings.py) shows you how to use an already trained Sentence Transformer model to embed sentences for another task.

First download a pretrained model.

````python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
````

Then provide some sentences to the model.

````python
sentences = [
    "This framework generates embeddings for each input sentence",
    "Sentences are passed as a list of string.",
    "The quick brown fox jumps over the lazy dog.",
]
sentence_embeddings = model.encode(sentences)
````

And that's it. We now have the embeddings as NumPy arrays, one per input sentence.

````python
for sentence, embedding in zip(sentences, sentence_embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding)
    print("")
````
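
The embeddings can then be compared with cosine similarity, for example via the library's `util.cos_sim` helper (a minimal sketch continuing the snippet above):

````python
from sentence_transformers import util

# Pairwise cosine similarities between all sentence embeddings
similarities = util.cos_sim(sentence_embeddings, sentence_embeddings)
print(similarities)  # 3x3 matrix; the first two sentences should score highest
````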

## Pre-Trained Models

We provide a large list of [Pretrained Models](https://www.sbert.net/docs/pretrained_models.html) for more than 100 languages. Some models are general-purpose, while others produce embeddings for specific use cases. Pre-trained models can be loaded by just passing the model name: `SentenceTransformer('model_name')`.

[»  Full list of pretrained models](https://www.sbert.net/docs/pretrained_models.html)
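
For example, a multilingual model (here `paraphrase-multilingual-MiniLM-L12-v2`, one of the pretrained models from the list above) embeds text from different languages into a shared vector space:

````python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# The same sentence in English and German should land close together
embeddings = model.encode(["The weather is nice today.", "Das Wetter ist heute schön."])
print(util.cos_sim(embeddings[0], embeddings[1]))
````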

## Training

This framework allows you to fine-tune your own sentence embedding models, so that you get task-specific sentence embeddings. You have various options to choose from to get the best embeddings for your specific task.

See [Training Overview](https://www.sbert.net/docs/training/overview.html) for an introduction to training your own embedding models, and the minimal sketch after the highlights below. We provide [various examples](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training) of how to train models on various datasets.

Some highlights are:
- Support of various transformer networks including BERT, RoBERTa, XLM-R, DistilBERT, Electra, BART, ...
- Multi-Lingual and multi-task learning
- Evaluation during training to find optimal model
- [20+ loss functions](https://www.sbert.net/docs/package_reference/losses.html) that let you tune models specifically for semantic search, paraphrase mining, semantic similarity comparison, and clustering, with objectives such as triplet loss and contrastive loss
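
As a minimal sketch of the fine-tuning API (the two labeled pairs below are toy data; real training needs a proper dataset):

````python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy data: sentence pairs with a similarity label in [0, 1]
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=0.9),
    InputExample(texts=["A man is eating food.", "The sky is blue."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.CosineSimilarityLoss(model)

# One short epoch, just to exercise the API
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
````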

## Performance

Our models are evaluated extensively on 15+ datasets, including challenging domains like tweets, Reddit posts, and emails. They achieve by far the **best performance** among all available sentence embedding methods. Further, we provide several **smaller models** that are **optimized for speed**.

[» Full list of pretrained models](https://www.sbert.net/docs/pretrained_models.html)

## Application Examples

You can use this framework for:

- [Computing Sentence Embeddings](https://www.sbert.net/examples/applications/computing-embeddings/README.html)
- [Semantic Textual Similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html)
- [Clustering](https://www.sbert.net/examples/applications/clustering/README.html)
- [Paraphrase Mining](https://www.sbert.net/examples/applications/paraphrase-mining/README.html)
- [Translated Sentence Mining](https://www.sbert.net/examples/applications/parallel-sentence-mining/README.html)
- [Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
- [Retrieve & Re-Rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html)
- [Text Summarization](https://www.sbert.net/examples/applications/text-summarization/README.html)
- [Multilingual Image Search, Clustering & Duplicate Detection](https://www.sbert.net/examples/applications/image-search/README.html)

and many more use cases.

For all examples, see [examples/applications](https://github.com/UKPLab/sentence-transformers/tree/master/examples/applications).
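
For instance, semantic search over a small corpus can be sketched with the library's `util.semantic_search` helper (corpus and query here are toy data):

````python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = ["A man is eating food.", "A cheetah chases prey.", "The new movie is great."]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("What do cheetahs hunt?", convert_to_tensor=True)

# Per query, returns the top_k corpus entries ranked by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], hit["score"])
````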

## Citing & Authors

If you find this repository helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):

```bibtex 
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

If you use one of the multilingual models, feel free to cite our publication [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813):

```bibtex
@inproceedings{reimers-2020-multilingual-sentence-bert,
    title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2020",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2004.09813",
}
```

Please have a look at [Publications](https://www.sbert.net/docs/publications.html) for all of our publications that are integrated into Sentence Transformers.

Contact person: Tom Aarsen, [tom.aarsen@huggingface.co](mailto:tom.aarsen@huggingface.co)

https://www.ukp.tu-darmstadt.de/

Don't hesitate to open an issue if something is broken (and it shouldn't be) or if you have further questions.

> This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

            
