fastembed


Name: fastembed
Version: 0.2.6
Home page: https://github.com/qdrant/fastembed
Summary: Fast, light, accurate library built for retrieval embedding generation
Upload time: 2024-04-01 15:25:11
Author: NirantK
Requires Python: <3.13,>=3.8.0
License: Apache License
Keywords: vector, embedding, neural, search, qdrant, sentence-transformers
# ⚡️ What is FastEmbed?

FastEmbed is a lightweight, fast Python library built for embedding generation. We [support popular text models](https://qdrant.github.io/fastembed/examples/Supported_Models/). Please [open a GitHub issue](https://github.com/qdrant/fastembed/issues/new) if you want us to add a new model.

The default text embedding model (`TextEmbedding`) is Flag Embedding, which is ranked on the [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard. It supports "query" and "passage" prefixes for the input text. Here is an example of [Retrieval Embedding Generation](https://qdrant.github.io/fastembed/examples/Retrieval_with_FastEmbed/) and of using [FastEmbed with Qdrant](https://qdrant.github.io/fastembed/examples/Usage_With_Qdrant/).
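The "query" and "passage" prefixes can be pictured with a small sketch: conceptually, the task prefix is prepended to each input string before it is tokenized, so queries and passages land in slightly different regions of the embedding space. The helper below is purely illustrative, not fastembed's API:

```python
from typing import List

def with_prefix(texts: List[str], kind: str) -> List[str]:
    # Hypothetical helper: the prefix marks each input as a query or a passage
    assert kind in ("query", "passage")
    return [f"{kind}: {text}" for text in texts]

queries = with_prefix(["How do I generate embeddings?"], "query")
passages = with_prefix(["FastEmbed generates embeddings with ONNX Runtime."], "passage")
```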

## 📈 Why FastEmbed?

1. Light: FastEmbed is a lightweight library with few external dependencies. We don't require a GPU and don't download GBs of PyTorch dependencies; we use the ONNX Runtime instead. This makes FastEmbed a great candidate for serverless runtimes like AWS Lambda.

2. Fast: FastEmbed is designed for speed. We use the ONNX Runtime, which is faster than PyTorch. We also use data-parallelism for encoding large datasets.

3. Accurate: FastEmbed is more accurate than OpenAI Ada-002. We also [support](https://qdrant.github.io/fastembed/examples/Supported_Models/) an ever-expanding set of models, including a few multilingual ones.
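The data-parallelism in point 2 can be pictured as splitting the corpus into chunks and encoding each chunk in a separate worker. The sketch below is a toy illustration of that pattern; `encode_chunk` is a hypothetical stand-in (real workers run ONNX sessions):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List

def encode_chunk(chunk: List[str]) -> List[str]:
    # Hypothetical stand-in for a per-worker encode call
    return [f"vector-for:{text}" for text in chunk]

docs = [f"document {i}" for i in range(8)]
n_workers = 4
# Round-robin split of the corpus into one chunk per worker
chunks = [docs[i::n_workers] for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    per_chunk = list(pool.map(encode_chunk, chunks))

vectors = [v for chunk in per_chunk for v in chunk]  # flatten worker outputs
```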

## 🚀 Installation

To install the FastEmbed library, use pip:

```bash
pip install fastembed
```

## 📖 Quickstart

```python
from fastembed import TextEmbedding
from typing import List

# Example list of documents
documents: List[str] = [
    "This is built to be faster and lighter than other embedding libraries e.g. Transformers, Sentence-Transformers, etc.",
    "fastembed is supported by and maintained by Qdrant.",
]

# This will trigger the model download and initialization
embedding_model = TextEmbedding()
print("The model BAAI/bge-small-en-v1.5 is ready to use.")

embeddings_generator = embedding_model.embed(documents)  # note: embed() returns a generator
# Materialize the generator into a list (which can then become a NumPy array)
embeddings_list = list(embeddings_generator)
len(embeddings_list[0])  # each vector has 384 dimensions
```
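Once the generator is materialized, the vectors stack naturally into a NumPy array, and cosine similarity between documents is a one-liner. A minimal sketch, with synthetic 384-dimensional vectors standing in for real model output:

```python
import numpy as np

# Synthetic stand-ins for real embeddings (BAAI/bge-small-en-v1.5 emits 384-dim vectors)
rng = np.random.default_rng(0)
embeddings_list = [rng.normal(size=384) for _ in range(2)]

matrix = np.stack(embeddings_list)  # shape: (n_docs, 384)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Dot product of the vectors divided by the product of their norms
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

score = cosine_similarity(matrix[0], matrix[1])
```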

## Usage with Qdrant

Installation with Qdrant Client in Python:

```bash
pip install qdrant-client[fastembed]
```

On zsh, you might have to quote the extra: `pip install 'qdrant-client[fastembed]'`.

```python
from qdrant_client import QdrantClient

# Initialize the client
client = QdrantClient("localhost", port=6333) # For production
# client = QdrantClient(":memory:") # For small experiments

# Prepare your documents, metadata, and IDs
docs = ["Qdrant has Langchain integrations", "Qdrant also has Llama Index integrations"]
metadata = [
    {"source": "Langchain-docs"},
    {"source": "Llama-index-docs"},
]
ids = [42, 2]

# If you want to change the model:
# client.set_model("sentence-transformers/all-MiniLM-L6-v2")
# List of supported models: https://qdrant.github.io/fastembed/examples/Supported_Models

# Use the new add() instead of upsert()
# This internally calls embed() of the configured embedding model
client.add(
    collection_name="demo_collection",
    documents=docs,
    metadata=metadata,
    ids=ids
)

search_result = client.query(
    collection_name="demo_collection",
    query_text="This is a query document"
)
print(search_result)
```
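Under the hood, `add()` embeds and stores each document alongside its metadata and ID, and `query()` embeds the query text and returns the stored documents ranked by vector similarity. A toy in-memory sketch of that flow, with word overlap standing in for real vector similarity (none of these helpers are qdrant-client APIs):

```python
collection = []  # list of (id, document, metadata) tuples

def toy_add(docs, metadata, ids):
    # Sketch of add(): store each document with its metadata and ID
    collection.extend(zip(ids, docs, metadata))

def toy_query(query_text, limit=10):
    # Sketch of query(): score every stored document against the query
    # (word overlap here; the real client compares embedding vectors)
    q = set(query_text.lower().split())
    scored = [
        (len(q & set(doc.lower().split())), doc_id, doc, meta)
        for doc_id, doc, meta in collection
    ]
    scored.sort(reverse=True)  # best match first
    return scored[:limit]

toy_add(
    ["Qdrant has Langchain integrations", "Qdrant also has Llama Index integrations"],
    [{"source": "Langchain-docs"}, {"source": "Llama-index-docs"}],
    [42, 2],
)
top = toy_query("Langchain integrations with Qdrant")
```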

#### Similar Work

Ilyas M. wrote about using [FlagEmbeddings with Optimum](https://twitter.com/IlysMoutawwakil/status/1705215192425288017) over CUDA.

