# ⚡️ What is FastEmbed?
FastEmbed is a lightweight, fast Python library built for embedding generation. We [support popular text models](https://qdrant.github.io/fastembed/examples/Supported_Models/). Please [open a GitHub issue](https://github.com/qdrant/fastembed/issues/new) if you want us to add a new model.
The default text embedding (`TextEmbedding`) model is Flag Embedding, listed on the [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard. It supports "query" and "passage" prefixes for the input text. Here is an example of [Retrieval Embedding Generation](https://qdrant.github.io/fastembed/qdrant/Retrieval_with_FastEmbed/), and here is how to use [FastEmbed with Qdrant](https://qdrant.github.io/fastembed/qdrant/Usage_With_Qdrant/).
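Because the default model distinguishes queries from passages, `TextEmbedding` exposes `query_embed` and `passage_embed` helpers that apply the right prefix for you. A minimal sketch:
```python
from fastembed import TextEmbedding

model = TextEmbedding()  # defaults to BAAI/bge-small-en-v1.5

# the helpers prepend the model's "query"/"passage" prefixes before encoding
query_vectors = list(model.query_embed("What is Qdrant?"))
passage_vectors = list(model.passage_embed(["Qdrant is a vector database."]))
```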
## 📈 Why FastEmbed?
1. Light: FastEmbed is a lightweight library with few external dependencies. We don't require a GPU and don't download GBs of PyTorch dependencies; instead we use the ONNX Runtime. This makes it a great candidate for serverless runtimes like AWS Lambda.
2. Fast: FastEmbed is designed for speed. We use the ONNX Runtime, which is faster than PyTorch, and data parallelism for encoding large datasets (see the sketch after this list).
3. Accurate: FastEmbed's default model outperforms OpenAI Ada-002. We also [support](https://qdrant.github.io/fastembed/examples/Supported_Models/) an ever-expanding set of models, including a few multilingual ones.
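To illustrate the data-parallelism point from item 2, `embed` accepts `batch_size` and `parallel` arguments; a minimal sketch (the corpus here is made up):
```python
from fastembed import TextEmbedding

model = TextEmbedding()
large_corpus = [f"document number {i}" for i in range(10_000)]  # placeholder data

# parallel=0 uses all available cores; batch_size controls the per-batch chunk size
embeddings = list(model.embed(large_corpus, batch_size=256, parallel=0))
```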
## 🚀 Installation
To install the FastEmbed library, pip works best. You can install it with or without GPU support:
```bash
pip install fastembed
# or with GPU support
pip install fastembed-gpu
```
## 📖 Quickstart
```python
from fastembed import TextEmbedding
# Example list of documents
documents: list[str] = [
    "This is built to be faster and lighter than other embedding libraries e.g. Transformers, Sentence-Transformers, etc.",
    "fastembed is supported by and maintained by Qdrant.",
]
# This will trigger the model download and initialization
embedding_model = TextEmbedding()
print("The model BAAI/bge-small-en-v1.5 is ready to use.")
embeddings_generator = embedding_model.embed(documents)  # reminder: this is a lazy generator
embeddings_list = list(embeddings_generator)  # you can also materialize the generator as a list (and that as a numpy array)
len(embeddings_list[0])  # vector of 384 dimensions
```
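The comment above mentions converting the list to a NumPy array; that step is plain NumPy rather than a fastembed API:
```python
import numpy as np

embeddings_array = np.stack(embeddings_list)  # shape: (2, 384)
```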
FastEmbed supports a variety of models for different tasks and modalities. The list of all available models can be found [here](https://qdrant.github.io/fastembed/examples/Supported_Models/).
### 🎒 Dense text embeddings
```python
from fastembed import TextEmbedding
model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")
embeddings = list(model.embed(documents))
# [
# array([-0.1115, 0.0097, 0.0052, 0.0195, ...], dtype=float32),
# array([-0.1019, 0.0635, -0.0332, 0.0522, ...], dtype=float32)
# ]
```
Dense text embeddings can also be produced with custom models that are not in the list of supported models.
```python
from fastembed import TextEmbedding
from fastembed.common.model_description import PoolingType, ModelSource
TextEmbedding.add_custom_model(
    model="intfloat/multilingual-e5-small",
    pooling=PoolingType.MEAN,
    normalization=True,
    sources=ModelSource(hf="intfloat/multilingual-e5-small"),  # ModelSource also accepts a `url` to load files from private storage
    dim=384,
    model_file="onnx/model.onnx",  # can be used to load an already supported model with another optimization or quantization, e.g. onnx/model_O4.onnx
)
model = TextEmbedding(model_name="intfloat/multilingual-e5-small")
embeddings = list(model.embed(documents))
```
### 🔱 Sparse text embeddings
* SPLADE++
```python
from fastembed import SparseTextEmbedding
model = SparseTextEmbedding(model_name="prithivida/Splade_PP_en_v1")
embeddings = list(model.embed(documents))
# [
# SparseEmbedding(indices=[ 17, 123, 919, ... ], values=[0.71, 0.22, 0.39, ...]),
# SparseEmbedding(indices=[ 38, 12, 91, ... ], values=[0.11, 0.22, 0.39, ...])
# ]
```
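Each `SparseEmbedding` holds aligned `indices` and `values` arrays; a minimal sketch of turning one into a plain dict of token id to weight:
```python
first = embeddings[0]  # SparseEmbedding from the block above

# map each active token id to its weight
sparse_as_dict = dict(zip(first.indices.tolist(), first.values.tolist()))
```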
### 🦥 Late interaction models (aka ColBERT)
```python
from fastembed import LateInteractionTextEmbedding
model = LateInteractionTextEmbedding(model_name="colbert-ir/colbertv2.0")
embeddings = list(model.embed(documents))
# [
# array([
# [-0.1115, 0.0097, 0.0052, 0.0195, ...],
# [-0.1019, 0.0635, -0.0332, 0.0522, ...],
# ]),
# array([
# [-0.9019, 0.0335, -0.0032, 0.0991, ...],
# [-0.2115, 0.8097, 0.1052, 0.0195, ...],
# ]),
# ]
```
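Late-interaction models keep one vector per token, and relevance is typically computed with MaxSim: for each query token take its best-matching document token, then sum. A minimal NumPy sketch of that scoring (the scoring function is not a fastembed API; `query_embed` is the library's query-side counterpart of `embed`):
```python
import numpy as np

def maxsim_score(query_tokens: np.ndarray, doc_tokens: np.ndarray) -> float:
    sims = query_tokens @ doc_tokens.T  # pairwise token similarities
    return float(sims.max(axis=1).sum())  # best doc match per query token, summed

query_emb = list(model.query_embed("Who maintains Qdrant?"))[0]
scores = [maxsim_score(query_emb, doc_emb) for doc_emb in embeddings]
```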
### 🖼️ Image embeddings
```python
from fastembed import ImageEmbedding
images = [
    "./path/to/image1.jpg",
    "./path/to/image2.jpg",
]
model = ImageEmbedding(model_name="Qdrant/clip-ViT-B-32-vision")
embeddings = list(model.embed(images))
# [
# array([-0.1115, 0.0097, 0.0052, 0.0195, ...], dtype=float32),
# array([-0.1019, 0.0635, -0.0332, 0.0522, ...], dtype=float32)
# ]
```
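For text-to-image search, the vision model is paired with its text tower; a minimal sketch, assuming the matching `Qdrant/clip-ViT-B-32-text` model is available among the supported text models:
```python
import numpy as np
from fastembed import TextEmbedding

text_model = TextEmbedding(model_name="Qdrant/clip-ViT-B-32-text")
query_vector = list(text_model.embed(["a photo of a dog"]))[0]

# cosine similarity between the text query and the image vectors from the block above
scores = [
    float(np.dot(query_vector, img) / (np.linalg.norm(query_vector) * np.linalg.norm(img)))
    for img in embeddings
]
```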
### Late interaction multimodal models (ColPali)
```python
from fastembed import LateInteractionMultimodalEmbedding
doc_images = [
    "./path/to/qdrant_pdf_doc_1_screenshot.jpg",
    "./path/to/colpali_pdf_doc_2_screenshot.jpg",
]
query = "What is Qdrant?"
model = LateInteractionMultimodalEmbedding(model_name="Qdrant/colpali-v1.3-fp16")
doc_images_embeddings = list(model.embed_image(doc_images))
# shape (2, 1030, 128)
# [array([[-0.03353882, -0.02090454, ..., -0.15576172, -0.07678223]], dtype=float32)]
query_embedding = list(model.embed_text(query))  # embed_text also returns a generator
# shape (1, 20, 128)
# [array([[-0.00218201, 0.14758301, ..., -0.02207947, 0.16833496]], dtype=float32)]
```
### 🔄 Rerankers
```python
from fastembed.rerank.cross_encoder import TextCrossEncoder
query = "Who is maintaining Qdrant?"
documents: list[str] = [
    "This is built to be faster and lighter than other embedding libraries e.g. Transformers, Sentence-Transformers, etc.",
    "fastembed is supported by and maintained by Qdrant.",
]
encoder = TextCrossEncoder(model_name="Xenova/ms-marco-MiniLM-L-6-v2")
scores = list(encoder.rerank(query, documents))
# [-11.48061752319336, 5.472434997558594]
```
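The scores are raw relevance logits (higher means more relevant), so producing a ranking is just a sort:
```python
ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(f"{score:8.3f}  {doc}")
```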
Text cross encoders can also be extended with custom models that are not in the list of supported models.
```python
from fastembed.rerank.cross_encoder import TextCrossEncoder
from fastembed.common.model_description import ModelSource
TextCrossEncoder.add_custom_model(
    model="Xenova/ms-marco-MiniLM-L-4-v2",
    model_file="onnx/model.onnx",
    sources=ModelSource(hf="Xenova/ms-marco-MiniLM-L-4-v2"),
)
model = TextCrossEncoder(model_name="Xenova/ms-marco-MiniLM-L-4-v2")
scores = list(model.rerank_pairs([
    ("What is AI?", "Artificial intelligence is ..."),
    ("What is ML?", "Machine learning is ..."),
]))
```
## ⚡️ FastEmbed on a GPU
FastEmbed supports running on GPU devices.
It requires installation of the `fastembed-gpu` package.
```bash
pip install fastembed-gpu
```
Check our [example](https://qdrant.github.io/fastembed/examples/FastEmbed_GPU/) for detailed instructions, CUDA 12.x support, and troubleshooting of common issues.
```python
from fastembed import TextEmbedding
embedding_model = TextEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    providers=["CUDAExecutionProvider"],
)
print("The model BAAI/bge-small-en-v1.5 is ready to use on a GPU.")
```
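If CUDA might be unavailable at runtime, ONNX Runtime accepts an ordered list of providers and uses the first one that works; a minimal sketch of a CPU fallback:
```python
from fastembed import TextEmbedding

embedding_model = TextEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],  # falls back to CPU
)
```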
## Usage with Qdrant
Installation with Qdrant Client in Python:
```bash
pip install qdrant-client[fastembed]
```
or
```bash
pip install qdrant-client[fastembed-gpu]
```
On zsh you might have to quote the extras: `pip install 'qdrant-client[fastembed]'`.
```python
from qdrant_client import QdrantClient, models

# Initialize the client
client = QdrantClient("localhost", port=6333)  # For production
# client = QdrantClient(":memory:")  # For experimentation

model_name = "sentence-transformers/all-MiniLM-L6-v2"
payload = [
    {"document": "Qdrant has Langchain integrations", "source": "Langchain-docs"},
    {"document": "Qdrant also has Llama Index integrations", "source": "LlamaIndex-docs"},
]
docs = [models.Document(text=data["document"], model=model_name) for data in payload]
ids = [42, 2]

client.create_collection(
    "demo_collection",
    vectors_config=models.VectorParams(
        size=client.get_embedding_size(model_name), distance=models.Distance.COSINE
    ),
)

client.upload_collection(
    collection_name="demo_collection",
    vectors=docs,
    ids=ids,
    payload=payload,
)

search_result = client.query_points(
    collection_name="demo_collection",
    query=models.Document(text="This is a query document", model=model_name),
).points
print(search_result)
```
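Because the payload is stored next to the vectors, the semantic query can be combined with a payload filter; a minimal sketch using the standard qdrant-client filter types:
```python
filtered_result = client.query_points(
    collection_name="demo_collection",
    query=models.Document(text="Which integrations does Qdrant have?", model=model_name),
    query_filter=models.Filter(
        must=[models.FieldCondition(key="source", match=models.MatchValue(value="Langchain-docs"))]
    ),
).points
print(filtered_result)
```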