| Field | Value |
| --- | --- |
| Name | llama-index-embeddings-deepinfra |
| Version | 0.1.1 |
| Summary | llama-index embeddings deepinfra integration |
| Author | Oguz Vuruskaner |
| Maintainer | None |
| License | MIT |
| Requires Python | <4.0,>=3.8.1 |
| Upload time | 2024-05-31 14:25:58 |
| Home page | None |
| Docs URL | None |
| Requirements | None recorded |
# LlamaIndex Embeddings Integration: Deepinfra

With this integration, you can use Deepinfra embedding models to generate embeddings for your text data.
The available models are listed on the [embeddings models](https://deepinfra.com/models/embeddings) page.

First, sign up on the [Deepinfra website](https://deepinfra.com/) and obtain an API token.
You can copy model IDs from the model cards and use them directly in your code.
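Since the usage example below loads the token from the environment via `load_dotenv`, one way to store it is a `.env` file in the project root. The variable name `DEEPINFRA_API_TOKEN` is an assumption here (it is the conventional name for this integration); the token can also be passed directly via the `api_token` constructor argument.

```shell
# Store the API token in a .env file so load_dotenv can pick it up.
# DEEPINFRA_API_TOKEN is the assumed variable name; adjust if your setup differs.
echo 'DEEPINFRA_API_TOKEN=your_token_here' >> .env
```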
## Installation
```bash
pip install llama-index llama-index-embeddings-deepinfra
```
## Usage
```python
from dotenv import load_dotenv, find_dotenv
from llama_index.embeddings.deepinfra import DeepInfraEmbeddingModel

# Load environment variables
_ = load_dotenv(find_dotenv())

# Initialize model with optional configuration
model = DeepInfraEmbeddingModel(
    model_id="BAAI/bge-large-en-v1.5",  # Use custom model ID
    api_token="YOUR_API_TOKEN",  # Optionally provide token here
    normalize=True,  # Optional normalization
    text_prefix="text: ",  # Optional text prefix
    query_prefix="query: ",  # Optional query prefix
)

# Example usage
response = model.get_text_embedding("hello world")

# Batch requests
texts = ["hello world", "goodbye world"]
response = model.get_text_embedding_batch(texts)

# Query requests
response = model.get_query_embedding("hello world")


# Asynchronous requests
async def main():
    text = "hello world"
    response = await model.aget_text_embedding(text)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```
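The embedding calls above return plain Python lists of floats, so standard vector math applies to the results. As a sketch (the `cosine_similarity` helper below is illustrative, not part of the package), two embeddings can be compared like this:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# With real embeddings you would compare the model's output, e.g.:
# emb_text = model.get_text_embedding("hello world")
# emb_query = model.get_query_embedding("greeting")
# print(cosine_similarity(emb_text, emb_query))

# Small fixed vectors for illustration:
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 1.0
```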
## Raw data

````json
{
  "_id": null,
  "home_page": null,
  "name": "llama-index-embeddings-deepinfra",
  "maintainer": null,
  "docs_url": null,
  "requires_python": "<4.0,>=3.8.1",
  "maintainer_email": null,
  "keywords": null,
  "author": "Oguz Vuruskaner",
  "author_email": "oguzvuruskaner@gmail.com",
  "download_url": "https://files.pythonhosted.org/packages/c5/39/c3350cbd670500b21db28726b74dac9dd4e3f030bf6ccf9f98ed431aebef/llama_index_embeddings_deepinfra-0.1.1.tar.gz",
  "platform": null,
  "description": "# LlamaIndex Embeddings Integration: Deepinfra\n\nWith this integration, you can use the Deepinfra embeddings model to get embeddings for your text data.\nHere is the link to the [embeddings models](https://deepinfra.com./models/embeddings).\n\nFirst, you need to sign up on the [Deepinfra website](https://deepinfra.com/) and get the API token.\nYou can copy model_ids over the model cards and start using them in your code.\n\n## Installation\n\n```bash\npip install llama-index llama-index-embeddings-deepinfra\n```\n\n## Usage\n\n```python\nfrom dotenv import load_dotenv, find_dotenv\nfrom llama_index.embeddings.deepinfra import DeepInfraEmbeddingModel\n\n# Load environment variables\n_ = load_dotenv(find_dotenv())\n\n# Initialize model with optional configuration\nmodel = DeepInfraEmbeddingModel(\n model_id=\"BAAI/bge-large-en-v1.5\", # Use custom model ID\n api_token=\"YOUR_API_TOKEN\", # Optionally provide token here\n normalize=True, # Optional normalization\n text_prefix=\"text: \", # Optional text prefix\n query_prefix=\"query: \", # Optional query prefix\n)\n\n# Example usage\nresponse = model.get_text_embedding(\"hello world\")\n\n# Batch requests\ntexts = [\"hello world\", \"goodbye world\"]\nresponse = model.get_text_embedding_batch(texts)\n\n# Query requests\nresponse = model.get_query_embedding(\"hello world\")\n\n\n# Asynchronous requests\nasync def main():\n text = \"hello world\"\n response = await model.aget_text_embedding(text)\n\n\nif __name__ == \"__main__\":\n import asyncio\n\n asyncio.run(main())\n```\n",
  "bugtrack_url": null,
  "license": "MIT",
  "summary": "llama-index embeddings deepinfra integration",
  "version": "0.1.1",
  "project_urls": null,
  "split_keywords": [],
  "urls": [
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "926d569b27746302e7fd4761a3891ba30cf07fe89a738b3d5d15ef4dfc631caf",
        "md5": "621d4caad77c9ed246f343ac488030c8",
        "sha256": "fe0a902d510f6770aab6eeadc49ce12eae2d72d9583b547a00fcf8c564c806f7"
      },
      "downloads": -1,
      "filename": "llama_index_embeddings_deepinfra-0.1.1-py3-none-any.whl",
      "has_sig": false,
      "md5_digest": "621d4caad77c9ed246f343ac488030c8",
      "packagetype": "bdist_wheel",
      "python_version": "py3",
      "requires_python": "<4.0,>=3.8.1",
      "size": 4450,
      "upload_time": "2024-05-31T14:25:57",
      "upload_time_iso_8601": "2024-05-31T14:25:57.340618Z",
      "url": "https://files.pythonhosted.org/packages/92/6d/569b27746302e7fd4761a3891ba30cf07fe89a738b3d5d15ef4dfc631caf/llama_index_embeddings_deepinfra-0.1.1-py3-none-any.whl",
      "yanked": false,
      "yanked_reason": null
    },
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "c539c3350cbd670500b21db28726b74dac9dd4e3f030bf6ccf9f98ed431aebef",
        "md5": "e368348aa00be7afa081309d546c707f",
        "sha256": "d59d6488166e16dfbcd8435a4d775a2a4d2a7e0cf80f6445504b7f8f0ecb2abd"
      },
      "downloads": -1,
      "filename": "llama_index_embeddings_deepinfra-0.1.1.tar.gz",
      "has_sig": false,
      "md5_digest": "e368348aa00be7afa081309d546c707f",
      "packagetype": "sdist",
      "python_version": "source",
      "requires_python": "<4.0,>=3.8.1",
      "size": 3842,
      "upload_time": "2024-05-31T14:25:58",
      "upload_time_iso_8601": "2024-05-31T14:25:58.310844Z",
      "url": "https://files.pythonhosted.org/packages/c5/39/c3350cbd670500b21db28726b74dac9dd4e3f030bf6ccf9f98ed431aebef/llama_index_embeddings_deepinfra-0.1.1.tar.gz",
      "yanked": false,
      "yanked_reason": null
    }
  ],
  "upload_time": "2024-05-31 14:25:58",
  "github": false,
  "gitlab": false,
  "bitbucket": false,
  "codeberg": false,
  "lcname": "llama-index-embeddings-deepinfra"
}
````