| Field | Value |
| --- | --- |
| Name | llama-index-embeddings-nebius |
| Version | 0.2.0 |
| Summary | llama-index embeddings Nebius AI Studio integration |
| Author | Your Name |
| License | MIT |
| Requires Python | <4.0,>=3.8.1 |
| Upload time | 2024-11-20 16:10:03 |
# LlamaIndex Embeddings Integration: [Nebius AI Studio](https://studio.nebius.ai/)
## Overview
Integrates with the Nebius AI Studio API, which provides access to state-of-the-art open-source text embedding models.
## Installation
```bash
pip install llama-index-embeddings-nebius
```
## Usage
### Initialization
#### With environment variables
```.env
NEBIUS_API_KEY=your_api_key
```
```python
from llama_index.embeddings.nebius import NebiusEmbedding
embed_model = NebiusEmbedding(model_name="BAAI/bge-en-icl")
```
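The client reads the key from the `NEBIUS_API_KEY` environment variable, so in a script you can also set it programmatically before constructing the model. A minimal sketch (the key value is a placeholder):

```python
import os

# Placeholder key; substitute your real Nebius AI Studio API key.
os.environ["NEBIUS_API_KEY"] = "your_api_key"

# Any client constructed after this point can pick the key up
# from the environment instead of an explicit api_key argument.
print(os.getenv("NEBIUS_API_KEY"))
```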
#### Without environment variables
```python
from llama_index.embeddings.nebius import NebiusEmbedding
embed_model = NebiusEmbedding(
    api_key="your_api_key", model_name="BAAI/bge-en-icl"
)
```
### Generating embeddings
#### Basic usage
```python
text = "Everyone loves justice at another person's expense"
embeddings = embed_model.get_text_embedding(text)
print(embeddings[:5])
```
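`get_text_embedding` returns a plain list of floats, so downstream similarity math needs nothing beyond the standard library. A minimal sketch of cosine similarity between two embeddings (short stand-in vectors are used here in place of real API output):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Stand-ins for two embed_model.get_text_embedding(...) results.
vec_a = [0.1, 0.3, 0.5]
vec_b = [0.2, 0.1, 0.4]
print(round(cosine_similarity(vec_a, vec_b), 4))
```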
#### Asynchronous usage
```python
text = "Everyone loves justice at another person's expense"
embeddings = await embed_model.aget_text_embedding(text)
print(embeddings[:5])
```
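`aget_text_embedding` is a coroutine, so the `await` above only works inside an async context such as a notebook. In a plain script you would drive it with `asyncio.run`; a sketch, with a hypothetical stand-in coroutine in place of the real model call:

```python
import asyncio


# Stand-in for embed_model.aget_text_embedding(text); the real call
# awaits the Nebius API and returns a list of floats.
async def fake_aget_text_embedding(text: str) -> list[float]:
    await asyncio.sleep(0)  # simulate the awaitable network call
    return [float(len(text)), 0.0, 0.0]


async def main() -> None:
    embeddings = await fake_aget_text_embedding("Everyone loves justice")
    print(embeddings[:3])


asyncio.run(main())
```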
#### Batched usage
```python
texts = [
    "As the hours pass",
    "I will let you know",
    "That I need to ask",
    "Before I'm alone",
]
embeddings = embed_model.get_text_embedding_batch(texts)
print(*[x[:3] for x in embeddings], sep="\n")
```
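Batched embeddings come back in the same order as the input texts, which makes it easy to pair each text with its vector and rank the texts by similarity to a query. A sketch using stand-in vectors (real code would take `embeddings` from `get_text_embedding_batch` and `query_embedding` from `get_text_embedding`):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


texts = ["As the hours pass", "I will let you know"]
# Stand-ins for get_text_embedding_batch(texts) / get_text_embedding(query).
embeddings = [[0.9, 0.1], [0.1, 0.9]]
query_embedding = [1.0, 0.0]

# Pair each text with its vector, most similar to the query first.
ranked = sorted(
    zip(texts, embeddings),
    key=lambda pair: cosine_similarity(query_embedding, pair[1]),
    reverse=True,
)
print([text for text, _ in ranked])
```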
#### Batched asynchronous usage
```python
texts = [
    "As the hours pass",
    "I will let you know",
    "That I need to ask",
    "Before I'm alone",
]
embeddings = await embed_model.aget_text_embedding_batch(texts)
print(*[x[:3] for x in embeddings], sep="\n")
```