| Field | Value |
| --- | --- |
| Name | llama-index-embeddings-nebius |
| Version | 0.4.0 |
| home_page | None |
| Summary | llama-index embeddings Nebius AI Studio integration |
| upload_time | 2025-07-30 21:31:05 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | <4.0,>=3.9 |
| license | None |
| keywords | None |
| VCS | None |
| bugtrack_url | None |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# LlamaIndex Embeddings Integration: [Nebius AI Studio](https://studio.nebius.ai/)
## Overview
This package integrates LlamaIndex with the Nebius AI Studio API, which provides access to state-of-the-art open-source text embedding models.
## Installation
```bash
pip install llama-index-embeddings-nebius
```
## Usage
### Initialization
#### With environment variables

If `NEBIUS_API_KEY` is set in your environment (for example via a `.env` file loaded by your tooling), the API key argument can be omitted:

```.env
NEBIUS_API_KEY=your_api_key
```
```python
from llama_index.embeddings.nebius import NebiusEmbedding
embed_model = NebiusEmbedding(model_name="BAAI/bge-en-icl")
```
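
Plain `.env` files are not read by Python automatically. A minimal sketch, assuming the key is placed in the process environment before the model is constructed (the key value below is a placeholder):

```python
import os

from llama_index.embeddings.nebius import NebiusEmbedding

# Placeholder key: in practice, export NEBIUS_API_KEY in your shell or load it
# with a tool such as python-dotenv before this point.
os.environ["NEBIUS_API_KEY"] = "your_api_key"

# With the environment variable set, no api_key argument is needed.
embed_model = NebiusEmbedding(model_name="BAAI/bge-en-icl")
```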
#### Without environment variables
```python
from llama_index.embeddings.nebius import NebiusEmbedding
embed_model = NebiusEmbedding(
    api_key="your_api_key", model_name="BAAI/bge-en-icl"
)
```
### Launching
#### Basic usage
```python
text = "Everyone loves justice at another person's expense"
embeddings = embed_model.get_text_embedding(text)
print(embeddings[:5])
```
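
LlamaIndex embedding models distinguish document text from retrieval queries. Assuming `NebiusEmbedding` follows the standard `BaseEmbedding` interface (as the methods above suggest), query-side vectors come from `get_query_embedding`:

```python
# Embed a retrieval query rather than a document.
query_embedding = embed_model.get_query_embedding("Who loves justice?")
print(len(query_embedding), query_embedding[:5])
```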
#### Asynchronous usage
```python
text = "Everyone loves justice at another person's expense"
embeddings = await embed_model.aget_text_embedding(text)
print(embeddings[:5])
```
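
The `await` call above works as-is in a notebook or any environment with a running event loop. In a plain script, a minimal sketch would wrap it with `asyncio.run`:

```python
import asyncio

from llama_index.embeddings.nebius import NebiusEmbedding

# Assumes NEBIUS_API_KEY is set in the environment (see Initialization above).
embed_model = NebiusEmbedding(model_name="BAAI/bge-en-icl")


async def main() -> None:
    text = "Everyone loves justice at another person's expense"
    embeddings = await embed_model.aget_text_embedding(text)
    print(embeddings[:5])


asyncio.run(main())
```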
#### Batched usage
```python
texts = [
    "As the hours pass",
    "I will let you know",
    "That I need to ask",
    "Before I'm alone",
]
embeddings = embed_model.get_text_embedding_batch(texts)
print(*[x[:3] for x in embeddings], sep="\n")
```
#### Batched asynchronous usage
```python
texts = [
    "As the hours pass",
    "I will let you know",
    "That I need to ask",
    "Before I'm alone",
]
embeddings = await embed_model.aget_text_embedding_batch(texts)
print(*[x[:3] for x in embeddings], sep="\n")
```
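
Beyond standalone embedding calls, the model can be plugged into the rest of LlamaIndex like any other embedding backend. A minimal sketch, assuming `llama-index-core` is installed and `NEBIUS_API_KEY` is set; the document texts are placeholders:

```python
from llama_index.core import Document, Settings, VectorStoreIndex
from llama_index.embeddings.nebius import NebiusEmbedding

# Use Nebius AI Studio embeddings for all indexing and retrieval.
Settings.embed_model = NebiusEmbedding(model_name="BAAI/bge-en-icl")

documents = [
    Document(text="Nebius AI Studio hosts open-source embedding models."),
    Document(text="LlamaIndex builds retrieval pipelines over your data."),
]

# Document embeddings are computed through the Nebius API at index time.
index = VectorStoreIndex.from_documents(documents)

# Retrieval uses only the embedding model, so no LLM has to be configured.
retriever = index.as_retriever(similarity_top_k=1)
results = retriever.retrieve("Which service hosts embedding models?")
print(results[0].node.get_content(), results[0].score)
```

Because retrieval needs only embeddings, this exercises the integration end to end without any LLM credentials.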