| Field | Value |
| --- | --- |
| Name | llama-index-llms-opea |
| Version | 0.1.0 |
| Summary | llama-index llms opea integration |
| Author | Logan Markewich |
| Maintainer | None |
| Home page | None |
| Docs URL | None |
| License | MIT |
| Requires Python | <4.0,>=3.8.1 |
| Keywords | None |
| Upload time | 2025-01-15 15:54:58 |
| Requirements | No requirements were recorded. |

# LlamaIndex Llms Integration: OPEA LLM
OPEA (Open Platform for Enterprise AI) is a platform for building, deploying, and scaling AI applications. As part of this platform, many core generative-AI components, including LLMs, are available for deployment as microservices.

Visit [https://opea.dev](https://opea.dev) for more information, and the [GenAIComps GitHub repository](https://github.com/opea-project/GenAIComps) for the source code of the OPEA components.
## Installation
1. Install the required Python package:

```bash
pip install llama-index-llms-opea
```
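
The examples below assume an OPEA LLM microservice is already running locally and exposing an OpenAI-compatible API at `http://localhost:8080/v1`; the host, port, and model name are illustrative, so substitute your own deployment's values. A quick sanity check from the shell:

```bash
# Hypothetical sanity check: the endpoint and model name below are
# assumptions; point them at your own OPEA LLM microservice deployment.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```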
## Usage
```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.opea import OPEA

llm = OPEA(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    api_base="http://localhost:8080/v1",
    temperature=0.7,
    max_tokens=256,
    additional_kwargs={"top_p": 0.95},
)

# Complete a prompt
response = llm.complete("What is the capital of France?")
print(response)

# Stream a chat response
response = llm.stream_chat(
    [ChatMessage(role="user", content="What is the capital of France?")]
)
for chunk in response:
    print(chunk.delta, end="", flush=True)
```
The available methods are:

- `complete()`
- `stream_complete()`
- `chat()`
- `stream_chat()`

Each method also has an async version with an `a` prefix (`acomplete()`, `astream_chat()`, and so on); see the sketch below.
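
A minimal async sketch, assuming the same endpoint and model as above (the endpoint values are illustrative):

```python
import asyncio

from llama_index.core.llms import ChatMessage
from llama_index.llms.opea import OPEA

llm = OPEA(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    api_base="http://localhost:8080/v1",
)


async def main() -> None:
    # Async completion
    response = await llm.acomplete("What is the capital of France?")
    print(response)

    # Async streaming chat: awaiting astream_chat() returns an async generator
    stream = await llm.astream_chat(
        [ChatMessage(role="user", content="What is the capital of France?")]
    )
    async for chunk in stream:
        print(chunk.delta, end="", flush=True)


asyncio.run(main())
```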
## Raw data

```json
{
"_id": null,
"home_page": null,
"name": "llama-index-llms-opea",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.8.1",
"maintainer_email": null,
"keywords": null,
"author": "Logan Markewich",
"author_email": "logan@runllama.ai",
"download_url": "https://files.pythonhosted.org/packages/75/2e/9be7aa99e072648a4ad1a620478359de70d11116effb409363f2e965c825/llama_index_llms_opea-0.1.0.tar.gz",
"platform": null,
"description": "# LlamaIndex Llms Integration: OPEA LLM\n\nOPEA (Open Platform for Enterprise AI) is a platform for building, deploying, and scaling AI applications. As part of this platform, many core gen-ai components are available for deployment as microservices, including LLMs.\n\nVisit [https://opea.dev](https://opea.dev) for more information, and their [GitHub](https://github.com/opea-project/GenAIComps) for the source code of the OPEA components.\n\n## Installation\n\n1. Install the required Python packages:\n\n```bash\n%pip install llama-index-llms-opea\n```\n\n## Usage\n\n```python\nfrom llama_index.core.llms import ChatMessage\nfrom llama_index.llms.opea import OPEA\n\nllm = OPEA(\n model=\"meta-llama/Meta-Llama-3.1-8B-Instruct\",\n api_base=\"http://localhost:8080/v1\",\n temperature=0.7,\n max_tokens=256,\n additional_kwargs={\"top_p\": 0.95},\n)\n\n# Complete a prompt\nresponse = llm.complete(\"What is the capital of France?\")\nprint(response)\n\n# Stream a chat response\nresponse = llm.stream_chat(\n [ChatMessage(role=\"user\", content=\"What is the capital of France?\")]\n)\nfor chunk in response:\n print(chunk.delta, end=\"\", flush=True)\n```\n\nAll available methods include:\n\n- `complete()`\n- `stream_complete()`\n- `chat()`\n- `stream_chat()`\n\nas well as async versions of the methods with the `a` prefix.\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "llama-index llms opea integration",
"version": "0.1.0",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "d3b467c2f7d1cb81a63b831aeeb27aa4bd13c504f4dcf1bc03113c5e3aed02c5",
"md5": "52e1a17b7f00ef12538bad09b71c54cd",
"sha256": "fc3bdf30b05f6ac1f60c90e955a62e3ab0359c618165f09371ba7349245fe5b4"
},
"downloads": -1,
"filename": "llama_index_llms_opea-0.1.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "52e1a17b7f00ef12538bad09b71c54cd",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.8.1",
"size": 2533,
"upload_time": "2025-01-15T15:54:56",
"upload_time_iso_8601": "2025-01-15T15:54:56.744775Z",
"url": "https://files.pythonhosted.org/packages/d3/b4/67c2f7d1cb81a63b831aeeb27aa4bd13c504f4dcf1bc03113c5e3aed02c5/llama_index_llms_opea-0.1.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "752e9be7aa99e072648a4ad1a620478359de70d11116effb409363f2e965c825",
"md5": "85de6473eb58977380eac738a947fb30",
"sha256": "d6a7c33cf276f1b2ab8990546e0ed828d87594c76df1cb45951322fa1168dd94"
},
"downloads": -1,
"filename": "llama_index_llms_opea-0.1.0.tar.gz",
"has_sig": false,
"md5_digest": "85de6473eb58977380eac738a947fb30",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.8.1",
"size": 2242,
"upload_time": "2025-01-15T15:54:58",
"upload_time_iso_8601": "2025-01-15T15:54:58.371470Z",
"url": "https://files.pythonhosted.org/packages/75/2e/9be7aa99e072648a4ad1a620478359de70d11116effb409363f2e965c825/llama_index_llms_opea-0.1.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-01-15 15:54:58",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "llama-index-llms-opea"
}
```