| Field | Value |
| --- | --- |
| Name | llama-index-llms-ibm |
| Version | 0.2.3 |
| Summary | llama-index llms IBM watsonx.ai integration |
| Author | IBM |
| License | MIT |
| Requires Python | <4.0,>=3.10 |
| Upload time | 2024-11-05 15:07:45 |
# LlamaIndex LLMs Integration: IBM
This package integrates the LlamaIndex LLMs API with the IBM watsonx.ai Foundation Models API through the `ibm-watsonx-ai` [SDK](https://ibm.github.io/watsonx-ai-python-sdk/index.html). With this integration, you can use any of the models available in IBM watsonx.ai for model inference.
## Installation
```bash
pip install llama-index-llms-ibm
```
## Usage
### Setting up
To use IBM's models, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key:
1. **Obtain an API Key:** For more details on how to create and manage an API key, refer to [Managing user API keys](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui).
2. **Set the API Key as an Environment Variable:** For security reasons, it's recommended not to hard-code your API key directly in your scripts. Instead, set it as an environment variable. The following code prompts for the API key and stores it in an environment variable:
```python
import os
from getpass import getpass

# Prompt for the API key without echoing it, then expose it as an
# environment variable for the SDK to pick up.
watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key
```
Alternatively, you can set the environment variable in your terminal.
- **Linux/macOS:** Open your terminal and execute the following command:
```bash
export WATSONX_APIKEY='your_ibm_api_key'
```
To make this environment variable persistent across terminal sessions, add the above line to your `~/.bashrc`, `~/.bash_profile`, or `~/.zshrc` file.
- **Windows:** For Command Prompt, use:
```cmd
set WATSONX_APIKEY=your_ibm_api_key
```
Note that `set` applies only to the current session; to persist the variable across sessions, use `setx WATSONX_APIKEY "your_ibm_api_key"` instead.
### Loading the model
You might need to adjust model parameters for different models or tasks. For more details on parameters, see [Available MetaNames](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#metanames.GenTextParamsMetaNames).
```python
temperature = 0.5
max_new_tokens = 50
additional_params = {
"min_new_tokens": 1,
"top_k": 50,
}
```
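If you prefer named constants over raw strings, the `ibm-watsonx-ai` SDK exposes the same parameter names through `GenTextParamsMetaNames` (the MetaNames page linked above). A minimal sketch, assuming these constants resolve to the same string keys used in `additional_params`:
```python
# Sketch: the same parameters via the SDK's name constants, which
# resolve to the corresponding string keys (e.g. "min_new_tokens").
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames

additional_params = {
    GenTextParamsMetaNames.MIN_NEW_TOKENS: 1,
    GenTextParamsMetaNames.TOP_K: 50,
}
```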
Initialize the `WatsonxLLM` class with the previously set parameters.
```python
from llama_index.llms.ibm import WatsonxLLM
watsonx_llm = WatsonxLLM(
model_id="PASTE THE CHOSEN MODEL_ID HERE",
url="PASTE YOUR URL HERE",
project_id="PASTE YOUR PROJECT_ID HERE",
temperature=temperature,
max_new_tokens=max_new_tokens,
additional_params=additional_params,
)
```
**Note:**
- To provide context for the API call, you must pass the `project_id` or `space_id` (see the sketch after this list). To get your project or space ID, open your project or space, go to the **Manage** tab, and click **General**. For more information, see: [Project documentation](https://www.ibm.com/docs/en/watsonx-as-a-service?topic=projects) or [Deployment space documentation](https://www.ibm.com/docs/en/watsonx/saas?topic=spaces-creating-deployment).
- Depending on the region of your provisioned service instance, use one of the URLs listed in [watsonx.ai API Authentication](https://ibm.github.io/watsonx-ai-python-sdk/setup_cloud.html#authentication).
- You need to specify the model you want to use for inferencing through `model_id`. You can find the list of available models in [Supported foundation models](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#ibm_watsonx_ai.foundation_models.utils.enums.ModelTypes).
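For example, a minimal sketch that scopes the call to a deployment space instead of a project, assuming `space_id` is accepted in place of `project_id` (all values are placeholders):
```python
# Hypothetical space-scoped setup: space_id replaces project_id.
watsonx_llm = WatsonxLLM(
    model_id="PASTE THE CHOSEN MODEL_ID HERE",
    url="PASTE YOUR URL HERE",
    space_id="PASTE YOUR SPACE_ID HERE",
    temperature=temperature,
    max_new_tokens=max_new_tokens,
    additional_params=additional_params,
)
```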
Alternatively, you can use Cloud Pak for Data credentials. For more details, refer to [watsonx.ai software setup](https://ibm.github.io/watsonx-ai-python-sdk/setup_cpd.html).
```python
watsonx_llm = WatsonxLLM(
model_id="ibm/granite-13b-instruct-v2",
url="PASTE YOUR URL HERE",
username="PASTE YOUR USERNAME HERE",
password="PASTE YOUR PASSWORD HERE",
instance_id="openshift",
version="4.8",
project_id="PASTE YOUR PROJECT_ID HERE",
temperature=temperature,
max_new_tokens=max_new_tokens,
additional_params=additional_params,
)
```
### Create a Completion
Below is an example that shows how to call the model directly with a string prompt:
```python
response = watsonx_llm.complete("What is a Generative AI?")
print(response)
```
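The call returns a LlamaIndex `CompletionResponse`; printing it shows the generated text, which is also available as an attribute:
```python
# The generated text is also available directly on the response object.
print(response.text)
```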
### Calling `chat` with a list of messages
To create `chat` completions by providing a list of messages, use the following code:
```python
from llama_index.core.llms import ChatMessage
messages = [
ChatMessage(role="system", content="You are an AI assistant"),
ChatMessage(role="user", content="Who are you?"),
]
response = watsonx_llm.chat(messages)
print(response)
```
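Here the result is a LlamaIndex `ChatResponse` wrapping the assistant's `ChatMessage`, so the reply text can also be read directly:
```python
# The assistant's reply is carried as a ChatMessage on the response.
print(response.message.content)
```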
### Streaming the model output
To stream the model output, use the following code:
```python
for chunk in watsonx_llm.stream_complete(
"Describe your favorite city and why it is your favorite."
):
print(chunk.delta, end="")
```
Similarly, to stream the `chat` completions, use the following code:
```python
messages = [
ChatMessage(role="system", content="You are an AI assistant"),
ChatMessage(role="user", content="Who are you?"),
]
for chunk in watsonx_llm.stream_chat(messages):
print(chunk.delta, end="")
```
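If you need the complete streamed reply as a single string (for logging or post-processing), you can accumulate the deltas yourself. A minimal sketch:
```python
# Collect the streamed deltas into one string instead of printing them.
full_reply = ""
for chunk in watsonx_llm.stream_chat(messages):
    full_reply += chunk.delta or ""  # delta may be None on some chunks
print(full_reply)
```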