| Name | llama-index-postprocessor-ibm |
| Version | 0.3.1 |
| Summary | llama-index postprocessor IBM watsonx.ai integration |
| Author | IBM |
| Upload time | 2025-09-08 20:48:49 |
| Requires Python | <3.13,>=3.10 |
| Home page | None |
| Maintainer | None |
| License | None |
| Requirements | None recorded |
# LlamaIndex Postprocessor Integration: IBM
This package integrates the LlamaIndex Postprocessor API with the IBM watsonx.ai Rerank API by leveraging `ibm-watsonx-ai` [SDK](https://ibm.github.io/watsonx-ai-python-sdk/index.html).
## Installation
```bash
pip install llama-index-postprocessor-ibm
```
## Usage
### Setting up
#### Install other required packages:
```bash
pip install -qU llama-index
pip install -qU llama-index-llms-ibm
pip install -qU llama-index-embeddings-ibm
```
To use IBM's Foundation Models, Embeddings and Rerank, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key:
1. **Obtain an API Key:** For more details on how to create and manage an API key, refer to [Managing user API keys](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui).
2. **Set the API Key as an Environment Variable:** For security reasons, it's recommended to not hard-code your API key directly in your scripts. Instead, set it up as an environment variable. You can use the following code to prompt for the API key and set it as an environment variable:
```python
import os
from getpass import getpass
watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key
```
Alternatively, you can set the environment variable in your terminal.
- **Linux/macOS:** Open your terminal and execute the following command:
```bash
export WATSONX_APIKEY='your_ibm_api_key'
```
To make this environment variable persistent across terminal sessions, add the above line to your `~/.bashrc`, `~/.bash_profile`, or `~/.zshrc` file.
- **Windows:** For Command Prompt, use:
```cmd
set WATSONX_APIKEY=your_ibm_api_key
```
**Note**:
- To provide context for the API call, you must pass the `project_id` or `space_id`. To get your project or space ID, open your project or space, go to the **Manage** tab, and click **General**. For more information see: [Project documentation](https://www.ibm.com/docs/en/watsonx-as-a-service?topic=projects) or [Deployment space documentation](https://www.ibm.com/docs/en/watsonx/saas?topic=spaces-creating-deployment).
- Depending on the region of your provisioned service instance, use one of the urls listed in [watsonx.ai API Authentication](https://ibm.github.io/watsonx-ai-python-sdk/setup_cloud.html#authentication).
In this example, we'll use the `project_id` and the Dallas URL.
Provide the `PROJECT_ID` that will be used to initialize each watsonx integration instance.
```python
PROJECT_ID = "PASTE YOUR PROJECT_ID HERE"
URL = "https://us-south.ml.cloud.ibm.com"
```
### Download data and load documents
```bash
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
```python
from llama_index.core import SimpleDirectoryReader
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
```
### Load the reranker
You might need to adjust rerank parameters for different tasks:
```python
truncate_input_tokens = 512
```
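`truncate_input_tokens` caps how many tokens of each input are sent to the service; longer inputs are cut off rather than rejected. As a rough, purely illustrative sketch of that behavior (whitespace "tokens" here, whereas the service counts model tokens; this `truncate` helper is hypothetical and not part of the SDK):

```python
def truncate(text: str, max_tokens: int) -> str:
    # Keep only the first max_tokens whitespace-separated tokens.
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

print(truncate("one two three four five", 3))  # -> one two three
```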
#### Initialize the `WatsonxRerank` instance
You need to specify the `model_id` of the model that will be used for reranking. You can find the list of all available models in [Supported reranker models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models-embed.html?context=wx#rerank).
```python
from llama_index.postprocessor.ibm import WatsonxRerank

watsonx_rerank = WatsonxRerank(
    model_id="cross-encoder/ms-marco-minilm-l-12-v2",
    top_n=2,
    url=URL,
    project_id=PROJECT_ID,
    truncate_input_tokens=truncate_input_tokens,
)
```
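Conceptually, a reranker scores every (query, passage) pair with a cross-encoder and keeps only the `top_n` highest-scoring passages. A toy sketch of that selection step, with a hypothetical word-overlap score standing in for the actual watsonx.ai model:

```python
def rerank(query, passages, score_fn, top_n=2):
    # Score each (query, passage) pair, then keep the top_n best.
    scored = sorted(passages, key=lambda p: score_fn(query, p), reverse=True)
    return scored[:top_n]

def overlap_score(query, passage):
    # Toy cross-encoder stand-in: count words shared by query and passage.
    return len(set(query.lower().split()) & set(passage.lower().split()))

passages = ["the cat sat", "dogs bark loudly", "a cat and a dog"]
print(rerank("cat dog", passages, overlap_score))
# -> ['a cat and a dog', 'the cat sat']
```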
Alternatively, you can use Cloud Pak for Data credentials. For details, see [watsonx.ai software setup](https://ibm.github.io/watsonx-ai-python-sdk/setup_cpd.html).
```python
from llama_index.postprocessor.ibm import WatsonxRerank

watsonx_rerank = WatsonxRerank(
    model_id="cross-encoder/ms-marco-minilm-l-12-v2",
    url=URL,
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="5.1",
    project_id=PROJECT_ID,
    truncate_input_tokens=truncate_input_tokens,
)
```
### Load the embedding model
#### Initialize the `WatsonxEmbeddings` instance.
> For more information about `WatsonxEmbeddings`, refer to the `llama-index-embeddings-ibm` package description.
You might need to adjust embedding parameters for different tasks:
```python
truncate_input_tokens = 512
```
You need to specify the `model_id` that will be used for embedding. You can find the list of all the available models in [Supported embedding models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models-embed.html?context=wx#embed).
```python
from llama_index.embeddings.ibm import WatsonxEmbeddings

watsonx_embedding = WatsonxEmbeddings(
    model_id="ibm/slate-30m-english-rtrvr",
    url=URL,
    project_id=PROJECT_ID,
    truncate_input_tokens=truncate_input_tokens,
)
```
Change the default settings:
```python
from llama_index.core import Settings
Settings.chunk_size = 512
```
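`Settings.chunk_size` controls how large each document chunk is before it is embedded. As a rough illustration of why the size matters (plain character-based splitting; LlamaIndex's actual splitter is token- and sentence-aware):

```python
def chunk(text, chunk_size):
    # Naive fixed-size splitting, for illustration only.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

print(len(chunk("x" * 2000, 512)))  # -> 4
```

Smaller chunks mean more, finer-grained vectors to retrieve and rerank; larger chunks mean fewer vectors, each carrying more context.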
#### Build index
```python
from llama_index.core import VectorStoreIndex

index = VectorStoreIndex.from_documents(
    documents=documents, embed_model=watsonx_embedding
)
```
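Under the hood, the index stores one embedding per chunk, and `similarity_top_k` retrieval ranks chunks by vector similarity to the query embedding. A self-contained sketch of cosine-similarity top-k selection (toy 2-d vectors, not the real LlamaIndex internals):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def top_k(query_vec, chunk_vecs, k=2):
    # Indices of the k chunks most similar to the query.
    ranked = sorted(range(len(chunk_vecs)),
                    key=lambda i: cosine(query_vec, chunk_vecs[i]),
                    reverse=True)
    return ranked[:k]

chunk_vecs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(top_k([1.0, 0.1], chunk_vecs))  # -> [0, 2]
```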
### Load the LLM
#### Initialize the `WatsonxLLM` instance.
> For more information about `WatsonxLLM`, refer to the `llama-index-llms-ibm` package description.
You need to specify the `model_id` that will be used for inferencing. You can find the list of all the available models in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=wx).
You might need to adjust model `parameters` for different models or tasks. For details, refer to [Available MetaNames](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#metanames.GenTextParamsMetaNames).
```python
max_new_tokens = 128
```
```python
from llama_index.llms.ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="meta-llama/llama-3-3-70b-instruct",
    url=URL,
    project_id=PROJECT_ID,
    max_new_tokens=max_new_tokens,
)
```
### Send a query
#### Retrieve top 10 most relevant nodes, then filter with `WatsonxRerank`
```python
query_engine = index.as_query_engine(
    llm=watsonx_llm,
    similarity_top_k=10,
    node_postprocessors=[watsonx_rerank],
)
response = query_engine.query(
    "What did Sam Altman do in this essay?",
)
```
```python
from llama_index.core.response.pprint_utils import pprint_response
pprint_response(response, show_source=True)
```
#### Directly retrieve top 2 most similar nodes
```python
query_engine = index.as_query_engine(
    llm=watsonx_llm,
    similarity_top_k=2,
)
response = query_engine.query(
    "What did Sam Altman do in this essay?",
)
```
Note that the retrieved context is irrelevant and the response is hallucinated.
```python
pprint_response(response, show_source=True)
```