| Field | Value |
| --- | --- |
| Name | llama-index-llms-mistralai |
| Version | 0.3.3 |
| Summary | llama-index llms mistral ai integration |
| upload_time | 2025-02-17 23:11:58 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | Your Name |
| requires_python | <4.0,>=3.9 |
| license | MIT |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# LlamaIndex Llms Integration: Mistral
## Installation
Install the required packages:
```bash
pip install llama-index-llms-mistralai
pip install llama-index
```
## Basic Usage
### Initialize the MistralAI Model
To use the MistralAI model, create an instance and provide your API key:
```python
from llama_index.llms.mistralai import MistralAI
llm = MistralAI(api_key="<replace-with-your-key>")
```
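If you would rather not hard-code the key, the integration can typically pick it up from the environment, and the instance can be registered as the default LLM for LlamaIndex. A minimal sketch, assuming the `MISTRAL_API_KEY` environment variable is honored when no `api_key` argument is passed:
```python
import os

from llama_index.core import Settings
from llama_index.llms.mistralai import MistralAI

# Assumption: the integration falls back to MISTRAL_API_KEY
# when no api_key argument is given.
os.environ["MISTRAL_API_KEY"] = "<replace-with-your-key>"
llm = MistralAI()

# Optional: make this the default LLM for all LlamaIndex components.
Settings.llm = llm
```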
### Generate Completions
To generate a text completion for a prompt, use the `complete` method:
```python
resp = llm.complete("Paul Graham is ")
print(resp)
```
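LlamaIndex LLMs also expose async variants of these calls; a brief sketch using `acomplete` inside `asyncio.run`:
```python
import asyncio


async def main() -> None:
    # acomplete is the async counterpart of complete
    resp = await llm.acomplete("Paul Graham is ")
    print(resp)


asyncio.run(main())
```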
### Chat with the Model
You can also chat with the model using a list of messages. Here’s an example:
```python
from llama_index.core.llms import ChatMessage
messages = [
    ChatMessage(role="system", content="You are CEO of MistralAI."),
    ChatMessage(role="user", content="Tell me the story about La plateforme"),
]
resp = llm.chat(messages)
print(resp)
```
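The value returned by `chat` follows the standard LlamaIndex `ChatResponse` shape, so the assistant's text is available on the wrapped message:
```python
# resp.message is the assistant's ChatMessage
print(resp.message.role)     # e.g. assistant
print(resp.message.content)  # the plain response text
```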
### Using Random Seed
To set a random seed for reproducibility, initialize the model with the `random_seed` parameter:
```python
resp = MistralAI(random_seed=42).chat(messages)
print(resp)
```
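Other sampling parameters are set the same way at construction time; a short sketch assuming the usual `temperature` and `max_tokens` keyword arguments are supported:
```python
llm = MistralAI(
    random_seed=42,   # reproducible sampling
    temperature=0.1,  # lower values give more deterministic output
    max_tokens=256,   # cap the length of the reply (assumed parameter name)
)
resp = llm.chat(messages)
print(resp)
```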
## Streaming Responses
### Stream Completions
You can stream responses using the `stream_complete` method:
```python
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")
```
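An async streaming counterpart exists as well; a minimal sketch with `astream_complete`, which yields the same delta objects under `async for`:
```python
import asyncio


async def main() -> None:
    gen = await llm.astream_complete("Paul Graham is ")
    async for r in gen:
        print(r.delta, end="")


asyncio.run(main())
```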
### Stream Chat Responses
To stream chat messages, use the following code:
```python
messages = [
    ChatMessage(role="system", content="You are CEO of MistralAI."),
    ChatMessage(role="user", content="Tell me the story about La plateforme"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```
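Each chunk carries only the incremental `delta`; if you also need the full reply at the end, accumulate the deltas as you go:
```python
full_text = ""
for r in llm.stream_chat(messages):
    full_text += r.delta or ""  # delta may be None on some chunks
print(full_text)
```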
## Configure Model
To use a specific model configuration, initialize the model with the desired model name:
```python
llm = MistralAI(model="mistral-medium")
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")
```
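You can check what a configured instance reports about itself via the standard `metadata` property on LlamaIndex LLMs:
```python
# LLMMetadata includes the model name, context window, and
# whether the model supports chat / function calling.
print(llm.metadata)
```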
## Function Calling
You can let the model call functions by defining tools. Here’s an example:
```python
from llama_index.llms.mistralai import MistralAI
from llama_index.core.tools import FunctionTool
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


def mystery(a: int, b: int) -> int:
    """Mystery function on two integers."""
    return a * b + a + b


mystery_tool = FunctionTool.from_defaults(fn=mystery)
multiply_tool = FunctionTool.from_defaults(fn=multiply)

llm = MistralAI(model="mistral-large-latest")
response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg="What happens if I run the mystery function on 5 and 7",
)
print(str(response))
```
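`predict_and_call` selects and executes a tool in one step. To inspect the model's tool choice before running anything, the function-calling interface also exposes lower-level helpers; a hedged sketch assuming the standard `chat_with_tools` / `get_tool_calls_from_response` pair:
```python
# Ask the model which tool it wants to call, without executing it.
chat_resp = llm.chat_with_tools(
    [mystery_tool, multiply_tool],
    user_msg="What happens if I run the mystery function on 5 and 7",
)
tool_calls = llm.get_tool_calls_from_response(
    chat_resp, error_on_no_tool_call=False
)
for tc in tool_calls:
    print(tc.tool_name, tc.tool_kwargs)  # e.g. mystery {'a': 5, 'b': 7}
```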
### LLM Implementation Example
See the full example notebook: https://docs.llamaindex.ai/en/stable/examples/llm/mistralai/