| Field | Value |
|---|---|
| Name | xai-sdk |
| Version | 1.0.1 |
| Summary | The official Python SDK for the xAI API |
| Upload time | 2025-07-24 20:54:23 |
| Home page | None |
| Author | None |
| Maintainer | None |
| License | None |
| Requires Python | >=3.10 |
| Requirements | No requirements were recorded. |
<div align="center">
<img src="https://avatars.githubusercontent.com/u/130314967?s=200&v=4" alt="xAI Logo" width="100" />
<h1>xAI Python SDK</h1>
<p>The official Python SDK for xAI's APIs</p>
<a href="https://pypi.org/project/xai-sdk">
<img src="https://img.shields.io/pypi/v/xai-sdk" alt="PyPI Version" />
</a>
<a href="">
<img src="https://img.shields.io/pypi/l/xai-sdk" alt="License" />
</a>
<a href="">
<img src="https://img.shields.io/pypi/pyversions/xai-sdk" alt="Python Version" />
</a>
</div>
<br>
The xAI Python SDK is a gRPC-based Python library for interacting with xAI's APIs. Built for Python 3.10 and above, it offers both **synchronous** and **asynchronous** clients.
Whether you're generating text, images, or structured outputs, the xAI SDK is designed to be intuitive, robust, and developer-friendly.
## Documentation
Comprehensive documentation is available at [docs.x.ai](https://docs.x.ai). Explore detailed guides, API references, and tutorials to get the most out of the xAI SDK.
## Installation
Install from PyPI with pip.
```bash
pip install xai-sdk
```
Alternatively, you can use [uv](https://docs.astral.sh/uv/):
```bash
uv add xai-sdk
```
### Requirements
Python 3.10 or higher is required to use the xAI SDK.
## Usage
The xAI SDK supports both synchronous (`xai_sdk.Client`) and asynchronous (`xai_sdk.AsyncClient`) clients. For a complete set of examples demonstrating the SDK's capabilities, including authentication, chat, image generation, function calling, and more, refer to the [examples folder](https://github.com/xai-org/xai-sdk-python/tree/main/examples).
### Client Instantiation
To use the xAI SDK, you need to instantiate either a synchronous or asynchronous client. By default, the SDK looks for an environment variable named `XAI_API_KEY` for authentication. If this variable is set, you can instantiate the clients without explicitly passing the API key:
```python
from xai_sdk import Client, AsyncClient
# Synchronous client
sync_client = Client()
# Asynchronous client
async_client = AsyncClient()
```
If you prefer to explicitly pass the API key, you can do so using `os.getenv` or by loading it from a `.env` file using the `python-dotenv` package:
```python
import os
from dotenv import load_dotenv
from xai_sdk import Client, AsyncClient
load_dotenv()
api_key = os.getenv("XAI_API_KEY")
sync_client = Client(api_key=api_key)
async_client = AsyncClient(api_key=api_key)
```
Make sure to set the `XAI_API_KEY` environment variable or load it from a `.env` file before using the SDK. This ensures secure handling of your API key without hardcoding it into your codebase.
### Multi-Turn Chat (Synchronous)
The xAI SDK supports multi-turn conversations with a simple `append` method to manage conversation history, making it ideal for interactive applications.
First, create a `chat` instance, start `append`ing messages to it, and finally call `sample` to yield a response from the model. While the underlying APIs are still stateless, this approach makes it easy to manage the message history.
```python
from xai_sdk import Client
from xai_sdk.chat import system, user
client = Client()
chat = client.chat.create(
    model="grok-3",
    messages=[system("You are a pirate assistant.")]
)

while True:
    prompt = input("You: ")
    if prompt.lower() == "exit":
        break
    chat.append(user(prompt))
    response = chat.sample()
    print(f"Grok: {response.content}")
    chat.append(response)
```
### Multi-Turn Chat (Asynchronous)
For async usage, simply import `AsyncClient` instead of `Client`.
```python
import asyncio
from xai_sdk import AsyncClient
from xai_sdk.chat import system, user
async def main():
    client = AsyncClient()
    chat = client.chat.create(
        model="grok-3",
        messages=[system("You are a pirate assistant.")]
    )

    while True:
        prompt = input("You: ")
        if prompt.lower() == "exit":
            break
        chat.append(user(prompt))
        response = await chat.sample()
        print(f"Grok: {response.content}")
        chat.append(response)

if __name__ == "__main__":
    asyncio.run(main())
```
### Streaming
The xAI SDK supports streaming responses, allowing you to process model outputs in real time, which is ideal for interactive applications like chatbots. The `stream` method yields (`response`, `chunk`) tuples: each `chunk` carries the text delta from the stream, while the `response` object accumulates the full response as the stream progresses.
```python
from xai_sdk import Client
from xai_sdk.chat import user
client = Client()
chat = client.chat.create(model="grok-3")
while True:
    prompt = input("You: ")
    if prompt.lower() == "exit":
        break
    chat.append(user(prompt))
    print("Grok: ", end="", flush=True)
    for response, chunk in chat.stream():
        print(chunk.content, end="", flush=True)
    print()
    chat.append(response)
```
### Image Understanding
You can interleave images and text in a single message, which makes tasks like image understanding and analysis straightforward.
```python
from xai_sdk import Client
from xai_sdk.chat import image, user
client = Client()
chat = client.chat.create(model="grok-2-vision")
chat.append(
    user(
        "Which animal looks happier in these images?",
        image("https://images.unsplash.com/photo-1561037404-61cd46aa615b"),  # Puppy
        image("https://images.unsplash.com/photo-1514888286974-6c03e2ca1dba")  # Kitten
    )
)
response = chat.sample()
print(f"Grok: {response.content}")
```
## Advanced Features
The xAI SDK excels in advanced use cases, such as:
- **Function Calling**: Define tools and let the model intelligently call them (see sync [function_calling.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/function_calling.py) and async [function_calling.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/function_calling.py)).
- **Image Generation**: Generate images with image generation models (see sync [image_generation.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/image_generation.py) and async [image_generation.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/image_generation.py)).
- **Image Understanding**: Analyze images with vision models (see sync [image_understanding.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/image_understanding.py) and async [image_understanding.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/image_understanding.py)).
- **Structured Outputs**: Return model responses as structured objects in the form of Pydantic models (see sync [structured_outputs.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/structured_outputs.py) and async [structured_outputs.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/structured_outputs.py)); a short sketch follows this list.
- **Reasoning Models**: Leverage reasoning-focused models with configurable effort levels (see sync [reasoning.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/reasoning.py) and async [reasoning.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/reasoning.py)).
- **Deferred Chat**: Sample a long-running response from a model via polling (see sync [deferred_chat.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/deferred_chat.py) and async [deferred_chat.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/deferred_chat.py)).
- **Tokenization**: Tokenize text with the tokenizer API (see sync [tokenizer.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/tokenizer.py) and async [tokenizer.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/tokenizer.py)).
- **Models**: Retrieve information on the models available to you, including name, aliases, token price, max prompt length, etc. (see sync [models.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/models.py) and async [models.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/models.py)).
- **Live Search**: Augment Grok's knowledge with up-to-date information from the web and 𝕏 (see sync [search.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/search.py) and async [search.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/search.py)).
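As a complement to the structured-outputs example linked above, here is a minimal, hedged sketch of that workflow using a Pydantic model. The `CityInfo` schema is hypothetical, and the `chat.parse(...)` call returning a `(response, parsed)` pair mirrors the pattern in the linked structured_outputs.py example; if the signature differs in your SDK version, defer to that example and the docs.

```python
from pydantic import BaseModel

from xai_sdk import Client
from xai_sdk.chat import user


# Hypothetical schema used only for this sketch; any Pydantic model works.
class CityInfo(BaseModel):
    name: str
    country: str
    population: int


client = Client()
chat = client.chat.create(model="grok-3")
chat.append(user("Tell me about Tokyo."))

# Assumed to return the raw response plus an instance of the schema,
# following the structured_outputs.py example referenced above.
response, city = chat.parse(CityInfo)
print(city.name, city.country, city.population)
```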
## Timeouts
The xAI SDK allows you to set a timeout for API requests during client initialization. This timeout applies to all RPCs and methods used with that client instance. The default timeout is 15 minutes (900 seconds).
It is not currently possible to specify timeouts for an individual RPC/client method.
To set a custom timeout, pass the `timeout` parameter when creating the `Client` or `AsyncClient`. The timeout is specified in seconds.
Example for synchronous client:
```python
from xai_sdk import Client
# Set timeout to 5 minutes (300 seconds)
sync_client = Client(timeout=300)
```
Example for asynchronous client:
```python
from xai_sdk import AsyncClient
# Set timeout to 5 minutes (300 seconds)
async_client = AsyncClient(timeout=300)
```
In the case of a timeout, a `grpc.RpcError` (for synchronous clients) or `grpc.aio.AioRpcError` (for asynchronous clients) will be raised with the gRPC status code `grpc.StatusCode.DEADLINE_EXCEEDED`.
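As a minimal sketch of handling that timeout with the synchronous client (relying only on the `grpc` package the SDK is built on), you might write:

```python
import grpc

from xai_sdk import Client
from xai_sdk.chat import user

client = Client(timeout=300)  # 5-minute deadline applied to every RPC
chat = client.chat.create(model="grok-3")
chat.append(user("Summarize this very long document ..."))

try:
    response = chat.sample()
    print(response.content)
except grpc.RpcError as e:
    # DEADLINE_EXCEEDED means the 300-second client timeout elapsed.
    if e.code() == grpc.StatusCode.DEADLINE_EXCEEDED:
        print("Request timed out; consider increasing the client timeout.")
    else:
        raise
```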
## Retries
The xAI SDK has retries enabled by default for certain types of failed requests. If the service returns an `UNAVAILABLE` error, the SDK will automatically retry the request with exponential backoff. The default retry policy is configured as follows:
- **Maximum Attempts**: 5
- **Initial Backoff**: 0.1 seconds
- **Maximum Backoff**: 1 second
- **Backoff Multiplier**: 2
This means that after an initial failure, the SDK will wait 0.1 seconds before the first retry, then 0.2 seconds, 0.4 seconds, and so on, up to a maximum of 1 second between attempts, for a total of up to 5 attempts.
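To make that schedule concrete, the few lines below reproduce the wait times implied by the default policy (the jitter gRPC applies in practice is ignored here):

```python
# Default policy from above: 5 attempts, 0.1s initial backoff, 2x multiplier, 1s cap.
max_attempts, initial, multiplier, cap = 5, 0.1, 2.0, 1.0

waits = [min(initial * multiplier**i, cap) for i in range(max_attempts - 1)]
print(waits)  # [0.1, 0.2, 0.4, 0.8] -- waits before retries 1 through 4
```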
You can disable retries by setting the `grpc.enable_retries` channel option to `0` when initializing the client:
```python
from xai_sdk import Client
# Disable retries
sync_client = Client(channel_options=[("grpc.enable_retries", 0)])
```
Similarly, for the asynchronous client:
```python
from xai_sdk import AsyncClient
# Disable retries
async_client = AsyncClient(channel_options=[("grpc.enable_retries", 0)])
```
#### Custom Retry Policy
You can configure your own retry policy by setting the `grpc.service_config` channel option with a JSON string that defines the retry behavior. The JSON structure should follow the gRPC service config format. Here's an example of how to set a custom retry policy:
```python
import json
from xai_sdk import Client
# Define a custom retry policy
custom_retry_policy = json.dumps({
    "methodConfig": [{
        "name": [{}],  # Applies to all methods
        "retryPolicy": {
            "maxAttempts": 3,  # Reduced number of attempts
            "initialBackoff": "0.5s",  # Longer initial wait
            "maxBackoff": "2s",  # Longer maximum wait
            "backoffMultiplier": 1.5,  # Slower increase in wait time
            "retryableStatusCodes": ["UNAVAILABLE", "RESOURCE_EXHAUSTED"]  # Additional status code for retry
        }
    }]
})

# Initialize client with custom retry policy
sync_client = Client(channel_options=[
    ("grpc.service_config", custom_retry_policy)
])
```
Similarly, for the asynchronous client:
```python
import json
from xai_sdk import AsyncClient
# Define a custom retry policy
custom_retry_policy = json.dumps({
    "methodConfig": [{
        "name": [{}],  # Applies to all methods
        "retryPolicy": {
            "maxAttempts": 3,  # Reduced number of attempts
            "initialBackoff": "0.5s",  # Longer initial wait
            "maxBackoff": "2s",  # Longer maximum wait
            "backoffMultiplier": 1.5,  # Slower increase in wait time
            "retryableStatusCodes": ["UNAVAILABLE", "RESOURCE_EXHAUSTED"]  # Additional status code for retry
        }
    }]
})

# Initialize async client with custom retry policy
async_client = AsyncClient(channel_options=[
    ("grpc.service_config", custom_retry_policy)
])
```
In this example, the custom policy reduces the maximum number of attempts to 3, increases the initial backoff to 0.5 seconds, sets a maximum backoff of 2 seconds, uses a smaller backoff multiplier of 1.5, and allows retries on both `UNAVAILABLE` and `RESOURCE_EXHAUSTED` status codes.
Note that when setting a custom `grpc.service_config`, it will override the default retry policy.
## Accessing Underlying Proto Objects
In rare cases, you might need to access the raw proto object returned from an API call. While the xAI SDK is designed to expose most commonly needed fields directly on the response objects for ease of use, there could be scenarios where accessing the underlying proto object is necessary for advanced or custom processing.
You can access the raw proto object on any response by using the `.proto` attribute. Here's an example of how to do this with a chat response:
```python
from xai_sdk import Client
from xai_sdk.chat import user
client = Client()
chat = client.chat.create(model="grok-3")
chat.append(user("Hello, how are you?"))
response = chat.sample()
# Access the underlying proto object
# In this case, this will be of type xai_sdk.proto.chat_pb2.GetChatCompletionResponse
proto_object = response.proto
print(proto_object)
```
Please note that you should rarely need to interact with the proto object directly, as the SDK is built to provide a more user-friendly interface to the data. Use this approach only when specific fields or data structures not exposed by the SDK are required for your application. If you find yourself needing to regularly access the proto object directly, please consider opening an issue so that we can improve the experience.
## Error Codes
When using the xAI SDK, you may encounter various error codes returned by the API. These errors are based on gRPC status codes and provide insight into what went wrong with a request. For the synchronous client (`Client`), errors will be of type `grpc.RpcError`, while for the asynchronous client (`AsyncClient`), errors will be of type `grpc.aio.AioRpcError`.
Below is a table of common gRPC status codes you might encounter when using the xAI SDK:
| gRPC Status Code | Meaning | xAI SDK/API Context |
|---------------------------|------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
| `UNKNOWN` | An unknown error occurred. | An unexpected issue occurred on the server side, not specifically related to the request. |
| `INVALID_ARGUMENT` | The client specified an invalid argument. | An invalid argument was provided to the model/endpoint, such as incorrect parameters or malformed input.|
| `DEADLINE_EXCEEDED` | The deadline for the request expired before the operation completed. | Raised if the request exceeds the timeout specified by the client (default is 900 seconds, configurable during client instantiation). |
| `NOT_FOUND` | A specified resource was not found. | A requested model or resource does not exist. |
| `PERMISSION_DENIED` | The caller does not have permission to execute the specified operation.| The API key is disabled, blocked, or lacks sufficient permissions to access a specific model or feature. |
| `UNAUTHENTICATED` | The request does not have valid authentication credentials. | The API key is missing, invalid, or expired. |
| `RESOURCE_EXHAUSTED` | A resource quota has been exceeded (e.g., rate limits). | The user has exceeded their API usage quota or rate limits for requests. |
| `INTERNAL` | An internal error occurred. | An internal server error occurred on the xAI API side. |
| `UNAVAILABLE` | The service is currently unavailable. This is often a transient error. | The model or endpoint invoked is temporarily down or there are connectivity issues. The SDK defaults to automatically retrying errors with this status code. |
| `DATA_LOSS` | Unrecoverable data loss or corruption occurred. | Occurs when a user provides an image via URL in API calls (e.g., in a chat conversation) and the server fails to fetch the image from that URL. |
These error codes can help diagnose issues with API requests. When handling errors, ensure you check the specific status code to understand the nature of the problem and take appropriate action.
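For illustration, a hedged sketch of branching on a few of these codes with the synchronous client might look like the following; adapt the handling (logging, retries, user messaging) to your application.

```python
import grpc

from xai_sdk import Client
from xai_sdk.chat import user

client = Client()
chat = client.chat.create(model="grok-3")
chat.append(user("Hello!"))

try:
    print(chat.sample().content)
except grpc.RpcError as e:
    code = e.code()
    if code == grpc.StatusCode.UNAUTHENTICATED:
        print("Check that XAI_API_KEY is set and valid.")
    elif code == grpc.StatusCode.RESOURCE_EXHAUSTED:
        print("Rate limit or quota exceeded; back off before retrying.")
    elif code == grpc.StatusCode.INVALID_ARGUMENT:
        print(f"Bad request: {e.details()}")
    else:
        raise
```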
## Versioning
The xAI SDK generally follows [Semantic Versioning (SemVer)](https://semver.org/) to ensure a clear and predictable approach to versioning. Semantic Versioning uses a three-part version number in the format `MAJOR.MINOR.PATCH`, where:
- **MAJOR** version increments indicate backwards-incompatible API changes.
- **MINOR** version increments indicate the addition of backward-compatible functionality.
- **PATCH** version increments indicate backward-compatible bug fixes.
This approach helps developers understand the impact of upgrading to a new version of the SDK. We strive to maintain backward compatibility in minor and patch releases, ensuring that your applications continue to work seamlessly. However, please note that while we aim to restrict breaking changes to major version updates, some backwards incompatible changes to library internals may occasionally occur in minor or patch releases. These changes will typically not affect the public API, but if you are interacting with internal components or structures, we recommend reviewing release notes for each update to avoid unexpected issues.
This project maintains a [changelog](https://github.com/xai-org/xai-sdk-python/tree/main/CHANGELOG.md) so that developers can track updates and changes to the SDK as new versions are released.
### Determining the Installed Version
You can easily check the version of the xAI SDK installed in your environment using either of the following methods:
- **Using pip/uv**: Run the following command in your terminal to see the installed version of the SDK:

  ```bash
  pip show xai-sdk
  ```

  or

  ```bash
  uv pip show xai-sdk
  ```

  This will display detailed information about the package, including the version number.

- **Programmatically in Python**: You can access the version information directly in your code by importing the SDK and checking the `__version__` attribute:

  ```python
  import xai_sdk

  print(xai_sdk.__version__)
  ```
These methods allow you to confirm which version of the xAI SDK you are currently using, which can be helpful for debugging or ensuring compatibility with specific features.
## License
The xAI SDK is distributed under the Apache-2.0 License.
## Contributing
See the [documentation](https://github.com/xai-org/xai-sdk-python/tree/main/CONTRIBUTING.md) on contributing to this project.
## Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "xai-sdk",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": null,
    "author": null,
    "author_email": "xAI <support@x.ai>",
    "download_url": "https://files.pythonhosted.org/packages/8b/03/cb264cab08b3c97acd00634fe1331f6d26ad94bc0f02f44e46e1fda5573f/xai_sdk-1.0.1.tar.gz",
    "platform": null,
"description": "<div align=\"center\">\n <img src=\"https://avatars.githubusercontent.com/u/130314967?s=200&v=4\" alt=\"xAI Logo\" width=\"100\" />\n <h1>xAI Python SDK</h1>\n <p>The official Python SDK for xAI's APIs</p>\n <a href=\"https://pypi.org/project/xai-sdk\">\n <img src=\"https://img.shields.io/pypi/v/xai-sdk\" alt=\"PyPI Version\" />\n </a>\n <a href=\"\">\n <img src=\"https://img.shields.io/pypi/l/xai-sdk\" alt=\"License\" />\n </a>\n <a href=\"\">\n <img src=\"https://img.shields.io/pypi/pyversions/xai-sdk\" alt=\"Python Version\" />\n </a>\n</div>\n\n<br>\n\nThe xAI Python SDK is a gRPC-based Python library for interacting with xAI's APIs. Built for Python 3.10 and above, it offers both **synchronous** and **asynchronous** clients.\n\nWhether you're generating text, images, or structured outputs, the xAI SDK is designed to be intuitive, robust, and developer-friendly.\n\n## Documentation\n\nComprehensive documentation is available at [docs.x.ai](https://docs.x.ai). Explore detailed guides, API references, and tutorials to get the most out of the xAI SDK.\n\n## Installation\n\nInstall from PyPI with pip.\n\n```bash\npip install xai-sdk\n```\n\nAlternatively you can also use [uv](https://docs.astral.sh/uv/)\n\n```bash\nuv add xai-sdk\n```\n\n### Requirements\nPython 3.10 or higher is required to use the xAI SDK.\n\n## Usage\n\nThe xAI SDK supports both synchronous (`xai_sdk.Client`) and asynchronous (`xai_sdk.AsyncClient`) clients. For a complete set of examples demonstrating the SDK's capabilities, including authentication, chat, image generation, function calling, and more, refer to the [examples folder](https://github.com/xai-org/xai-sdk-python/tree/main/examples).\n\n### Client Instantiation\n\nTo use the xAI SDK, you need to instantiate either a synchronous or asynchronous client. By default, the SDK looks for an environment variable named `XAI_API_KEY` for authentication. If this variable is set, you can instantiate the clients without explicitly passing the API key:\n\n```python\nfrom xai_sdk import Client, AsyncClient\n\n# Synchronous client\nsync_client = Client()\n\n# Asynchronous client\nasync_client = AsyncClient()\n```\n\nIf you prefer to explicitly pass the API key, you can do so using `os.getenv` or by loading it from a `.env` file using the `python-dotenv` package:\n\n```python\nimport os\nfrom dotenv import load_dotenv\nfrom xai_sdk import Client, AsyncClient\n\nload_dotenv()\n\napi_key = os.getenv(\"XAI_API_KEY\")\nsync_client = Client(api_key=api_key)\nasync_client = AsyncClient(api_key=api_key)\n```\n\nMake sure to set the `XAI_API_KEY` environment variable or load it from a `.env` file before using the SDK. This ensures secure handling of your API key without hardcoding it into your codebase.\n\n### Multi-Turn Chat (Synchronous)\n\nThe xAI SDK supports multi-turn conversations with a simple `append` method to manage conversation history, making it ideal for interactive applications.\n\nFirst, create a `chat` instance, start `append`ing messages to it, and finally call `sample` to yield a response from the model. 
While the underlying APIs are still stateless, this approach makes it easy to manage the message history.\n\n```python\nfrom xai_sdk import Client\nfrom xai_sdk.chat import system, user\n\nclient = Client()\nchat = client.chat.create(\n model=\"grok-3\",\n messages=[system(\"You are a pirate assistant.\")]\n)\n\nwhile True:\n prompt = input(\"You: \")\n if prompt.lower() == \"exit\":\n break\n chat.append(user(prompt))\n response = chat.sample()\n print(f\"Grok: {response.content}\")\n chat.append(response)\n```\n\n### Multi-Turn Chat (Asynchronous)\n\nFor async usage, simply import `AsyncClient` instead of `Client`.\n\n```python\nimport asyncio\nfrom xai_sdk import AsyncClient\nfrom xai_sdk.chat import system, user\n\nasync def main():\n client = AsyncClient()\n chat = client.chat.create(\n model=\"grok-3\",\n messages=[system(\"You are a pirate assistant.\")]\n )\n\n while True:\n prompt = input(\"You: \")\n if prompt.lower() == \"exit\":\n break\n chat.append(user(prompt))\n response = await chat.sample()\n print(f\"Grok: {response.content}\")\n chat.append(response)\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n```\n\n### Streaming\n\nThe xAI SDK supports streaming responses, allowing you to process model outputs in real-time, which is ideal for interactive applications like chatbots. The `stream` method returns a tuple containing `response` and `chunk`. The chunks contain the text deltas from the stream, while the `response` variable automatically accumulates the response as the stream progresses.\n\n```python\nfrom xai_sdk import Client\nfrom xai_sdk.chat import user\n\nclient = Client()\nchat = client.chat.create(model=\"grok-3\")\n\nwhile True:\n prompt = input(\"You: \")\n if prompt.lower() == \"exit\":\n break\n chat.append(user(prompt))\n print(\"Grok: \", end=\"\", flush=True)\n for response, chunk in chat.stream():\n print(chunk.content, end=\"\", flush=True)\n print()\n chat.append(response)\n```\n\n### Image Understanding\n\nYou can easily interleave images and text together, making tasks like image understanding and analysis easy.\n\n```python\nfrom xai_sdk import Client\nfrom xai_sdk.chat import image, user\n\nclient = Client()\nchat = client.chat.create(model=\"grok-2-vision\")\n\nchat.append(\n user(\n \"Which animal looks happier in these images?\",\n image(\"https://images.unsplash.com/photo-1561037404-61cd46aa615b\"), # Puppy\n image(\"https://images.unsplash.com/photo-1514888286974-6c03e2ca1dba\") # Kitten\n )\n)\nresponse = chat.sample()\nprint(f\"Grok: {response.content}\")\n```\n\n## Advanced Features\n\nThe xAI SDK excels in advanced use cases, such as:\n\n- **Function Calling**: Define tools and let the model intelligently call them (see sync [function_calling.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/function_calling.py) and async [function_calling.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/function_calling.py)).\n- **Image Generation**: Generate images with image generation models (see sync [image_generation.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/image_generation.py) and async [image_generation.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/image_generation.py)).\n- **Image Understanding**: Analyze images with vision models (see sync [image_understanding.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/image_understanding.py) and async 
[image_understanding.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/image_understanding.py)).\n- **Structured Outputs**: Return model responses as structured objects in the form of Pydantic models (see sync [structured_outputs.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/structured_outputs.py) and async [structured_outputs.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/structured_outputs.py)).\n- **Reasoning Models**: Leverage reasoning-focused models with configurable effort levels (see sync [reasoning.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/reasoning.py) and async [reasoning.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/reasoning.py)).\n- **Deferred Chat**: Sample a long-running response from a model via polling (see sync [deferred_chat.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/deferred_chat.py) and async [deferred_chat.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/deferred_chat.py)).\n- **Tokenization**: Tokenize text with the tokenizer API (see sync [tokenizer.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/tokenizer.py) and async [tokenizer.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/tokenizer.py)).\n- **Models**: Retrieve information on different models available to you, including, name, aliases, token price, max prompt length etc (see sync [models.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/models.py) and async [models.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/models.py))\n- **Live Search**: Augment Grok's knowledge with up-to-date information from the web and \ud835\udd4f (see sync [search.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/sync/search.py) and async [search.py](https://github.com/xai-org/xai-sdk-python/tree/main/examples/aio/search.py))\n\n## Timeouts\n\nThe xAI SDK allows you to set a timeout for API requests during client initialization. This timeout applies to all RPCs and methods used with that client instance. The default timeout is 15 minutes (900 seconds).\n\nIt is not currently possible to specify timeouts for an individual RPC/client method.\n\nTo set a custom timeout, pass the `timeout` parameter when creating the `Client` or `AsyncClient`. The timeout is specified in seconds.\n\nExample for synchronous client:\n\n```python\nfrom xai_sdk import Client\n\n# Set timeout to 5 minutes (300 seconds)\nsync_client = Client(timeout=300)\n```\n\nExample for asynchronous client:\n\n```python\nfrom xai_sdk import AsyncClient\n\n# Set timeout to 5 minutes (300 seconds)\nasync_client = AsyncClient(timeout=300)\n```\n\nIn the case of a timeout, a `grpc.RpcError` (for synchronous clients) or `grpc.aio.AioRpcError` (for asynchronous clients) will be raised with the gRPC status code `grpc.StatusCode.DEADLINE_EXCEEDED`.\n\n## Retries\n\nThe xAI SDK has retries enabled by default for certain types of failed requests. If the service returns an `UNAVAILABLE` error, the SDK will automatically retry the request with exponential backoff. 
The default retry policy is configured as follows:\n\n- **Maximum Attempts**: 5\n- **Initial Backoff**: 0.1 seconds\n- **Maximum Backoff**: 1 second\n- **Backoff Multiplier**: 2\n\nThis means that after an initial failure, the SDK will wait 0.1 seconds before the first retry, then 0.2 seconds, 0.4 seconds, and so on, up to a maximum of 1 second between attempts, for a total of up to 5 attempts.\n\nYou can disable retries by setting the `grpc.enable_retries` channel option to `0` when initializing the client:\n\n```python\nfrom xai_sdk import Client\n\n# Disable retries\nsync_client = Client(channel_options=[(\"grpc.enable_retries\", 0)])\n```\n\nSimilarly, for the asynchronous client:\n\n```python\nfrom xai_sdk import AsyncClient\n\n# Disable retries\nasync_client = AsyncClient(channel_options=[(\"grpc.enable_retries\", 0)])\n```\n\n#### Custom Retry Policy\n\nYou can configure your own retry policy by setting the `grpc.service_config` channel option with a JSON string that defines the retry behavior. The JSON structure should follow the gRPC service config format. Here's an example of how to set a custom retry policy:\n\n```python\nimport json\nfrom xai_sdk import Client\n\n# Define a custom retry policy\ncustom_retry_policy = json.dumps({\n \"methodConfig\": [{\n \"name\": [{}], # Applies to all methods\n \"retryPolicy\": {\n \"maxAttempts\": 3, # Reduced number of attempts\n \"initialBackoff\": \"0.5s\", # Longer initial wait\n \"maxBackoff\": \"2s\", # Longer maximum wait\n \"backoffMultiplier\": 1.5, # Slower increase in wait time\n \"retryableStatusCodes\": [\"UNAVAILABLE\", \"RESOURCE_EXHAUSTED\"] # Additional status code for retry\n }\n }]\n})\n\n# Initialize client with custom retry policy\nsync_client = Client(channel_options=[\n (\"grpc.service_config\", custom_retry_policy)\n])\n```\n\nSimilarly, for the asynchronous client:\n\n```python\nimport json\nfrom xai_sdk import AsyncClient\n\n# Define a custom retry policy\ncustom_retry_policy = json.dumps({\n \"methodConfig\": [{\n \"name\": [{}], # Applies to all methods\n \"retryPolicy\": {\n \"maxAttempts\": 3, # Reduced number of attempts\n \"initialBackoff\": \"0.5s\", # Longer initial wait\n \"maxBackoff\": \"2s\", # Longer maximum wait\n \"backoffMultiplier\": 1.5, # Slower increase in wait time\n \"retryableStatusCodes\": [\"UNAVAILABLE\", \"RESOURCE_EXHAUSTED\"] # Additional status code for retry\n }\n }]\n})\n\n# Initialize async client with custom retry policy\nasync_client = AsyncClient(channel_options=[\n (\"grpc.service_config\", custom_retry_policy)\n])\n```\n\nIn this example, the custom policy reduces the maximum number of attempts to 3, increases the initial backoff to 0.5 seconds, sets a maximum backoff of 2 seconds, uses a smaller backoff multiplier of 1.5, and allows retries on both `UNAVAILABLE` and `RESOURCE_EXHAUSTED` status codes.\n\nNote that when setting a custom `grpc.service_config`, it will override the default retry policy.\n\n## Accessing Underlying Proto Objects\n\nIn rare cases, you might need to access the raw proto object returned from an API call. While the xAI SDK is designed to expose most commonly needed fields directly on the response objects for ease of use, there could be scenarios where accessing the underlying proto object is necessary for advanced or custom processing.\n\nYou can access the raw proto object on any response by using the `.proto` attribute. 
Here's an example of how to do this with a chat response:\n\n```python\nfrom xai_sdk import Client\nfrom xai_sdk.chat import user\n\nclient = Client()\nchat = client.chat.create(model=\"grok-3\")\nchat.append(user(\"Hello, how are you?\"))\nresponse = chat.sample()\n\n# Access the underlying proto object\n# In this case, this will be of type xai_sdk.proto.chat_pb2.GetChatCompletionResponse\nproto_object = response.proto\nprint(proto_object)\n```\n\nPlease note that you should rarely need to interact with the proto object directly, as the SDK is built to provide a more user-friendly interface to the data. Use this approach only when specific fields or data structures not exposed by the SDK are required for your application. If you find yourself needing to regularly access the proto object directly, please consider opening an issue so that we can improve the experience.\n\n## Error Codes\n\nWhen using the xAI SDK, you may encounter various error codes returned by the API. These errors are based on gRPC status codes and provide insight into what went wrong with a request. For the synchronous client (`Client`), errors will be of type `grpc.RpcError`, while for the asynchronous client (`AsyncClient`), errors will be of type `grpc.aio.AioRpcError`.\n\nBelow is a table of common gRPC status codes you might encounter when using the xAI SDK:\n\n| gRPC Status Code | Meaning | xAI SDK/API Context |\n|---------------------------|------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|\n| `UNKNOWN` | An unknown error occurred. | An unexpected issue occurred on the server side, not specifically related to the request. |\n| `INVALID_ARGUMENT` | The client specified an invalid argument. | An invalid argument was provided to the model/endpoint, such as incorrect parameters or malformed input.|\n| `DEADLINE_EXCEEDED` | The deadline for the request expired before the operation completed. | Raised if the request exceeds the timeout specified by the client (default is 900 seconds, configurable during client instantiation). |\n| `NOT_FOUND` | A specified resource was not found. | A requested model or resource does not exist. |\n| `PERMISSION_DENIED` | The caller does not have permission to execute the specified operation.| The API key is disabled, blocked, or lacks sufficient permissions to access a specific model or feature. |\n| `UNAUTHENTICATED` | The request does not have valid authentication credentials. | The API key is missing, invalid, or expired. |\n| `RESOURCE_EXHAUSTED` | A resource quota has been exceeded (e.g., rate limits). | The user has exceeded their API usage quota or rate limits for requests. |\n| `INTERNAL` | An internal error occurred. | An internal server error occurred on the xAI API side. |\n| `UNAVAILABLE` | The service is currently unavailable. This is often a transient error. | The model or endpoint invoked is temporarily down or there are connectivity issues. The SDK defaults to automatically retrying errors with this status code. |\n| `DATA_LOSS` | Unrecoverable data loss or corruption occurred. | Occurs when a user provides an image via URL in API calls (e.g., in a chat conversation) and the server fails to fetch the image from that URL. |\n\nThese error codes can help diagnose issues with API requests. 
When handling errors, ensure you check the specific status code to understand the nature of the problem and take appropriate action.\n\n## Versioning\n\nThe xAI SDK generally follows [Semantic Versioning (SemVer)](https://semver.org/) to ensure a clear and predictable approach to versioning. Semantic Versioning uses a three-part version number in the format `MAJOR.MINOR.PATCH`, where:\n\n- **MAJOR** version increments indicate backwards-incompatible API changes.\n- **MINOR** version increments indicate the addition of backward-compatible functionality.\n- **PATCH** version increments indicate backward-compatible bug fixes.\n\nThis approach helps developers understand the impact of upgrading to a new version of the SDK. We strive to maintain backward compatibility in minor and patch releases, ensuring that your applications continue to work seamlessly. However, please note that while we aim to restrict breaking changes to major version updates, some backwards incompatible changes to library internals may occasionally occur in minor or patch releases. These changes will typically not affect the public API, but if you are interacting with internal components or structures, we recommend reviewing release notes for each update to avoid unexpected issues.\n\nThis project maintains a [changelog](https://github.com/xai-org/xai-sdk-python/tree/main/CHANGELOG.md) such that developers can track updates and changes to the SDK as new versions are released.\n\n### Determining the Installed Version\n\nYou can easily check the version of the xAI SDK installed in your environment using either of the following methods:\n\n- **Using pip/uv**: Run the following command in your terminal to see the installed version of the SDK:\n ```bash\n pip show xai-sdk\n ```\n or \n\n ```bash\n uv pip show xai-sdk\n ```\n \n This will display detailed information about the package, including the version number.\n\n- **Programmatically in Python**: You can access the version information directly in your code by importing the SDK and checking the `__version__` attribute:\n ```python\n import xai_sdk\n print(xai_sdk.__version__)\n ```\n\nThese methods allow you to confirm which version of the xAI SDK you are currently using, which can be helpful for debugging or ensuring compatibility with specific features.\n\n## License\n\nThe xAI SDK is distributed under the Apache-2.0 License\n\n## Contributing\n\nSee the [documentation](https://github.com/xai-org/xai-sdk-python/tree/main/CONTRIBUTING.md) on contributing to this project.",
"bugtrack_url": null,
"license": null,
"summary": "The official Python SDK for the xAI API",
"version": "1.0.1",
"project_urls": {
"Documentation": "https://docs.x.ai",
"Homepage": "https://github.com/xai-org/xai-sdk-python",
"Repository": "https://github.com/xai-org/xai-sdk-python"
},
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "b20d58bdedf87393637cd064c844f5cd53efe1044686a0464a1d09c2210b3cea",
"md5": "0ba89136b80f56ab9004addb51be9bc0",
"sha256": "7c143b78254cd336118953e0ddf7b895895c9b7f3051a9c17df7898e2ccd1a4c"
},
"downloads": -1,
"filename": "xai_sdk-1.0.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "0ba89136b80f56ab9004addb51be9bc0",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 109585,
"upload_time": "2025-07-24T20:54:22",
"upload_time_iso_8601": "2025-07-24T20:54:22.093361Z",
"url": "https://files.pythonhosted.org/packages/b2/0d/58bdedf87393637cd064c844f5cd53efe1044686a0464a1d09c2210b3cea/xai_sdk-1.0.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "8b03cb264cab08b3c97acd00634fe1331f6d26ad94bc0f02f44e46e1fda5573f",
"md5": "803f6eebac1e0019f07bac342e32d036",
"sha256": "bd01762f53de571e410a6144a5597b73833d1fdb1d13faed558dcc7931a8e0f9"
},
"downloads": -1,
"filename": "xai_sdk-1.0.1.tar.gz",
"has_sig": false,
"md5_digest": "803f6eebac1e0019f07bac342e32d036",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 218690,
"upload_time": "2025-07-24T20:54:23",
"upload_time_iso_8601": "2025-07-24T20:54:23.533750Z",
"url": "https://files.pythonhosted.org/packages/8b/03/cb264cab08b3c97acd00634fe1331f6d26ad94bc0f02f44e46e1fda5573f/xai_sdk-1.0.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-24 20:54:23",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "xai-org",
"github_project": "xai-sdk-python",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "xai-sdk"
}