# Google Gen AI SDK
[PyPI](https://pypi.org/project/google-genai/)

[Downloads](https://pypistats.org/packages/google-genai)
--------
**Documentation:** https://googleapis.github.io/python-genai/
-----
The Google Gen AI Python SDK provides an interface for developers to integrate
Google's generative models into their Python applications. It supports the
[Gemini Developer API](https://ai.google.dev/gemini-api/docs) and
[Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview)
APIs.
## Installation
```sh
pip install google-genai
```
## Imports
```python
from google import genai
from google.genai import types
```
## Create a client
Please run one of the following code blocks to create a client for
different services ([Gemini Developer API](https://ai.google.dev/gemini-api/docs) or [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview)).
```python
from google import genai
# Only run this block for Gemini Developer API
client = genai.Client(api_key='GEMINI_API_KEY')
```
```python
from google import genai
# Only run this block for Vertex AI API
client = genai.Client(
    vertexai=True, project='your-project-id', location='us-central1'
)
```
**(Optional) Using environment variables:**
You can create a client by configuring the necessary environment variables.
Configuration setup instructions depend on whether you're using the Gemini
Developer API or the Gemini API in Vertex AI.
**Gemini Developer API:** Set the `GEMINI_API_KEY` or `GOOGLE_API_KEY`.
It will automatically be picked up by the client. It's recommended that you
set only one of those variables, but if both are set, `GOOGLE_API_KEY` takes
precedence.
```bash
export GEMINI_API_KEY='your-api-key'
```
**Gemini API on Vertex AI:** Set `GOOGLE_GENAI_USE_VERTEXAI`,
`GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION`, as shown below:
```bash
export GOOGLE_GENAI_USE_VERTEXAI=true
export GOOGLE_CLOUD_PROJECT='your-project-id'
export GOOGLE_CLOUD_LOCATION='us-central1'
```
```python
from google import genai
client = genai.Client()
```
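You can verify which backend the client is using through its `vertexai`
attribute (the same flag the Caches and Tunings examples below branch on):

```python
from google import genai

client = genai.Client()
# True when the client targets Vertex AI, False for the Gemini Developer API
print(client.vertexai)
```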
### API Selection
By default, the SDK uses the beta API endpoints provided by Google to support
preview features in the APIs. The stable API endpoints can be selected by
setting the API version to `v1`.
To set the API version use `http_options`. For example, to set the API version
to `v1` for Vertex AI:
```python
from google import genai
from google.genai import types
client = genai.Client(
    vertexai=True,
    project='your-project-id',
    location='us-central1',
    http_options=types.HttpOptions(api_version='v1')
)
```
To set the API version to `v1alpha` for the Gemini Developer API:
```python
from google import genai
from google.genai import types
client = genai.Client(
    api_key='GEMINI_API_KEY',
    http_options=types.HttpOptions(api_version='v1alpha')
)
```
### Faster async client option: Aiohttp
By default we use httpx for both sync and async client implementations. For
faster async performance, you may install `google-genai[aiohttp]`. In the Gen AI
SDK we configure `trust_env=True` to match the default behavior of httpx.
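To install the extra:

```sh
pip install google-genai[aiohttp]
```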
Additional args of `aiohttp.ClientSession.request()` ([see _RequestOptions args](https://github.com/aio-libs/aiohttp/blob/v3.12.13/aiohttp/client.py#L170)) can be
passed as follows:
```python
from google.genai import Client, types

http_options = types.HttpOptions(
    async_client_args={'cookies': ..., 'ssl': ...},
)

client = Client(..., http_options=http_options)
```
### Proxy
Both the httpx and aiohttp libraries read proxy settings from environment
variables via `urllib.request.getproxies`. Before client initialization, you may
set a proxy (and an optional SSL_CERT_FILE) through the environment variables:
```bash
export HTTPS_PROXY='http://username:password@proxy_uri:port'
export SSL_CERT_FILE='client.pem'
```
If you need a `socks5` proxy, httpx [supports](https://www.python-httpx.org/advanced/proxies/#socks) it when passed via args to
`httpx.Client()`. Install `httpx[socks]` to use it:
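```sh
pip install httpx[socks]
```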
Then, you can pass it through the following way:
```python
from google.genai import Client, types

http_options = types.HttpOptions(
    client_args={'proxy': 'socks5://user:pass@host:port'},
    async_client_args={'proxy': 'socks5://user:pass@host:port'},
)

client = Client(..., http_options=http_options)
```
## Types
Parameter types can be specified as either dictionaries (`TypedDict`) or
[Pydantic Models](https://pydantic.readthedocs.io/en/stable/model.html).
Pydantic model types are available in the `types` module.
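Both forms are interchangeable; as a quick sketch (assuming a `client` from the
'Create a client' section above), the same generation config can be passed
either way:

```python
from google.genai import types

# Config as a plain dictionary (TypedDict-style)
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Why is the sky blue?',
    config={'temperature': 0.3, 'max_output_tokens': 100},
)

# The equivalent Pydantic model from the types module
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Why is the sky blue?',
    config=types.GenerateContentConfig(temperature=0.3, max_output_tokens=100),
)
```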
## Models
The `client.models` module exposes model inferencing and model getters.
See the 'Create a client' section above to initialize a client.
### Generate Content
#### with text content
```python
response = client.models.generate_content(
    model='gemini-2.0-flash-001', contents='Why is the sky blue?'
)
print(response.text)
```
#### with uploaded file (Gemini Developer API only)
Download the file in the console:
```sh
!wget -q https://storage.googleapis.com/generativeai-downloads/data/a11.txt
```
Python code:
```python
file = client.files.upload(file='a11.txt')
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents=['Could you summarize this file?', file]
)
print(response.text)
```
#### How to structure `contents` argument for `generate_content`
The SDK always converts the inputs to the `contents` argument into
`list[types.Content]`.
The following shows some common ways to provide your inputs.
##### Provide a `list[types.Content]`
This is the canonical way to provide contents; the SDK will not do any conversion.
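For example:

```python
from google.genai import types

contents = [
    types.Content(
        role='user',
        parts=[types.Part.from_text(text='Why is the sky blue?')],
    )
]
```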
##### Provide a `types.Content` instance
```python
from google.genai import types
contents = types.Content(
    role='user',
    parts=[types.Part.from_text(text='Why is the sky blue?')]
)
```
The SDK converts this to:
```python
[
    types.Content(
        role='user',
        parts=[types.Part.from_text(text='Why is the sky blue?')]
    )
]
```
##### Provide a string
```python
contents='Why is the sky blue?'
```
The SDK will assume this is a text part, and it converts this into the following:
```python
[
    types.UserContent(
        parts=[
            types.Part.from_text(text='Why is the sky blue?')
        ]
    )
]
```
`types.UserContent` is a subclass of `types.Content` whose `role` field is
fixed to `user`.
##### Provide a list of strings
```python
contents=['Why is the sky blue?', 'Why is the cloud white?']
```
The SDK assumes these are two text parts and converts them into a single
content, like the following:
```python
[
    types.UserContent(
        parts=[
            types.Part.from_text(text='Why is the sky blue?'),
            types.Part.from_text(text='Why is the cloud white?'),
        ]
    )
]
```
Again, `types.UserContent` is a subclass of `types.Content` whose `role` field
is fixed to `user`.
##### Provide a function call part
```python
from google.genai import types
contents = types.Part.from_function_call(
    name='get_weather_by_location',
    args={'location': 'Boston'}
)
```
The SDK converts a function call part to a content with a `model` role:
```python
[
    types.ModelContent(
        parts=[
            types.Part.from_function_call(
                name='get_weather_by_location',
                args={'location': 'Boston'}
            )
        ]
    )
]
```
`types.ModelContent` is a subclass of `types.Content` whose `role` field is
fixed to `model`.
##### Provide a list of function call parts
```python
from google.genai import types
contents = [
    types.Part.from_function_call(
        name='get_weather_by_location',
        args={'location': 'Boston'}
    ),
    types.Part.from_function_call(
        name='get_weather_by_location',
        args={'location': 'New York'}
    ),
]
```
The SDK converts a list of function call parts to a single content with a `model` role:
```python
[
    types.ModelContent(
        parts=[
            types.Part.from_function_call(
                name='get_weather_by_location',
                args={'location': 'Boston'}
            ),
            types.Part.from_function_call(
                name='get_weather_by_location',
                args={'location': 'New York'}
            )
        ]
    )
]
```
Again, `types.ModelContent` is a subclass of `types.Content` whose `role`
field is fixed to `model`.
##### Provide a non function call part
```python
from google.genai import types
contents = types.Part.from_uri(
    file_uri='gs://generativeai-downloads/images/scones.jpg',
    mime_type='image/jpeg',
)
```
The SDK converts all non function call parts into a content with a `user` role.
```python
[
    types.UserContent(parts=[
        types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg',
        )
    ])
]
```
##### Provide a list of non function call parts
```python
from google.genai import types
contents = [
    types.Part.from_text(text='What is this image about?'),
    types.Part.from_uri(
        file_uri='gs://generativeai-downloads/images/scones.jpg',
        mime_type='image/jpeg',
    )
]
```
The SDK will convert the list of parts into a content with a `user` role:
```python
[
    types.UserContent(
        parts=[
            types.Part.from_text(text='What is this image about?'),
            types.Part.from_uri(
                file_uri='gs://generativeai-downloads/images/scones.jpg',
                mime_type='image/jpeg',
            )
        ]
    )
]
```
##### Mix types in contents
You can also provide a list of `types.ContentUnion`. The SDK leaves items of
`types.Content` as is, groups consecutive non function call parts into a
single `types.UserContent`, and groups consecutive function call parts into
a single `types.ModelContent`.
If you put a list within a list, the inner list can only contain
`types.PartUnion` items. The SDK will convert the inner list into a single
`types.UserContent`.
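A short sketch combining these rules (the string and the file part are grouped
into one `types.UserContent`; the function call part becomes its own
`types.ModelContent`):

```python
from google.genai import types

contents = [
    # Two consecutive non function call parts: grouped into one UserContent
    'What is this image about?',
    types.Part.from_uri(
        file_uri='gs://generativeai-downloads/images/scones.jpg',
        mime_type='image/jpeg',
    ),
    # A function call part: converted into its own ModelContent
    types.Part.from_function_call(
        name='get_weather_by_location',
        args={'location': 'Boston'},
    ),
]
```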
### System Instructions and Other Configs
The output of the model can be influenced by several optional settings
available in `generate_content`'s `config` parameter. For example, increasing
`max_output_tokens` is essential for longer model responses. To make a model
more deterministic, lower the `temperature` parameter; values near 0 minimize
variability. Capabilities and parameter defaults for each model are shown in the
[Vertex AI docs](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash)
and [Gemini API docs](https://ai.google.dev/gemini-api/docs/models), respectively.
```python
from google.genai import types
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='high',
    config=types.GenerateContentConfig(
        system_instruction='I say high, you say low',
        max_output_tokens=3,
        temperature=0.3,
    ),
)
print(response.text)
```
### Typed Config
All API methods support Pydantic types for parameters as well as
dictionaries. You can get the type from `google.genai.types`.
```python
from google.genai import types
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents=types.Part.from_text(text='Why is the sky blue?'),
    config=types.GenerateContentConfig(
        temperature=0,
        top_p=0.95,
        top_k=20,
        candidate_count=1,
        seed=5,
        max_output_tokens=100,
        stop_sequences=['STOP!'],
        presence_penalty=0.0,
        frequency_penalty=0.0,
    ),
)
print(response.text)
```
### List Base Models
To retrieve tuned models, see [list tuned models](#list-tuned-models).
```python
for model in client.models.list():
    print(model)
```
```python
pager = client.models.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```
#### List Base Models (Asynchronous)
```python
async for job in await client.aio.models.list():
    print(job)
```
```python
async_pager = await client.aio.models.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```
### Safety Settings
```python
from google.genai import types
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Say something bad.',
    config=types.GenerateContentConfig(
        safety_settings=[
            types.SafetySetting(
                category='HARM_CATEGORY_HATE_SPEECH',
                threshold='BLOCK_ONLY_HIGH',
            )
        ]
    ),
)
print(response.text)
```
### Function Calling
#### Automatic Python function Support
You can pass a Python function directly as a tool; by default it will be
automatically invoked and its response passed back to the model.
```python
from google.genai import types
def get_current_weather(location: str) -> str:
    """Returns the current weather.

    Args:
        location: The city and state, e.g. San Francisco, CA
    """
    return 'sunny'


response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='What is the weather like in Boston?',
    config=types.GenerateContentConfig(tools=[get_current_weather]),
)
print(response.text)
```
#### Disabling automatic function calling
If you pass a Python function as a tool directly and do not want it invoked
automatically, you can disable automatic function calling as follows:
```python
from google.genai import types
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='What is the weather like in Boston?',
    config=types.GenerateContentConfig(
        tools=[get_current_weather],
        automatic_function_calling=types.AutomaticFunctionCallingConfig(
            disable=True
        ),
    ),
)
```
With automatic function calling disabled, you will get a list of function call
parts in the response:
```python
from typing import List, Optional

function_calls: Optional[List[types.FunctionCall]] = response.function_calls
```
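You can then dispatch these calls to your own functions; a minimal sketch,
assuming the `get_current_weather` function above:

```python
for function_call in response.function_calls:
    if function_call.name == 'get_current_weather':
        # FunctionCall.args is a dict of the arguments chosen by the model
        print(get_current_weather(**function_call.args))
```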
#### Manually declare and invoke a function for function calling
If you don't want to use the automatic function support, you can manually
declare the function and invoke it.
The following example shows how to declare a function and pass it as a tool.
Then you will receive a function call part in the response.
```python
from google.genai import types
function = types.FunctionDeclaration(
    name='get_current_weather',
    description='Get the current weather in a given location',
    parameters=types.Schema(
        type='OBJECT',
        properties={
            'location': types.Schema(
                type='STRING',
                description='The city and state, e.g. San Francisco, CA',
            ),
        },
        required=['location'],
    ),
)

tool = types.Tool(function_declarations=[function])

response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='What is the weather like in Boston?',
    config=types.GenerateContentConfig(tools=[tool]),
)

print(response.function_calls[0])
```
After you receive the function call part from the model, you can invoke the
function to get the function response, and then pass the function response
back to the model. The following example shows how to do it for a simple
function invocation.
```python
from google.genai import types
user_prompt_content = types.Content(
    role='user',
    parts=[types.Part.from_text(text='What is the weather like in Boston?')],
)
function_call_part = response.function_calls[0]
function_call_content = response.candidates[0].content

try:
    function_result = get_current_weather(**function_call_part.args)
    function_response = {'result': function_result}
except Exception as e:
    # Instead of raising the exception, you can let the model handle it
    function_response = {'error': str(e)}

function_response_part = types.Part.from_function_response(
    name=function_call_part.name,
    response=function_response,
)
function_response_content = types.Content(
    role='tool', parts=[function_response_part]
)

response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents=[
        user_prompt_content,
        function_call_content,
        function_response_content,
    ],
    config=types.GenerateContentConfig(
        tools=[tool],
    ),
)
print(response.text)
```
#### Function calling with `ANY` tools config mode
If you configure the function calling mode to be `ANY`, the model will always
return function call parts. If you also pass a Python function as a tool, by
default the SDK will perform automatic function calling until the number of
remote calls exceeds the maximum for automatic function calling (default: 10).
If you'd like to disable automatic function calling in `ANY` mode:
```python
from google.genai import types
def get_current_weather(location: str) -> str:
    """Returns the current weather.

    Args:
        location: The city and state, e.g. San Francisco, CA
    """
    return "sunny"

response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="What is the weather like in Boston?",
    config=types.GenerateContentConfig(
        tools=[get_current_weather],
        automatic_function_calling=types.AutomaticFunctionCallingConfig(
            disable=True
        ),
        tool_config=types.ToolConfig(
            function_calling_config=types.FunctionCallingConfig(mode='ANY')
        ),
    ),
)
```
If you'd like to allow `x` automatic function calling turns, configure the
maximum remote calls to be `x + 1`. For example, to allow one automatic
function calling turn:
```python
from google.genai import types
def get_current_weather(location: str) -> str:
    """Returns the current weather.

    Args:
        location: The city and state, e.g. San Francisco, CA
    """
    return "sunny"

response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="What is the weather like in Boston?",
    config=types.GenerateContentConfig(
        tools=[get_current_weather],
        automatic_function_calling=types.AutomaticFunctionCallingConfig(
            maximum_remote_calls=2
        ),
        tool_config=types.ToolConfig(
            function_calling_config=types.FunctionCallingConfig(mode='ANY')
        ),
    ),
)
```
#### Model Context Protocol (MCP) support (experimental)
Built-in [MCP](https://modelcontextprotocol.io/introduction) support is an
experimental feature. You can pass a local MCP server as a tool directly.
```python
import asyncio
from datetime import datetime
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from google import genai

client = genai.Client()

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="npx",  # Executable
    args=["-y", "@philschmid/weather-mcp"],  # MCP Server
    env=None,  # Optional environment variables
)

async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Prompt to get the weather for the current day in London.
            prompt = f"What is the weather in London in {datetime.now().strftime('%Y-%m-%d')}?"

            # Initialize the connection between client and server
            await session.initialize()

            # Send request to the model with MCP function declarations;
            # the session itself is passed as a tool and is invoked via
            # automatic function calling
            response = await client.aio.models.generate_content(
                model="gemini-2.5-flash",
                contents=prompt,
                config=genai.types.GenerateContentConfig(
                    temperature=0,
                    tools=[session],
                ),
            )
            print(response.text)
# Start the asyncio event loop and run the main function
asyncio.run(run())
```
### JSON Response Schema
However you define your schema, don't duplicate it in your input prompt,
including by giving examples of expected JSON output. If you do, the generated
output might be lower in quality.
#### Pydantic Model Schema support
Schemas can be provided as Pydantic Models, or as the equivalent schema
dictionary (shown in the second example below).
```python
from pydantic import BaseModel
from google.genai import types


class CountryInfo(BaseModel):
    name: str
    population: int
    capital: str
    continent: str
    gdp: int
    official_language: str
    total_area_sq_mi: int


response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Give me information for the United States.',
    config=types.GenerateContentConfig(
        response_mime_type='application/json',
        response_schema=CountryInfo,
    ),
)
print(response.text)
```
```python
from google.genai import types
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Give me information for the United States.',
    config=types.GenerateContentConfig(
        response_mime_type='application/json',
        response_schema={
            'required': [
                'name',
                'population',
                'capital',
                'continent',
                'gdp',
                'official_language',
                'total_area_sq_mi',
            ],
            'properties': {
                'name': {'type': 'STRING'},
                'population': {'type': 'INTEGER'},
                'capital': {'type': 'STRING'},
                'continent': {'type': 'STRING'},
                'gdp': {'type': 'INTEGER'},
                'official_language': {'type': 'STRING'},
                'total_area_sq_mi': {'type': 'INTEGER'},
            },
            'type': 'OBJECT',
        },
    ),
)
print(response.text)
```
### Enum Response Schema
#### Text Response
You can set `response_mime_type` to `'text/x.enum'` to return one of the enum
values defined in your schema as the response.
```python
from enum import Enum

class InstrumentEnum(Enum):
    PERCUSSION = 'Percussion'
    STRING = 'String'
    WOODWIND = 'Woodwind'
    BRASS = 'Brass'
    KEYBOARD = 'Keyboard'

response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='What instrument plays multiple notes at once?',
    config={
        'response_mime_type': 'text/x.enum',
        'response_schema': InstrumentEnum,
    },
)
print(response.text)
```
#### JSON Response
You can also set `response_mime_type` to `'application/json'`; the response
will be identical, but returned as a JSON string (in quotes).
```python
from enum import Enum

class InstrumentEnum(Enum):
    PERCUSSION = 'Percussion'
    STRING = 'String'
    WOODWIND = 'Woodwind'
    BRASS = 'Brass'
    KEYBOARD = 'Keyboard'

response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='What instrument plays multiple notes at once?',
    config={
        'response_mime_type': 'application/json',
        'response_schema': InstrumentEnum,
    },
)
print(response.text)
```
### Generate Content (Synchronous Streaming)
Generate content in streaming format so that the model output streams back to
you, rather than being returned all at once.
#### Streaming for text content
```python
for chunk in client.models.generate_content_stream(
    model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
):
    print(chunk.text, end='')
```
#### Streaming for image content
If your image is stored in [Google Cloud Storage](https://cloud.google.com/storage),
you can use the `from_uri` class method to create a `Part` object.
```python
from google.genai import types
for chunk in client.models.generate_content_stream(
    model='gemini-2.0-flash-001',
    contents=[
        'What is this image about?',
        types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg',
        ),
    ],
):
    print(chunk.text, end='')
```
If your image is stored in your local file system, you can read it in as bytes
data and use the `from_bytes` class method to create a `Part` object.
```python
from google.genai import types
YOUR_IMAGE_PATH = 'your_image_path'
YOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'
with open(YOUR_IMAGE_PATH, 'rb') as f:
    image_bytes = f.read()

for chunk in client.models.generate_content_stream(
    model='gemini-2.0-flash-001',
    contents=[
        'What is this image about?',
        types.Part.from_bytes(data=image_bytes, mime_type=YOUR_IMAGE_MIME_TYPE),
    ],
):
    print(chunk.text, end='')
```
### Generate Content (Asynchronous Non Streaming)
`client.aio` exposes all the analogous [`async` methods](https://docs.python.org/3/library/asyncio.html)
that are available on `client`; this applies to all the modules.
For example, `client.aio.models.generate_content` is the `async` version
of `client.models.generate_content`:
```python
response = await client.aio.models.generate_content(
    model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
)
print(response.text)
```
### Generate Content (Asynchronous Streaming)
```python
async for chunk in await client.aio.models.generate_content_stream(
    model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
):
    print(chunk.text, end='')
```
### Count Tokens and Compute Tokens
```python
response = client.models.count_tokens(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
)
print(response)
```
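The returned `CountTokensResponse` also exposes the aggregate count directly:

```python
response = client.models.count_tokens(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
)
# Total number of tokens in the input
print(response.total_tokens)
```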
#### Compute Tokens
Compute tokens is only supported in Vertex AI.
```python
response = client.models.compute_tokens(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
)
print(response)
```
##### Async
```python
response = await client.aio.models.count_tokens(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
)
print(response)
```
### Embed Content
```python
response = client.models.embed_content(
    model='text-embedding-004',
    contents='why is the sky blue?',
)
print(response)
```
```python
from google.genai import types
# multiple contents with config
response = client.models.embed_content(
    model='text-embedding-004',
    contents=['why is the sky blue?', 'What is your age?'],
    config=types.EmbedContentConfig(output_dimensionality=10),
)
print(response)
```
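Each embedding in the response exposes its vector through the `values` field;
a quick check against the configured dimensionality:

```python
for embedding in response.embeddings:
    # 10 floats per embedding, matching output_dimensionality above
    print(len(embedding.values))
```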
### Imagen
#### Generate Images
Support for generating images in the Gemini Developer API is behind an allowlist.
```python
from google.genai import types
# Generate Image
response1 = client.models.generate_images(
    model='imagen-3.0-generate-002',
    prompt='An umbrella in the foreground, and a rainy night sky in the background',
    config=types.GenerateImagesConfig(
        number_of_images=1,
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response1.generated_images[0].image.show()
```
#### Upscale Image
Upscale image is only supported in Vertex AI.
```python
from google.genai import types
# Upscale the generated image from above
response2 = client.models.upscale_image(
    model='imagen-3.0-generate-001',
    image=response1.generated_images[0].image,
    upscale_factor='x2',
    config=types.UpscaleImageConfig(
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response2.generated_images[0].image.show()
```
#### Edit Image
Edit image uses a separate model from generate and upscale.
Edit image is only supported in Vertex AI.
```python
# Edit the generated image from above
from google.genai import types
from google.genai.types import RawReferenceImage, MaskReferenceImage
raw_ref_image = RawReferenceImage(
    reference_id=1,
    reference_image=response1.generated_images[0].image,
)

# Model computes a mask of the background
mask_ref_image = MaskReferenceImage(
    reference_id=2,
    config=types.MaskReferenceConfig(
        mask_mode='MASK_MODE_BACKGROUND',
        mask_dilation=0,
    ),
)

response3 = client.models.edit_image(
    model='imagen-3.0-capability-001',
    prompt='Sunlight and clear sky',
    reference_images=[raw_ref_image, mask_ref_image],
    config=types.EditImageConfig(
        edit_mode='EDIT_MODE_INPAINT_INSERTION',
        number_of_images=1,
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response3.generated_images[0].image.show()
```
### Veo
Support for generating videos is in public preview.
#### Generate Videos (Text to Video)
```python
import time
from google.genai import types

# Create operation
operation = client.models.generate_videos(
    model='veo-2.0-generate-001',
    prompt='A neon hologram of a cat driving at top speed',
    config=types.GenerateVideosConfig(
        number_of_videos=1,
        duration_seconds=5,
        enhance_prompt=True,
    ),
)

# Poll operation
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0].video
video.show()
```
#### Generate Videos (Image to Video)
```python
import time
from google.genai import types

# Read local image (uses mimetypes.guess_type to infer mime type)
image = types.Image.from_file("local/path/file.png")

# Create operation
operation = client.models.generate_videos(
    model='veo-2.0-generate-001',
    # Prompt is optional if image is provided
    prompt='Night sky',
    image=image,
    config=types.GenerateVideosConfig(
        number_of_videos=1,
        duration_seconds=5,
        enhance_prompt=True,
        # Can also pass an Image into last_frame for frame interpolation
    ),
)

# Poll operation
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0].video
video.show()
```
#### Generate Videos (Video to Video)
Currently, only Vertex supports Video to Video generation (Video extension).
```python
import time
from google.genai import types

# Read local video (uses mimetypes.guess_type to infer mime type)
video = types.Video.from_file("local/path/video.mp4")

# Create operation
operation = client.models.generate_videos(
    model='veo-2.0-generate-001',
    # Prompt is optional if Video is provided
    prompt='Night sky',
    # Input video must be in GCS
    video=types.Video(
        uri="gs://bucket-name/inputs/videos/cat_driving.mp4",
    ),
    config=types.GenerateVideosConfig(
        number_of_videos=1,
        duration_seconds=5,
        enhance_prompt=True,
    ),
)

# Poll operation
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0].video
video.show()
```
## Chats
Create a chat session to start a multi-turn conversation with the model. Then
use the `chat.send_message` function multiple times within the same chat
session so that the model can reflect on its previous responses (i.e., engage
in an ongoing conversation). See the 'Create a client' section above to
initialize a client.
### Send Message (Synchronous Non-Streaming)
```python
chat = client.chats.create(model='gemini-2.0-flash-001')
response = chat.send_message('tell me a story')
print(response.text)
response = chat.send_message('summarize the story you told me in 1 sentence')
print(response.text)
```
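A sketch of inspecting the accumulated turns, assuming the chat object's
`get_history` accessor:

```python
for content in chat.get_history():
    # Each item is a types.Content with a role ('user' or 'model') and parts
    print(content.role, ':', content.parts[0].text)
```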
### Send Message (Synchronous Streaming)
```python
chat = client.chats.create(model='gemini-2.0-flash-001')
for chunk in chat.send_message_stream('tell me a story'):
    print(chunk.text)
```
### Send Message (Asynchronous Non-Streaming)
```python
chat = client.aio.chats.create(model='gemini-2.0-flash-001')
response = await chat.send_message('tell me a story')
print(response.text)
```
### Send Message (Asynchronous Streaming)
```python
chat = client.aio.chats.create(model='gemini-2.0-flash-001')
async for chunk in await chat.send_message_stream('tell me a story'):
    print(chunk.text)
```
## Files
Files are only supported in the Gemini Developer API. See the 'Create a client'
section above to initialize a client.
```sh
!gsutil cp gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf .
!gsutil cp gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf .
```
### Upload
```python
file1 = client.files.upload(file='2312.11805v3.pdf')
file2 = client.files.upload(file='2403.05530.pdf')
print(file1)
print(file2)
```
### Get
```python
file1 = client.files.upload(file='2312.11805v3.pdf')
file_info = client.files.get(name=file1.name)
```
### Delete
```python
file3 = client.files.upload(file='2312.11805v3.pdf')
client.files.delete(name=file3.name)
```
## Caches
`client.caches` contains the control plane APIs for cached content. See the
'Create a client' section above to initialize a client.
### Create
```python
from google.genai import types

if client.vertexai:
    file_uris = [
        'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',
        'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',
    ]
else:
    file_uris = [file1.uri, file2.uri]

cached_content = client.caches.create(
    model='gemini-2.0-flash-001',
    config=types.CreateCachedContentConfig(
        contents=[
            types.Content(
                role='user',
                parts=[
                    types.Part.from_uri(
                        file_uri=file_uris[0], mime_type='application/pdf'
                    ),
                    types.Part.from_uri(
                        file_uri=file_uris[1],
                        mime_type='application/pdf',
                    ),
                ],
            )
        ],
        system_instruction='What is the sum of the two pdfs?',
        display_name='test cache',
        ttl='3600s',
    ),
)
```
### Get
```python
cached_content = client.caches.get(name=cached_content.name)
```
### Generate Content with Caches
```python
from google.genai import types
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Summarize the pdfs',
    config=types.GenerateContentConfig(
        cached_content=cached_content.name,
    ),
)
print(response.text)
```
## Tunings
`client.tunings` contains tuning job APIs and supports supervised fine-tuning
through `tune`. See the 'Create a client' section above to initialize a
client.
### Tune
- Vertex AI supports tuning from GCS source or from a Vertex Multimodal Dataset
- Gemini Developer API supports tuning from inline examples
```python
from google.genai import types

if client.vertexai:
    model = 'gemini-2.0-flash-001'
    training_dataset = types.TuningDataset(
        # or gcs_uri=my_vertex_multimodal_dataset.resource_name
        gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',
    )
else:
    model = 'models/gemini-2.0-flash-001'
    training_dataset = types.TuningDataset(
        examples=[
            types.TuningExample(
                text_input=f'Input text {i}',
                output=f'Output text {i}',
            )
            for i in range(5)
        ],
    )
```
```python
from google.genai import types
tuning_job = client.tunings.tune(
    base_model=model,
    training_dataset=training_dataset,
    config=types.CreateTuningJobConfig(
        epoch_count=1, tuned_model_display_name='test_dataset_examples model'
    ),
)
print(tuning_job)
```
### Get Tuning Job
```python
tuning_job = client.tunings.get(name=tuning_job.name)
print(tuning_job)
```
```python
import time

running_states = set(
    [
        'JOB_STATE_PENDING',
        'JOB_STATE_RUNNING',
    ]
)

while tuning_job.state in running_states:
    print(tuning_job.state)
    tuning_job = client.tunings.get(name=tuning_job.name)
    time.sleep(10)
```
#### Use Tuned Model
```python
response = client.models.generate_content(
    model=tuning_job.tuned_model.endpoint,
    contents='why is the sky blue?',
)
print(response.text)
```
### Get Tuned Model
```python
tuned_model = client.models.get(model=tuning_job.tuned_model.model)
print(tuned_model)
```
### List Tuned Models
To retrieve base models, see [list base models](#list-base-models).
```python
for model in client.models.list(config={'page_size': 10, 'query_base': False}):
    print(model)
```
```python
pager = client.models.list(config={'page_size': 10, 'query_base': False})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```
#### Async
```python
async for job in await client.aio.models.list(config={'page_size': 10, 'query_base': False}):
    print(job)
```
```python
async_pager = await client.aio.models.list(config={'page_size': 10, 'query_base': False})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```
### Update Tuned Model
```python
from google.genai import types
model = pager[0]

model = client.models.update(
    model=model.name,
    config=types.UpdateModelConfig(
        display_name='my tuned model', description='my tuned model description'
    ),
)
print(model)
```
### List Tuning Jobs
```python
for job in client.tunings.list(config={'page_size': 10}):
    print(job)
```
```python
pager = client.tunings.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```
#### Async
```python
async for job in await client.aio.tunings.list(config={'page_size': 10}):
    print(job)
```
```python
async_pager = await client.aio.tunings.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```
## Batch Prediction
Only supported in Vertex AI. See the 'Create a client' section above to
initialize a client.
### Create
```python
# Specify model and source file only, destination and job display name will be auto-populated
job = client.batches.create(
    model='gemini-2.0-flash-001',
    src='bq://my-project.my-dataset.my-table',
)
job
```
```python
# Get a job by name
job = client.batches.get(name=job.name)
job.state
```
```python
import time

completed_states = set(
    [
        'JOB_STATE_SUCCEEDED',
        'JOB_STATE_FAILED',
        'JOB_STATE_CANCELLED',
        'JOB_STATE_PAUSED',
    ]
)

while job.state not in completed_states:
    print(job.state)
    job = client.batches.get(name=job.name)
    time.sleep(30)
job
```
### List
```python
from google.genai import types

for job in client.batches.list(config=types.ListBatchJobsConfig(page_size=10)):
    print(job)
```
```python
pager = client.batches.list(config=types.ListBatchJobsConfig(page_size=10))
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```
#### Async
```python
async for job in await client.aio.batches.list(
    config=types.ListBatchJobsConfig(page_size=10)
):
    print(job)
```
```python
async_pager = await client.aio.batches.list(
    config=types.ListBatchJobsConfig(page_size=10)
)
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```
### Delete
```python
# Delete the job resource
delete_job = client.batches.delete(name=job.name)
delete_job
```
## Error Handling
To handle errors raised by the model service, the SDK provides this [APIError](https://github.com/googleapis/python-genai/blob/main/google/genai/errors.py) class.
```python
from google.genai import errors
try:
    client.models.generate_content(
        model="invalid-model-name",
        contents="What is your name?",
    )
except errors.APIError as e:
    print(e.code)  # 404
    print(e.message)
```
## Extra Request Body
The `extra_body` field in `HttpOptions` accepts a dictionary of additional JSON
properties to include in the request body. This can be used to access new or
experimental backend features that are not yet formally supported in the SDK.
The structure of the dictionary must match the backend API's request structure.
- VertexAI backend API docs: https://cloud.google.com/vertex-ai/docs/reference/rest
- GeminiAPI backend API docs: https://ai.google.dev/api/rest
```python
from google.genai import types

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="What is the weather in Boston? and how about Sunnyvale?",
    config=types.GenerateContentConfig(
        tools=[get_current_weather],
        http_options=types.HttpOptions(
            extra_body={
                'tool_config': {
                    'function_calling_config': {'mode': 'COMPOSITIONAL'}
                }
            }
        ),
    ),
)
```
Raw data
{
"_id": null,
"home_page": null,
"name": "google-genai",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": null,
"author": null,
"author_email": "Google LLC <googleapis-packages@google.com>",
"download_url": "https://files.pythonhosted.org/packages/7f/59/c9b9148c8702b60253f5a251f16ae436534c5d4362da193c9db05ac9858c/google_genai-1.25.0.tar.gz",
"platform": null,
"description": "# Google Gen AI SDK\n\n[](https://pypi.org/project/google-genai/)\n\n[](https://pypistats.org/packages/google-genai)\n\n--------\n**Documentation:** https://googleapis.github.io/python-genai/\n\n-----\n\nGoogle Gen AI Python SDK provides an interface for developers to integrate\nGoogle's generative models into their Python applications. It supports the\n[Gemini Developer API](https://ai.google.dev/gemini-api/docs) and\n[Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview)\nAPIs.\n\n## Installation\n\n```sh\npip install google-genai\n```\n\n## Imports\n\n```python\nfrom google import genai\nfrom google.genai import types\n```\n\n## Create a client\n\nPlease run one of the following code blocks to create a client for\ndifferent services ([Gemini Developer API](https://ai.google.dev/gemini-api/docs) or [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview)).\n\n```python\nfrom google import genai\n\n# Only run this block for Gemini Developer API\nclient = genai.Client(api_key='GEMINI_API_KEY')\n```\n\n```python\nfrom google import genai\n\n# Only run this block for Vertex AI API\nclient = genai.Client(\n vertexai=True, project='your-project-id', location='us-central1'\n)\n```\n\n**(Optional) Using environment variables:**\n\nYou can create a client by configuring the necessary environment variables.\nConfiguration setup instructions depends on whether you're using the Gemini\nDeveloper API or the Gemini API in Vertex AI.\n\n**Gemini Developer API:** Set the `GEMINI_API_KEY` or `GOOGLE_API_KEY`.\nIt will automatically be picked up by the client. It's recommended that you\nset only one of those variables, but if both are set, `GOOGLE_API_KEY` takes\nprecedence.\n\n```bash\nexport GEMINI_API_KEY='your-api-key'\n```\n\n**Gemini API on Vertex AI:** Set `GOOGLE_GENAI_USE_VERTEXAI`,\n`GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION`, as shown below:\n\n```bash\nexport GOOGLE_GENAI_USE_VERTEXAI=true\nexport GOOGLE_CLOUD_PROJECT='your-project-id'\nexport GOOGLE_CLOUD_LOCATION='us-central1'\n```\n\n```python\nfrom google import genai\n\nclient = genai.Client()\n```\n\n### API Selection\n\nBy default, the SDK uses the beta API endpoints provided by Google to support\npreview features in the APIs. The stable API endpoints can be selected by\nsetting the API version to `v1`.\n\nTo set the API version use `http_options`. For example, to set the API version\nto `v1` for Vertex AI:\n\n```python\nfrom google import genai\nfrom google.genai import types\n\nclient = genai.Client(\n vertexai=True,\n project='your-project-id',\n location='us-central1',\n http_options=types.HttpOptions(api_version='v1')\n)\n```\n\nTo set the API version to `v1alpha` for the Gemini Developer API:\n\n```python\nfrom google import genai\nfrom google.genai import types\n\nclient = genai.Client(\n api_key='GEMINI_API_KEY',\n http_options=types.HttpOptions(api_version='v1alpha')\n)\n```\n\n### Faster async client option: Aiohttp\n\nBy default we use httpx for both sync and async client implementations. In order\nto have faster performance, you may install `google-genai[aiohttp]`. 
In Gen AI\nSDK we configure `trust_env=True` to match with the default behavior of httpx.\nAdditional args of `aiohttp.ClientSession.request()` ([see _RequestOptions args](https://github.com/aio-libs/aiohttp/blob/v3.12.13/aiohttp/client.py#L170)) can be passed\nthrough the following way:\n\n```python\n\nhttp_options = types.HttpOptions(\n async_client_args={'cookies': ..., 'ssl': ...},\n)\n\nclient=Client(..., http_options=http_options)\n```\n\n### Proxy\n\nBoth httpx and aiohttp libraries use `urllib.request.getproxies` from\nenvironment variables. Before client initialization, you may set proxy (and\noptional SSL_CERT_FILE) by setting the environment variables:\n\n\n```bash\nexport HTTPS_PROXY='http://username:password@proxy_uri:port'\nexport SSL_CERT_FILE='client.pem'\n```\n\nIf you need `socks5` proxy, httpx [supports](https://www.python-httpx.org/advanced/proxies/#socks) `socks5` proxy if you pass it via\nargs to `httpx.Client()`. You may install `httpx[socks]` to use it.\nThen, you can pass it through the following way:\n\n```python\n\nhttp_options = types.HttpOptions(\n client_args={'proxy': 'socks5://user:pass@host:port'},\n async_client_args={'proxy': 'socks5://user:pass@host:port'},,\n)\n\nclient=Client(..., http_options=http_options)\n```\n\n## Types\n\nParameter types can be specified as either dictionaries(`TypedDict`) or\n[Pydantic Models](https://pydantic.readthedocs.io/en/stable/model.html).\nPydantic model types are available in the `types` module.\n\n## Models\n\nThe `client.models` module exposes model inferencing and model getters.\nSee the 'Create a client' section above to initialize a client.\n\n### Generate Content\n\n#### with text content\n\n```python\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001', contents='Why is the sky blue?'\n)\nprint(response.text)\n```\n\n#### with uploaded file (Gemini Developer API only)\ndownload the file in console.\n\n```sh\n!wget -q https://storage.googleapis.com/generativeai-downloads/data/a11.txt\n```\n\npython code.\n\n```python\nfile = client.files.upload(file='a11.txt')\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents=['Could you summarize this file?', file]\n)\nprint(response.text)\n```\n\n#### How to structure `contents` argument for `generate_content`\nThe SDK always converts the inputs to the `contents` argument into\n`list[types.Content]`.\nThe following shows some common ways to provide your inputs.\n\n##### Provide a `list[types.Content]`\nThis is the canonical way to provide contents, SDK will not do any conversion.\n\n##### Provide a `types.Content` instance\n\n```python\nfrom google.genai import types\n\ncontents = types.Content(\n role='user',\n parts=[types.Part.from_text(text='Why is the sky blue?')]\n)\n```\n\nSDK converts this to\n\n```python\n[\n types.Content(\n role='user',\n parts=[types.Part.from_text(text='Why is the sky blue?')]\n )\n]\n```\n\n##### Provide a string\n\n```python\ncontents='Why is the sky blue?'\n```\n\nThe SDK will assume this is a text part, and it converts this into the following:\n\n```python\n[\n types.UserContent(\n parts=[\n types.Part.from_text(text='Why is the sky blue?')\n ]\n )\n]\n```\n\nWhere a `types.UserContent` is a subclass of `types.Content`, it sets the\n`role` field to be `user`.\n\n##### Provide a list of string\n\n```python\ncontents=['Why is the sky blue?', 'Why is the cloud white?']\n```\n\nThe SDK assumes these are 2 text parts, it converts this into a single content,\nlike the 
following:\n\n```python\n[\n types.UserContent(\n parts=[\n types.Part.from_text(text='Why is the sky blue?'),\n types.Part.from_text(text='Why is the cloud white?'),\n ]\n )\n]\n```\n\nWhere a `types.UserContent` is a subclass of `types.Content`, the\n`role` field in `types.UserContent` is fixed to be `user`.\n\n##### Provide a function call part\n\n```python\nfrom google.genai import types\n\ncontents = types.Part.from_function_call(\n name='get_weather_by_location',\n args={'location': 'Boston'}\n)\n```\n\nThe SDK converts a function call part to a content with a `model` role:\n\n```python\n[\n types.ModelContent(\n parts=[\n types.Part.from_function_call(\n name='get_weather_by_location',\n args={'location': 'Boston'}\n )\n ]\n )\n]\n```\n\nWhere a `types.ModelContent` is a subclass of `types.Content`, the\n`role` field in `types.ModelContent` is fixed to be `model`.\n\n##### Provide a list of function call parts\n\n```python\nfrom google.genai import types\n\ncontents = [\n types.Part.from_function_call(\n name='get_weather_by_location',\n args={'location': 'Boston'}\n ),\n types.Part.from_function_call(\n name='get_weather_by_location',\n args={'location': 'New York'}\n ),\n]\n```\n\nThe SDK converts a list of function call parts to the a content with a `model` role:\n\n```python\n[\n types.ModelContent(\n parts=[\n types.Part.from_function_call(\n name='get_weather_by_location',\n args={'location': 'Boston'}\n ),\n types.Part.from_function_call(\n name='get_weather_by_location',\n args={'location': 'New York'}\n )\n ]\n )\n]\n```\n\nWhere a `types.ModelContent` is a subclass of `types.Content`, the\n`role` field in `types.ModelContent` is fixed to be `model`.\n\n##### Provide a non function call part\n\n```python\nfrom google.genai import types\n\ncontents = types.Part.from_uri(\n file_uri: 'gs://generativeai-downloads/images/scones.jpg',\n mime_type: 'image/jpeg',\n)\n```\n\nThe SDK converts all non function call parts into a content with a `user` role.\n\n```python\n[\n types.UserContent(parts=[\n types.Part.from_uri(\n file_uri: 'gs://generativeai-downloads/images/scones.jpg',\n mime_type: 'image/jpeg',\n )\n ])\n]\n```\n\n##### Provide a list of non function call parts\n\n```python\nfrom google.genai import types\n\ncontents = [\n types.Part.from_text('What is this image about?'),\n types.Part.from_uri(\n file_uri: 'gs://generativeai-downloads/images/scones.jpg',\n mime_type: 'image/jpeg',\n )\n]\n```\n\nThe SDK will convert the list of parts into a content with a `user` role\n\n```python\n[\n types.UserContent(\n parts=[\n types.Part.from_text('What is this image about?'),\n types.Part.from_uri(\n file_uri: 'gs://generativeai-downloads/images/scones.jpg',\n mime_type: 'image/jpeg',\n )\n ]\n )\n]\n```\n\n##### Mix types in contents\n\nYou can also provide a list of `types.ContentUnion`. The SDK leaves items of\n`types.Content` as is, it groups consecutive non function call parts into a\nsingle `types.UserContent`, and it groups consecutive function call parts into\na single `types.ModelContent`.\n\nIf you put a list within a list, the inner list can only contain\n`types.PartUnion` items. The SDK will convert the inner list into a single\n`types.UserContent`.\n\n### System Instructions and Other Configs\n\nThe output of the model can be influenced by several optional settings\navailable in generate_content's config parameter. For example, increasing\n`max_output_tokens` is essential for longer model responses. 
To make a model more\ndeterministic, lowering the `temperature` parameter reduces randomness, with\nvalues near 0 minimizing variability. Capabilities and parameter defaults for\neach model is shown in the\n[Vertex AI docs](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash)\nand [Gemini API docs](https://ai.google.dev/gemini-api/docs/models) respectively.\n\n```python\nfrom google.genai import types\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents='high',\n config=types.GenerateContentConfig(\n system_instruction='I say high, you say low',\n max_output_tokens=3,\n temperature=0.3,\n ),\n)\nprint(response.text)\n```\n\n### Typed Config\n\nAll API methods support Pydantic types for parameters as well as\ndictionaries. You can get the type from `google.genai.types`.\n\n```python\nfrom google.genai import types\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents=types.Part.from_text(text='Why is the sky blue?'),\n config=types.GenerateContentConfig(\n temperature=0,\n top_p=0.95,\n top_k=20,\n candidate_count=1,\n seed=5,\n max_output_tokens=100,\n stop_sequences=['STOP!'],\n presence_penalty=0.0,\n frequency_penalty=0.0,\n ),\n)\n\nprint(response.text)\n```\n\n### List Base Models\n\nTo retrieve tuned models, see [list tuned models](#list-tuned-models).\n\n```python\nfor model in client.models.list():\n print(model)\n```\n\n```python\npager = client.models.list(config={'page_size': 10})\nprint(pager.page_size)\nprint(pager[0])\npager.next_page()\nprint(pager[0])\n```\n\n#### List Base Models (Asynchronous)\n\n```python\nasync for job in await client.aio.models.list():\n print(job)\n```\n\n```python\nasync_pager = await client.aio.models.list(config={'page_size': 10})\nprint(async_pager.page_size)\nprint(async_pager[0])\nawait async_pager.next_page()\nprint(async_pager[0])\n```\n\n### Safety Settings\n\n```python\nfrom google.genai import types\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents='Say something bad.',\n config=types.GenerateContentConfig(\n safety_settings=[\n types.SafetySetting(\n category='HARM_CATEGORY_HATE_SPEECH',\n threshold='BLOCK_ONLY_HIGH',\n )\n ]\n ),\n)\nprint(response.text)\n```\n\n### Function Calling\n\n#### Automatic Python function Support\n\nYou can pass a Python function directly and it will be automatically\ncalled and responded by default.\n\n```python\nfrom google.genai import types\n\ndef get_current_weather(location: str) -> str:\n \"\"\"Returns the current weather.\n\n Args:\n location: The city and state, e.g. 
San Francisco, CA\n \"\"\"\n return 'sunny'\n\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents='What is the weather like in Boston?',\n config=types.GenerateContentConfig(tools=[get_current_weather]),\n)\n\nprint(response.text)\n```\n#### Disabling automatic function calling\nIf you pass in a python function as a tool directly, and do not want\nautomatic function calling, you can disable automatic function calling\nas follows:\n\n```python\nfrom google.genai import types\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents='What is the weather like in Boston?',\n config=types.GenerateContentConfig(\n tools=[get_current_weather],\n automatic_function_calling=types.AutomaticFunctionCallingConfig(\n disable=True\n ),\n ),\n)\n```\n\nWith automatic function calling disabled, you will get a list of function call\nparts in the response:\n\n```python\nfunction_calls: Optional[List[types.FunctionCall]] = response.function_calls\n```\n\n#### Manually declare and invoke a function for function calling\n\nIf you don't want to use the automatic function support, you can manually\ndeclare the function and invoke it.\n\nThe following example shows how to declare a function and pass it as a tool.\nThen you will receive a function call part in the response.\n\n```python\nfrom google.genai import types\n\nfunction = types.FunctionDeclaration(\n name='get_current_weather',\n description='Get the current weather in a given location',\n parameters=types.Schema(\n type='OBJECT',\n properties={\n 'location': types.Schema(\n type='STRING',\n description='The city and state, e.g. San Francisco, CA',\n ),\n },\n required=['location'],\n ),\n)\n\ntool = types.Tool(function_declarations=[function])\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents='What is the weather like in Boston?',\n config=types.GenerateContentConfig(tools=[tool]),\n)\n\nprint(response.function_calls[0])\n```\n\nAfter you receive the function call part from the model, you can invoke the function\nand get the function response. And then you can pass the function response to\nthe model.\nThe following example shows how to do it for a simple function invocation.\n\n```python\nfrom google.genai import types\n\nuser_prompt_content = types.Content(\n role='user',\n parts=[types.Part.from_text(text='What is the weather like in Boston?')],\n)\nfunction_call_part = response.function_calls[0]\nfunction_call_content = response.candidates[0].content\n\n\ntry:\n function_result = get_current_weather(\n **function_call_part.function_call.args\n )\n function_response = {'result': function_result}\nexcept (\n Exception\n) as e: # instead of raising the exception, you can let the model handle it\n function_response = {'error': str(e)}\n\n\nfunction_response_part = types.Part.from_function_response(\n name=function_call_part.name,\n response=function_response,\n)\nfunction_response_content = types.Content(\n role='tool', parts=[function_response_part]\n)\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents=[\n user_prompt_content,\n function_call_content,\n function_response_content,\n ],\n config=types.GenerateContentConfig(\n tools=[tool],\n ),\n)\n\nprint(response.text)\n```\n\n#### Function calling with `ANY` tools config mode\n\nIf you configure function calling mode to be `ANY`, then the model will always\nreturn function call parts. 
If you also pass a python function as a tool, by\ndefault the SDK will perform automatic function calling until the remote calls exceed the\nmaximum remote call for automatic function calling (default to 10 times).\n\nIf you'd like to disable automatic function calling in `ANY` mode:\n\n```python\nfrom google.genai import types\n\ndef get_current_weather(location: str) -> str:\n \"\"\"Returns the current weather.\n\n Args:\n location: The city and state, e.g. San Francisco, CA\n \"\"\"\n return \"sunny\"\n\nresponse = client.models.generate_content(\n model=\"gemini-2.0-flash-001\",\n contents=\"What is the weather like in Boston?\",\n config=types.GenerateContentConfig(\n tools=[get_current_weather],\n automatic_function_calling=types.AutomaticFunctionCallingConfig(\n disable=True\n ),\n tool_config=types.ToolConfig(\n function_calling_config=types.FunctionCallingConfig(mode='ANY')\n ),\n ),\n)\n```\n\nIf you'd like to set `x` number of automatic function call turns, you can\nconfigure the maximum remote calls to be `x + 1`.\nAssuming you prefer `1` turn for automatic function calling.\n\n```python\nfrom google.genai import types\n\ndef get_current_weather(location: str) -> str:\n \"\"\"Returns the current weather.\n\n Args:\n location: The city and state, e.g. San Francisco, CA\n \"\"\"\n return \"sunny\"\n\nresponse = client.models.generate_content(\n model=\"gemini-2.0-flash-001\",\n contents=\"What is the weather like in Boston?\",\n config=types.GenerateContentConfig(\n tools=[get_current_weather],\n automatic_function_calling=types.AutomaticFunctionCallingConfig(\n maximum_remote_calls=2\n ),\n tool_config=types.ToolConfig(\n function_calling_config=types.FunctionCallingConfig(mode='ANY')\n ),\n ),\n)\n```\n\n#### Model Context Protocol (MCP) support (experimental)\n\nBuilt-in [MCP](https://modelcontextprotocol.io/introduction) support is an\nexperimental feature. You can pass a local MCP server as a tool directly.\n\n```python\nimport os\nimport asyncio\nfrom datetime import datetime\nfrom mcp import ClientSession, StdioServerParameters\nfrom mcp.client.stdio import stdio_client\nfrom google import genai\n\nclient = genai.Client()\n\n# Create server parameters for stdio connection\nserver_params = StdioServerParameters(\n command=\"npx\", # Executable\n args=[\"-y\", \"@philschmid/weather-mcp\"], # MCP Server\n env=None, # Optional environment variables\n)\n\nasync def run():\n async with stdio_client(server_params) as (read, write):\n async with ClientSession(read, write) as session:\n # Prompt to get the weather for the current day in London.\n prompt = f\"What is the weather in London in {datetime.now().strftime('%Y-%m-%d')}?\"\n\n # Initialize the connection between client and server\n await session.initialize()\n\n # Send request to the model with MCP function declarations\n response = await client.aio.models.generate_content(\n model=\"gemini-2.5-flash\",\n contents=prompt,\n config=genai.types.GenerateContentConfig(\n temperature=0,\n tools=[session], # uses the session, will automatically call the tool using automatic function calling\n ),\n )\n print(response.text)\n\n# Start the asyncio event loop and run the main function\nasyncio.run(run())\n```\n\n### JSON Response Schema\n\nHowever you define your schema, don't duplicate it in your input prompt,\nincluding by giving examples of expected JSON output. 
If you do, the generated\noutput might be lower in quality.\n\n#### Pydantic Model Schema support\n\nSchemas can be provided as Pydantic Models.\n\n```python\nfrom pydantic import BaseModel\nfrom google.genai import types\n\n\nclass CountryInfo(BaseModel):\n name: str\n population: int\n capital: str\n continent: str\n gdp: int\n official_language: str\n total_area_sq_mi: int\n\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents='Give me information for the United States.',\n config=types.GenerateContentConfig(\n response_mime_type='application/json',\n response_schema=CountryInfo,\n ),\n)\nprint(response.text)\n```\n\n```python\nfrom google.genai import types\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents='Give me information for the United States.',\n config=types.GenerateContentConfig(\n response_mime_type='application/json',\n response_schema={\n 'required': [\n 'name',\n 'population',\n 'capital',\n 'continent',\n 'gdp',\n 'official_language',\n 'total_area_sq_mi',\n ],\n 'properties': {\n 'name': {'type': 'STRING'},\n 'population': {'type': 'INTEGER'},\n 'capital': {'type': 'STRING'},\n 'continent': {'type': 'STRING'},\n 'gdp': {'type': 'INTEGER'},\n 'official_language': {'type': 'STRING'},\n 'total_area_sq_mi': {'type': 'INTEGER'},\n },\n 'type': 'OBJECT',\n },\n ),\n)\nprint(response.text)\n```\n\n### Enum Response Schema\n\n#### Text Response\n\nYou can set response_mime_type to 'text/x.enum' to return one of those enum\nvalues as the response.\n\n```python\nclass InstrumentEnum(Enum):\n PERCUSSION = 'Percussion'\n STRING = 'String'\n WOODWIND = 'Woodwind'\n BRASS = 'Brass'\n KEYBOARD = 'Keyboard'\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents='What instrument plays multiple notes at once?',\n config={\n 'response_mime_type': 'text/x.enum',\n 'response_schema': InstrumentEnum,\n },\n )\nprint(response.text)\n```\n\n#### JSON Response\n\nYou can also set response_mime_type to 'application/json', the response will be\nidentical but in quotes.\n\n```python\nfrom enum import Enum\n\nclass InstrumentEnum(Enum):\n PERCUSSION = 'Percussion'\n STRING = 'String'\n WOODWIND = 'Woodwind'\n BRASS = 'Brass'\n KEYBOARD = 'Keyboard'\n\nresponse = client.models.generate_content(\n model='gemini-2.0-flash-001',\n contents='What instrument plays multiple notes at once?',\n config={\n 'response_mime_type': 'application/json',\n 'response_schema': InstrumentEnum,\n },\n )\nprint(response.text)\n```\n\n### Generate Content (Synchronous Streaming)\n\nGenerate content in a streaming format so that the model outputs streams back\nto you, rather than being returned as one chunk.\n\n#### Streaming for text content\n\n```python\nfor chunk in client.models.generate_content_stream(\n model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'\n):\n print(chunk.text, end='')\n```\n\n#### Streaming for image content\n\nIf your image is stored in [Google Cloud Storage](https://cloud.google.com/storage),\nyou can use the `from_uri` class method to create a `Part` object.\n\n```python\nfrom google.genai import types\n\nfor chunk in client.models.generate_content_stream(\n model='gemini-2.0-flash-001',\n contents=[\n 'What is this image about?',\n types.Part.from_uri(\n file_uri='gs://generativeai-downloads/images/scones.jpg',\n mime_type='image/jpeg',\n ),\n ],\n):\n print(chunk.text, end='')\n```\n\nIf your image is stored in your local file system, you can read it in as bytes\ndata 
If your image is stored in your local file system, you can read it in as bytes
and use the `from_bytes` class method to create a `Part` object.

```python
from google.genai import types

YOUR_IMAGE_PATH = 'your_image_path'
YOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'
with open(YOUR_IMAGE_PATH, 'rb') as f:
    image_bytes = f.read()

for chunk in client.models.generate_content_stream(
    model='gemini-2.0-flash-001',
    contents=[
        'What is this image about?',
        types.Part.from_bytes(data=image_bytes, mime_type=YOUR_IMAGE_MIME_TYPE),
    ],
):
    print(chunk.text, end='')
```

### Generate Content (Asynchronous Non-Streaming)

`client.aio` exposes the analogous [`async` methods](https://docs.python.org/3/library/asyncio.html)
for everything available on `client`; this applies to all modules. For
example, `client.aio.models.generate_content` is the `async` version of
`client.models.generate_content`.

```python
response = await client.aio.models.generate_content(
    model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
)

print(response.text)
```

### Generate Content (Asynchronous Streaming)

```python
async for chunk in await client.aio.models.generate_content_stream(
    model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
):
    print(chunk.text, end='')
```

### Count Tokens and Compute Tokens

```python
response = client.models.count_tokens(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
)
print(response)
```

#### Compute Tokens

Compute tokens is only supported in Vertex AI.

```python
response = client.models.compute_tokens(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
)
print(response)
```

#### Async

The async counterpart of `count_tokens`:

```python
response = await client.aio.models.count_tokens(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
)
print(response)
```

### Embed Content

```python
response = client.models.embed_content(
    model='text-embedding-004',
    contents='why is the sky blue?',
)
print(response)
```

```python
from google.genai import types

# Multiple contents with config
response = client.models.embed_content(
    model='text-embedding-004',
    contents=['why is the sky blue?', 'What is your age?'],
    config=types.EmbedContentConfig(output_dimensionality=10),
)

print(response)
```

### Imagen

#### Generate Images

Support for generating images in the Gemini Developer API is behind an
allowlist.

```python
from google.genai import types

# Generate an image
response1 = client.models.generate_images(
    model='imagen-3.0-generate-002',
    prompt='An umbrella in the foreground, and a rainy night sky in the background',
    config=types.GenerateImagesConfig(
        number_of_images=1,
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response1.generated_images[0].image.show()
```

#### Upscale Image

Image upscaling is only supported in Vertex AI.

```python
from google.genai import types

# Upscale the generated image from above
response2 = client.models.upscale_image(
    model='imagen-3.0-generate-001',
    image=response1.generated_images[0].image,
    upscale_factor='x2',
    config=types.UpscaleImageConfig(
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response2.generated_images[0].image.show()
```
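`show()` is handy in notebooks, but you will often want to persist results. A
minimal sketch for writing the upscaled image from `response2` to disk,
assuming the backend returned inline bytes in the `image_bytes` field; the
output filename is illustrative:

```python
# Persist the upscaled image from response2 to a local file
upscaled = response2.generated_images[0].image
if upscaled.image_bytes:
    with open('upscaled.jpg', 'wb') as f:
        f.write(upscaled.image_bytes)  # raw JPEG bytes, per output_mime_type above
```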
#### Edit Image

Image editing uses a different model from generation and upscaling, and is
only supported in Vertex AI.

```python
from google.genai import types
from google.genai.types import RawReferenceImage, MaskReferenceImage

# Edit the generated image from above
raw_ref_image = RawReferenceImage(
    reference_id=1,
    reference_image=response1.generated_images[0].image,
)

# The model computes a mask of the background
mask_ref_image = MaskReferenceImage(
    reference_id=2,
    config=types.MaskReferenceConfig(
        mask_mode='MASK_MODE_BACKGROUND',
        mask_dilation=0,
    ),
)

response3 = client.models.edit_image(
    model='imagen-3.0-capability-001',
    prompt='Sunlight and clear sky',
    reference_images=[raw_ref_image, mask_ref_image],
    config=types.EditImageConfig(
        edit_mode='EDIT_MODE_INPAINT_INSERTION',
        number_of_images=1,
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response3.generated_images[0].image.show()
```

### Veo

Support for generating videos is in public preview.

#### Generate Videos (Text to Video)

```python
import time

from google.genai import types

# Create the operation
operation = client.models.generate_videos(
    model='veo-2.0-generate-001',
    prompt='A neon hologram of a cat driving at top speed',
    config=types.GenerateVideosConfig(
        number_of_videos=1,
        duration_seconds=5,
        enhance_prompt=True,
    ),
)

# Poll the operation until it completes
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0].video
video.show()
```

#### Generate Videos (Image to Video)

```python
import time

from google.genai import types

# Read a local image (uses mimetypes.guess_type to infer the mime type)
image = types.Image.from_file('local/path/file.png')

# Create the operation
operation = client.models.generate_videos(
    model='veo-2.0-generate-001',
    # The prompt is optional if an image is provided
    prompt='Night sky',
    image=image,
    config=types.GenerateVideosConfig(
        number_of_videos=1,
        duration_seconds=5,
        enhance_prompt=True,
        # An Image can also be passed as last_frame for frame interpolation
    ),
)

# Poll the operation until it completes
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0].video
video.show()
```

#### Generate Videos (Video to Video)

Currently, only Vertex AI supports video-to-video generation (video
extension), and the input video must be stored in GCS.

```python
import time

from google.genai import types

# Create the operation; the input video is referenced by its GCS URI
operation = client.models.generate_videos(
    model='veo-2.0-generate-001',
    # The prompt is optional if a video is provided
    prompt='Night sky',
    video=types.Video(
        uri='gs://bucket-name/inputs/videos/cat_driving.mp4',
    ),
    config=types.GenerateVideosConfig(
        number_of_videos=1,
        duration_seconds=5,
        enhance_prompt=True,
    ),
)

# Poll the operation until it completes
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0].video
video.show()
```
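As with images, `show()` displays the result but does not persist it. A
minimal sketch for saving a generated video locally, assuming the operation
above completed and the backend returned inline bytes in `video_bytes` (the
video may instead only be available at a GCS `uri`); the output filename is
illustrative:

```python
# Persist the generated video, or report where it lives remotely
video = operation.response.generated_videos[0].video
if video.video_bytes:
    with open('generated_video.mp4', 'wb') as f:
        f.write(video.video_bytes)
else:
    print('Video stored remotely at:', video.uri)
```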
## Chats

Create a chat session to start a multi-turn conversation with the model. Then
call the `chat.send_message` method multiple times within the same session so
that the model can reflect on its previous responses (i.e., engage in an
ongoing conversation). See the 'Create a client' section above to initialize a
client.

### Send Message (Synchronous Non-Streaming)

```python
chat = client.chats.create(model='gemini-2.0-flash-001')
response = chat.send_message('tell me a story')
print(response.text)
response = chat.send_message('summarize the story you told me in 1 sentence')
print(response.text)
```

### Send Message (Synchronous Streaming)

```python
chat = client.chats.create(model='gemini-2.0-flash-001')
for chunk in chat.send_message_stream('tell me a story'):
    print(chunk.text)
```

### Send Message (Asynchronous Non-Streaming)

```python
chat = client.aio.chats.create(model='gemini-2.0-flash-001')
response = await chat.send_message('tell me a story')
print(response.text)
```

### Send Message (Asynchronous Streaming)

```python
chat = client.aio.chats.create(model='gemini-2.0-flash-001')
async for chunk in await chat.send_message_stream('tell me a story'):
    print(chunk.text)
```

## Files

Files are only supported in the Gemini Developer API. See the 'Create a
client' section above to initialize a client. First, download the sample PDFs
used in the examples below:

```cmd
!gsutil cp gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf .
!gsutil cp gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf .
```

### Upload

```python
file1 = client.files.upload(file='2312.11805v3.pdf')
file2 = client.files.upload(file='2403.05530.pdf')

print(file1)
print(file2)
```

### Get

```python
file1 = client.files.upload(file='2312.11805v3.pdf')
file_info = client.files.get(name=file1.name)
```

### Delete

```python
file3 = client.files.upload(file='2312.11805v3.pdf')

client.files.delete(name=file3.name)
```

## Caches

`client.caches` contains the control-plane APIs for cached content. See the
'Create a client' section above to initialize a client.

### Create

```python
from google.genai import types

if client.vertexai:
    file_uris = [
        'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',
        'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',
    ]
else:
    file_uris = [file1.uri, file2.uri]

cached_content = client.caches.create(
    model='gemini-2.0-flash-001',
    config=types.CreateCachedContentConfig(
        contents=[
            types.Content(
                role='user',
                parts=[
                    types.Part.from_uri(
                        file_uri=file_uris[0], mime_type='application/pdf'
                    ),
                    types.Part.from_uri(
                        file_uri=file_uris[1],
                        mime_type='application/pdf',
                    ),
                ],
            )
        ],
        system_instruction='What is the sum of the two pdfs?',
        display_name='test cache',
        ttl='3600s',
    ),
)
```

### Get

```python
cached_content = client.caches.get(name=cached_content.name)
```

### Generate Content with Caches

```python
from google.genai import types

response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Summarize the pdfs',
    config=types.GenerateContentConfig(
        cached_content=cached_content.name,
    ),
)
print(response.text)
```
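Cached content expires once its `ttl` elapses, but you can also manage it
explicitly. A minimal sketch, assuming the `cached_content` created above and
that `types.UpdateCachedContentConfig` accepts a `ttl` field the way
`CreateCachedContentConfig` does:

```python
from google.genai import types

# Extend the cache's lifetime by another hour (assumes ttl is updatable)
cached_content = client.caches.update(
    name=cached_content.name,
    config=types.UpdateCachedContentConfig(ttl='3600s'),
)

# Delete the cache explicitly instead of waiting for the TTL to elapse
client.caches.delete(name=cached_content.name)
```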
## Tunings

`client.tunings` contains the tuning job APIs and supports supervised
fine-tuning through `tune`. See the 'Create a client' section above to
initialize a client.

### Tune

- Vertex AI supports tuning from a GCS source or from a Vertex Multimodal Dataset
- The Gemini Developer API supports tuning from inline examples

```python
from google.genai import types

if client.vertexai:
    model = 'gemini-2.0-flash-001'
    training_dataset = types.TuningDataset(
        # or gcs_uri=my_vertex_multimodal_dataset.resource_name
        gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',
    )
else:
    model = 'models/gemini-2.0-flash-001'
    training_dataset = types.TuningDataset(
        examples=[
            types.TuningExample(
                text_input=f'Input text {i}',
                output=f'Output text {i}',
            )
            for i in range(5)
        ],
    )
```

```python
from google.genai import types

tuning_job = client.tunings.tune(
    base_model=model,
    training_dataset=training_dataset,
    config=types.CreateTuningJobConfig(
        epoch_count=1, tuned_model_display_name='test_dataset_examples model'
    ),
)
print(tuning_job)
```

### Get Tuning Job

```python
tuning_job = client.tunings.get(name=tuning_job.name)
print(tuning_job)
```

```python
import time

running_states = set(
    [
        'JOB_STATE_PENDING',
        'JOB_STATE_RUNNING',
    ]
)

while tuning_job.state in running_states:
    print(tuning_job.state)
    tuning_job = client.tunings.get(name=tuning_job.name)
    time.sleep(10)
```

#### Use Tuned Model

```python
response = client.models.generate_content(
    model=tuning_job.tuned_model.endpoint,
    contents='why is the sky blue?',
)

print(response.text)
```

### Get Tuned Model

```python
tuned_model = client.models.get(model=tuning_job.tuned_model.model)
print(tuned_model)
```

### List Tuned Models

To retrieve base models, see [list base models](#list-base-models).

```python
for model in client.models.list(config={'page_size': 10, 'query_base': False}):
    print(model)
```

```python
pager = client.models.list(config={'page_size': 10, 'query_base': False})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```

#### Async

```python
async for model in await client.aio.models.list(config={'page_size': 10, 'query_base': False}):
    print(model)
```

```python
async_pager = await client.aio.models.list(config={'page_size': 10, 'query_base': False})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```

### Update Tuned Model

```python
from google.genai import types

model = pager[0]

model = client.models.update(
    model=model.name,
    config=types.UpdateModelConfig(
        display_name='my tuned model', description='my tuned model description'
    ),
)

print(model)
```

### List Tuning Jobs

```python
for job in client.tunings.list(config={'page_size': 10}):
    print(job)
```

```python
pager = client.tunings.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```

#### Async

```python
async for job in await client.aio.tunings.list(config={'page_size': 10}):
    print(job)
```

```python
async_pager = await client.aio.tunings.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```
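The polling loops used for tuning above (and for batch jobs below) all share
the same shape, so a small helper keeps them in one place. A minimal sketch;
`wait_for_job` is a hypothetical convenience, not an SDK function, and relies
only on the `.state` and `.name` attributes used in the examples above:

```python
import time


def wait_for_job(get_fn, job, running_states, poll_seconds=10):
    """Re-fetches `job` via `get_fn` until its state leaves `running_states`."""
    while job.state in running_states:
        print(job.state)
        time.sleep(poll_seconds)
        job = get_fn(name=job.name)
    return job


# Usage with the tuning job from above
tuning_job = wait_for_job(
    client.tunings.get,
    tuning_job,
    running_states={'JOB_STATE_PENDING', 'JOB_STATE_RUNNING'},
)
```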
## Batch Prediction

Batch prediction is only supported in Vertex AI. See the 'Create a client'
section above to initialize a client.

### Create

```python
# Specify only the model and source file; the destination and job display
# name will be auto-populated
job = client.batches.create(
    model='gemini-2.0-flash-001',
    src='bq://my-project.my-dataset.my-table',
)

job
```

```python
# Get a job by name
job = client.batches.get(name=job.name)

job.state
```

```python
import time

completed_states = set(
    [
        'JOB_STATE_SUCCEEDED',
        'JOB_STATE_FAILED',
        'JOB_STATE_CANCELLED',
        'JOB_STATE_PAUSED',
    ]
)

while job.state not in completed_states:
    print(job.state)
    job = client.batches.get(name=job.name)
    time.sleep(30)

job
```

### List

```python
from google.genai import types

for job in client.batches.list(config=types.ListBatchJobsConfig(page_size=10)):
    print(job)
```

```python
pager = client.batches.list(config=types.ListBatchJobsConfig(page_size=10))
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```

#### Async

```python
async for job in await client.aio.batches.list(
    config=types.ListBatchJobsConfig(page_size=10)
):
    print(job)
```

```python
async_pager = await client.aio.batches.list(
    config=types.ListBatchJobsConfig(page_size=10)
)
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```

### Delete

```python
# Delete the job resource
delete_job = client.batches.delete(name=job.name)

delete_job
```

## Error Handling

To handle errors raised by the model service, the SDK provides the
[APIError](https://github.com/googleapis/python-genai/blob/main/google/genai/errors.py)
class.

```python
from google.genai import errors

try:
    client.models.generate_content(
        model="invalid-model-name",
        contents="What is your name?",
    )
except errors.APIError as e:
    print(e.code)  # 404
    print(e.message)
```

## Extra Request Body

The `extra_body` field in `HttpOptions` accepts a dictionary of additional
JSON properties to include in the request body. This can be used to access new
or experimental backend features that are not yet formally supported in the
SDK. The structure of the dictionary must match the backend API's request
structure.

- Vertex AI backend API docs: https://cloud.google.com/vertex-ai/docs/reference/rest
- Gemini API backend API docs: https://ai.google.dev/api/rest

```python
from google.genai import types

# Assumes get_current_weather is a function tool defined elsewhere
response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="What is the weather in Boston? and how about Sunnyvale?",
    config=types.GenerateContentConfig(
        tools=[get_current_weather],
        http_options=types.HttpOptions(
            extra_body={
                'tool_config': {
                    'function_calling_config': {'mode': 'COMPOSITIONAL'}
                }
            }
        ),
    ),
)
```
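Returning to the error handling above: `APIError.code` makes it easy to
distinguish transient failures, which are worth retrying, from permanent ones.
A minimal sketch of a retry wrapper; `generate_with_retry` is a hypothetical
helper, and the choice of retryable codes and the backoff schedule are
assumptions to adapt to your workload:

```python
import time

from google.genai import errors


def generate_with_retry(client, max_attempts=3, **kwargs):
    """Calls generate_content, retrying rate-limit/unavailable errors with backoff."""
    for attempt in range(max_attempts):
        try:
            return client.models.generate_content(**kwargs)
        except errors.APIError as e:
            # 429 = rate limited, 503 = temporarily unavailable (assumed retryable)
            if e.code in (429, 503) and attempt < max_attempts - 1:
                time.sleep(2 ** attempt)  # simple exponential backoff: 1s, 2s, ...
                continue
            raise


response = generate_with_retry(
    client,
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
)
print(response.text)
```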
"bugtrack_url": null,
"license": "Apache-2.0",
"summary": "GenAI Python SDK",
"version": "1.25.0",
"project_urls": {
"Homepage": "https://github.com/googleapis/python-genai"
},
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "f6ec149f3d49b56cf848142071772aabb1c290b535bd9b5327a5dfccf1d00332",
"md5": "fc510ab2ec7e35c15c70202796aa9d62",
"sha256": "fb5cee79b9a0a1b2afd5cfdf279099ecebd186551eefcaa6ec0c6016244e6138"
},
"downloads": -1,
"filename": "google_genai-1.25.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "fc510ab2ec7e35c15c70202796aa9d62",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 226847,
"upload_time": "2025-07-09T20:53:46",
"upload_time_iso_8601": "2025-07-09T20:53:46.532417Z",
"url": "https://files.pythonhosted.org/packages/f6/ec/149f3d49b56cf848142071772aabb1c290b535bd9b5327a5dfccf1d00332/google_genai-1.25.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "7f59c9b9148c8702b60253f5a251f16ae436534c5d4362da193c9db05ac9858c",
"md5": "aa069c1ba887ed81906bcf4b34700653",
"sha256": "a08a79c819a5d949d9948cd372e36e512bf85cd28158994daaa36d0ec4cb2b02"
},
"downloads": -1,
"filename": "google_genai-1.25.0.tar.gz",
"has_sig": false,
"md5_digest": "aa069c1ba887ed81906bcf4b34700653",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 228141,
"upload_time": "2025-07-09T20:53:47",
"upload_time_iso_8601": "2025-07-09T20:53:47.885334Z",
"url": "https://files.pythonhosted.org/packages/7f/59/c9b9148c8702b60253f5a251f16ae436534c5d4362da193c9db05ac9858c/google_genai-1.25.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-09 20:53:47",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "googleapis",
"github_project": "python-genai",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [
{
"name": "absl-py",
"specs": [
[
"==",
"2.1.0"
]
]
},
{
"name": "annotated-types",
"specs": [
[
"==",
"0.7.0"
]
]
},
{
"name": "anyio",
"specs": [
[
"==",
"4.8.0"
]
]
},
{
"name": "cachetools",
"specs": [
[
"==",
"5.5.0"
]
]
},
{
"name": "certifi",
"specs": [
[
"==",
"2024.8.30"
]
]
},
{
"name": "charset-normalizer",
"specs": [
[
"==",
"3.4.0"
]
]
},
{
"name": "coverage",
"specs": [
[
"==",
"7.6.9"
]
]
},
{
"name": "httpx",
"specs": [
[
"==",
"0.28.1"
]
]
},
{
"name": "google-auth",
"specs": [
[
"==",
"2.37.0"
]
]
},
{
"name": "idna",
"specs": [
[
"==",
"3.10"
]
]
},
{
"name": "iniconfig",
"specs": [
[
"==",
"2.0.0"
]
]
},
{
"name": "packaging",
"specs": [
[
"==",
"24.2"
]
]
},
{
"name": "pillow",
"specs": [
[
"==",
"11.0.0"
]
]
},
{
"name": "pluggy",
"specs": [
[
"==",
"1.5.0"
]
]
},
{
"name": "pyasn1",
"specs": [
[
"==",
"0.6.1"
]
]
},
{
"name": "pyasn1_modules",
"specs": [
[
"==",
"0.4.1"
]
]
},
{
"name": "pydantic",
"specs": [
[
"==",
"2.9.2"
]
]
},
{
"name": "pydantic_core",
"specs": [
[
"==",
"2.23.4"
]
]
},
{
"name": "pytest",
"specs": [
[
"==",
"8.3.4"
]
]
},
{
"name": "pytest-asyncio",
"specs": [
[
"==",
"0.25.0"
]
]
},
{
"name": "pytest-cov",
"specs": [
[
"==",
"6.0.0"
]
]
},
{
"name": "requests",
"specs": [
[
"==",
"2.32.4"
]
]
},
{
"name": "rsa",
"specs": [
[
"==",
"4.9"
]
]
},
{
"name": "tenacity",
"specs": [
[
"==",
"8.2.3"
]
]
},
{
"name": "typing_extensions",
"specs": [
[
"==",
"4.12.2"
]
]
},
{
"name": "urllib3",
"specs": [
[
"==",
"2.2.3"
]
]
},
{
"name": "websockets",
"specs": [
[
"==",
"15.0.0"
]
]
},
{
"name": "mcp",
"specs": [
[
"==",
"1.8.1"
]
]
}
],
"lcname": "google-genai"
}