| Field | Value |
| --- | --- |
| Name | google-genai |
| Version | 0.4.0 |
| Summary | GenAI Python SDK |
| Homepage | https://github.com/googleapis/python-genai |
| Author email | Google LLC <googleapis-packages@google.com> |
| Requires Python | >=3.9 |
| License | Apache-2.0 |
| Upload time | 2025-01-08 18:51:48 |
# Google Gen AI SDK
[![PyPI version](https://img.shields.io/pypi/v/google-genai.svg)](https://pypi.org/project/google-genai/)
--------
**Documentation:** https://googleapis.github.io/python-genai/
-----
## Installation
``` cmd
pip install google-genai
```
## Imports
``` python
from google import genai
from google.genai import types
```
## Create a client
Run one of the following code blocks to create a client for the service
you are using (Google AI or Vertex AI). Feel free to switch the client
and rerun the examples to see how they behave under the different APIs.
``` python
# Only run this block for Google AI API
client = genai.Client(api_key='YOUR_API_KEY')
```
``` python
# Only run this block for Vertex AI API
client = genai.Client(
    vertexai=True, project='your-project-id', location='us-central1'
)
```
## Types
Parameter types can be specified either as dictionaries (`TypedDict`) or
as Pydantic models. The Pydantic model types are available in the
`types` module.
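For example, the same generation config can be expressed either as a typed
object or as a plain dictionary; a minimal sketch (both forms are accepted
by the same `config` parameter):

``` python
# Equivalent ways to express the same config: a Pydantic type from
# `google.genai.types`, or a plain dictionary with the same field names.
typed_config = types.GenerateContentConfig(temperature=0.3, top_p=0.95)
dict_config = {'temperature': 0.3, 'top_p': 0.95}

# Either can be passed as `config=` in the calls shown throughout this doc.
```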
## Models
The `client.models` module exposes model inference and model getters.
### Generate Content
``` python
response = client.models.generate_content(
    model='gemini-2.0-flash-exp', contents='What is your name?'
)
print(response.text)
```
### System Instructions and Other Configs
``` python
response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents='high',
    config=types.GenerateContentConfig(
        system_instruction='I say high, you say low',
        temperature=0.3,
    ),
)
print(response.text)
```
### Typed Config
All API methods support Pydantic types for parameters as well as
dictionaries. The types can be found in `google.genai.types`.
``` python
response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents=types.Part.from_text('Why is sky blue?'),
    config=types.GenerateContentConfig(
        temperature=0,
        top_p=0.95,
        top_k=20,
        candidate_count=1,
        seed=5,
        max_output_tokens=100,
        stop_sequences=["STOP!"],
        presence_penalty=0.0,
        frequency_penalty=0.0,
    )
)

response
```
### Safety Settings
``` python
response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents='Say something bad.',
    config=types.GenerateContentConfig(
        safety_settings=[
            types.SafetySetting(
                category='HARM_CATEGORY_HATE_SPEECH',
                threshold='BLOCK_ONLY_HIGH',
            )
        ]
    ),
)
print(response.text)
```
### Function Calling
#### Automatic Python Function Support
You can pass a Python function directly; it will be automatically
called, and its result will be returned to the model.
``` python
def get_current_weather(location: str) -> str:
    """Returns the current weather.

    Args:
        location: The city and state, e.g. San Francisco, CA
    """
    return 'sunny'


response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents="What is the weather like in Boston?",
    config=types.GenerateContentConfig(tools=[get_current_weather]),
)

response.text
```
#### Manually declare and invoke a function for function calling
If you don't want to use the automatic function support, you can manually
declare the function and invoke it.
The following example shows how to declare a function and pass it as a tool.
Then you will receive a function call part in the response.
``` python
function = dict(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "OBJECT",
        "properties": {
            "location": {
                "type": "STRING",
                "description": "The city and state, e.g. San Francisco, CA",
            },
        },
        "required": ["location"],
    },
)

tool = types.Tool(function_declarations=[function])

response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents="What is the weather like in Boston?",
    config=types.GenerateContentConfig(tools=[tool]),
)

response.candidates[0].content.parts[0].function_call
```
After you receive the function call part from the model, you can invoke the
function and get the function response. You can then pass the function
response back to the model.
The following example shows how to do it for a simple function invocation.
``` python
function_call_part = response.candidates[0].content.parts[0]

try:
    function_result = get_current_weather(**function_call_part.function_call.args)
    function_response = {'result': function_result}
except Exception as e:  # instead of raising the exception, you can let the model handle it
    function_response = {'error': str(e)}

function_response_part = types.Part.from_function_response(
    name=function_call_part.function_call.name,
    response=function_response,
)

response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents=[
        types.Part.from_text("What is the weather like in Boston?"),
        function_call_part,
        function_response_part,
    ])

response
```
### JSON Response Schema
#### Pydantic Model Schema support
Schemas can be provided as Pydantic Models.
``` python
from pydantic import BaseModel


class CountryInfo(BaseModel):
    name: str
    population: int
    capital: str
    continent: str
    gdp: int
    official_language: str
    total_area_sq_mi: int


response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents='Give me information of the United States.',
    config=types.GenerateContentConfig(
        response_mime_type='application/json',
        response_schema=CountryInfo,
    ),
)
print(response.text)
```
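Because the schema came from a Pydantic model, the JSON text can be parsed
back into a typed object; a small sketch, assuming Pydantic v2 (which
provides `model_validate_json`):

``` python
# Parse the model's JSON output back into the CountryInfo type.
country = CountryInfo.model_validate_json(response.text)
print(country.capital)
```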
``` python
response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents='Give me information of the United States.',
    config={
        'response_mime_type': 'application/json',
        'response_schema': {
            'required': [
                'name',
                'population',
                'capital',
                'continent',
                'gdp',
                'official_language',
                'total_area_sq_mi',
            ],
            'properties': {
                'name': {'type': 'STRING'},
                'population': {'type': 'INTEGER'},
                'capital': {'type': 'STRING'},
                'continent': {'type': 'STRING'},
                'gdp': {'type': 'INTEGER'},
                'official_language': {'type': 'STRING'},
                'total_area_sq_mi': {'type': 'INTEGER'},
            },
            'type': 'OBJECT',
        },
    },
)
print(response.text)
```
### Streaming
#### Streaming for text content
``` python
for chunk in client.models.generate_content_stream(
    model='gemini-2.0-flash-exp', contents='Tell me a story in 300 words.'
):
    print(chunk.text)
```
#### Streaming for image content
If your image is stored in Google Cloud Storage, you can use the `from_uri`
class method to create a Part object.
``` python
for chunk in client.models.generate_content_stream(
    model='gemini-1.5-flash',
    contents=[
        'What is this image about?',
        types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg',
        ),
    ],
):
    print(chunk.text)
```
If your image is stored in your local file system, you can read it in as bytes
data and use the `from_bytes` class method to create a Part object.
``` python
YOUR_IMAGE_PATH = 'your_image_path'
YOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'
with open(YOUR_IMAGE_PATH, 'rb') as f:
    image_bytes = f.read()

for chunk in client.models.generate_content_stream(
    model='gemini-1.5-flash',
    contents=[
        'What is this image about?',
        types.Part.from_bytes(
            data=image_bytes,
            mime_type=YOUR_IMAGE_MIME_TYPE,
        ),
    ],
):
    print(chunk.text)
```
### Async
`client.aio` exposes all of the analogous `async` methods that are
available on `client`.
For example, `client.aio.models.generate_content` is the async version
of `client.models.generate_content`.
``` python
response = await client.aio.models.generate_content(
    model='gemini-2.0-flash-exp', contents='Tell me a story in 300 words.'
)

print(response.text)
```
### Streaming
``` python
async for response in client.aio.models.generate_content_stream(
    model='gemini-2.0-flash-exp', contents='Tell me a story in 300 words.'
):
    print(response.text)
```
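The async snippets in this document assume an environment with top-level
`await`, such as a notebook. In a plain script, wrap the calls in a
coroutine and drive it with `asyncio.run`; a minimal sketch:

``` python
import asyncio


async def main():
    # Same call as above, but runnable from an ordinary script.
    response = await client.aio.models.generate_content(
        model='gemini-2.0-flash-exp', contents='Tell me a story in 300 words.'
    )
    print(response.text)


asyncio.run(main())
```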
### Count Tokens and Compute Tokens
``` python
response = client.models.count_tokens(
    model='gemini-2.0-flash-exp',
    contents='What is your name?',
)
print(response)
```
#### Compute Tokens
Compute tokens is not supported by Google AI.
``` python
response = client.models.compute_tokens(
    model='gemini-2.0-flash-exp',
    contents='What is your name?',
)
print(response)
```
#### Async
``` python
response = await client.aio.models.count_tokens(
    model='gemini-2.0-flash-exp',
    contents='What is your name?',
)
print(response)
```
### Embed Content
``` python
response = client.models.embed_content(
    model='text-embedding-004',
    contents='What is your name?',
)
response
```
``` python
# multiple contents with config
response = client.models.embed_content(
    model='text-embedding-004',
    contents=['What is your name?', 'What is your age?'],
    config=types.EmbedContentConfig(output_dimensionality=10),
)
response
```
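To work with the raw vectors, read them off the response; a sketch, assuming
the response exposes an `embeddings` list whose entries carry `values`:

``` python
# One entry per input; `values` is the embedding vector.
# (Field names are an assumption about the response type, not verified here.)
for embedding in response.embeddings:
    print(len(embedding.values), embedding.values[:3])
```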
### Imagen
#### Generate Image
Support for image generation in Google AI is behind an allowlist.
``` python
# Generate Image
response1 = client.models.generate_image(
    model='imagen-3.0-generate-001',
    prompt='An umbrella in the foreground, and a rainy night sky in the background',
    config=types.GenerateImageConfig(
        negative_prompt='human',
        number_of_images=1,
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response1.generated_images[0].image.show()
```
#### Upscale Image
Upscale image is not supported in Google AI.
``` python
# Upscale the generated image from above
response2 = client.models.upscale_image(
    model='imagen-3.0-generate-001',
    image=response1.generated_images[0].image,
    upscale_factor='x2',
    config=types.UpscaleImageConfig(
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response2.generated_images[0].image.show()
```
#### Edit Image
Edit image uses a separate model from generate and upscale.
Edit image is not supported in Google AI.
``` python
# Edit the generated image from above
from google.genai.types import RawReferenceImage, MaskReferenceImage
raw_ref_image = RawReferenceImage(
    reference_id=1,
    reference_image=response1.generated_images[0].image,
)

# Model computes a mask of the background
mask_ref_image = MaskReferenceImage(
    reference_id=2,
    config=types.MaskReferenceConfig(
        mask_mode='MASK_MODE_BACKGROUND',
        mask_dilation=0,
    ),
)

response3 = client.models.edit_image(
    model='imagen-3.0-capability-001',
    prompt='Sunlight and clear sky',
    reference_images=[raw_ref_image, mask_ref_image],
    config=types.EditImageConfig(
        edit_mode='EDIT_MODE_INPAINT_INSERTION',
        number_of_images=1,
        negative_prompt='human',
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response3.generated_images[0].image.show()
```
## Chats
Create a chat session to start a multi-turn conversation with the model.
### Send Message
```python
chat = client.chats.create(model='gemini-2.0-flash-exp')
response = chat.send_message('tell me a story')
print(response.text)
```
### Streaming
```python
chat = client.chats.create(model='gemini-2.0-flash-exp')
for chunk in chat.send_message_stream('tell me a story'):
    print(chunk.text)
```
### Async
```python
chat = client.aio.chats.create(model='gemini-2.0-flash-exp')
response = await chat.send_message('tell me a story')
print(response.text)
```
### Async Streaming
```python
chat = client.aio.chats.create(model='gemini-2.0-flash-exp')
async for chunk in chat.send_message_stream('tell me a story'):
    print(chunk.text)
```
## Files (Only Google AI)
``` python
# Download two sample PDFs (notebook shell commands)
!gsutil cp gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf .
!gsutil cp gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf .
```
### Upload
``` python
file1 = client.files.upload(path='2312.11805v3.pdf')
file2 = client.files.upload(path='2403.05530.pdf')
print(file1)
print(file2)
```
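An uploaded file can then be referenced in a request the same way the Caches
example below references GCS files, via `types.Part.from_uri` with the
file's `uri`; a sketch under that assumption:

``` python
# Reference the uploaded PDF by its URI; mime_type must match the file.
response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents=[
        'Summarize this paper in one paragraph.',
        types.Part.from_uri(file_uri=file1.uri, mime_type='application/pdf'),
    ],
)
print(response.text)
```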
### Delete
``` python
file3 = client.files.upload(path='2312.11805v3.pdf')
client.files.delete(name=file3.name)
```
## Caches
`client.caches` contains the control-plane APIs for cached content.
### Create
``` python
if client.vertexai:
    file_uris = [
        'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',
        'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',
    ]
else:
    file_uris = [file1.uri, file2.uri]

cached_content = client.caches.create(
    model='gemini-1.5-pro-002',
    config=types.CreateCachedContentConfig(
        contents=[
            types.Content(
                role='user',
                parts=[
                    types.Part.from_uri(
                        file_uri=file_uris[0],
                        mime_type='application/pdf',
                    ),
                    types.Part.from_uri(
                        file_uri=file_uris[1],
                        mime_type='application/pdf',
                    ),
                ],
            )
        ],
        system_instruction='What is the sum of the two pdfs?',
        display_name='test cache',
        ttl='3600s',
    ),
)
```
### Get
``` python
client.caches.get(name=cached_content.name)
```
### Generate Content
``` python
client.models.generate_content(
    model='gemini-1.5-pro-002',
    contents='Summarize the pdfs',
    config=types.GenerateContentConfig(
        cached_content=cached_content.name,
    ),
)
```
## Tunings
`client.tunings` contains the tuning job APIs and supports supervised
fine-tuning through `tune` and distillation through `distill`.
### Tune
- Vertex AI supports tuning from a GCS source
- Google AI supports tuning from inline examples
``` python
if client.vertexai:
    model = 'gemini-1.5-pro-002'
    training_dataset = types.TuningDataset(
        gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',
    )
else:
    model = 'models/gemini-1.0-pro-001'
    training_dataset = types.TuningDataset(
        examples=[
            types.TuningExample(
                text_input=f"Input text {i}",
                output=f"Output text {i}",
            )
            for i in range(5)
        ],
    )
```
``` python
tuning_job = client.tunings.tune(
    base_model=model,
    training_dataset=training_dataset,
    config=types.CreateTuningJobConfig(
        epoch_count=1,
        tuned_model_display_name="test_dataset_examples model",
    ),
)
tuning_job
```
### Get Tuning Job
``` python
tuning_job = client.tunings.get(name=tuning_job.name)
tuning_job
```
``` python
import time

running_states = set([
    "JOB_STATE_PENDING",
    "JOB_STATE_RUNNING",
])

while tuning_job.state in running_states:
    print(tuning_job.state)
    tuning_job = client.tunings.get(name=tuning_job.name)
    time.sleep(10)
```
#### Use Tuned Model
``` python
response = client.models.generate_content(
    model=tuning_job.tuned_model.endpoint,
    contents='What is your name?',
)
response.text
```
### Get Tuned Model
``` python
tuned_model = client.models.get(model=tuning_job.tuned_model.model)
tuned_model
```
### List Tuned Models
``` python
for model in client.models.list(config={'page_size': 10}):
    print(model)
```
``` python
pager = client.models.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```
#### Async
``` python
async for job in await client.aio.models.list(config={'page_size': 10}):
    print(job)
```
``` python
async_pager = await client.aio.models.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```
### Update Tuned Model
``` python
model = pager[0]

model = client.models.update(
    model=model.name,
    config=types.UpdateModelConfig(
        display_name='my tuned model',
        description='my tuned model description',
    ),
)

model
```
### Distillation
Only supported on Vertex AI. Requires an allowlist.
``` python
distillation_job = client.tunings.distill(
    student_model="gemma-2b-1.1-it",
    teacher_model="gemini-1.5-pro-002",
    training_dataset=genai.types.DistillationDataset(
        gcs_uri="gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl",
    ),
    config=genai.types.CreateDistillationJobConfig(
        epoch_count=1,
        pipeline_root_directory="gs://my-bucket",
    ),
)
distillation_job
```
``` python
completed_states = set([
    "JOB_STATE_SUCCEEDED",
    "JOB_STATE_FAILED",
    "JOB_STATE_CANCELLED",
    "JOB_STATE_PAUSED",
])

while distillation_job.state not in completed_states:
    print(distillation_job.state)
    distillation_job = client.tunings.get(name=distillation_job.name)
    time.sleep(10)
```
``` python
distillation_job
```
### List Tuning Jobs
``` python
for job in client.tunings.list(config={'page_size': 10}):
    print(job)
```
``` python
pager = client.tunings.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```
#### Async
``` python
async for job in await client.aio.tunings.list(config={'page_size': 10}):
    print(job)
```
``` python
async_pager = await client.aio.tunings.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```
## Batch Prediction
Only supported in Vertex AI.
### Create
``` python
# Specify model and source file only, destination and job display name will be auto-populated
job = client.batches.create(
    model='gemini-1.5-flash-002',
    src='bq://my-project.my-dataset.my-table',
)
job
```
``` python
# Get a job by name
job = client.batches.get(name=job.name)
job.state
```
``` python
completed_states = set([
    "JOB_STATE_SUCCEEDED",
    "JOB_STATE_FAILED",
    "JOB_STATE_CANCELLED",
    "JOB_STATE_PAUSED",
])

while job.state not in completed_states:
    print(job.state)
    job = client.batches.get(name=job.name)
    time.sleep(30)
job
```
### List
``` python
for job in client.batches.list(config={'page_size': 10}):
    print(job)
```
``` python
pager = client.batches.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```
#### Async
``` python
async for job in await client.aio.batches.list(config={'page_size': 10}):
    print(job)
```
``` python
async_pager = await client.aio.batches.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```
### Delete
``` python
# Delete the job resource
delete_job = client.batches.delete(name=job.name)
delete_job
```