# langchain-google-vertexai
This package contains the LangChain integrations for Google Cloud generative models.
## Contents
1. [Installation](#installation)
2. [Chat Models](#chat-models)
 * [Multimodal inputs](#multimodal-inputs)
 * [Multimodal Outputs](#multimodal-outputs)
3. [Embeddings](#embeddings)
4. [LLMs](#llms)
5. [Code Generation](#code-generation)
* [Example: Generate a Python function](#example-generate-a-python-function)
* [Example: Generate JavaScript code](#example-generate-javascript-code)
* [Notes](#notes)
## Installation
```bash
pip install -U langchain-google-vertexai
```
## Chat Models
The `ChatVertexAI` class exposes Gemini models such as `gemini-pro` and other Gemini variants.
To use it, you need a Google Cloud project with the Vertex AI API enabled and application credentials configured. Initialize the model as follows:
```python
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")
```
### Multimodal inputs
Gemini models accept image inputs alongside text within a single chat message. Example:
```python
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name="gemini-2.0-flash-001")
message = HumanMessage(
content=[
{
"type": "text",
"text": "What's in this image?",
},
{"type": "image_url", "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"}},
]
)
llm.invoke([message])
```
The value of `image_url` can be:
* A public image URL
* An accessible Google Cloud Storage (GCS) URI (e.g., `"gs://path/to/file.png"`)
* A base64-encoded image as a data URL (e.g., `"data:image/png;base64,abcd124"`)
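For local files, you can build the base64 data URL yourself with the standard library. A minimal sketch — the placeholder bytes below stand in for your actual image file's contents:

```python
import base64

# Read your image file's bytes; shown here with placeholder PNG header bytes,
# e.g. image_bytes = open("cat.png", "rb").read()
image_bytes = b"\x89PNG\r\n\x1a\n"

# Build a data URL suitable for the "image_url" field shown above.
encoded = base64.b64encode(image_bytes).decode("utf-8")
image_url = {"url": f"data:image/png;base64,{encoded}"}
print(image_url["url"][:30])
```

The resulting dict can be dropped into the `image_url` slot of the `HumanMessage` content list above.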
### Multimodal Outputs
Some Gemini models can return images as well as text. Example:
```python
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI, Modality
llm = ChatVertexAI(
    model_name="gemini-2.0-flash-preview-image-generation",
    response_modalities=[Modality.TEXT, Modality.IMAGE],
)
message = HumanMessage(
content=[
{
"type": "text",
"text": "Generate an image of a cat.",
},
]
)
llm.invoke([message])
```
## Embeddings
Google Cloud embedding models can be used as follows:
```python
from langchain_google_vertexai import VertexAIEmbeddings
embeddings = VertexAIEmbeddings(model_name="text-embedding-004")
embeddings.embed_query("hello, world!")
```
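Embedding vectors are typically compared with cosine similarity. A stdlib-only sketch — the short vectors here are placeholders for real `embed_query` output, which has hundreds of dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors; in practice, compare embeddings.embed_query(...) results.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
print(cosine_similarity(v1, v2))  # identical vectors score 1.0
```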
## LLMs
Use Google Cloud's generative AI models as LangChain LLMs:
```python
from langchain_core.prompts import PromptTemplate
from langchain_google_vertexai import ChatVertexAI
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
llm = ChatVertexAI(model_name="gemini-pro")
chain = prompt | llm
question = "Who was the president of the USA in 1994?"
print(chain.invoke({"question": question}))
```
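For intuition, `PromptTemplate` fills the `{question}` placeholder much like Python's `str.format` — a simplified illustration, not the library's actual implementation:

```python
template = """Question: {question}

Answer: Let's think step by step."""

# Conceptually what PromptTemplate does before the prompt reaches the model:
rendered = template.format(question="Who was the president of the USA in 1994?")
print(rendered)
```

The chain `prompt | llm` performs this substitution and then passes the rendered string to the model.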
## Code Generation
You can use Gemini models to generate code snippets, functions, or scripts in various programming languages.
### Example: Generate a Python function
```python
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name="gemini-pro", temperature=0.3, max_output_tokens=1000)
prompt = "Write a Python function that checks if a string is a valid email address."
generated_code = llm.invoke(prompt)
print(generated_code.content)  # invoke() returns an AIMessage; the text is in .content
```
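Models usually wrap generated code in markdown fences inside the response text. A small helper — hypothetical, not part of this package — can extract the code block:

```python
import re

FENCE = "`" * 3  # build the fence string to avoid a literal fence in this example

def extract_code(text: str) -> str:
    """Return the contents of the first fenced code block, or the raw text."""
    pattern = FENCE + r"(?:\w+)?\n(.*?)" + FENCE
    match = re.search(pattern, text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()

# Example response text, shaped like a typical model reply.
response = f"Here you go:\n{FENCE}python\ndef is_even(n):\n    return n % 2 == 0\n{FENCE}"
print(extract_code(response))
```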
### Example: Generate JavaScript code
```python
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name="gemini-pro", temperature=0.3, max_output_tokens=1000)
prompt_js = "Write a JavaScript function that returns the factorial of a number."
print(llm.invoke(prompt_js).content)
```
### Notes
* Adjust `temperature` to control creativity (higher values increase randomness).
* Use `max_output_tokens` to limit the length of the generated code.
* Gemini models are well suited to code generation, with strong coverage of common languages and programming concepts.