# langchain-google-vertexai
This package contains the LangChain integrations for Google Cloud generative models.
## Installation
```bash
pip install -U langchain-google-vertexai
```
## Chat Models
The `ChatVertexAI` class exposes models such as `gemini-pro` and `chat-bison`.
To use it, you should have a Google Cloud project with the Vertex AI API enabled and credentials configured. Initialize the model as follows:
```python
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")
```
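If your application-default credentials do not already supply a project and region, you can pass them explicitly. This is a minimal sketch assuming the `project` and `location` constructor arguments; the values below are placeholders to replace with your own setup:
```python
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(
    model_name="gemini-pro",
    project="my-gcp-project",  # placeholder project ID
    location="us-central1",    # placeholder region
)
llm.invoke("Sing a ballad of LangChain.")
```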
You can use other models, e.g. `chat-bison`:
```python
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name="chat-bison", temperature=0.3)
llm.invoke("Sing a ballad of LangChain.")
```
### Multimodal inputs
The Gemini vision model supports image inputs when they are provided in a single chat message. For example:
```python
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")

# Compose a single message containing both text and image parts
message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "What's in this image?",
        },  # You can optionally provide text parts
        {
            "type": "image_url",
            "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"},
        },
    ]
)
llm.invoke([message])
```
The value of `image_url` can be any of the following (a sketch covering the last two options follows this list):
- A public image URL
- An accessible Google Cloud Storage URI (e.g., "gs://path/to/file.png")
- A local file path
- A base64 encoded image (e.g., `data:image/png;base64,abcd124`)
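As a minimal sketch of the local-file and base64 options, you can point `image_url` at a file on disk or base64-encode the bytes into a data URL yourself. The file path below is a placeholder:
```python
import base64

from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")

# Placeholder path -- replace with an image that exists on your machine.
image_path = "/path/to/local/image.png"

# Option 1: pass the local file path directly.
local_message = HumanMessage(
    content=[
        {"type": "text", "text": "What's in this image?"},
        {"type": "image_url", "image_url": {"url": image_path}},
    ]
)

# Option 2: embed the image as a base64-encoded data URL.
with open(image_path, "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

b64_message = HumanMessage(
    content=[
        {"type": "text", "text": "What's in this image?"},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encoded}"}},
    ]
)

llm.invoke([local_message])
llm.invoke([b64_message])
```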
## Embeddings
You can use Google Cloud's embedding models as follows:
```python
from langchain_google_vertexai import VertexAIEmbeddings
embeddings = VertexAIEmbeddings()
embeddings.embed_query("hello, world!")
```
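To embed several texts at once, `embed_documents` (part of the standard LangChain embeddings interface) returns one vector per input. The model name below is an assumption; use any embedding model available to your project:
```python
from langchain_google_vertexai import VertexAIEmbeddings

# "text-embedding-004" is an assumed model name -- substitute one that is
# enabled for your Google Cloud project.
embeddings = VertexAIEmbeddings(model_name="text-embedding-004")

vectors = embeddings.embed_documents(
    ["LangChain integrates with Vertex AI.", "Embeddings map text to vectors."]
)
print(len(vectors), len(vectors[0]))  # number of documents, embedding dimension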
## LLMs
You can use Google Cloud's generative AI models as LangChain LLMs:
```python
from langchain_core.prompts import PromptTemplate
from langchain_google_vertexai import ChatVertexAI
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
llm = ChatVertexAI(model_name="gemini-pro")
chain = prompt | llm
question = "Who was the president of the USA in 1994?"
print(chain.invoke({"question": question}))
```
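Chains built this way also support the rest of the LangChain runnable interface. As a small sketch, assuming the same prompt and model as above, you can stream the answer instead of waiting for the full response:
```python
from langchain_core.prompts import PromptTemplate
from langchain_google_vertexai import ChatVertexAI

prompt = PromptTemplate.from_template(
    "Question: {question}\n\nAnswer: Let's think step by step."
)
chain = prompt | ChatVertexAI(model_name="gemini-pro")

# Print the answer chunk by chunk as it is generated.
for chunk in chain.stream({"question": "Who was the president of the USA in 1994?"}):
    print(chunk.content, end="", flush=True)
```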
You can use Gemini and PaLM models, including code-generation ones:
```python
from langchain_google_vertexai import VertexAI
llm = VertexAI(model_name="code-bison", max_output_tokens=1000, temperature=0.3)
question = "Write a python function that checks if a string is a valid email address"
output = llm.invoke(question)
```