# langchain-google-genai
This package contains the LangChain integrations for the Google Gemini models through Google's `generative-ai` SDK.
## Installation
```bash
pip install -U langchain-google-genai
```
### Image utilities
To use image utility methods, such as loading images from GCS URLs, install with the `images` extras group:
```bash
pip install -U "langchain-google-genai[images]"
```
## Chat Models
This package contains the `ChatGoogleGenerativeAI` class, which is the recommended way to interface with the Google Gemini series of models.
To use it, install the package and set your API key in the environment:
```bash
export GOOGLE_API_KEY=your-api-key
```
Then initialize the chat model:
```python
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(model="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")
```
#### Multimodal inputs
Gemini vision models support image inputs within a single chat message. For example:
```python
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")

message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "What's in this image?",
        },  # You can optionally provide text parts
        {"type": "image_url", "image_url": "https://picsum.photos/seed/picsum/200/300"},
    ]
)
llm.invoke([message])
```
The value of `image_url` can be any of the following:
- A public image URL
- An accessible Google Cloud Storage (GCS) file (e.g., `"gcs://path/to/file.png"`)
- A base64-encoded image (e.g., `data:image/png;base64,abcd124`)
- A PIL image
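For the base64 option, the data URL can be built from raw image bytes with the standard library. A minimal sketch (the byte string below is only a stand-in for real image data):

```python
import base64

# Encode raw image bytes as a base64 data URL. The PNG header bytes
# here are a placeholder for actual image file contents.
image_bytes = b"\x89PNG\r\n\x1a\n"
encoded = base64.b64encode(image_bytes).decode("ascii")
data_url = f"data:image/png;base64,{encoded}"

# The data URL is then used directly as the image_url value:
content_part = {"type": "image_url", "image_url": data_url}
```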
## Embeddings
This package also adds support for Google's embedding models.
```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
embeddings.embed_query("hello, world!")
```
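Embedding vectors are typically compared with cosine similarity. A self-contained sketch of that downstream step, where the example vectors stand in for real `embed_query` outputs:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-ins for two embeddings.embed_query(...) results.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.25]
score = cosine_similarity(v1, v2)  # close to 1.0 for similar texts
```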
## Semantic Retrieval
Enables retrieval-augmented generation (RAG) in your application.
```python
from langchain_community.document_loaders import DirectoryLoader
from langchain_google_genai import GoogleVectorStore
from langchain_text_splitters import CharacterTextSplitter

# Create a new store for housing your documents.
corpus_store = GoogleVectorStore.create_corpus(display_name="My Corpus")

# Create a new document under the above corpus.
document_store = GoogleVectorStore.create_document(
    corpus_id=corpus_store.corpus_id, display_name="My Document"
)

# Upload some texts to the document.
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
for file in DirectoryLoader(path="data/").load():
    documents = text_splitter.split_documents([file])
    document_store.add_documents(documents)

# Talk to your entire corpus with possibly many documents.
aqa = corpus_store.as_aqa()
response = aqa.invoke("What is the meaning of life?")

# Read the response along with the attributed passages and answerability.
print(response.answer)
print(response.attributed_passages)
print(response.answerable_probability)
```