langchain-google-genai


Name: langchain-google-genai
Version: 1.0.4
Home page: https://github.com/langchain-ai/langchain-google
Summary: An integration package connecting Google's genai package and LangChain
Upload time: 2024-05-16 13:47:53
Requires Python: <4.0,>=3.9
License: MIT
# langchain-google-genai

This package contains the LangChain integrations for Gemini through their generative-ai SDK.

## Installation

```bash
pip install -U langchain-google-genai
```

### Image utilities
To use image utility methods, like loading images from GCS URLs, install with the 'images' extras group:

```bash
pip install -U "langchain-google-genai[images]"
```

## Chat Models

This package contains the `ChatGoogleGenerativeAI` class, which is the recommended way to interface with the Google Gemini series of models.

To use it, install the package as shown above, then configure your environment with an API key:

```bash
export GOOGLE_API_KEY=your-api-key
```
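Alternatively, the key can be set programmatically before the model is created. This is a minimal sketch, assuming the key is supplied some other way at runtime; the placeholder value below is not a real credential:

```python
import os

# Programmatic alternative to exporting the variable in the shell.
# "your-api-key" is a placeholder, not a real credential.
os.environ.setdefault("GOOGLE_API_KEY", "your-api-key")
```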

Then initialize the model:

```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")
```

### Multimodal inputs

The Gemini vision model supports image inputs in a single chat message. For example:

```python
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")
# example
message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "What's in this image?",
        },  # You can optionally provide text parts
        {"type": "image_url", "image_url": "https://picsum.photos/seed/picsum/200/300"},
    ]
)
llm.invoke([message])
```

The value of `image_url` can be any of the following:

- A public image URL
- An accessible Google Cloud Storage (GCS) file (e.g., "gcs://path/to/file.png")
- A local file path
- A base64 encoded image (e.g., `data:image/png;base64,abcd124`)
- A PIL image
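
For the base64 option, a small helper can build the data URI from raw image bytes. This is a sketch; `to_data_uri` is not part of this package, and the tiny byte string below stands in for a real image:

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URI usable in the image_url field."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# A tiny fake payload stands in for real image bytes here.
uri = to_data_uri(b"\x89PNG\r\n")
```

The resulting string can be passed directly as the `image_url` value in the message above.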



## Embeddings

This package also adds support for Google's embedding models.

```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
embeddings.embed_query("hello, world!")
```
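
A common next step is to compare an embedded query against embedded documents with cosine similarity. The helper below is a plain-Python sketch; the example vectors are stand-ins for real `embed_query` / `embed_documents` output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With real embeddings these would come from embed_query / embed_documents.
query_vec = [0.1, 0.3, 0.5]
doc_vec = [0.2, 0.3, 0.4]
score = cosine_similarity(query_vec, doc_vec)
```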

## Semantic Retrieval

Semantic Retrieval enables retrieval-augmented generation (RAG) in your application.

```python
from langchain_community.document_loaders import DirectoryLoader
from langchain_google_genai import GoogleVectorStore
from langchain_text_splitters import CharacterTextSplitter

# Create a new store for housing your documents.
corpus_store = GoogleVectorStore.create_corpus(display_name="My Corpus")

# Create a new document under the above corpus.
document_store = GoogleVectorStore.create_document(
    corpus_id=corpus_store.corpus_id, display_name="My Document"
)

# Upload some texts to the document.
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
for file in DirectoryLoader(path="data/").load():
    documents = text_splitter.split_documents([file])
    document_store.add_documents(documents)

# Talk to your entire corpus, which may contain many documents.
aqa = corpus_store.as_aqa()
response = aqa.invoke("What is the meaning of life?")

# Read the response along with the attributed passages and answerability.
print(response.answer)
print(response.attributed_passages)
print(response.answerable_probability)
```
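
Since the AQA response exposes `answerable_probability`, one pattern is to fall back to a refusal when the score is low. This is purely illustrative; `maybe_answer` and the threshold value are not part of this package, and a stand-in object replaces a real response here:

```python
from types import SimpleNamespace

# Hypothetical helper: fall back when the AQA answerability score is low.
# The 0.5 threshold is illustrative, not an API default.
def maybe_answer(response, threshold: float = 0.5) -> str:
    if response.answerable_probability >= threshold:
        return response.answer
    return "Not enough grounding in the corpus to answer."

# Stand-in for a real AQA response object.
demo = SimpleNamespace(answer="42", answerable_probability=0.9)
print(maybe_answer(demo))
```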


            
