langchain-elasticsearch


Name: langchain-elasticsearch
Version: 0.1.3
Home page: https://github.com/langchain-ai/langchain-elastic
Summary: An integration package connecting Elasticsearch and LangChain
Upload time: 2024-04-26 09:55:05
Maintainer: None
Docs URL: None
Author: None
Requires Python: <4.0,>=3.8.1
License: MIT
# langchain-elasticsearch

This package contains the LangChain integration with Elasticsearch.

## Installation

```bash
pip install -U langchain-elasticsearch
```

## Elasticsearch setup

### Elastic Cloud

You need a running Elasticsearch deployment. The easiest way to start one is through [Elastic Cloud](https://cloud.elastic.co/).
You can sign up for a [free trial](https://www.elastic.co/cloud/cloud-trial-overview).

1. [Create a deployment](https://www.elastic.co/guide/en/cloud/current/ec-create-deployment.html)
2. Get your Cloud ID:
    1. In the [Elastic Cloud console](https://cloud.elastic.co), click "Manage" next to your deployment
    2. Copy the Cloud ID and paste it into the `es_cloud_id` parameter below
3. Create an API key:
    1. In the [Elastic Cloud console](https://cloud.elastic.co), click "Open" next to your deployment
    2. In the left-hand side menu, go to "Stack Management", then to "API Keys"
    3. Click "Create API key"
    4. Enter a name for the API key and click "Create"
    5. Copy the API key and paste it into the `es_api_key` parameter below
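
If you want to check the Cloud ID and API key before wiring them into LangChain, one option is a quick call with the official `elasticsearch` Python client (a minimal sketch; the client is already a dependency of this package):

```python
from elasticsearch import Elasticsearch

# Connect with the Cloud ID and API key copied from the Elastic Cloud console.
es_client = Elasticsearch(
    cloud_id="your-cloud-id",
    api_key="your-api-key",
)

# A successful call returns basic information about the cluster.
print(es_client.info())
```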

### Docker

Alternatively, you can run Elasticsearch via Docker as described in the [docs](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch).
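
When connecting to a local instance like this, most of the classes below also accept an `es_url` parameter in place of `es_cloud_id`/`es_api_key`. A minimal sketch, assuming security is disabled on the local cluster:

```python
from langchain_elasticsearch import ElasticsearchStore

embeddings = ...  # any LangChain Embeddings implementation

vectorstore = ElasticsearchStore(
    es_url="http://localhost:9200",  # local Elasticsearch instead of Elastic Cloud
    index_name="your-index-name",
    embedding=embeddings,
)
```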

## Usage

### ElasticsearchStore

The `ElasticsearchStore` class exposes Elasticsearch as a vector store.

```python
from langchain_elasticsearch import ElasticsearchStore

embeddings = ... # use a LangChain Embeddings class or ElasticsearchEmbeddings

vectorstore = ElasticsearchStore(
    es_cloud_id="your-cloud-id",
    es_api_key="your-api-key",
    index_name="your-index-name",
    embedding=embeddings,
)
```
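
Once constructed, the store follows the standard LangChain vector store interface; a brief usage sketch with placeholder texts:

```python
# Index a few texts; vectors are computed with the configured embeddings object.
vectorstore.add_texts(["the cat sat on the mat", "the dog chased a ball"])

# Retrieve the most similar documents for a query.
docs = vectorstore.similarity_search("where is the cat?", k=1)
print(docs[0].page_content)
```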

### ElasticsearchRetriever

The `ElasticsearchRetriever` class can be used to implement more complex queries.
This can be useful for power users and is necessary if data was ingested outside of LangChain
(for example by a web crawler).

```python
from typing import Dict

from langchain_elasticsearch import ElasticsearchRetriever

text_field = "text"  # assumed name of the index field that holds the document text


def fuzzy_query(search_query: str) -> Dict:
    return {
        "query": {
            "match": {
                text_field: {
                    "query": search_query,
                    "fuzziness": "AUTO",
                }
            },
        },
    }


fuzzy_retriever = ElasticsearchRetriever.from_es_params(
    es_cloud_id="your-cloud-id",
    es_api_key="your-api-key",
    index_name="your-index-name",
    body_func=fuzzy_query,
    content_field=text_field,
)

# fuzziness "AUTO" lets the misspelled query "fooo" still match terms such as "foo"
fuzzy_retriever.get_relevant_documents("fooo")
```

### ElasticsearchEmbeddings

The `ElasticsearchEmbeddings` class provides an interface to generate embeddings using a model
deployed in an Elasticsearch cluster.

```python
from langchain_elasticsearch import ElasticsearchEmbeddings

embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id="your-model-id",
    input_field="your-input-field",
    es_cloud_id="your-cloud-id",
    es_api_key="your-api-key",
)
```
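
The returned object implements the standard LangChain `Embeddings` interface, so it can be passed to `ElasticsearchStore` above or used directly; a minimal sketch:

```python
# Embed a single query string.
query_vector = embeddings.embed_query("hello world")

# Embed a batch of documents.
doc_vectors = embeddings.embed_documents(["first document", "second document"])
```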

### ElasticsearchChatMessageHistory

The `ElasticsearchChatMessageHistory` class stores chat histories in Elasticsearch.

```python
from langchain_elasticsearch import ElasticsearchChatMessageHistory

chat_history = ElasticsearchChatMessageHistory(
    index="your-index-name",
    session_id="your-session-id",
    es_cloud_id="your-cloud-id",
    es_api_key="your-api-key",
)
```
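
The history object follows LangChain's chat message history interface, so messages can be appended and read back; a minimal sketch:

```python
# Append messages to the conversation stored in Elasticsearch.
chat_history.add_user_message("How do I reset my password?")
chat_history.add_ai_message("You can reset it from the account settings page.")

# Read the conversation back as a list of message objects.
print(chat_history.messages)
```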


### ElasticsearchCache

A caching layer for LLMs that uses Elasticsearch.

Simple example:

```python
from elasticsearch import Elasticsearch
from langchain.globals import set_llm_cache

from langchain_elasticsearch import ElasticsearchCache

es_client = Elasticsearch(hosts="http://localhost:9200")
set_llm_cache(
    ElasticsearchCache(
        es_connection=es_client,
        index_name="llm-chat-cache",
        metadata={"project": "my_chatgpt_project"},
    )
)
```
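
With the cache set globally, any LangChain LLM or chat model reuses cached generations for identical prompts. A hypothetical follow-up, assuming a chat model such as `ChatOpenAI` from the separate `langchain-openai` package:

```python
from langchain_openai import ChatOpenAI  # assumed to be installed separately

llm = ChatOpenAI()

# The first call hits the model and writes the result to the "llm-chat-cache" index;
# a second identical call is served from Elasticsearch instead of the model.
llm.invoke("Tell me a joke about whales")
llm.invoke("Tell me a joke about whales")
```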

The `index_name` parameter can also accept aliases. This allows using
[ILM: Manage the index lifecycle](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html),
which we suggest considering for managing retention and controlling cache growth.

See the class docstring for all parameters.

#### Index the generated text

The cached data is not searchable by default.
The developer can customize how the Elasticsearch document is built in order to add indexed text fields,
for example to store the text generated by the LLM.

This can be done by subclassing and overriding methods.
The new cache class can also be applied to a pre-existing cache index:

```python
import json
from typing import Any, Dict, List

from elasticsearch import Elasticsearch
from langchain.globals import set_llm_cache
from langchain_core.caches import RETURN_VAL_TYPE

from langchain_elasticsearch import ElasticsearchCache


class SearchableElasticsearchCache(ElasticsearchCache):
    @property
    def mapping(self) -> Dict[str, Any]:
        mapping = super().mapping
        mapping["mappings"]["properties"]["parsed_llm_output"] = {
            "type": "text",
            "analyzer": "english",
        }
        return mapping

    def build_document(
        self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE
    ) -> Dict[str, Any]:
        body = super().build_document(prompt, llm_string, return_val)
        body["parsed_llm_output"] = self._parse_output(body["llm_output"])
        return body

    @staticmethod
    def _parse_output(data: List[str]) -> List[str]:
        return [
            json.loads(output)["kwargs"]["message"]["kwargs"]["content"]
            for output in data
        ]


es_client = Elasticsearch(hosts="http://localhost:9200")
set_llm_cache(
    SearchableElasticsearchCache(es_connection=es_client, index_name="llm-chat-cache")
)
```

When overriding the mapping and the document building,
please make only additive modifications, keeping the base mapping intact.
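
With the subclass above in place, cached generations become searchable with the regular Elasticsearch client, for example via a full-text query on the new `parsed_llm_output` field (a sketch using the standard `search` API; index name as configured above):

```python
# Full-text search over the text generated by the LLM.
response = es_client.search(
    index="llm-chat-cache",
    query={"match": {"parsed_llm_output": "whales"}},
)

for hit in response["hits"]["hits"]:
    print(hit["_source"]["parsed_llm_output"])
```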

            
