llm-elasticsearch-cache

Name: llm-elasticsearch-cache
Version: 0.2.6
Home page: https://github.com/SpazioDati/llm-elasticsearch-cache
Summary: [IMPORTANT: This library is now part of LangChain, follow its official documentation] A caching layer for LLMs that exploits Elasticsearch, fully compatible with LangChain caching, both for chat and embeddings models.
Upload time: 2024-05-30 10:47:03
Maintainer: Giacomo Berardi
Author: SpazioDati s.r.l.
Requires Python: <4.0,>=3.10
License: MIT
Keywords: langchain, elasticsearch, openai, llm, chatgpt
> [!IMPORTANT]
> ## This library is now part of LangChain, follow the official documentation, e.g. [for the LLM cache](https://python.langchain.com/v0.2/docs/integrations/llm_caching/#elasticsearch-cache)

# llm-elasticsearch-cache

A caching layer for LLMs that exploits Elasticsearch, fully compatible with LangChain caching, both for chat and embeddings models.

## Install

```shell
pip install llm-elasticsearch-cache
```

## Chat cache usage

The LangChain cache can be used similarly to the
[other cache integrations](https://python.langchain.com/v0.2/docs/integrations/llm_caching/).

### Basic example

```python
from langchain.globals import set_llm_cache
from llmescache.langchain import ElasticsearchCache
from elasticsearch import Elasticsearch

es_client = Elasticsearch(hosts="http://localhost:9200")
set_llm_cache(
    ElasticsearchCache(
        es_client=es_client, 
        es_index="llm-chat-cache", 
        metadata={"project": "my_chatgpt_project"}
    )
)
```
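
Once the cache is set, repeated identical calls to a chat model are answered from Elasticsearch instead of the provider API. A minimal sketch (the model name is only an example):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")  # any LangChain chat model works

# The first call reaches the provider and stores the generation in "llm-chat-cache";
# the second identical call is served from the Elasticsearch cache.
llm.invoke("Tell me a joke about caching")
llm.invoke("Tell me a joke about caching")
```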

The `es_index` parameter can also take aliases. This allows you to use
[ILM: Manage the index lifecycle](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html),
which we suggest considering for managing retention and controlling cache growth.
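
For example, `es_index` can point at a write alias whose backing indices are rolled over and expired by an ILM policy. A minimal sketch, assuming the Elasticsearch 8.x Python client; the policy, index, and alias names are only illustrative:

```python
from elasticsearch import Elasticsearch

es_client = Elasticsearch(hosts="http://localhost:9200")

# ILM policy: roll the cache over weekly and delete old backing indices after 30 days.
es_client.ilm.put_lifecycle(
    name="llm-chat-cache-policy",
    policy={
        "phases": {
            "hot": {"actions": {"rollover": {"max_age": "7d"}}},
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    },
)

# First backing index with a write alias; pass the alias name as `es_index`.
es_client.indices.create(
    index="llm-chat-cache-000001",
    aliases={"llm-chat-cache": {"is_write_index": True}},
    settings={
        "index.lifecycle.name": "llm-chat-cache-policy",
        "index.lifecycle.rollover_alias": "llm-chat-cache",
    },
)
```

In practice a matching index template is also needed, so that the indices created by rollover inherit the same settings and mappings.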

Look at the class docstring for all parameters.

### Index the generated text

The cached data won't be searchable by default.
The developer can customize how the Elasticsearch document is built in order to add indexed text fields,
in which to store, for example, the text generated by the LLM.

This can be done by subclassing and overriding methods.
The new cache class can also be applied to a pre-existing cache index:

```python
from llmescache.langchain import ElasticsearchCache
from elasticsearch import Elasticsearch
from langchain_core.caches import RETURN_VAL_TYPE
from typing import Any, Dict, List
from langchain.globals import set_llm_cache
import json


class SearchableElasticsearchCache(ElasticsearchCache):

    @property
    def mapping(self) -> Dict[str, Any]:
        mapping = super().mapping
        mapping["mappings"]["properties"]["parsed_llm_output"] = {"type": "text", "analyzer": "english"}
        return mapping
    
    def build_document(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> Dict[str, Any]:
        body = super().build_document(prompt, llm_string, return_val)
        body["parsed_llm_output"] = self._parse_output(body["llm_output"])
        return body

    @staticmethod
    def _parse_output(data: List[str]) -> List[str]:
        # Extract the message content from each serialized LangChain generation.
        return [json.loads(output)["kwargs"]["message"]["kwargs"]["content"] for output in data]


es_client = Elasticsearch(hosts="http://localhost:9200")
set_llm_cache(SearchableElasticsearchCache(es_client=es_client, es_index="llm-chat-cache"))
```
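
With the extra field in place, cached generations can be searched with an ordinary full-text query. A minimal sketch, reusing the `es_client` defined above:

```python
# Full-text search over the text generated by the LLM and stored in the cache.
response = es_client.search(
    index="llm-chat-cache",
    query={"match": {"parsed_llm_output": "elasticsearch"}},
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["parsed_llm_output"])
```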

## Embeddings cache usage

Caching embeddings is achieved by using [CacheBackedEmbeddings](https://python.langchain.com/docs/modules/data_connection/text_embedding/caching_embeddings),
in a slightly different way than shown in the official documentation.

```python
from llmescache.langchain import ElasticsearchStore
from elasticsearch import Elasticsearch
from langchain.embeddings import CacheBackedEmbeddings
from langchain_openai import OpenAIEmbeddings

es_client = Elasticsearch(hosts="http://localhost:9200")

underlying_embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
store = ElasticsearchStore(
    es_client=es_client, 
    es_index="llm-embeddings-cache",
    namespace=underlying_embeddings.model,
    metadata={"project": "my_llm_project"}
)
cached_embeddings = CacheBackedEmbeddings(
    underlying_embeddings, 
    store
)
```
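
`cached_embeddings` can then be used like any other embeddings model; texts that were already embedded are read back from the cache index instead of calling the provider again. A minimal sketch:

```python
# The first occurrence of each text is embedded and stored; repeats come from Elasticsearch.
vectors = cached_embeddings.embed_documents(["hello world", "another document", "hello world"])
```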

Similarly to the chat cache, one can subclass `ElasticsearchStore` in order to index vectors for search.

```python
from llmescache.langchain import ElasticsearchStore
from typing import Any, Dict, List

class SearchableElasticsearchStore(ElasticsearchStore):

    @property
    def mapping(self) -> Dict[str, Any]:
        mapping = super().mapping
        mapping["mappings"]["properties"]["vector"] = {"type": "dense_vector", "dims": 1536, "index": True, "similarity": "dot_product"}
        return mapping
    
    def build_document(self, llm_input: str, vector: List[float]) -> Dict[str, Any]:
        body = super().build_document(llm_input, vector)
        body["vector"] = vector
        return body
```
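
Once vectors are indexed this way, the cache index can also be queried with an approximate kNN search. A minimal sketch, assuming the Elasticsearch 8.x Python client and a `cached_embeddings` instance backed by `SearchableElasticsearchStore` (note that the query embedding itself is not cached, see below):

```python
query_vector = cached_embeddings.embed_query("what is a caching layer?")

# Approximate nearest-neighbour search over the cached document vectors.
response = es_client.search(
    index="llm-embeddings-cache",
    knn={
        "field": "vector",
        "query_vector": query_vector,
        "k": 5,
        "num_candidates": 50,
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_id"])
```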

Be aware that `CacheBackedEmbeddings` does
[not currently support caching queries](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cache.CacheBackedEmbeddings.html#langchain.embeddings.cache.CacheBackedEmbeddings.embed_query),
which means that text queries, for vector searches, won't be cached.
However, by overriding the `embed_query` method one should be able to implement it easily.
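
A minimal sketch of such an override, assuming the key-value store exposes LangChain's `BaseStore` `mget`/`mset` interface and the attribute names used by `CacheBackedEmbeddings`:

```python
from typing import List

from langchain.embeddings import CacheBackedEmbeddings


class QueryCachingEmbeddings(CacheBackedEmbeddings):
    """Variant that also caches query embeddings in the same store."""

    def embed_query(self, text: str) -> List[float]:
        # Look the query text up in the store shared with document embeddings.
        (cached_vector,) = self.document_embedding_store.mget([text])
        if cached_vector is not None:
            return cached_vector
        vector = self.underlying_embeddings.embed_query(text)
        self.document_embedding_store.mset([(text, vector)])
        return vector
```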
            
