# Semantic Cache
Semantic Cache is a tool for caching natural text based on semantic similarity. It's ideal for any task that involves querying or retrieving information based on meaning, such as natural language classification or caching AI responses. Two pieces of text can be similar but not identical (e.g., "great places to check out in Spain" vs. "best places to visit in Spain"). Traditional caching doesn't recognize this semantic similarity and misses opportunities for reuse.
Semantic Cache allows you to:
- Easily classify natural text into predefined categories
- Avoid redundant LLM work by caching AI responses
- Reduce API latency by responding to similar queries with already cached values
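To see why exact-match caching falls short here, consider a plain-Python illustration (a toy `dict` cache, not this library's API):

```python
# A traditional cache keyed on exact strings misses paraphrases.
exact_cache = {"best places to visit in Spain": "Barcelona, Madrid, Seville"}

print(exact_cache.get("best places to visit in Spain"))       # hit
print(exact_cache.get("great places to check out in Spain"))  # None: a miss, despite identical intent
```

A semantic cache instead compares the *embeddings* of the two queries, so the paraphrase above would resolve to the same entry.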
<img src="./assets/how-semantic-cache-works.png" width="700">
## Highlights
- **Uses semantic similarity**: Stores cache entries by their meaning, not just the literal characters
- **Handles synonyms**: Recognizes synonyms and matches queries that phrase the same concept differently
- **Complex query support**: Understands long and nested user queries
- **Customizable**: Set a custom proximity threshold to filter out less relevant results
## Getting Started
### Prerequisites
- An Upstash Vector database (create one [here](https://console.upstash.com/vector))
### Installation
After creating a vector database, clone the repository with the following command:
```bash
git clone git@github.com:ErayEroglu/python-semantic-caching.git
```
### Setup
First, create an Upstash Vector database [here](https://console.upstash.com/vector). You'll need the `url` and `token` credentials to connect your semantic cache. Important: choose one of the pre-made embedding models when creating your database.
> [!NOTE]
> Different embedding models are great for different use cases. For example, if low latency is a priority, choose a model with a smaller dimension size like `bge-small-en-v1.5`. If accuracy is important, choose a model with more dimensions.
Create a `.env` file in the src directory of your project and add your Upstash Vector URL and token:
```plaintext
UPSTASH_VECTOR_REST_URL=https://example.upstash.io
UPSTASH_VECTOR_REST_TOKEN=your_secret_token_here
```
### Using Semantic Cache
After setting up the environment variables and cloning the repository, activate the virtual environment by running the following command from the `src` directory:
```bash
source ./bin/activate
```
Then, a basic demo can be created like this:
```python
import os
from time import sleep

from dotenv import load_dotenv
# adjust this import to match the module name used in the repository
from semantic_cache import SemanticCache


def main():
    # load the Upstash credentials from .env
    load_dotenv()
    UPSTASH_VECTOR_REST_URL = os.getenv('UPSTASH_VECTOR_REST_URL')
    UPSTASH_VECTOR_REST_TOKEN = os.getenv('UPSTASH_VECTOR_REST_TOKEN')

    # initialize the cache backed by the Upstash Vector database
    cache = SemanticCache(url=UPSTASH_VECTOR_REST_URL, token=UPSTASH_VECTOR_REST_TOKEN, min_proximity=0.7)

    cache.set('The most crowded city in Turkiye', 'Istanbul')
    sleep(1)  # allow the vector index to update
    result = cache.get('Which city has the most population in Turkiye?')
    print(result)  # outputs Istanbul


if __name__ == '__main__':
    main()
```
### The `min_proximity` Parameter
The `min_proximity` parameter ranges from `0` to `1` and sets the minimum relevance score required for a cache hit. The higher the value, the more similar the user input must be to the cached content to count as a hit. In practice, a score around 0.95 indicates very high similarity, while 0.75 indicates only loose similarity. At the upper extreme, a value of `1.0` accepts only an _exact_ match between the user query and the cached content.
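Conceptually, a hit boils down to comparing a similarity score against the proximity threshold. Here is a minimal sketch of that decision using cosine similarity over toy embedding vectors (illustrative only, not the library's internals):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# toy embeddings standing in for real model output
cached_vec = [0.9, 0.1, 0.3]
query_vec = [0.85, 0.15, 0.35]

score = cosine_similarity(cached_vec, query_vec)
print(score >= 0.7)    # True: a hit at a threshold of 0.7
print(score >= 0.999)  # False: a miss at a near-exact threshold
```

Raising the threshold trades recall (fewer hits) for precision (hits are more reliably on-topic).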
## Examples
The following examples demonstrate how you can utilize Semantic Cache in various use cases:
> [!NOTE]
> We add a 1-second delay after setting the data to allow time for the vector index to update. This delay is necessary to ensure that the data is available for retrieval.
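The fixed `sleep(1)` is a pragmatic choice; if you would rather not guess at indexing time, a small polling helper can wait until the entry becomes retrievable. This is a hypothetical utility, assuming `cache.get` returns `None` on a miss:

```python
import time

def wait_until_cached(cache, query, timeout=5.0, interval=0.25):
    """Poll the cache until `query` produces a hit or `timeout` elapses.

    Assumes `cache.get` returns None on a miss (hypothetical interface).
    Returns the cached value, or None if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = cache.get(query)
        if result is not None:
            return result
        time.sleep(interval)
    return None
```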
### Basic Semantic Retrieval
```python
cache.set('Capital of Turkiye', 'Ankara')
sleep(1)
result = cache.get('What is the capital of Turkiye?')
sleep(1)
print(result) # outputs Ankara
```
### Handling Synonyms
```python
cache.set('The last champion of European Football Championship', 'Italy')
sleep(1)
result = cache.get('Which country is the winner of the most recent European Football Championship?')
sleep(1)
print(result) # outputs Italy
```
### Complex Queries
```python
cache.set('The largest economy in the world', 'USA')
sleep(1)
result = cache.get('Which country has the highest GDP?')
sleep(1)
print(result) # outputs USA
```
### Different Contexts
```python
cache.set("New York population as of 2020 census", "8.8 million")
cache.set("Major economic activities in New York", "Finance, technology, and tourism")
sleep(1)
result1 = cache.get("How many people lived in NYC according to the last census?")
sleep(1)
result2 = cache.get("What are the key industries in New York?")
sleep(1)
print(result1) # outputs 8.8 million
print(result2) # outputs Finance, technology, and tourism
```
## Contributing
We appreciate your contributions! If you'd like to contribute to this project, please fork the repository, make changes, and submit a pull request.
## License
Distributed under the MIT License. See `LICENSE` for more information.