# fastapi-cache
[![pypi](https://img.shields.io/pypi/v/fastapi-cache-ava.svg?style=flat)](https://pypi.org/p/fastapi-cache-ava)
[![license](https://img.shields.io/github/license/long2ice/fastapi-cache)](https://github.com/long2ice/fastapi-cache/blob/main/LICENSE)
[![CI/CD](https://github.com/long2ice/fastapi-cache/actions/workflows/ci-cd.yml/badge.svg)](https://github.com/long2ice/fastapi-cache/actions/workflows/ci-cd.yml)
## Introduction
`fastapi-cache` is a tool to cache FastAPI endpoint and function results, with
backends supporting Redis, Memcached, and Amazon DynamoDB.
## Features
- Supports `redis`, `memcache`, `dynamodb`, and `in-memory` backends.
- Easy integration with [FastAPI](https://fastapi.tiangolo.com/).
- Support for HTTP cache headers like `ETag` and `Cache-Control`, as well as conditional `If-None-Match` requests.
## Requirements
- FastAPI
- `redis` when using `RedisBackend`.
- `memcache` when using `MemcacheBackend`.
- `aiobotocore` when using `DynamoBackend`.
## Install
```shell
> pip install fastapi-cache-ava
```
or
```shell
> pip install "fastapi-cache-ava[redis]"
```
or
```shell
> pip install "fastapi-cache-ava[memcache]"
```
or
```shell
> pip install "fastapi-cache-ava[dynamodb]"
```
## Usage
### Quick Start
```python
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from fastapi import FastAPI
from starlette.requests import Request
from starlette.responses import Response
from fastapi_cache import FastAPICache
from fastapi_cache.backends.redis import RedisBackend
from fastapi_cache.decorator import cache
from redis import asyncio as aioredis
@asynccontextmanager
async def lifespan(_: FastAPI) -> AsyncIterator[None]:
    # Initialise the cache once, before the app starts serving requests.
    redis = aioredis.from_url("redis://localhost")
    FastAPICache.init(RedisBackend(redis), prefix="fastapi-cache")
    yield


app = FastAPI(lifespan=lifespan)


@cache()
async def get_cache():
    return 1


@app.get("/")
@cache(expire=60)
async def index():
    return dict(hello="world")
```
### Initialization
You must call `FastAPICache.init` during FastAPI startup (for example from a lifespan handler, as in the Quick Start); this is where you set the global configuration.
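Besides the backend and key prefix, global defaults can be set here and overridden per endpoint. The sketch below is hedged: the `expire` and `coder` keywords to `FastAPICache.init` follow the upstream fastapi-cache API and are assumptions for this package.

```python
# A hedged sketch of global defaults; ``expire`` and ``coder`` are assumed to be
# accepted by FastAPICache.init, mirroring the upstream fastapi-cache API.
from redis import asyncio as aioredis

from fastapi_cache import FastAPICache
from fastapi_cache.backends.redis import RedisBackend
from fastapi_cache.coder import JsonCoder

redis = aioredis.from_url("redis://localhost")
FastAPICache.init(
    RedisBackend(redis),
    prefix="fastapi-cache",  # prepended to every cache key
    expire=60,               # default TTL used when @cache() does not set one
    coder=JsonCoder,         # default coder, overridable per endpoint
)
```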
### Use the `@cache` decorator
To cache a FastAPI response transparently, use the `@cache` decorator between the route decorator and the view function.
Parameter | Type | Default | Description
------------ | ---- | --------- | --------
`expire` | `int` | | caching time in seconds
`namespace` | `str` | `""` | namespace under which cache items are stored
`coder` | `Coder` | `JsonCoder` | which coder to use, e.g. `JsonCoder`
`key_builder` | `KeyBuilder` callable | `default_key_builder` | which key builder to use
`injected_dependency_namespace` | `str` | `__fastapi_cache` | prefix for injected dependency keywords
`cache_status_header` | `str` | `X-FastAPI-Cache` | name of the response header indicating whether the request was served from cache; either `HIT` or `MISS`
You can also use the `@cache` decorator on regular functions to cache their result.
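For instance, a plain async helper (not a route) can be cached the same way; the function, its argument, and the `users` namespace below are purely illustrative:

```python
from fastapi_cache.decorator import cache


@cache(namespace="users", expire=30)
async def fetch_user_profile(user_id: int) -> dict:
    # Pretend this is an expensive lookup; the default key builder derives the
    # cache key from the function's module, name and arguments.
    return {"id": user_id, "name": "example"}
```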
### Injected Request and Response dependencies
The `cache` decorator injects dependencies for the `Request` and `Response`
objects, so that it can add cache-control headers to the outgoing response and
return a 304 Not Modified response when the incoming request has a matching
`If-None-Match` header. This only happens if the decorated endpoint doesn't
already list these dependencies.
The keyword arguments for these extra dependencies are named
`__fastapi_cache_request` and `__fastapi_cache_response` to minimize collisions.
Use the `injected_dependency_namespace` argument to `@cache` to change the
prefix used if those names would clash anyway.
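As a hedged illustration (the route, handler and header below are made up, and `app`/`cache` are assumed from the Quick Start), an endpoint that already declares these parameters keeps them, and the decorator works with the listed dependencies instead of injecting its own:

```python
from starlette.requests import Request
from starlette.responses import Response


@app.get("/profile")
@cache(expire=60)
async def profile(request: Request, response: Response) -> dict:
    # Because Request and Response are already listed, no
    # __fastapi_cache_request / __fastapi_cache_response kwargs are injected.
    response.headers["X-Example"] = "1"
    return {"user_agent": request.headers.get("user-agent", "")}
```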
### Supported data types
When using the (default) `JsonCoder`, the cache can store any data type that FastAPI can convert to JSON, including Pydantic models and dataclasses,
_provided_ that your endpoint has a correct return type annotation. An
annotation is not needed if the return type is a standard JSON-supported Python
type such as a dictionary or a list.
E.g. for an endpoint that returns a Pydantic model named `SomeModel`, the return annotation is used to ensure that the cached result is converted back to the correct class:
```python
from .models import SomeModel, create_some_model
@app.get("/foo")
@cache(expire=60)
async def foo() -> SomeModel:
    return create_some_model()
```
It is not sufficient to configure a response model in the route decorator; the cache needs to know what the function itself returns. If no return type annotation is given, the primitive JSON type is returned instead.
For broader type support, use the `fastapi_cache.coder.PickleCoder` or implement a custom coder (see below).
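For example, a minimal sketch using `PickleCoder` for a value that JSON can't round-trip (the route and handler are illustrative, with `app` and `cache` as in the Quick Start):

```python
from fastapi_cache.coder import PickleCoder


@app.get("/tags")
@cache(expire=60, coder=PickleCoder)
async def tags() -> set[str]:
    # A set is not JSON-serialisable as-is, but pickle restores it unchanged
    # from the cache; FastAPI still renders it as a list in the HTTP response.
    return {"alpha", "beta"}
```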
### Custom coder
The default coder is `JsonCoder`. To write a custom coder that encodes and decodes cached results, subclass `fastapi_cache.coder.Coder`:
```python
from typing import Any
import orjson
from fastapi.encoders import jsonable_encoder
from fastapi_cache import Coder
class ORJsonCoder(Coder):
    @classmethod
    def encode(cls, value: Any) -> bytes:
        # Serialise with orjson, falling back to FastAPI's jsonable_encoder for
        # types such as Pydantic models and datetimes.
        return orjson.dumps(
            value,
            default=jsonable_encoder,
            option=orjson.OPT_NON_STR_KEYS | orjson.OPT_SERIALIZE_NUMPY,
        )

    @classmethod
    def decode(cls, value: bytes) -> Any:
        return orjson.loads(value)


@app.get("/")
@cache(expire=60, coder=ORJsonCoder)
async def index():
    return dict(hello="world")
```
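If your installed version's `FastAPICache.init()` accepts a `coder` argument (recent upstream fastapi-cache releases do), you can also set a custom coder globally there instead of repeating it on every endpoint.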
### Custom key builder
By default the built-in `default_key_builder` is used; this creates a cache key
from the function's module and name plus the positional and keyword arguments
converted to their `repr()` representations, encoded as an MD5 hash.
You can provide your own by passing a key builder to `@cache()`, or to
`FastAPICache.init()` to apply it globally.
For example, if you wanted to use the request method, URL and query string as a cache key instead of the function identifier you could use:
```python
def request_key_builder(
    func,
    namespace: str = "",
    *,
    request: Request = None,
    response: Response = None,
    **kwargs,
):
    # Build the key from the HTTP method, path and sorted query string so that
    # identical requests share a cache entry regardless of the handler.
    return ":".join([
        namespace,
        request.method.lower(),
        request.url.path,
        repr(sorted(request.query_params.items())),
    ])


@app.get("/")
@cache(expire=60, key_builder=request_key_builder)
async def index():
    return dict(hello="world")
```
## Backend notes
### InMemoryBackend
The `InMemoryBackend` stores cache data in memory and only removes expired
entries when they are accessed. This means that if a cached function is never
called again, its stale data is not removed automatically.
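A minimal sketch of initialising this backend, e.g. for tests or single-process local development; the import path follows the upstream fastapi-cache layout and is an assumption for this package:

```python
from fastapi_cache import FastAPICache
from fastapi_cache.backends.inmemory import InMemoryBackend

# Entries expire lazily, on access, and live only in the current process.
FastAPICache.init(InMemoryBackend(), prefix="fastapi-cache")
```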
### RedisBackend
When using the Redis backend, make sure you pass in a Redis client that does [_not_ decode responses][redis-decode] (`decode_responses` **must** be `False`, which is the default). Cached data is stored as `bytes` (binary); decoding it in the Redis client would break caching.
[redis-decode]: https://redis-py.readthedocs.io/en/latest/examples/connection_examples.html#by-default-Redis-return-binary-responses,-to-decode-them-use-decode_responses=True
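For instance, constructing the client explicitly (this mirrors the Quick Start; `decode_responses=False` is already redis-py's default and is spelled out only for emphasis):

```python
from redis import asyncio as aioredis

# Leave decode_responses at False so the backend receives raw bytes.
redis = aioredis.from_url("redis://localhost", decode_responses=False)
```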
## Tests and coverage
```shell
coverage run -m pytest
coverage html
xdg-open htmlcov/index.html
```
## License
This project is licensed under the [Apache-2.0](https://github.com/long2ice/fastapi-cache/blob/master/LICENSE) License.