memoization

- Name: memoization
- Version: 0.4.0
- Home page: https://github.com/lonelyenvoy/python-memoization
- Summary: A powerful caching library for Python, with TTL support and multiple algorithm options.
- Upload time: 2021-08-01 18:48:53
- Author: lonelyenvoy
- Requires Python: >=3, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4
- License: MIT
- Keywords: memoization, memorization, remember, decorator, cache, caching, function, callable, functional, ttl, limited capacity, fast, high-performance, optimization
- Requirements: no requirements were recorded.
# python-memoization

[![Repository][repositorysvg]][repository] [![Build Status][travismaster]][travis] [![Codacy Badge][codacysvg]][codacy]
[![Coverage Status][coverallssvg]][coveralls] [![Downloads][downloadssvg]][repository]
<br>
[![PRs welcome][prsvg]][pr] [![License][licensesvg]][license] [![Supports Python][pythonsvg]][python]


A powerful caching library for Python, with TTL support and multiple algorithm options.

If you like this work, please [star](https://github.com/lonelyenvoy/python-memoization) it on GitHub.

## Why choose this library?

Perhaps you know about [```functools.lru_cache```](https://docs.python.org/3/library/functools.html#functools.lru_cache)
in Python 3, and you may be wondering why we are reinventing the wheel.

We are not: this library is built on top of ```functools```. The table below compares it with ```lru_cache```.

|Features|```functools.lru_cache```|```memoization```|
|--------|-------------------|-----------|
|Configurable max size|✔️|✔️|
|Thread safety|✔️|✔️|
|Flexible argument typing (typed & untyped)|✔️|Always typed|
|Cache statistics|✔️|✔️|
|LRU (Least Recently Used) as caching algorithm|✔️|✔️|
|LFU (Least Frequently Used) as caching algorithm|No support|✔️|
|FIFO (First In First Out) as caching algorithm|No support|✔️|
|Extensibility for new caching algorithms|No support|✔️|
|TTL (Time-To-Live) support|No support|✔️|
|Support for unhashable arguments (dict, list, etc.)|No support|✔️|
|Custom cache keys|No support|✔️|
|On-demand partial cache clearing|No support|✔️|
|Iterating through the cache|No support|✔️|
|Python version|3.2+|3.4+|

```memoization``` solves some drawbacks of ```functools.lru_cache```:

1. ```lru_cache``` does not support __unhashable types__, which means function arguments cannot contain dict or list.

```python
>>> from functools import lru_cache
>>> @lru_cache()
... def f(x): return x
... 
>>> f([1, 2])  # unsupported
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
```

2. ```lru_cache``` is vulnerable to [__hash collision attack__](https://learncryptography.com/hash-functions/hash-collision-attack)
   and can be hacked or compromised. Using this technique, attackers can make your program __unexpectedly slow__ by
   feeding the cached function with certain cleverly designed inputs. However, in ```memoization```, caching is always
   typed, which means ```f(3)``` and ```f(3.0)``` will be treated as different calls and cached separately. Also,
   you can build your own cache key with a unique hashing strategy. These measures __prevent the attack__ from
   happening (or at least make it a lot harder).

```python
>>> hash((1,))
3430019387558
>>> hash(3430019387558.0)  # two different arguments with an identical hash value
3430019387558
```
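The effect of typed keys can be illustrated in plain Python. The helper functions below are not `memoization`'s internal code, just a minimal sketch of the idea: appending each argument's type to the key separates values that compare equal but have different types.

```python
# Illustration only (not memoization's internals): why typed cache keys
# separate equal-but-distinct values such as 3 and 3.0.

def untyped_key(*args):
    # f(3) and f(3.0) collide: 3 == 3.0, and hash(3) == hash(3.0) in CPython
    return args

def typed_key(*args):
    # Appending each argument's type distinguishes f(3) from f(3.0)
    return args + tuple(type(a) for a in args)

assert untyped_key(3) == untyped_key(3.0)  # would share one cache entry
assert typed_key(3) != typed_key(3.0)      # cached separately
```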

3. Unlike `lru_cache`, `memoization` is designed to be highly extensible, which makes it easy for developers to add and integrate
__any caching algorithm__ (beyond FIFO, LRU and LFU) into this library. See the [Contributing Guidance](https://github.com/lonelyenvoy/python-memoization/blob/master/CONTRIBUTING.md) for further details.


## Installation

```bash
pip install -U memoization
```


## 1-Minute Tutorial

```python
from memoization import cached

@cached
def func(arg):
    ...  # do something slow
```

Simple enough - the results of ```func()``` are cached. 
Repetitive calls to ```func()``` with the same arguments run ```func()``` only once, enhancing performance.

>:warning:__WARNING:__ for functions with unhashable arguments, the default setting may not enable `memoization` to work properly. See [custom cache keys](https://github.com/lonelyenvoy/python-memoization#custom-cache-keys) section below for details.

## 15-Minute Tutorial

The following tutorial covers the advanced features that let you customize `memoization`.

Configurable options include `ttl`, `max_size`, `algorithm`, `thread_safe`, `order_independent` and `custom_key_maker`.

### TTL (Time-To-Live)

```python
@cached(ttl=5)  # the cache expires after 5 seconds
def expensive_db_query(user_id):
    ...
```

For impure functions, a TTL (in seconds) is the solution. This is useful when the function returns resources that are valid only for a short time, e.g. data fetched from a database.

### Limited cache capacity
 
```python
@cached(max_size=128)  # the cache holds no more than 128 items
def get_a_very_large_object(filename):
    ...
```

By default, if you don't specify ```max_size```, the cache can hold an unlimited number of items.
When the cache is full, existing items are evicted by the algorithm described below.

### Choosing your caching algorithm

```python
from memoization import cached, CachingAlgorithmFlag

@cached(max_size=128, algorithm=CachingAlgorithmFlag.LFU)  # the cache overwrites items using the LFU algorithm
def func(arg):
    ...
```

Possible values for ```algorithm``` are:

- `CachingAlgorithmFlag.LRU`: _Least Recently Used_  (default)
- `CachingAlgorithmFlag.LFU`: _Least Frequently Used_ 
- `CachingAlgorithmFlag.FIFO`: _First In First Out_ 

This option is valid only when a ```max_size``` is explicitly specified.

### Thread safe?

```python
@cached(thread_safe=False)
def func(arg):
    ...
```

```thread_safe``` is ```True``` by default. Setting it to ```False``` improves performance; only do so when the cached function is accessed from a single thread.

### Order-independent cache key

By default, the following two calls are treated as different and cached twice, so the second call misses the cache.

```python
func(a=1, b=1)
func(b=1, a=1)
```

You can avoid this behavior by passing `order_independent=True` to the decorator, at the cost of slightly slower key computation.

```python
@cached(order_independent=True)
def func(**kwargs):
    ...
```
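One way to picture what an order-independent key looks like (an illustration only, not necessarily `memoization`'s actual implementation) is to normalize the keyword arguments into an unordered collection:

```python
# Illustrative sketch: keyword order is erased by hashing kwargs as a set,
# while positional arguments keep their order.

def order_independent_key(*args, **kwargs):
    # frozenset of items ignores keyword order; args keep their order
    return (args, frozenset(kwargs.items()))

assert order_independent_key(a=1, b=1) == order_independent_key(b=1, a=1)
assert order_independent_key(1, 2) != order_independent_key(2, 1)
```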

### Custom cache keys

Before memoizing your function inputs and outputs (i.e. putting them into a cache), `memoization` needs to
build a __cache key__ from the inputs, so that the outputs can be retrieved later.

> By default, `memoization` tries to combine all your function
arguments and calculate their hash value using `hash()`. If it turns out that parts of your arguments are
unhashable, `memoization` will fall back to turning them into a string using `str()`. This behavior relies
on the assumption that the string exactly represents the internal state of the arguments, which is true for
built-in types.

However, this does not hold for all objects. __If you pass instances of non-built-in classes, you may
need to override the default key-making procedure__, because calling `str()` on these objects may not
capture their internal state correctly.

Here are some guidelines. __A valid key maker__:

- MUST be a function with the same signature as the cached function.
- MUST produce unique keys, which means two sets of different arguments always map to two different keys.
- MUST produce hashable keys, and a key is comparable with another key (`memoization` only needs to check for their equality).
- should compute keys efficiently and produce small objects as keys.

Example:

```python
def get_employee_id(employee):
    return employee.id  # returns a string or an integer

@cached(custom_key_maker=get_employee_id)
def calculate_performance(employee):
    ...
```

Note that writing a robust key maker function can be challenging in some situations. If you find it difficult,
feel free to ask for help by submitting an [issue](https://github.com/lonelyenvoy/python-memoization/issues).
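For a cached function with more than one parameter, the same rules apply: the key maker mirrors the signature and combines the arguments into one small, hashable, unique value. This is a hedged sketch with a hypothetical `Employee` stand-in class and `report_key` helper, not an API of the library itself:

```python
# Sketch of a key maker for a two-argument function, following the rules
# above: same signature, and the key is a small hashable tuple.

class Employee:  # hypothetical stand-in for a real model class
    def __init__(self, id):
        self.id = id

def report_key(employee, month):
    # employee.id and month together uniquely identify a report request
    return (employee.id, month)

# Usage would look like:
# @cached(custom_key_maker=report_key)
# def monthly_report(employee, month): ...

k1 = report_key(Employee(7), "2021-08")
k2 = report_key(Employee(7), "2021-09")
assert k1 != k2                 # different arguments -> different keys
assert hash(k1) is not None     # hashable, so it can serve as a cache key
```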


### Knowing how well the cache is behaving

```python
>>> @cached
... def f(x): return x
... 
>>> f.cache_info()
CacheInfo(hits=0, misses=0, current_size=0, max_size=None, algorithm=<CachingAlgorithmFlag.LRU: 2>, ttl=None, thread_safe=True, order_independent=False, use_custom_key=False)
```

With ```cache_info```, you can retrieve the number of ```hits``` and ```misses``` of the cache, and other information indicating the caching status.

- `hits`: the number of cache hits
- `misses`: the number of cache misses
- `current_size`: the number of items currently cached
- `max_size`: the maximum number of items that can be cached (user-specified)
- `algorithm`: caching algorithm (user-specified)
- `ttl`: Time-To-Live value (user-specified)
- `thread_safe`: whether the cache is thread safe (user-specified)
- `order_independent`: whether the cache is kwarg-order-independent (user-specified)
- `use_custom_key`: whether a custom key maker is used

### Other APIs

- Access the original undecorated function `f` by `f.__wrapped__`.
- Clear the cache by `f.cache_clear()`.
- Check whether the cache is empty by `f.cache_is_empty()`.
- Check whether the cache is full by `f.cache_is_full()`.
- Disable `SyntaxWarning` by `memoization.suppress_warnings()`.

## Advanced API References

<details>
<summary>Details</summary>

### Checking whether the cache contains something

#### cache_contains_argument(function_arguments, alive_only)

```
Return True if the cache contains a cached item with the specified function call arguments

:param function_arguments:  Can be a list, a tuple or a dict.
                            - Full arguments: use a list to represent both positional arguments and keyword
                              arguments. The list contains two elements, a tuple (positional arguments) and
                              a dict (keyword arguments). For example,
                                f(1, 2, 3, a=4, b=5, c=6)
                              can be represented by:
                                [(1, 2, 3), {'a': 4, 'b': 5, 'c': 6}]
                            - Positional arguments only: when the arguments do not include keyword arguments,
                              a tuple can be used to represent positional arguments. For example,
                                f(1, 2, 3)
                              can be represented by:
                                (1, 2, 3)
                            - Keyword arguments only: when the arguments do not include positional arguments,
                              a dict can be used to represent keyword arguments. For example,
                                f(a=4, b=5, c=6)
                              can be represented by:
                                {'a': 4, 'b': 5, 'c': 6}

:param alive_only:          Whether to check alive cache item only (default to True).

:return:                    True if the desired cached item is present, False otherwise.
```

#### cache_contains_result(return_value, alive_only)

```
Return True if the cache contains a cache item with the specified user function return value. O(n) time
complexity.

:param return_value:        A return value coming from the user function.

:param alive_only:          Whether to check alive cache item only (default to True).

:return:                    True if the desired cached item is present, False otherwise.
```

### Iterating through the cache

#### cache_arguments()

```
Get user function arguments of all alive cache elements

see also: cache_items()

Example:
   @cached
   def f(a, b, c, d):
       ...
   f(1, 2, c=3, d=4)
   for argument in f.cache_arguments():
       print(argument)  # ((1, 2), {'c': 3, 'd': 4})

:return: an iterable which iterates through a list of a tuple containing a tuple (positional arguments) and
        a dict (keyword arguments)
```

#### cache_results()

```
Get user function return values of all alive cache elements

see also: cache_items()

Example:
   @cached
   def f(a):
       return a
   f('hello')
   for result in f.cache_results():
       print(result)  # 'hello'

:return: an iterable which iterates through a list of user function result (of any type)
```

#### cache_items()

```
Get cache items, i.e. entries of all alive cache elements, in the form of (argument, result).

argument: a tuple containing a tuple (positional arguments) and a dict (keyword arguments).
result: a user function return value of any type.

see also: cache_arguments(), cache_results().

Example:
   @cached
   def f(a, b, c, d):
       return 'the answer is ' + str(a)
   f(1, 2, c=3, d=4)
   for argument, result in f.cache_items():
       print(argument)  # ((1, 2), {'c': 3, 'd': 4})
       print(result)    # 'the answer is 1'

:return: an iterable which iterates through a list of (argument, result) entries
```

#### cache_for_each()

```
Perform the given action for each cache element in an order determined by the algorithm until all
elements have been processed or the action throws an error

:param consumer:           an action function to process the cache elements. Must have 3 arguments:
                             def consumer(user_function_arguments, user_function_result, is_alive): ...
                           user_function_arguments is a tuple holding arguments in the form of (args, kwargs).
                             args is a tuple holding positional arguments.
                             kwargs is a dict holding keyword arguments.
                             for example, for a function: foo(a, b, c, d), calling it by: foo(1, 2, c=3, d=4)
                             user_function_arguments == ((1, 2), {'c': 3, 'd': 4})
                           user_function_result is a return value coming from the user function.
                           is_alive is a boolean value indicating whether the cache is still alive
                           (if a TTL is given).
```

### Removing something from the cache

#### cache_clear()

```
Clear the cache and its statistics information
```

#### cache_remove_if(predicate)

```
Remove all cache elements that satisfy the given predicate

:param predicate:           a predicate function to judge whether the cache elements should be removed. Must
                            have 3 arguments, and returns True or False:
                              def consumer(user_function_arguments, user_function_result, is_alive): ...
                            user_function_arguments is a tuple holding arguments in the form of (args, kwargs).
                              args is a tuple holding positional arguments.
                              kwargs is a dict holding keyword arguments.
                              for example, for a function: foo(a, b, c, d), calling it by: foo(1, 2, c=3, d=4)
                              user_function_arguments == ((1, 2), {'c': 3, 'd': 4})
                            user_function_result is a return value coming from the user function.
                            is_alive is a boolean value indicating whether the cache is still alive
                            (if a TTL is given).

:return:                    True if at least one element is removed, False otherwise.
```

</details>

## Q&A

1. **Q: There is duplicated code in `memoization`, and much of it could be eliminated with another level of
abstraction (e.g. classes and multiple inheritance). Why not refactor?**

   A: We would like to keep the code at a proper level of abstraction, but these abstractions make it run slower.
As this is a caching library focused on speed, we have to give up some elegance for better performance. Refactoring
is future work.


2. **Q: I submitted an issue and haven't received a reply for a long time. Can anyone help me?**

   A: Sorry! We work on this project voluntarily rather than full-time, so you might experience some delay.
We appreciate your patience.


## Contributing

This project welcomes contributions from anyone.
- [Read Contributing Guidance](https://github.com/lonelyenvoy/python-memoization/blob/master/CONTRIBUTING.md) first.
- [Submit bugs](https://github.com/lonelyenvoy/python-memoization/issues) and help us verify fixes.
- [Submit pull requests](https://github.com/lonelyenvoy/python-memoization/pulls) for bug fixes and features and discuss existing proposals. Please make sure that your PR passes the tests in ```test.py```.
- [See contributors](https://github.com/lonelyenvoy/python-memoization/blob/master/CONTRIBUTORS.md) of this project.


## License

[The MIT License](https://github.com/lonelyenvoy/python-memoization/blob/master/LICENSE)


[pythonsvg]: https://img.shields.io/pypi/pyversions/memoization.svg
[python]: https://www.python.org

[travismaster]: https://travis-ci.com/lonelyenvoy/python-memoization.svg?branch=master
[travis]: https://travis-ci.com/lonelyenvoy/python-memoization

[coverallssvg]: https://coveralls.io/repos/github/lonelyenvoy/python-memoization/badge.svg?branch=master
[coveralls]: https://coveralls.io/github/lonelyenvoy/python-memoization?branch=master

[repositorysvg]: https://img.shields.io/pypi/v/memoization
[repository]: https://pypi.org/project/memoization

[downloadssvg]: https://img.shields.io/pypi/dm/memoization

[prsvg]: https://img.shields.io/badge/pull_requests-welcome-blue.svg
[pr]: https://github.com/lonelyenvoy/python-memoization#contributing

[licensesvg]: https://img.shields.io/badge/license-MIT-blue.svg
[license]: https://github.com/lonelyenvoy/python-memoization/blob/master/LICENSE

[codacysvg]: https://api.codacy.com/project/badge/Grade/52c68fb9de6b4b149e77e8e173616db6
[codacy]: https://www.codacy.com/manual/petrinchor/python-memoization?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=lonelyenvoy/python-memoization&amp;utm_campaign=Badge_Grade



            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/lonelyenvoy/python-memoization",
    "name": "memoization",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4",
    "maintainer_email": "",
    "keywords": "memoization memorization remember decorator cache caching function callable functional ttl limited capacity fast high-performance optimization",
    "author": "lonelyenvoy",
    "author_email": "petrinchor@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/af/53/e948a943e16423a87ced16e34ea7583c300e161a4c3e85d47d77d83830bf/memoization-0.4.0.tar.gz",
    "platform": "",
    "description": "# python-memoization\n\n[![Repository][repositorysvg]][repository] [![Build Status][travismaster]][travis] [![Codacy Badge][codacysvg]][codacy]\n[![Coverage Status][coverallssvg]][coveralls] [![Downloads][downloadssvg]][repository]\n<br>\n[![PRs welcome][prsvg]][pr] [![License][licensesvg]][license] [![Supports Python][pythonsvg]][python]\n\n\nA powerful caching library for Python, with TTL support and multiple algorithm options.\n\nIf you like this work, please [star](https://github.com/lonelyenvoy/python-memoization) it on GitHub.\n\n## Why choose this library?\n\nPerhaps you know about [```functools.lru_cache```](https://docs.python.org/3/library/functools.html#functools.lru_cache)\nin Python 3, and you may be wondering why we are reinventing the wheel.\n\nWell, actually not. This lib is based on ```functools```. Please find below the comparison with ```lru_cache```.\n\n|Features|```functools.lru_cache```|```memoization```|\n|--------|-------------------|-----------|\n|Configurable max size|\u2714\ufe0f|\u2714\ufe0f|\n|Thread safety|\u2714\ufe0f|\u2714\ufe0f|\n|Flexible argument typing (typed & untyped)|\u2714\ufe0f|Always typed|\n|Cache statistics|\u2714\ufe0f|\u2714\ufe0f|\n|LRU (Least Recently Used) as caching algorithm|\u2714\ufe0f|\u2714\ufe0f|\n|LFU (Least Frequently Used) as caching algorithm|No support|\u2714\ufe0f|\n|FIFO (First In First Out) as caching algorithm|No support|\u2714\ufe0f|\n|Extensibility for new caching algorithms|No support|\u2714\ufe0f|\n|TTL (Time-To-Live) support|No support|\u2714\ufe0f|\n|Support for unhashable arguments (dict, list, etc.)|No support|\u2714\ufe0f|\n|Custom cache keys|No support|\u2714\ufe0f|\n|On-demand partial cache clearing|No support|\u2714\ufe0f|\n|Iterating through the cache|No support|\u2714\ufe0f|\n|Python version|3.2+|3.4+|\n\n```memoization``` solves some drawbacks of ```functools.lru_cache```:\n\n1. 
```lru_cache``` does not support __unhashable types__, which means function arguments cannot contain dict or list.\n\n```python\n>>> from functools import lru_cache\n>>> @lru_cache()\n... def f(x): return x\n... \n>>> f([1, 2])  # unsupported\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\nTypeError: unhashable type: 'list'\n```\n\n2. ```lru_cache``` is vulnerable to [__hash collision attack__](https://learncryptography.com/hash-functions/hash-collision-attack)\n   and can be hacked or compromised. Using this technique, attackers can make your program __unexpectedly slow__ by\n   feeding the cached function with certain cleverly designed inputs. However, in ```memoization```, caching is always\n   typed, which means ```f(3)``` and ```f(3.0)``` will be treated as different calls and cached separately. Also,\n   you can build your own cache key with a unique hashing strategy. These measures __prevents the attack__ from\n   happening (or at least makes it a lot harder).\n\n```python\n>>> hash((1,))\n3430019387558\n>>> hash(3430019387558.0)  # two different arguments with an identical hash value\n3430019387558\n```\n\n3. Unlike `lru_cache`, `memoization` is designed to be highly extensible, which make it easy for developers to add and integrate\n__any caching algorithms__ (beyond FIFO, LRU and LFU) into this library. See [Contributing Guidance](https://github.com/lonelyenvoy/python-memoization/blob/master/CONTRIBUTING.md) for further detail.\n\n\n## Installation\n\n```bash\npip install -U memoization\n```\n\n\n## 1-Minute Tutorial\n\n```python\nfrom memoization import cached\n\n@cached\ndef func(arg):\n    ...  # do something slow\n```\n\nSimple enough - the results of ```func()``` are cached. \nRepetitive calls to ```func()``` with the same arguments run ```func()``` only once, enhancing performance.\n\n>:warning:__WARNING:__ for functions with unhashable arguments, the default setting may not enable `memoization` to work properly. 
See [custom cache keys](https://github.com/lonelyenvoy/python-memoization#custom-cache-keys) section below for details.\n\n## 15-Minute Tutorial\n\nYou will learn about the advanced features in the following tutorial, which enable you to customize `memoization` .\n\nConfigurable options include `ttl`, `max_size`, `algorithm`, `thread_safe`, `order_independent` and `custom_key_maker`.\n\n### TTL (Time-To-Live)\n\n```python\n@cached(ttl=5)  # the cache expires after 5 seconds\ndef expensive_db_query(user_id):\n    ...\n```\n\nFor impure functions, TTL (in second) will be a solution. This will be useful when the function returns resources that is valid only for a short time, e.g. fetching something from databases.\n\n### Limited cache capacity\n \n```python\n@cached(max_size=128)  # the cache holds no more than 128 items\ndef get_a_very_large_object(filename):\n    ...\n```\n\nBy default, if you don't specify ```max_size```, the cache can hold unlimited number of items.\nWhen the cache is fully occupied, the former data will be overwritten by a certain algorithm described below.\n\n### Choosing your caching algorithm\n\n```python\nfrom memoization import cached, CachingAlgorithmFlag\n\n@cached(max_size=128, algorithm=CachingAlgorithmFlag.LFU)  # the cache overwrites items using the LFU algorithm\ndef func(arg):\n    ...\n```\n\nPossible values for ```algorithm``` are:\n\n- `CachingAlgorithmFlag.LRU`: _Least Recently Used_  (default)\n- `CachingAlgorithmFlag.LFU`: _Least Frequently Used_ \n- `CachingAlgorithmFlag.FIFO`: _First In First Out_ \n\nThis option is valid only when a ```max_size``` is explicitly specified.\n\n### Thread safe?\n\n```python\n@cached(thread_safe=False)\ndef func(arg):\n    ...\n```\n\n```thread_safe``` is ```True``` by default. 
Setting it to ```False``` enhances performance.\n\n### Order-independent cache key\n\nBy default, the following function calls will be treated differently and cached twice, which means the cache misses at the second call.\n\n```python\nfunc(a=1, b=1)\nfunc(b=1, a=1)\n```\n\nYou can avoid this behavior by passing an `order_independent` argument to the decorator, although it will slow down the performance a little bit. \n\n```python\n@cached(order_independent=True)\ndef func(**kwargs):\n    ...\n```\n\n### Custom cache keys\n\nPrior to memorize your function inputs and outputs (i.e. putting them into a cache), `memoization` needs to\nbuild a __cache key__ using the inputs, so that the outputs can be retrieved later.\n\n> By default, `memoization` tries to combine all your function\narguments and calculate its hash value using `hash()`. If it turns out that parts of your arguments are\nunhashable, `memoization` will fall back to turning them into a string using `str()`. This behavior relies\non the assumption that the string exactly represents the internal state of the arguments, which is true for\nbuilt-in types.\n\nHowever, this is not true for all objects. __If you pass objects which are\ninstances of non-built-in classes, sometimes you will need to override the default key-making procedure__,\nbecause the `str()` function on these objects may not hold the correct information about their states.\n\nHere are some suggestions. 
__Implementations of a valid key maker__:\n\n- MUST be a function with the same signature as the cached function.\n- MUST produce unique keys, which means two sets of different arguments always map to two different keys.\n- MUST produce hashable keys, and a key is comparable with another key (`memoization` only needs to check for their equality).\n- should compute keys efficiently and produce small objects as keys.\n\nExample:\n\n```python\ndef get_employee_id(employee):\n    return employee.id  # returns a string or a integer\n\n@cached(custom_key_maker=get_employee_id)\ndef calculate_performance(employee):\n    ...\n```\n\nNote that writing a robust key maker function can be challenging in some situations. If you find it difficult,\nfeel free to ask for help by submitting an [issue](https://github.com/lonelyenvoy/python-memoization/issues).\n\n\n### Knowing how well the cache is behaving\n\n```python\n>>> @cached\n... def f(x): return x\n... \n>>> f.cache_info()\nCacheInfo(hits=0, misses=0, current_size=0, max_size=None, algorithm=<CachingAlgorithmFlag.LRU: 2>, ttl=None, thread_safe=True, order_independent=False, use_custom_key=False)\n```\n\nWith ```cache_info```, you can retrieve the number of ```hits``` and ```misses``` of the cache, and other information indicating the caching status.\n\n- `hits`: the number of cache hits\n- `misses`: the number of cache misses\n- `current_size`: the number of items that were cached\n- `max_size`: the maximum number of items that can be cached (user-specified)\n- `algorithm`: caching algorithm (user-specified)\n- `ttl`: Time-To-Live value (user-specified)\n- `thread_safe`: whether the cache is thread safe (user-specified)\n- `order_independent`: whether the cache is kwarg-order-independent (user-specified)\n- `use_custom_key`: whether a custom key maker is used\n\n### Other APIs\n\n- Access the original undecorated function `f` by `f.__wrapped__`.\n- Clear the cache by `f.cache_clear()`.\n- Check whether the cache is empty by 
`f.cache_is_empty()`.\n- Check whether the cache is full by `f.cache_is_full()`.\n- Disable `SyntaxWarning` by `memoization.suppress_warnings()`.\n\n## Advanced API References\n\n<details>\n<summary>Details</summary>\n\n### Checking whether the cache contains something\n\n#### cache_contains_argument(function_arguments, alive_only)\n\n```\nReturn True if the cache contains a cached item with the specified function call arguments\n\n:param function_arguments:  Can be a list, a tuple or a dict.\n                            - Full arguments: use a list to represent both positional arguments and keyword\n                              arguments. The list contains two elements, a tuple (positional arguments) and\n                              a dict (keyword arguments). For example,\n                                f(1, 2, 3, a=4, b=5, c=6)\n                              can be represented by:\n                                [(1, 2, 3), {'a': 4, 'b': 5, 'c': 6}]\n                            - Positional arguments only: when the arguments does not include keyword arguments,\n                              a tuple can be used to represent positional arguments. For example,\n                                f(1, 2, 3)\n                              can be represented by:\n                                (1, 2, 3)\n                            - Keyword arguments only: when the arguments does not include positional arguments,\n                              a dict can be used to represent keyword arguments. 
For example,\n                                f(a=4, b=5, c=6)\n                              can be represented by:\n                                {'a': 4, 'b': 5, 'c': 6}\n\n:param alive_only:          Whether to check alive cache item only (default to True).\n\n:return:                    True if the desired cached item is present, False otherwise.\n```\n\n#### cache_contains_result(return_value, alive_only)\n\n```\nReturn True if the cache contains a cache item with the specified user function return value. O(n) time\ncomplexity.\n\n:param return_value:        A return value coming from the user function.\n\n:param alive_only:          Whether to check alive cache item only (default to True).\n\n:return:                    True if the desired cached item is present, False otherwise.\n```\n\n### Iterating through the cache\n\n#### cache_arguments()\n\n```\nGet user function arguments of all alive cache elements\n\nsee also: cache_items()\n\nExample:\n   @cached\n   def f(a, b, c, d):\n       ...\n   f(1, 2, c=3, d=4)\n   for argument in f.cache_arguments():\n       print(argument)  # ((1, 2), {'c': 3, 'd': 4})\n\n:return: an iterable which iterates through a list of a tuple containing a tuple (positional arguments) and\n        a dict (keyword arguments)\n```\n\n#### cache_results()\n\n```\nGet user function return values of all alive cache elements\n\nsee also: cache_items()\n\nExample:\n   @cached\n   def f(a):\n       return a\n   f('hello')\n   for result in f.cache_results():\n       print(result)  # 'hello'\n\n:return: an iterable which iterates through a list of user function result (of any type)\n```\n\n#### cache_items()\n\n```\nGet cache items, i.e. 
entries of all alive cache elements, in the form of (argument, result).\n\nargument: a tuple containing a tuple (positional arguments) and a dict (keyword arguments).\nresult: a user function return value of any type.\n\nsee also: cache_arguments(), cache_results().\n\nExample:\n   @cached\n   def f(a, b, c, d):\n       return 'the answer is ' + str(a)\n   f(1, 2, c=3, d=4)\n   for argument, result in f.cache_items():\n       print(argument)  # ((1, 2), {'c': 3, 'd': 4})\n       print(result)    # 'the answer is 1'\n\n:return: an iterable which iterates through a list of (argument, result) entries\n```\n\n#### cache_for_each()\n\n```\nPerform the given action for each cache element in an order determined by the algorithm until all\nelements have been processed or the action throws an error\n\n:param consumer:           an action function to process the cache elements. Must have 3 arguments:\n                             def consumer(user_function_arguments, user_function_result, is_alive): ...\n                           user_function_arguments is a tuple holding arguments in the form of (args, kwargs).\n                             args is a tuple holding positional arguments.\n                             kwargs is a dict holding keyword arguments.\n                             for example, for a function: foo(a, b, c, d), calling it by: foo(1, 2, c=3, d=4)\n                             user_function_arguments == ((1, 2), {'c': 3, 'd': 4})\n                           user_function_result is a return value coming from the user function.\n                           is_alive is a boolean value indicating whether the cache is still alive\n                           (if a TTL is given).\n```\n\n### Removing something from the cache\n\n#### cache_clear()\n\n```\nClear the cache and its statistics information\n```\n\n#### cache_remove_if(predicate)\n\n```\nRemove all cache elements that satisfy the given predicate\n\n:param predicate:           a predicate function to 
judge whether the cache elements should be removed. Must\n                            have 3 arguments and return True or False:\n                              def predicate(user_function_arguments, user_function_result, is_alive): ...\n                            user_function_arguments is a tuple holding arguments in the form of (args, kwargs).\n                              args is a tuple holding positional arguments.\n                              kwargs is a dict holding keyword arguments.\n                              for example, for a function: foo(a, b, c, d), calling it by: foo(1, 2, c=3, d=4)\n                              user_function_arguments == ((1, 2), {'c': 3, 'd': 4})\n                            user_function_result is a return value coming from the user function.\n                            is_alive is a boolean value indicating whether the cached item is still alive\n                            (if a TTL is given).\n\n:return:                    True if at least one element was removed, False otherwise.\n```\n\n</details>\n\n## Q&A\n\n1. **Q: There is duplicated code in `memoization`, and much of it could be eliminated by using another level of\nabstraction (e.g. classes and multiple inheritance). Why not refactor?**\n\n   A: We would like to keep the code at a proper level of abstraction. However, these abstractions make it run slower.\nAs this is a caching library focusing on speed, we have to give up some elegance for better performance. Refactoring\nis planned as future work.\n\n\n2. **Q: I submitted an issue and have not received a reply for a long time. Can anyone help?**\n\n   A: Sorry! 
We do not work on this project full-time; we contribute voluntarily, so you may experience some delay.\nWe appreciate your patience.\n\n\n## Contributing\n\nThis project welcomes contributions from anyone.\n- [Read Contributing Guidance](https://github.com/lonelyenvoy/python-memoization/blob/master/CONTRIBUTING.md) first.\n- [Submit bugs](https://github.com/lonelyenvoy/python-memoization/issues) and help us verify fixes.\n- [Submit pull requests](https://github.com/lonelyenvoy/python-memoization/pulls) for bug fixes and features, and discuss existing proposals. Please make sure that your PR passes the tests in ```test.py```.\n- [See contributors](https://github.com/lonelyenvoy/python-memoization/blob/master/CONTRIBUTORS.md) of this project.\n\n\n## License\n\n[The MIT License](https://github.com/lonelyenvoy/python-memoization/blob/master/LICENSE)\n\n\n[pythonsvg]: https://img.shields.io/pypi/pyversions/memoization.svg\n[python]: https://www.python.org\n\n[travismaster]: https://travis-ci.com/lonelyenvoy/python-memoization.svg?branch=master\n[travis]: https://travis-ci.com/lonelyenvoy/python-memoization\n\n[coverallssvg]: https://coveralls.io/repos/github/lonelyenvoy/python-memoization/badge.svg?branch=master\n[coveralls]: https://coveralls.io/github/lonelyenvoy/python-memoization?branch=master\n\n[repositorysvg]: https://img.shields.io/pypi/v/memoization\n[repository]: https://pypi.org/project/memoization\n\n[downloadssvg]: https://img.shields.io/pypi/dm/memoization\n\n[prsvg]: https://img.shields.io/badge/pull_requests-welcome-blue.svg\n[pr]: https://github.com/lonelyenvoy/python-memoization#contributing\n\n[licensesvg]: https://img.shields.io/badge/license-MIT-blue.svg\n[license]: https://github.com/lonelyenvoy/python-memoization/blob/master/LICENSE\n\n[codacysvg]: https://api.codacy.com/project/badge/Grade/52c68fb9de6b4b149e77e8e173616db6\n[codacy]: 
https://www.codacy.com/manual/petrinchor/python-memoization?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=lonelyenvoy/python-memoization&amp;utm_campaign=Badge_Grade\n\n\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "A powerful caching library for Python, with TTL support and multiple algorithm options. (https://github.com/lonelyenvoy/python-memoization)",
    "version": "0.4.0",
    "split_keywords": [
        "memoization",
        "memorization",
        "remember",
        "decorator",
        "cache",
        "caching",
        "function",
        "callable",
        "functional",
        "ttl",
        "limited",
        "capacity",
        "fast",
        "high-performance",
        "optimization"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "md5": "1238251cd1c439afee630fac9e5830bd",
                "sha256": "fde5e7cd060ef45b135e0310cfec17b2029dc472ccb5bbbbb42a503d4538a135"
            },
            "downloads": -1,
            "filename": "memoization-0.4.0.tar.gz",
            "has_sig": false,
            "md5_digest": "1238251cd1c439afee630fac9e5830bd",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4",
            "size": 41209,
            "upload_time": "2021-08-01T18:48:53",
            "upload_time_iso_8601": "2021-08-01T18:48:53.002284Z",
            "url": "https://files.pythonhosted.org/packages/af/53/e948a943e16423a87ced16e34ea7583c300e161a4c3e85d47d77d83830bf/memoization-0.4.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2021-08-01 18:48:53",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "lonelyenvoy",
    "github_project": "python-memoization",
    "travis_ci": true,
    "coveralls": true,
    "github_actions": false,
    "lcname": "memoization"
}
        