django-cache-memoize

Name: django-cache-memoize
Version: 0.2.0
Summary: Django utility for a memoization decorator that uses the Django cache framework.
Home page: https://github.com/peterbe/django-cache-memoize
Author: Peter Bengtsson
Requires Python: >=3.8
License: MPL-2.0
Keywords: django, memoize, cache, decorator
Upload time: 2023-09-14 15:17:00
====================
django-cache-memoize
====================

* License: MPL 2.0

.. image:: https://github.com/peterbe/django-cache-memoize/workflows/Python/badge.svg
   :alt: Build Status
   :target: https://github.com/peterbe/django-cache-memoize/actions?query=workflow%3APython

.. image:: https://readthedocs.org/projects/django-cache-memoize/badge/?version=latest
   :alt: Documentation Status
   :target: https://django-cache-memoize.readthedocs.io/en/latest/?badge=latest

.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
  :target: https://github.com/ambv/black

Django utility for a memoization decorator that uses the Django cache framework.

For versions of Python and Django, check out `the tox.ini file`_.

.. _`the tox.ini file`: https://github.com/peterbe/django-cache-memoize/blob/master/tox.ini

Key Features
------------

* Memoized function calls can be invalidated.

* Works with non-trivial arguments and keyword arguments.

* Insight into cache hits and cache misses with a callback.

* Ability to use as a "guard" for repeated execution when storing the function
  result isn't important or needed.


Installation
============

.. code-block:: shell

    pip install django-cache-memoize

Usage
=====

.. code-block:: python

    import random

    from django import http

    # Import the decorator
    from cache_memoize import cache_memoize

    # Attach decorator to cacheable function with a timeout of 100 seconds.
    @cache_memoize(100)
    def expensive_function(start, end):
        return random.randint(start, end)

    # Just a regular Django view
    def myview(request):
        # If you run this view repeatedly you'll get the same
        # output every time for 100 seconds.
        return http.HttpResponse(str(expensive_function(0, 100)))


The caching uses `Django's default cache framework`_. Ultimately, it calls
``django.core.cache.cache.set(cache_key, function_out, expiration)``.
So if your function returns something that can't be pickled and cached,
it won't work. As the Django documentation puts it:

    For cases like this, Django exposes a simple, low-level cache API. You can
    use this API to store objects in the cache with any level of granularity
    you like. You can cache any Python object that can be pickled safely:
    strings, dictionaries, lists of model objects, and so forth. (Most
    common Python objects can be pickled; refer to the Python documentation
    for more information about pickling.)

See `the low-level cache API documentation`_.


.. _`Django's default cache framework`: https://docs.djangoproject.com/en/1.11/topics/cache/
.. _`the low-level cache API documentation`: https://docs.djangoproject.com/en/1.11/topics/cache/#the-low-level-cache-api
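
For illustration only, here is a minimal sketch (not the library's actual
implementation) of roughly what such a decorator does with the low-level
cache API: build a cache key from the function name and stringified
arguments, then ``get``/``set`` the result. The real decorator additionally
handles a miss sentinel, callbacks, and invalidation, which this sketch
omits.

.. code-block:: python

    import hashlib
    from functools import wraps

    from django.core.cache import cache
    from django.utils.encoding import force_bytes


    def simple_cache_memoize(timeout):
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                # Build a cache key from the function name and its arguments.
                hashed = hashlib.md5(
                    force_bytes(f"{func.__name__}{args}{kwargs}")
                ).hexdigest()
                cache_key = f"simple_cache_memoize:{hashed}"
                result = cache.get(cache_key)
                if result is None:
                    # Cache miss (or the function returned None, a
                    # simplification of this sketch): call and store.
                    result = func(*args, **kwargs)
                    cache.set(cache_key, result, timeout)
                return result
            return wrapper
        return decorator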


Example Usage
=============

This blog post: `How to use django-cache-memoize`_

It covers the same ground as the Usage example above but in a little more
detail. In particular, it demonstrates the difference between *not* using
``django-cache-memoize`` and then adding it to your code afterwards.

.. _`How to use django-cache-memoize`: https://www.peterbe.com/plog/how-to-use-django-cache-memoize

Advanced Usage
==============

``args_rewrite``
~~~~~~~~~~~~~~~~

Internally the decorator rewrites every argument and keyword argument to
the function it wraps into a concatenated string. The first thing you
might want to do is help the decorator rewrite the arguments to something
more suitable as a cache key string. For example, suppose you have instances
of a class whose ``__str__`` method doesn't return a unique value:

.. code-block:: python

    from django.db import models

    class Record(models.Model):
        name = models.CharField(max_length=100)
        lastname = models.CharField(max_length=100)
        friends = models.ManyToManyField(SomeOtherModel)

        def __str__(self):
            return self.name

    # Example use:
    >>> record = Record.objects.create(name='Peter', lastname='Bengtsson')
    >>> print(record)
    Peter
    >>> record2 = Record.objects.create(name='Peter', lastname='Different')
    >>> print(record2)
    Peter

This is a contrived example, but basically *you know* that the ``str()``
conversion of certain arguments isn't safe. Then you can pass in a callable
called ``args_rewrite``. It gets the same positional and keyword arguments
as the function you're decorating. Here's an example implementation:

.. code-block:: python

    from cache_memoize import cache_memoize

    def count_friends_args_rewrite(record):
        # The 'id' is always unique. Use that instead of the default __str__
        return record.id

    @cache_memoize(100, args_rewrite=count_friends_args_rewrite)
    def count_friends(record):
        # Assume this is an expensive function whose result can be memoized.
        return record.friends.all().count()


``prefix``
~~~~~~~~~~

By default the prefix becomes the name of the function. Consider:

.. code-block:: python

    import random

    from cache_memoize import cache_memoize

    @cache_memoize(10, prefix='randomness')
    def function1():
        return random.random()

    @cache_memoize(10, prefix='randomness')
    def function2():  # different name, same arguments, same functionality
        return random.random()

    # Example use
    >>> function1()
    0.39403406043780986
    >>> function1()
    0.39403406043780986
    >>> # ^ repeated of course
    >>> function2()
    0.39403406043780986
    >>> # ^ because the prefix was forcibly the same, the cache key is the same


``hit_callable``
~~~~~~~~~~~~~~~~

If set, a function that gets called with the original arguments and keyword
arguments **if** the cache was able to find and return a cache hit.
For example, suppose you want to tell your ``statsd`` server every time
there's a cache hit.

.. code-block:: python

    from cache_memoize import cache_memoize

    def _cache_hit(user, **kwargs):
        statsdthing.incr(f'cachehit:{user.id}', 1)

    @cache_memoize(10, hit_callable=_cache_hit)
    def calculate_tax(user, tax=0.1):
        return ...


``miss_callable``
~~~~~~~~~~~~~~~~~

Exactly the same functionality as ``hit_callable``, except that it gets
called if it was *not* a cache hit.
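
For symmetry with the ``hit_callable`` example above, a minimal sketch might
look like this (``statsdthing`` is the same hypothetical metrics client):

.. code-block:: python

    from cache_memoize import cache_memoize

    def _cache_miss(user, **kwargs):
        # Hypothetical metrics client, as in the hit_callable example.
        statsdthing.incr(f'cachemiss:{user.id}', 1)

    @cache_memoize(10, miss_callable=_cache_miss)
    def calculate_tax(user, tax=0.1):
        return ...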

``store_result``
~~~~~~~~~~~~~~~~

This is useful if you have a function you want to make sure only gets called
once per timeout expiration, but you don't actually care much about
what the function returns. Perhaps you know that the
function returns something that would quickly fill up your ``memcached``, or
perhaps you know it returns something that can't be pickled. Then you
can set ``store_result`` to ``False``. This is equivalent to your function
returning ``True``.

.. code-block:: python

    from cache_memoize import cache_memoize

    @cache_memoize(1000, store_result=False)
    def send_tax_returns(user):
        # something something time consuming
        ...
        return some_non_pickleable_thing

    def myview(request):
        # View this view as much as you like; the 'send_tax_returns' function
        # won't be called more than once every 1000 seconds.
        send_tax_returns(request.user)

``cache_exceptions``
~~~~~~~~~~~~~~~~~~~~

This is useful if you have a function that can raise an exception as a valid
result. If the decorated function raises any of the specified exceptions, the
exception is cached and raised as normal. Subsequent cached calls will
immediately re-raise the exception and the function will not be executed.
``cache_exceptions`` accepts an exception class or a tuple of exception
classes.

Only exceptions raised from the classes provided as ``cache_exceptions``
are cached; all others are propagated immediately.

.. code-block:: python

    >>> from cache_memoize import cache_memoize

    >>> class InvalidParameter(Exception):
    ...     pass

    >>> @cache_memoize(1000, cache_exceptions=(InvalidParameter, ))
    ... def run_calculations(parameter):
    ...     # something something time consuming
    ...     raise InvalidParameter

    >>> run_calculations(1)
    Traceback (most recent call last):
    ...
    InvalidParameter

    # run_calculations will now raise InvalidParameter immediately
    # without running the expensive calculation
    >>> run_calculations(1)
    Traceback (most recent call last):
    ...
    InvalidParameter

``cache_alias``
~~~~~~~~~~~~~~~

The ``cache_alias`` argument allows you to use a cache other than the default.

.. code-block:: python

    # Given settings like:
    # CACHES = {
    #     'default': {...},
    #     'other': {...},
    # }

    @cache_memoize(1000, cache_alias='other')
    def myfunc(start, end):
        return random.random()


Cache invalidation
~~~~~~~~~~~~~~~~~~

When you want to "undo" some caching that has been done, you simply call
``.invalidate`` on the function with the same arguments.

.. code-block:: python

    import random

    from cache_memoize import cache_memoize

    @cache_memoize(10)
    def expensive_function(start, end):
        return random.randint(start, end)

    >>> expensive_function(1, 100)
    65
    >>> expensive_function(1, 100)
    65
    >>> expensive_function(100, 200)
    121
    >>> expensive_function.invalidate(1, 100)
    >>> expensive_function(1, 100)
    89
    >>> expensive_function(100, 200)
    121

An alternative way of doing the same thing is to pass the keyword argument
``_refresh=True``. Like this:

.. code-block:: python

    # Continuing from the code block above
    >>> expensive_function(100, 200)
    121
    >>> expensive_function(100, 200, _refresh=True)
    177
    >>> expensive_function(100, 200)
    177

There is no way to clear more than one cache key at a time. In the above
example, you had to know the "original arguments" when you wanted to
invalidate the cache. There is no method to search for all cache keys that
match a certain pattern.
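
So if you need to invalidate several cached results, the only option is to
call ``.invalidate`` once per known argument set. A minimal sketch, assuming
you keep track of the argument pairs yourself:

.. code-block:: python

    # Hypothetical: a list of (start, end) pairs you know were cached.
    known_calls = [(1, 100), (100, 200)]

    # Replay each argument set through .invalidate to clear its cache key.
    for start, end in known_calls:
        expensive_function.invalidate(start, end)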


Compatibility
=============

* Python 3.8, 3.9, 3.10 & 3.11

* Django 3.2, 4.1 & 4.2

Check out the `tox.ini`_ file for the most up-to-date compatibility, as
covered by the test matrix.

.. _`tox.ini`: https://github.com/peterbe/django-cache-memoize/blob/master/tox.ini

Prior Art
=========

History
~~~~~~~

`Mozilla Symbol Server`_ is written in Django. It's a web service that
sits between C++ debuggers and AWS S3. It shuffles symbol files in and out of
AWS S3. Symbol files are for C++ (and other compiled languages) what
sourcemaps are for JavaScript.

This service gets a LOT of traffic. The download traffic (proxying requests
for symbols in S3) runs at about 40 requests per second. Due to the nature
of the application, most of these GETs result in a 404 Not Found, but instead
of asking AWS S3 for every single file, these lookups are cached in a
carefully tuned `Redis`_ setup. This Redis cache is also connected
to the part of the code that uploads new files.

New uploads arrive as zip bundles of files from Mozilla's build
systems, at a rate of about 600MB every minute, each containing on average
about 100 files. When a new upload comes in, we need to quickly find out
whether each file already exists in S3, and this lookup gets cached since the
same files are often repeated across different uploads. But when a file does
get uploaded into S3, we need to quickly and confidently invalidate any local
caches. That way you get to keep a really aggressive cache without any stale
periods.

This is the use case ``django-cache-memoize`` was built for and tested in.
It was originally written for Python 3.6 and Django 1.11 but, when extracted,
it was made compatible with Python 2.7 and Django versions as far back as 1.8.

``django-cache-memoize`` is also used in `SongSear.ch`_ to cache short
queries in the autocomplete search input. All autocomplete is done by
Elasticsearch, which is amazingly fast, but not as fast as ``memcached``.


.. _`Mozilla Symbol Server`: https://symbols.mozilla.org
.. _`Redis`: https://redis.io/
.. _`SongSear.ch`: https://songsear.ch


"Competition"
~~~~~~~~~~~~~

There is already `django-memoize`_ by `Thomas Vavrys`_.
It, too, is a memoization decorator for Django, and it also uses the default
cache framework for storage. It uses ``inspect`` on the decorated function to
build a cache key.

In benchmarks running both ``django-memoize`` and ``django-cache-memoize``
I found ``django-cache-memoize`` to be **~4 times faster** on average.

Another key difference is that ``django-cache-memoize`` uses ``str()`` while
``django-memoize`` uses ``repr()``, which means that with certain mutable
objects (e.g. class instances) as arguments the caching will not work. For
example, this does *not* work in ``django-memoize``:

.. code-block:: python

    from memoize import memoize

    @memoize(60)
    def count_user_groups(user):
        return user.groups.all().count()

    def myview(request):
        # this will never be memoized
        print(count_user_groups(request.user))

However, this works...

.. code-block:: python

    from cache_memoize import cache_memoize

    @cache_memoize(60)
    def count_user_groups(user):
        return user.groups.all().count()

    def myview(request):
        # this *will* work as expected
        print(count_user_groups(request.user))


.. _`django-memoize`: http://pythonhosted.org/django-memoize/
.. _`Thomas Vavrys`: https://github.com/tvavrys


Development
===========

The most basic thing is to clone the repo and run:

.. code-block:: shell

    pip install -e ".[dev]"
    tox


Code style is all black
~~~~~~~~~~~~~~~~~~~~~~~

All code has to be formatted with `Black <https://pypi.org/project/black/>`_,
and the best tool for checking this is
`therapist <https://pypi.org/project/therapist/>`_ since it can run all the
checks, help you fix things, and make sure linting passes before you
``git commit``. This project also uses ``flake8`` to check things Black
can't check.

To check linting with ``tox`` use:

.. code:: bash

    tox -e lint-py36

To install the ``therapist`` pre-commit hook simply run:

.. code:: bash

    therapist install

When you run ``therapist run`` it will only check the files you've touched.
To run it for all files use:

.. code:: bash

    therapist run --use-tracked-files

And to fix all/any issues run:

.. code:: bash

    therapist run --use-tracked-files --fix



            
