:Name: django-perf-rec
:Version: 4.25.0
:Summary: Keep detailed records of the performance of your Django code.
:Author: Adam Johnson
:License: MIT
:Requires Python: >=3.8
:Keywords: django
:Homepage: https://github.com/adamchainz/django-perf-rec
:Uploaded: 2023-10-11 09:36:53

===============
django-perf-rec
===============

.. image:: https://img.shields.io/github/actions/workflow/status/adamchainz/django-perf-rec/main.yml?branch=main&style=for-the-badge
   :target: https://github.com/adamchainz/django-perf-rec/actions?workflow=CI

.. image:: https://img.shields.io/pypi/v/django-perf-rec.svg?style=for-the-badge
   :target: https://pypi.org/project/django-perf-rec/

.. image:: https://img.shields.io/badge/code%20style-black-000000.svg?style=for-the-badge
   :target: https://github.com/psf/black

.. image:: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white&style=for-the-badge
   :target: https://github.com/pre-commit/pre-commit
   :alt: pre-commit

"Keep detailed records of the performance of your Django code."

**django-perf-rec** is like Django's ``assertNumQueries`` on steroids. It lets
you track the individual queries and cache operations that occur in your code.
Use it in your tests like so:

.. code-block:: python

    import django_perf_rec
    from django.test import TestCase


    class MyTests(TestCase):
        def test_home(self):
            with django_perf_rec.record():
                self.client.get("/")

It then stores a YAML file alongside the test file that tracks the queries and
operations, looking something like:

.. code-block:: yaml

    MyTests.test_home:
    - cache|get: home_data.user_id.#
    - db: 'SELECT ... FROM myapp_table WHERE (myapp_table.id = #)'
    - db: 'SELECT ... FROM myapp_table WHERE (myapp_table.id = #)'

When the test is run again, the new record will be compared with the one in the
YAML file. If they are different, an assertion failure will be raised, failing
the test. Magic!

The queries and keys are 'fingerprinted': information that looks variable is
replaced with ``#`` and ``...``. This avoids spurious failures when, for
example, primary keys differ, random data is used, or new columns are added
to tables.

If you check the YAML file in along with your tests, you'll lock in your
code's current performance, with much better information about any
regressions than ``assertNumQueries`` gives. If you are fine with the changes
that caused a test to fail, just remove the file and rerun the test to
regenerate it.

For more information, see our `introductory blog
post <https://adamj.eu/tech/2016/09/26/introducing-django-perf-rec/>`_ that
says a little more about why we made it.

Installation
============

Use **pip**:

.. code-block:: bash

    python -m pip install django-perf-rec

Requirements
============

Python 3.8 to 3.12 supported.

Django 3.2 to 5.0 supported.

----

**Are your tests slow?**
Check out my book `Speed Up Your Django Tests <https://adamchainz.gumroad.com/l/suydt>`__ which covers loads of ways to write faster, more accurate tests.

----

API
===

``record(record_name: str | None = None, path: str | None = None, capture_traceback: Callable[[Operation], bool] | None = None, capture_operation: Callable[[Operation], bool] | None = None)``
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Return a context manager that will be used for a single performance test.

The arguments must be passed as keyword arguments.

``path`` is the path to a directory or file in which to store the record. If it
ends with ``'/'``, or is left as ``None``, the filename will be automatically
determined by looking at the filename the calling code is in and replacing the
``.py[c]`` extension with ``.perf.yml``. If it points to a directory that
doesn't exist, that directory will be created.

``record_name`` is the name of the record inside the performance file to use.
If left as ``None``, the code assumes you are inside a Django ``TestCase`` and
uses stack inspection to find it, building a name from the test case name plus
the test method name, plus an optional counter if you invoke ``record()``
multiple times inside the same test method.
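
For instance, here's a sketch passing both arguments explicitly; the directory
and record names are illustrative, not defaults:

.. code-block:: python

    import django_perf_rec
    from django.test import TestCase


    class HomePageTests(TestCase):
        def test_home_page(self):
            with django_perf_rec.record(
                # Trailing '/' means "directory": created if missing, and
                # the filename is still derived from this test file.
                path="perf_records/",
                record_name="HomePageTests.test_home_page",
            ):
                self.client.get("/")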

Whilst open, the context manager tracks all DB queries on all connections, and
all cache operations on all defined caches. It names the connection/cache in
each tracked operation, except for the ``default`` one.

When the context manager exits, it uses the list of operations it has
gathered. If the relevant file specified using ``path`` doesn't exist, or
doesn't contain data for the specific ``record_name``, the record will be
created and saved and the test will pass with no assertions. However, if the
record **does** exist inside the file, the collected record will be compared
with the original one, and if they differ, an ``AssertionError`` will be
raised. When running under pytest, this uses its assertion rewriting; with
other test runners/uses, the full diff is attached to the message.

Example:

.. code-block:: python

    import django_perf_rec
    from django.test import TestCase

    from app.models import Author


    class AuthorPerformanceTests(TestCase):
        def test_special_method(self):
            with django_perf_rec.record():
                list(Author.objects.special_method())


``capture_traceback``, if not ``None``, should be a function that takes one
argument, the given DB or cache operation, and returns a ``bool`` indicating
if a traceback should be captured for the operation (by default, they are not).
Capturing tracebacks allows fine-grained debugging of code paths causing the
operations. Be aware that records differing only by the presence of tracebacks
will not match, causing an ``AssertionError`` to be raised, so it's not
normally suitable to record tracebacks permanently.

For example, if you wanted to know what code paths query the table
``my_table``, you could use a ``capture_traceback`` function like so:

.. code-block:: python

    def debug_sql_query(operation):
        # Capture a traceback for any operation touching my_table.
        return "my_table" in operation.query


    def test_special_method(self):
        with django_perf_rec.record(capture_traceback=debug_sql_query):
            list(Author.objects.special_method())

The performance record here would include a standard Python traceback attached
to each SQL query containing "my_table".


``capture_operation``, if not ``None``, should be a function that takes one
argument, the given DB or cache operation, and returns a ``bool`` indicating if
the operation should be recorded at all (by default, all operations are
recorded). Skipping some operations lets you ignore certain code paths in your
tests, such as database queries that in production would be replaced by calls
to an external service.

For example, if you knew that all queries to some table made during tests
would be replaced with something else in production, you could use a
``capture_operation`` function like so:

.. code-block:: python

    def hide_my_tables(operation):
        # Return False to skip recording any operation touching my_tables.
        return "my_tables" not in operation.query


    def test_special_function(self):
        with django_perf_rec.record(capture_operation=hide_my_tables):
            list(Author.objects.all())


``TestCaseMixin``
-----------------

A mixin class to be added to your custom ``TestCase`` subclass so you can use
**django-perf-rec** across your codebase without needing to import it in each
individual test file. It adds one method, ``record_performance()``, whose
signature is the same as ``record()`` above.

Example:

.. code-block:: python

    # yplan/test.py
    from django.test import TestCase as OrigTestCase
    from django_perf_rec import TestCaseMixin


    class TestCase(TestCaseMixin, OrigTestCase):
        pass


    # app/tests/models/test_author.py
    from app.models import Author
    from yplan.test import TestCase


    class AuthorPerformanceTests(TestCase):
        def test_special_method(self):
            with self.record_performance():
                list(Author.objects.special_method())

``get_perf_path(file_path)``
----------------------------

Encapsulates the logic used in ``record()`` to form ``path`` from the path of
the file containing the currently running test, mostly swapping ``.py`` or
``.pyc`` for ``.perf.yml``. You might want to use this when calling
``record()`` from somewhere other than inside a test (which causes the
automatic inspection to fail), to match the same filename.

``get_record_name(test_name, class_name=None)``
-----------------------------------------------

Encapsulates the logic used in ``record()`` to form a ``record_name`` from
details of the currently running test. You might want to use this when calling
``record()`` from somewhere other than inside a test (which causes the
automatic inspection to fail), to match the same ``record_name``.
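
Together, these helpers let you record from ordinary functions. A minimal
sketch, with illustrative names, assuming both helpers are importable from the
top-level package as documented above:

.. code-block:: python

    import django_perf_rec


    def check_home_page_performance(client):
        # Not inside a TestCase, so automatic inspection would fail;
        # build the path and record name explicitly instead.
        path = django_perf_rec.get_perf_path(__file__)
        record_name = django_perf_rec.get_record_name(
            "home_page", class_name="SmokeChecks"
        )
        with django_perf_rec.record(path=path, record_name=record_name):
            client.get("/")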

Settings
========

Behaviour can be customized with a dictionary called ``PERF_REC`` in your
Django settings, for example:

.. code-block:: python

    PERF_REC = {
        "MODE": "once",
    }

The possible keys to this dictionary are explained below.

``HIDE_COLUMNS``
----------------

The ``HIDE_COLUMNS`` setting may be used to change the way **django-perf-rec**
simplifies SQL in the recording files it makes. It takes a boolean:

* ``True`` (default) causes column lists in queries to be collapsed, e.g.
  ``SELECT a, b, c FROM t`` becomes ``SELECT ... FROM t``. This is useful
  because selected columns often don't affect query time in typical Django
  applications; it also makes the records easier to read, and they don't need
  updating every time model fields change.
* ``False`` stops the collapsing behaviour, causing all the columns to be
  output in the files.
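
For example, to keep full column lists in newly written records (a minimal
sketch; existing records were written with collapsed columns and would need
regenerating):

.. code-block:: python

    PERF_REC = {
        "HIDE_COLUMNS": False,
    }

A record line would then read e.g. ``db: 'SELECT a, b, c FROM t'`` rather
than ``db: 'SELECT ... FROM t'``.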

``MODE``
--------

The ``MODE`` setting may be used to change the way **django-perf-rec** behaves
when a performance record does not exist during a test run.

* ``'once'`` (default) creates missing records silently.
* ``'none'`` raises ``AssertionError`` when a record does not exist. You
  probably want to use this mode in CI, to ensure new tests fail if their
  corresponding performance records were not committed (see the sketch below).
* ``'all'`` creates missing records and then raises ``AssertionError``.
* ``'overwrite'`` creates or updates records silently.
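
For instance, to enforce ``'none'`` only in CI, a sketch assuming your CI
provider sets a ``CI`` environment variable (many do, but check yours):

.. code-block:: python

    # settings.py
    import os

    PERF_REC = {
        "MODE": "none" if os.environ.get("CI") else "once",
    }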

Usage in pytest
===============

If you're using pytest, you might want to call ``record()`` from within a
pytest fixture and have it automatically apply to all your tests. We have an
example of this; see the file `test_pytest_fixture_usage.py
<https://github.com/adamchainz/django-perf-rec/blob/main/tests/test_pytest_fixture_usage.py>`_
in the test suite.
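
A minimal sketch of such a fixture, assuming pytest ≥ 7 (for ``request.path``)
and using the ``get_perf_path()``/``get_record_name()`` helpers described
above; in practice you may want to apply it more selectively than ``autouse``:

.. code-block:: python

    # conftest.py
    import pytest

    import django_perf_rec


    @pytest.fixture(autouse=True)
    def record_performance(request):
        # record()'s automatic stack inspection expects a TestCase method,
        # which a fixture is not, so derive the path and name explicitly.
        path = django_perf_rec.get_perf_path(str(request.path))
        record_name = django_perf_rec.get_record_name(
            test_name=request.node.name,
            class_name=request.cls.__name__ if request.cls else None,
        )
        with django_perf_rec.record(path=path, record_name=record_name):
            yield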

            
