=============
pytest-sentry
=============

.. image:: https://img.shields.io/pypi/v/pytest-sentry
   :target: https://pypi.org/project/pytest-sentry/

.. image:: https://img.shields.io/pypi/l/pytest-sentry
   :target: https://pypi.org/project/pytest-sentry/
``pytest-sentry`` is a `pytest <https://pytest.org>`_ plugin that uses `Sentry
<https://sentry.io/>`_ to store and aggregate information about your test runs.

**This is not an official Sentry product.**

Tracking flaky tests as errors
==============================

Let's say you have a test suite with some flaky tests that randomly break your
CI build due to network issues, race conditions, or other problems that you
don't want to fix immediately. A common workaround is to retry those tests
automatically, for example using `pytest-rerunfailures
<https://github.com/pytest-dev/pytest-rerunfailures>`_.

One concern about plugins like this is that they merely hide bugs in your
test suite or even in other code. After all, your CI build is green, and your
code probably works most of the time.
``pytest-sentry`` tries to make that choice a bit easier by tracking flaky test
failures in a place separate from your build status. Sentry is already a good
choice for keeping tabs on all kinds of errors, important or not, in
production, so let's try to use it in test suites too.
The prerequisite is that you already use ``pytest`` and
``pytest-rerunfailures`` in CI. Install ``pytest-sentry`` and set the
``PYTEST_SENTRY_DSN`` environment variable to the DSN of a new Sentry project.

Now every test failure that is "fixed" by retrying the test is reported to
Sentry, but still does not break CI. Tests that consistently fail will not be
reported.
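
To see the dynamic this enables, here is a minimal pure-Python sketch of the
retry loop (the helper names are hypothetical; the real rerun logic lives in
``pytest-rerunfailures``). Each intermediate failure is what ``pytest-sentry``
would report to Sentry, while the final pass keeps CI green:

```python
import random


def run_with_reruns(test, reruns, rng):
    # Hypothetical sketch of the rerun loop; the real implementation is
    # pytest-rerunfailures. Returns (passed, intermediate_failures) --
    # the intermediate failures are what pytest-sentry would report.
    failures = 0
    for _ in range(reruns + 1):
        if test(rng):
            return True, failures
        failures += 1
    return False, failures


def flaky_test(rng):
    # Fails roughly half the time, like a race condition would.
    return rng.random() > 0.5


passed, reported = run_with_reruns(flaky_test, reruns=2, rng=random.Random(1))
# With this seed the first attempt fails and the second passes: CI stays
# green, and one failure would be reported to Sentry.
```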

Tracking the performance of your test suite
===========================================

By default ``pytest-sentry`` will send `Performance
<https://sentry.io/for/performance/>`_ data to Sentry:

* Fixture setup is reported as a "transaction" to Sentry, so that you can
  answer questions like "what is my slowest test fixture" and "what is my
  most used test fixture".

* Calls to the test function itself are reported as a separate transaction,
  so that you can find large, slow tests as well.

* All fixture setup related to a particular test item is in the same trace,
  i.e. shares the same trace ID. There is no common parent transaction,
  though. It is purposefully dropped to spare quota, as it does not contain
  interesting information::

      pytest.runtest.protocol  [one time, not sent]
          pytest.fixture.setup [multiple times, sent]
          pytest.runtest.call  [one time, sent]

  The trace is per-test-item. For correlating transactions across an entire
  test run, use the automatically attached CI tags or attach a tag of your
  own.
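
A small illustration of the shape described above, using plain dictionaries
(a conceptual model only, not the plugin's actual data structures): all
transactions for one test item share a trace ID, and the would-be parent
transaction is never sent.

```python
import uuid

# One trace per test item; the ID here is illustrative.
trace_id = uuid.uuid4().hex

dropped_parent = {"op": "pytest.runtest.protocol", "trace_id": trace_id, "sent": False}
sent = [
    {"op": "pytest.fixture.setup", "trace_id": trace_id, "sent": True},
    {"op": "pytest.fixture.setup", "trace_id": trace_id, "sent": True},
    {"op": "pytest.runtest.call", "trace_id": trace_id, "sent": True},
]

# Everything correlates through trace_id, even without a common parent.
same_trace = all(t["trace_id"] == trace_id for t in sent)
```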
To measure performance data, install ``pytest-sentry`` and set
``PYTEST_SENTRY_DSN``, like with errors. By default, the extension will send all
performance data to Sentry. If you want to limit the amount of data sent, you
can set the ``PYTEST_SENTRY_TRACES_SAMPLE_RATE`` environment variable to a float
between ``0`` and ``1``. This will cause only a random sample of transactions to
be sent to Sentry.
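
Conceptually, the sample rate is a keep-probability applied per transaction.
A rough sketch of that behavior (not the SDK's actual sampler):

```python
import random


def keep_transaction(traces_sample_rate, rng):
    # Head-based sampling sketch: keep each transaction with probability
    # traces_sample_rate. PYTEST_SENTRY_TRACES_SAMPLE_RATE behaves like
    # this conceptually; the SDK's real sampler is more involved.
    return rng.random() < traces_sample_rate


rng = random.Random(0)
kept = sum(keep_transaction(0.25, rng) for _ in range(10_000))
# kept lands close to 2_500, i.e. roughly a quarter of transactions survive
```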

Transactions can have noticeable runtime overhead compared to only reporting
errors. To disable them, use a marker::

    import pytest

    pytestmark = pytest.mark.sentry_client({"traces_sample_rate": 0.0})

Advanced Options
================

``pytest-sentry`` supports marking your tests to use a different DSN, client or
scope per-test. You can use this to provide custom options to the ``Client``
object from the `Sentry SDK for Python
<https://github.com/getsentry/sentry-python>`_::

    import random

    import pytest
    from sentry_sdk import Scope
    from pytest_sentry import Client

    @pytest.mark.sentry_client(None)
    def test_no_sentry():
        # Even though flaky, this test never gets reported to Sentry
        assert random.random() > 0.5

    @pytest.mark.sentry_client("MY NEW DSN")
    def test_custom_dsn():
        # Use a different DSN to report errors for this one
        assert random.random() > 0.5

    # Other invocations:

    @pytest.mark.sentry_client(Client("CUSTOM DSN"))
    @pytest.mark.sentry_client(lambda: Client("CUSTOM DSN"))
    @pytest.mark.sentry_client(Scope(client=Client("CUSTOM DSN")))
    @pytest.mark.sentry_client({"dsn": ..., "debug": True})

The ``Client`` class exposed by ``pytest-sentry`` only has different default
integrations. It disables some of the error-capturing integrations to avoid
sending random expected errors into your project.

Accessing the used Sentry client
================================

You will notice that global functions such as
``sentry_sdk.capture_message`` will not actually send events to the DSN
you configured this plugin with. That's because ``pytest-sentry`` goes to
great lengths to keep its own SDK setup separate from the SDK setup of the
tested code.

``pytest-sentry`` exposes the ``sentry_test_scope`` fixture whose return value
is the ``Scope`` being used to send events to Sentry. Use ``with
use_scope(sentry_test_scope):`` to temporarily switch context. You can use
this to set custom tags like so::

    import os

    import sentry_sdk
    from sentry_sdk.scope import use_scope

    def test_foo(sentry_test_scope):
        with use_scope(sentry_test_scope):
            sentry_sdk.set_tag("pull_request", os.environ["EXAMPLE_CI_PULL_REQUEST"])

Why all the hassle with the context manager? Just imagine if your tested
application started logging some (expected) errors on its own. You would
immediately exceed your quota!

Always reporting test failures
==============================

You can always report all test failures to Sentry by setting the environment
variable ``PYTEST_SENTRY_ALWAYS_REPORT=1``.

This can be enabled for builds on the ``main`` or a release branch, to catch
certain kinds of tests that are flaky across builds but consistently fail or
pass within one test run.
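
As a hedged sketch of how such a flag might be read (the plugin's actual
parsing may accept different truthy values), the helper name here is
hypothetical:

```python
import os


def always_report(environ=None):
    # Hypothetical reading of PYTEST_SENTRY_ALWAYS_REPORT: treat "1" (and
    # any other non-empty value except "0") as enabled.
    environ = os.environ if environ is None else environ
    return environ.get("PYTEST_SENTRY_ALWAYS_REPORT", "") not in ("", "0")


enabled = always_report({"PYTEST_SENTRY_ALWAYS_REPORT": "1"})
disabled = always_report({})
```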

License
=======

Licensed under 2-clause BSD, see ``LICENSE``.