Simple WSGI A/B testing - Swab
==============================
What is A/B testing?
--------------------
A/B testing is a way of comparing two versions of a web page against each
other, to see which performs best for your visitors. It could be testing
changes to your website copy, visual design or user interface.
When you run an A/B test experiment you need to tell Swab what variants you
have, and what goals you want to optimize for. Swab will then randomly assign
visitors to each variant and keep track of how many times each variant is
shown, along with how many of those visits resulted in a conversion.
Using this data, Swab can show you the conversion rate for each variant along
with some basic statistics to help you decide whether there is a meaningful
difference between the versions.
Setting up a Swab instance
--------------------------
Swab needs a directory where it can save the data files it uses for tracking
trial and conversion data::

    from swab import Swab
    s = Swab('/tmp/.swab-test-data')
Then you need to tell swab about the experiments you want to run, the variants
available and the name of the conversion goal::

    s.add_experiment('button-color', ['red', 'blue'], 'signup')
Finally you need to wrap your WSGI app in swab's middleware::

    application = s.middleware(application)
Integrating swab in your app
----------------------------
Swab provides a number of functions that you can call from your application
code:

show_variant(environ, experiment, record=False, variant=None)

    Return the variant name to show for the current request. In the above
    example, a call to ``show_variant(environ, 'button-color')`` would
    return either ``'red'`` or ``'blue'``.

record_trial_tag(environ, experiment)

    Return the HTML tag for a javascript beacon that should be placed in
    the page you are testing. The tag causes the user's browser to load a
    referenced javascript file, triggering swab to record a trial for the
    given experiment.

    If you only have a single experiment running on the requested page and
    have previously called ``show_variant``, you can safely omit the
    experiment name.

record_trial(environ, experiment)

    If you don't want to use the javascript beacon to track trials, you can
    call ``record_trial`` directly. The javascript beacon method is
    preferred as it is unlikely to be triggered by bots.

    If you only have a single experiment running on the requested page and
    have previously called ``show_variant``, you can safely omit the
    experiment name.

record_goal(environ, goal, experiment)

    Record a goal conversion for the named experiment.
Viewing results
---------------
Test results are available at the URL ``/swab/results``.
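The results page reports per-variant conversion rates; as a generic
illustration (not swab's exact statistics), a conversion rate is simply
conversions divided by trials:

```python
def conversion_rate(trials, conversions):
    # Fraction of recorded trials that resulted in a goal conversion.
    return conversions / trials if trials else 0.0

# Hypothetical counts for the 'button-color' experiment.
rates = {
    'red': conversion_rate(trials=200, conversions=26),
    'blue': conversion_rate(trials=195, conversions=31),
}
```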
Caching
-------
Swab automatically adds a ``Cache-Control: no-cache`` response header if
``show_variant`` or ``record_trial`` was called during the request. This
helps avoid proxies caching your test variants. It will also remove any other
cache-related headers (e.g. ``ETag`` or ``Last-Modified``). If you don't want
this behaviour, pass ``cache_control=False`` when creating the Swab
instance.
Viewing the variants
--------------------
To test your competing pages, append ``?swab.<experiment-name>=<variant-name>``
to a URL to force a given variant to be shown.
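For example, to force the ``red`` variant of the ``button-color`` experiment
defined earlier (the host and path here are illustrative):

```python
from urllib.parse import urlencode

# Build a URL forcing the 'red' variant of the 'button-color' experiment,
# following the ?swab.<experiment-name>=<variant-name> convention.
params = urlencode({'swab.button-color': 'red'})
url = 'http://localhost:8000/signup?' + params
```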
Basic design
============
Each visitor is assigned an identity, which is persisted by means of a cookie.
The identity is a base64-encoded, randomly generated byte sequence. This
identity is used as the seed for an RNG, which is used to assign visitors to
test groups.
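This seeding idea can be sketched as follows (an illustration of the
approach, not swab's exact algorithm):

```python
import base64
import os
import random

# Each visitor gets a random identity, persisted in a cookie.
identity = base64.b64encode(os.urandom(12)).decode('ascii')

def assign_variant(identity, variants):
    # Seeding the RNG with the identity makes the assignment
    # deterministic: the same visitor always lands in the same group.
    rng = random.Random(identity)
    return rng.choice(variants)

first = assign_variant(identity, ['red', 'blue'])
second = assign_variant(identity, ['red', 'blue'])
assert first == second  # stable for the same identity
```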
Every time a test is shown, a line is appended to a file at
``<datadir>/<experiment>/<variant>/__all__``. This is triggered by calling
``record_trial``.

Every time a goal is recorded (triggered by calling ``record_goal``), a
line is appended to a file at ``<datadir>/<experiment>/<variant>/<goal>``.
Each log line has the format ``<timestamp>:<identity>\n``.
No file locking is used: it is assumed that Swab will run on a system where
each line is smaller than the filesystem block size, allowing the locking
overhead to be avoided.
The lines may become interleaved, but there should be no risk of corruption
even with multiple simultaneous writes. See
http://www.perlmonks.org/?node_id=486488 for a discussion of the issue.
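Under that assumption, recording reduces to an atomic append. A minimal
sketch in Python (the paths here are illustrative; swab manages its own
layout under the data directory):

```python
import os
import tempfile
import time

def append_record(path, identity):
    # Open with O_APPEND: each small write lands atomically at the end
    # of the file, which is the property the no-locking design relies on.
    line = f"{int(time.time())}:{identity}\n".encode('ascii')
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, line)
    finally:
        os.close(fd)

logfile = os.path.join(tempfile.mkdtemp(), '__all__')
append_record(logfile, 'c29tZWlk')
append_record(logfile, 'b3RoZXI=')
```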
0.2.4
-----
* Add support for Python 3.12
* Drop support for Python 3.8
0.2.3 (released 2024-05-02)
---------------------------
* Bugfix: fix cookie max-age value
0.2.2 (released 2018-02-23)
---------------------------
* Bugfix: fix for exception triggered when a bot visits a page containing
  ``record_trial_tag``
0.2.1 (released 2018-02-23)
---------------------------
* Bugfix: fixed link rendering on test results page
0.2.0 (released 2018-02-23)
---------------------------
* Compatibility with python 3
* Allow the application to force a variant when calling show_variant
* Improved JS snippet no longer blocks browser rendering
* No longer records duplicate trials if show_variant is called twice
* Allow experiments to customize the swabid generation strategy - useful if
  you want to deterministically seed the RNG based on some request attribute.
* Allow weighted variants: ``add_experiment('foo', 'AAAB')`` will show
  variant A 75% of the time.
* Include Bayesian results calculation based on
  http://www.evanmiller.org/bayesian-ab-testing.html#binary_ab_implementation
* Better caching: only sets cookies on pages where an experiment is invoked
* ``record_trial_tag`` can now infer the experiment name from a previous call
  to ``show_variant``: less duplicated code when running an experiment.
* Results now show results per visitor by default
Version 0.1.3
-------------
* Added a javascript beacon to record tests (helps exclude bots)
* Better exclusion of bots on server side too
* Record trial app won't raise an error if the experiment name doesn't exist
* Removed debug flag, the ability to force a variant is now always present
* Strip HTTP caching headers if an experiment has been invoked during the request
* Improved accuracy of conversion tracking
* Cookie path can be specified in middleware configuration
Version 0.1.2
-------------
* Minor bugfixes
Version 0.1.1
-------------
* Bugfix for ZeroDivisionErrors when no data has been collected
Version 0.1
-------------
* Initial release