# pytest-ver

* Name: pytest-ver
* Version: 0.0.64
* Summary: Pytest module with Verification Protocol, Verification Report and Trace Matrix
* Home page: <https://bitbucket.org/arrizza-public/pytest-ver/src/master>
* Upload time: 2024-02-07 03:49:48
* Author: JA
* License: MIT
* Keywords: verification, pytest

## Summary

This module is used to write verification test cases
within a Python & pytest environment. FDA-compliant reports can
then be generated.

The verification test cases are written in Python and invoked
with pytest. During execution, various data is captured. That
data is then used by the report facility to generate Test
Protocol, Test Report and Trace Matrix documents in docx, pdf
and text formats. A summary is also generated indicating the
percentage of requirements that are complete, passing, failing,
etc.

This repo is used to develop pytest-ver. See the matching
repo <https://bitbucket.org/arrizza-public/pytest-ver-tutorial/src/master/> for
full details on how to use it, including a User Guide.
That repo tests a simulated IV pump
application, including a set of requirements and test scripts
that use pytest-ver. A sample set of reports can be generated
and reviewed.

There are also a couple of repos that have used pytest-ver as
a proof of concept:

* <https://bitbucket.org/arrizza-public/socket-oneline/src/master/>
* <https://bitbucket.org/arrizza-public/gui-api-tkinter/src/master/>

## Usage

* See the repo <https://bitbucket.org/arrizza-public/pytest-ver-tutorial/src/master/>
  for full instructions on how to use pytest-ver.

* See User Guide.docx in the pytest-ver-tutorial repo for additional details.

### Quick Guide

* The file test_sample.py shows various examples of using pytest-ver.
* To invoke all tests, use:

```bash
./doit
```

or invoke a subset of test cases:

```bash
# the "-k" switch is part of pytest
./doit -k test_0

# invoke only passing tests that are fully automated
./doit -k "not semi and not fails"

# invoke only semi-manual tests
./doit -k "semi"
```

### Invoking the test_sample script

The test is run in doit by:

```bash
function run_it()
```

The report is run in doit by:

```bash
function run_report()
```

See the out/ver directory for pdf or docx reports.

## Installation

The doc/installation_*.md files contain additional information
for installing pytest-ver
for various supported platforms:

* installation_macos.md : macOS
* installation_msys2.md : MSYS2 on Windows
* installation_ubu.md   : Ubuntu

## Python environment

Note that the set_env.sh script sets up the Python environment for
various platforms and defines these variables:

* $pyexe - the python executable
* $pybin - the location of the environment bin directory

```bash
source set_env.sh
source "${pybin}/activate"
# ... skip ...
$pyexe helpers/ver_report.py
```

## Report Output

* the output files are in the out/ver directory

```bash
ls out/ver
pytest_ver.log   # the log output
pytest_ver.txt   # output from doit script
*_protocol.json  # data generated during the test case run
summary.docx     # docx version of summary document
summary.pdf      # pdf version of summary document
summary.txt      # text version of summary document
test_protocol.*  # test protocol document in various formats
test_report.*    # test report document in various formats
trace.*          # trace matrix document in various formats 
```

## Check conftest.py

If you want to use the pytest-ver command-line switches from pytest,
ensure you call the cli_addoption() and cli_configure() functions in conftest.py:

```python
from pytest_ver import pth


# -------------------
def pytest_addoption(parser):
    pth.cfg.cli_addoption(parser)


# -------------------
def pytest_configure(config):
    pth.cfg.cli_configure(config)
```

## Writing a test case

* Import unittest and pytest as normal, then import pytest_ver:

```python
import unittest
import pytest
from pytest_ver import pth
```

Note: 'pth' is a global that holds a reference to PytestHarness.
The harness holds references to all classes needed during a test run.

* Next, create a normal unit test for pytest/unittest:

```python
# -------------------
class TestSample(unittest.TestCase):
    # --------------------
    @classmethod
    def setUpClass(cls):
        pth.init()

    # -------------------
    def setUp(self):
        pass

    # -------------------
    def tearDown(self):
        pass

    # --------------------
    @classmethod
    def tearDownClass(cls):
        pth.term()
```
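
If you prefer plain pytest tests to a unittest.TestCase, the same init/term
calls can be wrapped in a fixture. This is a minimal sketch based only on the
pth.init() and pth.term() calls shown above; it is not taken from the tutorial repo.

```python
import pytest

from pytest_ver import pth


# --------------------
# module-scoped, autouse fixture (sketch): pth.init() runs once before the
# module's tests and pth.term() runs once after they finish
@pytest.fixture(scope='module', autouse=True)
def pth_harness():
    pth.init()
    yield
    pth.term()
```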

* To create a protocol use pth.proto.protocol() with a protocol
  id and description
* To create steps within that protocol, use pth.proto.step()

```python
# --------------------
def test_0(self):
    # declare a new protocol id and its description
    pth.proto.protocol('tp-000', 'basic pass/fail tests')
    pth.proto.set_dut_serialno('sn-0123')

    pth.proto.step('try checks with 1 failure')
    pth.ver.verify_equal(1, 2, reqids='SRS-001')
    pth.ver.verify_equal(1, 1, reqids='SRS-002')
    pth.ver.verify_equal(1, 1, reqids='SRS-003')
    pth.proto.comment('should be 1 failure and 2 passes')
```

At this point, there is one protocol, TP-000, with one step.

Use doit to run it:

```bash
./doit -k test_0
```

### Output

Check stdout or the out/ver/pytest_ver.txt file:

* the output indicates that a failure occurred
* the return code from the script is non-zero

### Report documents

Check the generated documents in the out/ver/ directory.

* summary.pdf should indicate:
    * there are a total of 7 requirements (see srs_sample.json)
    * there are 2 passing requirements, which is 28.6% of all
      requirements
    * there is 1 failing requirement which is 14.3% of all
      requirements
    * there are 4 requirements that were not tested which is
      57.1% of all requirements

* test_report.pdf and/or test_protocol.pdf should indicate:
    * the test run type is "dev", so this was not a formal run
    * the test run id is "dev-001"; this can be set in cfg.json to
      track individual test runs
    * the date and time the document was generated
    * there should be one protocol TP-000
    * the location of the protocol is test_sample.py(line number)
    * the protocol had only 1 step, which tested requirement SRS-001
    * the report document shows the expected and actual values,
      that the result was a FAIL, and the location of the failing
      verify() call
    * the report document shows a comment
    * there is a table after the protocol showing the requirement
      SRS-001 and its description
    * note that the header and footer information comes from the
      cfg.json file

### Pytest markers

* You can use pytest markers as normal:

```python
# --------------------
# @pytest.mark.skip(reason='skip')
@pytest.mark.smoketest1
def test_init2(self):
    pth.proto.protocol('tp-002', 'test the init2')

    pth.proto.step('verify1 everything is equal')
    pth.ver.verify(1 == 1, reqid='SRS-001')
    # note: this is the second time this requirement is verified


# --------------------
# @pytest.mark.skip(reason='skip')
def test_init3(self):
    pth.proto.protocol('tp-003', 'test the init3')

    pth.proto.step('verify1 everything is equal')
    pth.ver.verify(1 == 1, reqid='SRS-004')
```
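
Other standard pytest features work the same way. For example, a parametrized
test can issue one verification per parameter set. This is a sketch only: the
protocol id tp-100 is made up for the example, and note that
@pytest.mark.parametrize requires a plain pytest function rather than a
unittest.TestCase method.

```python
import pytest

from pytest_ver import pth


# --------------------
@pytest.mark.parametrize('actual, expected', [(1, 1), (2, 2)])
def test_param(actual, expected):
    pth.proto.protocol('tp-100', 'parametrized checks')

    pth.proto.step('verify each parameter set')
    pth.ver.verify_equal(expected, actual, reqid='SRS-004')
```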

## Verification Functions

* To create verification tests, use pth.ver.verify():

```python
# note: you can use normal pytest and unittest functions
# but their results won't show up in the report
self.assertEqual(x, y)

# do a verification against a requirement
pth.ver.verify_equal(x, y, reqid='SRS-001')
pth.ver.verify_equal(x, 1, reqid='SRS-001')
# since all verifies passed, this step's result is PASS

pth.proto.step('verify2')
pth.ver.verify(False, reqid='SRS-002')
pth.ver.verify(True, reqid='SRS-002')
pth.ver.verify(True, reqid='SRS-002')
# since one verify failed, this step's result is FAIL

pth.proto.step('verify3')
pth.ver.verify(True, reqid='SRS-003')
pth.ver.verify(True, reqid='SRS-003')
pth.ver.verify(False, reqid='SRS-003')
# since one verify failed, this step's result is FAIL
```

* See doc/User Guide.docx for a full list of verification functions

```python
verify(actual)  # verify actual is true
verify_true(actual)  # verify actual is true
verify_false(actual)  # verify actual is false
verify_equal(expected, actual)  # verify actual == expected
verify_not_equal(expected, actual)  # verify actual != expected
verify_none(actual)  # verify actual is None
verify_is_none(actual)  # verify actual is None
verify_not_none(actual)  # verify actual is not None
verify_in(actual, exp_list)  # verify actual is in the expected list
verify_not_in(actual, exp_list)  # verify actual is not in the expected list
verify_lt(left, right)  # verify left < right
verify_le(left, right)  # verify left <= right
verify_gt(left, right)  # verify left > right
verify_ge(left, right)  # verify left >= right
verify_reqex(actual, regex)  # verify actual matches the regex
verify_not_reqex(actual, regex)  # verify actual does not match the regex
verify_delta(expected, actual, abs_tolerance)  # verify actual == expected within +/- tolerance
verify_not_delta(expected, actual, abs_tolerance)  # verify actual outside +/- tolerance
verify_delta_pct(expected, actual, pct_tolerance)  # verify actual == expected within +/- percent
verify_not_delta_pct(expected, actual, pct_tolerance)  # verify actual outside +/- percent
```
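
As an illustration, the tolerance and regex variants above might be called like
this. The values and requirement ids (SRS-005 to SRS-007) are made up for the
example, and it assumes these functions accept the same reqid keyword as the
other verify calls shown earlier:

```python
from pytest_ver import pth

measured_voltage = 5.2  # example reading (hypothetical)
measured_flow = 101.5   # example reading (hypothetical)
version_str = '1.2.3'   # example version string (hypothetical)

# accept a reading within +/- 0.5 of the 5.0 setpoint
pth.ver.verify_delta(5.0, measured_voltage, 0.5, reqid='SRS-005')

# accept a reading within +/- 2% of the expected flow rate
pth.ver.verify_delta_pct(100.0, measured_flow, 2.0, reqid='SRS-006')

# the version string must look like "1.2.3"
pth.ver.verify_reqex(version_str, r'\d+\.\d+\.\d+', reqid='SRS-007')
```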

* A pass does not generate any stdout
* A failure reports various information:
    * the location of the failure
    * the expected value (and its Python type)
    * the actual value (and its Python type)
    * a traceback at the time of the failure

```bash
FAILURE: at test_sample.py(37)
   Expected (int)     : 1
   Actual   (int)     : 2
test_sample.py:37 in test_0() -> pth.ver.verify_equal(1, 2, reqids='SRS-001')
pytest_ver/lib/verifier.py:98 in verify_equal() -> self._handle_fail(rs)
pytest_ver/lib/verifier.py:412 in _handle_fail() -> raise AssertionError(f'at {rs.location}{msg}')
```

* The test_report.txt document shows some additional information:
    * the protocol id and description and its location
    * which step failed
    * the date time stamp (dts) when the failure occurred
    * the requirement id

```bash
==== protocol: TP-000 basic pass/fail tests
     location: test_sample.py(33)
     Step 1  : try checks with 1 failure
       > dts          : 2022-12-11 06:00:52
       > result       : FAIL
       > actual       : 2
       > actual raw   : 2
       > expected     : 1
       > expected raw : 1
       > reqids       : {'SRS-001': 1}
       > location     : test_sample.py(37)
```

## Generate Report

* To generate a report, use pth.report()

* See helpers/ver_report.py:

```python
import os
import sys

sys.path.insert(1, os.path.join('.'))

from pytest_ver import *  # noqa

# generate the report
pth.cfg.cli_parse()
# force iuvmode to be False
pth.cfg.cli_set('iuvmode', False)

pth.init(report_mode=True)
pth.report()
pth.term()
```

To invoke it:

```bash
$pyexe helpers/ver_report.py
```

            
