pytest-md-report


:Name: pytest-md-report
:Version: 0.6.2
:Home page: https://github.com/thombashi/pytest-md-report
:Summary: A pytest plugin to generate test outcomes reports with markdown table format.
:Upload time: 2024-05-18 15:29:26
:Maintainer: None
:Docs URL: None
:Author: Tsuyoshi Hombashi
:Requires Python: >=3.7
:License: MIT License
:Keywords: pytest, plugin, markdown

.. contents:: **pytest-md-report**
   :backlinks: top
   :depth: 2


Summary
============================================
|PyPI pkg ver| |Supported Python ver| |Supported Python impl| |CI status| |CodeQL|

.. |PyPI pkg ver| image:: https://badge.fury.io/py/pytest-md-report.svg
    :target: https://badge.fury.io/py/pytest-md-report
    :alt: PyPI package version

.. |Supported Python impl| image:: https://img.shields.io/pypi/implementation/pytest-md-report.svg
    :target: https://pypi.org/project/pytest-md-report
    :alt: Supported Python implementations

.. |Supported Python ver| image:: https://img.shields.io/pypi/pyversions/pytest-md-report.svg
    :target: https://pypi.org/project/pytest-md-report
    :alt: Supported Python versions

.. |CI status| image:: https://github.com/thombashi/pytest-md-report/actions/workflows/ci.yml/badge.svg
    :target: https://github.com/thombashi/pytest-md-report/actions/workflows/ci.yml
    :alt: CI status of Linux/macOS/Windows

.. |CodeQL| image:: https://github.com/thombashi/pytest-md-report/actions/workflows/github-code-scanning/codeql/badge.svg
    :target: https://github.com/thombashi/pytest-md-report/actions/workflows/github-code-scanning/codeql
    :alt: CodeQL

A pytest plugin that generates test outcome reports in Markdown table format.

Installation
============================================
::

    pip install pytest-md-report


Usage
============================================
::

    pytest --md-report examples/

.. figure:: https://cdn.jsdelivr.net/gh/thombashi/pytest-md-report@master/ss/pytest_md_report_example.png
    :scale: 80%
    :alt: https://github.com/thombashi/pytest-md-report/blob/master/ss/pytest_md_report_example.png

    Output example


Other examples
--------------------------------------------
Increase the verbosity level (``--md-report-verbose`` option):

::

    pytest --md-report --md-report-verbose=1 examples/

.. figure:: https://cdn.jsdelivr.net/gh/thombashi/pytest-md-report@master/ss/pytest_md_report_example_verbose.png
    :scale: 80%
    :alt: https://github.com/thombashi/pytest-md-report/blob/master/ss/pytest_md_report_example_verbose.png

    Output example (verbose)

Do not render zero-value results (``--md-report-zeros empty`` option):

::

    pytest --md-report --md-report-zeros empty --md-report-color never examples/

::

    |         filepath         | passed | failed | error | skipped | xfailed | xpassed | SUBTOTAL |
    | ------------------------ | -----: | -----: | ----: | ------: | ------: | ------: | -------: |
    | examples/test_error.py   |        |        |     2 |         |         |         |        2 |
    | examples/test_failed.py  |        |      2 |       |         |         |         |        2 |
    | examples/test_pass.py    |      2 |        |       |         |         |         |        2 |
    | examples/test_skipped.py |        |        |       |       2 |         |         |        2 |
    | examples/test_xfailed.py |        |        |       |         |       2 |         |        2 |
    | examples/test_xpassed.py |        |        |       |         |         |       2 |        2 |
    | TOTAL                    |      2 |      2 |     2 |       2 |       2 |       2 |       12 |

Generate GitHub Flavored Markdown (GFM) report:

::

    pytest --md-report --md-report-flavor gfm examples/

The GFM rendering result can be seen `here <https://github.com/thombashi/pytest-md-report/blob/master/examples/gfm_report.md>`__.
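
If you want to inspect the rendered Markdown outside of the terminal, the report can also be written to a file. A minimal sketch, assuming an output filename of ``md_report.md`` (see the ``--md-report-output`` and ``--md-report-flavor`` options in the Options section below):

::

    pytest --md-report --md-report-flavor gfm --md-report-output md_report.md examples/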


Config file examples
--------------------------------------------
You can set the configuration in ``pyproject.toml`` or ``setup.cfg`` as follows.

:Example of ``pyproject.toml``:
    .. code-block:: toml

        [tool.pytest.ini_options]
        md_report = true
        md_report_verbose = 0
        md_report_color = "auto"

:Example of ``setup.cfg``:
    .. code-block:: ini

        [tool:pytest]
        md_report = True
        md_report_verbose = 0
        md_report_color = auto
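
Because the ini-options below are read from the first ``pytest.ini``/``tox.ini``/``setup.cfg``/``pyproject.toml`` file found, the same settings can also be placed in a ``pytest.ini``. A sketch of the assumed equivalent:

.. code-block:: ini

    [pytest]
    md_report = True
    md_report_verbose = 0
    md_report_color = auto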


Add report to pull requests
-----------------------------------------------
You can add test reports to pull requests with a GitHub Actions workflow like the one below:

.. code-block:: yaml

    name: md-report - pull request example

    on:
      pull_request:

    jobs:
      run-tests:
        runs-on: ubuntu-latest
        permissions:
          contents: read
          pull-requests: write

        steps:
          - uses: actions/checkout@v4

          - uses: actions/setup-python@v5
            with:
              python-version: '3.12'
              cache: pip

          - name: Install dependencies
            run: pip install --upgrade pytest-md-report

          - name: Run tests
            env:
              REPORT_OUTPUT: md_report.md
            shell: bash
            run: |
              echo "REPORT_FILE=${REPORT_OUTPUT}" >> "$GITHUB_ENV"
              pytest -v --md-report --md-report-flavor gfm --md-report-exclude-outcomes passed skipped xpassed --md-report-output "$REPORT_OUTPUT"

          - name: Render the report to the PR when tests fail
            uses: marocchino/sticky-pull-request-comment@v2
            if: failure()
            with:
              header: test-report
              recreate: true
              path: ${{ env.REPORT_FILE }}

.. figure:: https://cdn.jsdelivr.net/gh/thombashi/pytest-md-report@master/ss/md-report_gha.png
    :scale: 80%
    :alt: https://github.com/thombashi/pytest-md-report/blob/master/ss/md-report_gha.png

    Rendering result


Add report to pull requests: only failed tests
-----------------------------------------------
You can exclude specific test outcomes from the report with the ``--md-report-exclude-outcomes`` option.
The example below excludes the ``passed``, ``skipped``, and ``xpassed`` outcomes and, when tests fail, posts the resulting verbose report to the pull request. A command-line sketch for previewing the same filtered report locally follows the screenshot below.

.. code-block:: yaml

    name: md-report - pull request example

    on:
      pull_request:

    jobs:
      run-tests:
        runs-on: ubuntu-latest
        permissions:
          contents: read
          pull-requests: write

        steps:
          - uses: actions/checkout@v4

          - uses: actions/setup-python@v5
            with:
              python-version: '3.12'
              cache: pip

          - name: Install dependencies
            run: pip install --upgrade pytest-md-report

          - name: Run tests
            env:
              REPORT_OUTPUT: md_report.md
            shell: bash
            run: |
              echo "REPORT_FILE=${report_file}" >> "$GITHUB_ENV"
              pytest -v --md-report --md-report-flavor gfm --md-report-exclude-outcomes passed skipped xpassed --md-report-output "$report_file"

          - name: Render the report to the PR when tests fail
            uses: marocchino/sticky-pull-request-comment@v2
            if: failure()
            with:
              header: test-report
              recreate: true
              path: ${{ env.REPORT_FILE }}

.. figure:: https://cdn.jsdelivr.net/gh/thombashi/pytest-md-report@master/ss/md-report_exclude_outcomes_verbose_output.png
    :scale: 80%
    :alt: https://github.com/thombashi/pytest-md-report/blob/master/ss/md-report_exclude_outcomes_verbose_output.png

    Rendering result
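
To preview the same failed-only report locally before wiring it into a workflow, the exclusion option can be run directly; a sketch using the same outcome names as the workflow above:

::

    pytest -v --md-report --md-report-flavor gfm --md-report-exclude-outcomes passed skipped xpassed examples/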


Add reports to the job summary of GitHub Actions workflow runs
-----------------------------------------------------------------------------
The example below adds test reports to the job summary of GitHub Actions workflow runs when tests fail.

.. code-block:: yaml

    name: md-report - job summary example

    on:
      pull_request:

    jobs:
      run-tests:
        runs-on: ${{ matrix.os }}
        strategy:
          fail-fast: false
          matrix:
            os: [ubuntu-latest, windows-latest]

        steps:
          - uses: actions/checkout@v4

          - uses: actions/setup-python@v5
            with:
              python-version: '3.12'
              cache: pip

          - name: Install dependencies
            run: pip install --upgrade pytest-md-report

          - name: Run tests
            env:
              REPORT_OUTPUT: md_report.md
            shell: bash
            run: |
              echo "REPORT_FILE=${REPORT_OUTPUT}" >> "$GITHUB_ENV"
              pytest -v --md-report --md-report-flavor gfm --md-report-exclude-outcomes passed skipped xpassed --md-report-output "$REPORT_OUTPUT"

          - name: Output reports to the job summary when tests fail
            if: failure()
            shell: bash
            run: |
              if [ -f "$REPORT_FILE" ]; then
                echo "<details><summary>Failed Test Report</summary>" >> $GITHUB_STEP_SUMMARY
                echo "" >> $GITHUB_STEP_SUMMARY
                cat "$REPORT_FILE" >> $GITHUB_STEP_SUMMARY
                echo "" >> $GITHUB_STEP_SUMMARY
                echo "</details>" >> $GITHUB_STEP_SUMMARY
              fi

.. figure:: https://cdn.jsdelivr.net/gh/thombashi/pytest-md-report@master/ss/md-report_job-summary_full.png
    :scale: 80%
    :alt: https://github.com/thombashi/pytest-md-report/blob/master/ss/md-report_job-summary_full.png

    Rendering result


Options
============================================

Command options
--------------------------------------------
::

    generate test outcomes report with markdown table format:
      --md-report           Create a Markdown report. you can also specify the value
                            with PYTEST_MD_REPORT environment variable.
      --md-report-verbose=VERBOSITY_LEVEL
                            Verbosity level for pytest-md-report.
                            If not set, use the verbosity level of pytest.
                            Defaults to 0.
                            you can also specify the value with
                            PYTEST_MD_REPORT_VERBOSE environment variable.
      --md-report-output=FILEPATH
                            Path to a file to the outputs test report.
                            Overwrite a file content if the file already exists.
                            you can also specify the value with
                            PYTEST_MD_REPORT_OUTPUT environment variable.
      --md-report-tee       output test report for both standard output and a file.
                            you can also specify the value with PYTEST_MD_REPORT_TEE
                            environment variable.
      --md-report-color={auto,text,never}
                            How coloring output reports.
                            auto: detect the output destination and colorize reports
                            appropriately with the output.
                            for terminal output, render colored (text and
                            background) reports using ANSI escape codes.
                            for file output, render the report without color.
                            text: render colored text reports by using ANSI escape
                            codes.
                            never: render report without color.
                            Defaults to 'auto'.
                            you can also specify the value with
                            PYTEST_MD_REPORT_COLOR environment variable.
      --md-report-margin=MARGIN
                            Margin size for each cell.
                            Defaults to 1.
                            you can also specify the value with
                            PYTEST_MD_REPORT_MARGIN environment variable.
      --md-report-zeros={number,empty}
                            Rendering method for results of zero values.
                            number: render as a digit number (0).
                            empty: not rendering.
                            Automatically set to 'number' when the CI environment
                            variable is set to
                            TRUE (case insensitive) to display reports correctly at
                            CI services.
                            Defaults to 'number'.
                            you can also specify the value with
                            PYTEST_MD_REPORT_ZEROS environment variable.
      --md-report-success-color=MD_REPORT_SUCCESS_COLOR
                            Text color of succeeded results.
                            Specify a color name (one of the black/red/green/yellow/
                            blue/magenta/cyan/white/lightblack/lightred/lightgreen/l
                            ightyellow/lightblue/lightmagenta/lightcyan/lightwhite)
                            or a color code (e.g. #ff1020).
                            Defaults to 'light_green'.
                            you can also specify the value with
                            PYTEST_MD_REPORT_SUCCESS_COLOR environment variable.
      --md-report-skip-color=MD_REPORT_SKIP_COLOR
                            Text color of skipped results.
                            Specify a color name (one of the black/red/green/yellow/
                            blue/magenta/cyan/white/lightblack/lightred/lightgreen/l
                            ightyellow/lightblue/lightmagenta/lightcyan/lightwhite)
                            or a color code (e.g. #ff1020).
                            Defaults to 'light_yellow'.
                            you can also specify the value with
                            PYTEST_MD_REPORT_SKIP_COLOR environment variable.
      --md-report-error-color=MD_REPORT_ERROR_COLOR
                            Text color of failed results.
                            Specify a color name (one of the black/red/green/yellow/
                            blue/magenta/cyan/white/lightblack/lightred/lightgreen/l
                            ightyellow/lightblue/lightmagenta/lightcyan/lightwhite)
                            or a color code (e.g. #ff1020).
                            Defaults to 'light_red'.
                            you can also specify the value with
                            PYTEST_MD_REPORT_ERROR_COLOR environment variable.
      --md-report-flavor={common_mark,github,gfm,jekyll,kramdown}
                            Markdown flavor of the output report.
                            Defaults to 'common_mark'.
                            you can also specify the value with
                            PYTEST_MD_REPORT_FLAVOR environment variable.
      --md-report-exclude-outcomes=MD_REPORT_EXCLUDE_OUTCOMES [MD_REPORT_EXCLUDE_OUTCOMES ...]
                            List of test outcomes to exclude from the report.
                            When specifying as an environment variable, pass a
                            comma-separated string
                            (e.g. 'passed,skipped').
                            Defaults to '[]'.
                            you can also specify the value with
                            PYTEST_MD_REPORT_EXCLUDE_OUTCOMES environment variable.
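
As the descriptions above note, every option can also be supplied through its ``PYTEST_MD_REPORT_*`` environment variable, which is convenient in CI. A minimal sketch (using ``1`` as a truthy value for the flag-style variables is an assumption; the variable names come from the option descriptions above):

::

    # enable the plugin and pick a flavor via environment variables
    export PYTEST_MD_REPORT=1
    export PYTEST_MD_REPORT_FLAVOR=gfm
    # comma-separated string form, as described for --md-report-exclude-outcomes
    export PYTEST_MD_REPORT_EXCLUDE_OUTCOMES=passed,skipped,xpassed
    pytest examples/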


ini-options
--------------------------------------------
[pytest] ini-options in the first ``pytest.ini``/``tox.ini``/``setup.cfg``/``pyproject.toml (pytest 6.0.0 or later)`` file found:

::

  md_report (bool):     Create a Markdown report.
  md_report_verbose (string):
                        Verbosity level for pytest-md-report. If not set, use
                        the verbosity level of pytest. Defaults to 0.
  md_report_color (string):
                        How coloring output reports. auto: detect the output
                        destination and colorize reports appropriately with the
                        output. for terminal output, render colored (text and
                        background) reports using ANSI escape codes. for file
                        output, render the report without color. text: render
                        colored text reports by using ANSI escape codes. never:
                        render report without color. Defaults to 'auto'.
  md_report_output (string):
                        Path to a file to the outputs test report. Overwrite a
                        file content if the file already exists.
  md_report_tee (string):
                        output test report for both standard output and a file.
  md_report_margin (string):
                        Margin size for each cell. Defaults to 1.
  md_report_zeros (string):
                        Rendering method for results of zero values. number:
                        render as a digit number (0). empty: not rendering.
                        Automatically set to 'number' when the CI environment
                        variable is set to TRUE (case insensitive) to display
                        reports correctly at CI services. Defaults to 'number'.
  md_report_success_color (string):
                        Text color of succeeded results. Specify a color name
                        (one of the black/red/green/yellow/blue/magenta/cyan/whi
                        te/lightblack/lightred/lightgreen/lightyellow/lightblue/
                        lightmagenta/lightcyan/lightwhite) or a color code (e.g.
                        #ff1020). Defaults to 'light_green'.
  md_report_skip_color (string):
                        Text color of skipped results. Specify a color name (one
                        of the black/red/green/yellow/blue/magenta/cyan/white/li
                        ghtblack/lightred/lightgreen/lightyellow/lightblue/light
                        magenta/lightcyan/lightwhite) or a color code (e.g.
                        #ff1020). Defaults to 'light_yellow'.
  md_report_error_color (string):
                        Text color of failed results. Specify a color name (one
                        of the black/red/green/yellow/blue/magenta/cyan/white/li
                        ghtblack/lightred/lightgreen/lightyellow/lightblue/light
                        magenta/lightcyan/lightwhite) or a color code (e.g.
                        #ff1020). Defaults to 'light_red'.
  md_report_flavor (string):
                        Markdown flavor of the output report. Defaults to
                        'common_mark'.
  md_report_exclude_outcomes (args):
                        List of test outcomes to exclude from the report. When
                        specifying as an environment variable, pass a
                        comma-separated string (e.g. 'passed,skipped'). Defaults
                        to '[]'.


Dependencies
============================================
- Python 3.7+
- `Python package dependencies (automatically installed) <https://github.com/thombashi/pytest-md-report/network/dependencies>`__

            
