mote-gtest-gui


Namemote-gtest-gui JSON
Version 0.9.0 PyPI version JSON
download
home_pagehttps://github.com/tomzox/gtest_gui
SummaryModule tester's Gtest GUI is a full-featured graphical user-interface to C++ test applications using the GoogleTest framework.
upload_time2023-06-08 19:39:31
maintainer
docs_urlNone
authorT. Zoerner
requires_python
license
keywords google-test gtest testing-tools test-runners tkinter gui
VCS
bugtrack_url
requirements No requirements were recorded.
Travis-CI No Travis.
coveralls test coverage No coveralls.
            Module-Tester's Gtest GUI
=========================

Description
-----------

**Module-tester's GtestGui** is a test-runner with graphical user-interface for C++ test applications using the GoogleTest framework.

GtestGui will work with any application that implements the Gtest command line interface, however it is designed especially for C++ developers using test-driven design based on module testing and integration testing. These differ from unit-testing by longer execution times and usually not fully predictable results (i.e. "flakiness"), which in turn require multiple repetitions of each test case. To support this well, GtestGui offers firstly easily accessible ways for test case filtering and concurrent scheduling across multiple CPUs; Secondly, there are multiple features for tracking progress and managing results via sorting and filtering, which are fully usable already while a test campaign still is in progress.

GtestGui typically is started with the path of an executable on the command line which is built using the gtest library. Using the "Run" button in the GUI, this executable can then be started and its progress be monitored live in the result log frame of the main window. Additional controls allow specifying options such as test case filter string and repetition count, which are forwarded to the executable via the respective "\ ``--gtest_*``" command line arguments.

While tests are running, test verdicts can be monitored live in the result log. Traces of passed or failed tests can already be analyzed simply by double-clicking on an entry in the result log, which will open the trace of that particular test case run. For this purpose GtestGui comes bundled with *Trowser*, which is a graphical browser for large line-oriented text files. Trowser in principle is just a text browser with syntax highlighting and search, but its search capabilities are designed especially to facilitate analysis of complex debug logs, essentially by allowing to build a condensed (filtered) view of the trace file in the search result window via incremental searches and manual manipulations. By default, GtestGui is configured to user *trowser.py*, but there is also a more modern-looking Qt5 variant. GtestGui can also be configured to use any other GUI application for opening traces.

Test control
------------

The application main window consists of a menu bar, a frame containing test controls, a frame containing the test result log, and a text frame for trace preview. This chapter describes the top-most frame with the test controls.

Executable selection
~~~~~~~~~~~~~~~~~~~~

Before tests can be started or a test filter be defined, a target executable has to be selected. If the executable file was not already specified on the command line, then this is done via *Select executable file...* in the *Control* menu. Either select a file via the file selector dialog, or choose one from the list of previously used executables. (Note when hovering the mouse over enries in this list, the full path and timestamp of the file is displayed as tool-tip. The entry is grayed out if the file no longer exists.)

Upon selecting an executable file, the test case list is read automatocally by running it with the "\ ``--gtest_list_tests``" command line option. If that fails with an error, or if the list is empty, executable selection is aborted.

Whenever starting a new test campaign, GtestGui will automatically check if a new executable version is available by checking the file timestamp. If a change is detected, the test case list is read again. The *Refresh test case list* command in the menu allows performing the same steps independently. That command is useful if you want to specify newly added test case names in the test case filter string before starting a new test campaign (maybe to only run the new test cases.)

Results of the previous executable are kept in the result log window, but the log entries are updated to include the executable name. Repeating test cases of other executables via the *Repeat* button is not possible; You have to manually switch back to the respective executable to allow that.

The executable cannot be "refreshed" or switched while a test campaign is running.

Test campaign control buttons
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Most prominent in the test control frame are the green buttons, which directly control execution of the configured executable file. The same commands are also in the "Control" menu:

*Run*:
  Starts the text executable in a separate process, with its output redirected into a pipe which is read by GtestGui for progress monitoring. The output is also saved into a file.

  Note when the timestamp of the executable file on disk has changed, GtestGui automatically reads the test case list to check for changes. An error will be reported if the current test case filter contains pattern that no longer match any test case. If the timestamp has not changed, the age of the file is shown in a status message below the buttons, to warn you about this in case you forget to build the executable after making changes.

  Multiple processes are started if the *CPUs* value is larger than 1. Most of the time, GtestGui will use gtest's "sharding" feature, which assigns a static sub-set of tests to each process. However, if repetition count is larger than one and the number of configured CPUs is larger than the number of test cases, or if the remainder of division of test cases by CPUs is large, GtestGui may instead or additionally partition by repetitions.

  Note when a test process crashes during a campaign, it is currently not restarted. That is because gtest's static sharding does not allow disabling the instable test case without influencing test case partitioning.

*Stop*:
  Sends a TERM signal to the test processes and waits for them to finish. When termination takes a long time (possibly because the executable is hung) and the button is clicked a second time, a KILL signal is sent.

*Resume*:
  Restarts test case execution using the same test case filter setting and executable version as previously used, without resetting the test result statistics. Other options, such as repetition or CPU count may be modified. This operation is useful when a long test run has to be paused temporarily as the CPUs are needed otherwise, or for changing options such as the number of used CPUs.

  This command will also use the same version of the test executable as used previously if option *Create copy of executable file* is enabled, see `Configuration`_. This allows resuming execution even when the executable at the configured path no longer exists, for example due to a failed build.

  When resuming, scheduling cannot restart exactly where it was stopped due to limitations in gtest. If repetition count is 1, all test cases will be rescheduled. For higher repetition count, the lowest number of remaining repetitions across all selected test cases is rescheduled.

*Repeat*:
  Repeats the test cases marked manually for repetition via the result log, or all previously failed test cases if none were selected. This allows quick repetition of individual test cases without changing the test case filter.

Test case filter
~~~~~~~~~~~~~~~~

The entry field at the top of the test control frame allows specifying a test case filter, so that only a matching sub-set of test cases is run. The filter can be entered manually using the same syntax as the "\ ``--gtest-filter``" command line option: The format of a filter expression is a ":"-separated list of test case names of wildcard patterns (positive patterns), optionally followed by a "-" and another ":"-separated pattern list (negative patterns). A test matches the filter if and only if it matches any of the positive patterns, but none of the negative ones. Wildcard characters are "*" (matching any sub-string) and "?" (matching any single character). As a special case, when no positive pattern is specified, all test cases are considered matching.

Alternatively, the test case filter can be modified via the drop-down menu below the entry field (which can be opened by the Cursor-Down key or a click on the drop-down button next to the entry field). The menu has entries for selecting and deselecting entries test suites as well as individual test cases. When modifying the filter this way, GtestGui will update the entry field with the shortest filter expression it can find using trailing wild card and negative patterns.

Yet another alternative for modifying test case filters is the test case list dialog, either via its context menu or the "Return" and "Delete" key bindings. Finally note any modification to the test case filter can be undone using "Control-Z" key binding in the entry field, or redone using "Control-Y" key binding.

After renaming a test case or adding a new test case, use the *Refresh test case list* command in the *Control* menu to read the test case list from the executable file. Afterward the new test case names can be used in the filter string.

Test control options
~~~~~~~~~~~~~~~~~~~~

*Repetitions*:
  If a value larger than 1 is entered here, it is passed via the "\ ``--gtest_repeat=NNN``" option on the executable's command line. This causes each test case to be repeated the given number of times.

*CPUs*:
  This option is described in `Test campaign control buttons`_.

*Ignore filter*:
  When more than one CPU is configured, this option can be used for scheduling different sets of test cases on different CPUs: The first set of CPUs runs only test cases matching the test case filter. The second set of CPUs runs all test cases. The size of the second set is determined by the given number.

  This feature is useful when running a long test campaign after modifying a test case as it allows effectively increasing the repetition count of the modified test case. It is also useful when running test cases that aim to find race conditions, as the additional concurrent execution of all test cases serves to generate a background load that increases randomness of thread scheduling.

*Fail limit*:
  When set to a non-zero value, the test campaign is stopped after the given number of failed test cases was reached. Note for the limit the total of failures is counted across all test cases and all CPUs.

  This option is **not** using the respective Gtest option, as that option would not work as expected when using multiple CPUs (as it would work independently for tests on each CPU). Instead, the handling is implemented in result handling in GtestGui. As there is a delay resulting from buffering in the pipeline between test application and GtestGui, more test cases may have failed in the mean time, so that the actual number of failures after the actual end of all test processes may be higher than the limit.

*Clean traces of passed tests*:
  When enabled, trace output of passed test case runs is not written to the trace output file. If all test cases of a test campaign passed, the complete output file is removed automatically when tests are stopped. This feature is intended for long test campaigns to reduce consumption of disk space.

*Clean core files*:
  When enabled, core files with pattern "\ ``core.PID``" are deleted automatically after a test case crashed. (See chapter `Result log`_ for more details on core files.) Note GtestGui can only clean core files from processes it controls directly. It is not able to clean core dumps created by death tests that are child processes of the main test process.

*Shuffle execution order*:
  When enabled, "\ ``--gtest_shuffle``" option is set via the executable's command line. This option randomizes the order of test execution.

*Run disabled tests*:
  When enabled, "\ ``--gtest_also_run_disabled_tests``" option is set via the executable's command line. The option enables execution of test suites and individual test cases whose name starts with "\ ``DISABLED``".

  The option also affects test case filter and test case selection menus within GtestGui: When the option is not set, entering filter pattern "\ ``*DISABLED*``" would raise a warning that it matches no test cases (even if there are some with that name). The drop-down menu below the entry field would no show such names.

*Break on failure*:
  When enabled, "\ ``--gtest_break_on_failure``" option is set via the executable's command line. This will cause SIGTRAP to be sent to the test process upon the first failure. As no debugger is attached, this will cause the process to crash.

  When core dumps are enabled in the kernel, the core will be saved by GtestGui and can be analyzed via the *Extract stack trace from core dump* command in the result log's context menu. When core dumps are not enabled, this option is probably not useful.

*Break on exception*:
  When enabled, "\ ``--gtest_catch_exceptions=0``" option is set via the executable's command line. This will disable catching of exceptions by the Gtest framework, which means any exception not caught by the test case itself causes the test process to crash due to an unhandled exception.

  When core dumps are enabled in the kernel, the core will be saved by GtestGui and can be analyzed via the *Extract stack trace from core dump* command in the result log's context menu. When core dumps are not enabled, this option is probably not useful.

*Valgrind* and *Valgrind - 2nd option set*:
  The two valgrind options serve to run each execution of the test executable under valgrind using the configured command line. Notably, valgrind checks are performed across the complete lifetime of the test process, thus spanning all test cases or test repetitions. Therefore, if for example a memory leak is reported at the end, it cannot be determined which test case caused it (or it may even be caused by interaction of the test sequence.) Therefore valgrind errors are reported with a special entry in the result log. Some kind of errors such as invalid memory accesses can be mapped to test cases based on the position of the error report in the output stream. Note however that the position may not exactly reflect the timing of occurrence due to possible buffering in output streams within the test executable.) See `Result log`_ and `Configuration`_ for more details.

Status and progress monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The lower left part of the test control frame shows the status of the latest test campaign. The left box with label "Status" shows three numbers with the following meaning:

*Running*:
  Shows the number of test processes currently executing test cases. (See `Caveats`_ for an explanation why this number may be lower than the number of requested "CPUs".)

*Passed*:
  Shows the number of test cases that were passed or skipped.

*Failed*:
  Shows the number of test cases that failed or crashed. The number also includes a possible additional fail verdict by valgrind at the end of a test process.

The left box with label "Progress" shows the completion ratio in form of a progress bar. The ratio is calculated as the number of received results (i.e. passed, skipped, failed, or crashed) divided by the number of expected results. The number of expected results is the number of test cases selected by the test case filter, multiplied with the repetition count.

In case the *Ignore filter* option in `Test control options`_ is set to a non-zero value, completion ratio of the respective test jobs is disregarded for the progress display, as these jobs are terminated automatically once the regular test jobs have completed. Note the "Status" frame however does include results received from these jobs, so that the numbers shown there may exceed the configured repetition count for tests matching the test case filter.

When hovering the mouse over the progress bar, a tool-tip text shows additional details about the progress, namely the ratio of completed test cases and repetitions and estimated remaining run time.

Result log
----------

The result log frame is located in the middle of the main window. When started for the first time, the log is usually empty. However, results can also be imported via the command line, for example from a file that contains output from a test executable that was redirected into a file. The result log may also show results from a previous run of GtestGui, if auto-import is enabled in `Configuration`_.

The result log contains one line for each ``[ OK ]``, ``[ SKIPPED ]`` and ``[ FAILED ]`` line in the test application's gtest-generated output. The test executable's output stream is redirected to a pipe that is read continuously by GtestGui for this purpose. GtestGui also stores this output to a file, so that the trace output between ``[ RUN ]`` and verdict text line can be opened in a trace browser.

In addition to the standard verdicts generated by the gtest library, GtestGui supports verdict ``[ CRASHED ]``, which is appended to the trace file when an executable terminates with a non-zero exit code within a test case.

When running tests under **valgrind**, a special result log entry "Valgrind error" is added if valgrind's exit code signals detection of an error. This case is special, as it's not known which test case caused the error, if more than one was running. Double-clicking this entry will therefore open the complete trace file.

Each entry in the result log contains the following information:

-   Local time at which the result was captured.

-   Verdict: Passed, failed, skipped, crashed, or special case valgrind of startup errors.

-   Test case name.

-   "Seed" value parsed from trace output, if a regular expression was configured for that purpose in `Configuration`_.

-   Test duration as reported by gtest (in milliseconds).

-   In case of failure, source code file and line where of the first "Failure" was reported in trace output.

-   Timestamp or name of executable that generated the test output, in case the executable has changed since running the test.

When selecting an entry by clicking on it, the corresponding trace output is shown in the trace preview frame below the result log. In case of a test case failure, the view is centered on the first line containing "Failure" and the text is marked by light red background.

Double-clicking on an entry opens the trace of that entry in an external application, which can be selected via the Configuration dialog. Default application is "trace browser", a text browser with syntax highlighting and search and filtering capabilities especially tailored for trace analysis. As GTest writes trace output of all test cases into a single file, only the portion between "\ ``[ RUN ]``" and "\ ``[ OK ]``" or "\ ``[ FAILED ]``" respectively of the selected test case is extracted and passed to the application.

When clicking on an entry with the right mouse button, a context menu opens that allow opening the trace file, adding or removing the selected test case from the filter option, scheduling a test for repetition, excluding a result from the log display, or removing results. The context menu also has an entry for sending the complete trace file to the external trace browser application; This may be needed in rare cases when behavior of a test case depends on the sequence of preceding tests.

Additional commands are offered in the "Result log" drop-down of the main window menu. The commands allow sorting and filtering the result log by various criteria. By default, log entries are sorted by the time they were created at. When running a test campaign, it's recommended to enabled the *Show only failed* filter, so that it's easy to track which test cases failed.

While a test campaign is running, new result entries are added at the bottom (when in default sort order) and the view is scrolled automatically to keep the added entry visible. This auto-scrolling can be stopped by selecting any entry in the list. To return to auto-scrolling, deselect the last entry either by clicking on it while holding the *Control* key, or by clicking in the empty line at the end of the list.

Result entries can be deleted using the context menu or by pressing the *Delete* key. To delete all entries, press *Control-A* (i.e. select complete list) and then *Delete*. Note actual trace files are removed from disk only if all test case results stored in it have been deleted from the log.

Post-mortem core dump analysis
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

POSIX platforms only: When selecting the result of a test case that caused a crash of the test process, the context menu has an entry that allows analyzing the generated core dump file. The Analysis consists of a thread overview and stack-trace (backtrace) of each thread. This allows finding quickly at which test step the crash occurred.

To allow this, ``/proc/sys/kernel/core_pattern`` needs to be configured as "\ ``core.%p``", or alternatively as "\ ``core``", when additionally ``/proc/sys/kernel/core_uses_pid`` is set to "1". This way, a file named "\ ``core.PID``" will be created by the operating system in the directory where GtestGui was started.

If GtestGui thus finds a core file with a matching process ID after a process crashed, it will automatically preserve that core file for analysis by moving it into the directory where trace text output files are stored and renaming it to the same name of the corresponding trace file with prefix "core". It will also preserve the executable file version by keeping a hard link to the same executable in the trace directory for as long as the core file exists.

Test case list dialog
---------------------

A dialog showing the list of test cases read from the selected executable file can be opened via the control menu.

For each test case, the list shows in the first column if the test case is currently enabled for execution, the test case name, the number of times it has passed and failed in the current campaign and its accumulated execution time.

By default, test cases in the list are in the order as received. The list can be sorted in different ways using the context menu: The list can be sorted alpabetically, by execution or failure count within the current test campaign, or by test case execution duration.

By default, all test cases defined in the executable are listed. You can filter the list via the context menu to show only test cases enabled via test case filter in the main window, or only test cases that failed in the current test campaign, or only test cases whose name or test suite namedoesn't start with "\ ``DISABLED_``".

The "Run" column is updated to reflect changes in the "Test filter" entry field in the main window. Updates occur when you press the "Return" key, or when keyboard focus is moved out of the entry field. When the "Show only tests enabled to run" filter is active in the test case list dialog, the test case sub-set shown in the list is also updated to match the current filter string. This feature can be used for testing the manually entered filter string against the list, as you'll see exactly which test cases it selects.

Inversely, you can use the list for modifying the test case filter in the main window: This can be done via the *Add/Remove selected test case* and *Add/Remove selected test suite* commands in the list's context menu. The latter works on all test cases in the test suite of the selected test case. The same can be achive via key bindings: *Return* adds selected test cases and *Del* removes selected test cases. Note for selecting multiple lines in the list via keyboard, hold the *Shift* button while moving the cursor via the Up/Down keys.

Test cases named "\ ``DISABLED_``" can only be enabled via the test case list commands when option *Run disabled tests* in the main window is checked.

Note while the filter string is empty, all test cases are considered as enabled.  Nevertheless, the context menu will offer adding selected test cases. If you do so, then implictly all test cases except for the one added by the command get disabled.

After making changes to the test case filter via the dialog, the filter string in the main window is updated automatically to reflect the selection. The filter uses wildcards to minimize the filter string length.

Job list dialog
---------------

A dialog window showing the status of test processes in a currently ongoing test campaign can be opened via the *Open job list* command in the control menu. If no test campaign is active, it will only show a message informing that currently no processes are running.

During an ongoing test campaign, the list shows for each process its ID assigned by the operating system ("PID"), if it is a background job ("BgJob"), the number of bytes of trace output received from it ("Traced"), the number of test case results found in received trace ("Results"), the percentage of completed results from expected results ("Done"), and finally the name of the current test case reported via "\ ``[ RUN ]``" in received trace.

Note a "background job" is one for which the test case filter is ignored as per *Ignore filter* option in `Test control options`_. These jobs are special as they are terminated automatically when the last regular job is completed. For this reason their completion ratio (i.e. "Done" column) is disregarded for progress display in the main window.

The dialog is useful in case you suspect that a test campaign may be hung (for example when a test case ran into a deadlock or busy loop.) You could notice that firstly by the number of results not increasing and if that's the case, by the number of received bytes received as trace output not increasing. (The latter will however not if your test cases generate few or no trace output.)

The context menu allows sending an *ABORT* signal to the process, which will terminate it and cause a core image to be dumped on UNIX. The test case will be reported as crashed in the result log. This can be used for debugging a hung process: You can find where it was hung by using the *Extract stack trace from core dump* command in context menu of the result log entry. Alternatively, you could use the PID for attaching a debugger to the live process (e.g. "\ ``gdb -p PID exe_file``").

Configuration
-------------

User-interface options
~~~~~~~~~~~~~~~~~~~~~~

A few simple options for the user interface are available directly in the *Configure* menu:

*Select font for result log*:
  Selects the font used in the result log list in the main window, as well as in the test case list and job list dialogs. By default, the font is determined by Python's Tkinter and depends on the platform.

*Select font for trace preview*:
  Selects the font used in the trace preview frame at the bottom of the main window. By default, a fixed font is used; Font family and size are determined by Python's Tkinter and depend on the platform.

*Show test controls*:
  This option can be unchecked for temporarily hiding the test control frame in the main window. This allows for more space for the result log during result analysis. This option is not stored in persistent configuration and will always be enabled upon start of the application. Note while the controls are not shown, some operations are still possible via key bindings as well as the *Control* menu.

*Show tool-tip popups*:
  The option can be unchecked to disable display of "tool-tip" popup windows when hovering with the mouse on labels or check-buttons in the main window and dialogs which have such built-in help. Changes of the option are stored in the configuration file, so that it is persistent. This may be useful once you are sufficiently familiar with the tool.

Test management options
~~~~~~~~~~~~~~~~~~~~~~~

The following configuration options are available in the *Options* dialog window that can be opened via the *Configure* menu:

*Trace browser*
  This entry field selects the external application used for displaying trace files and trace snippets, when double-clicking on an entry in the result log. By default, trace browser *trowser.py* is used. You can either specify just the application file name, or its full path if it is not found via ``PATH`` configured in environment. The path of the trace file to be displayed will be appended to the given command line.

  Currently the application path name or parameters cannot contain space characters, as these are assumed to be separators.

  Note for the Windows platform: When using the default of ``trowser.py``, you may need to insert the Python interpreter in front of "trowser.py", depending on your Python installation. In that case you need to add the full path to where ``trowser.py`` is installed.

  Enable option *Browser supports reading from STDIN* if the selected trace browser supports reading text from "standard input" via a pipeline. In this case filename "\ ``-``" is passed on the command line instead of a file name.  The default browser *trowser.py* supports this. When not enabled, GtestGui has to create temporary files for passing trace snippets to the browser application.

*Pattern for seed*
  If a regular expression pattern is specified here, it will be applied to the trace of each test case. The string returned by the first match group (i.e. the first set of capturing parenthesis) will be shown in the corresponding result log as "seed". (This is intended for allowing repeat of a test sequence exactly even for test cases using randomness, by starting their PRNG with the same seed. This is not yet supported however, due to lack of an interface for passing a list of seed values via the GTest command line interface.)

*Directory for trace files*
  Specifies the directory where to store temporary files for trace output and core dump files collected from the executable under test. If empty, the current working directory at the time of starting GtestGui is used. Note sub-directories will be created in the given directory for each executable file version. If you want to use the "copy executable" option, the specified directory needs to be in the same filesystem as the executables. If you want to keep core dumps, the directory needs to be in the same filesystem as the working directory (because they will be moved, not copied due to size.)

*Automatically remove trace files of passed tests upon exit*
  When enabled, output from passed test cases is automatically removed from created trace files upon exiting the application. Trace files and sub-directories only containing passed test results are thus removed entirely. Note imported trace files are never modified or removed automatically, so you may need to remove these manually once after enabling this option (e.g. via result log context menu).

*Automatically import trace files upon start*
  When enabled, all trace files found in sub-directories under the configured trace directory are read after starting GtestGui. Test case results found in the files are shown in the result log window.

*Create copy of executable under test*
  When enabled, a copy of the current executable under test is made within the configured trace directory. (Technically, the copy is achieved by creating a so-called "hard link", so that no additional disk space is needed.) This is recommended so that recompiling the executable does not affect the current test run (i.e. compilation may either fail with error "file busy" when tests are running, or tests may crash). This option is required for allowing to extract stack traces from core dump files taken from an older executable version. Note this option may not work when using trace directories in locations such as /tmp on UNIX-like systems, as these usually are configured to disallow executable files for security reasons.

*Valgrind command line*
  *UNIX only:* Command lines to use for running test executables when one of the "Valgrind" options in the main window is enabled. The executable name and gtest options will be appended to the given command line.

  There are two separate entry fields, corresponding to the two check-buttons in the main window. This is intended for allowing to configure a configuration variant that runs faster and one that is slower but performs deeper analysis. By default, the command will perform check for memory leaks (notably at end of all test cases, not for individual test cases in a run of multiple tests) and for use of uninitialized memory. The second command line has additional options for tracking the origin of uninitialized memory.

  Currently the path or parameters cannot contain space characters, as these are assumed to be separators.

*Valgrind supports --exit-code*
  When this option is set, parameter "--error-exitcode=125" will be appended to the given valgrind command lines. This is required for detecting automatically that valgrind found errors during test execution. Only when enabled, result logs will report valgrind errors.

The debugger used for extracting stack traces from core files (POSIX only) is currently not configurable; It is hard-coded to "\ ``gdb``", which should be somewhere in the ``PATH`` configured in environment.

Caveats
-------

This chapter lists notable limitations that are not easy to overcome due to design choices.

Concurrent scheduling
  Concurrent scheduling requested via option *CPUs* is based on the "sharding" feature provided by Gtest framework. Unfortunately, Gtest only supports static case test partitioning, which means for example when using two CPUs, the set of test cases is split in two parts, of which the first is executed in one test process and the second in the other process.

  One problem arises when the number of test cases is smaller than the number of requested CPUs. This typically occurs when trying to run a single test case many times.  Sharding would then only use a single CPU, as it does not consider repetitions in its "sharding" algorithm. GtestGui works around that by calculating if it is more efficient to partition test cases by repetition than by sharding, or if a combination of both is even better. In the mentioned example, it would not use sharding, but instead run half the number of repetitions in one process, and the second half in the second. For more complex configurations such as 10 repetitions of 9 test cases on 8 CPUs, a combination of both methods will be used.

  A second problem, that GtestGui cannot work around, occurs when test cases have non-uniform execution time. As the "sharding" algorithm uses static partitioning solely based on the number of test cases per process, differences in execution times are not considered. For example, when two tests are scheduled for 100 times and test case A takes 1 second, but test case B only 1 millisecond, Gtest will still schedule all runs of A in the first process and all runs of B in the second process. Thus, the second process will sit idle for 99.9% of the total test execution time.

Test case crashes
  In a well-behaved test application, failures are normally reported via Gtest macros or exceptions. Thus, the failure is recorded and the next test case is executed. Sometimes however, a test case may have a bug that leads to a crash of the complete process. In this case all following test cases or test case repetitions are no longer executed.

  Currently GtestGui will not attempt to restart tests after a crash, because it expects that the same test case will crash again and thus keep blocking following tests. It is not possible to disable the instable test, as this would interfere with partitioning by Gtest "sharding", i.e. the set of test cases run by that test case would be altered. The only way around is for the use to manually disable the instable test case and then restart the test campaign.

Overload of GUI by concurrent unit-testing
  When using multiple CPUs for running very short-running test cases, such as typical unit-tests, the GUI application may be overloaded and thus appear unresponsive and hung. This is a result of Python's implementation of multi-threading, which does not allow using more than one effective CPU due to exclusive locks in interpreter execution. Therefore, when parsing of test output streams takes 100% of a CPU, no time is left for updating the GUI even though separate threads are used.

For more minor constraints and ideas for enhancements see file ``TODO.txt``, which is part of the package.

Files
-----

**$HOME/.config/gtest_gui/gtest_gui.rc**
  This file stores parameters that are configured via the `Configuration`_ dialog. Additional it contains persistent state such as the list of previously loaded executable files and the size and position of resizable dialog windows.

**trace.NNNN/trace.NNNN**
  Output from test applications is stored to sub-directories files called "\ ``trace.``" with a number appended. The number is an arbitrary identifier for the test executable whose output they contain. The directory contains a separate output file for each spawned test process.  Files in this directory are removed when removing results in the `Result log`_ of the GUI. By default, traces containing only passed test case results are also cleaned upon exiting the GUI.

  By default, the sub-directories are created in the current working directory where GtestGui is started. Another base directory may be specified in `Configuration`_.

  If multiple instances of the GUI are started, they will use the same directory for storage. For single-user systems this works well enough, as conflicts may occur only when pressing "Run" button concurrently in different instances. For multi-user setup it is not recommended to share the directory. (Seeing other user's test results would be confusing anyway.)

**TEMP/gtest_gui_tmpXXX**
  A temporary directory at the default place used by the operating system for such purpose will be created, for for example for unpacking trace snippets for display, or exporting trace files to archives. A different directory is used by each instance of GtestGui. The directory is removed automatically when the GUI is closed. Note the latter may fail on some platforms if a trace browser application still has opened an exported trace file at the time of quitting the GUI.





            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/tomzox/gtest_gui",
    "name": "mote-gtest-gui",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "google-test,gtest,testing-tools,test-runners,tkinter,GUI",
    "author": "T. Zoerner",
    "author_email": "tomzox@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/68/92/a29947bdda120670b41cafd4a90a732d1518fee13afb0f7a3df380046a7f/mote-gtest-gui-0.9.0.tar.gz",
    "platform": "posix",
    "description": "Module-Tester's Gtest GUI\n=========================\n\nDescription\n-----------\n\n**Module-tester's GtestGui** is a test-runner with graphical user-interface for C++ test applications using the GoogleTest framework.\n\nGtestGui will work with any application that implements the Gtest command line interface, however it is designed especially for C++ developers using test-driven design based on module testing and integration testing. These differ from unit-testing by longer execution times and usually not fully predictable results (i.e. \"flakiness\"), which in turn require multiple repetitions of each test case. To support this well, GtestGui offers firstly easily accessible ways for test case filtering and concurrent scheduling across multiple CPUs; Secondly, there are multiple features for tracking progress and managing results via sorting and filtering, which are fully usable already while a test campaign still is in progress.\n\nGtestGui typically is started with the path of an executable on the command line which is built using the gtest library. Using the \"Run\" button in the GUI, this executable can then be started and its progress be monitored live in the result log frame of the main window. Additional controls allow specifying options such as test case filter string and repetition count, which are forwarded to the executable via the respective \"\\ ``--gtest_*``\" command line arguments.\n\nWhile tests are running, test verdicts can be monitored live in the result log. Traces of passed or failed tests can already be analyzed simply by double-clicking on an entry in the result log, which will open the trace of that particular test case run. For this purpose GtestGui comes bundled with *Trowser*, which is a graphical browser for large line-oriented text files. Trowser in principle is just a text browser with syntax highlighting and search, but its search capabilities are designed especially to facilitate analysis of complex debug logs, essentially by allowing to build a condensed (filtered) view of the trace file in the search result window via incremental searches and manual manipulations. By default, GtestGui is configured to user *trowser.py*, but there is also a more modern-looking Qt5 variant. GtestGui can also be configured to use any other GUI application for opening traces.\n\nTest control\n------------\n\nThe application main window consists of a menu bar, a frame containing test controls, a frame containing the test result log, and a text frame for trace preview. This chapter describes the top-most frame with the test controls.\n\nExecutable selection\n~~~~~~~~~~~~~~~~~~~~\n\nBefore tests can be started or a test filter be defined, a target executable has to be selected. If the executable file was not already specified on the command line, then this is done via *Select executable file...* in the *Control* menu. Either select a file via the file selector dialog, or choose one from the list of previously used executables. (Note when hovering the mouse over enries in this list, the full path and timestamp of the file is displayed as tool-tip. The entry is grayed out if the file no longer exists.)\n\nUpon selecting an executable file, the test case list is read automatocally by running it with the \"\\ ``--gtest_list_tests``\" command line option. If that fails with an error, or if the list is empty, executable selection is aborted.\n\nWhenever starting a new test campaign, GtestGui will automatically check if a new executable version is available by checking the file timestamp. If a change is detected, the test case list is read again. The *Refresh test case list* command in the menu allows performing the same steps independently. That command is useful if you want to specify newly added test case names in the test case filter string before starting a new test campaign (maybe to only run the new test cases.)\n\nResults of the previous executable are kept in the result log window, but the log entries are updated to include the executable name. Repeating test cases of other executables via the *Repeat* button is not possible; You have to manually switch back to the respective executable to allow that.\n\nThe executable cannot be \"refreshed\" or switched while a test campaign is running.\n\nTest campaign control buttons\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nMost prominent in the test control frame are the green buttons, which directly control execution of the configured executable file. The same commands are also in the \"Control\" menu:\n\n*Run*:\n  Starts the text executable in a separate process, with its output redirected into a pipe which is read by GtestGui for progress monitoring. The output is also saved into a file.\n\n  Note when the timestamp of the executable file on disk has changed, GtestGui automatically reads the test case list to check for changes. An error will be reported if the current test case filter contains pattern that no longer match any test case. If the timestamp has not changed, the age of the file is shown in a status message below the buttons, to warn you about this in case you forget to build the executable after making changes.\n\n  Multiple processes are started if the *CPUs* value is larger than 1. Most of the time, GtestGui will use gtest's \"sharding\" feature, which assigns a static sub-set of tests to each process. However, if repetition count is larger than one and the number of configured CPUs is larger than the number of test cases, or if the remainder of division of test cases by CPUs is large, GtestGui may instead or additionally partition by repetitions.\n\n  Note when a test process crashes during a campaign, it is currently not restarted. That is because gtest's static sharding does not allow disabling the instable test case without influencing test case partitioning.\n\n*Stop*:\n  Sends a TERM signal to the test processes and waits for them to finish. When termination takes a long time (possibly because the executable is hung) and the button is clicked a second time, a KILL signal is sent.\n\n*Resume*:\n  Restarts test case execution using the same test case filter setting and executable version as previously used, without resetting the test result statistics. Other options, such as repetition or CPU count may be modified. This operation is useful when a long test run has to be paused temporarily as the CPUs are needed otherwise, or for changing options such as the number of used CPUs.\n\n  This command will also use the same version of the test executable as used previously if option *Create copy of executable file* is enabled, see `Configuration`_. This allows resuming execution even when the executable at the configured path no longer exists, for example due to a failed build.\n\n  When resuming, scheduling cannot restart exactly where it was stopped due to limitations in gtest. If repetition count is 1, all test cases will be rescheduled. For higher repetition count, the lowest number of remaining repetitions across all selected test cases is rescheduled.\n\n*Repeat*:\n  Repeats the test cases marked manually for repetition via the result log, or all previously failed test cases if none were selected. This allows quick repetition of individual test cases without changing the test case filter.\n\nTest case filter\n~~~~~~~~~~~~~~~~\n\nThe entry field at the top of the test control frame allows specifying a test case filter, so that only a matching sub-set of test cases is run. The filter can be entered manually using the same syntax as the \"\\ ``--gtest-filter``\" command line option: The format of a filter expression is a \":\"-separated list of test case names of wildcard patterns (positive patterns), optionally followed by a \"-\" and another \":\"-separated pattern list (negative patterns). A test matches the filter if and only if it matches any of the positive patterns, but none of the negative ones. Wildcard characters are \"*\" (matching any sub-string) and \"?\" (matching any single character). As a special case, when no positive pattern is specified, all test cases are considered matching.\n\nAlternatively, the test case filter can be modified via the drop-down menu below the entry field (which can be opened by the Cursor-Down key or a click on the drop-down button next to the entry field). The menu has entries for selecting and deselecting entries test suites as well as individual test cases. When modifying the filter this way, GtestGui will update the entry field with the shortest filter expression it can find using trailing wild card and negative patterns.\n\nYet another alternative for modifying test case filters is the test case list dialog, either via its context menu or the \"Return\" and \"Delete\" key bindings. Finally note any modification to the test case filter can be undone using \"Control-Z\" key binding in the entry field, or redone using \"Control-Y\" key binding.\n\nAfter renaming a test case or adding a new test case, use the *Refresh test case list* command in the *Control* menu to read the test case list from the executable file. Afterward the new test case names can be used in the filter string.\n\nTest control options\n~~~~~~~~~~~~~~~~~~~~\n\n*Repetitions*:\n  If a value larger than 1 is entered here, it is passed via the \"\\ ``--gtest_repeat=NNN``\" option on the executable's command line. This causes each test case to be repeated the given number of times.\n\n*CPUs*:\n  This option is described in `Test campaign control buttons`_.\n\n*Ignore filter*:\n  When more than one CPU is configured, this option can be used for scheduling different sets of test cases on different CPUs: The first set of CPUs runs only test cases matching the test case filter. The second set of CPUs runs all test cases. The size of the second set is determined by the given number.\n\n  This feature is useful when running a long test campaign after modifying a test case as it allows effectively increasing the repetition count of the modified test case. It is also useful when running test cases that aim to find race conditions, as the additional concurrent execution of all test cases serves to generate a background load that increases randomness of thread scheduling.\n\n*Fail limit*:\n  When set to a non-zero value, the test campaign is stopped after the given number of failed test cases was reached. Note for the limit the total of failures is counted across all test cases and all CPUs.\n\n  This option is **not** using the respective Gtest option, as that option would not work as expected when using multiple CPUs (as it would work independently for tests on each CPU). Instead, the handling is implemented in result handling in GtestGui. As there is a delay resulting from buffering in the pipeline between test application and GtestGui, more test cases may have failed in the mean time, so that the actual number of failures after the actual end of all test processes may be higher than the limit.\n\n*Clean traces of passed tests*:\n  When enabled, trace output of passed test case runs is not written to the trace output file. If all test cases of a test campaign passed, the complete output file is removed automatically when tests are stopped. This feature is intended for long test campaigns to reduce consumption of disk space.\n\n*Clean core files*:\n  When enabled, core files with pattern \"\\ ``core.PID``\" are deleted automatically after a test case crashed. (See chapter `Result log`_ for more details on core files.) Note GtestGui can only clean core files from processes it controls directly. It is not able to clean core dumps created by death tests that are child processes of the main test process.\n\n*Shuffle execution order*:\n  When enabled, \"\\ ``--gtest_shuffle``\" option is set via the executable's command line. This option randomizes the order of test execution.\n\n*Run disabled tests*:\n  When enabled, \"\\ ``--gtest_also_run_disabled_tests``\" option is set via the executable's command line. The option enables execution of test suites and individual test cases whose name starts with \"\\ ``DISABLED``\".\n\n  The option also affects test case filter and test case selection menus within GtestGui: When the option is not set, entering filter pattern \"\\ ``*DISABLED*``\" would raise a warning that it matches no test cases (even if there are some with that name). The drop-down menu below the entry field would no show such names.\n\n*Break on failure*:\n  When enabled, \"\\ ``--gtest_break_on_failure``\" option is set via the executable's command line. This will cause SIGTRAP to be sent to the test process upon the first failure. As no debugger is attached, this will cause the process to crash.\n\n  When core dumps are enabled in the kernel, the core will be saved by GtestGui and can be analyzed via the *Extract stack trace from core dump* command in the result log's context menu. When core dumps are not enabled, this option is probably not useful.\n\n*Break on exception*:\n  When enabled, \"\\ ``--gtest_catch_exceptions=0``\" option is set via the executable's command line. This will disable catching of exceptions by the Gtest framework, which means any exception not caught by the test case itself causes the test process to crash due to an unhandled exception.\n\n  When core dumps are enabled in the kernel, the core will be saved by GtestGui and can be analyzed via the *Extract stack trace from core dump* command in the result log's context menu. When core dumps are not enabled, this option is probably not useful.\n\n*Valgrind* and *Valgrind - 2nd option set*:\n  The two valgrind options serve to run each execution of the test executable under valgrind using the configured command line. Notably, valgrind checks are performed across the complete lifetime of the test process, thus spanning all test cases or test repetitions. Therefore, if for example a memory leak is reported at the end, it cannot be determined which test case caused it (or it may even be caused by interaction of the test sequence.) Therefore valgrind errors are reported with a special entry in the result log. Some kind of errors such as invalid memory accesses can be mapped to test cases based on the position of the error report in the output stream. Note however that the position may not exactly reflect the timing of occurrence due to possible buffering in output streams within the test executable.) See `Result log`_ and `Configuration`_ for more details.\n\nStatus and progress monitoring\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe lower left part of the test control frame shows the status of the latest test campaign. The left box with label \"Status\" shows three numbers with the following meaning:\n\n*Running*:\n  Shows the number of test processes currently executing test cases. (See `Caveats`_ for an explanation why this number may be lower than the number of requested \"CPUs\".)\n\n*Passed*:\n  Shows the number of test cases that were passed or skipped.\n\n*Failed*:\n  Shows the number of test cases that failed or crashed. The number also includes a possible additional fail verdict by valgrind at the end of a test process.\n\nThe left box with label \"Progress\" shows the completion ratio in form of a progress bar. The ratio is calculated as the number of received results (i.e. passed, skipped, failed, or crashed) divided by the number of expected results. The number of expected results is the number of test cases selected by the test case filter, multiplied with the repetition count.\n\nIn case the *Ignore filter* option in `Test control options`_ is set to a non-zero value, completion ratio of the respective test jobs is disregarded for the progress display, as these jobs are terminated automatically once the regular test jobs have completed. Note the \"Status\" frame however does include results received from these jobs, so that the numbers shown there may exceed the configured repetition count for tests matching the test case filter.\n\nWhen hovering the mouse over the progress bar, a tool-tip text shows additional details about the progress, namely the ratio of completed test cases and repetitions and estimated remaining run time.\n\nResult log\n----------\n\nThe result log frame is located in the middle of the main window. When started for the first time, the log is usually empty. However, results can also be imported via the command line, for example from a file that contains output from a test executable that was redirected into a file. The result log may also show results from a previous run of GtestGui, if auto-import is enabled in `Configuration`_.\n\nThe result log contains one line for each ``[ OK ]``, ``[ SKIPPED ]`` and ``[ FAILED ]`` line in the test application's gtest-generated output. The test executable's output stream is redirected to a pipe that is read continuously by GtestGui for this purpose. GtestGui also stores this output to a file, so that the trace output between ``[ RUN ]`` and verdict text line can be opened in a trace browser.\n\nIn addition to the standard verdicts generated by the gtest library, GtestGui supports verdict ``[ CRASHED ]``, which is appended to the trace file when an executable terminates with a non-zero exit code within a test case.\n\nWhen running tests under **valgrind**, a special result log entry \"Valgrind error\" is added if valgrind's exit code signals detection of an error. This case is special, as it's not known which test case caused the error, if more than one was running. Double-clicking this entry will therefore open the complete trace file.\n\nEach entry in the result log contains the following information:\n\n-   Local time at which the result was captured.\n\n-   Verdict: Passed, failed, skipped, crashed, or special case valgrind of startup errors.\n\n-   Test case name.\n\n-   \"Seed\" value parsed from trace output, if a regular expression was configured for that purpose in `Configuration`_.\n\n-   Test duration as reported by gtest (in milliseconds).\n\n-   In case of failure, source code file and line where of the first \"Failure\" was reported in trace output.\n\n-   Timestamp or name of executable that generated the test output, in case the executable has changed since running the test.\n\nWhen selecting an entry by clicking on it, the corresponding trace output is shown in the trace preview frame below the result log. In case of a test case failure, the view is centered on the first line containing \"Failure\" and the text is marked by light red background.\n\nDouble-clicking on an entry opens the trace of that entry in an external application, which can be selected via the Configuration dialog. Default application is \"trace browser\", a text browser with syntax highlighting and search and filtering capabilities especially tailored for trace analysis. As GTest writes trace output of all test cases into a single file, only the portion between \"\\ ``[ RUN ]``\" and \"\\ ``[ OK ]``\" or \"\\ ``[ FAILED ]``\" respectively of the selected test case is extracted and passed to the application.\n\nWhen clicking on an entry with the right mouse button, a context menu opens that allow opening the trace file, adding or removing the selected test case from the filter option, scheduling a test for repetition, excluding a result from the log display, or removing results. The context menu also has an entry for sending the complete trace file to the external trace browser application; This may be needed in rare cases when behavior of a test case depends on the sequence of preceding tests.\n\nAdditional commands are offered in the \"Result log\" drop-down of the main window menu. The commands allow sorting and filtering the result log by various criteria. By default, log entries are sorted by the time they were created at. When running a test campaign, it's recommended to enabled the *Show only failed* filter, so that it's easy to track which test cases failed.\n\nWhile a test campaign is running, new result entries are added at the bottom (when in default sort order) and the view is scrolled automatically to keep the added entry visible. This auto-scrolling can be stopped by selecting any entry in the list. To return to auto-scrolling, deselect the last entry either by clicking on it while holding the *Control* key, or by clicking in the empty line at the end of the list.\n\nResult entries can be deleted using the context menu or by pressing the *Delete* key. To delete all entries, press *Control-A* (i.e. select complete list) and then *Delete*. Note actual trace files are removed from disk only if all test case results stored in it have been deleted from the log.\n\nPost-mortem core dump analysis\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nPOSIX platforms only: When selecting the result of a test case that caused a crash of the test process, the context menu has an entry that allows analyzing the generated core dump file. The Analysis consists of a thread overview and stack-trace (backtrace) of each thread. This allows finding quickly at which test step the crash occurred.\n\nTo allow this, ``/proc/sys/kernel/core_pattern`` needs to be configured as \"\\ ``core.%p``\", or alternatively as \"\\ ``core``\", when additionally ``/proc/sys/kernel/core_uses_pid`` is set to \"1\". This way, a file named \"\\ ``core.PID``\" will be created by the operating system in the directory where GtestGui was started.\n\nIf GtestGui thus finds a core file with a matching process ID after a process crashed, it will automatically preserve that core file for analysis by moving it into the directory where trace text output files are stored and renaming it to the same name of the corresponding trace file with prefix \"core\". It will also preserve the executable file version by keeping a hard link to the same executable in the trace directory for as long as the core file exists.\n\nTest case list dialog\n---------------------\n\nA dialog showing the list of test cases read from the selected executable file can be opened via the control menu.\n\nFor each test case, the list shows in the first column if the test case is currently enabled for execution, the test case name, the number of times it has passed and failed in the current campaign and its accumulated execution time.\n\nBy default, test cases in the list are in the order as received. The list can be sorted in different ways using the context menu: The list can be sorted alpabetically, by execution or failure count within the current test campaign, or by test case execution duration.\n\nBy default, all test cases defined in the executable are listed. You can filter the list via the context menu to show only test cases enabled via test case filter in the main window, or only test cases that failed in the current test campaign, or only test cases whose name or test suite namedoesn't start with \"\\ ``DISABLED_``\".\n\nThe \"Run\" column is updated to reflect changes in the \"Test filter\" entry field in the main window. Updates occur when you press the \"Return\" key, or when keyboard focus is moved out of the entry field. When the \"Show only tests enabled to run\" filter is active in the test case list dialog, the test case sub-set shown in the list is also updated to match the current filter string. This feature can be used for testing the manually entered filter string against the list, as you'll see exactly which test cases it selects.\n\nInversely, you can use the list for modifying the test case filter in the main window: This can be done via the *Add/Remove selected test case* and *Add/Remove selected test suite* commands in the list's context menu. The latter works on all test cases in the test suite of the selected test case. The same can be achive via key bindings: *Return* adds selected test cases and *Del* removes selected test cases. Note for selecting multiple lines in the list via keyboard, hold the *Shift* button while moving the cursor via the Up/Down keys.\n\nTest cases named \"\\ ``DISABLED_``\" can only be enabled via the test case list commands when option *Run disabled tests* in the main window is checked.\n\nNote while the filter string is empty, all test cases are considered as enabled.  Nevertheless, the context menu will offer adding selected test cases. If you do so, then implictly all test cases except for the one added by the command get disabled.\n\nAfter making changes to the test case filter via the dialog, the filter string in the main window is updated automatically to reflect the selection. The filter uses wildcards to minimize the filter string length.\n\nJob list dialog\n---------------\n\nA dialog window showing the status of test processes in a currently ongoing test campaign can be opened via the *Open job list* command in the control menu. If no test campaign is active, it will only show a message informing that currently no processes are running.\n\nDuring an ongoing test campaign, the list shows for each process its ID assigned by the operating system (\"PID\"), if it is a background job (\"BgJob\"), the number of bytes of trace output received from it (\"Traced\"), the number of test case results found in received trace (\"Results\"), the percentage of completed results from expected results (\"Done\"), and finally the name of the current test case reported via \"\\ ``[ RUN ]``\" in received trace.\n\nNote a \"background job\" is one for which the test case filter is ignored as per *Ignore filter* option in `Test control options`_. These jobs are special as they are terminated automatically when the last regular job is completed. For this reason their completion ratio (i.e. \"Done\" column) is disregarded for progress display in the main window.\n\nThe dialog is useful in case you suspect that a test campaign may be hung (for example when a test case ran into a deadlock or busy loop.) You could notice that firstly by the number of results not increasing and if that's the case, by the number of received bytes received as trace output not increasing. (The latter will however not if your test cases generate few or no trace output.)\n\nThe context menu allows sending an *ABORT* signal to the process, which will terminate it and cause a core image to be dumped on UNIX. The test case will be reported as crashed in the result log. This can be used for debugging a hung process: You can find where it was hung by using the *Extract stack trace from core dump* command in context menu of the result log entry. Alternatively, you could use the PID for attaching a debugger to the live process (e.g. \"\\ ``gdb -p PID exe_file``\").\n\nConfiguration\n-------------\n\nUser-interface options\n~~~~~~~~~~~~~~~~~~~~~~\n\nA few simple options for the user interface are available directly in the *Configure* menu:\n\n*Select font for result log*:\n  Selects the font used in the result log list in the main window, as well as in the test case list and job list dialogs. By default, the font is determined by Python's Tkinter and depends on the platform.\n\n*Select font for trace preview*:\n  Selects the font used in the trace preview frame at the bottom of the main window. By default, a fixed font is used; Font family and size are determined by Python's Tkinter and depend on the platform.\n\n*Show test controls*:\n  This option can be unchecked for temporarily hiding the test control frame in the main window. This allows for more space for the result log during result analysis. This option is not stored in persistent configuration and will always be enabled upon start of the application. Note while the controls are not shown, some operations are still possible via key bindings as well as the *Control* menu.\n\n*Show tool-tip popups*:\n  The option can be unchecked to disable display of \"tool-tip\" popup windows when hovering with the mouse on labels or check-buttons in the main window and dialogs which have such built-in help. Changes of the option are stored in the configuration file, so that it is persistent. This may be useful once you are sufficiently familiar with the tool.\n\nTest management options\n~~~~~~~~~~~~~~~~~~~~~~~\n\nThe following configuration options are available in the *Options* dialog window that can be opened via the *Configure* menu:\n\n*Trace browser*\n  This entry field selects the external application used for displaying trace files and trace snippets, when double-clicking on an entry in the result log. By default, trace browser *trowser.py* is used. You can either specify just the application file name, or its full path if it is not found via ``PATH`` configured in environment. The path of the trace file to be displayed will be appended to the given command line.\n\n  Currently the application path name or parameters cannot contain space characters, as these are assumed to be separators.\n\n  Note for the Windows platform: When using the default of ``trowser.py``, you may need to insert the Python interpreter in front of \"trowser.py\", depending on your Python installation. In that case you need to add the full path to where ``trowser.py`` is installed.\n\n  Enable option *Browser supports reading from STDIN* if the selected trace browser supports reading text from \"standard input\" via a pipeline. In this case filename \"\\ ``-``\" is passed on the command line instead of a file name.  The default browser *trowser.py* supports this. When not enabled, GtestGui has to create temporary files for passing trace snippets to the browser application.\n\n*Pattern for seed*\n  If a regular expression pattern is specified here, it will be applied to the trace of each test case. The string returned by the first match group (i.e. the first set of capturing parenthesis) will be shown in the corresponding result log as \"seed\". (This is intended for allowing repeat of a test sequence exactly even for test cases using randomness, by starting their PRNG with the same seed. This is not yet supported however, due to lack of an interface for passing a list of seed values via the GTest command line interface.)\n\n*Directory for trace files*\n  Specifies the directory where to store temporary files for trace output and core dump files collected from the executable under test. If empty, the current working directory at the time of starting GtestGui is used. Note sub-directories will be created in the given directory for each executable file version. If you want to use the \"copy executable\" option, the specified directory needs to be in the same filesystem as the executables. If you want to keep core dumps, the directory needs to be in the same filesystem as the working directory (because they will be moved, not copied due to size.)\n\n*Automatically remove trace files of passed tests upon exit*\n  When enabled, output from passed test cases is automatically removed from created trace files upon exiting the application. Trace files and sub-directories only containing passed test results are thus removed entirely. Note imported trace files are never modified or removed automatically, so you may need to remove these manually once after enabling this option (e.g. via result log context menu).\n\n*Automatically import trace files upon start*\n  When enabled, all trace files found in sub-directories under the configured trace directory are read after starting GtestGui. Test case results found in the files are shown in the result log window.\n\n*Create copy of executable under test*\n  When enabled, a copy of the current executable under test is made within the configured trace directory. (Technically, the copy is achieved by creating a so-called \"hard link\", so that no additional disk space is needed.) This is recommended so that recompiling the executable does not affect the current test run (i.e. compilation may either fail with error \"file busy\" when tests are running, or tests may crash). This option is required for allowing to extract stack traces from core dump files taken from an older executable version. Note this option may not work when using trace directories in locations such as /tmp on UNIX-like systems, as these usually are configured to disallow executable files for security reasons.\n\n*Valgrind command line*\n  *UNIX only:* Command lines to use for running test executables when one of the \"Valgrind\" options in the main window is enabled. The executable name and gtest options will be appended to the given command line.\n\n  There are two separate entry fields, corresponding to the two check-buttons in the main window. This is intended for allowing to configure a configuration variant that runs faster and one that is slower but performs deeper analysis. By default, the command will perform check for memory leaks (notably at end of all test cases, not for individual test cases in a run of multiple tests) and for use of uninitialized memory. The second command line has additional options for tracking the origin of uninitialized memory.\n\n  Currently the path or parameters cannot contain space characters, as these are assumed to be separators.\n\n*Valgrind supports --exit-code*\n  When this option is set, parameter \"--error-exitcode=125\" will be appended to the given valgrind command lines. This is required for detecting automatically that valgrind found errors during test execution. Only when enabled, result logs will report valgrind errors.\n\nThe debugger used for extracting stack traces from core files (POSIX only) is currently not configurable; It is hard-coded to \"\\ ``gdb``\", which should be somewhere in the ``PATH`` configured in environment.\n\nCaveats\n-------\n\nThis chapter lists notable limitations that are not easy to overcome due to design choices.\n\nConcurrent scheduling\n  Concurrent scheduling requested via option *CPUs* is based on the \"sharding\" feature provided by Gtest framework. Unfortunately, Gtest only supports static case test partitioning, which means for example when using two CPUs, the set of test cases is split in two parts, of which the first is executed in one test process and the second in the other process.\n\n  One problem arises when the number of test cases is smaller than the number of requested CPUs. This typically occurs when trying to run a single test case many times.  Sharding would then only use a single CPU, as it does not consider repetitions in its \"sharding\" algorithm. GtestGui works around that by calculating if it is more efficient to partition test cases by repetition than by sharding, or if a combination of both is even better. In the mentioned example, it would not use sharding, but instead run half the number of repetitions in one process, and the second half in the second. For more complex configurations such as 10 repetitions of 9 test cases on 8 CPUs, a combination of both methods will be used.\n\n  A second problem, that GtestGui cannot work around, occurs when test cases have non-uniform execution time. As the \"sharding\" algorithm uses static partitioning solely based on the number of test cases per process, differences in execution times are not considered. For example, when two tests are scheduled for 100 times and test case A takes 1 second, but test case B only 1 millisecond, Gtest will still schedule all runs of A in the first process and all runs of B in the second process. Thus, the second process will sit idle for 99.9% of the total test execution time.\n\nTest case crashes\n  In a well-behaved test application, failures are normally reported via Gtest macros or exceptions. Thus, the failure is recorded and the next test case is executed. Sometimes however, a test case may have a bug that leads to a crash of the complete process. In this case all following test cases or test case repetitions are no longer executed.\n\n  Currently GtestGui will not attempt to restart tests after a crash, because it expects that the same test case will crash again and thus keep blocking following tests. It is not possible to disable the instable test, as this would interfere with partitioning by Gtest \"sharding\", i.e. the set of test cases run by that test case would be altered. The only way around is for the use to manually disable the instable test case and then restart the test campaign.\n\nOverload of GUI by concurrent unit-testing\n  When using multiple CPUs for running very short-running test cases, such as typical unit-tests, the GUI application may be overloaded and thus appear unresponsive and hung. This is a result of Python's implementation of multi-threading, which does not allow using more than one effective CPU due to exclusive locks in interpreter execution. Therefore, when parsing of test output streams takes 100% of a CPU, no time is left for updating the GUI even though separate threads are used.\n\nFor more minor constraints and ideas for enhancements see file ``TODO.txt``, which is part of the package.\n\nFiles\n-----\n\n**$HOME/.config/gtest_gui/gtest_gui.rc**\n  This file stores parameters that are configured via the `Configuration`_ dialog. Additional it contains persistent state such as the list of previously loaded executable files and the size and position of resizable dialog windows.\n\n**trace.NNNN/trace.NNNN**\n  Output from test applications is stored to sub-directories files called \"\\ ``trace.``\" with a number appended. The number is an arbitrary identifier for the test executable whose output they contain. The directory contains a separate output file for each spawned test process.  Files in this directory are removed when removing results in the `Result log`_ of the GUI. By default, traces containing only passed test case results are also cleaned upon exiting the GUI.\n\n  By default, the sub-directories are created in the current working directory where GtestGui is started. Another base directory may be specified in `Configuration`_.\n\n  If multiple instances of the GUI are started, they will use the same directory for storage. For single-user systems this works well enough, as conflicts may occur only when pressing \"Run\" button concurrently in different instances. For multi-user setup it is not recommended to share the directory. (Seeing other user's test results would be confusing anyway.)\n\n**TEMP/gtest_gui_tmpXXX**\n  A temporary directory at the default place used by the operating system for such purpose will be created, for for example for unpacking trace snippets for display, or exporting trace files to archives. A different directory is used by each instance of GtestGui. The directory is removed automatically when the GUI is closed. Note the latter may fail on some platforms if a trace browser application still has opened an exported trace file at the time of quitting the GUI.\n\n\n\n\n",
    "bugtrack_url": null,
    "license": "",
    "summary": "Module tester's Gtest GUI is a full-featured graphical user-interface to C++ test applications using the GoogleTest framework.",
    "version": "0.9.0",
    "project_urls": {
        "Homepage": "https://github.com/tomzox/gtest_gui"
    },
    "split_keywords": [
        "google-test",
        "gtest",
        "testing-tools",
        "test-runners",
        "tkinter",
        "gui"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "6892a29947bdda120670b41cafd4a90a732d1518fee13afb0f7a3df380046a7f",
                "md5": "5c4a1a1d3afb466809823857b7aa8bac",
                "sha256": "3a29f1a161724eaa7af395e76a621c7078d39c1f1d07cc3b5be6bd0759e2daf9"
            },
            "downloads": -1,
            "filename": "mote-gtest-gui-0.9.0.tar.gz",
            "has_sig": false,
            "md5_digest": "5c4a1a1d3afb466809823857b7aa8bac",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 129905,
            "upload_time": "2023-06-08T19:39:31",
            "upload_time_iso_8601": "2023-06-08T19:39:31.585723Z",
            "url": "https://files.pythonhosted.org/packages/68/92/a29947bdda120670b41cafd4a90a732d1518fee13afb0f7a3df380046a7f/mote-gtest-gui-0.9.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-06-08 19:39:31",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "tomzox",
    "github_project": "gtest_gui",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "mote-gtest-gui"
}
        
Elapsed time: 0.11633s