Previously the reporter could only be specified as a plain reporter
name, e.g. `xml`; other reporter options could not be specified.
This change is not particularly useful for the built-in reporters,
as it mostly comes in handy for combining a specific custom reporter
with custom arguments, and the built-in reporters do not have those.
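For illustration only (the reporter and option names below are made
up, and the `::`-separated `key=value` spec form is an assumption
based on the current documentation; the exact syntax may differ by
version):
```
# Select a hypothetical custom reporter and pass it a custom option
./tests --reporter my-reporter::verbosity=high
```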
This changes the compact reporter's summary of test run totals to use
the same format as the console reporter. This means that while the
output is no longer on a single line (it now takes two), it includes
totals for `failedButOk` test cases and assertions, which were
previously missing.
Not all reporters use a format that supports this, so the TeamCity
and Automake reporters still do not report it. The console reporter
now reports it even on successful runs; previously it only reported
the rng seed in the header, which was shown only for failed runs or
for runs with `-s`.
Closes #2065
When the added Bazel configuration flag is enabled,
a default JUnit reporter will be added if the XML
environment variable is defined.
Fix include paths for generated config header.
Enable Bazel config by default when building with
Bazel.
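A minimal sketch of the intended behaviour, assuming the variable in
question is Bazel's standard `XML_OUTPUT_FILE` (which `bazel test`
sets for each test target) and a hypothetical test binary `./tests`:
```
# With the Bazel config enabled, defining the variable adds a JUnit
# reporter writing to the given path, simulated here outside Bazel.
XML_OUTPUT_FILE=/tmp/results.xml ./tests
```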
Co-authored-by: Martin Hořeňovský <martin.horenovsky@gmail.com>
This greatly simplifies running the tests in a single Catch2 binary
in parallel from external test runners. Instead of having to
shard the tests by tags or test names, an external test runner
can now just ask for test shard 2 (out of X) and execute it
in a single process, without having to know which tests are actually
in the shard.
Note that sharding also applies to test listing, and happens after
the tests have been ordered according to the `--order` feature.
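For illustration, assuming the flag names `--shard-count` and
`--shard-index` (as documented in current Catch2) and a hypothetical
test binary `./tests`:
```
# Run only the tests that fall into shard 2 out of 4 shards,
# in a single process, without naming the tests in that shard.
./tests --shard-count 4 --shard-index 2
```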
This means that e.g. for a `TEST_CASE` with two sibling `SECTION`s
the event will fire twice, because the `TEST_CASE` will be entered
twice.
Closes #2107 (the event mentioned there already exists, but this
is its counterpart that we also want to provide to users)
The test runner already has a `--durations` option to print durations.
However, this isn't entirely satisfactory.
When there are many tests, this produces output spam which makes it hard
to find the test failure output. Nevertheless, it is helpful to be
informed of tests which are unusually slow.
Therefore, introduce a new option, `--min-duration`, that causes all
durations above a given threshold to be printed. This allows slow
tests to be visible without mentioning every test.
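For example, with a hypothetical test binary `./tests` (the threshold
is given in seconds):
```
# Report the duration of every test case that takes longer than 0.5s,
# without printing durations for the fast ones.
./tests --min-duration 0.5
```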