The test runner already has a `--durations` option to print test
durations. However, this isn't entirely satisfactory: when there are
many tests, it produces so much output that the actual test failure
output becomes hard to find. At the same time, it is still useful to
be told about tests that are unusually slow.
Therefore, introduce a new `--min-duration` option that prints the
duration of every test whose runtime exceeds a given threshold. This
keeps slow tests visible without mentioning every test.
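As an illustration (the binary name and the threshold are just
examples), a run such as

    ./SelfTest --min-duration 0.5

reports the duration of every test that crosses the half-second
threshold, while staying quiet about the rest.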
This simplified variant supports only a subset of the functionality
of `std::unique_ptr<T>`: `Catch::Detail::unique_ptr<T>` supports only
single-element pointers (no array support) with the default deleter.
By removing support for custom deleters, we also avoid the
significant machinery needed to support EBO (empty base optimization),
which speeds up instantiations of `unique_ptr<T>` considerably.
Catch2 also currently does not need `unique_ptr<T[]>`, so that is not
supported either.
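For illustration, a minimal sketch of what such a single-object,
default-deleter-only owning pointer can look like (illustrative only,
not the actual Catch2 source):

    #include <utility>

    namespace Detail {
        template <typename T>
        class unique_ptr {
            T* m_ptr = nullptr;

        public:
            unique_ptr() = default;
            explicit unique_ptr(T* ptr) noexcept : m_ptr(ptr) {}

            // Move-only, like std::unique_ptr
            unique_ptr(unique_ptr const&) = delete;
            unique_ptr& operator=(unique_ptr const&) = delete;
            unique_ptr(unique_ptr&& other) noexcept
                : m_ptr(std::exchange(other.m_ptr, nullptr)) {}
            unique_ptr& operator=(unique_ptr&& other) noexcept {
                if (this != &other) {
                    // Always plain delete: default deleter only
                    delete m_ptr;
                    m_ptr = std::exchange(other.m_ptr, nullptr);
                }
                return *this;
            }

            ~unique_ptr() { delete m_ptr; }

            T& operator*() const { return *m_ptr; }
            T* operator->() const noexcept { return m_ptr; }
            T* get() const noexcept { return m_ptr; }
            explicit operator bool() const noexcept {
                return m_ptr != nullptr;
            }
        };
    } // namespace Detail

Because the deleter is fixed, there is no deleter member to store and
no EBO machinery is needed; the whole class is a single raw pointer.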
There are two reasons for this:
1) It is highly unlikely that anyone has a use for this header: it
has no customization points, provides only the simplest possible
main, and anyone including it cannot link against the static library,
which also provides a default main implementation.
2) Its being a header caused extra complications with the convenience
headers and our checking script. Keeping it would either require
special handling in the checking script, or break users of the main
convenience header.
All in all, it is simpler and better in the long term to remove it
than to fix its problems.
I do not think we need a safeguard against forgetting to add files to
the CMake lists anymore, and as it was, it caused an annoying false
positive about the default main implementation.
Thanks to the changes to the compilation model, the tests for the
benchmarking macros have become part of the normal test run. This
means that the only purpose the separately compiled tests served was
to waste CI time.
When running tests in parallel, CTest runs them in decreasing order
of cost (time required), to get the largest speedup from parallelism.
However, the initial cost estimate for every test is 0, and estimates
are only updated after a test run. This works on a dev machine, where
the tests are run over and over again and the estimates eventually
become quite precise, but CI always does a clean build and thus
always starts with 0 estimates.
Because we have two slow tests, we want them to start first so that
we do not lose parallelism at the end of the run. To do this, we
provide their cost estimates manually.
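CTest's mechanism for this is the COST test property; the test names
and the value below are illustrative only:

    set_tests_properties(SlowTest1 SlowTest2
      PROPERTIES
        COST 60 # higher cost is scheduled earlier in parallel runs
    )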