The WithinRelMatcher does not listen to the precision specification; it just naively pipes the values into a stringstream. Specifying the precision therefore has no impact on the values printed when the match fails.
This fix makes the WithinRelMatcher respect the precision (https://github.com/catchorg/Catch2/blob/devel/docs/tostring.md#floating-point-precision) specified via `Catch::StringMaker<float>::precision`.
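A minimal sketch of the effect (the values and tolerances here are arbitrary):

```cpp
#include <catch2/catch_test_macros.hpp>
#include <catch2/catch_tostring.hpp>
#include <catch2/matchers/catch_matchers.hpp>
#include <catch2/matchers/catch_matchers_floating_point.hpp>

TEST_CASE("WithinRel failure output respects configured precision") {
    // Ask Catch2 to stringify floats with 15 significant digits.
    Catch::StringMaker<float>::precision = 15;
    // When this fails, both the checked value and the matcher's target
    // are now printed with 15 significant digits instead of the default.
    CHECK_THAT(3.14159265f, Catch::Matchers::WithinRel(3.1415f, 1e-8f));
}
```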
The issue was that `capture_by_value` was meant to be specialized
by plain type, e.g. `capture_by_value<std::weak_ordering>`, but
the TMP in Decomposer did not properly throw away `const` (and
`volatile`) qualifiers before taking the value of `capture_by_value`.
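A self-contained sketch of the problem and the fix (this is not Catch2's exact code):

```cpp
#include <compare>
#include <type_traits>

// The trait is specialized on the plain type...
template <typename T>
struct capture_by_value : std::false_type {};
template <>
struct capture_by_value<std::weak_ordering> : std::true_type {};

// ...so the decomposer has to strip cv-qualifiers before asking it.
template <typename T>
constexpr bool should_capture_by_value =
    capture_by_value<std::remove_cv_t<T>>::value;

// Without the remove_cv_t, this would select the primary template and
// wrongly decompose `const std::weak_ordering` by reference.
static_assert(should_capture_by_value<const std::weak_ordering>);
```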
Now we use intrinsics when possible and fall back to an optimized
implementation in portable C++. The improvement is about 4x when
we can use intrinsics and about 2x when we cannot.
This should speed up our implementation of Lemire's algorithm nicely.
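A sketch of the idea, assuming `__uint128_t` support on the intrinsic path (this is not Catch2's exact code):

```cpp
#include <cstdint>

// Multiply two 64-bit values into a 128-bit result: use a compiler
// intrinsic when available, otherwise fall back to portable
// schoolbook multiplication on 32-bit halves.
struct ExtendedMultResult {
    std::uint64_t upper, lower;
};

ExtendedMultResult extendedMult(std::uint64_t lhs, std::uint64_t rhs) {
#if defined(__SIZEOF_INT128__)
    const auto result = static_cast<unsigned __int128>(lhs) * rhs;
    return { static_cast<std::uint64_t>(result >> 64),
             static_cast<std::uint64_t>(result) };
#else
    const std::uint64_t lhs_lo = lhs & 0xFFFF'FFFF, lhs_hi = lhs >> 32;
    const std::uint64_t rhs_lo = rhs & 0xFFFF'FFFF, rhs_hi = rhs >> 32;
    const std::uint64_t lo_lo = lhs_lo * rhs_lo;
    const std::uint64_t hi_lo = lhs_hi * rhs_lo;
    const std::uint64_t lo_hi = lhs_lo * rhs_hi;
    const std::uint64_t hi_hi = lhs_hi * rhs_hi;
    // The middle column cannot overflow: its sum is at most 2^64 - 1.
    const std::uint64_t mid = (lo_lo >> 32) + (hi_lo & 0xFFFF'FFFF) + lo_hi;
    return { hi_hi + (hi_lo >> 32) + (mid >> 32),
             (mid << 32) | (lo_lo & 0xFFFF'FFFF) };
#endif
}
```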
The previously used `make_unsigned` approach, combined with the overload
set of `extendedMult`, caused compilation issues on macOS. By
forcing the selection to be one of the `std::uintX_t` types, we don't
need to complicate the overload set further.
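A sketch of the selection, with hypothetical names:

```cpp
#include <cstddef>
#include <cstdint>

// Map an integral type to the fixed-width std::uintX_t of the same
// size, sidestepping std::make_unsigned entirely.
template <std::size_t Size> struct UnsignedBySize;
template <> struct UnsignedBySize<1> { using type = std::uint8_t; };
template <> struct UnsignedBySize<2> { using type = std::uint16_t; };
template <> struct UnsignedBySize<4> { using type = std::uint32_t; };
template <> struct UnsignedBySize<8> { using type = std::uint64_t; };

template <typename T>
using UnsignedFor = typename UnsignedBySize<sizeof(T)>::type;
```

With this, `extendedMult` only ever sees the four `std::uintX_t` types, so types like `unsigned long` (64-bit on macOS, yet a distinct type from `std::uint64_t` there) can no longer fall outside the overload set.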
Previously it could be just a plain reporter name, e.g. `xml`, but
it could not specify other reporter options. This change is not
particularly useful for the built-in reporters; it mostly comes
in handy for combining a specific custom reporter with custom arguments,
and the built-in reporters do not have those.
This target previously did not specify its minimum C++ standard,
so AppleClang was emitting warnings about the use of C++11 features.
This is fixed by telling CMake that the target requires C++14.
Because this target does not link against the existing CMake targets,
it never inherited their C++ standard requirement.
This was originally motivated by `REQUIRE((a <=> b) == 0)` no
longer compiling under MSVC. After some investigation, I found
that they changed their implementation of the zero-literal
detector from the previous pointer-constructor approach (with the
other constructors deleted) to one that uses a `consteval`
constructor taking an `int`.
This breaks the previous detection logic, because now
`is_foo_comparable<std::strong_ordering, int>` is true, but
actually trying to compare them is a compile-time error...
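A condensed illustration of the false positive, assuming an MSVC-style `consteval` zero detector is in play (`is_eq_comparable` is an illustrative name):

```cpp
#include <compare>
#include <type_traits>
#include <utility>

// Detection in an unevaluated context: overload resolution succeeds,
// because the consteval constructor is never actually invoked here...
template <typename T, typename U, typename = void>
struct is_eq_comparable : std::false_type {};
template <typename T, typename U>
struct is_eq_comparable<T, U,
    std::void_t<decltype(std::declval<T>() == std::declval<U>())>>
    : std::true_type {};

// ...so with the newer consteval-based detector this trait is true,
static_assert(is_eq_comparable<std::strong_ordering, int>::value);
// ...yet an evaluated comparison against anything other than the
// literal zero is an immediate-invocation error:
// bool b = (std::strong_ordering::equal == 1); // does not compile
```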
The solution was to make the decomposition `constexpr` and rely
on a late C++20 DR (P2564) that makes `consteval` propagate
up through the call stack of `constexpr` functions, until it either
runs out of `constexpr` functions, or succeeds.
However, the default handling of types in decomposition is to
take a reference to them. This reference never becomes dangling,
but because the constexpr evaluation engine cannot prove this,
decomposition paths that take references to objects cannot
actually be evaluated at compilation time. Thankfully we already
had a value-oriented decomposition path for arithmetic types
(as these are common linkage-less types), so we could just
explicitly spell out the `std::foo_ordering` types as ones that
should also be decomposed by value.
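In sketch form (again, not Catch2's exact code), the opt-in amounts to extending the by-value list from arithmetic types to the ordering types:

```cpp
#include <compare>
#include <type_traits>

// Arithmetic types were already decomposed by value; the standard
// ordering types now get the same treatment, so constexpr
// decomposition never has to hold a reference into the expression.
template <typename T>
struct decomposed_by_value
    : std::integral_constant<bool, std::is_arithmetic_v<T>> {};

template <> struct decomposed_by_value<std::strong_ordering>  : std::true_type {};
template <> struct decomposed_by_value<std::weak_ordering>    : std::true_type {};
template <> struct decomposed_by_value<std::partial_ordering> : std::true_type {};
```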
Two more fun facts about these changes:
1) The original motivation for the MSVC change was to avoid
triggering a `-Wzero-as-null-pointer-constant` warning. I still
do not believe this was a good decision.
2) The current latest version of MSVC does not actually implement the
aforementioned C++20 DR, so even with this commit, MSVC cannot
compile `REQUIRE((a <=> b) == 0)`.
Without SSE2 enabled, x86 targets will use the x87 FPU, which breaks
the tests checking for reproducible results from our random
floating-point number generators. The output is still reproducible,
at least between binaries targeting x87, but the tests hardcode
results for the whole pipeline being done in 32/64-bit precision.
Closes #2796
By moving to our `uniform_integer_distribution`, which is
reproducible across different platforms, instead of the stdlib
one, which is not, we can provide reproducible results for `float`s
and `double`s. There is still no reproducibility for `long double`s,
because those are too different across platforms.
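The portability gap being closed is easy to demonstrate with the standard library alone; the value printed below depends on the standard library implementation, not just the seed:

```cpp
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(42);
    // std::mt19937 itself is fully specified and reproducible, but the
    // standard distributions are not: libstdc++, libc++, and MSVC's STL
    // are each free to use a different algorithm here.
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    std::cout << dist(rng) << '\n';
}
```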
* Utility for extended multiplication (n bits x n bits -> 2n bits)
* Utility to adapt the output of a URBG to a target (unsigned) integral
type
* Utility to reorder signed values into an unsigned type while
preserving their order (sketched below)
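For the last utility, the standard trick is to flip the sign bit, which maps the most negative value to 0 and the most positive to the unsigned maximum. A minimal sketch (not Catch2's exact code):

```cpp
#include <cstdint>

// INT32_MIN -> 0x00000000, -1 -> 0x7FFFFFFF,
// 0 -> 0x80000000, INT32_MAX -> 0xFFFFFFFF
std::uint32_t to_natural_order(std::int32_t x) {
    return static_cast<std::uint32_t>(x) ^ 0x8000'0000u;
}
```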
Specifically we add
* `gamma(a, b)`, which returns the magnitude of the largest 1-ULP
step in the range [a, b] (sketched below).
* `count_equidistant_float(a, b, distance)`, which returns the
number of equidistant floats in the range [a, b].
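A minimal sketch of the semantics of `gamma` (not Catch2's exact code); `count_equidistant_float` then divides the range by this step size:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// The largest 1-ULP step inside [a, b] occurs next to the endpoint
// with the larger magnitude, so probing both endpoints is enough.
double gamma(double a, double b) {
    const double inf = std::numeric_limits<double>::infinity();
    const double step_above_a = std::nextafter(a, inf) - a;
    const double step_below_b = b - std::nextafter(b, -inf);
    // e.g. gamma(0., 1.) == 2^-53, the spacing just below 1.0
    return std::max(step_above_a, step_below_b);
}
```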
Technically, the declaration should not have a space between
the quotes and the underscore, because `_foo` on its own is a reserved
identifier, while `""_foo` is not. In general the spaced form works,
but newer Clang versions warn about it, because WG21 wants to deprecate
and later remove this form completely.
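Concretely, for a hypothetical `_foo` suffix:

```cpp
// Preferred: `""_foo` is a single token, so no reserved identifier
// is involved.
constexpr unsigned long long operator""_foo(unsigned long long v) {
    return v;
}

// The spaced form newer Clang warns about: here `_foo` is parsed as a
// stand-alone identifier, which is reserved at global scope.
// constexpr unsigned long long operator"" _foo(unsigned long long v);

static_assert(42_foo == 42);
```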
The basic idea was to reduce the number of things dependent on the `Clock`
type. To that end, I replaced `Duration<Clock>` with an `IDuration` typedef
for `std::chrono::nanoseconds`, and `FloatDuration<Clock>` with an `FDuration`
typedef for `std::chrono::duration<double, std::nano>`. We can generally
assume that any clock's duration can be expressed in nanoseconds, as long as
we insert `duration_cast`s in the right places.
Note that we cannot remove all dependence on `Clock` as a template
argument, because the functions that actually measure the elapsed time
have to use the clock.
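A sketch of the resulting shape (hypothetical names, not Catch2's exact code):

```cpp
#include <chrono>
#include <utility>

using IDuration = std::chrono::nanoseconds;
using FDuration = std::chrono::duration<double, std::nano>;

// Only the code that actually takes the measurement stays templated
// on Clock; everything downstream deals in plain nanoseconds.
template <typename Clock, typename Fun>
IDuration measure_one(Fun&& fun) {
    const auto start = Clock::now();
    std::forward<Fun>(fun)();
    const auto end = Clock::now();
    return std::chrono::duration_cast<IDuration>(end - start);
}
```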
We also changed some template function arguments to plain function
pointers, so that the actual implementation can be moved into a .cpp file.
* AssertionEnd does not reset the assertion info yet; that is done after `populateReaction`. Resetting the assertion info would also reset the result disposition to normal, so that any uncaught exception would be reported as a failure.
* Approving test output changes due to added unit tests
* Unit tests to throw std::runtime_error instead of std::exception
* Add a unit test to test incomplete assertion handler
---------
Co-authored-by: Ross <ross.tang@gfo-x.com>