This PR adds functionality to skip over ANSI escape sequences in catch_textflow, so that they do not contribute to line length and the line-wrapping code does not split an escape sequence in the middle. I've implemented this by creating an AnsiSkippingString abstraction with a bidirectional iterator that skips over escape sequences while iterating. Additionally, I refactored Column::const_iterator to be iterator-based rather than index-based, so that this abstraction is a simple drop-in replacement for std::string.
Currently only color sequences are handled; other escape sequences are left unaffected.
Motivation: Text with ANSI color sequences gets messed up when being output by Catch2 (#2833).
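A minimal sketch of the idea (illustrative only, not the PR's actual AnsiSkippingString): compute the printable width of a string by skipping ANSI color sequences of the form `ESC [ ... m`, the same class of sequences the PR handles.

```cpp
#include <cstddef>
#include <string>

// Illustrative sketch: count only characters that take up a column,
// skipping ANSI color (SGR) sequences such as "\033[31m".
std::size_t printableWidth(std::string const& text) {
    std::size_t width = 0;
    for (std::size_t i = 0; i < text.size(); ++i) {
        if (text[i] == '\033' && i + 1 < text.size() && text[i + 1] == '[') {
            // Skip to the terminating 'm' of the color sequence.
            auto end = text.find('m', i + 2);
            if (end == std::string::npos) { break; }
            i = end; // the loop increment then moves past the 'm'
            continue;
        }
        ++width;
    }
    return width;
}
```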
`max_digits10` is the number of decimal digits needed to represent every
value of the type uniquely; in other words, serializing with that many
digits is guaranteed to disambiguate any two distinct floating point
values, so the resulting string round-trips back to the original value.
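A minimal illustration (not Catch2 code) of the round-trip guarantee this buys:

```cpp
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

int main() {
    // max_digits10 is 17 for double (9 for float); printing with that many
    // significant digits guarantees distinct values produce distinct strings,
    // so parsing the string gives back exactly the original value.
    double const original = 0.1;
    std::ostringstream out;
    out << std::setprecision(std::numeric_limits<double>::max_digits10) << original;

    double parsed = 0.0;
    std::istringstream(out.str()) >> parsed;
    std::cout << out.str()
              << (parsed == original ? " round-trips\n" : " does not round-trip\n");
}
```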
Closes #1210
When a signal is caught, the destructors of Sections will not be called.
Thus, we must call `sectionEndedEarly` manually for those Sections.
Example test case:
```cpp
TEST_CASE("broken") {
    SECTION("section") {
        // Use an illegal CPU instruction
        __asm__ __volatile__("ud2" : : : "memory");
    }
}
```
Remove `#pragma intrinsic(_umul128)` because it does not work on
ARM64EC, and x64 builds work without it.
Co-authored-by: Agoston Szepessy <agos@microsoft.com>
The WithinRelMatcher does not respect the precision specification; it just naively pipes the values into a stringstream, so specifying a precision has no effect on the values printed when a match fails.
This fix makes the WithinRelMatcher respect the precision (https://github.com/catchorg/Catch2/blob/devel/docs/tostring.md#floating-point-precision) specified via `Catch::StringMaker<float>::precision`.
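A sketch of the intended usage, based on the documented precision knob; the deliberately failing check is only there to show the extra digits in the failure message:

```cpp
#include <catch2/catch_test_macros.hpp>
#include <catch2/catch_tostring.hpp>
#include <catch2/matchers/catch_matchers.hpp>
#include <catch2/matchers/catch_matchers_floating_point.hpp>

TEST_CASE("WithinRel failure output honours StringMaker precision") {
    // Ask Catch2 to stringify floats with more significant digits.
    Catch::StringMaker<float>::precision = 9;
    // This check fails on purpose; with the fix, the reported values are
    // printed with 9 significant digits instead of the stringstream default.
    CHECK_THAT(0.1f, Catch::Matchers::WithinRel(0.100001f, 1e-8f));
}
```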
The issue was that `capture_by_value` was meant to be specialized
for the plain type, e.g. `capture_by_value<std::weak_ordering>`, but
the TMP in Decomposer did not properly strip the `const` (and
`volatile`) qualifiers before looking up `capture_by_value`.
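A hypothetical sketch of the fix (names and shape are illustrative, not Catch2's exact code): strip cv-qualifiers and references before consulting the trait, so `const std::weak_ordering` still finds the specialization written for plain `std::weak_ordering`.

```cpp
#include <type_traits>

// Primary template: most types are not captured by value.
template <typename T>
struct capture_by_value : std::false_type {};

// Look the trait up on the cv- and reference-stripped type, so qualified
// uses still hit the specialization written for the plain type.
template <typename T>
using capture_by_value_for =
    capture_by_value<std::remove_cv_t<std::remove_reference_t<T>>>;
```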
Our implementation should be slightly faster, and it has the
advantage of being consistent across platforms. This has no
immediate user impact, because we currently use random_device
to generate the random seed for resampling, but if we decide to change
that in the future, it is one less place to fix.
Now we use intrinsics when possible and fall back to an optimized
implementation in portable C++. The speedup is about 4x when
we can use intrinsics and about 2x when we cannot.
This should speed up our implementation of Lemire's algorithm nicely.
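For reference, a portable sketch of the kind of fallback involved (illustrative, not necessarily Catch2's exact code): a 64x64 -> 128-bit multiply built from 32-bit halves, which is what an intrinsic such as `_umul128` provides directly.

```cpp
#include <cstdint>

// Result of multiplying two 64-bit values: 128 bits split into two words.
struct ExtendedMultResult {
    std::uint64_t upper;
    std::uint64_t lower;
};

ExtendedMultResult extendedMult(std::uint64_t lhs, std::uint64_t rhs) {
    // Split both operands into 32-bit halves and combine the partial products.
    std::uint64_t const lhs_lo = lhs & 0xFFFF'FFFF, lhs_hi = lhs >> 32;
    std::uint64_t const rhs_lo = rhs & 0xFFFF'FFFF, rhs_hi = rhs >> 32;

    std::uint64_t const lo_lo = lhs_lo * rhs_lo;
    std::uint64_t const hi_lo = lhs_hi * rhs_lo;
    std::uint64_t const lo_hi = lhs_lo * rhs_hi;
    std::uint64_t const hi_hi = lhs_hi * rhs_hi;

    // Fold the two middle partial products and the carry out of the low word.
    std::uint64_t const cross = (lo_lo >> 32) + (hi_lo & 0xFFFF'FFFF) + lo_hi;
    std::uint64_t const upper = hi_hi + (hi_lo >> 32) + (cross >> 32);
    std::uint64_t const lower = (cross << 32) | (lo_lo & 0xFFFF'FFFF);
    return { upper, lower };
}
```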
The previously used `make_unsigned` approach, combined with the overload
set of `extendedMult`, caused compilation issues on macOS. By
forcing the selection to be one of the `std::uintX_t` types, we don't need
to complicate the overload set further.
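A hypothetical sketch of what "forcing the selection" can look like (names are illustrative): map an integer type to the exact `std::uintX_t` of the same size, instead of going through `std::make_unsigned`.

```cpp
#include <cstddef>
#include <cstdint>

// Explicit size-to-type mapping; the result is always one of the exact
// std::uintX_t types that the extendedMult overload set expects.
template <std::size_t N> struct SizedUnsigned;
template <> struct SizedUnsigned<1> { using type = std::uint8_t; };
template <> struct SizedUnsigned<2> { using type = std::uint16_t; };
template <> struct SizedUnsigned<4> { using type = std::uint32_t; };
template <> struct SizedUnsigned<8> { using type = std::uint64_t; };

template <typename T>
using SizedUnsignedFor = typename SizedUnsigned<sizeof(T)>::type;
```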
Previously it could only be a plain reporter name, e.g. `xml`; it
could not specify other reporter options. This change is not
particularly useful for the built-in reporters; it mostly comes
in handy for combining a specific custom reporter with custom arguments,
which the built-in reporters do not have.
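A hypothetical illustration, assuming the configuration point in question is the `CATCH_CONFIG_DEFAULT_REPORTER` compile-time option (normally set when building Catch2 itself): the value can now be a full reporter spec with options rather than just a name.

```cpp
// Hypothetical: a full reporter spec (name plus options) as the default.
// Previously only a bare name such as "xml" was accepted here.
#define CATCH_CONFIG_DEFAULT_REPORTER "xml::out=default-results.xml"
```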
The Core Guidelines state "A base class destructor should be either
public and virtual, or protected and non-virtual" so, hopefully, no
static analyser will complain.
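A minimal illustration of the two shapes the guideline allows (class names are made up for the example):

```cpp
// Option 1: deletion through a base pointer is intended, so the
// destructor is public and virtual.
class ReporterBase {
public:
    virtual ~ReporterBase() = default;
};

// Option 2: the class is only an implementation detail / mixin, so the
// destructor is protected and non-virtual; deleting through a pointer to
// this base does not compile.
class MixinBase {
protected:
    ~MixinBase() = default;
};
```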
INTERFACE should be used for dependencies that are included from somewhere
other than our own source files; PRIVATE should be used when the include
happens in our source files.
In this case (src/catch2/internal/catch_debug_console.cpp:20) the include
is in our source file.
I encountered an issue with this when building Catch2 for Android in
combination with BUILD_SHARED_LIBS. Changing the visibility to PRIVATE
fixes the issue.
This was originally motivated by `REQUIRE((a <=> b) == 0)` no
longer compiling under MSVC. After some investigation, I found
that they changed their implementation of the zero-literal
detector from the previous pointer-constructor (with the other
constructors deleted) to one that uses a `consteval` constructor
taking an int.
This breaks the previous detection logic, because now
`is_foo_comparable<std::strong_ordering, int>` is true, but
actually trying to compare them is a compile-time error...
The solution was to make the decomposition `constexpr` and rely
on a late C++20 DR that makes `consteval` propagate
up through the call stack of `constexpr` functions until it either
runs out of `constexpr` functions or succeeds.
However, the default handling of types in decomposition is to
take a reference to them. This reference never becomes dangling,
but because the constexpr evaluation engine cannot prove this,
decomposition paths that take references to objects cannot
actually be evaluated at compile time. Thankfully, we already
had a value-oriented decomposition path for arithmetic types
(as these are common linkage-less types), so we could just
explicitly list the `std::foo_ordering` types as also being
decomposed by value.
Two more fun facts about these changes:
1) The original motivation for the MSVC change was to avoid
triggering a `Wzero-as-null-pointer-constant` warning. I still
do not believe this was a good decision.
2) The current latest version of MSVC does not actually implement the
aforementioned C++20 DR, so even with this commit, MSVC cannot
compile `REQUIRE((a <=> b) == 0)`.
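For reference, a minimal test exercising the assertion in question (requires C++20; per the note above, it builds with conforming compilers but not with current MSVC):

```cpp
#include <catch2/catch_test_macros.hpp>
#include <compare>

TEST_CASE("Three-way comparison result compared against literal zero") {
    int a = 1, b = 1;
    // (a <=> b) yields std::strong_ordering; with the ordering types
    // decomposed by value, the whole expression can be evaluated in the
    // constexpr context that the consteval zero-literal detector requires.
    REQUIRE((a <=> b) == 0);
}
```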
The check triggers when running clang-tidy's bugprone-* family of checks
on code that uses Catch2, even if the Catch2 headers are marked as
SYSTEM headers, because code expanded from macros is treated as
first-party even when the macro comes from a third-party library. The
fix is luckily pretty straightforward.
The check is added in clang-tidy 18, due for release later this year.
Classes automatically inherit the virtual-ness of their base
class destructor. If the base class already has a virtual
destructor and the derived class needs default destructor semantics,
then the derived class can omit declaring a destructor and let
the compiler define it implicitly.
This has the additional benefit of re-enabling move semantics: the
presence of a user-declared destructor suppresses the implicitly
generated move operations.
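A small illustration of the point (class names are made up for the example):

```cpp
#include <string>
#include <type_traits>

struct ListenerBase {
    virtual ~ListenerBase() = default;
};

// No user-declared destructor here: the derived class still destroys
// correctly through a ListenerBase pointer, and its move constructor and
// move assignment remain implicitly defaulted.
struct RecordingListener : ListenerBase {
    std::string log;
};

static_assert(std::has_virtual_destructor<RecordingListener>::value,
              "virtual-ness of the destructor is inherited");
```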
This is the way our documentation on using a custom main recommends
adding new Opts, but we never compiled the code to check that it
works. We now compile the example as part of the BUILD_EXAMPLES
option.
Fixes #2787
This removes about 200 pointless copies from printing the help
message (the original motivation for the change), and also nicely
improves performance of the various reporters that depend on
TextFlow.
The code is now an even worse mess than before, due to the ad-hoc
implementation of a Result-ish type based on virtual functions in
Clara, but it has dropped the allocation count for an empty binary down
to 151.
Because TokenStream is copied around a lot, moving it to use
`StringRef` removes _a lot_ of allocations per `Opt` in the parser.
Args are not copied around much, but changing them as well makes it
obvious that they do not participate in ownership of the strings.
The changes add up to removing ~180 allocations for an "empty"
invocation of the test binary.
(`./tests/ExtraTests/NoTests --allow-running-no-tests -o /dev/null`
is down to 317 allocs)
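A rough sketch of the ownership split this implies (illustrative, not the actual Clara code): the tokens only view strings whose storage lives elsewhere, so copying them never allocates.

```cpp
#include <catch2/internal/catch_stringref.hpp>
#include <vector>

// Hypothetical token: a non-owning view into an argument string.
struct Token {
    enum class Kind { Option, Argument } kind;
    Catch::StringRef value; // views storage owned by whoever built the args
};

// Copying a vector of such tokens copies pointers and lengths, not strings.
using TokenStreamish = std::vector<Token>;
```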
This prevents the full construction from being O(N^2) in the number
of `Opt`s, and also significantly reduces the number of allocations for
running no tests:
`tests/SelfTest`: 7705 -> 6095
`tests/ExtraTests/NoTests`: 2215 -> 605
* Clara::Opt::getHelpColumns returns a single item
It could never return multiple items, but for some reason it
was wrapping that single item in a vector.
* Use ReusableStringStream in Clara
* Reserve HelpColumns ahead of time
* Use StringRef for descriptions in HelpColumn type
The combination of these changes ends up removing about 7% (~200)
of allocations when Catch2 has to prepare output for `-h`.