Expanded docs on test cases and sections (still work-in-progress)

- also touched up some outdated bits in the tutorial
This commit is contained in:
Phil Nash 2013-10-01 08:20:08 +01:00
parent a35ee200da
commit 4ab680a4fb
3 changed files with 93 additions and 12 deletions


@ -12,7 +12,7 @@ Note that options are described according to the following pattern:
<a href="#nothrow"> ` -e, --nothrow`</a><br />
<a href="#usage"> ` -h, -?, --help`</a><br />
<a id="test"></a>
## Specifying which tests to run
<pre>&lt;test-spec> ...</pre>
@ -49,7 +49,7 @@ A series of tags forms an AND expression whereas a comma-separated sequence forms
This matches all tests tagged `[one]` and `[two]`, as well as all tests tagged `[three]`
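So, for example, that expression could be passed on the command line like this (here `./tests` is just a stand-in for your own test executable; the quotes protect the square brackets from the shell):
<pre>./tests "[one][two],[three]"</pre>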
<a id="reporter"></a>
## Choosing a reporter to use
<pre>-r, --reporter &lt;reporter></pre>
@ -64,21 +64,21 @@ The bundled reporters are:
The JUnit reporter is an XML format that follows the structure of the JUnit XML Report ANT task, as consumed by a number of third-party tools, including Continuous Integration servers such as Hudson. If not otherwise needed, the standard XML reporter is preferred as this is a streaming reporter, whereas the JUnit reporter needs to hold all its results until the end so it can write the overall results into attributes of the root node.
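For example, to write a JUnit-style report to a file (again, `./tests` stands in for your test executable; the `-o` option is covered below):
<pre>./tests -r junit -o results.xml</pre>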
<a id="break"></a>
## Breaking into the debugger
<pre>-b, --break</pre>
In some IDEs (currently Xcode and Visual Studio) it is possible for Catch to break into the debugger on a test failure. This can be very helpful during debug sessions - especially when there is more than one path through a particular test.
In addition to the command line option, ensure you have built your code with the DEBUG preprocessor symbol.
<a id="success"></a>
## Showing results for successful tests
<pre>-s, --success</pre>
Usually you only want to see reporting for failed tests. Sometimes it's useful to see *all* the output (especially when you don't trust that that test you just added worked first time!).
To see successful, as well as failing, test results just pass this option. Note that each reporter may treat this option differently. The JUnit reporter, for example, logs all results regardless.
<a id="abort"></a>
## Aborting after a certain number of failures
<pre>-a, --abort
-x, --abortx [&lt;failure threshold>]
@ -89,7 +89,7 @@ If a ```CHECK``` assertion fails, even the current test case is not aborted.
Sometimes this results in a flood of failure messages and you'd rather just see the first few. Specifying ```-a``` or ```--abort``` on its own will abort the whole test run on the first failed assertion of any kind. Use ```-x``` or ```--abortx``` followed by a number to abort after that number of assertion failures.
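For example, to give up after the second assertion failure (executable name illustrative):
<pre>./tests -x 2</pre>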
<a id="list"></a>
## Listing available tests, tags or reporters
<pre>-l, --list-tests
-t, --list-tags
@ -103,20 +103,20 @@ If one or more test-specs have been supplied too then only the matching tests will be listed.
```--list-reporters``` lists the available reporters.
<a id="output"></a>
## Sending output to a file
<pre>-o, --out &lt;filename>
</pre>
Use this option to send all output to a file. By default output is sent to stdout (note that uses of stdout and stderr *from within test cases* are redirected and included in the report - so even stderr will effectively end up on stdout).
<a id="name"></a>
## Naming a test run
<pre>-n, --name &lt;name for test run></pre>
If a name is supplied it will be used by the reporter to provide an overall name for the test run. This can be useful if you are sending to a file, for example, and need to distinguish different test runs - either from different Catch executables or runs of the same executable with different options. If not supplied, the name defaults to the name of the executable.
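A hypothetical invocation combining this with file output might look like:
<pre>./tests -n "nightly build" -o nightly.xml</pre>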
<a id="nothrow"></a>
## Eliding assertions expected to throw
<pre>-e, --nothrow</pre>
@ -126,7 +126,7 @@ These can be a nuisance in certain debugging environments that may break when exceptions are thrown.
When running with this option any throw-checking assertions are skipped so as not to contribute additional noise. Be careful if this affects the behaviour of subsequent tests.
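For reference, these are the kinds of assertions that get skipped - a brief sketch, assuming ```catch.hpp``` has been included (the helper ```doSomethingThatThrows()``` is hypothetical, for illustration only):
```c++
#include <stdexcept>

// Hypothetical helper, just for illustration.
static void doSomethingThatThrows() { throw std::domain_error( "expected" ); }

TEST_CASE( "throws as expected", "[exceptions]" ) {
    // Both assertions pass only if an exception is thrown.
    // Under -e/--nothrow Catch skips them entirely, so they
    // can't trip a debugger configured to stop on any throw.
    REQUIRE_THROWS( doSomethingThatThrows() );
    REQUIRE_THROWS_AS( doSomethingThatThrows(), std::domain_error );
}
```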
<a id="usage"></a>
## Usage
<pre>-h, -?, --help</pre>


@ -0,0 +1,30 @@
# Test cases and sections
While Catch fully supports the traditional, *x*Unit, style of class-based fixtures containing test case methods, this is not the preferred style.
Instead Catch provides a powerful mechanism for nesting test case sections within a test case. For a more detailed discussion see the [tutorial](tutorial.md#testCasesAndSections).
Test cases and sections are very easy to use in practice:
**TEST_CASE(** _test name_ [**,** _tags_ ] **)**
**SECTION(** _section name_ **)**
_test name_ and _section name_ are free-form, quoted strings. The optional _tags_ argument is a quoted string containing one or more tags enclosed in square brackets. Tags are discussed below. Test names must be unique within the Catch executable.
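Putting those together, a minimal example might look like this (assuming ```catch.hpp``` has been included; the names and tags are illustrative):
```c++
#include <queue>

TEST_CASE( "queues can be drained", "[queue][containers]" ) {
    // Common setup - runs afresh for every SECTION below.
    std::queue<int> q;
    q.push( 1 );

    SECTION( "popping the only element empties the queue" ) {
        q.pop();
        REQUIRE( q.empty() );
    }
}
```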
For more examples see the [Tutorial](tutorial.md).
## Tags
-{placeholder for documentation of tags}-
## User Story/ BDD-style test cases
In addition to Catch's take on the classic style of test cases, Catch supports an alternative syntax that allows tests to be written as "executable specifications" (one of the early goals of BDD). This set of macros maps onto TEST_CASEs and SECTIONs, with a little internal support to make them smoother to work with.
**SCENARIO(** _scenario name_ **)**
-{placeholder for given-when-then docs}-
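As a taste of the given-when-then style in the meantime, here's a brief sketch (the scenario, names and values are illustrative; ```GIVEN```, ```WHEN``` and ```THEN``` ultimately map onto sections):
```c++
SCENARIO( "vectors can be resized", "[vector]" ) {
    GIVEN( "an empty vector" ) {
        std::vector<int> v;
        REQUIRE( v.size() == 0 );

        WHEN( "it is resized" ) {
            v.resize( 10 );

            THEN( "the size changes" ) {
                REQUIRE( v.size() == 10 );
            }
        }
    }
}
```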
---
[Home](../README.md)


@ -87,10 +87,61 @@ Of course there are still more issues to deal with. For example we'll hit pro
Although this was a simple test it's been enough to demonstrate a few things about how Catch is used. Let's take a moment to consider those before we move on.
1. All we did was ```#define``` one identifier and ```#include``` one header and we got everything - even an implementation of ```main()``` that will [respond to command line arguments](command-line.md). You can only use that ```#define``` in one implementation file, for (hopefully) obvious reasons. Once you have more than one file with unit tests in, you'll just ```#include "catch.hpp"``` and go. Usually it's a good idea to have a dedicated implementation file that just has ```#define CATCH_CONFIG_MAIN``` and ```#include "catch.hpp"``` (see the sketch after this list). You can also provide your own implementation of main and drive Catch yourself (see [Supplying-your-own-main()](own-main.md)).
2. We introduce test cases with the TEST_CASE macro. This macro takes one or two arguments - a free-form test name and, optionally, one or more tags (for more see <a href="#testCasesAndSections">Test cases and Sections</a>, below). The test name must be unique. You can run sets of tests by specifying a wildcarded test name or a tag expression. See the [command line docs](command-line.md) for more information on running tests.
3. The name and tags arguments are just strings. We haven't had to declare a function or method - or explicitly register the test case anywhere. Behind the scenes a function with a generated name is defined for you, and automatically registered using static registry classes. By abstracting the function name away we can name our tests without the constraints of identifier names.
4. We write our individual test assertions using the REQUIRE macro. Rather than a separate macro for each type of condition we express the condition naturally using C/C++ syntax. Behind the scenes a simple set of expression templates captures the left-hand-side and right-hand-side of the expression so we can display the values in our test report. As we'll see later there _are_ other assertion macros - but because of this technique the number of them is drastically reduced.
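That dedicated implementation file from point 1 is, in its entirety, just this (the file name is only a suggestion):
```c++
// main.cpp - the one and only file that defines CATCH_CONFIG_MAIN.
// This pulls in Catch's implementation, including main() itself.
#define CATCH_CONFIG_MAIN
#include "catch.hpp"
```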
<a id="testCasesAndSections"></a>
## Test cases and sections
Most test frameworks have a class-based fixture mechanism. That is, test cases map to methods on a class and common setup and teardown can be performed in ```setup()``` and ```teardown()``` methods (or constructor/ destructor in languages, like C++, that support deterministic destruction).
While Catch fully supports this way of working, there are a few problems with the approach. In particular the way your code must be split up, and the blunt granularity (you can only have one setup/ teardown pair across a set of methods - sometimes you want slightly different setup in each method - or you may want several levels of setup. We'll revisit that concept shortly and, hopefully, make it clearer). It was <a href="http://jamesnewkirk.typepad.com/posts/2007/09/why-you-should-.html">problems like these</a> that led James Newkirk, who led the team that built NUnit, to start again from scratch and <a href="http://jamesnewkirk.typepad.com/posts/2007/09/announcing-xuni.html">build xUnit</a>.
Catch takes a different approach (to both NUnit and xUnit) that is a more natural fit for C++ and the C family of languages. This is best explained through an example:
```c++
TEST_CASE( "vectors can be sized and resized", "[vector]" ) {
    std::vector<int> v( 5 );

    REQUIRE( v.size() == 5 );
    REQUIRE( v.capacity() >= 5 );

    SECTION( "resizing bigger changes size and capacity" ) {
        v.resize( 10 );

        REQUIRE( v.size() == 10 );
        REQUIRE( v.capacity() >= 10 );
    }
    SECTION( "resizing smaller changes size but not capacity" ) {
        v.resize( 0 );

        REQUIRE( v.size() == 0 );
        REQUIRE( v.capacity() >= 5 );
    }
    SECTION( "reserving bigger changes capacity but not size" ) {
        v.reserve( 10 );

        REQUIRE( v.size() == 5 );
        REQUIRE( v.capacity() >= 10 );
    }
    SECTION( "reserving smaller does not change size or capacity" ) {
        v.reserve( 0 );

        REQUIRE( v.size() == 5 );
        REQUIRE( v.capacity() >= 5 );
    }
}
```
For each ```SECTION``` the ```TEST_CASE``` is executed from the start - so as we enter each section we know that size is 5 and capacity is at least 5. We enforced those requirements with the ```REQUIRE```s at the top level so we can be confident in them.
This works because the ```SECTION``` macro contains an if statement that calls back into Catch to see if the section should be executed. One leaf section is executed on each run through a ```TEST_CASE```. The other sections are skipped. Next time through the next section is executed, and so on until no new sections are encountered.
So far so good - this is already an improvement on the setup/ teardown approach because now we see our setup code inline and we can use the stack.
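You can observe the ordering for yourself with a couple of print statements - a throwaway sketch (note that, as the [command line docs](command-line.md) mention, Catch redirects stdout from within test cases, so where these lines show up depends on your reporter and options):
```c++
#include <iostream>

TEST_CASE( "one leaf section runs per pass", "[demo]" ) {
    std::cout << "setup\n";   // executed on every pass through the test case

    SECTION( "first" )  { std::cout << "first\n"; }   // pass 1 only
    SECTION( "second" ) { std::cout << "second\n"; }  // pass 2 only
}
// Running this test case prints: setup, first, setup, second
```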
-{placeholder for documentation on nested sections}-
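In the meantime, here's a brief sketch of what nesting looks like (the sections themselves are illustrative) - sections can contain sections, and each leaf section gets its own fresh pass through all of its enclosing setup:
```c++
TEST_CASE( "vectors can be sized and resized", "[vector]" ) {
    std::vector<int> v( 5 );

    SECTION( "resizing bigger changes size and capacity" ) {
        v.resize( 10 );

        SECTION( "and resizing back down restores the size" ) {
            // Runs with all the enclosing setup re-executed from scratch.
            v.resize( 5 );
            REQUIRE( v.size() == 5 );
        }
    }
}
```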
## Next steps
For more specific information see the [Reference pages](reference-index.md)