# Advanced googletest Topics

Now that you have read the [googletest Primer](primer.md) and learned how to
write tests using googletest, it's time to learn some new tricks. This document
will show you more assertions as well as how to construct complex failure
messages, propagate fatal failures, reuse and speed up your test fixtures, and
use various flags with your tests.
## More Assertions

This section covers some less frequently used, but still significant,
assertions.
### Explicit Success and Failure

These three assertions do not actually test a value or expression. Instead,
they generate a success or failure directly. Like the macros that actually
perform a test, you may stream a custom failure message into them.

```c++
SUCCEED();
```

Generates a success. This does **NOT** make the overall test succeed. A test is
considered successful only if none of its assertions fail during its execution.

NOTE: `SUCCEED()` is purely documentary and currently doesn't generate any
user-visible output. However, we may add `SUCCEED()` messages to googletest's
output in the future.
```c++
FAIL();
ADD_FAILURE();
ADD_FAILURE_AT("file_path", line_number);
```
`FAIL()` generates a fatal failure, while `ADD_FAILURE()` and `ADD_FAILURE_AT()`
generate a nonfatal failure. These are useful when control flow, rather than a
Boolean expression, determines the test's success or failure. For example, you
might want to write something like:

```c++
switch(expression) {
  case 1:
    ... some checks ...
  case 2:
    ... some other checks ...
  default:
    FAIL() << "We shouldn't get here.";
}
```

NOTE: you can only use `FAIL()` in functions that return `void`. See the
[Assertion Placement section](#assertion-placement) for more information.

**Availability**: Linux, Windows, Mac.
### Exception Assertions

These are for verifying that a piece of code throws (or does not throw) an
exception of the given type:

Fatal assertion                             | Nonfatal assertion                          | Verifies
------------------------------------------- | ------------------------------------------- | --------
`ASSERT_THROW(statement, exception_type);`  | `EXPECT_THROW(statement, exception_type);`  | `statement` throws an exception of the given type
`ASSERT_ANY_THROW(statement);`              | `EXPECT_ANY_THROW(statement);`              | `statement` throws an exception of any type
`ASSERT_NO_THROW(statement);`               | `EXPECT_NO_THROW(statement);`               | `statement` doesn't throw any exception

Examples:

```c++
ASSERT_THROW(Foo(5), bar_exception);

EXPECT_NO_THROW({
  int n = 5;
  Bar(&n);
});
```

**Availability**: Linux, Windows, Mac; requires exceptions to be enabled in the
build environment (note that `google3` **disables** exceptions).
### Predicate Assertions for Better Error Messages

Even though googletest has a rich set of assertions, they can never be complete,
as it's impossible (nor a good idea) to anticipate all scenarios a user might
run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check a
complex expression, for lack of a better macro. This has the problem of not
showing you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users choose to construct the
failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this
is awkward especially when the expression has side-effects or is expensive to
evaluate.

googletest gives you three different options to solve this problem:
#### Using an Existing Boolean Function

If you already have a function or functor that returns `bool` (or a type that
can be implicitly converted to `bool`), you can use it in a *predicate
assertion* to get the function arguments printed for free:

| Fatal assertion                    | Nonfatal assertion                 | Verifies                    |
| ---------------------------------- | ---------------------------------- | --------------------------- |
| `ASSERT_PRED1(pred1, val1);`       | `EXPECT_PRED1(pred1, val1);`       | `pred1(val1)` is true       |
| `ASSERT_PRED2(pred2, val1, val2);` | `EXPECT_PRED2(pred2, val1, val2);` | `pred2(val1, val2)` is true |
| `...`                              | `...`                              | ...                         |

In the above, `predn` is an `n`-ary predicate function or functor, where `val1`,
`val2`, ..., and `valn` are its arguments. The assertion succeeds if the
predicate returns `true` when applied to the given arguments, and fails
otherwise. When the assertion fails, it prints the value of each argument. In
either case, the arguments are evaluated exactly once.
Here's an example. Given

```c++
// Returns true if m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }

const int a = 3;
const int b = 4;
const int c = 10;
```

the assertion

```c++
EXPECT_PRED2(MutuallyPrime, a, b);
```

will succeed, while the assertion

```c++
EXPECT_PRED2(MutuallyPrime, b, c);
```

will fail with the message

```
MutuallyPrime(b, c) is false, where
b is 4
c is 10
```
> NOTE:
>
> 1. If you see a compiler error "no matching function to call" when using
>    `ASSERT_PRED*` or `EXPECT_PRED*`, please see
>    [this](faq.md#OverloadedPredicate) for how to resolve it.
> 1. Currently we only provide predicate assertions of arity <= 5. If you need
>    a higher-arity assertion, let
>    [us](https://github.com/google/googletest/issues) know.

**Availability**: Linux, Windows, Mac.
#### Using a Function That Returns an AssertionResult

While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves
this problem.

An `AssertionResult` object represents the result of an assertion (whether it's
a success or a failure, and an associated message). You can create an
`AssertionResult` using one of these factory functions:

```c++
namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}
```

You can then use the `<<` operator to stream messages to the `AssertionResult`
object.
To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
write a predicate function that returns `AssertionResult` instead of `bool`.
For example, if you define `IsEven()` as:

```c++
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess();
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```

the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:

```
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
```

instead of a more opaque

```
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```
If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
(one third of Boolean assertions in the Google code base are negative ones), and
are fine with making the predicate slower in the success case, you can supply a
success message:

```c++
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess() << n << " is even";
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```

Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print

```
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
```

**Availability**: Linux, Windows, Mac.
#### Using a Predicate-Formatter

If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and
`(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your
predicate do not support streaming to `ostream`, you can instead use the
following *predicate-formatter assertions* to *fully* customize how the message
is formatted:

Fatal assertion                                  | Nonfatal assertion                               | Verifies
------------------------------------------------ | ------------------------------------------------ | --------
`ASSERT_PRED_FORMAT1(pred_format1, val1);`       | `EXPECT_PRED_FORMAT1(pred_format1, val1);`       | `pred_format1(val1)` is successful
`ASSERT_PRED_FORMAT2(pred_format2, val1, val2);` | `EXPECT_PRED_FORMAT2(pred_format2, val1, val2);` | `pred_format2(val1, val2)` is successful
`...`                                            | `...`                                            | ...
The difference between this and the previous group of macros is that instead of
a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a *predicate-formatter*
(`pred_formatn`), which is a function or functor with the signature:

```c++
::testing::AssertionResult PredicateFormattern(const char* expr1,
                                               const char* expr2,
                                               ...
                                               const char* exprn,
                                               T1 val1,
                                               T2 val2,
                                               ...
                                               Tn valn);
```

where `val1`, `val2`, ..., and `valn` are the values of the predicate arguments,
and `expr1`, `expr2`, ..., and `exprn` are the corresponding expressions as they
appear in the source code. The types `T1`, `T2`, ..., and `Tn` can be either
value types or reference types. For example, if an argument has type `Foo`, you
can declare it as either `Foo` or `const Foo&`, whichever is appropriate.
As an example, let's improve the failure message in `MutuallyPrime()`, which was
used with `EXPECT_PRED2()`:

```c++
// Returns the smallest prime common divisor of m and n,
// or 1 when m and n are mutually prime.
int SmallestPrimeCommonDivisor(int m, int n) { ... }

// A predicate-formatter for asserting that two integers are mutually prime.
::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                               const char* n_expr,
                                               int m,
                                               int n) {
  if (MutuallyPrime(m, n)) return ::testing::AssertionSuccess();

  return ::testing::AssertionFailure() << m_expr << " and " << n_expr
      << " (" << m << " and " << n << ") are not mutually prime, "
      << "as they have a common divisor " << SmallestPrimeCommonDivisor(m, n);
}
```

With this predicate-formatter, we can use

```c++
EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);
```

to generate the message

```
b and c (4 and 10) are not mutually prime, as they have a common divisor 2.
```

As you may have realized, many of the built-in assertions we introduced earlier
are special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are
indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`.

**Availability**: Linux, Windows, Mac.
### Floating-Point Comparison

Comparing floating-point numbers is tricky. Due to round-off errors, it is very
unlikely that two floating-points will match exactly. Therefore, `ASSERT_EQ`'s
naive comparison usually doesn't work. And since floating-points can have a wide
value range, no single fixed error bound works. It's better to compare by a
fixed relative error bound, except for values close to 0 due to the loss of
precision there.

In general, for floating-point comparison to make sense, the user needs to
carefully choose the error bound. If they don't want or care to, comparing in
terms of Units in the Last Place (ULPs) is a good default, and googletest
provides assertions to do this. Full details about ULPs are quite long; if you
want to learn more, see
[here](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/).
#### Floating-Point Macros

| Fatal assertion                 | Nonfatal assertion              | Verifies                                 |
| ------------------------------- | ------------------------------- | ---------------------------------------- |
| `ASSERT_FLOAT_EQ(val1, val2);`  | `EXPECT_FLOAT_EQ(val1, val2);`  | the two `float` values are almost equal  |
| `ASSERT_DOUBLE_EQ(val1, val2);` | `EXPECT_DOUBLE_EQ(val1, val2);` | the two `double` values are almost equal |

By "almost equal" we mean the values are within 4 ULPs from each other.

NOTE: `CHECK_DOUBLE_EQ()` in `base/logging.h` uses a fixed absolute error bound,
so its result may differ from that of the googletest macros. That macro is
unsafe and has been deprecated. Please don't use it any more.

The following assertions allow you to choose the acceptable error bound:

| Fatal assertion                       | Nonfatal assertion                    | Verifies                                                                          |
| ------------------------------------- | ------------------------------------- | --------------------------------------------------------------------------------- |
| `ASSERT_NEAR(val1, val2, abs_error);` | `EXPECT_NEAR(val1, val2, abs_error);` | the difference between `val1` and `val2` doesn't exceed the given absolute error |
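As a quick illustration, here is a sketch contrasting the two approaches; the
`MySqrt()` function under test is hypothetical:

```c++
TEST(SquareRootTest, ComputesReasonableValues) {
  // ULP-based comparison: a good default when no specific bound is known.
  EXPECT_DOUBLE_EQ(3.0, MySqrt(9.0));

  // Explicit absolute bound: use when the acceptable error is known.
  EXPECT_NEAR(1.4142, MySqrt(2.0), 0.0001);
}
```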
**Availability**: Linux, Windows, Mac.
#### Floating-Point Predicate-Format Functions

Some floating-point operations are useful, but not that often used. In order to
avoid an explosion of new macros, we provide them as predicate-format functions
that can be used in predicate assertion macros (e.g. `EXPECT_PRED_FORMAT2`,
etc.):

```c++
EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2);
```

Verifies that `val1` is less than, or almost equal to, `val2`. You can replace
`EXPECT_PRED_FORMAT2` in the above table with `ASSERT_PRED_FORMAT2`.

**Availability**: Linux, Windows, Mac.
### Asserting Using gMock Matchers

Google-developed C++ mocking framework [gMock](../../googlemock) comes with a
library of matchers for validating arguments passed to mock objects. A gMock
*matcher* is basically a predicate that knows how to describe itself. It can be
used in these assertion macros:

| Fatal assertion                | Nonfatal assertion             | Verifies              |
| ------------------------------ | ------------------------------ | --------------------- |
| `ASSERT_THAT(value, matcher);` | `EXPECT_THAT(value, matcher);` | value matches matcher |

For example, `StartsWith(prefix)` is a matcher that matches a string starting
with `prefix`, and you can write:

```c++
using ::testing::StartsWith;

// Verifies that Foo() returns a string starting with "Hello".
EXPECT_THAT(Foo(), StartsWith("Hello"));
```

Read this
[recipe](../../googlemock/docs/CookBook.md#using-matchers-in-google-test-assertions)
in the gMock Cookbook for more details.

gMock has a rich set of matchers. You can do many things googletest cannot do
alone with them. For a list of matchers gMock provides, read
[this](../../googlemock/docs/CookBook.md#using-matchers). Especially useful
among them are some
[protocol buffer matchers](https://github.com/google/nucleus/blob/master/nucleus/testing/protocol-buffer-matchers.h).
It's easy to write your
[own matchers](../../googlemock/docs/CookBook.md#writing-new-matchers-quickly)
too.

For example, you can use gMock's
[EqualsProto](https://github.com/google/nucleus/blob/master/nucleus/testing/protocol-buffer-matchers.h)
to compare protos in your tests:

```c++
#include "testing/base/public/gmock.h"

using ::testing::EqualsProto;
...
EXPECT_THAT(actual_proto, EqualsProto("foo: 123 bar: 'xyz'"));
EXPECT_THAT(*actual_proto_ptr, EqualsProto(expected_proto));
```

gMock is bundled with googletest, so you don't need to add any build dependency
in order to take advantage of this. Just include `"testing/base/public/gmock.h"`
and you're ready to go.

**Availability**: Linux, Windows, and Mac.
### More String Assertions

(Please read the [previous](#asserting-using-gmock-matchers) section first if
you haven't.)

You can use the gMock
[string matchers](../../googlemock/docs/CheatSheet.md#string-matchers) with
`EXPECT_THAT()` or `ASSERT_THAT()` to do more string comparison tricks
(sub-string, prefix, suffix, regular expression, etc.). For example,

```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;
...
ASSERT_THAT(foo_string, HasSubstr("needle"));
EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```

**Availability**: Linux, Windows, Mac.

If the string contains a well-formed HTML or XML document, you can check whether
its DOM tree matches an
[XPath expression](http://www.w3.org/TR/xpath/#contents):

```c++
// Currently still in //template/prototemplate/testing:xpath_matcher
#include "template/prototemplate/testing/xpath_matcher.h"

using prototemplate::testing::MatchesXPath;
EXPECT_THAT(html_string, MatchesXPath("//a[text()='click here']"));
```

**Availability**: Linux.
### Windows HRESULT assertions

These assertions test for `HRESULT` success or failure.

Fatal assertion                         | Nonfatal assertion                      | Verifies
--------------------------------------- | --------------------------------------- | --------
`ASSERT_HRESULT_SUCCEEDED(expression)`  | `EXPECT_HRESULT_SUCCEEDED(expression)`  | `expression` is a success `HRESULT`
`ASSERT_HRESULT_FAILED(expression)`     | `EXPECT_HRESULT_FAILED(expression)`     | `expression` is a failure `HRESULT`

The generated output contains the human-readable error message associated with
the `HRESULT` code returned by `expression`.

You might use them like this:

```c++
CComPtr<IShellDispatch2> shell;
ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application"));
CComVariant empty;
ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty));
```

**Availability**: Windows.
### Type Assertions

You can call the function

```c++
::testing::StaticAssertTypeEq<T1, T2>();
```

to assert that types `T1` and `T2` are the same. The function does nothing if
the assertion is satisfied. If the types are different, the function call will
fail to compile, and the compiler error message will likely (depending on the
compiler) show you the actual values of `T1` and `T2`. This is mainly useful
inside template code.

**Caveat**: When used inside a member function of a class template or a function
template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is
instantiated. For example, given:

```c++
template <typename T> class Foo {
 public:
  void Bar() { ::testing::StaticAssertTypeEq<int, T>(); }
};
```

the code:

```c++
void Test1() { Foo<bool> foo; }
```

will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
instantiated. Instead, you need:

```c++
void Test2() { Foo<bool> foo; foo.Bar(); }
```

to cause a compiler error.

**Availability**: Linux, Windows, Mac.
### Assertion Placement

You can use assertions in any C++ function. In particular, it doesn't have to be
a method of the test fixture class. The one constraint is that assertions that
generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in
void-returning functions. This is a consequence of Google's not using
exceptions. By placing it in a non-void function you'll get a confusing compile
error like `"error: void value not ignored as it ought to be"` or `"cannot
initialize return object of type 'bool' with an rvalue of type 'void'"` or
`"error: no viable conversion from 'void' to 'string'"`.
If you need to use fatal assertions in a function that returns non-void, one
option is to make the function return the value in an out parameter instead. For
example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
need to make sure that `*result` contains some sensible value even when the
function returns prematurely. As the function now returns `void`, you can use
any assertion inside of it.
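For example, here is a minimal sketch of that rewrite; `ParseNumber()` is an
illustrative helper, not part of googletest:

```c++
// Before: `int ParseNumber(const std::string&)` could not contain ASSERT_*.
// After: the result goes out through a pointer, so the function returns void.
void ParseNumber(const std::string& input, int* result) {
  *result = 0;  // Ensure *result is sensible even if we return early.
  ASSERT_FALSE(input.empty()) << "cannot parse an empty string";
  *result = std::stoi(input);
}
```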
If changing the function's type is not an option, you should just use assertions
that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.

NOTE: Constructors and destructors are not considered void-returning functions,
according to the C++ language specification, and so you may not use fatal
assertions in them. You'll get a compilation error if you try. A simple
workaround is to transfer the entire body of the constructor or destructor to a
private void-returning method. However, you should be aware that a fatal
assertion failure in a constructor does not terminate the current test, as your
intuition might suggest; it merely returns from the constructor early, possibly
leaving your object in a partially-constructed state. Likewise, a fatal
assertion failure in a destructor may leave your object in a
partially-destructed state. Use assertions carefully in these situations!
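A sketch of the constructor workaround might look like this; the fixture name
and the `LoadTestData()` call are illustrative:

```c++
class MyFixture : public ::testing::Test {
 protected:
  MyFixture() { Init(); }  // Constructors must not use ASSERT_*...

 private:
  void Init() {  // ...but a private void-returning helper may.
    data_ = LoadTestData();       // Hypothetical setup call.
    ASSERT_FALSE(data_.empty());  // On failure, aborts Init() only.
  }

  std::vector<int> data_;
};
```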
## Teaching googletest How to Print Your Values

When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
values to help you debug. It does this using a user-extensible value printer.

This printer knows how to print built-in C++ types, native arrays, STL
containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you the user can figure it out.

As mentioned earlier, the printer is *extensible*. That means you can teach it
to do a better job at printing your particular type than to dump the bytes. To
do that, define `<<` for your type:
```c++
// Streams are allowed only for logging. Don't include this for
// any other purpose.
#include <ostream>

namespace foo {

class Bar {  // We want googletest to be able to print instances of this.
  ...
  // Create a free inline friend function.
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that the
// << operator is defined in the SAME namespace that defines Bar. C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```
Sometimes, this might not be an option: your team may consider it bad style to
have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that
doesn't do what you want (and you cannot change it). If so, you can instead
define a `PrintTo()` function like this:

```c++
// Streams are allowed only for logging. Don't include this for
// any other purpose.
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that PrintTo()
// is defined in the SAME namespace that defines Bar. C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```
If you have defined both `<<` and `PrintTo()`, the latter will be used when
googletest is concerned. This allows you to customize how the value appears in
googletest's output without affecting code that relies on the behavior of its
`<<` operator.

If you want to print a value `x` using googletest's value printer yourself, just
call `::testing::PrintToString(x)`, which returns an `std::string`:

```c++
vector<pair<Bar, int> > bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << ::testing::PrintToString(bar_ints);
```
## Death Tests

In many applications, there are assertions that can cause application failure if
a condition is not met. These sanity checks, which ensure that the program is in
a known good state, are there to fail at the earliest possible time after some
program state is corrupted. If the assertion checks the wrong condition, then
the program may proceed in an erroneous state, which could lead to memory
corruption, security holes, or worse. Hence it is vitally important to test that
such assertion statements work as expected.

Since these precondition checks cause the processes to die, we call such tests
_death tests_. More generally, any test that checks that a program terminates
(except by throwing an exception) in an expected fashion is also a death test.

Note that if a piece of code throws an exception, we don't consider it "death"
for the purpose of death tests, as the caller of the code could catch the
exception and avoid the crash. If you want to verify exceptions thrown by your
code, see [Exception Assertions](#exception-assertions).

If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
["Catching" Failures](#catching-failures).
### How to Write a Death Test

googletest has the following macros to support death tests:

Fatal assertion                                 | Nonfatal assertion                              | Verifies
------------------------------------------------ | ------------------------------------------------ | --------
`ASSERT_DEATH(statement, regex);`                | `EXPECT_DEATH(statement, regex);`                | `statement` crashes with the given error
`ASSERT_DEATH_IF_SUPPORTED(statement, regex);`   | `EXPECT_DEATH_IF_SUPPORTED(statement, regex);`   | if death tests are supported, verifies that `statement` crashes with the given error; otherwise verifies nothing
`ASSERT_EXIT(statement, predicate, regex);`      | `EXPECT_EXIT(statement, predicate, regex);`      | `statement` exits with the given error, and its exit code matches `predicate`

where `statement` is a statement that is expected to cause the process to die,
`predicate` is a function or function object that evaluates an integer exit
status, and `regex` is a (Perl) regular expression that the stderr output of
`statement` is expected to match. Note that `statement` can be *any valid
statement* (including a *compound statement*) and doesn't have to be an
expression.

As usual, the `ASSERT` variants abort the current test function, while the
`EXPECT` variants do not.
> NOTE: We use the word "crash" here to mean that the process terminates with a
> *non-zero* exit status code. There are two possibilities: either the process
> has called `exit()` or `_exit()` with a non-zero value, or it may be killed by
> a signal.
>
> This means that if `statement` terminates the process with a 0 exit code, it
> is *not* considered a crash by `EXPECT_DEATH`. Use `EXPECT_EXIT` instead if
> this is the case, or if you want to restrict the exit code more precisely.
A predicate here must accept an `int` and return a `bool`. The death test
succeeds only if the predicate returns `true`. googletest defines a few
predicates that handle the most common cases:

```c++
::testing::ExitedWithCode(exit_code)
```

This expression is `true` if the program exited normally with the given exit
code.

```c++
::testing::KilledBySignal(signal_number)  // Not available on Windows.
```

This expression is `true` if the program was killed by the given signal.

The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate
that verifies the process' exit code is non-zero.
Note that a death test only cares about three things:

1. does `statement` abort or exit the process?
2. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
   satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
   is the exit status non-zero? And
3. does the stderr output match `regex`?

In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort
the process.

To write a death test, simply use one of the above macros inside your test
function. For example,
```c++
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  EXPECT_DEATH({
    int n = 5;
    Foo(&n);
  }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillMyself) {
  EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```

verifies that:

* calling `Foo(5)` causes the process to die with the given error message,
* calling `NormalExit()` causes the process to print `"Success"` to stderr and
  exit with exit code 0, and
* calling `KillMyself()` kills the process with signal `SIGKILL`.

The test function body may contain other assertions and statements as well, if
necessary.
### Death Test Naming

IMPORTANT: We strongly recommend you to follow the convention of naming your
**test case** (not test) `*DeathTest` when it contains a death test, as
demonstrated in the above example. The
[Death Tests And Threads](#death-tests-and-threads) section below explains why.

If a test fixture class is shared by normal tests and death tests, you can use
`using` or `typedef` to introduce an alias for the fixture class and avoid
duplicating its code:

```c++
class FooTest : public ::testing::Test { ... };

using FooDeathTest = FooTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Cygwin, and Mac
### Regular Expression Syntax

On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the
[POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
syntax. To learn about this syntax, you may want to read this
[Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).

On Windows, googletest uses its own simple regular expression implementation. It
lacks many features. For example, we don't support union (`"x|y"`), grouping
(`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among
others. Below is what we do support (`A` denotes a literal character, period
(`.`), or a single `\\ ` escape sequence; `x` and `y` denote regular
expressions.):
Expression | Meaning
---------- | --------------------------------------------------------------
`c`        | matches any literal character `c`
`\\d`      | matches any decimal digit
`\\D`      | matches any character that's not a decimal digit
`\\f`      | matches `\f`
`\\n`      | matches `\n`
`\\r`      | matches `\r`
`\\s`      | matches any ASCII whitespace, including `\n`
`\\S`      | matches any character that's not a whitespace
`\\t`      | matches `\t`
`\\v`      | matches `\v`
`\\w`      | matches any letter, `_`, or decimal digit
`\\W`      | matches any character that `\\w` doesn't match
`\\c`      | matches any literal character `c`, which must be a punctuation
`.`        | matches any single character except `\n`
`A?`       | matches 0 or 1 occurrences of `A`
`A*`       | matches 0 or many occurrences of `A`
`A+`       | matches 1 or many occurrences of `A`
`^`        | matches the beginning of a string (not that of each line)
`$`        | matches the end of a string (not that of each line)
`xy`       | matches `x` followed by `y`
To help you determine which capability is available on your system, googletest
defines macros to govern which regular expression it is using. The macros are:
`GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death
tests to work in all cases, you can either `#if` on these macros or use the
more limited syntax only.
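For instance, a death test that needs alternation (which the simple
implementation lacks) might branch on these macros; `Crash()` here is a
hypothetical function that dies with "bad index" or "bad pointer" on stderr:

```c++
TEST(MyDeathTest, MatchesErrorMessage) {
#if GTEST_USES_POSIX_RE
  // Full POSIX extended syntax: alternation is available.
  EXPECT_DEATH(Crash(), "bad (index|pointer)");
#else
  // Simple RE: no '|' support, so match a common prefix instead.
  EXPECT_DEATH(Crash(), "bad ");
#endif
}
```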
### How It Works

Under the hood, `ASSERT_EXIT()` spawns a new process and executes the death test
statement in that process. The details of how precisely that happens depend on
the platform and the variable `::testing::GTEST_FLAG(death_test_style)` (which is
initialized from the command-line flag `--gtest_death_test_style`).

* On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the
  child, after which:
    * If the variable's value is `"fast"`, the death test statement is
      immediately executed.
    * If the variable's value is `"threadsafe"`, the child process re-executes
      the unit test binary just as it was originally invoked, but with some
      extra flags to cause just the single death test under consideration to
      be run.
* On Windows, the child is spawned using the `CreateProcess()` API, and
  re-executes the binary to cause just the single death test under
  consideration to be run - much like the `threadsafe` mode on POSIX.
Other values for the variable are illegal and will cause the death test to fail.
Currently, the flag's default value is `"fast"`. However, we reserve the right
to change it in the future. Therefore, your tests should not depend on this. In
either case, the parent process waits for the child process to complete, and
checks that:

1. the child's exit status satisfies the predicate, and
2. the child's stderr matches the regular expression.

If the death test statement runs to completion without dying, the child process
will nonetheless terminate, and the assertion fails.
### Death Tests And Threads

The reason for the two death test styles has to do with thread safety. Due to
well-known problems with forking in the presence of threads, death tests should
be run in a single-threaded context. Sometimes, however, it isn't feasible to
arrange that kind of environment. For example, statically-initialized modules
may start threads before `main` is ever reached. Once threads have been created,
it may be difficult or impossible to clean them up.

googletest has three features intended to raise awareness of threading issues.

1. A warning is emitted if multiple threads are running when a death test is
   encountered.
2. Test cases with a name ending in "DeathTest" are run before all other tests.
3. It uses `clone()` instead of `fork()` to spawn the child process on Linux
   (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely
   to cause the child to hang when the parent process has multiple threads.

It's perfectly fine to create threads inside a death test statement; they are
executed in a separate process and cannot affect the parent.
### Death Test Styles

The "threadsafe" death test style was introduced in order to help mitigate the
risks of testing in a possibly multithreaded environment. It trades increased
test execution time (potentially dramatically so) for improved thread safety.

The automated testing framework does not set the style flag. You can choose a
particular style of death tests by setting the flag programmatically:

```c++
testing::FLAGS_gtest_death_test_style="threadsafe"
```

You can do this in `main()` to set the style for all death tests in the binary,
or in individual tests. Recall that flags are saved before running each test and
restored afterwards, so you need not do that yourself. For example:
```c++
int main(int argc, char** argv) {
  InitGoogle(argv[0], &argc, &argv, true);
  ::testing::FLAGS_gtest_death_test_style = "fast";
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  ::testing::FLAGS_gtest_death_test_style = "threadsafe";
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```
### Caveats

The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If
it leaves the current function via a `return` statement or by throwing an
exception, the death test is considered to have failed. Some googletest macros
may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid
them in `statement`.

Since `statement` runs in the child process, any in-memory side effect (e.g.
modifying a variable, releasing memory, etc.) it causes will *not* be observable
in the parent process. In particular, if you release memory in a death test,
your program will fail the heap check as the parent process will never see the
memory reclaimed. To solve this problem, you can

1. try not to free memory in a death test;
2. free the memory again in the parent process; or
3. do not use the heap checker in your program.

Due to an implementation detail, you cannot place multiple death test assertions
on the same line; otherwise, compilation will fail with an unobvious error
message.

Despite the improved thread safety afforded by the "threadsafe" style of death
test, thread problems such as deadlock are still possible in the presence of
handlers registered with `pthread_atfork(3)`.
## Using Assertions in Sub-routines

### Adding Traces to Assertions

If a test sub-routine is called from several places, when an assertion inside it
fails, it can be hard to tell which invocation of the sub-routine the failure is
from.

You can alleviate this problem using extra logging or custom failure messages,
but that usually clutters up your tests. A better solution is to use the
`SCOPED_TRACE` macro or the `ScopedTrace` utility:

```c++
SCOPED_TRACE(message);
ScopedTrace trace("file_path", line_number, message);
```

where `message` can be anything streamable to `std::ostream`. The `SCOPED_TRACE`
macro will cause the current file name, line number, and the given message to be
added in every failure message. `ScopedTrace` accepts explicit file name and
line number in arguments, which is useful for writing test helpers. The effect
will be undone when control leaves the current lexical scope.

For example,
```c++
10: void Sub1(int n) {
11:   EXPECT_EQ(1, Bar(n));
12:   EXPECT_EQ(2, Bar(n + 1));
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
```

could result in messages like these:

```
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
  Actual: 2
   Trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
  Actual: 3
```
Without the trace, it would've been difficult to know which invocation of
`Sub1()` the two failures come from respectively. (You could add an extra
message to each assertion in `Sub1()` to indicate the value of `n`, but that's
tedious.)
Some tips on using `SCOPED_TRACE`:

1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the
   beginning of a sub-routine, instead of at each call site.
2. When calling sub-routines inside a loop, make the loop iterator part of the
   message in `SCOPED_TRACE` such that you can know which iteration the failure
   is from (see the sketch after this list).
3. Sometimes the line number of the trace point is enough for identifying the
   particular invocation of a sub-routine. In this case, you don't have to
   choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
4. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer
   scope. In this case, all active trace points will be included in the failure
   messages, in reverse order they are encountered.
5. The trace dump is clickable in Emacs - hit `return` on a line number and
   you'll be taken to that line in the source file!
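For instance, tip 2 might look like this sketch, where `CheckWidget()` is a
hypothetical sub-routine containing assertions:

```c++
TEST(WidgetTest, AllWidgets) {
  for (int i = 0; i < 5; i++) {
    // The iterator in the trace message pinpoints the failing iteration.
    SCOPED_TRACE(::testing::Message() << "i = " << i);
    CheckWidget(i);
  }
}
```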
**Availability**: Linux, Windows, Mac.
### Propagating Fatal Failures

A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
when they fail they only abort the _current function_, not the entire test. For
example, the following test will segfault:

```c++
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);

  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();  // The intended behavior is for the fatal failure
                 // in Subroutine() to abort the entire test.

  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = NULL;
  *p = 3;  // Segfault!
}
```
To alleviate this, googletest provides three different solutions. You could use
either exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, or the
`HasFatalFailure()` function. They are described in the following subsections.
#### Asserting on Subroutines with an Exception

The following code can turn ASSERT-failure into an exception:

```c++
class ThrowListener : public testing::EmptyTestEventListener {
  void OnTestPartResult(const testing::TestPartResult& result) override {
    if (result.type() == testing::TestPartResult::kFatalFailure) {
      throw testing::AssertionException(result);
    }
  }
};

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
  return RUN_ALL_TESTS();
}
```

This listener should be added after other listeners if you have any, otherwise
they won't see failed `OnTestPartResult`.
#### Asserting on Subroutines

As shown above, if your test calls a subroutine that has an `ASSERT_*` failure
in it, the test will continue after the subroutine returns. This may not be what
you want.

Often people want fatal failures to propagate like exceptions. For that
googletest offers the following macros:

Fatal assertion                       | Nonfatal assertion                    | Verifies
-------------------------------------- | -------------------------------------- | --------
`ASSERT_NO_FATAL_FAILURE(statement);`  | `EXPECT_NO_FATAL_FAILURE(statement);`  | `statement` doesn't generate any new fatal failures in the current thread

Only failures in the thread that executes the assertion are checked to determine
the result of this type of assertions. If `statement` creates new threads,
failures in these threads are ignored.
Examples:

```c++
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```

**Availability**: Linux, Windows, Mac. Assertions from multiple threads are
currently not supported on Windows.
#### Checking for Failures in the Current Test

`HasFatalFailure()` in the `::testing::Test` class returns `true` if an
assertion in the current test has suffered a fatal failure. This allows
functions to catch fatal failures in a sub-routine and return early.

```c++
class Test {
 public:
  ...
  static bool HasFatalFailure();
};
```

The typical usage, which basically simulates the behavior of a thrown exception,
is:

```c++
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
}
```
If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:

```c++
if (::testing::Test::HasFatalFailure()) return;
```

Similarly, `HasNonfatalFailure()` returns `true` if the current test has at
least one non-fatal failure, and `HasFailure()` returns `true` if the current
test has at least one failure of either kind.
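For example, a sketch that bails out of the rest of a test after any failure in
`Subroutine()`, fatal or not:

```c++
TEST(FooTest, Baz) {
  Subroutine();
  // Stop early if Subroutine() recorded any failure, fatal or non-fatal.
  if (HasFailure()) return;
  // More steps that only make sense if Subroutine() fully passed.
}
```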
**Availability**: Linux, Windows, Mac.
## Logging Additional Information

In your test code, you can call `RecordProperty("key", value)` to log additional
information, where `value` can be either a string or an `int`. The *last* value
recorded for a key will be emitted to the
[XML output](#generating-an-xml-report) if you specify one. For example, the
test

```c++
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```

will output XML like this:

```xml
...
<testcase name="MinAndMaxWidgets" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
...
```
> NOTE:
>
> * `RecordProperty()` is a static member of the `Test` class. Therefore it
>   needs to be prefixed with `::testing::Test::` if used outside of the
>   `TEST` body and the test fixture class.
> * `*key*` must be a valid XML attribute name, and cannot conflict with the
>   ones already used by googletest (`name`, `status`, `time`, `classname`,
>   `type_param`, and `value_param`).
> * Calling `RecordProperty()` outside of the lifespan of a test is allowed.
>   If it's called outside of a test but between a test case's
>   `SetUpTestCase()` and `TearDownTestCase()` methods, it will be attributed
>   to the XML element for the test case. If it's called outside of all test
>   cases (e.g. in a test environment), it will be attributed to the top-level
>   XML element.
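A sketch of recording a property from per-test-case tear-down, which (per the
note above) is attributed to the test case's XML element; `CountLeakedWidgets()`
is a hypothetical helper:

```c++
class WidgetUsageTest : public ::testing::Test {
 public:
  static void TearDownTestCase() {
    // Runs after the last test in the case; inside the fixture class,
    // no ::testing::Test:: prefix is needed.
    RecordProperty("LeakedWidgets", CountLeakedWidgets());
  }
};
```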
**Availability**: Linux, Windows, Mac.
## Sharing Resources Between Tests in the Same Test Case

googletest creates a new test fixture object for each test in order to make
tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively
expensive.

If the tests don't change the resource, there's no harm in their sharing a
single resource copy. So, in addition to per-test set-up/tear-down, googletest
also supports per-test-case set-up/tear-down. To use it:

1. In your test fixture class (say `FooTest`), declare as `static` some member
   variables to hold the shared resources.
1. Outside your test fixture class (typically just below it), define those
   member variables, optionally giving them initial values.
1. In the same test fixture class, define a `static void SetUpTestCase()`
   function (remember not to spell it as **`SetupTestCase`** with a small `u`!)
   to set up the shared resources and a `static void TearDownTestCase()`
   function to tear them down.

That's it! googletest automatically calls `SetUpTestCase()` before running the
*first test* in the `FooTest` test case (i.e. before creating the first
`FooTest` object), and calls `TearDownTestCase()` after running the *last test*
in it (i.e. after deleting the last `FooTest` object). In between, the tests can
use the shared resources.

Remember that the test order is undefined, so your code can't depend on a test
preceding or following another. Also, the tests must either not modify the state
of any shared resource, or, if they do modify the state, they must restore the
state to its original value before passing control to the next test.
Here's an example of per-test-case set-up and tear-down:

```c++
class FooTest : public ::testing::Test {
 protected:
  // Per-test-case set-up.
  // Called before the first test in this test case.
  // Can be omitted if not needed.
  static void SetUpTestCase() {
    shared_resource_ = new ...;
  }

  // Per-test-case tear-down.
  // Called after the last test in this test case.
  // Can be omitted if not needed.
  static void TearDownTestCase() {
    delete shared_resource_;
    shared_resource_ = NULL;
  }

  // You can define per-test set-up logic as usual.
  virtual void SetUp() { ... }

  // You can define per-test tear-down logic as usual.
  virtual void TearDown() { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = NULL;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}

TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```

NOTE: Though the above code declares `SetUpTestCase()` protected, it may
sometimes be necessary to declare it public, such as when using it with
`TEST_P`.

**Availability**: Linux, Windows, Mac.
## Global Set-Up and Tear-Down

Just as you can do set-up and tear-down at the test level and the test case
level, you can also do it at the test program level. Here's how.

First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to set-up and tear-down:

```c++
class Environment : public ::testing::Environment {
 public:
  virtual ~Environment() {}

  // Override this to define how to set up the environment.
  virtual void SetUp() {}

  // Override this to define how to tear down the environment.
  virtual void TearDown() {}
};
```

Then, you register an instance of your environment class with googletest by
calling the `::testing::AddGlobalTestEnvironment()` function:

```c++
Environment* AddGlobalTestEnvironment(Environment* env);
```
Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
each environment object, then runs the tests if none of the environments
reported fatal failures and `GTEST_SKIP()` was not called. `RUN_ALL_TESTS()`
always calls `TearDown()` with each environment object, regardless of whether
or not the tests were run.

It's OK to register multiple environment objects. In this case, their `SetUp()`
will be called in the order they are registered, and their `TearDown()` will be
called in the reverse order.

Note that googletest takes ownership of the registered environment objects.
Therefore **do not delete them** by yourself.

You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called,
probably in `main()`. If you use `gtest_main`, you need to call this before
`main()` starts for it to take effect. One way to do this is to define a global
variable like this:

```c++
::testing::Environment* const foo_env =
    ::testing::AddGlobalTestEnvironment(new FooEnvironment);
```
However, we strongly recommend you to write your own `main()` and call
`AddGlobalTestEnvironment()` there, as relying on initialization of global
variables makes the code harder to read and may cause problems when you register
multiple environments from different translation units and the environments have
dependencies among them (remember that the compiler doesn't guarantee the order
in which global variables from different translation units are initialized).
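A minimal sketch of such a `main()`, assuming the `FooEnvironment` subclass from
the snippet above:

```c++
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Registration is explicit and ordered; googletest owns the object.
  ::testing::AddGlobalTestEnvironment(new FooEnvironment);
  return RUN_ALL_TESTS();
}
```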
## Value-Parameterized Tests

*Value-parameterized tests* allow you to test your code with different
parameters without writing multiple copies of the same test. This is useful in a
number of situations, for example:

* You have a piece of code whose behavior is affected by one or more
  command-line flags. You want to make sure your code performs correctly for
  various values of those flags.
* You want to test different implementations of an OO interface.
* You want to test your code over various inputs (a.k.a. data-driven testing).
  This feature is easy to abuse, so please exercise your good sense when doing
  so.
### How to Write Value-Parameterized Tests

To write value-parameterized tests, first you should define a fixture class. It
must be derived from both `::testing::Test` and
`::testing::WithParamInterface<T>` (the latter is a pure interface), where `T`
is the type of your parameter values. For convenience, you can just derive the
fixture class from `::testing::TestWithParam<T>`, which itself is derived from
both `::testing::Test` and `::testing::WithParamInterface<T>`. `T` can be any
copyable type. If it's a raw pointer, you are responsible for managing the
lifespan of the pointed values.

NOTE: If your test fixture defines `SetUpTestCase()` or `TearDownTestCase()`
they must be declared **public** rather than **protected** in order to use
`TEST_P`.

```c++
class FooTest :
    public ::testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};

// Or, when you want to add parameters to a pre-existing fixture class:
class BaseTest : public ::testing::Test {
  ...
};
class BarTest : public BaseTest,
                public ::testing::WithParamInterface<const char*> {
  ...
};
```
Then, use the `TEST_P` macro to define as many test patterns using this fixture
as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you
prefer to think.

```c++
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) {
  ...
}
```
Finally, you can use `INSTANTIATE_TEST_CASE_P` to instantiate the test case with
any set of parameters you want. googletest defines a number of functions for
generating test parameters. They return what we call (surprise!) *parameter
generators*. Here is a summary of them, which are all in the `testing`
namespace:

| Parameter Generator          | Behavior                                    |
| ---------------------------- | ------------------------------------------- |
| `Range(begin, end [, step])` | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1. |
| `Values(v1, v2, ..., vN)`    | Yields values `{v1, v2, ..., vN}`.          |
| `ValuesIn(container)` and `ValuesIn(begin, end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)`. |
| `Bool()`                     | Yields sequence `{false, true}`.            |
| `Combine(g1, g2, ..., gN)`   | Yields all combinations (Cartesian product) as `std::tuple`s of the values generated by the `N` generators. |

For more details, see the comments at the definitions of these functions.
The following statement will instantiate tests from the `FooTest` test case each
with parameter values `"meeny"`, `"miny"`, and `"moe"`.

```c++
INSTANTIATE_TEST_CASE_P(InstantiationName,
                        FooTest,
                        ::testing::Values("meeny", "miny", "moe"));
```

NOTE: The code above must be placed at global or namespace scope, not at
function scope.

NOTE: Don't forget this step! If you do, your test will silently pass, but none
of its cases will ever run!
To distinguish different instances of the pattern (yes, you can instantiate it
more than once), the first argument to `INSTANTIATE_TEST_CASE_P` is a prefix
that will be added to the actual test case name. Remember to pick unique
prefixes for different instantiations. The tests from the instantiation above
will have these names:

* `InstantiationName/FooTest.DoesBlah/0` for `"meeny"`
* `InstantiationName/FooTest.DoesBlah/1` for `"miny"`
* `InstantiationName/FooTest.DoesBlah/2` for `"moe"`
* `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"`
* `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"`
* `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"`

You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).
This statement will instantiate all tests from `FooTest` again, each with
parameter values `"cat"` and `"dog"`:

```c++
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest,
                        ::testing::ValuesIn(pets));
```

The tests from the instantiation above will have these names:

* `AnotherInstantiationName/FooTest.DoesBlah/0` for `"cat"`
* `AnotherInstantiationName/FooTest.DoesBlah/1` for `"dog"`
* `AnotherInstantiationName/FooTest.HasBlahBlah/0` for `"cat"`
* `AnotherInstantiationName/FooTest.HasBlahBlah/1` for `"dog"`

Please note that `INSTANTIATE_TEST_CASE_P` will instantiate *all* tests in the
given test case, whether their definitions come before or *after* the
`INSTANTIATE_TEST_CASE_P` statement.

You can see sample7_unittest.cc and sample8_unittest.cc for more examples.

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Mac
### Creating Value-Parameterized Abstract Tests

In the above, we define and instantiate `FooTest` in the *same* source file.
Sometimes you may want to define value-parameterized tests in a library and let
other people instantiate them later. This pattern is known as *abstract tests*.
As an example of its application, when you are designing an interface you can
write a standard suite of abstract tests (perhaps using a factory function as
the test parameter) that all implementations of the interface are expected to
pass. When someone implements the interface, they can instantiate your suite to
get all the interface-conformance tests for free.

To define abstract tests, you should organize your code like this:

1. Put the definition of the parameterized test fixture class (e.g. `FooTest`)
   in a header file, say `foo_param_test.h`. Think of this as *declaring* your
   abstract tests.
1. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes
   `foo_param_test.h`. Think of this as *implementing* your abstract tests.

Once they are defined, you can instantiate them by including `foo_param_test.h`,
invoking `INSTANTIATE_TEST_CASE_P()`, and depending on the library target that
contains `foo_param_test.cc`. You can instantiate the same abstract test case
multiple times, possibly in different source files.
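To make the file split concrete, here is a sketch following the layout above;
the `Foo` interface, the factory-function parameter, and `MyFoo` are all
illustrative:

```c++
// foo_param_test.h -- declares the abstract tests.
using FooFactory = Foo* (*)();  // Hypothetical factory-function parameter.
class FooTest : public ::testing::TestWithParam<FooFactory> {};

// foo_param_test.cc -- implements them.
TEST_P(FooTest, DoesBlah) {
  std::unique_ptr<Foo> foo(GetParam()());  // Create an implementation.
  EXPECT_TRUE(foo->Blah());                // Illustrative conformance check.
}

// my_foo_test.cc -- someone else instantiates the suite for MyFoo.
Foo* MakeMyFoo() { return new MyFoo; }
INSTANTIATE_TEST_CASE_P(MyFoo, FooTest, ::testing::Values(&MakeMyFoo));
```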
### Specifying Names for Value-Parameterized Test Parameters

The optional last argument to `INSTANTIATE_TEST_CASE_P()` allows the user to
specify a function or functor that generates custom test name suffixes based on
the test parameters. The function should accept one argument of type
`testing::TestParamInfo<class ParamType>`, and return `std::string`.

`testing::PrintToStringParamName` is a builtin test suffix generator that
returns the value of `testing::PrintToString(GetParam())`. It does not work for
`std::string` or C strings.

NOTE: test names must be non-empty, unique, and may only contain ASCII
alphanumeric characters. In particular, they
[should not contain underscores](faq.md).
```c++
class MyTestCase : public testing::TestWithParam<int> {};

TEST_P(MyTestCase, MyTest)
{
  std::cout << "Example Test Param: " << GetParam() << std::endl;
}

INSTANTIATE_TEST_CASE_P(MyGroup, MyTestCase, testing::Range(0, 10),
                        testing::PrintToStringParamName());
```
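If the built-in generator doesn't fit, you can supply your own function; this
sketch assumes the `MyTestCase` fixture above:

```c++
std::string MyParamName(const testing::TestParamInfo<int>& info) {
  // Build a suffix like "Param0", "Param1", ... from the parameter value.
  return "Param" + testing::PrintToString(info.param);
}

INSTANTIATE_TEST_CASE_P(MyNamedGroup, MyTestCase, testing::Range(0, 10),
                        MyParamName);
```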
## Typed Tests

Suppose you have multiple implementations of the same interface and want to make
sure that all of them satisfy some common requirements. Or, you may have defined
several types that are supposed to conform to the same "concept" and you want to
verify it. In both cases, you want the same test logic repeated for different
types.

While you can write one `TEST` or `TEST_F` for each type you want to test (and
you may even factor the test logic into a function template that you invoke from
the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n`
types, you'll end up writing `m*n` `TEST`s.

*Typed tests* allow you to repeat the same test logic over a list of types. You
only need to write the test logic once, although you must know the type list
when writing typed tests. Here's how you do it:
1523 First, define a fixture class template. It should be parameterized by a type.
1524 Remember to derive it from `::testing::Test`:
1527 template <typename T>
1528 class FooTest : public ::testing::Test {
1531 typedef std::list<T> List;
Next, associate a list of types with the test case, which will be repeated for
each type in the list:

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_CASE(FooTest, MyTypes);
```
The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_CASE`
macro to parse correctly. Otherwise the compiler will think that each comma in
the type list introduces a new macro argument.

Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this
test case. You can repeat this as many times as you want:
```c++
TYPED_TEST(FooTest, DoesBlah) {
  // Inside a test, refer to the special name TypeParam to get the type
  // parameter.  Since we are inside a derived class template, C++ requires
  // us to visit the members of FooTest via 'this'.
  TypeParam n = this->value_;

  // To visit static members of the fixture, add the 'TestFixture::'
  // prefix.
  n += TestFixture::shared_;

  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
  // prefix.  The 'typename' is required to satisfy the compiler.
  typename TestFixture::List values;
  values.push_back(n);
  ...
}

TYPED_TEST(FooTest, HasPropertyA) { ... }
```
You can see `sample6_unittest.cc` for a complete example.

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Mac
## Type-Parameterized Tests

*Type-parameterized tests* are like typed tests, except that they don't require
you to know the list of types ahead of time. Instead, you can define the test
logic first and instantiate it with different type lists later. You can even
instantiate it more than once in the same program.

If you are designing an interface or concept, you can define a suite of
type-parameterized tests to verify properties that any valid implementation of
the interface/concept should have. Then, the author of each implementation can
just instantiate the test suite with their type to verify that it conforms to
the requirements, without having to write similar tests repeatedly. Here's an
example:

First, define a fixture class template, as we did with typed tests:
```c++
template <typename T>
class FooTest : public ::testing::Test {
  ...
};
```
Next, declare that you will define a type-parameterized test case:

```c++
TYPED_TEST_CASE_P(FooTest);
```
Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat
this as many times as you want:

```c++
TYPED_TEST_P(FooTest, DoesBlah) {
  // Inside a test, refer to TypeParam to get the type parameter.
  TypeParam n = 0;
  ...
}

TYPED_TEST_P(FooTest, HasPropertyA) { ... }
```
Now the tricky part: you need to register all test patterns using the
`REGISTER_TYPED_TEST_CASE_P` macro before you can instantiate them. The first
argument of the macro is the test case name; the rest are the names of the tests
in it:

```c++
REGISTER_TYPED_TEST_CASE_P(FooTest,
                           DoesBlah, HasPropertyA);
```
Finally, you are free to instantiate the pattern with the types you want. If you
put the above code in a header file, you can `#include` it in multiple C++
source files and instantiate it multiple times.

```c++
typedef ::testing::Types<char, int, unsigned int> MyTypes;
INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes);
```
To distinguish different instances of the pattern, the first argument to the
`INSTANTIATE_TYPED_TEST_CASE_P` macro is a prefix that will be added to the
actual test case name. Remember to pick unique prefixes for different instances.
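For instance, a second translation unit might instantiate the same pattern
again under a different prefix; the additional type list here is illustrative:

```c++
// In another source file: a second instantiation of the same pattern.
typedef ::testing::Types<long, double> MyOtherTypes;
INSTANTIATE_TYPED_TEST_CASE_P(MyOther, FooTest, MyOtherTypes);
```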
In the special case where the type list contains only one type, you can write
that type directly without `::testing::Types<...>`, like this:

```c++
INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int);
```
You can see `sample6_unittest.cc` for a complete example.

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Mac
## Testing Private Code

If you change your software's internal implementation, your tests should not
break as long as the change is not observable by users. Therefore, **per the
black-box testing principle, most of the time you should test your code through
its public interfaces.**

**If you still find yourself needing to test internal implementation code,
consider if there's a better design.** The desire to test internal
implementation is often a sign that the class is doing too much. Consider
extracting an implementation class, and testing it. Then use that implementation
class in the original class.

If you absolutely have to test non-public interface code though, you can. There
are two cases to consider:
*   Static functions (*not* the same as static member functions!) or unnamed
    namespaces, and
*   Private or protected class members

To test them, we use the following special techniques:
*   Both static functions and definitions/declarations in an unnamed namespace
    are only visible within the same translation unit. To test them, you can
    `#include` the entire `.cc` file being tested in your `*_test.cc` file.
    (Including `.cc` files is not a good way to reuse code - you should not do
    this in production code!)

    However, a better approach is to move the private code into the
    `foo::internal` namespace, where `foo` is the namespace your project
    normally uses, and put the private declarations in a `*-internal.h` file.
    Your production `.cc` files and your tests are allowed to include this
    internal header, but your clients are not. This way, you can fully test your
    internal implementation without leaking it to your clients.
*   Private class members are only accessible from within the class or by
    friends. To access a class' private members, you can declare your test
    fixture as a friend to the class and define accessors in your fixture. Tests
    using the fixture can then access the private members of your production
    class via the accessors in the fixture. Note that even though your fixture
    is a friend to your production class, your tests are not automatically
    friends to it, as they are technically defined in sub-classes of the
    fixture (see the sketch at the end of this section).

    Another way to test private members is to refactor them into an
    implementation class, which is then declared in a `*-internal.h` file. Your
    clients aren't allowed to include this header but your tests can. This is
    known as the
    [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/)
    (Private Implementation) idiom.
    Or, you can declare an individual test as a friend of your class by adding
    this line in the class body:

    ```c++
    FRIEND_TEST(TestCaseName, TestName);
    ```

    For example,

    ```c++
    // foo.h
    #include "gtest/gtest_prod.h"

    class Foo {
      ...
     private:
      FRIEND_TEST(FooTest, BarReturnsZeroOnNull);

      int Bar(void* x);
    };

    // foo_test.cc
    ...
    TEST(FooTest, BarReturnsZeroOnNull) {
      Foo foo;
      EXPECT_EQ(0, foo.Bar(NULL));  // Uses Foo's private member Bar().
    }
    ```
    Pay special attention when your class is defined in a namespace, as you
    should define your test fixtures and tests in the same namespace if you want
    them to be friends of your class. For example, if the code to be tested
    looks like:

    ```c++
    namespace my_namespace {

    class Foo {
      friend class FooTest;
      FRIEND_TEST(FooTest, Bar);
      FRIEND_TEST(FooTest, Baz);
      ... definition of the class Foo ...
    };

    }  // namespace my_namespace
    ```
    Your test code should be something like:

    ```c++
    namespace my_namespace {

    class FooTest : public ::testing::Test {
     protected:
      ...
    };

    TEST_F(FooTest, Bar) { ... }
    TEST_F(FooTest, Baz) { ... }

    }  // namespace my_namespace
    ```
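To make the fixture-as-friend approach above concrete, here is a minimal
sketch; the class `Counter` and its private member `value_` are hypothetical:

```c++
// counter.h - the production class befriends the test fixture.
class Counter {
 public:
  Counter() : value_(0) {}
  void Increment() { ++value_; }

 private:
  friend class CounterTest;  // Grants the fixture access.
  int value_;
};

// counter_test.cc
#include "gtest/gtest.h"

class CounterTest : public ::testing::Test {
 protected:
  // Accessor defined in the fixture: the tests themselves are not
  // friends of Counter, so they must go through the fixture.
  static int value(const Counter& c) { return c.value_; }
};

TEST_F(CounterTest, IncrementAddsOne) {
  Counter c;
  c.Increment();
  EXPECT_EQ(1, value(c));
}
```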
1770 ## "Catching" Failures
1772 If you are building a testing utility on top of googletest, you'll want to test
1773 your utility. What framework would you use to test it? googletest, of course.
1775 The challenge is to verify that your testing utility reports failures correctly.
1776 In frameworks that report a failure by throwing an exception, you could catch
1777 the exception and assert on it. But googletest doesn't use exceptions, so how do
1778 we test that a piece of code generates an expected failure?
`"gtest/gtest-spi.h"` contains some constructs to do this. After #including this
header, you will be able to use

```c++
EXPECT_FATAL_FAILURE(statement, substring);
```

to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
current thread whose message contains the given `substring`, or use

```c++
EXPECT_NONFATAL_FAILURE(statement, substring);
```

if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.
Only failures in the current thread are checked to determine the result of this
type of expectation. If `statement` creates new threads, failures in these
threads are also ignored. If you want to catch failures in other threads as
well, use one of the following macros instead:

```c++
EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```
NOTE: Assertions from multiple threads are currently not supported on Windows.

For technical reasons, there are some caveats:

1.  You cannot stream a failure message to either macro.
1.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
    local non-static variables or non-static members of `this` object.
1.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
    value.
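As a rough sketch of how these macros read in practice (the helper
`ExpectPositive()` is a hypothetical utility under test, not part of
googletest):

```c++
#include "gtest/gtest-spi.h"
#include "gtest/gtest.h"

// A hypothetical testing utility that raises a nonfatal failure.
void ExpectPositive(int n) {
  EXPECT_GT(n, 0) << "value must be positive";
}

TEST(UtilityTest, ReportsFailureOnNegativeInput) {
  // Verifies that the utility produces a nonfatal failure whose
  // message contains the given substring.
  EXPECT_NONFATAL_FAILURE(ExpectPositive(-1), "must be positive");
}
```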
## Getting the Current Test's Name

Sometimes a function may need to know the name of the currently running test.
For example, you may be using the `SetUp()` method of your test fixture to set
the golden file name based on which test is running. The `::testing::TestInfo`
class has this information:
```c++
namespace testing {

class TestInfo {
 public:
  // Returns the test case name and the test name, respectively.
  //
  // Do NOT delete or free the return value - it's managed by the
  // TestInfo class.
  const char* test_case_name() const;
  const char* name() const;
};

}  // namespace testing
```
To obtain a `TestInfo` object for the currently running test, call
`current_test_info()` on the `UnitTest` singleton object:

```c++
// Gets information about the currently running test.
// Do NOT delete the returned object - it's managed by the UnitTest class.
const ::testing::TestInfo* const test_info =
    ::testing::UnitTest::GetInstance()->current_test_info();

printf("We are in test %s of test case %s.\n",
       test_info->name(),
       test_info->test_case_name());
```
`current_test_info()` returns a null pointer if no test is running. In
particular, you cannot find the test case name in `TestCaseSetUp()`,
`TestCaseTearDown()` (where you know the test case name implicitly), or
functions called from them.
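For instance, the golden-file scenario mentioned above might look roughly like
this; the `goldens/` directory layout and file suffix are assumptions for
illustration:

```c++
#include <string>
#include "gtest/gtest.h"

class GoldenFileTest : public ::testing::Test {
 protected:
  virtual void SetUp() {
    const ::testing::TestInfo* const info =
        ::testing::UnitTest::GetInstance()->current_test_info();
    // E.g. "goldens/GoldenFileTest.ProducesExpectedOutput.golden".
    golden_path_ = std::string("goldens/") + info->test_case_name() +
                   "." + info->name() + ".golden";
  }

  std::string golden_path_;
};

TEST_F(GoldenFileTest, ProducesExpectedOutput) {
  // ... compare the output under test against the file at golden_path_ ...
}
```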
**Availability**: Linux, Windows, Mac.
## Extending googletest by Handling Test Events

googletest provides an **event listener API** to let you receive notifications
about the progress of a test program and test failures. The events you can
listen to include the start and end of the test program, a test case, or a test
method, among others. You may use this API to augment or replace the standard
console output, replace the XML output, or provide a completely different form
of output, such as a GUI or a database. You can also use test events as
checkpoints to implement a resource leak checker, for example.

**Availability**: Linux, Windows, Mac.
### Defining Event Listeners

To define an event listener, you subclass either `testing::TestEventListener`
or `testing::EmptyTestEventListener`. The former is an (abstract) interface,
where *each pure virtual method can be overridden to handle a test event* (for
example, when a test starts, the `OnTestStart()` method will be called). The
latter provides an empty implementation of all methods in the interface, such
that a subclass only needs to override the methods it cares about.
When an event is fired, its context is passed to the handler function as an
argument. The following argument types are used:

*   `UnitTest` reflects the state of the entire test program,
*   `TestCase` has information about a test case, which can contain one or more
    tests,
*   `TestInfo` contains the state of a test, and
*   `TestPartResult` represents the result of a test assertion.

An event handler function can examine the argument it receives to find out
interesting information about the event and the test program's state.
Here's an example:

```c++
class MinimalistPrinter : public ::testing::EmptyTestEventListener {
  // Called before a test starts.
  virtual void OnTestStart(const ::testing::TestInfo& test_info) {
    printf("*** Test %s.%s starting.\n",
           test_info.test_case_name(), test_info.name());
  }

  // Called after a failed assertion or a SUCCEED().
  virtual void OnTestPartResult(const ::testing::TestPartResult& test_part_result) {
    printf("%s in %s:%d\n%s\n",
           test_part_result.failed() ? "*** Failure" : "Success",
           test_part_result.file_name(),
           test_part_result.line_number(),
           test_part_result.summary());
  }

  // Called after a test ends.
  virtual void OnTestEnd(const ::testing::TestInfo& test_info) {
    printf("*** Test %s.%s ending.\n",
           test_info.test_case_name(), test_info.name());
  }
};
```
### Using Event Listeners

To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by class `TestEventListeners` -
note the "s" at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:

```c++
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  ::testing::TestEventListeners& listeners =
      ::testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end.  googletest takes the ownership.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
```
There's only one problem: the default test result printer is still in effect, so
its output will mingle with the output from your minimalist printer. To suppress
the default printer, just release it from the event listener list and delete it.
You can do so by adding one line:

```c++
  ...
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
  ...
```

Now, sit back and enjoy a completely different output from your tests. For more
details, see `sample9_unittest.cc`.
You may append more than one listener to the list. When an `On*Start()` or
`OnTestPartResult()` event is fired, the listeners will receive it in the order
they appear in the list (since new listeners are added to the end of the list,
the default text printer and the default XML generator will receive the event
first). An `On*End()` event will be received by the listeners in the *reverse*
order. This allows output by listeners added later to be framed by output from
listeners added earlier.
### Generating Failures in Listeners

You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc)
when processing an event. There are some restrictions:

1.  You cannot generate any failure in `OnTestPartResult()` (otherwise it will
    cause `OnTestPartResult()` to be called recursively).
1.  A listener that handles `OnTestPartResult()` is not allowed to generate any
    failure.

When you add listeners to the listener list, you should put listeners that
handle `OnTestPartResult()` *before* listeners that can generate failures. This
ensures that failures generated by the latter are attributed to the right test
by the former.

See `sample10_unittest.cc` for an example of a failure-raising listener.
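As a minimal sketch of such a listener (the resource counter
`LiveResourceCount()` is a hypothetical function in your own code, not a
googletest API):

```c++
// Hypothetical counter of resources that are still alive.
extern int LiveResourceCount();

class LeakChecker : public ::testing::EmptyTestEventListener {
  // OnTestEnd() may raise failures; OnTestPartResult() may not.
  virtual void OnTestEnd(const ::testing::TestInfo& /* test_info */) {
    EXPECT_EQ(0, LiveResourceCount()) << "This test leaked resources.";
  }
};
```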
## Running Test Programs: Advanced Options

googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.

To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.

If an option is specified both by an environment variable and by a flag, the
latter takes precedence.
### Selecting Tests

#### Listing Test Names

Sometimes it is necessary to list the available tests in a program before
running them so that a filter may be applied if needed. Including the flag
`--gtest_list_tests` overrides all other flags and lists tests in the following
format:

```none
TestCase1.
  TestName1
  TestName2
TestCase2.
  TestName
```

None of the tests listed are actually run if the flag is provided. There is no
corresponding environment variable for this flag.

**Availability**: Linux, Windows, Mac.
#### Running a Subset of the Tests

By default, a googletest program runs all tests the user has defined. Sometimes,
you want to run only a subset of the tests (e.g. for debugging or quickly
verifying a change). If you set the `GTEST_FILTER` environment variable or the
`--gtest_filter` flag to a filter string, googletest will only run the tests
whose full names (in the form of `TestCaseName.TestName`) match the filter.

The format of a filter is a '`:`'-separated list of wildcard patterns (called
the *positive patterns*) optionally followed by a '`-`' and another
'`:`'-separated pattern list (called the *negative patterns*). A test matches
the filter if and only if it matches any of the positive patterns but does not
match any of the negative patterns.
A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can also be
written as `'-NegativePatterns'`.
For example:

*   `./foo_test` Has no flag, and thus runs all its tests.
*   `./foo_test --gtest_filter=*` Also runs everything, due to the single
    match-everything `*` value.
*   `./foo_test --gtest_filter=FooTest.*` Runs everything in test case
    `FooTest`.
*   `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
    name contains either `"Null"` or `"Constructor"`.
*   `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
*   `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
    case `FooTest` except `FooTest.Bar`.
*   `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
    everything in test case `FooTest` except `FooTest.Bar` and everything in
    test case `BarTest` except `BarTest.Foo`.
#### Temporarily Disabling Tests

If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).

If you need to disable all tests in a test case, you can either add `DISABLED_`
to the front of the name of each test, or alternatively add it to the front of
the test case name.

For example, the following tests won't be run by googletest, even though they
will still be compiled:
```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }

class DISABLED_BarTest : public ::testing::Test { ... };

// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```
NOTE: This feature should only be used for temporary pain-relief. You still have
to fix the disabled tests at a later date. As a reminder, googletest will print
a banner warning you if a test program contains any disabled tests.

TIP: You can easily count the number of disabled tests you have using `gsearch`
and/or `grep`. This number can be used as a metric for improving your test
quality.

**Availability**: Linux, Windows, Mac.
#### Temporarily Enabling Disabled Tests

To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag or set the
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
You can combine this with the `--gtest_filter` flag to further select which
disabled tests to run.
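For example, the following invocation would run only the disabled tests whose
full names match the given pattern (the test name is illustrative):

```none
$ ./foo_test --gtest_also_run_disabled_tests --gtest_filter=*DISABLED_DoesAbc
```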
**Availability**: Linux, Windows, Mac.
### Repeating the Tests

Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
will fail only 1% of the time, making it rather hard to reproduce the bug under
a debugger. This can be a major source of frustration.

The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in
a program many times. Hopefully, a flaky test will eventually fail and give you
a chance to debug. Here's how to use it:
```none
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.

$ foo_test --gtest_repeat=-1
A negative count means repeating forever.

$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure.  This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.

$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*
Repeat the tests whose name matches the filter 1000 times.
```
If your test program contains
[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be
repeated in each iteration as well, as the flakiness may be in it. You can also
specify the repeat count by setting the `GTEST_REPEAT` environment variable.

**Availability**: Linux, Windows, Mac.
### Shuffling the Tests

You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random order.
This helps to reveal bad dependencies between tests.

By default, googletest uses a random seed calculated from the current time.
Therefore you'll get a different order every time. The console output includes
the random seed value, such that you can reproduce an order-related test failure
later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`
flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an
integer in the range [0, 99999]. The seed value 0 is special: it tells
googletest to do the default behavior of calculating the seed from the current
time.

If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.
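For instance, to reproduce an ordering that failed earlier, you might re-run
with the seed printed in that run's console output (the seed value here is
illustrative):

```none
$ ./foo_test --gtest_shuffle --gtest_random_seed=12345
```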
**Availability**: Linux, Windows, Mac.
### Controlling Test Output

#### Colored Terminal Output

googletest can use colors in its terminal output to make it easier to spot the
important information (on a color terminal, the bracketed status tags below are
shown in green for passing tests and red for failures):

```none
[----------] 1 test from FooTest
[ RUN      ] FooTest.DoesAbc
[       OK ] FooTest.DoesAbc
[----------] 2 tests from BarTest
[ RUN      ] BarTest.HasXyzProperty
[       OK ] BarTest.HasXyzProperty
[ RUN      ] BarTest.ReturnsTrueOnSuccess
... some error messages ...
[  FAILED  ] BarTest.ReturnsTrueOnSuccess
...
[==========] 30 tests from 14 test cases ran.
[  PASSED  ] 28 tests.
[  FAILED  ] 2 tests, listed below:
[  FAILED  ] BarTest.ReturnsTrueOnSuccess
[  FAILED  ] AnotherTest.DoesXyz
```
You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on non-Windows
platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.

**Availability**: Linux, Windows, Mac.
#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag, or
set the `GTEST_PRINT_TIME` environment variable to `0`.

**Availability**: Linux, Windows, Mac.
#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings as well as in readable UTF-8 text if
they contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
text because, for example, you don't have a UTF-8 compatible output medium, run
the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`
environment variable to `0`.

**Availability**: Linux, Windows, Mac.
#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can help
you identify slow tests. It can also be consumed by continuous-build dashboards
to show per-test-method error messages.

To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string `"xml"`,
in which case the output can be found in the `test_detail.xml` file in the
current directory.
If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), googletest will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.
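For instance, under the behavior just described these invocations would produce
(paths illustrative):

```none
$ ./foo_test --gtest_output=xml             # writes ./test_detail.xml
$ ./foo_test --gtest_output=xml:reports/    # writes reports/foo_test.xml
$ GTEST_OUTPUT=xml:report.xml ./foo_test    # same, via the environment variable
```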
The report is based on the `junitreport` Ant task. Since that format was
originally intended for Java, a little interpretation is required to make it
apply to googletest tests, as shown here:

```xml
<testsuites name="AllTests" ...>
  <testsuite name="test_case_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```

*   The root `<testsuites>` element corresponds to the entire test program.
*   `<testsuite>` elements correspond to googletest test cases.
*   `<testcase>` elements correspond to googletest test functions.
For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
    <testcase name="Addition" status="run" time="0.007" classname="">
      <failure message="Value of: add(1, 1)&#x0A;  Actual: 3&#x0A;Expected: 2" type=""/>
      <failure message="Value of: add(1, -1)&#x0A;  Actual: 1&#x0A;Expected: 0" type=""/>
    </testcase>
    <testcase name="Subtraction" status="run" time="0.005" classname="" />
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
    <testcase name="NonContradiction" status="run" time="0.005" classname="" />
  </testsuite>
</testsuites>
```
*   The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how
    many test functions the googletest program or test case contains, while the
    `failures` attribute tells how many of them failed.

*   The `time` attribute expresses the duration of the test, test case, or
    entire test program in seconds.

*   The `timestamp` attribute records the local date and time of the test
    execution.

*   Each `<failure>` element corresponds to a single failed googletest
    assertion.

**Availability**: Linux, Windows, Mac.
#### Generating a JSON Report

googletest can also emit a JSON report as an alternative format to XML. To
generate the JSON report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"json:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"json"`, in which case the output can be found in the `test_detail.json` file
in the current directory.

The report format conforms to the following JSON Schema:
2304 "$schema": "http://json-schema.org/schema#",
2310 "name": { "type": "string" },
2311 "tests": { "type": "integer" },
2312 "failures": { "type": "integer" },
2313 "disabled": { "type": "integer" },
2314 "time": { "type": "string" },
2318 "$ref": "#/definitions/TestInfo"
2326 "name": { "type": "string" },
2329 "enum": ["RUN", "NOTRUN"]
2331 "time": { "type": "string" },
2332 "classname": { "type": "string" },
2336 "$ref": "#/definitions/Failure"
2344 "failures": { "type": "string" },
2345 "type": { "type": "string" }
2350 "tests": { "type": "integer" },
2351 "failures": { "type": "integer" },
2352 "disabled": { "type": "integer" },
2353 "errors": { "type": "integer" },
2356 "format": "date-time"
2358 "time": { "type": "string" },
2359 "name": { "type": "string" },
2363 "$ref": "#/definitions/TestCase"
The report's format also conforms to the following Proto3 definition, using the
[JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json):
```proto
syntax = "proto3";

package googletest;

import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";

message UnitTest {
  int32 tests = 1;
  int32 failures = 2;
  int32 disabled = 3;
  int32 errors = 4;
  google.protobuf.Timestamp timestamp = 5;
  google.protobuf.Duration time = 6;
  string name = 7;
  repeated TestCase testsuites = 8;
}

message TestCase {
  string name = 1;
  int32 tests = 2;
  int32 failures = 3;
  int32 disabled = 4;
  int32 errors = 5;
  google.protobuf.Duration time = 6;
  repeated TestInfo testsuite = 7;
}

message TestInfo {
  string name = 1;

  enum Status {
    RUN = 0;
    NOTRUN = 1;
  }

  Status status = 2;
  google.protobuf.Duration time = 3;
  string classname = 4;

  message Failure {
    string failures = 1;
    string type = 2;
  }

  repeated Failure failures = 5;
}
```
For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:
2435 "timestamp": "2011-10-31T18:52:42Z"
2452 "message": "Value of: add(1, 1)\x0A Actual: 3\x0AExpected: 2",
2456 "message": "Value of: add(1, -1)\x0A Actual: 1\x0AExpected: 0",
2462 "name": "Subtraction",
2470 "name": "LogicTest",
2477 "name": "NonContradiction",
IMPORTANT: The exact format of the JSON document is subject to change.

**Availability**: Linux, Windows, Mac.
### Controlling How Failures Are Reported

#### Turning Assertion Failures into Break-Points

When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.

To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.

**Availability**: Linux, Windows, Mac.
#### Disabling Catching Test-Thrown Exceptions

googletest can be used either with or without exceptions enabled. If a test
throws a C++ exception or (on Windows) a structured exception (SEH), by default
googletest catches it, reports it as a test failure, and continues with the next
test method. This maximizes the coverage of a test run. Also, on Windows an
uncaught exception will cause a pop-up window, so catching the exceptions allows
you to run the tests automatically.

When debugging the test failures, however, you may instead want the exceptions
to be handled by the debugger, such that you can examine the call stack when an
exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.

**Availability**: Linux, Windows, Mac.