Screenplay: Unittest: GTest Basics¶
Before We Start¶
Installation¶
Fedora:
# dnf install gtest-devel
# dnf install gmock-devel
Debian/Ubuntu:
# apt-get install libgtest-dev
# apt-get install google-mock
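A possible stumbling block (an assumption worth verifying on your distribution): some Debian/Ubuntu releases ship libgtest-dev as sources only, typically under /usr/src/googletest, without prebuilt libraries. In that case something along the following lines builds and installs them so that -lgtest links:
# cd /usr/src/googletest
# cmake .
# make
# make install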
Documentation¶
Simplest Test: No Test¶
#include <gtest/gtest.h>
int main(int argc, char** argv)
{
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Compile and run:
$ g++ simple.cc -o simple -lgtest
$ ./simple
[==========] Running 0 tests from 0 test cases.
[==========] 0 tests from 0 test cases ran. (0 ms total)
[ PASSED ] 0 tests.
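Worth noting (not visible above): RUN_ALL_TESTS() returns 0 when all tests pass and non-zero otherwise, so returning its value from main() makes the result available as the process exit status:
$ ./simple > /dev/null; echo $?
0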
Next Simplest: Single File¶
Aha, so RUN_ALL_TESTS() runs all tests. There just have to be tests to run, so let's add two suites.
Discussion:
Test structure, suites
#include <gtest/gtest.h>
TEST(OneSuite, Test1) {}
TEST(OneSuite, Test2) {}
TEST(AnotherSuite, Test1) {}
TEST(AnotherSuite, Test2) {}
int main(int argc, char** argv)
{
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Compile and run:
$ g++ self-contained.cc -o self-contained -lgtest
$ ./self-contained
[==========] Running 4 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 2 tests from OneSuite
[ RUN ] OneSuite.Test1
[ OK ] OneSuite.Test1 (0 ms)
[ RUN ] OneSuite.Test2
[ OK ] OneSuite.Test2 (0 ms)
[----------] 2 tests from OneSuite (0 ms total)
[----------] 2 tests from AnotherSuite
[ RUN ] AnotherSuite.Test1
[ OK ] AnotherSuite.Test1 (0 ms)
[ RUN ] AnotherSuite.Test2
[ OK ] AnotherSuite.Test2 (0 ms)
[----------] 2 tests from AnotherSuite (0 ms total)
[----------] Global test environment tear-down
[==========] 4 tests from 2 test cases ran. (1 ms total)
[ PASSED ] 4 tests.
Executing Tests Selectively¶
Most important: RTFM
$ ./self-contained --help
...
--gtest_filter=POSTIVE_PATTERNS[-NEGATIVE_PATTERNS]
Run only the tests whose name matches one of the positive patterns but
none of the negative patterns. '?' matches any single character; '*'
matches any substring; ':' separates two patterns.
...
Running the program executes all the tests it contains, which might be many.
$ ./self-contained --gtest_list_tests
OneSuite.
Test1
Test2
AnotherSuite.
Test1
Test2
Want to run only OneSuite.Test1,
$ ./self-contained --gtest_filter='OneSuite.Test1'
Note: Google Test filter = OneSuite.Test1
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from OneSuite
[ RUN ] OneSuite.Test1
[ OK ] OneSuite.Test1 (0 ms)
[----------] 1 test from OneSuite (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (0 ms total)
[ PASSED ] 1 test.
The filter is actually a shell-style glob,
$ ./self-contained --gtest_filter='OneSuite*'
Note: Google Test filter = OneSuite*
[==========] Running 2 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 2 tests from OneSuite
[ RUN ] OneSuite.Test1
[ OK ] OneSuite.Test1 (0 ms)
[ RUN ] OneSuite.Test2
[ OK ] OneSuite.Test2 (0 ms)
[----------] 2 tests from OneSuite (0 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (0 ms total)
[ PASSED ] 2 tests.
$ ./self-contained --gtest_filter='*.Test1'
...
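The help text also mentions ':' to separate multiple patterns and '-' to introduce negative patterns. A couple of illustrative invocations (output omitted):
$ ./self-contained --gtest_filter='OneSuite.Test1:AnotherSuite.Test2'
$ ./self-contained --gtest_filter='*-AnotherSuite.*'
The first runs exactly the two named tests; the second runs everything except AnotherSuite.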
Fatal Failure¶
FAIL()¶
FAIL() is unconditionally fatal: the test stops immediately at that point.
#include <gtest/gtest.h>
#include <iostream>
TEST(FailDemo, FailIsFatal)
{
FAIL();
std::cout << "succeeded" << std::endl;
}
int main(int argc, char** argv)
{
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Compile and run:
$ g++ fail.cc -o fail -lgtest
$ ./fail
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from FailDemo
[ RUN ] FailDemo.FailIsFatal
fail.cc:6: Failure
Failed
[ FAILED ] FailDemo.FailIsFatal (1 ms)
[----------] 1 test from FailDemo (1 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (1 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] FailDemo.FailIsFatal
Discussion
We do not see the output after FAIL()
ASSERT_*()¶
#include <gtest/gtest.h>
#include <iostream>
TEST(AssertDemo, AssertIsFatal)
{
ASSERT_TRUE('X' == 'U');
ASSERT_EQ('X', 'U');
std::cout << "succeeded" << std::endl;
}
int main(int argc, char** argv)
{
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Compile and run:
$ g++ assert.cc -o assert -lgtest
$ ./assert
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from AssertDemo
[ RUN ] AssertDemo.AssertIsFatal
assert.cc:6: Failure
Value of: 'X' == 'U'
Actual: false
Expected: true
[ FAILED ] AssertDemo.AssertIsFatal (1 ms)
[----------] 1 test from AssertDemo (1 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (1 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] AssertDemo.AssertIsFatal
1 FAILED TEST
Discussion
assert.cc has two assertions that mean the same thing
We see only one failure reported
The second isn't executed because the first fails
That is the essence of the term fatal: stop immediately
Non-Fatal Failure¶
Instead of using ASSERT_*() like above, use EXPECT_*().
#include <gtest/gtest.h>
TEST(ExpectDemo, ExpectIsNonFatal)
{
EXPECT_TRUE('X' == 'U');
EXPECT_EQ('X', 'U');
}
int main(int argc, char** argv)
{
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Compile and run:
$ g++ expect.cc -o expect -lgtest
$ ./expect
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from ExpectDemo
[ RUN ] ExpectDemo.ExpectIsNonFatal
expect.cc:5: Failure
Value of: 'X' == 'U'
Actual: false
Expected: true
expect.cc:6: Failure
Expected equality of these values:
'X'
Which is: 'X' (88, 0x58)
'U'
Which is: 'U' (85, 0x55)
[ FAILED ] ExpectDemo.ExpectIsNonFatal (0 ms)
[----------] 1 test from ExpectDemo (1 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (1 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] ExpectDemo.ExpectIsNonFatal
1 FAILED TEST
Discussion
We see two failures reported
Both EXPECT_*() checks are evaluated
The test continues to run
It is reported as failed nonetheless
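The rule of thumb that follows from this, shown as a sketch (the helper and the values are made up, not from the original): use ASSERT_*() for preconditions that make the rest of the test meaningless, and EXPECT_*() for the individual checks.
#include <gtest/gtest.h>
#include <memory>
#include <vector>

// Sketch only: make_numbers() is a stand-in for "code under test that may fail"
static std::unique_ptr<std::vector<int>> make_numbers()
{
    return std::make_unique<std::vector<int>>(std::initializer_list<int>{1, 2, 3});
}

TEST(MixDemo, AssertForPreconditionsExpectForChecks)
{
    auto numbers = make_numbers();
    ASSERT_NE(nullptr, numbers);     // fatal: everything below dereferences the pointer
    ASSERT_EQ(3u, numbers->size());  // fatal: the index checks below rely on the size
    EXPECT_EQ(1, (*numbers)[0]);     // non-fatal: all remaining checks still run
    EXPECT_EQ(2, (*numbers)[1]);
    EXPECT_EQ(3, (*numbers)[2]);
}
Link it with the usual main() from above, or with -lgtest_main, which provides one.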
Custom Message Output¶
All test macros support the << ostream operator for streaming a custom message.
#include <gtest/gtest.h>
TEST(FailDemo, FailIsFatal)
{
FAIL() << "argh, this is fatal";
}
int main(int argc, char** argv)
{
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
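The same works with the ASSERT_*()/EXPECT_*() macros. A small sketch (names and values made up) where the streamed message is printed only because the check fails:
#include <gtest/gtest.h>

TEST(MessageDemo, StreamedMessage)
{
    int expected = 4;
    int actual = 2 + 3;  // deliberately wrong so that the message gets printed
    EXPECT_EQ(expected, actual) << "arithmetic went off the rails, got " << actual;
}

int main(int argc, char** argv)
{
    testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}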
Last Not Least: Structure¶
Discussion
Hm. This way test programs become rather large.
Running every single one is cumbersome and error-prone.
“Lost Tests Syndrome”
Could use the preprocessor to include tests :-) but no
Plan
Externalize tests into separate .cc files
Link them together into a single Test Runner
Test Cases and Suites¶
Copy what we have down into a tests/ subdirectory (for example, to have some structure)
Small standalone fragments, waiting to be aggregated (the Test Runner that links them together is sketched below, after the fragments)
They do nothing on their own
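// tests/suites.cc (file name assumed)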
#include <gtest/gtest.h>
TEST(OneSuite, Test1) {}
TEST(OneSuite, Test2) {}
TEST(AnotherSuite, Test1) {}
TEST(AnotherSuite, Test2) {}
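// tests/fail.cc (file name assumed)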
#include <gtest/gtest.h>
#include <iostream>
TEST(FailDemo, FailIsFatal)
{
FAIL();
std::cout << "succeeded" << std::endl;
}
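// tests/assert.cc (file name assumed)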
#include <gtest/gtest.h>
#include <iostream>
TEST(AssertDemo, AssertIsFatal)
{
ASSERT_TRUE('X' == 'U');
ASSERT_EQ('X', 'U');
std::cout << "succeeded" << std::endl;
}
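// tests/expect.cc (file name assumed)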
#include <gtest/gtest.h>
TEST(ExpectDemo, ExpectIsNonFatal)
{
EXPECT_TRUE('X' == 'U');
EXPECT_EQ('X', 'U');
}
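What is still missing from the plan is the single Test Runner that the fragments get linked into. A minimal sketch, with the file names assumed above (tests/suites.cc, tests/fail.cc, tests/assert.cc, tests/expect.cc):
// tests/runner.cc (file name assumed): the only main() in the whole test binary
#include <gtest/gtest.h>

int main(int argc, char** argv)
{
    testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
Compile and link everything into one binary (again only a sketch; adjust paths and flags as needed):
$ g++ tests/runner.cc tests/suites.cc tests/fail.cc tests/assert.cc tests/expect.cc -o test-runner -lgtest
$ ./test-runner --gtest_list_tests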