Best Practices for Java testing with JUnit

JUnit is a popular testing library for Java applications, and I used it extensively while working at Amazon on the numerous Java applications and services there. Along the way I came across a number of anti-patterns and opportunities to improve the quality of test code. This post introduces many of the tricks and patterns I’ve learned and shared with my coworkers, and now want to share more broadly.

Another library worth knowing is Mockito, which I use extensively in JUnit test cases and reference throughout this post.

These are all real things that I’ve seen developers do.

Migrate from JUnit4 to JUnit5

If you’re still using JUnit4, why should you migrate? Much of this guide references features in JUnit5.

  • JUnit5 has been out since 2017
  • JUnit5 removes several testing paradigms that contributed to broken test cases
  • JUnit5 test suites can be run in the same package as JUnit4 test cases, allowing you to migrate gradually (reference)
  • JUnit5’s Extension API works much better than JUnit4’s runner mechanism and can be used to compose different cross-cutting testing concerns (see the sketch below)

https://junit.org/junit5/docs/current/user-guide/#migrating-from-junit4
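
For example, a test class that previously relied on a JUnit4 runner for Mockito can switch to the equivalent JUnit5 extension. This is a minimal sketch assuming the mockito-junit-jupiter dependency is on the classpath; FooService and FooDependency are hypothetical names:

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

// JUnit4 allowed only a single runner: @RunWith(MockitoJUnitRunner.class).
// JUnit5 extensions compose, so Mockito can sit alongside other extensions.
@ExtendWith(MockitoExtension.class)
class FooServiceTest {

    @Mock
    private FooDependency fooDependency; // hypothetical collaborator

    @InjectMocks
    private FooService fooService; // hypothetical class under test

    @Test
    void testDoesSomething() {
        fooService.doSomething();
        // ... assertions and verifications
    }
}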

Good test cases give meaningful error messages

A failing test case that doesn’t tell you why it failed is not very developer friendly.

Test cases often capture business logic and decisions about how code should behave. For example, they check that action X happens in case Y, but months later it can be hard to understand why X should happen in that case. When it’s not obvious why a test case asserts a particular situation, provide a useful Javadoc above the method or a comment above the assertion to clarify it for other developers.

Remember: Just because you know why it exists today, doesn’t mean you’ll remember why you did something 6 months down the road.

/**
 * CorporateId is a mandatory field on the Device entity. Without it
 * we'd fail the Frobinator process, thus the DAO must reject it.
 **/
@Test
void testNoRecordSavedWhenCorporationMissing() {
   // ...
}

Prefer using assertEquals and friends over assertTrue

Given the following two test cases that both assert on the size of a list:

@Test
void testAssertTrue() {
    List<Long> myList = new ArrayList<>();
    Assertions.assertTrue(myList.size() == 1);
}


@Test
void testAssertEquals() {
    List<Long> myList = new ArrayList<>();
    Assertions.assertEquals(1, myList.size());
}

Which error message is easier to read?

org.opentest4j.AssertionFailedError: 
Expected :true
Actual   :false

	at FooTest.testAssertTrue(FooTest.java:7)

Or this one using assertEquals?

org.opentest4j.AssertionFailedError: 
Expected :1
Actual   :0

	at FooTest.testAssertEquals(FooTest.java:13)

When assertTrue fails, it doesn’t tell you why it failed; you have to go to the line of code to find out. Imagine five failing test cases that all give no useful message. It would take a long time to track down the problems.

Instead, take a look at the Assertions class in JUnit5 and find a relevant method that best matches your assertion. Some examples:

  • assertArrayEquals
  • assertEquals
  • assertInstanceOf
  • assertNotEquals
  • assertNotNull
  • assertSame
  • assertThrows

If assertTrue is your only option, provide an assertion message (see next item) to help clarify the problem.
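
As a quick illustration of the specialized assertions listed above, here’s a sketch showing a few of them in use (assertInstanceOf requires JUnit 5.8 or later):

@Test
void testSpecializedAssertions() {
    List<Long> myList = new ArrayList<>();
    myList.add(5L);

    // Each of these produces an "expected vs. actual" style message on failure
    Assertions.assertEquals(1, myList.size());
    Assertions.assertNotNull(myList.get(0));
    Assertions.assertInstanceOf(ArrayList.class, myList);
    Assertions.assertArrayEquals(new Long[] {5L}, myList.toArray(new Long[0]));
    Assertions.assertThrows(IndexOutOfBoundsException.class, () -> myList.get(10));
}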

Provide assertion messages when the problem is non-obvious

An assertion failure message becomes non-obvious when the message does not clearly convey what property is being compared.

@Test
void testAssertEquals() {
    List<Long> myList = new ArrayList<>();
    Assertions.assertEquals(1, myList.size());
}

When this fails, it just states the expected value is 1, but actual is 0. It doesn’t say why.

@Test
void testAssertEquals() {
    List<Long> myList = new ArrayList<>();
    myList.add(5L);
    Assertions.assertEquals(1, myList.size(), 
        "The array is supposed to contain one item because we added one to it.");
}

Adding a message at the end can help clarify the problem to the developer. It’s not required to add messages to all assertions.
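
If building the message is expensive, JUnit5 also accepts a message Supplier, so the string is only constructed when the assertion actually fails. A minimal sketch:

@Test
void testAssertEqualsWithLazyMessage() {
    List<Long> myList = new ArrayList<>();
    myList.add(5L);
    // The Supplier is only invoked if the assertion fails
    Assertions.assertEquals(1, myList.size(),
        () -> "Expected exactly one item but the list contained: " + myList);
}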

Use assertAll when testing different properties on an entity

The assertAll method is a special assertion that lets you perform multiple assertions and fail if any of them fail. If multiple assertions fail, it prints out every failure, making it easy to see the problems at a glance:

@Test
void testAssertEquals() {
    MyObject object = new MyObject("test", "foobar");

    Assertions.assertNotNull(object);
    // Further asserts depend on the one above, so it must stay separate
    Assertions.assertAll("MyObject check",
        () -> Assertions.assertEquals("testf", object.first, "First field"),
        () -> Assertions.assertEquals("test", object.second, "Second field")
    );
}

When this fails, it clearly states all the problems at once so I can tackle them instead of fixing one thing, rerunning the tests, then fixing the next problem, until it finally goes green:

org.opentest4j.MultipleFailuresError: MyObject check (2 failures)
	org.opentest4j.AssertionFailedError: First field ==> expected: <testf> but was: <test>
	org.opentest4j.AssertionFailedError: Second field ==> expected: <test> but was: <foobar>

Note that you shouldn’t blindly put every assertion into a single assertAll call. If some assertions depend on previous results, separate them into multiple phases of assertions, as in the example below. Otherwise, the later assertion failures produce increasingly meaningless messages.

Assertions.assertNotNull(object);
Foo foo = object.getFoo();
Assertions.assertNotNull(foo);
Assertions.assertAll(
    () -> Assertions.assertEquals("baz", foo.getBaz()),
    () -> Assertions.assertEquals("bar", foo.getBar())
);

Don’t verify inside a finally block

In Java, finally blocks execute even if an exception is thrown. If the code under test fails an assertion or throws an exception, running verifications in the finally block masks the original exception: instead of the failure that matters, you see a verification failure that was bound to happen because the code under test never finished.

Bad:

@Test
public void testSomething() {
  try {
    myObject.callSomething(); // <-- What if this throws?
  } finally {
    Mockito.verify(someObject).didSomething();
  }
}

Instead, avoid running verifications in a finally block; run them after the code under test completes. That way, when your test case fails, you always see the most relevant and useful exception message.

Better:

@Test
public void testSomething() {
  myObject.callSomething();
  Mockito.verify(someObject).didSomething();
}

Test Case Accuracy

A test case must be able to fail

Some developers write a unit test that merely executes their newly written code to get the code coverage, then consider that a sufficient test. A test case that can never fail isn’t useful; even worse, if it doesn’t catch bugs, it gives a false sense of security that the business logic works correctly.
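
For example, a test like this hypothetical sketch exercises the code (and earns coverage) but contains no assertions, so it can never fail no matter how broken the method becomes (orderService and order are placeholder names):

@Test
void testCalculateTotal() {
    // Executes the code under test, but asserts nothing:
    // this test passes even if calculateTotal() returns garbage.
    orderService.calculateTotal(order);
}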

Ensure that your test cases do fail when your code has bugs or problems. Try introducing a bug and checking that your test cases catch it. Another strategy is TDD (Test-Driven Development): you write failing test cases and their assertions first, then write the code that makes them pass.

The PIT Mutation Testing framework is another way to ensure that your test cases effectively exercise the code. When you run PIT against your unit tests, it applies small modifications (mutations) to the production code and checks that at least one test case fails for each mutation.
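
To illustrate the idea (this is a sketch of the concept, not actual PIT output), one common mutation flips a conditional boundary; if every test still passes after the change, the mutant survives and PIT reports the gap:

public class EligibilityChecker {
    public boolean isEligible(int age) {
        // Original production code
        return age >= 18;
        // A conditional-boundary mutant would change this to: return age > 18;
        // If no test fails against that mutant, nothing actually tests age == 18.
    }
}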

IntelliJ Inspection Name: Java -> JUnit -> JUnit test method without any assertions

Don’t use @Test(expected = *Exception.class) (JUnit4)

In JUnit4, it’s common to write unit tests that look like this to test that your code throws exceptions in error cases:

@Test(expected = NullPointerException.class)
public void testArgumentAssert() {
  // If a NullPointerException is thrown ANYWHERE in this test, it'll pass.

  // Initialization works
  something.testFunction(); // What if "something" was null?

  object.callSomething(null);

  // If the exception is thrown, this never executes
  Mockito.verify(something).importantMethod();
}

However, this introduces the risk of false test passes, i.e. the test can pass when it should fail.

NullPointerException is commonly thrown by parameter validators (e.g. Lombok’s @NonNull) when the caller passes null for a parameter, but it’s also thrown when you call a method on a null reference. Devs often want to test these parameter validators, but since an NPE can mean a variety of things, such test cases end up being low value.

I also frequently see developers expect a RuntimeException, but it has many subclasses. If you expect such a broad type, how do you know the exception you caught is the one you expected?

Additionally, there’s a Mockito verification after the exception is thrown. This line will never execute, so your Mockito verification is entirely worthless.

Better:

Instead, upgrade to JUnit 5 and use the new Assertions.assertThrows method:

@Test
public void testArgumentAssert() {
  // Initialization works
  something.testFunction(); 

  // We guarantee that only NPEs thrown on this line are considered passes
  NullPointerException ex = Assertions.assertThrows(NullPointerException.class, () -> object.callSomething(null));

  // With the exception, we can verify that it's the exception that we expected.
  Assertions.assertEquals("Argument foo should not be null", ex.getMessage());

  Mockito.verify(something).importantMethod();
}

With a handle to the actual exception, you know it was thrown on the line you expected, though some care is still needed to ensure you’re catching exactly what you expect.

Reducing Boilerplate

Create helper test methods

Test case readability matters. Sometimes a test suite contains many test methods that all create test harnesses, create mocks, walk through test flows, or perform validations, and they end up looking the same over and over again.

@Test
public void testSomeCase1() {
   Mockito.doReturn(xyz).when(fooBar).someCall(foo);
   Mockito.doReturn(abc).when(xyz).someCall();
   // more test case initialization

   AResult myResult = myClass.performImportantAction(fooBar, 1);
   // Some test code

   Assertions.assertEquals("expected", myRequest.getField());
   // more Assertions
}

@Test
public void testSomeCase2() {
   Mockito.doReturn(xyz).when(fooBar).someCall(foo);
   Mockito.doReturn(abc).when(xyz).someCall();
   // more test case initialization

   AResult myResult = myClass.performImportantAction(fooBar, 2);
   // Some test code

   Assertions.assertEquals("expected", myRequest.getField());
   // more Assertions
}

@Test
public void testSomeCase3() {
   Mockito.doReturn(xyz).when(fooBar).someCall(foo);
   Mockito.doReturn(abc).when(xyz).someCall();
   // more test case initialization

   AResult myResult = myClass.performImportantAction(fooBar, 3);
   // Some test code

   Assertions.assertEquals("expected", myRequest.getField());
   // more Assertions
}

Sure, we added test coverage that verified the code worked, but it’s an unreadable mess. Code reviewers can’t easily confirm it’s testing the right thing, and other developers can’t understand it. Instead, create reusable methods.

In the example below, I moved the common logic out to separate methods. Mocks needed by all test cases go into the @BeforeEach method, mocks needed only by some test cases go into private methods that are called where appropriate, and a wrapper method calls the target method and performs the common assertions.

@BeforeEach
void beforeEach() {
   // Place common initialization code here.
   // JUnit calls it before every test case
   Mockito.doReturn(xyz).when(fooBar).someCall(foo);
   Mockito.doReturn(abc).when(xyz).someCall();
}

// If some test cases have differing situations,
// create methods that initialize the mocks they need
private void mockSpecialCase1() {
   Mockito.doReturn(123).when(xyz).calculateValue();
}

private AResult performActionAndVerify(int caseNumber) {
   AResult myResult = myClass.performImportantAction(fooBar, caseNumber);

   // Perform any common assertions that always exist for all test cases
   Assertions.assertEquals("expected", myRequest.getField());

   return myResult;
}

@Test
public void testSomeCase1() {
   mockSpecialCase1();

   performActionAndVerify(1);
}

@Test
public void testSomeCase2() {
   AResult myResult = performActionAndVerify(2);
   // Some test code

   Assertions.assertEquals("something", myRequest.getAnotherField());
   // more Assertions
}

@Test
public void testSomeCase3() {
   AResult myResult = performActionAndVerify(3);

   Assertions.assertEquals(5, myRequest.getAnotherField());
}

Each test case becomes easier to read as there’s less irrelevant code in each method.

JUnit5 Extensions

Do you find yourself writing unit test classes that contain lots of the same initialization or teardown logic? The previous examples covered reducing duplication within a single class, but sometimes multiple classes all have to do setup work that isn’t the point of the test class.

For example, when unit testing a service that emits metrics or X-Ray traces, you may end up with a bunch of code in many different test classes responsible for initializing, collecting, and verifying those metrics. Each test class shouldn’t have to handle this logic itself; it should delegate to a common class.

Instead of multiple classes all looking like this:

public class FooControllerTest {
    private FooController fooController = new FooController();
   
    @BeforeEach
    void beforeEach() {
       // Initialize metrics
       // Create mocks
    }

    @AfterEach
    void afterEach() {
       // Tear down metrics
    }
}

public class BarControllerTest {
    private BarController barController = new BarController();
   
    @BeforeEach
    void beforeEach() {
       // Initialize metrics
       // Create mocks
    }

    @AfterEach
    void afterEach() {
       // Tear down metrics
    }
}

For this, JUnit5 provides the Extension API, which offers many different places to hook into the test lifecycle. Here’s how an example extension can clean up a test class:

public class FooControllerTest {
    private FooController fooController = new FooController();

    // MetricsExtension is a custom extension that you write
    // to handle the metrics setup, teardown, and assertions
    @RegisterExtension
    static MetricsExtension metricsExtension = new MetricsExtension();

    @Test
    void testApiMethod() {
       fooController.callApiMethod();

       // --- Assertions
       // Extension classes can be used to handle assertions too
       metricsExtension.assertMetricsEmitted("DatabaseSuccess", 1);
    }
}
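
The extension itself implements the lifecycle callback interfaces it needs. The sketch below shows one possible shape for MetricsExtension, assuming a hypothetical MetricsCollector helper; the real setup and teardown logic depends on your metrics library:

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class MetricsExtension implements BeforeEachCallback, AfterEachCallback {

    private MetricsCollector collector; // hypothetical helper for your metrics library

    @Override
    public void beforeEach(ExtensionContext context) {
        // Initialize and start collecting metrics before every test
        collector = new MetricsCollector();
        collector.start();
    }

    @Override
    public void afterEach(ExtensionContext context) {
        // Tear down metrics after every test
        collector.stop();
    }

    // Extensions can expose assertion helpers to the test classes that register them
    public void assertMetricsEmitted(String metricName, int expectedCount) {
        Assertions.assertEquals(expectedCount, collector.countFor(metricName),
            "Expected metric " + metricName + " to be emitted " + expectedCount + " time(s)");
    }
}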

Appendix

Enabling IntelliJ Inspections

IntelliJ’s inspections provide a number of extra static analysis checks that can catch bugs. To enable the ones mentioned in this blog post, go to File -> Settings -> Editor -> Inspections, then search for the inspection by name.

All of the testing-related inspections I have enabled:

  • Java -> JUnit
    • assertEquals() called on array
    • JUnit test method in product source
    • JUnit test method without any assertions
    • JUnit 5 malformed @Nested class
    • JUnit 5 malformed repeated test
    • Malformed setUp() or tearDown()
    • Malformed @Before or @After method
    • Malformed @BeforeClass or @BeforeAll method
    • Malformed test method
    • Parameterized test class without data provider method
    • Test class with no tests