Octopull/Java: Introducing JUnit

Introducing JUnit by Alan Griffiths

One of the interesting things about software development is that there is always something new to discover. And while the JUnit test framework isn't new in "internet time", it is fairly new to me - partly because I've only recently joined an organisation that works primarily in Java.

Looking at the test harnesses that I wrote prior to investigating JUnit, there is a striking similarity to the structure of the JUnit framework - the main difference being that my test harness classes had repetitive segments of "boiler plate" code in addition to the tests themselves. I had been thinking of ways to factor this code out into a unit test framework - but my embryonic design did not reach the level of evolution of JUnit. I'd like to think that JUnit is where I'd have arrived after I'd "thrown one away" (Brooks).

This article is based on an overview that I wrote to help introduce developers on a pilot project to unit testing. It covers the strategic reasons for using unit tests and the tactics of writing a unit test harness using the freely available JUnit framework to minimise the work involved. It does not cover all the options for using JUnit. More details and JUnit itself are available from the JUnit website at http://www.junit.org/.

What is a unit test for?

A unit test exercises "a unit" of production code in isolation from the full system and checks that the results are as expected. The size of "a unit" depends on what constitutes a coherent set of functionality; in practice it varies between a single class and a package. The purpose is to identify bugs in the code being tested prior to integrating that code into the rest of the system. JUnit supports writing unit tests in Java for Java code.

The reason for identifying bugs before integration is that it is far easier (i.e. quicker and cheaper) to identify and correct problems (and demonstrate that the solution works) in isolation.

This provides benefits both prior to the initial integration of a piece of code and before integrating subsequent changes. If one needs to change existing code, the fact that the resulting code still passes the unit test makes one more confident that the change doesn't break anything. And the better the unit test, the more confident one can be - ideally the test should be updated to include the new functionality prior to making the change (of course, it then fails until the change has been completed).

Who writes a unit test and when?

There are several schools of thought on this, all with merit. The approach most likely to identify problems is for the unit test and the production code to be developed independently against the specification. This is the most likely to pick up misinterpretation of, or ambiguities in, the specification. However, it is also the most costly - two developers need to be involved, and there are intimate dependencies between their work (even minor changes to a method interface are disruptive).

An approach that has proved more popular recently - and is promoted as part of "Extreme Programming" [XP] - is to develop the interface, the test harness, and the implementation, in roughly that order (although some parts of the implementation may be completed before unrelated parts of the test harness). Actually, since XP also promotes "pair programming", this also involves two developers (although, as they work together, co-ordination isn't an issue). Regardless of the adoption of other XP practices, this unit-testing regime is valid.

What goes into a test harness?

Regardless of who writes the unit test or when, it should be focused on exercising the code most at risk of being in error. (The number of errors likely to be detected by exercising typical "setName()/getName()" methods doesn't usually justify writing a test for them).

If the design documentation contains usage scenarios for the objects being tested then these are a good source of tests. However, scenarios are often not written out explicitly where they are readily apparent from other aspects of the design - if this is the case then appropriate tests should be equally apparent.

Another good source of test cases is problems found in production code after integration: the usage that manifests the problem is often worth encapsulating into a test case.
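
To illustrate, using the JUnit style introduced in the example section below (NameFormatter, its format() method, and the problem it guards against are all invented for this sketch), such a regression test might look like this:

import junit.framework.TestCase;

// A hypothetical regression test - NameFormatter and the defect it
// guards against are invented purely for illustration.
public class NameFormatterTest extends TestCase {
    public NameFormatterTest(String name) {
        super(name);
    }

    // Captures a problem reported after integration: formatting an
    // empty name threw an exception instead of returning "".
    public void testFormattingAnEmptyName() {
        NameFormatter formatter = new NameFormatter();
        assertEquals("", formatter.format(""));
    }
}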

What is JUnit for?

The preceding discussion shows why we need test harnesses, but why use JUnit?

A piece of test code cannot be run in isolation; it needs to be part of a runtime environment. Also, it is desirable to automate the running of unit tests - for example, periodically running all the test harnesses in the system to prove that nothing is broken. For this, unit tests need to meet certain criteria: a successful test shouldn't need manual checking, and a failed test should deliver adequate documentation for diagnosis.

The running of tests, the recording of results, and the reporting of errors are common to all tests, and JUnit provides precisely this functionality. All that the test developer needs to write is the test code itself - and (despite some advertising that I've seen) there is no avoiding that!

An Example

The simplest possible JUnit test case looks something like this:

import junit.framework.TestCase;

public class Test extends TestCase {
    public Test(String name) {
        super(name);
    }

    public void testSimpleTest() {
        // Test Code goes here
    }
}

OK, I lied - not only is it necessary to write the test code, it is also necessary to write a simple constructor. But the above is enough to allow us to compile and run the test. The command lines for compiling and running the test are similar to the following:

javac -classpath ".;../junit/junit.jar" Test.java
java -classpath ".;../junit/junit.jar" junit.textui.TestRunner Test

The JUnit TestRunner returns a report similar to the following:

.
Time: 0

OK (1 tests)

The test class can easily be extended by adding further tests, for example:

    public void testHelloWorld() {
        String hello = "Hello";
        String world = "world";

        assertEquals("Hello world", hello + world);
    }

This test code makes use of assertEquals - one of a number of assertion methods provided by the JUnit class Assert (from which TestCase is derived, which is why they can be called without qualification). There are overloaded variants of assertEquals to deal with various types of parameter, including the primitive types, Strings, and arbitrary objects (and hence containers) - if the two parameters are equal then it simply returns, otherwise it throws an unchecked exception used within the framework to report the error. (A sketch showing a few of the other assertion methods follows the failure report below.)

If the tests are now run again (in the same way), the results show the failure of this new test as follows:

..F
Time: 0.01
There was 1 failure:
1) testHelloWorld(Test) "expected:<Hello world> but
was:<Helloworld>"

FAILURES!!!
Tests run: 2,  Failures: 1,  Errors: 0
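
Here is a rough sketch of how a few of the other assertion methods might be used (the List-based example, its values, and the method name are invented purely for illustration; the exact set of assertion methods available varies a little between JUnit versions):

    // Illustrative only - exercises a java.util.List to show some of
    // the other assertion methods inherited from Assert.
    public void testOtherAssertions() {
        java.util.List items = new java.util.ArrayList();

        assertTrue("a new list should be empty", items.isEmpty());
        assertEquals(0, items.size());

        items.add("first");
        assertNotNull("the element just added should be present", items.get(0));
        assertSame("get(0) should return the very object that was added",
                   "first", items.get(0));
    }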

Working with JUnit

In the working practices I introduced, it is the responsibility of each developer to ensure that JUnit test classes are submitted to support every piece of work. The piece of work is not complete until the test code is provided and unit testing has been performed satisfactorily. (Exceptions are made for some pieces of work, such as user interface components - these are tested visually.)
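
With a test class accompanying each piece of work, it is useful to be able to run them all together. A minimal sketch of one way to do this - grouping the test classes into a JUnit TestSuite (the test class names NameFormatterTest and OrderProcessorTest are hypothetical) - might look like this:

import junit.framework.TestSuite;

// A hypothetical suite collecting the project's test classes so that
// they can all be run with a single TestRunner invocation.
public class AllTests {
    // The return type is written in full to avoid confusion with the
    // Test class used in the example above.
    public static junit.framework.Test suite() {
        TestSuite suite = new TestSuite("All unit tests");

        // TestSuite(Class) picks up every public testXXX() method.
        suite.addTest(new TestSuite(NameFormatterTest.class));
        suite.addTest(new TestSuite(OrderProcessorTest.class));

        return suite;
    }
}

Such a suite can then be run with the same junit.textui.TestRunner command line as before, naming AllTests instead of Test.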

This working practice is, of course, a compromise between complete trust in developers and independently verifying all work done. It may not work in all environments or with all developers. However, the situation I was faced with is common (far too common) in our industry: that of developers under pressure to deliver and lacking adequate support from the organisation for maintaining a "professional standard" of code. We all know the results: slapdash code that barely compiles, long sessions debugging (or rewriting) code that was "finished" long ago...

Adopting JUnit has made the process of introducing test harnesses easier than in my previous experiences. Naturally there are other variables (management support is a big one). But I'm sure that the effectiveness with which the JUnit design separates the framework from the specific tests is a major factor. Another factor is its integration with "Ant" (http://jakarta.apache.org/) - however, Ant is the subject of another article.

Conclusions

So, apart from "I wish I'd written it first", what do I think of JUnit? It is easy to use, easy to introduce into a project, and does its job.

In the pilot project the use of unit tests has had a very visible effect on both the scheduling (tasks rarely overrun because of the need to debug "finished" code already in the project codebase) and morale (development tasks have clearly delimited boundaries).

References

(Brooks) Frederick P. Brooks, Jr., The Mythical Man-Month, Addison-Wesley.


Copyright 2001 Alan Griffiths