From Zero to Tests
Sierpinski pyramids - photo credit CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=647064
How to get started when your codebase has no automated tests.
I often encounter software projects that have absolutely no automated tests. This is rarely because the development team feels that authoring tests is a waste of their time. More often, the team would love to have automated tests; they’re just completely stuck and unsure of how to get started. If they stay stuck and never add any automated tests, things start to get difficult. Experimentation becomes risky because the team is nervous about breaking something and only finding out when an end-user complains. That nervousness makes it harder to embrace deploying the application more often, keeping dependencies up-to-date, cleaning up code that’s hard to understand, and a variety of other activities that automated tests make easier.
If you feel like your project or team is stuck when it comes to automated testing, then I’d like to provide some guidance and specific techniques for how to get started. I encourage you to try to follow along using your own project.
Oh, and a side note. There are many more ways to test your application than by writing automated tests. I find it a little annoying when authors just use the word “test” when they really mean “automated test”. However, I think it would get more than a little pedantic if I wrote out “automated test” everywhere. So in this article, unless I include another qualifier such as “manual” or “exploratory”, whenever you see the word “test”, please keep in mind that I’m talking about an automated test.
Start Small - Tiny is Better
The biggest challenges in your life are the hardest to get started on. Our instinct is to attack really big problems with equally big solutions. Adding automated tests to a software project is one of the biggest challenges that a team will face, and my experience has taught me that teams often tackle it by attempting to create the perfect testing framework that will solve all of their problems. That’s not what we’re going to do here. Instead, let’s start small and work up from there.
I’ve written in the past about Mike Cohn’s Test Pyramid, and we’re going to use the test pyramid concept to guide our work when creating a test suite from scratch for an existing application.
As a quick refresher, the Test Pyramid is a way of evaluating the different kinds of tests that you have covering your codebase. The top of the pyramid is where you place your acceptance tests, the ones that evaluate your application as a whole. The interior of the pyramid is where you place your integration tests, which evaluate the interactions between multiple parts of your application to ensure that they are all working together correctly. The bottom of the pyramid is where you place your unit tests, which focus on one small piece of your application (usually a function, or in some cases a class) in complete isolation from any others.
When starting from scratch, I find that teams tend to focus too much on either acceptance tests or unit tests; either the top of the pyramid or the bottom. The teams that focus at the top create a very large library of acceptance tests in an attempt to capture the precise behavior of the system that they are working on. The teams that focus on the bottom dive into areas of the codebase that they hope to eventually refactor, and they work towards making sure that every function in that part of the application has 100% code coverage (a measure of how much of a function is executed by a test).
I’m not an advocate of either approach. Instead, I think it’s important for teams to focus on maintaining a balanced pyramid while they are creating their test suite. The smallest possible balanced pyramid has only 1 unit test, so that’s where you get started: you write 1 unit test, and that’s the first block in your pyramid.
Next, you write an additional unit test. This creates the base of your pyramid.
At that point, if you continued writing only unit tests, you wouldn’t have a pyramid, just a flat slab. So before writing another unit test, you write an acceptance test.
Now the pyramid starts to take shape. Notice that we don’t have any integration tests, yet. Don’t worry, we’ll get to those, but before we do, we need to chat about what each of these unit tests should actually test.
The First Unit Test
Pick a function in your application, the smaller the better, but let’s assume you have a function that looks like this:
function odd(value) {
  if (value % 2 == 0) {
    return false;
  } else {
    return true;
  }
}
That’s the first function that we’re going to test. The first test that we’re going to write for this function is one that passes it a value that it’s likely to see when your application is working correctly.
Here’s what that might look like.
Note: I’m intentionally skipping over the details of what test framework to pick. The code below is using an imaginary testing framework that simply fails a test if the fail()
method is called. I’m not going into those details here because that’s enough content to fill up an entire article all by itself. Additionally, that article is going to read very differently depending on the language that your team is using. I’m attempting to keep this article language neutral. The snippets that you’ll see are written in JavaScript, but there’s no specific test framework that I’m demonstrating.
function testOdd() {
  // 1 is a value that odd() is likely to see when the application is working correctly.
  var expected = true;
  var actual = odd(1);
  if (expected != actual) {
    fail();
  }
}
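If it helps to picture what that imaginary framework might look like behind the scenes, here is one hand-rolled stand-in. The fail() helper and runTest() runner below are hypothetical names invented for this article, not part of any real library.
// A hypothetical, hand-rolled stand-in for a real testing framework.
function fail(message) {
  // Throwing is enough for the runner below to notice that the test did not pass.
  throw new Error(message || "test failed");
}

function runTest(testFn) {
  try {
    testFn();
    console.log("PASS: " + testFn.name);
  } catch (e) {
    console.log("FAIL: " + testFn.name + " - " + e.message);
  }
}

// Running the first test:
runTest(testOdd);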
Now we run that test and make sure that it passes. Easy, right? I hear what you’re objecting to. “That test is too trivial! It doesn’t test anything useful.” That’s a fair critique and, indeed, it’s not testing anything useful in your application itself, but you have tested something else about your application and its architecture. You’ve verified that you can select a testing framework, write code that uses that testing framework, and see the results. That series of steps is not trivial for teams that have never worked with automated tests before.
The Second Unit Test
The second unit test is going to focus on that same function as earlier, but instead, it’s going to pass it a value that it’s not likely to receive when your application is working properly. Such a test might look like this:
function testOddWithInvalidValue() {
  // Caution: in JavaScript, NaN never compares equal to anything (even NaN),
  // so a test that really expects NaN would use isNaN(actual) instead of !=.
  var expected = NaN;
  var actual = odd("one");
  if (expected != actual) {
    fail();
  }
}
Again, this is something of a contrived example because we’re working with an imaginary project instead of your team’s project, but imagine that your team expects the odd function to return NaN (not a number) when it tries to determine if the string “one” is odd.
When we run this, it should fail. That’s because the implementation isn’t written that way. It uses the modulus operator % to determine whether the value that was provided is evenly divisible by 2. The modulus operator doesn’t know how to work with the string “one”, so it produces NaN. NaN doesn’t equal 0, so our code returns true.
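If you want to see that behavior for yourself, the quick check below (assuming the odd() implementation shown earlier is in scope) walks through each step:
console.log("one" % 2);   // NaN - the string "one" can't be coerced to a number
console.log(NaN == 0);    // false - so odd() falls through to the else branch
console.log(odd("one"));  // true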
Is this a bad thing? It’s hard to say. We didn’t write this function. There might be other code in the system that relies on this strange behavior. So while it’s tempting to do so, we’d better not change the implementation code. Instead, let’s change our test.
function testOddWithInvalidValue() {
  // odd("two") returning true is surprising, but it matches the current implementation.
  var expected = true;
  var actual = odd("two");
  if (expected != actual) {
    fail();
  }
}
We’ve changed the expected value to true and, to make it clear that this is strange, we changed the input to “two”. Now the test should pass. Before we move on, let’s leave a note for ourselves to investigate this potential bug. Whatever issue tracking system you’re using, go create a task to dig into why odd("two") returns true.
So what are we testing here? We’re continuing to familiarize ourselves with our testing framework, and we’re deepening our understanding of one aspect of the application’s implementation. We’ve also likely found a bug.
We could keep going in this manner, potentially uncovering even more bugs or edge cases and increasing our understanding of the code that we’re working with, but now we’ve got two unit tests already. Before we add any more, we need to shift our attention to creating an acceptance test so that we can keep the pyramid balanced.
The First Acceptance Test
Now it’s time to build our first acceptance test, but again, we’re forced to ask ourselves what we should test. For the unit tests, we picked a very trivial, very easy chunk of code to test so that we could start building out the unit testing infrastructure. We’re going to do something very similar for the first acceptance test, but we’re going to do so from the point of view of an actual user of your application.
What’s the simplest possible thing you can do with your application?
Pause and think about that for a moment.
Chances are that the answer you’ve come up with is much more complicated than what I’m going to suggest as a starting point. Just run it, launch it, boot it, or whatever phrase your team uses to describe starting your application so that it can receive input or report that it didn’t find anything that it was looking for.
Again, a complaint I often hear is that this isn’t actually testing anything. That’s not precisely fair. It’s testing that your application doesn’t crash as soon as it starts. You’re right that it’s not evaluating any of the unique tasks that your application was written to perform, and yet by just making sure that your program runs, you’ve actually tested a lot of plumbing that has to be correct.
Another potential complaint is that simply running the app isn’t likely to exercise the code that we created our unit tests for above. That’s true, but that’s not the goal of this exercise. We want to force ourselves to first get comfortable with the tooling that we have to use to start writing acceptance tests.
The code below depends on an imaginary way to both start the app and determine if it’s correctly accepting requests. Again, the details of how to accomplish that are outside the scope of this particular article.
function testAppStart() {
  // startApp() and appIsAcceptingRequests() are the imaginary helpers described above.
  startApp();
  if (!appIsAcceptingRequests()) {
    fail();
  }
}
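If your application happens to be a web service, one way those imaginary helpers might be filled in is sketched below. This is purely an illustration, assuming Node.js and a hypothetical health-check endpoint at http://localhost:3000/health; since these helpers would realistically be asynchronous, the acceptance test would need to await them.
const { spawn } = require("child_process");
const http = require("http");

// Hypothetical helper: launch the application as a child process.
function startApp() {
  return spawn("node", ["server.js"], { stdio: "inherit" });
}

// Hypothetical helper: resolve to true if the app answers its health-check URL.
function appIsAcceptingRequests() {
  return new Promise((resolve) => {
    http.get("http://localhost:3000/health", (res) => resolve(res.statusCode === 200))
      .on("error", () => resolve(false));
  });
}

async function testAppStart() {
  startApp();
  // Give the app a moment to boot before probing it.
  await new Promise((resolve) => setTimeout(resolve, 2000));
  if (!(await appIsAcceptingRequests())) {
    fail();
  }
}
Whatever shape your application takes, the important part is the same: start it the way a user would and confirm that it comes up ready to do its job.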
Final Thoughts
When you’re building a test suite from scratch, pay attention to the shape of the test pyramid that’s being created. Keeping that pyramid balanced as you go will spare you the pitfalls of having to cope with an unbalanced test pyramid in the future. Also, the first tests that you write may seem trivial, but it’s important to just get started. Once you have, you’ll have created something that can be further refined.
Here are some further points for consideration. How should integration tests be introduced to the structure that’s been presented here? What are some strategies for adding more tests and still maintaining the balance that’s been achieved so far?