Testing Solid Code

Wednesday May 11th 2016 by Paul Kimmel

If you have written solid code, you have accomplished one half of the objective. Solid code is the protagonist, or hero, of our story, but testing is the more sedate sidekick that often saves the day after the hubris of the hero has almost certainly led to calamity.

If you write solid code but don't handle testing correctly, there is still a huge chance that you will spend too much money, miss critical defects, delay delivery, and ultimately dissatisfy your customers. Please read "Writing Solid Code" for more information on writing solid code.


There is an old joke that goes something like "What do you call a thousand lawyers on the bottom of the ocean? A good start!" The same can be said for project managers.

Of the projects I have worked on in 28 years, the failures were predominantly caused by the managers. The truth is, a lot of managers follow the "can't program," tester, QA, business analyst, and then project manager path. This means they were the weakest developers, and now they are the "boss." I have seen project managers hire too many people, hire the wrong people, pick the wrong priorities, create the wrong schedule, follow goofy, faddish trends that are almost useless, and remain completely ignorant of the facts, strategies, and progress that will yield success. I have seen project managers completely ignore the will and needs of the customer.

The idea of project manager as "technical" leader and boss is outdated, outmoded, and almost completely useless in software development. PM as cheerleader, resource getter (as in "buy me a faster computer"), buyer of snacks, and organizer of meetings is a better role for project managers. The reason is very simple: even if you were a techie before becoming a project manager, technical skills are perishable. A CSS superstar will have forgotten most of what they knew six months later, and the rest will have changed anyway.

Sadly, managers make the rules, even the wrong ones. So, although you'll be provided with extraordinary strategies for testing solid code, if your pointy-headed manager overrules following these strategies, the outcome will most likely be dodgy.

In that vein, let's look at the degree and timing of unit testing, the viability (or not) of Katas for TDD, trace bias, and how much QA you really should need.

To Kata or Not to Kata

I prefer to start with the hate-mail generators first. Being considerate, let's dispense with something you may have strong feelings about—Katas—so you can bail early if you feel like sending hate mail.

Katas, literally "forms," refer to patterns of choreographed movements. The idea is that you perform the movements over and over until they become myelinated (think of building high-speed insulation for neural pathways, in other words, muscle memory) and can be performed much more quickly, more precisely, and with substantially less painstaking forethought. Simple, right?

Unfortunately, strictly performing katas won't keep you from getting your ass kicked in a street fight and they won't help you write solid code. The reason is simple in both instances. In the first instance, someone with a gun, knife, pipe, or a really poor attitude vis-à-vis the sanctity of life isn't going to care about your choreography, and Test Driven Development katas start with the idea that you write tests to fail first. Ennnnnh!

Even Tae Kwon Do doesn't have a kata that says "start by losing." Skip the first step, writing code to fail first; it is a waste of time. Failing first just means you couldn't come up with a reasonably good way to succeed first. Secondly, well-practiced code katas are show-offy and good for demos, and that's all.

Skip Katas.

What about TDD?

Well, TDD says write a failing test first and then tweak some code inch by inch until the test passes. Magically, you should have just the right amount of code to cover the solution without waste. This is wrong, too. You are in the business of solving the business problems, and tests don't do that. You're also going to create waste, which is why Refactoring exists. Go ahead and be a little wasteful.

The correct way to test is to take a reasonably good stab at the solution and then write unit tests to coverage. Skip all of the intentional failing. What is important is that you write code that works and you write tests that cover all of the lines of code. Katas are for circus acts.
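A minimal sketch of that order of operations, in Python for brevity (the function and tests here are hypothetical illustrations, not from the article): write the solution first, then add tests until every line, including the guard clauses, has been exercised.

```python
import unittest

def parse_quantity(text):
    """Convert a user-entered quantity string to a non-negative int."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

class ParseQuantityTests(unittest.TestCase):
    # One test per path through the code: happy path, bad input, negative.
    def test_valid_input(self):
        self.assertEqual(parse_quantity("42"), 42)

    def test_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("forty-two")

    def test_negative_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("-1")
```

No failing-first ceremony: the function came first, and the three tests together cover every line of it. Run the suite with `python -m unittest`.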

Unit Testing and Trace Bias

Unit testing is the responsibility of the developer writing the code. You write the code and know how it is supposed to work, so naturally you'll write tests to "prove" that it does that. This, as my friend Chris says, is "Trace Bias."

Trace bias is acting on foreknowledge that says such and such code should yield result x. But, trace bias is why you have QA, right? Wrong. Trace bias is why you write unit tests to code coverage.

Code coverage is where you write tests that cover all of the lines of code, including the error handling. Code coverage also includes anti-bias tests and additional tests that are written when bugs are inevitably discovered. QA is not your testing team.
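As a sketch of what that means in practice (a hypothetical function, not from the article): the biased tests exercise the intended behavior, coverage forces the error-handling lines to run, and anti-bias tests feed in inputs the author never planned for.

```python
def safe_divide(a, b):
    """Divide a by b, returning None instead of raising on bad input."""
    try:
        return a / b
    except (ZeroDivisionError, TypeError):
        return None

# Biased tests cover what the author expects the code to do...
assert safe_divide(10, 2) == 5
# ...coverage demands that the error-handling lines run too...
assert safe_divide(10, 0) is None
# ...and anti-bias tests feed inputs the author never designed for.
assert safe_divide("10", 2) is None
```

When a bug slips through anyway, the fix comes with one more assertion in this file, so the bug can never silently return.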


I like mocks. Mocks are cool. Mocks are fun. Mocks are nifty. Mocks are generally wrapper classes that permit you to answer an interface question without calling the actual dependent class. For example, if an interface defines a function Foo() that returns an integer, the Mock wrapper permits you to supply that answer. Generally, the state or answers a Mock object returns are supplied via lambda expressions (inline functions). Unfortunately, mocks are fake and, in fact, sometimes they are called fakes.
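The article's examples lean toward .NET, but the same idea can be sketched with Python's `unittest.mock`; `RateService` and `convert` below are hypothetical stand-ins for the dependent class and the code under test.

```python
from unittest import mock

class RateService:
    """The real dependency; imagine this makes a network call."""
    def get_rate(self, currency):
        raise NotImplementedError("remote pricing API elided")

def convert(amount, currency, service):
    """Code under test: depends only on the service's interface."""
    return amount * service.get_rate(currency)

# The mock answers the interface question without calling the real class.
fake = mock.Mock(spec=RateService)
# A lambda supplies the canned answer, as described above.
fake.get_rate.side_effect = lambda currency: 2.0 if currency == "EUR" else 1.0

assert convert(10, "EUR", fake) == 20.0
fake.get_rate.assert_called_once_with("EUR")
```

Note how fake this is: the test proves `convert` multiplies correctly, but says nothing about whether the real service, wire format, or error handling work end to end.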

Suppose you write a mock to test an MVC controller. Is it better to test the controller by itself or better to run an integration test on the controller and have the code go all the way through? The latter is a better test and there is nothing fake about it.

I have heard the argument that mocks let you test discrete layers, which facilitates distributed teams working on discrete blah! Blah! Blah! Blah! These team compositions hardly work. In fact, I have never seen a distributed team working on horizontal layers work. I have seen such teams, working like this, produce crappy code with horrible cohesion, but I haven't seen them produce real results.

As a general (almost all the time) practice, code is decomposed into horizontal layers, but I prefer developers to work on a vertical slice that covers an entire user story. This means the unit tests exercise each of the layers independently, and the tests of the top-most layers are actually integration tests.

Mocks are okay, but mocks aren't a silver bullet. Stacking tests on top of each horizontal layer (code layer, test layer, code layer, test layer) effectively yields code coverage for both individual components and integrated components. This will yield solid results.

Design Patterns and Refactoring

Refactoring is relevant to testing because you need tests to refactor. Refactoring is improving code without changing its behavior, and unit tests provide assurance that the code behaves the same, even after it's been changed. You also need refactoring to improve your metrics (cyclomatic complexity, maintainability, and lines of code) because better metrics generally mean fewer tests. Fewer tests mean you get to coverage much more quickly.
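That interplay can be sketched in a few lines (the shipping-cost function is a hypothetical example): the tests are written once, the code is refactored from a branchy version to a table-driven one with better metrics, and the unchanged tests passing against both versions is the assurance refactoring needs.

```python
# Before: a branchy implementation with higher cyclomatic complexity.
def shipping_cost_v1(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.0
    elif weight_kg <= 5:
        return 9.0
    else:
        return 14.0

# After: a table-driven refactoring; same behavior, better metrics.
def shipping_cost_v2(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    for limit, cost in ((1, 5.0), (5, 9.0)):
        if weight_kg <= limit:
            return cost
    return 14.0

# The same unit tests pass against both versions; the tests never changed.
for shipping_cost in (shipping_cost_v1, shipping_cost_v2):
    assert shipping_cost(0.5) == 5.0
    assert shipping_cost(3) == 9.0
    assert shipping_cost(10) == 14.0
```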

Design patterns are important because they are meaty chunks that are known to solve well-known classes of problems. For example, you can use Visitor with Verdict (a behavioral design pattern) for validation, or a fluent approach if you have access to a good fluent validation library.

Refactoring and design patterns are essential to writing and testing solid code because there is interplay among all of these techniques. Tests support refactoring. Refactoring supports improving code, which in turn reduces the number of tests. Design patterns reduce hackery, and so it goes.

The implicit goal in all of this is to write perfect code. Perfect code performs all of its behaviors with a size approaching zero. The closer you get to the zero-perfection limit, the closer you get to solid code.

Figure 1: The equation for writing perfect code

(Okay the equation is made up, but zero lines of code is pretty easy to debug.)


Managers generally allocate way too much time and money to testers and QA. The testing team's job is not to find your bugs. Your job is to deliver them bug-free code. You won't, but if you do your job right they will be extremely hard pressed to break your code.

QA's job is not to find your bugs. Their job is fit and finish. QA is the same job the kid at the car dealership has: run the car through the wash, check for crumbs, spritz on some Armor All. This takes an hour.

If you have a 12-month project and three months of that is QA, your programmers are being lazy and you're wasting money. Repeat after me: QA does not test.

QA may run your unit tests. QA may run your integration tests. QA may run your Fiddler scripts, but QA is not responsible for bug free code. Programmers are.

Finally, never, ever run manual tests. (The exception is the first time you run a test with manual inputs and then never ever run it manually again.) Everything is automated and, by the time it gets to QA, the results are pro forma: your code is rock solid.


The best managers won't play boss. They will help identify professional resources and then stand back and let them do their jobs. The worst managers will say things like "stop all forward development and train the interns." The best managers will facilitate. The worst managers will force-feed you technical decisions that are disastrous.

Unit testing to code coverage, including actual integration tests, against refactored code built on design patterns and great metrics is how you get great code. Even though this won't guarantee great apps, it will guarantee apps that do what you told them to do.

Now you know how to test and how much testing to do. Go find people to work with who respect that.

About the Author

Paul Kimmel is an architect and code monkey who works on telekinesis and zombie apocalypse survival strategies in his spare time.

Copyright 2017 © QuinStreet Inc. All Rights Reserved