If I can test my app entirely with integration tests, why do any unit testing?

Someone at work asked why they should write any unit tests if the whole app can be integration tested. Basically, the question was:

With complete integration testing, do unit tests add enough value to justify the additional time and added complexity? In this case adding an IoC would not be trivial given the limited support in the Katana middleware I’m using. And mocking the data layer is always such a drag, imo.

I replied as follows:

Great question! To buy some time I will start by quibbling with a couple of your premises:

1. Unit tests add complexity
I apologize if this comes off as smug or sanctimonious, but I’ll write it anyway with the promise that I’m trying to be helpful and not jerky: if you’re following SOLID, it shouldn’t add any complexity to add unit tests later…you should just be adding a bunch of unit test fixtures to your solution. I almost exclusively practice “test-after development”, so I can say from experience that this is possible. I promise to anyone reading this that even if you skip the tests, your software quality and maintainability will be higher if you write as SOLID-ly as possible at all times (even for JavaScript!).
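To make that concrete, here’s a minimal sketch of what I mean by test-after. The names (`OrderService`, `IOrderRepository`, and the domain types) are invented for illustration, and I’m assuming NUnit; the point is that because the dependency is an abstraction, the fixture at the bottom can be bolted on later without touching the production code above it.

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical domain types, invented for illustration.
public class OrderLine { public decimal Price; public int Quantity; }
public class Order { public List<OrderLine> Lines = new List<OrderLine>(); }

// The dependency is an abstraction (the D in SOLID)...
public interface IOrderRepository
{
    Order GetById(int id);
}

// ...so the service never news up its data access directly.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public decimal GetOrderTotal(int orderId)
    {
        return _repository.GetById(orderId).Lines.Sum(l => l.Price * l.Quantity);
    }
}

// Test-after: this fixture is "added later" with a hand-rolled stub,
// and no production code had to change to accommodate it.
[TestFixture]
public class OrderServiceTests
{
    private class StubOrderRepository : IOrderRepository
    {
        public Order GetById(int id)
        {
            return new Order
            {
                Lines =
                {
                    new OrderLine { Price = 10m, Quantity = 2 },
                    new OrderLine { Price = 10m, Quantity = 1 }
                }
            };
        }
    }

    [Test]
    public void GetOrderTotal_SumsLinePrices()
    {
        var service = new OrderService(new StubOrderRepository());

        Assert.AreEqual(30m, service.GetOrderTotal(1));
    }
}
```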

2. Katana has limited support for IoC
If you’re hosting WebAPI on Katana it should be trivial to implement pure DI (hand-coded object graphs) or insert a DI container. Check this out.
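For instance, here’s a rough sketch of pure DI for Web API on Katana by replacing the controller activator. The `GreetingController` and its dependency are hypothetical, but the `IHttpControllerActivator` hook and the `Services.Replace` call are standard Web API:

```csharp
using System;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Dispatcher;
using Owin;

// Hypothetical dependency and controller, for illustration only.
public interface IGreetingProvider { string Greet(); }
public class GreetingProvider : IGreetingProvider
{
    public string Greet() { return "hello"; }
}

public class GreetingController : ApiController
{
    private readonly IGreetingProvider _greetings;
    public GreetingController(IGreetingProvider greetings) { _greetings = greetings; }

    public string Get() { return _greetings.Greet(); }
}

// Pure DI: hand-code the object graph per request instead of using a container.
public class PureDiControllerActivator : IHttpControllerActivator
{
    public IHttpController Create(
        HttpRequestMessage request,
        HttpControllerDescriptor controllerDescriptor,
        Type controllerType)
    {
        if (controllerType == typeof(GreetingController))
            return new GreetingController(new GreetingProvider());

        throw new ArgumentException("Unknown controller: " + controllerType.Name);
    }
}

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute("Default", "api/{controller}");

        // Swap in the hand-rolled activator.
        config.Services.Replace(
            typeof(IHttpControllerActivator), new PureDiControllerActivator());

        app.UseWebApi(config); // Microsoft.AspNet.WebApi.Owin
    }
}
```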

3. Mocking the data layer is a drag
If this is routinely painful, are you maybe unit testing parts that might lend themselves more naturally to integration tests? We add automated integration tests all the time for this purpose in [my department].
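To show the shape those take, here’s a minimal sketch assuming NUnit; `SqlOrderRepository`, `TestDatabase`, and `Order` are hypothetical stand-ins for your real data layer and test plumbing. Instead of mocking the data layer, you exercise it for real against a throwaway local database:

```csharp
using NUnit.Framework;

// Sketch only: SqlOrderRepository, TestDatabase, and Order are hypothetical
// stand-ins for your real data layer and test plumbing.
[TestFixture]
[Category("Integration")] // lets the fast unit-test run filter these out
public class SqlOrderRepositoryTests
{
    private SqlOrderRepository _repository;

    [SetUp]
    public void SetUp()
    {
        // Hypothetical helper: rebuild or truncate a disposable local
        // database so every test starts from a known state.
        TestDatabase.Reset();
        _repository = new SqlOrderRepository(TestDatabase.ConnectionString);
    }

    [Test]
    public void Save_ThenGetById_RoundTripsAnOrder()
    {
        var order = new Order();

        var id = _repository.Save(order);
        var loaded = _repository.GetById(id);

        Assert.IsNotNull(loaded);
    }
}
```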

That didn’t buy me much time, so I’m going to say the answer to your overall question is “it depends”. I don’t see any problems with what you’re doing *in its current state*, but I do have some longer-term concerns about an all-or-nothing, either-or strategy for unit/integration testing.

In either case (all unit tests or all integration tests), I strongly believe you should use appropriate code coverage tools and analyze your gaps to determine the best strategy to proceed.

In my experience, if you have only integration tests you will eventually find gaps on coverage reports, because chunks of code are either 1) completely inaccessible via the primary integration point (e.g. the UI or top-level API), or 2) reachable only through such complicated integration state transitions as to be impractical to write or run (i.e. costing large time multipliers just to reach a few more lines of code). That doesn’t mean you can’t or shouldn’t test that code, just that you might need more complicated tests than you originally thought, or that you might see an opportunity to add unit tests to a part of the code base not well served by your integration suite.

As your application grows, a comprehensive integration test suite will also typically take longer and longer to execute, both because you have to stand up infrastructure and plumbing to run it and because you end up adding more and more complicated state transitions and interactions as your features expand. This sometimes causes people to run integration tests less frequently than unit tests. That can be mitigated somewhat through technical means (CI, gated check-ins, etc.) or through process means (code reviews, manual coverage verification, etc.), so be sure to watch for less frequent use of the integration suite.

The opposite is mostly true for an all-unit-test suite: your test runs should be much shorter and easier for developers to run very frequently, but you’re going to see different gaps on your coverage reports, mostly at infrastructure interconnects such as between the UI and business logic, between business logic and persistence, and between service tiers.

There is definitely a sweet spot for both, but my opinion is that software quality favors more unit tests as applications grow, particularly if you don’t have a dedicated QA resource with a vested interest in keeping up integration tests for you. This is hardly uncontroversial; experienced people disagree about it all the time 😉 It has been my experience that large integration suites driven from the front end are fairly brittle. I’ve done front-end-driven integration tests with dedicated tools, with assistive frameworks, and through hand-coding, for web, WinForms, and WPF apps. The test churn can be overwhelming if you need to make a major front-end change at any point, and at times this has caused me to either abandon integration tests altogether or trim them back to the most basic automated smoke tests and add more unit tests.

Case in point: the gig I was at before my current one had me coding a multiplatform MS Word extension against a REST API written in Ruby. The API was being designed at the same time, so there was a lot of churn in general, but the Jenkins job for the Ruby app would frequently have 90% of the tests break at once, and it sometimes took a few hours just to get them all working again. It also took the integration suite more than 10 minutes to run, which was why the breaks were only seen on CI; it was way too long for developers to exercise the suite frequently on their own machines, so they just waited until Jenkins told them they broke something and then reacted. Phooey.

The primary reason unit tests don’t suffer that much churn is that they are oriented around individual implementations, whereas an integration surface like a front-end UI or API tends to be a relatively monolithic structure: when it changes, it tends to fundamentally alter how the whole application is exercised. Refactoring a couple of interfaces will result in a limited number of new coverage gaps or unit test breaks, but shouldn’t break anything unrelated to those interfaces. A small front-end change can end up breaking every single integration test.

So because of performance and brittleness concerns, I tend to favor unit tests for overall code coverage, and integration tests only for specific application hotspots, like exercising the data access layer or covering areas with a history of regressions that are hard or impossible to unit test.
