When TDD Is Not a Good Fit

I like to use Test-Driven Development (TDD) when coding. However, in some circumstances, TDD is more of a hindrance than a help. This happens when it is not clear how to solve the problem. Then it is better to write a solution first and evaluate whether it solves the problem. Writing tests only makes sense once the solution is viable.

Last week, I came across two examples where I developed new functionality without using TDD. In both cases, it felt like the natural thing to do, so I thought a bit about why that was.

Avoiding Premature Tests

There were two cases where I didn’t start with tests. In both cases, had I started by writing tests, I would have wasted effort on writing tests for solutions that turned out to be wrong.

Example 1. Our system takes in values from an external source. Typically there is a one-to-one mapping between an external value and the corresponding internal value. However, now there was a case where we needed to take in two different external values for a given internal value. The decision of which of the two to use would be taken later, and could be changed back and forth.

Even though it sounds straightforward, the code that takes in the values and does the mapping is quite complex. Therefore it is hard to know all the implications of a change. So I took a stab at implementing it the way I thought would be best. But since I wasn’t sure whether my implementation would cause problems, I did not write any tests. Instead, I concentrated on getting an end-to-end solution working, so I could validate that my solution did in fact work. As it turned out, my solution had side effects that caused some other existing mappings to fail.

So I had to rework my solution to be less general. I opted to identify the specific case I wanted to change, and make sure the new mechanism only applied to that case. With that change, it worked fine. At this point, I added tests for the changes I had made. If I had started by writing tests for the first solution, before I had verified that it would work, I would have wasted my time on tests for the wrong functionality.
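
To illustrate the shape of the final solution, here is a minimal sketch (the names and structure are invented for this post; the real mapping code is considerably more involved): the ordinary one-to-one mapping stays as it is, and the new dual-source mechanism applies to one identified case only.

    # Hypothetical sketch - the real mapping code is much more complex.
    EXTERNAL_TO_INTERNAL = {
        "EXT_A": "INT_A",
        "EXT_B": "INT_B",
        # ... all other mappings stay strictly one-to-one
    }

    # Only this internal value accepts two external sources. Which source is
    # active can be switched back and forth later.
    DUAL_SOURCE = {"INT_C": ("EXT_C1", "EXT_C2")}

    def map_external(external_value, use_secondary_source=False):
        """Return the internal value for an external value, or None if it is ignored."""
        for internal, (primary, secondary) in DUAL_SOURCE.items():
            if external_value in (primary, secondary):
                active = secondary if use_secondary_source else primary
                return internal if external_value == active else None
        return EXTERNAL_TO_INTERNAL.get(external_value)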

Example 2. As we were moving some interactions with an external system to be asynchronous, we realized that we must stop some data modifications while waiting for the response. To see how to fit this in, I added a simple sleep in our code to force a longer wait for the response. When attempting a modification in this state, I realized that preventing those modifications was more complicated than I had thought. I already had an idea of what I needed to do, but trying it out in the system showed me the problems with my naive solution.

Here too, I did not start by writing tests. Instead, I started by trying out a solution. At the time I didn’t reflect on why I did it that way. But thinking more about it now, I believe it was because I was not confident that I had the right solution. So it made sense to first try out a solution to see if it worked or not.
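
Here is a simplified, hypothetical sketch of what I was exploring (not the real code, and in the real system the modification attempts arrive on separate requests/threads): a guard that rejects data modifications while a response from the external system is outstanding, with a temporary sleep standing in for the slow external system.

    import time

    class ExternalSync:
        """Toy model of the 'waiting for an asynchronous response' state."""

        def __init__(self):
            self.awaiting_response = False
            self.data = {}

        def send_request(self):
            self.awaiting_response = True
            # Temporary sleep to simulate a slow external system, so the waiting
            # state lasts long enough to attempt a modification by hand.
            time.sleep(10)

        def handle_response(self):
            self.awaiting_response = False

        def modify_data(self, key, value):
            if self.awaiting_response:
                # This is the part that turned out to be more complicated than
                # expected: rejecting (or deferring) modifications while a
                # response is still outstanding.
                raise RuntimeError("modification blocked while waiting for response")
            self.data[key] = value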

In both cases, once I had confidence in my solution, I added unit tests to cover the new code (where feasible). Even though the tests were added later, the TDD mindset is still helpful. I always consider how the code I write can be tested. For example, using a stand-alone function in Python means that I will be able to test it with minimal surrounding context.
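
As a made-up illustration of what I mean by a stand-alone function: if the logic that picks between the two external values is a pure function, the test needs no setup beyond calling it.

    def select_external_value(primary, secondary, use_secondary):
        """Pick which of two external values to use - a pure, stand-alone function."""
        return secondary if use_secondary else primary

    def test_select_external_value():
        assert select_external_value("EXT_C1", "EXT_C2", use_secondary=False) == "EXT_C1"
        assert select_external_value("EXT_C1", "EXT_C2", use_secondary=True) == "EXT_C2"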

I Know What I Have To Do

In other cases, TDD works as intended – the tests drive the design. One example is when I needed a message handler that would collect all parts of an SMS message and assemble them in the right order. The message handler also needed to time out after a number of seconds if some parts of the message didn’t arrive. In this case, TDD worked really well. To be able to test the behavior when timing out, I separated out the time-dependent parts – more details are here.
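
The key idea, shown here as a simplified sketch rather than the original code, is to inject the time source, so a test can simulate a timeout by advancing a fake clock instead of sleeping.

    import time

    class SmsAssembler:
        """Collects numbered parts of a multi-part SMS and assembles them in order."""

        def __init__(self, total_parts, timeout_seconds, now=time.monotonic):
            self.total_parts = total_parts
            self.timeout_seconds = timeout_seconds
            self.now = now          # injected time source, easy to fake in a test
            self.started = now()
            self.parts = {}

        def add_part(self, index, text):
            self.parts[index] = text

        def complete_message(self):
            """Return the assembled message, None if still waiting, or raise on timeout."""
            if len(self.parts) == self.total_parts:
                return "".join(self.parts[i] for i in range(self.total_parts))
            if self.now() - self.started > self.timeout_seconds:
                raise TimeoutError("missing SMS parts")
            return None

    # In a test, the fake clock is advanced instead of actually waiting:
    def test_times_out_when_parts_are_missing():
        fake_time = [0.0]
        assembler = SmsAssembler(total_parts=2, timeout_seconds=30, now=lambda: fake_time[0])
        assembler.add_part(0, "Hello ")
        fake_time[0] = 31.0
        try:
            assembler.complete_message()
            assert False, "expected a timeout"
        except TimeoutError:
            pass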

The difference here is that it was clear to me what I needed to do. The collecting of the message parts would be a new piece of functionality that did not depend on how other parts worked. Sure, it needed to fit into the existing code base, but that didn’t affect how it would work. In the two examples above, it was not clear from the start how the solution would work, so more exploration was needed.

Conclusion

TDD is supposed to let the tests drive the solution. This works well in a lot of cases. However, if you are not sure a given solution will work, it is a waste of effort to write tests before you know it is the correct solution. This uncertainty often happens when you need to fit new functionality into complicated existing code. In theory you should be able to read the existing code and figure out how the new solution should fit in. However, in practice it might be better to try out a solution in the test system to verify that it works as intended. In any case, writing tests for code you are not sure solves the problem is wasted effort.

11 responses to “When TDD Is Not a Good Fit”

  1. “if you are not sure a given solution will work, it is a waste of effort to write tests before you know it is the correct solution.”

    To me this sounds like you’ve completely missed the point of TDD. When you begin to solve a task, the only thing you should be thinking is “What do I need to achieve? What are the inputs and outputs of this method / component?”

    By focusing your tests on what end functionality you want to achieve, and not on the specifics of what solution you want to implement, you should ALWAYS be able to write your tests first and build up the solution with TDD.

    Again going back to my quotation, it sounds like you are coupling your tests to your solution in some way – the tests you write should apply to any solution you could possibly write, as long as the solution takes in some inputs and gives the correct output, as expected by the tests.

    I encourage you to first start with the mentality “TDD is always viable” and try to pick out flaws in your process that are hindering it. For me, it was vague requirements that weren’t broken down into technical tasks, meaning I had to do it on the fly. Now that I’ve requested better preparation of tasks from my team, a ticket will almost spell out the tests I need to write – all the scenarios, the inputs and outputs. From there, I can easily write the tests first even if I can’t see what the solution will be.

    It’s hard to see why your examples led you to decide that TDD wasn’t a good fit without code examples and potentially pair programming with you, but I do believe where there’s a will, there’s a way. Apologies for the rant, I’m very passionate about TDD!

    • The problem is when “What do I need to achieve?” does not have a clear answer. If you are not clear on what you are building, TDD won’t help. In the examples I had, the way to understand what to build was to write some code (without tests) to explore the possible solutions.
      I think TDD is a great tool in a lot of situations. But even for tools you really like, you should be able to give examples of where it is not appropriate.

  2. To me it sounds that, in some way, you still did TDD. Your solution was driven not by unit tests but by integration tests (you needed something to work along with existing code, so first you checked that what you had in mind fit there).

  3. Hi Henrik,
    What you are describing sounds like a spike: code experiments that test a theory. The underlying assumption with spikes is that you will throw away the spike once the theory has been verified, and then you can apply TDD to build the real solution.
    http://www.extremeprogramming.org/rules/spike.html
    Cheers,
    Kevin

    • I agree! Typically I keep the code I have come up with, and add tests afterwards to flesh it out. What is your experience? Do you throw away the code and start over?

      • Been a while since I did any coding, but I can say that a spike is supposed to be quick and dirty – I mean really quick and dirty. One is probably going to ignore all good design practice in order to write the code that validates your theory as quickly and cheaply as possible. So one should not expect to reuse the code, and it also goes against TDD. Put another way, if you are thinking about keeping the code for the proper solution, then you are taking a risk that the extra work will have to be thrown away if the spike is unsuccessful. If the spikes and the final solution require you to build some sort of framework to run in anyway, then consider doing a zero-feature release for the framework first.

    • As for keeping the code from the spike – the only case I care about is to get an end-to-end solution to work (in order to validate the solution). This typically means ignoring error cases etc. These are the kinds of things I add afterwards (with tests). In my experience, it is not a big risk.

  4. Great article! I can add my own experiences to your reasoning.

    In certain complex cases the issue is not the implementation, but the actual requirements. Implementing tests against super vague requirements is pointless, and the only way to clarify the requirements is to prototype and go through the results with a product manager or a customer. This lets one have a very lightweight process before going deep into the details of testing and implementation.

  5. Pingback: Java Weekly, Issue 301 | Baeldung

  6. Pingback: A Collection of Software Testing Opinions for Python and Beyond – Python Marketer
