Workflow of test-driven development to follow SOLID principles

I’m a bit confused with TDD+SOLID.

1) I’ve watched the MVA course about test-driven development:
https://mva.microsoft.com/en-US/training-courses/testdriven-development-16458?l=gLxGoEwXC_2306218965

2) I have read various blogs, posts, etc. stating that TDD accompanies, if not enforces, the SOLID principles.

OK, let’s say I write a failing test, then make it work, and then refactor (Red-Green-Refactor). According to the SOLID principles, code should depend on abstractions, not on concretions, but this “Make it work” (or “Green”) step is about writing the simplest code to make the test pass. As far as I’ve seen, all those tests are written against concrete types, not abstractions (maybe that’s just for the sake of simplicity, for learning purposes?).

So how should I write my unit tests – should I first write code against a concrete type and afterwards (in the refactoring or “make it right” phase?) make the code SOLID (extract an interface, etc.)? Or is there some more “SOLID” approach to TDD?

2

should I first write code against a concrete type and afterwards (in the refactoring or “make it right” phase?) make the code SOLID

Writing test code against a concrete type is fine, and it is not a violation of the SOLID principles (or, to be more specific, of the “D” in SOLID). That is because the purpose of a typical test case is to test exactly that one concrete type, no more, no less. There is no benefit in having the “subject under test” potentially mocked out from the test code. There might be cases where you have different components which all implement the same interface, and you want a “reusable test” which can be applied to all these implementations, but in my experience those situations are rare and should not lead you to the conclusion that unit tests must exclusively call components via an explicit interface.

According to the SOLID principles, code should depend on abstractions

On abstractions, yes – but that does not mean all code needs to depend on an explicit interface in the sense of the interface keyword in Java or C#.

For a typical unit test, depending on abstractions simply means relying on the public members of a class. That is also called the “interface” of the class, but not in the explicit sense mentioned above. That way, one also follows the “abstractions should not depend on details” recommendation of the DIP (where “details” means implementation details such as private functions).
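To make that concrete, here is a minimal sketch (the class and test names are invented for illustration, and an xUnit-style test framework is assumed): the test instantiates a concrete class directly and depends only on its public members, which is exactly the kind of abstraction described above.

```csharp
using Xunit;

// Invented example class; its public members are its "interface" in the DIP sense.
public class PriceCalculator
{
    public decimal Total(decimal net, decimal taxRate) => net * (1 + taxRate);
}

public class PriceCalculatorTests
{
    [Fact]
    public void Total_adds_tax_to_the_net_price()
    {
        var calculator = new PriceCalculator();   // a concrete type, and that is fine

        var total = calculator.Total(100m, 0.2m); // only public members are used

        Assert.Equal(120m, total);
    }
}
```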

7

There is no friction between the “Dependency inversion principle” and the creation of a concrete object during TDD. Please note that the fact that your test code creates a concrete object doesn’t violate the principle. In fact, your production code can still depend solely upon an interface. In any program, at some point, you need to construct concrete objects. A unit test is a very special “program” that performs a very narrow task; in order to execute those programs you need to create concrete objects.

As an aside, unit tests help you follow the DIP: in many situations you need to mock out some dependency during testing, and if your code provides an interface, the testing code can simply implement a concrete mock version of it. Basically, the DIP makes your life easier when writing tests, so you have an incentive to follow the principle.
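As a hedged illustration of that point (all names are invented, xUnit assumed): the production class below depends only on an interface, and the test supplies its own concrete fake of that interface, which is exactly the convenience the DIP buys you.

```csharp
using System;
using Xunit;

// The abstraction the production code depends on.
public interface IClock
{
    DateTime Now { get; }
}

// Production code: depends solely on the interface, never on a concrete clock.
public class Greeter
{
    private readonly IClock _clock;

    public Greeter(IClock clock) => _clock = clock;

    public string Greet() => _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
}

public class GreeterTests
{
    // A concrete mock/fake written only for the test.
    private class FixedClock : IClock
    {
        public DateTime Now => new DateTime(2024, 1, 1, 9, 0, 0);
    }

    [Fact]
    public void Greets_with_good_morning_before_noon()
    {
        // The test is the "special program" that constructs the concrete objects.
        var greeter = new Greeter(new FixedClock());

        Assert.Equal("Good morning", greeter.Greet());
    }
}
```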

So how should I write my unit tests – should I first write code against a concrete type and afterwards (in the refactoring or “make it right” phase?) make the code SOLID (extract an interface, etc.)? Or is there some more “SOLID” approach to TDD?

Please remember that “not compiling is red”, so you can start by writing your unit test code against a concrete implementation of an interface that doesn’t exist yet. Then you can create the interface in your production code and make the code compile (green), then update the test to assert something (red again), and so on around the loop.
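A minimal sketch of that loop (all names invented, xUnit assumed); the comments record the order in which the pieces appeared:

```csharp
using Xunit;

// Step 2: created only after the test below refused to compile ("not compiling is red").
public interface ITemperatureConverter
{
    double ToFahrenheit(double celsius);
}

// Step 4: the simplest implementation that makes the assertion pass.
public class CelsiusConverter : ITemperatureConverter
{
    public double ToFahrenheit(double celsius) => celsius * 9.0 / 5.0 + 32.0;
}

public class TemperatureConverterTests
{
    [Fact]
    public void Converts_zero_celsius_to_32_fahrenheit()
    {
        // Step 1: written first, against types that did not exist yet (red).
        ITemperatureConverter converter = new CelsiusConverter();

        // Step 3: added once the code compiled, turning the test red again.
        Assert.Equal(32.0, converter.ToFahrenheit(0.0));
    }
}
```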

Personally, I like to define the interfaces up front and only afterwards (30 seconds later) write the test. I follow the TDD red-green-refactor loop only for the implementation details. I guess it is a matter of taste, but the key point is: the smaller the gap between defining your interface and using it, the better.

3

There are a couple of possible, but closely inter-related ways to interpret your question:

Do I write my tests against a concrete implementation?

The answer is: No! You write the test before the implementation; there is no concrete implementation against which you could possibly write the test!

In that sense, you are writing your test against an abstract interface. Even more: writing the test is at the same time the act of designing that interface, and the test also serves as an executable example of using that interface. (That’s why I prefer the terminology of BDD over that of TDD: BDD and TDD are (kind of) the same thing, just with different terminology, but the BDD terminology makes these ideas easier to talk about, IMO. In this particular case, BDD uses the term “example” instead of the TDD term “test”.)

The test (Red) is an example of what you wish the interface was like, the implementation (Green) makes the interface work, and then you Refactor to make it look like the interface and the implementation belonged there all along.
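As a small, hedged illustration of that idea (the ShoppingCart example and all its members are invented): the test below is the kind of thing that gets written first, and in writing it one decides what the class is called, what its methods take, and what it exposes; the implementation merely honours those decisions afterwards.

```csharp
using System.Collections.Generic;
using System.Linq;
using Xunit;

// Written second: this class only exists to satisfy the example below,
// and the example (not this class) decided its public surface.
public class ShoppingCart
{
    private readonly List<decimal> _prices = new List<decimal>();

    public void Add(decimal price) => _prices.Add(price);

    public decimal Total => _prices.Sum();
}

public class ShoppingCartExamples
{
    [Fact]
    public void Total_is_the_sum_of_the_added_item_prices()
    {
        // Written first: this usage fixed the names ShoppingCart, Add and Total.
        var cart = new ShoppingCart();
        cart.Add(12.50m);
        cart.Add(1.50m);

        Assert.Equal(14.00m, cart.Total);
    }
}
```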

Do I extract interfaces during refactoring?

The answer is: Yes! Abstract interfaces should not be manufactured out of thin air; they should be extracted (through refactoring) from at least three different clients of that (future) interface. Pulling interfaces out of thin air carries the risk that the interface won’t be general enough.

For example, both Visual Studio’s and Eclipse’s plugin interfaces for version control systems were designed to be general interfaces for all kinds of VCSs. But when those interfaces were designed, they didn’t have multiple clients; they only had one each (CVS for Eclipse, VSS for Visual Studio). And it turned out that these interfaces didn’t support VCSs that were substantially different from those two. The Subversion plugin for Eclipse was relatively painless (after all, Subversion was designed by former CVS developers as a drop-in replacement), but Git and Mercurial turned out to be a nightmare. And Microsoft actually had to introduce a new VCS interface because the existing “general interface for all kinds of VCSs” didn’t even properly support Microsoft’s own successor to VSS (TFS). This could likely have been avoided if the interface had been extracted from several existing VCS plugins of different kinds (e.g. snapshot-based vs. commit-based, tree-based vs. file-based, checkout-based vs. filesystem-based, distributed vs. centralized, etc.) instead of being pulled out of thin air based on what future clients might look like.
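A toy sketch of what “extract, don’t invent” can look like in code (the repository classes and their members are made up, and only two implementations are shown for brevity): the concrete classes existed and were used first, and the interface was then extracted from the members they genuinely share, rather than from guesses about systems that do not exist yet.

```csharp
// Extracted afterwards, from the overlap of the concrete classes below.
public interface IRepository
{
    void Commit(string message);
    string CurrentRevision();
}

public class GitRepository : IRepository
{
    public void Commit(string message) { /* e.g. shell out to "git commit" */ }
    public string CurrentRevision() { return "a1b2c3d"; }

    // Git-specific, deliberately NOT pulled up into IRepository.
    public void Push() { /* e.g. shell out to "git push" */ }
}

public class SubversionRepository : IRepository
{
    public void Commit(string message) { /* e.g. shell out to "svn commit" */ }
    public string CurrentRevision() { return "r4711"; }
}
```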

Do I think about design before writing the test?

The answer is: No!

And I do realize that this is a controversial answer. But, you asked within the context of TDD, and within the context of TDD, you don’t design before you test. If you do, you’re not doing TDD. Period. There’s nothing wrong with not doing TDD, but you weren’t asking about not-TDD, you were asking about TDD.

In TDD, the tests drive the development and the design. It’s not actually about writing the tests first. It’s about the tests being the driver. This requires the tests to be written first (otherwise they can’t drive the process), but the important thing is driving the process; being written first is just a prerequisite for that.

If you write code first, even if it’s just an interface, even if it’s just in your head, you are not letting the tests drive you. In fact, thinking about stuff first is even worse than writing stuff first: if you write stuff first, you can at least write a test afterwards to see if you were right, and you can play around with the system to see how it behaves. How can you write tests for something that is only in your head?

This approach is sometimes called “pseudo-TDD”:

  1. Imagine what the code should be in your head
  2. Write a test that forces you to write the code in your head
  3. Copy the code from your head into your IDE
  4. Refactor until the design matches the design in your head

Note how the test had basically nothing to do with the entire process? You could have just as well left it out.

Letting go of that habit is hard. Really, really hard. It seems to be especially hard for experienced programmers.

Keith Braithwaite has created an exercise he calls TDD As If You Meant It. It consists of a set of rules (based on Uncle Bob Martin’s Three Rules of TDD, but much stricter) that you must strictly follow and that are designed to steer you towards applying TDD more rigorously. It works best with pair programming (so that your pair can make sure you are not breaking the rules) and an instructor.

The rules are:

  1. Write exactly one new test, the smallest test you can that seems to point in the direction of a solution
  2. See it fail; compilation failures count as failures
  3. Make the test from (1) pass by writing the least implementation code you can in the test method.
  4. Refactor to remove duplication, and otherwise as required to improve the design. Be strict about using these moves:
    1. you want a new method—wait until refactoring time, then … create new (non-test) methods by doing one of these, and in no other way:
      • preferred: do Extract Method on implementation code created as per (3) to create a new method in the test class, or
      • if you must: move implementation code as per (3) into an existing implementation method
    2. you want a new class—wait until refactoring time, then … create non-test classes to provide a destination for a Move Method and for no other reason
    3. populate implementation classes with methods by doing Move Method, and no other way

These rules are meant for practicing TDD as an exercise; they are not meant for actually doing TDD in production (although nothing stops you from trying it out). They can feel frustrating, because it will sometimes seem as if you are making thousands of teeny tiny steps without any real progress. (A small sketch of rules 3 and 4 in action follows below.)
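To give a feel for rules 3 and 4, here is a made-up FizzBuzz-style sketch (xUnit assumed): the first test carries its “implementation” inline, and only during refactoring is that code extracted into a method, which a later Move Method would relocate into a production class once such a class is justified.

```csharp
using Xunit;

public class FizzBuzzTests
{
    [Fact]
    public void Three_becomes_Fizz_first_pass()
    {
        // Rule 3: the implementation code lives right here, inside the test method.
        int number = 3;
        string result = (number % 3 == 0) ? "Fizz" : number.ToString();

        Assert.Equal("Fizz", result);
    }

    [Fact]
    public void Three_becomes_Fizz_after_refactoring()
    {
        // Rule 4.1: the same logic after Extract Method (still inside the test class).
        Assert.Equal("Fizz", Convert(3));
    }

    // Created by Extract Method; a later Move Method (rules 4.2/4.3) would relocate
    // it into a non-test FizzBuzz class, and only for that reason.
    private static string Convert(int number)
        => (number % 3 == 0) ? "Fizz" : number.ToString();
}
```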

From your question, you are obviously familiar with the Red-Green-Refactor Cycle:

  • Red: Write a failing test
  • Green: Write the simplest possible code to make the test pass
  • Refactor to re-integrate what you have learned over the course of the development into the code, so that it looks like you had known everything already from the very beginning

That’s a good and simple mnemonic, but I prefer to use a slightly more detailed version, where Red, Green, and Refactor are themselves cycles nested into the outer cycle:

Red:

  • Write the simplest test that could possibly fail
  • Run the test
  • Watch it fail
  • Verify that it fails for the right reason
  • If it fails for the wrong reason, make the simplest possible modification to make the reason right
  • REPEAT

Green:

  • Write the simplest code that could possibly change the error message (i.e. don’t try to make the entire test pass at once, just try to get rid of the immediate error)
  • Run the test
  • If it still fails, verify that you changed the error message in the right direction, then make the simplest possible change that could possibly change the error message
  • REPEAT until Green

And of course Refactoring is generally thought of as a cycle already. A short sketch of these fine-grained Red and Green steps, for one small example, follows below.
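Here is a hedged walk-through of those nested Red and Green steps for one invented test (xUnit assumed); the comments record how the failure message changed on each run until the test finally passed.

```csharp
using Xunit;

public class StackTests
{
    [Fact]
    public void Pop_returns_the_last_pushed_value()
    {
        var stack = new SimpleStack();
        stack.Push(42);

        Assert.Equal(42, stack.Pop());
    }
}

public class SimpleStack
{
    private int _top;

    // Run 1 (Red): SimpleStack did not exist -> compiler error ("not compiling is red"),
    //   but that is the wrong reason for an assertion-driven test, so: create the class.
    // Run 2 (Red): Push/Pop did not exist -> still a compiler error. Add the members.
    // Run 3 (Red, right reason): Pop returned a default 0 -> "Expected: 42 / Actual: 0".
    //   The simplest change that alters that message is to remember the pushed value.
    // Run 4 (Green): the code below makes the test pass; now refactor if needed.
    public void Push(int value) => _top = value;

    public int Pop() => _top;
}
```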

In addition to that, you can actually nest two TDD cycles inside of each other: an “outer” cycle of Acceptance Tests corresponding to User Stories and an “inner” cycle of Unit Tests (what is normally thought of as the “TDD cycle”).

With these two ideas combined, the general TDD cycle looks like this (it’s actually 4 cycles, nested 3 levels deep; a small sketch of the acceptance-test and unit-test levels follows after the list):

  1. Pick a user story (how you do this is outside the scope of TDD, you can use Scrum or XP, for example, for figuring out which story to pick next).
  2. Write the simplest acceptance test that could possibly fail for the acceptance criteria of that user story.
  3. Run your acceptance tests.
  4. Watch the acceptance test (and only that test!) you just wrote fail.
  5. Verify that it fails for the right reasons.
  6. As long as the test fails, repeat:

    1. Pick an independent unit of behavior
    2. Write the simplest unit test that could possibly fail for that unit of behavior
    3. Run your unit tests.
    4. Watch the unit test (and only that test!) you just wrote fail.
    5. Verify that it fails for the right reasons.
    6. As long as the test fails, repeat:

      1. Write the simplest code that could possibly change the error message
    7. As soon as the test passes, as long as there is something to improve, repeat:

      1. Refactor Mercilessly
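
As a rough illustration of the two test levels (the user story, the service, and all names here are invented; xUnit assumed): the acceptance test is phrased in terms of the story and stays red while the inner unit-test cycles fill in the behaviour it needs.

```csharp
using System.Collections.Generic;
using Xunit;

// Inner-cycle subject: one independent unit of behaviour needed by the story.
public class EmailValidator
{
    public bool IsValid(string email) =>
        !string.IsNullOrWhiteSpace(email) && email.Contains("@");
}

// Story-level subject, built on top of the unit above.
public class RegistrationService
{
    private readonly EmailValidator _validator = new EmailValidator();
    private readonly List<string> _registered = new List<string>();

    public bool Register(string email)
    {
        if (!_validator.IsValid(email)) return false;
        _registered.Add(email);
        return true;
    }

    public bool IsRegistered(string email) => _registered.Contains(email);
}

// Inner cycle: a unit test for the unit of behaviour.
public class EmailValidatorTests
{
    [Fact]
    public void An_address_without_an_at_sign_is_rejected()
        => Assert.False(new EmailValidator().IsValid("invalid-address"));
}

// Outer cycle: an acceptance test for the made-up story
// "a visitor can register with a valid email address".
public class RegistrationAcceptanceTests
{
    [Fact]
    public void A_visitor_who_registers_with_a_valid_address_is_registered()
    {
        var service = new RegistrationService();

        Assert.True(service.Register("ada@example.com"));
        Assert.True(service.IsRegistered("ada@example.com"));
    }
}
```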

If you combine this very fine-grained cycle with the rules of TDD As If You Meant It, and play around with some small exercises (e.g. Tic-Tac-Toe, Rock-Paper-Scissors (then try extending it to Rock-Paper-Scissors-Spock-Lizard and see what happens), the Bowling Game Kata, Langton’s Ant, etc.), then you will a) get very frustrated at the extremely slow pace, b) probably get a headache, but c) slowly lose the habit of doing too much, or doing things upfront, when doing TDD. You will get a feel for the tiny steps that you normally gloss over when doing TDD. And when you then go back to your production work and take larger steps again, you will still have a sense of what the steps would have been if you had taken them.

If you watch the Langton’s Ant kata video, can you spot instances where Micah follows the more detailed steps and where he skips some? Can you imagine what it would look like if he followed TDD As If You Meant It instead? That’s also an interesting exercise to do. For example, there is one instance where he adheres to “change the message”, but mostly he makes the tests pass in one go.
