I think this is a great explanation of a lot of the obvious pitfalls with "basic" TDD, and why so many people end up putting in a lot of effort with TDD without getting much return.
I personally have kind of moved away from TDD over the years, because of some of these reasons: namely, that if the tests match the structure of the code too closely, changes to the organization of that code are incredibly painful because of the work to be done in fixing the tests. I think the author's solution is a good one, though it still doesn't really solve the problem around what you do if you realize you got something wrong and need to refactor things.
Over the years I personally have moved to writing some of the integration tests first, basically defining the API and the contracts that I feel like are the least likely to change, then breaking things down into the pieces that I think are necessary, but only really filling in unit tests once I'm pretty confident that the structure is basically correct and won't require major refactorings in the near future (and often only for those pieces whose behavior is complicated enough that the integration tests are unlikely to catch all the potential bugs).
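To make that concrete, here's a minimal sketch (Python, with a hypothetical DocumentStore standing in for whatever the real system is) of the kind of contract-level test I mean: it pins down the public API and the behavior least likely to change, and says nothing about how things are built underneath.

```python
import unittest

# Illustrative in-memory stand-in; in practice the store would be its own module
# backed by real storage. Only the public contract matters to the tests below.
class DocumentStore:
    def __init__(self):
        self._docs = {}

    def save(self, doc_id, body):
        self._docs[doc_id] = body

    def load(self, doc_id):
        return self._docs[doc_id]  # raises KeyError for unknown ids


class TestDocumentStoreContract(unittest.TestCase):
    """Contract-level tests: they pin down the public API and its observable
    behavior, not the internal structure, so refactorings underneath don't
    break them."""

    def test_saved_documents_can_be_loaded_back(self):
        store = DocumentStore()
        store.save("invoice-1", {"total": 42})
        self.assertEqual(store.load("invoice-1"), {"total": 42})

    def test_loading_an_unknown_id_fails_loudly(self):
        with self.assertRaises(KeyError):
            DocumentStore().load("missing")


if __name__ == "__main__":
    unittest.main()
```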
I think there sometimes needs to be a bit more honest discussion about things like:
* When TDD isn't a good idea (say, when prototyping things, or when you don't yet know how you want to structure the system)
* Which tests are the most valuable, and how to identify them
* The different ways in which tests can provide value (in ensuring the system is designed for testability, in identifying bugs during early implementation, in providing a place to hang future regression tests, in enabling debugging of the system, in preventing regressions, etc.), what kinds of tests provide what value, and how to identify when they're no longer providing enough value to justify their continued maintenance
* What to do when you have to do a major refactoring that kills hundreds of tests (i.e. how much is it worth it to rewrite those unit tests?)
* That investment in testing is an ROI equation (as with everything), and how to evaluate the true value the tests are giving you against the true costs of writing and maintaining them
* All the different failure modes of TDD (e.g. the unit tests work but the system as a whole is broken, mock hell, expensive refactorings, too many tiny pieces that make it hard to follow anything) and how to avoid them or minimize their cost
Sometimes it seems like the high level goals, i.e. shipping high-quality software that solves a user's problems, get lost in the dogma around how to meet those goals.
> I think this is a great explanation of a lot of the obvious pitfalls with "basic" TDD, and why so many people end up putting in a lot of effort with TDD without getting much return.
If you have the cash, spring for Gary Bernhardt's Destroy All Software screencasts. That $240 was the best money my employer ever spent on me. Trying to learn TDD on your own is asking for a lot of pain, and all you'll end up doing is reinventing the wheel.
There are a lot of subtle concepts Gary taught me that I'm still learning to master. You learn what to test, how to test it, at what level to test it, how to structure your workflow to accommodate it.
Were there any particular seasons you found useful in Destroy All? It seems like it's mixed, with just snippets of TDD spread around at will, whenever the need hit.
(I ask because there's no way I'm going to have time to watch/absorb all those things).
There was one 4-episode series on testing untested code that I thought was great, especially because I have a ginormous untested codebase that I have to work with. It's in season 3. Also the series just before that one, on Test Isolation, is a great topic.
> When TDD isn't a good idea (say, when ... you don't yet know how you want to structure the system)
(Apologies in advance as I can't figure out how not to sound snarky here.)
Isn't that called "the design"? And if "test-driven design" fails when you don't already have the design, is it worth anything at all?
Sure, you can call that structure the design, or the architecture, or whatever you like. Either way, it's a fair question.
As a point of semantics: TDD generally stands for "test-driven development," not "test-driven design," though the article here does make the claim that TDD helps with design.
To reduce my personal philosophy to a near tautology: if you don't design the system to be testable, it's not going to be testable. TDD, to me, is really about designing for testability. Doing that, however, isn't easy: knowing what's testable and what's not requires a lot of practical experience which tends to be gained by writing a bunch of tests for things. In addition, the longer you wait to validate how testable your design actually is, the more likely it is that you got things wrong and will find it very painful to fix them. So when I talk about TDD myself, I'm really talking about "design for testability and validate testability early and often." If you don't have a clue how you want to build things, TDD isn't going to help.
If you take TDD to mean strictly test-first development . . . well, I only find that useful when I'm fixing bugs, where step 1 is always to write a regression test (if possible). Otherwise it just makes me miserable.
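As a tiny, hypothetical illustration of that workflow: suppose a pricing function mishandles fractional prices. Step 1 is a regression test that reproduces the report and fails against the current code; the fix comes after, and the test stays in the suite.

```python
import unittest

def discounted_price(price, percent_off):
    # Current (buggy) code, kept here so the red step is visible:
    # floor division drops the cents. The fix is to use / instead of //.
    return price - (price * percent_off // 100)

class TestDiscountRegression(unittest.TestCase):
    def test_ten_percent_off_19_99(self):
        # Written first, straight from the bug report; it fails until the fix lands.
        self.assertAlmostEqual(discounted_price(19.99, 10), 17.991, places=3)

if __name__ == "__main__":
    unittest.main()
```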
The other thing worth pointing out is that design for testability isn't always 100% aligned with other design concerns like performance, readability, or flexibility: you often have to make a tradeoff, and testability isn't always the right answer. I personally get really irked by the arguments some people make that "TDD always leads to good design; if you did TDD and the result isn't good, you're doing TDD wrong." Sure, plenty of people have no clue what they're doing and make a mess of things in the name of testability. (To be clear, I don't think the author here makes the mistake of begging the question: I liked the article because I think it honestly points out many of the types of mistakes people make and provides a reasonable approach to avoiding them.)
I think you're spot on here - TDD is great as long as you're not too obstinate about it. It's a trade off, just like every interesting problem.
One point I'd like to draw out: "If you don't have a clue how you want to build things, TDD isn't going to help."
This is exactly right. If you find yourself completely unable to articulate a test for something, you probably don't really know what it is you're trying to build. I think that's the greatest benefit to TDD: it forces you to stop typing and think.
Exactly. This is the whole purpose behind the "spike" - make a branch, write a crap implementation of some code to help understand the problem, put it aside. Then go write the production version TDD style. Once you understand the problem, you can use TDD to create a good design to solve that problem.
Sounds crazy, but this is how I do everything I don't understand. And my second implementation is usually better than my first.
If you find yourself completely unable to articulate a test for something, you probably don't really know what it is you're trying to build.
I don’t buy this argument. How would you write tests to drive the development of a graphics demo, say rendering a Mandelbrot set? Or a tool to convert audio data from one format to another? Or any other kind of software where the output doesn’t consist of readily verifiable, discrete data points?
Are you asking about unit tests or acceptance tests?
The problems you describe are very high level, but we could design an acceptance testing scheme for them. For the Mandelbrot set it might involve comparison to a reference rendering, for the audio tool a reference recording. In both cases you'd allow a delta relevant to the application, and probably also benchmark for acceptable performance.
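A sketch of what that reference-comparison acceptance test could look like (Python; render(), load_reference(), and the exact tolerances are all stand-ins): render the image, diff it pixel-by-pixel against a known-good rendering, and allow a small per-pixel delta plus a rough time budget.

```python
import time

def render(width, height):
    """Hypothetical stand-in for the renderer under test; a real one would
    compute the Mandelbrot image. Here it just returns a flat grey image."""
    return [[128 for _ in range(width)] for _ in range(height)]

def load_reference(width, height):
    """Stand-in for loading a known-good reference rendering from disk."""
    return [[128 for _ in range(width)] for _ in range(height)]

def max_pixel_delta(image, reference):
    # Largest per-pixel difference between the two images.
    return max(abs(a - b)
               for row_a, row_b in zip(image, reference)
               for a, b in zip(row_a, row_b))

def test_rendering_matches_reference_within_tolerance():
    start = time.perf_counter()
    image = render(64, 64)
    elapsed = time.perf_counter() - start
    reference = load_reference(64, 64)
    # Allow a small delta (anti-aliasing, rounding) and a rough performance budget.
    assert max_pixel_delta(image, reference) <= 2
    assert elapsed < 1.0

if __name__ == "__main__":
    test_rendering_matches_reference_within_tolerance()
    print("acceptance check passed")
```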
But my point was more aimed at unit testing. When you set out to write a function you should know something about that function before starting. If you know enough to write the function signature, you can first write a failing test. If you can write a bit of code in that function, you can write a bit expecting the behavior of that code.
Are you asking about unit tests or acceptance tests?
I suppose what I’m really asking is how you would go from not having software to having software that does those things, using TDD. I think in practice its fail-pass-refactor cycle is normally applied at the level of unit tests, but in any case, how would using TDD help to drive a good design, to ensure testability, or otherwise, in that kind of situation?
(I’m asking this rhetorically. I don’t think TDD is a very helpful process in this context. I’m just trying to demonstrate this with practical examples rather than bluntly stating it without any supporting argument.)
I think I mostly agree with your larger point, but I'm not in love with your examples. The Mandelbrot set does consist of readily verifiable discrete data points, after all. I don't have any problem imagining myself developing a Mandelbrot set program using TDD.
A great example for your point (which might have been what you were getting at with the audio thing) is a test for creating files in a lossy audio format. The acid test is whether it sounds right to a human being with good ears; I've got no clue how you would write a pure computer test for that.
In my own work, a great example is finding a NURBS approximation of the intersection of two surfaces. There are an infinite number of correct answers for a given pair of surfaces, and testing that the curve you've generated fits the surfaces is a distressingly hard problem.
The Mandelbrot set does consist of readily verifiable discrete data points, after all.
Indeed it does, but in order to verify them I see only two options.
One is that you have to choose test cases where the answer is trivially determined. However, with this strategy, it seems you must ultimately rely on the refactoring step to magically convert your implementation to support the general case, so the hard part isn’t really test-driven at all.
The other is that you verify non-trivial results. However, to do so you must inevitably reimplement the relevant mathematics one way or another to support your tests. Of course, if you could do that reliably, then you wouldn’t need the tests in the first place.
This isn’t to say that writing multiple independent implementations in parallel is a bad thing for testing purposes. If you do that and then run a statistical test comparing their outputs for a large sample of inputs, a perfect match will provide some confidence that you did at least implement the same thing each time, so it is likely that all versions correctly implement the spec. (Whether the spec itself is correct is another question, but then we’re getting into verification vs. validation, a different issue.) However, again for a simple example like rendering a fractal, you could do that by reimplementing the entire algorithm as a whole, without any of the overheads that TDD might impose.
I don't have any problem imagining myself developing a Mandelbrot set program using TDD.
I’m genuinely curious about how you’d see that going.
I kind of wish I had the time to actually do it right now and see how it works. But here's how I imagine it going:
1) Establish tests for the is-in-set function. You're absolutely right that the most obvious way to do this meaningfully is to reimplement the function. A better approach would be to find some way to leverage an existing "known good" implementation for the test. Maybe a graphics file of the Mandelbrot set we can test against? (A rough sketch of step 1 follows this list.)
2) Establish tests that given an arbitrary (and quick!) is-in-set function, we write out the correct chunk of graphics (file?) for it.
3) Profit.
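For step 1, a minimal sketch of the kind of tests I'd start with, assuming a hypothetical is_in_set(c, max_iter) function. Rather than wiring up a full "known good" reference right away, this simpler stand-in uses points whose membership is well known.

```python
def is_in_set(c, max_iter=100):
    """Straightforward escape-time check; the implementation under test."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

def test_known_members():
    # Well-known points inside the Mandelbrot set.
    assert is_in_set(0j)
    assert is_in_set(-1 + 0j)

def test_known_non_members():
    # Points that clearly escape.
    assert not is_in_set(1 + 1j)
    assert not is_in_set(2 + 0j)

if __name__ == "__main__":
    test_known_members()
    test_known_non_members()
    print("ok")
```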
Observations:
1) I absolutely would NOT do the "write a test; write just enough code to pass that test; write another test..." thing for this. My strong inclination would be to write a fairly complete set of tests for is-in-set, and then focus on making that function work.
2) There's really no significant design going on here. I'd be using the exact same overall design I used for my first Mandelbrot program, back in the early 90s. (And of course, that design is dead obvious.)
In my mind, the world of software breaks down something like this:
1) Exhaustive tests are easy to write.
2) Tests are easy to write.
3) Tests are a pain to write.
4) Tests are incredibly hard to write.
5) Tests are impossible to write.
I think it's pretty telling that when TDD people talk about tests that are hard to write, they mean easy tests in hard to get at areas of your code. I've never heard one discuss what to do if the actual computations are hard to verify (ie 4 & 5 above) and when I've brought it up to them the typical response is "Wow, guess it sucks to be you."
1) I absolutely would NOT do the "write a test; write just enough code to pass that test; write another test..." thing for this. My strong inclination would be to write a fairly complete set of tests for is-in-set, and then focus on making that function work.
The latter is what I’d expect most developers who like a test-first approach to do. I don’t see anything wrong with it, either. I just don’t think it’s the same as what TDD advocates are promoting.
I think it's pretty telling that when TDD people talk about tests that are hard to write, they mean easy tests in hard to get at areas of your code. I've never heard one discuss what to do if the actual computations are hard to verify (ie 4 & 5 above) and when I've brought it up to them the typical response is "Wow, guess it sucks to be you."
Indeed. At this point, I’m openly sceptical of TDD advocacy and consider much of it to be somewhere between well-intentioned naïveté and snake oil. There’s nothing wrong with automated unit testing, nor with writing those unit tests before/with the implementation rather than afterwards. Many projects benefit from these techniques. But TDD implies much more than that, and it’s the extra parts — or rather, the idea that the extra parts are universally applicable and superior to other methods — that I tend to challenge.
Thus I object to the original suggestion in this thread, which was that a developer probably doesn’t know what they are doing just because they can’t articulate a test case according to the critic’s preferred rules. I think those rules are inadequate for many of the real world problems that software developers work on.
I almost feel like we should come up with a "How the hell would you test this?" challenge for TDD advocates. At least, my impression is it is mostly naïveté rather than snake oil.
Whether the spec itself is correct is another question, but then we’re getting into verification vs. validation, a different issue.
I think you get to the nub of it here. TDD lets you develop a spec that is consistent with requirements (the subset so far implemented) and the code at all times.
Writing a comprehensive suite of tests before any production code is like writing a complete spec without any clue as to its applicability. Writing tests afterward would be analogous to writing a spec for an already shipped product.
Tests work both ways in TDD: you are checking both that the code behaves as intended and that your expected behavior is reasonable. If it were only about the former it wouldn't be very valuable.
I think you get to the nub of it here. TDD lets you develop a spec that is consistent with requirements (the subset so far implemented) and the code at all times.
This is another TDD-related argument that I just don’t understand.
A specification might say that the function add returns the sum of its arguments.
A unit test might verify that add(1,1) = 2.
One of these describes the general case. One of them describes a single specific case. Unless your problem space is small enough to enumerate every possible set of inputs and the expected result for each of them, no amount of unit tests can replace a full specification of the required behaviour. Unfortunately, not many real world problems are that convenient.
"Test-driven design", as it is commonly understood, does seem to be a mythical beast. I've hunted it with both logic and experience and come up empty-handed.
That said, I do still find that while test-driven development doesn't itself create good design, it is a useful tool to help me create good design. I have a bite-size piece of functionality to write; I think about what the class should look like; I write tests to describe the class; I write the class. The key thing is that the tests are a description of the class. The act of writing down a description of something has an amazing power to force the mind to really understand it; to see what's missing, what's contradictory, what's unnecessary, and what's really important. I experience this when I write presentations, when I write documentation, and when I write tests. The tests don't do the thinking for me, but they are a very useful tool for my thinking.
It's very common in software development to receive incomplete requirements. My world would be a very different place if I always received feature-complete design documents (and in some cases, any documents at all). Had I insisted on any kind of TDD, it would greatly increase my workload by reducing my ability to alter the design to accommodate new feature requests and changes while internal clients test the code.
I do gather some places do things differently though. Must be nice.
I think I'd have to offer that my experience differs. TDD is not at all big-design-up-front, even with this reductive exercise. In fact, most features start very minimally and the tree of dependencies grows over time, just like any system becomes incrementally more complex. TDD is just one tool (of many) to help manage that complexity. Both by offering some regression value (at least of the logical bits) and also by encouraging small, bite-sized units that are easy to make sense of (and therefore change or replace)
I have tried many times to do TDD. I find it extraordinarily hard to let tests drive the design, because I already see the design in my head before I start coding. All the details might not be filled in, and there are surely things I overlook from the high-up view, but for the most part I already envision the solution.
It's difficult to ignore the solution that is staring my brain in the face and pretend to let it happen organically. I know that I will end up with a worse design too, because I'm a novice at TDD and it doesn't come naturally to me. (I'd argue that I'm a novice at everything and always will be, but I'm even more green when it comes to TDD)
I have no problem writing unit tests, I love mocking dependencies, and I love designing small units of code with little or no internal state. But I cannot figure out how to let go of all that and try to get there via tests instead.
I don't think that I'm a master craftsman, nor do I think my designs are perfect. I get excited at the idea of learning that the way I do everything is garbage and there's a better way. If I ever learn that I'm a master at software development, I'll probably get depressed. But I don't think my inability to get to a better design via TDD is Dunning-Kruger, either.
You're already doing several reasonable things that tend to improve results: using unit tests, being aware of dependencies, being aware of where your state is held. There is ample credible evidence to suggest that both using automated testing processes and controlling the complexity of your code are good things.
There is little if any robust evidence that adopting TDD would necessarily improve your performance from the respectable position you're already in. So do the truly agile thing, and follow a process that works for you on your projects. You can and should always be looking for ways to improve that process as you gain experience. But never feel compelled to adopt a practice just because some textbook or blog post or high-profile consultant advocated it, if you've tried it and your own experience is that it is counterproductive for you at that time.
My big tip for someone at your stage: don't see the design. See a few designs. Sure, be going in some direction, but constantly be seeking alternatives to choose from. And always favor the simpler alternative to start.
One thing that helps keep me doing that: it's only with trivial problems that you know everything important up front. Accept that your domain will surprise you. That your technology will surprise you. That your own code will surprise you if you pay close attention to what's working well and what could be better.
Maybe you're overthinking it? It sounds like you're already doing the right things.
All the details might not be filled in, and there are surely things I overlook from the high-up view, but for the most part I already envision the solution.
The design part of TDD is just the expectations. So if you were to test an add function for example, you might write something like
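(a minimal sketch; the exact assertion style or test framework doesn't matter)

```python
# Written before add() exists; running this fails, which is the point.
def test_add_returns_the_sum_of_its_two_arguments():
    assert add(1, 2) == 3
```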
before actually implementing the function. So here the design is that the add function takes 2 arguments. That's it.
For other things like classes, your expectations will also drive the design of the class -- what fields and methods are exposed, what the fields might default to, what kinds of things the methods return, etc. Your expectations are the things you saw in your head before you start coding. So it's pretty much the same as what you do already. The benefit of TDD is in knowing that you have a correct implementation and you can move on once things are green.
One thing that's easy to misinterpret is that TDD doesn't mean writing a bunch of tests before writing any code...That's pretty much waterfall development. TDD tends to work best with a real tight test-code loop at the function level.
Incidentally for functions like that, if you have an environment that supports a tool like QuickCheck[1], it's a great thing to use. "The programmer provides a specification of the program, in the form of properties which functions should satisfy, and QuickCheck then tests that the properties hold in a large number of randomly generated cases."
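For instance, a property-based check in that spirit (sketched here with Python's hypothesis library as a stand-in for QuickCheck) states general properties instead of enumerating individual cases:

```python
from hypothesis import given
import hypothesis.strategies as st

def add(a, b):
    return a + b

# Properties, not examples: these must hold for every generated pair of integers.
@given(st.integers(), st.integers())
def test_add_is_commutative(a, b):
    assert add(a, b) == add(b, a)

@given(st.integers())
def test_zero_is_the_identity(a):
    assert add(a, 0) == a

if __name__ == "__main__":
    test_add_is_commutative()
    test_zero_is_the_identity()
    print("properties hold for the generated cases")
```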
Why is it that TDD examples always test stuff that is pretty much useless? I don't need to check an add function. I am pretty confident it will work as is.
If you can find a more useful example somewhere, then please show it to me.
The comments section today looks like a support group for beginners/intermediates who struggled with TDD and gave up, and so want to explain why it's all bunk. I get this. I am not a great programmer. I'm self taught like a lot of you. I had tremendous difficulty grokking TDD and for the longest time I'd start, give up, build without it.
But I'm here as a you-can-do-it-too. You might not think you want to, but I'm so glad I DID manage to get there.
Feel free to ignore because I respect that everyone's experience differs. But the real problem is that there are few good step by step tutorials that teach you from start to competent with TDD. Couple that with the fact that it takes real time to learn good TDD practices and the vast majority of TDDers in their early stage write too many tests, bad tests, and tightly couple tests.
Just as it's taken you time to learn programming - I don't mean hello world, but getting to the competent level with coding you're at today, it'll take a long time to get good with TDD. My case (ruby ymmv) involved googling every time I struggled; lots of Stack Overflow; plenty of Confreaks talks; Sandi Metz' POODR...
Like the OP says - at different stages in the learning cycles you take different approaches because you're better, it's more instinctive to you. I thought I understood the purpose of mocks/doubles, until I actually understood the purpose of mocks/doubles. When used right they're fantastic.
The key insight that everyone attempting TDD has to grok, before all else, is that it's about design not regression testing. If you're struggling to write tests, and they're hard to write, messy, take a lot of setup, are slow to run, too tightly coupled etc. you have a design problem. It's exposed. Think through your abstractions. Refactor. Always refactor. Don't do RED-GREEN-GOOD ENOUGH ... I did for a long time. It was frustrating.
This is a good post. Don't dismiss TDD because you're struggling. Try to find better learning tools and practice lots and listen to others who are successful with it.
It's true that sometimes fads take hold and we can dismiss them as everyone doing something for no reason. But cynicism can take hold too and we can think that of everything and miss good tools and techniques. TDD will help you be a better coder - at least it has me. If your first response to this post was TDD is bullshit, give it another try.
This is really right on the money. If it's too hard to test then you've already found something really valuable - a problem with your design that will cause you friction later on.
"If you're struggling to write tests, and they're hard to write, messy, take a lot of setup, are slow to run, too tightly coupled etc. you have a design problem."
This is my problem exactly, and I wouldn't say I have a design problem. My application is a Django app that returns complex database query results. Creating the fixtures for ALL of the edge cases would take significantly longer than writing the code. At this stage it is far more efficient to take a copy of the production database and check things manually. It helps that my app is in-house only, so users will report straight away when something isn't working.
But to say that I have a design problem because tests are going to be difficult to implement is just plain wrong.
The approach outlined actually makes much more sense without OO. I guess the WTF comes from forcing yourself into a world of "MoneyFinder", "InvoiceFetcher", etc. Makes it look a lot more complicated and prone to error than it is, because you're now supposed to mock objects that may have internal state. Otherwise it's the usual top-down approach with stubs.
Yeah I think it's interesting that the final approach with "logical units" and "collaboration units" mirrors a functional approach with "functions" and "higher-order functions". The advice to write small "logical units" could also just be "write pure functions". The complex class hierarchy in the final example could probably be avoided entirely if you were using a language with first class functions. As a bonus, in a functional language the "collaboration units" have probably already been written and tested for you.
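A rough sketch of that correspondence (Python, with made-up names): the "logical units" become plain pure functions, and the "collaboration unit" is just a function that's handed its collaborators, with no class hierarchy and no mocking framework.

```python
# "Logical units": pure functions, trivially testable with plain asserts.
def net_total(line_items):
    return sum(qty * price for qty, price in line_items)

def apply_tax(amount, rate):
    return round(amount * (1 + rate), 2)

# "Collaboration unit": a higher-order function that only wires collaborators
# together; in tests you can pass simple stand-ins instead of mocking classes.
def invoice_total(line_items, subtotal=net_total, tax=apply_tax, rate=0.2):
    return tax(subtotal(line_items), rate)

def test_logical_units():
    assert net_total([(2, 3.0), (1, 4.0)]) == 10.0
    assert apply_tax(10.0, 0.2) == 12.0

def test_collaboration_unit_with_stand_ins():
    # Collaborators replaced by trivial lambdas: no mocking framework needed.
    assert invoice_total([], subtotal=lambda items: 100.0,
                         tax=lambda amount, rate: amount + 1) == 101.0

if __name__ == "__main__":
    test_logical_units()
    test_collaboration_unit_with_stand_ins()
    print("ok")
```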
Yep. I don't really practice "OOP" anymore because each of my objects are really just behavior with no application state (their only state would be the other behavioral objects they depend on).
However, in a classical language it's easier to organize stuff into classes and for the purpose of a post like this one it's easier to convey. But you're dead on.
I think that Red-Green-Refactor is as much about learning to habitually look for and recognize the refactoring opportunities as it is about being meticulous in reacting to those opportunities.
It's true that nothing forces you to refactor - but I think wanting that is a symptom of treating TDD as a kind of recipe-based prescriptive approach. It is not a reflection of the nature of TDD as a practice or habit.
It's a subtle difference, but important:
A recipe says "do step 3 or your end result will be bad"
A practice says "do step 3 so you get better at doing step 3"
The more I try to explain TDD, the more I realize that some of my favorite concepts, like the ability to mock functionality of an external process because the details of that process should be irrelevant...is just beyond the grasp of most beginners. That is, I thought/hoped that TDD would necessarily force them into good orthogonal design, because it does so for me...but it seems like they have to have a good grasp of that before they can truly grok TDD.
Has anyone else solved this chicken and the egg dilemma?
Test Driven Design doesn't fundamentally solve any problems; it's a tool for master craftsmen to tease out subtle errors in their design. The problem is that junior programmers can't recognize bad design, so they end up writing tests for a bad design; because they don't understand how bad the design is, they don't understand how to break it.
IMHO junior programmers tend to think that over specifying a design helps them, only a master can recognize the brilliance of something like SMTP/REST/JSON over X400/SOAP/XML. TDD just helps them over specify their bad designs.
That said TDD is a wonderful tool in the hands of a master. It's like photography, a $10,000 camera won't help you solve your composition problems. Tech can help ensure Ansel Adams doesn't take a photo with the wrong focus, but a properly focused poorly composed image does not a masterpiece make.
This was indeed my motivation for writing the post. I think the next step to take if you agree with my premise is that we need to come together with ideas for how to best teach TDD to beginners/novices. Exercises that promote these concepts, lines of reasoning to take, tools to get people started without any unnecessary cognitive overhead, etc.
I agree that teaching TDD exactly how I do it today can be a bit overwhelming from a tooling perspective currently, but conceptually I think visualizing it as a reductionist exercise with a tree graph of units is pretty simple.
One thing I do with my beginning programming students (since their programs are tiny) is make them write out "test plans" on paper before they can write their program code.
They have to write the inputs and then the expected results.
It gets them thinking about the concept of using tests as part of the design practice.
Later, I give them the unit tests and they have to write the code. This is usually a rewritten version of a previous program so they see the text-based test plans in action as unit tests.
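For anyone curious, the translation from paper to code is nearly mechanical. A test plan row like "input: 92, expected: A" becomes a unit test (Python; the grading exercise is just a made-up example):

```python
import unittest

def letter_grade(score):
    """The student's implementation of the exercise."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

class TestLetterGrade(unittest.TestCase):
    # Each test is one row of the paper test plan: input -> expected result.
    def test_92_is_an_A(self):
        self.assertEqual(letter_grade(92), "A")

    def test_boundary_90_is_an_A(self):
        self.assertEqual(letter_grade(90), "A")

    def test_75_is_a_C(self):
        self.assertEqual(letter_grade(75), "C")

if __name__ == "__main__":
    unittest.main()
```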
Then I might give them the empty test and an empty implementation, asking them to fill in the test first, then the implementation.
Finally I ask for a completely new feature, and they have to figure out how to write the test. And I ask them to go about it with a test plan.
After a few semesters of this, I think I'm ready to say that this is successful for getting the "beginners" there.
It doesn't address everything, but I think it's a good start.
I wonder about these workshops (I even asked Uncle Bob Martin about them in a recent thread). I can't shake the feeling they are the exact opposite of agility (obviously, he is better qualified than me to judge that). Their limited time schedules, which essentially bound the amount of contact between the client and the supplier, seem analogous to the infamous "requirements document". Also, there doesn't appear to be a "shippable" product at the end - the developers apparently don't end up practicing TDD.
I used to be an instructor for a living, and I kind of equated lectures to waterfall and exercises to XP. There is even a semantically analogous term in teaching research, problem-based learning (each word corresponds to the respective word in test-driven development - cool, right?). Is there anyone else who sees these analogues, or am I completely crazy here?
Might one of the problems be that we place too much importance on the "symmetrical" unit test? In your example the child code is still covered when it is extracted from the parent.
As a developer that often prefers tests at the functional level, the primary benefit of tests for me is to get faster feedback while I am developing.
The trouble with abandoning symmetrical unit tests is that:
* The unit is no longer portable and can't be pulled from the context it was first used in (e.g. into a library or another app) without becoming untested. And adding characterization testing later is usually more expensive
* A developer who needs to make a change to that unit needs to know where to "test drive" that change from, which requires that they know where to look for the parent's test that uses it. That's hard enough but it completely falls over when the unit is used in two, three, or more places. Now a bunch of tests have to be redesigned and none of them are easy to find.
* Integrated unit tests like this lead to superlinear build duration growth b/c they each get slower as the system gets bigger. This really trips teams up in year 2 or 3 of a system.
Unless I'm missing something, wouldn't the child dependency be enough to prevent the unit from being dropped into another library or app? That's a good point you bring up about knowing where to "test drive" the changes from, though usually on the apps I've worked on, they've been small enough that the relevant integration test could be found without much detective work.
I guess I haven't been involved in too many 2-3 year monolithic projects. Maybe that's when a stricter symmetrical unit test policy makes the most sense.
What other levels of tests do you end up running besides your unit tests? Do you have any integrated unit tests? Functional tests? End to end tests?
The author is stating that the child dependency cannot be extracted to another library or app. If it is extracted, it is untested, because the only tests wrapping the child dependency are actually testing the child's original parent. (Which is likely to not exist in whatever other library/app to which the child component is moved.) And then, to retroactively add tests to the child component in order to facilitate moving it to another library, is painful.
Having symmetrical tests enables components to be moved more easily to other libraries/apps, because the test can move with the unit under test.
Shout out for Sandi Metz book POODR, and her Railsconf talk The Magic Tricks of Testing, if you're a rubyist (though the principles hold true for non-ruby OO programmers too).
+1 for POODR - very (very) well written, goes down multiple pathways reasonably (rather than "this is how you solve that" without any clue why you solve it that way), and gives some decent tools for any project. I only wish it were longer.
I agree with the general approach suggested in the article (in tests, write/assume the code you wish you had).
But one detail ran counter to my personal practice.
I don't believe that "symmetrical" unit tests are a worthy goal. I believe in testing units of behavior, whether or not they correspond to a method/class. Symmetry leads to brittleness. I refactor as much as possible into private methods, but I leave my tests (mostly) alone. I generally try to have a decent set of acceptance tests, too.
Ideally, you specify a lot of behavior about your public API, but the details are handled in small private methods that are free to change without affecting your tests.
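A small sketch of that idea (Python, hypothetical names): the tests pin down units of behavior on the public method, and the private helpers can be split, merged, or renamed without touching them.

```python
import unittest

class PriceQuote:
    """Public contract: quote(). The leading-underscore helpers are private
    details that can be refactored freely without breaking the tests below."""

    def __init__(self, base_price, tax_rate=0.2):
        self._base_price = base_price
        self._tax_rate = tax_rate

    def quote(self, quantity):
        return self._with_tax(self._subtotal(quantity))

    def _subtotal(self, quantity):
        return self._base_price * quantity

    def _with_tax(self, amount):
        return round(amount * (1 + self._tax_rate), 2)


class TestPriceQuoteBehaviour(unittest.TestCase):
    # Only observable behavior of the public API is specified here.
    def test_quote_includes_tax(self):
        self.assertEqual(PriceQuote(10.0).quote(quantity=3), 36.0)

    def test_zero_quantity_costs_nothing(self):
        self.assertEqual(PriceQuote(10.0).quote(quantity=0), 0.0)


if __name__ == "__main__":
    unittest.main()
```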
I understand the concern, but I value consistency and discoverability, so symmetry of thing-being-tested to test itself is (so far) the best way I've found to make sure it's dreadfully obvious where a given unit's test is.
This approach is not concerned with brittleness or being coupled to the implementation because each unit is so small that it's easier to trash the object and its test when requirements change than it is to try to update both dramatically.
I suppose that if you do keep things that small, it could work well to trash and rewrite. Plus it has the benefit of making you consider explicitly what is going/staying.
Personally, I like my tests to be pretty clearly about the behavior of the contract, and not the implementation, which is hard when you require every method have a test.
I'd also be concerned that other team members are reluctant to delete tests - as this is a dysfunction I see often, and try to counteract with varying degrees of success.
Symmetrical tests really help other developers on your team. It depends on your design, but "public API" tests are often something between integration and unit tests - i.e. they test in-process co-operation of units.
Yes! I've always hated the common kata, because for every dev writing software for a bowling alley, there are 200,000 devs writing software that sends invoices or stores documents.
When I'm teaching TDD, the kata I have everyone go through is a simple order system.
The requirements are something like:
* A user can order a case of soda
* The user should have their credit card charged
* The user should get an email when the card is charged
* The user should get an email when their order ships
* If the credit card is denied, they should see an error message
(etc....)
This way they can think about abstracting out dependencies, an IEmailService, an ICreditCardService, etc. There are no dependencies for a Roman Numeral converter.
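A minimal sketch of the kind of thing people end up with (Python, with unittest.mock standing in for the IEmailService/ICreditCardService interfaces; all names are illustrative):

```python
import unittest
from unittest.mock import Mock

class CardDeclined(Exception):
    pass

class OrderService:
    """Orchestrates the kata's requirements; the payment and email services
    are injected so they can be mocked in tests."""

    def __init__(self, payments, emails):
        self._payments = payments
        self._emails = emails

    def place_order(self, user, item):
        if not self._payments.charge(user.card, item.price):
            raise CardDeclined("your card was declined")
        self._emails.send(user.email, f"Charged for {item.name}")

class TestOrderService(unittest.TestCase):
    def setUp(self):
        self.payments = Mock()
        self.emails = Mock()
        self.service = OrderService(self.payments, self.emails)
        self.user = Mock(card="4111", email="a@example.com")
        self.item = Mock(price=12.0)
        self.item.name = "case of soda"  # 'name' can't be set via the Mock constructor

    def test_successful_charge_sends_confirmation_email(self):
        self.payments.charge.return_value = True
        self.service.place_order(self.user, self.item)
        self.payments.charge.assert_called_once_with("4111", 12.0)
        self.emails.send.assert_called_once()

    def test_declined_card_raises_and_sends_no_email(self):
        self.payments.charge.return_value = False
        with self.assertRaises(CardDeclined):
            self.service.place_order(self.user, self.item)
        self.emails.send.assert_not_called()

if __name__ == "__main__":
    unittest.main()
```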
TDD as I practice it does, but I think OOP as it's traditionally taught encourages developers to tangle mutable application state and behavior, which leads to all sorts of problems. The more I practice, the more I learn that life is better when I separate whatever holds the state from whatever has the behavior.
This is probably the first reasonably sophisticated attempt to describe a test-driven design/development process I have read.
The observation that "[s]ome teachers deal with this problem by exhorting developers to refactor rigorously with an appeal to virtues like discipline and professionalism" reminds me of E. O. Wilson's remark that "Karl Marx was right, socialism works, it is just that he had the wrong species."
If test-driven design were the programming panacea its proponents sometimes seem to make of it, Knuth would have written about it in TAOCP. Instead Knuth advocates Literate Programming. TDD seems to attract a cult-like following, with a relatively high ratio of opinion to cited peer-reviewed literature among proponents.
TDD as commonly understood seems to me like the calculational approach to program design (cf. Anne Kaldewaij, Programming: the derivation of algorithms), only without the calculation and without predicate transformers. Still, it can be a useful technique.
There is no "right" way to program. This was evident from the beginning, when Turing proved the unsolvability of the halting problem. (Conventions are another matter.)
Sure, but if the end result is "lots of little objects/methods/functions" maybe there's a simpler way of getting there, e.g. prescriptive design rules. After all, that's what every design method, including stuff from the waterfall era attempted.
I'd like TDD to be more than just another way to relearn those old rules, especially if we arrive at the same conclusions on a circuitous path. Perhaps the old design rules, object patterns, etc. have to each be integrated with a testing strategy, e.g. if you're using an observer you have to test it like this and if you refactor it like that you change your tests like so.
The general rules are easy to understand and your post makes perfect sense but once you formulate your new design approach you'll have to find a way to teach it precisely enough to avoid whatever antipattern is certain to evolve among the half-educated user community, which usually includes myself and about 95% of everyone else.
Hey HN, I just wanted to thank you for the overall very positive, constructive comment thread. Thanks to you this post got around 22k page views, and I didn't receive a single vitriolic comment or bitter dissent. All I got was thoughtful, earnest, and honest replies. Made my day.
OK but after you "Fake It Until You Make It" and you have to add a new feature to that class structure, aren't you just going to start over with all the failures he brings up?
---------
I haven't designed code the way he's advocating, but I have attempted TDD by starting with the leaves first. Here are the downsides to that:
1) Sometimes you end up testing and writing a leaf that you don't end up using/needing.
2) You realize you need a parameter you didn't anticipate. E.g.: "Obviously this patient report needs the Patient object. Oh crap, I forgot that there's a requirement to print the user's name on the report. Now I've got to get that User object and pass it all the way through."
Maybe these experiences aren't relevant. As I said, I haven't tried to "Fake It Until You Make It".
Excellent post, I've had exactly the same experience and come to exactly the same conclusion.
I still follow the old Code Complete method: think about the problem, sketch it out, then finally implement with unit tests. The results are the same, and it's a lot less painful than greenhorn-TDD.
I do this as well. Prototyping needs flexibility, and unit tests slow down refactoring. If you are familiar with SOLID, then your design will not be bad even without a test-first approach.
I completely agree with this. In fact, when I have a bigger architectural problem to think about, I like to sit on it for a day or two, thinking about one or two designs that would work. It takes a while to see the strengths/flaws in each design, and if you jump straight into code you won't realize the problems until you have something half-implemented.
TDD and agile have been an effort at breaking with an old must-have for code, ISO 9001: the code should behave according to the plan, and if the tests fail and the two don't conform, the plan must be revised. The Plan-Do-Check-Act mantra.
Now they find themselves facing the consequences of not respecting the expectations of the customers, and they whine that "it was not applied correctly, because no one cared".
So now they re-formalize exactly the supposedly "rigid" ISO 9001 they were trying to tear down.
I suspect if they had called it Architecture Driven Development (ADD) rather than Test Driven Development (TDD) it might contextualize better. Basically what the author explains is that you can design an architecture top down from simple requirements, deriving more complex requirements, and then providing an implementation strategy that lets you reason about whether or not you are "done."
But that 'test' word really puts people in the wrong frame of mind at the outset.
Yeah, the common implications of the word "test" have always been problematic. The BDD movement did a good job bringing that to light, but I didn't want to re-litigate that all in my post just to make a point about semantics. Totally agree, though.
> ...TDD's primary benefit is to improve the design of our code, they were caught entirely off guard. And when I told them that any regression safety gained by TDD is at best secondary and at worst illusory...
Thank you! Details of this post aside, this gave me an Aha! moment and I feel like I'm finally leaving the WTF mountain.
Ian Cooper has a good talk that's relevant to this blog post. It's called 'TDD, where did it all go wrong?' and a recording from NDC 2013 can be found here: http://vimeo.com/68375232
Those guys must really hate their readers. That crappy web site is not zoomable! In the 21st century? In the era of “responsive web design”? Mega fail. Did they use TDD?
What's your definition of zoomable? I'm able to adjust text size just fine. More details on your specific issue? If you mean layout, it is responsive. The text column narrows and the images never exceed 100% width.
Tools like Typemock help you make bad decisions that you will regret later on...
Isolating things is very important to make code easier to test, and to lower the risk of tests breaking when you change other parts of the system. Sometimes isolating one part from another is hard work. Typemock makes it easier, but at the same time it ties you closer to the part that you are trying to isolate from.
Take a database, for example. You want to test something that eventually should store something in a database. You can either make a thin layer abstracting away your database so that you can test the functionality without depending on the database, or you can make a tighter coupling to the database and use tools like Typemock to get rid of it in test mode. If you want to change the way you store data, you now have production code tightly coupled to the current storage strategy AND tests tightly coupled to the current storage strategy...
Typemock can be of great help sometimes, but really you should strive to find better designs instead.
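For what it's worth, a sketch of the "thin layer" alternative (Python, made-up names): the production code depends on a small storage interface, tests use an in-memory fake, and no mocking tool needs to reach into the real database.

```python
import unittest

class InMemoryOrderStore:
    """Test double for the storage layer: same tiny interface as the real one
    (hypothetically, a class wrapping the actual database)."""

    def __init__(self):
        self._rows = {}

    def save(self, order_id, data):
        self._rows[order_id] = data

    def load(self, order_id):
        return self._rows[order_id]


class OrderArchiver:
    """Production logic only talks to the store's narrow interface, so swapping
    the storage strategy later doesn't ripple through code or tests."""

    def __init__(self, store):
        self._store = store

    def archive(self, order_id, total):
        self._store.save(order_id, {"total": total, "archived": True})


class TestOrderArchiver(unittest.TestCase):
    def test_archived_orders_are_marked(self):
        store = InMemoryOrderStore()
        OrderArchiver(store).archive("o-1", total=9.5)
        self.assertEqual(store.load("o-1"), {"total": 9.5, "archived": True})


if __name__ == "__main__":
    unittest.main()
```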
Apologies for the continued downtime; we're trying to get a CDN in front of the (static Apache) Heroku app. In the past, not having any dynamic language in the background was enough to stay up, but not today, apparently.