
> I experienced this first hand when I first started learning to do TDD with unit tests. We had a badly designed system and we adopted a unit testing strategy that said every class needed to be tested independently from every other class. Testing this system was hard. It was a lot of effort with mocks and dependency wrangling to exercise our classes, and it produced brittle tests that never added any value in terms of the quality of the overall system.

> We were writing bad tests in a badly designed system and having a horrible time. The conclusion we came to was that this testing thing was for the birds. It was slowing us down and not really adding any benefit. Get rid of it.

If the tests are revealing that the system is "badly designed," the solution is not to throw away the tests in my view - it's to work on gradually refactoring the system to address the difficulties with testing it (of course subject to developer time/budget constraints...). The tests provide a gauge of the "internal quality" of the system (to take a concept introduced by the text "Growing Object-Oriented Software, Guided by Tests" [0]).

xUnit Patterns has published a list of "test smells" and possible defects they may indicate in the design of the system: http://xunitpatterns.com/Test%20Smells.html
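As a rough illustration of the kind of smell meant here (all names invented, and assuming a Jest-style runner), consider a unit test that has to stand up a stub for every collaborator before it can check one small rule. The effort is the signal: it points at a class with too many direct dependencies rather than at testing being worthless.

    // Hypothetical sketch of an excessive-setup smell (invented names).
    class OrderService {
      constructor(
        private payments: { charge(amount: number): Promise<boolean> },
        private inventory: { reserve(sku: string): Promise<boolean> },
        private mailer: { send(to: string, body: string): Promise<void> },
        private audit: { record(event: string): void },
        private clock: { now(): Date },
      ) {}
      discountFor(customer: { previousOrders: number }): number {
        return customer.previousOrders >= 3 ? 0.1 : 0;
      }
    }

    // Assumes a Jest-style test runner providing `test` and `expect`.
    test("applies discount to repeat customers", () => {
      // Five stubs to exercise one pure calculation: the test is commenting
      // on the design of OrderService, not on the value of testing.
      const service = new OrderService(
        { charge: async () => true },
        { reserve: async () => true },
        { send: async () => {} },
        { record: () => {} },
        { now: () => new Date() },
      );
      expect(service.discountFor({ previousOrders: 3 })).toBe(0.1);
    });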

To be fair the author does call this point out in their conclusion:

> If testing your code is hard it means your code needs to be factored better. Stop what you’re doing and fix the design so that testing is easy again.

I would also add that adhering to the testing pyramid [1] principle is important, such that your system is not overly reliant on mocks (which can diverge from the actual "production" behaviour of the system) and/or brittle unit tests that break easily when changes to the system are introduced.

[0] https://www.google.ca/books/edition/Growing_Object_Oriented_...

[1] https://martinfowler.com/articles/practical-test-pyramid.htm...




>If the tests are revealing that the system is "badly designed," the solution is not to throw away the tests in my view - it's to work on gradually refactoring the system to address the difficulties with testing it

I'm mystified that people keep suggesting this given the glaringly obvious chicken-egg problem inherent in doing it.

If you think you can't safely refactor without reliable unit tests and you can't get reliable unit tests without refactoring then you are stuck.

The ironic thing is that on the projects I've dug out of a "not safe to refactor" quagmire (with high level integration tests), I always had an end goal in mind of eventually refactoring to make the code more unit testable. It felt like the "right" thing to do. That was just the industry dogma talking, though.

In practice, by the time I reached the point where we could do that, there was usually no point. Refactoring towards unit testability had a low or negative ROI once the code was already safe to change.


> If you think you can't safely refactor without reliable unit tests and you can't get reliable unit tests without refactoring then you are stuck.

Books have been written on exactly this problem; Michael Feathers' "Working Effectively with Legacy Code" [1] comes to mind, where he explicitly names it "The Legacy Code Dilemma:"

> When we change code, we should have tests in place. To put tests in place, we often have to change code.

It's not exactly an easy problem (hence the existence of the book), but there do exist techniques for getting around it - speaking very generally, finding "seams" (places where the behaviour of the system can be modified without changing the source code), breaking dependencies, and gradually getting smaller and smaller units of the system into test harnesses.
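As a rough sketch of what a seam can look like (class and method names invented), one of the simplest is "subclass and override": the behaviour at the seam is replaced in a test double, without editing the production code path.

    // Hypothetical "object seam" (invented names): the slow dependency is
    // isolated behind a method, so a test can override it.
    class ReportGenerator {
      generate(customerId: string): string {
        const data = this.fetchSales(customerId); // the seam
        return `total=${data.reduce((a, b) => a + b, 0)}`;
      }
      protected fetchSales(customerId: string): number[] {
        // imagine a slow HTTP call here in the real system
        throw new Error("network access not available in tests");
      }
    }

    // Behaviour swapped at the seam; the production source stays untouched.
    class TestableReportGenerator extends ReportGenerator {
      protected fetchSales(): number[] {
        return [10, 20, 12];
      }
    }

    // Assumes a Jest-style test runner.
    test("sums sales into the report", () => {
      expect(new TestableReportGenerator().generate("c-1")).toBe("total=42");
    });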

Sometimes the code does need to change in order to enable testing:

> Is it safe to do these refactorings without tests? It can be. [...] The trick is to do these initial refactorings very conservatively.

Martin Fowler has catalogued [0] some of these "safe moves" or refactorings one can make; Feathers also recommends the use of tooling and automated refactoring support (provided one understands how safe those tools are, and what guarantees they offer) in order to make these initial refactorings to get the code under test.
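For a sense of what a "conservative" initial move can look like, here is a sketch (invented names) of Extract Function: a pure calculation is pulled, verbatim, out of an I/O-heavy routine so it can go under test before anything else changes.

    // Before (sketch): pricing logic buried in a routine that also talks to
    // the database, so it can't be tested without standing all of that up.
    //   async function checkout(cartId: string) {
    //     const cart = await db.loadCart(cartId);
    //     const total = cart.items.reduce((sum, i) => sum + i.price * i.qty, 0);
    //     const discounted = total > 100 ? total * 0.95 : total;
    //     await db.savePayment(cartId, discounted);
    //   }

    // After: the calculation is extracted unchanged; checkout() delegates to it.
    export function priceCart(items: { price: number; qty: number }[]): number {
      const total = items.reduce((sum, i) => sum + i.price * i.qty, 0);
      return total > 100 ? total * 0.95 : total;
    }

    // Assumes a Jest-style test runner.
    test("applies the bulk discount above 100", () => {
      expect(priceCart([{ price: 60, qty: 2 }])).toBe(114);
    });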

Whether or not this is actually worth the time invested is another matter, and probably a far more complex one.

[0] https://refactoring.com/catalog/

[1] https://archive.org/details/working-effectively-with-legacy-...


>It's not exactly an easy problem (hence the existence of the book), but there do exist techniques for getting around it - speaking very generally, finding "seams" (places where the behaviour of the system can be modified without changing the source code), breaking dependencies, and gradually getting smaller and smaller units of the system into test harnesses.

These techniques are on the right track but they are outdated. They advocate making changes that are as small as possible (hence the seams thing) while covering the app with unit tests. Most of the advice centers around this maxim: change as little as possible.

What's the smallest amount of change you can make? Zero - i.e. running hermetic end-to-end tests over the app and changing no more than a couple of lines.

They don't advocate zero, though. They advocate unit tests.

To be fair to these authors, that option was not really viable when they first wrote the book. Instead of 200 reliable 15-second Playwright tests run across 20 cloud-based workers and completing in 3 minutes, you were faced with the possibility of a 24-48 hour flaky test suite running on one Jenkins server.

So, it probably made sense more often to take slightly larger risks to crack open some of those seams in pursuit of a test that was less flaky and ran in under a second rather than 2 minutes.
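For a rough idea of what a hermetic end-to-end test of this kind might look like (URLs, labels, and routes are all invented; the app and its backing services are assumed to run locally), a Playwright sketch:

    // Sketch only: drives the system through the UI and the network edge,
    // touching none of the internals.
    import { test, expect } from "@playwright/test";

    test("customer can place an order", async ({ page }) => {
      // Outbound third-party calls are stubbed at the network edge so the
      // test stays hermetic and repeatable.
      await page.route("**/api.payments.example.com/**", (route) =>
        route.fulfill({ status: 200, body: JSON.stringify({ ok: true }) }),
      );

      await page.goto("http://localhost:3000");
      await page.getByLabel("Search products").fill("coffee beans");
      await page.getByRole("button", { name: "Add to cart" }).first().click();
      await page.getByRole("link", { name: "Checkout" }).click();

      await expect(page.getByText("Order confirmed")).toBeVisible();
    });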


> To be fair to these authors, that option was not really viable when they first wrote the book. Instead of 200 reliable 15-second Playwright tests run across 20 cloud-based workers and completing in 3 minutes, you were faced with the possibility of a 24-48 hour flaky test suite running on one Jenkins server.

The problem with integration and e2e tests is that they do not give you a measure of the "internal quality" of the system in the way unit tests do. Quoting again from "Growing Object-Oriented Software, Guided by Tests":

> Running end-to-end tests tells us about the external quality of our system, and writing them tells us something about how well we (the whole team) understand the domain, but end-to-end tests don’t tell us how well we’ve written the code. Writing unit tests gives us a lot of feedback about the quality of our code [...]

I don't think compute / test duration were the only motivations behind this approach.


If I inherit a crappy code base, I want to get to a place where I can safely refactor as quickly as possible.

I don't need indirect commentary on how crap the code base is in the form of tests that are annoying to write because they require 100 mock objects. It's not telling me anything new, and it's annoying the hell out of me while it does it.

If the codebase is good, I also don't need that commentary. I can read.

Indeed, maybe the real message that unit tests are sending, by being intolerant of bad code, is that they themselves are the bad code.


Huzzah!

I'll add that the meme of preferring integration tests to any other tests seems to stem from badly designed code. Looking at production code less closely disguises many sins.


To be fair, while I agree with you, integration/e2e tests are much, much easier to introduce into a legacy system, and it's really easy to break things at the edges, so they are definitely useful.



