
One point Uncle Bob makes is that doing this for everything allows you to make far-reaching architectural changes with confidence that you haven't broken a bunch of things in places you don't even realize. So the tests are a tool that allows you to do refactorings you would otherwise be scared of.



> One point Uncle Bob makes is that doing this for everything allows you to make far-reaching architectural changes with confidence that you haven't broken a bunch of things in places you don't even realize.

I was working on a new project recently. Over the first month or so I went through four or five significant iterations of the architecture before I settled on one that seemed to have a healthy mix of power, flexibility and simplicity.

Each time, I found the test cases I’d identified during earlier iterations helpful. Ultimately they led me to a more rigorous analysis to make sure I’d covered all required cases in all required places.

However, I hardly ever kept the test code from one iteration to the next. Each unit test follows a general pattern: arrange whatever scenario I want to test, call the function under test itself, then assert some expected result. With a significant architectural change, the interfaces for arranging things, or the interface of the function under test itself, might change, leaving much of the code in the test obsolete. The responsibility being tested might not even live in the same part of the code any more, meaning a whole set of test cases now needs to be applied to a different component in the system.
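
For illustration, here is what that arrange/act/assert shape looks like as a minimal Jest-style TypeScript sketch; the `Cart` type and `applyDiscount` function are hypothetical, not from the project described above:

    // A hypothetical unit under test.
    interface Cart {
      items: { price: number }[];
    }

    function applyDiscount(cart: Cart, percent: number): number {
      const total = cart.items.reduce((sum, item) => sum + item.price, 0);
      return total * (1 - percent / 100);
    }

    test("applies a percentage discount to the cart total", () => {
      // Arrange: build the scenario the test needs.
      const cart: Cart = { items: [{ price: 40 }, { price: 60 }] };

      // Act: call the function under test.
      const total = applyDiscount(cart, 10);

      // Assert: check the expected result.
      expect(total).toBe(90);
    });

A significant architectural change can invalidate all three phases at once: the arrangement builds structures that no longer exist, the call site has a different signature, and the asserted result now comes from somewhere else.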


Statements like that make me wonder when he last committed production code, because that’s just laughably wrong.

Maybe if you work in a monolith, sure. But most of us work in distributed systems with really complex behavior. No TDD suite in the world is going to catch a thread pool issue that’ll open a circuit breaker in your client.


> No TDD suite in the world is going to catch a thread pool issue that’ll open a circuit breaker in your client.

I don't know your setup, but this is normally exactly what system tests do (aka E2E tests, GUI tests and so on; the tip of the Testing Pyramid). In distributed systems, those often live outside the various components' codebases, maybe even in their own project.

Edit: these are automated because it is a) very impractical to check these things manually, b) often simply impossible to check them before each release, and c) getting into a state where such events/regressions occur in the first place requires hoops and tricks; a state that is near impossible to reach manually.
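
As a rough sketch of the kind of regression a system-level test can surface and an isolated unit test cannot, here is a toy TypeScript example; the `CircuitBreaker` and `BoundedPool` classes are hypothetical stand-ins for a real client library and thread pool, not an actual implementation:

    // Toy circuit breaker: latches open after `threshold` consecutive failures.
    class CircuitBreaker {
      private failures = 0;
      public open = false;
      constructor(private threshold: number) {}
      async call<T>(fn: () => Promise<T>): Promise<T> {
        if (this.open) throw new Error("circuit open");
        try {
          const result = await fn();
          this.failures = 0;
          return result;
        } catch (err) {
          if (++this.failures >= this.threshold) this.open = true;
          throw err;
        }
      }
    }

    // Toy bounded pool that rejects work when all slots are busy,
    // standing in for an exhausted thread pool.
    class BoundedPool {
      private active = 0;
      constructor(private size: number) {}
      async run<T>(fn: () => Promise<T>): Promise<T> {
        if (this.active >= this.size) throw new Error("pool exhausted");
        this.active++;
        try { return await fn(); } finally { this.active--; }
      }
    }

    test("breaker opens when the pool saturates under load", async () => {
      const pool = new BoundedPool(2);
      const breaker = new CircuitBreaker(3);
      const slowCall = () => new Promise<void>(res => setTimeout(res, 50));

      // Fire far more concurrent requests than the pool can hold.
      const results = await Promise.allSettled(
        Array.from({ length: 10 }, () => breaker.call(() => pool.run(slowCall))),
      );

      // The overload trips the breaker; tests that exercise one
      // call at a time never reach this state.
      expect(breaker.open).toBe(true);
      expect(results.some(r => r.status === "rejected")).toBe(true);
    });

The point is the shape of the test, not the toy classes: only by driving many concurrent calls through the whole stack does the failure mode appear at all.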


I think that what the majority of people in the industry want to do (microservices and distributed systems, much like FAANG) far exceeds their capability to deal with the complexities (such as testing distributed systems) and doesn't always even fit their needs.

I get the feeling that if more people were okay with developing monoliths (albeit modular ones), then a lot of things could be easier to do, such as doing TDD properly and being able to refactor without fearing the unforeseen consequences.

Heck, maybe the projects I work on in my $dayjob would even have proper documentation, decent test coverage, and a set of tools to support the development process, instead of us having to waste time managing configuration, deployments, and system integrations. Maybe it's a matter of there not being enough workforce (or the managerial folk not seeing much point in investing resources into testing and other ops-related activities that don't generate business value directly, a worrying trend I've noticed). Right now I'm introducing Ansible and containerization to at least simplify some of this stuff, but it feels like an uphill battle, since I'm also supposed to ship new features.

Surely I'm not the only one who kind of sees why the person you're replying to would express the viewpoints they did? It's hard to do elaborate E2E tests when everything is figuratively on fire all the time and you're struggling with a complicated architecture that may or may not be necessary. I'm probably projecting here, but every single enterprise project I've worked on has been like that.


This can also lead to an asphyxiating second system that prevents you from making the slightest architectural change.


That is definitely true and I fully agree with this.

The benefit is basically all the tests you don't have to change, which give you confidence that your refactorings don't break certain things.

The drawback is all the tests that do need to be rewritten because of the refactoring, which will slow you down again.

But in the end, for this use case, I think it's a good ROI. It's probably the best use case for TDD.


A brittle test suite during refactoring is usually caused by too much mocking.

Look at the concept of sociable tests.

https://martinfowler.com/bliki/UnitTest.html
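
A minimal sketch of the distinction, using hypothetical `PriceCalculator` and `Checkout` classes in Jest-style TypeScript:

    // Hypothetical collaborators.
    class PriceCalculator {
      totalFor(prices: number[]): number {
        return prices.reduce((sum, p) => sum + p, 0);
      }
    }

    class Checkout {
      constructor(private calc: PriceCalculator) {}
      charge(prices: number[]): number {
        return this.calc.totalFor(prices);
      }
    }

    // Solitary style: the mock pins down the collaborator's exact
    // interface, so renaming or reshaping PriceCalculator breaks
    // this test even when observable behavior is unchanged.
    test("solitary: charge delegates to the calculator", () => {
      const calc = { totalFor: jest.fn().mockReturnValue(100) };
      const checkout = new Checkout(calc);
      expect(checkout.charge([40, 60])).toBe(100);
      expect(calc.totalFor).toHaveBeenCalledWith([40, 60]);
    });

    // Sociable style: the real collaborator is wired in and only
    // observable behavior is asserted, so internal refactorings of
    // how these classes talk to each other stay invisible.
    test("sociable: charge returns the total of the prices", () => {
      const checkout = new Checkout(new PriceCalculator());
      expect(checkout.charge([40, 60])).toBe(100);
    });

The sociable test survives any refactoring that preserves behavior; the solitary one is coupled to the collaborator's current shape.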


That's a problem with your tests. Take the classicist approach and your tests won't fail after a significant refactoring.


This is what good sum types are for. They have the added benefit of not just testing what an output is after the fact: they can tell you, as you're writing the thing, what the output must be, which makes refactoring far faster than waiting on a test suite to fail.
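
For example, a minimal TypeScript sketch with a hypothetical `PaymentResult` type (other languages with sum types work the same way):

    // A discriminated union: TypeScript's flavor of a sum type.
    type PaymentResult =
      | { kind: "approved"; transactionId: string }
      | { kind: "declined"; reason: string }
      | { kind: "retry"; afterMs: number }; // imagine this variant was just added

    function describeResult(result: PaymentResult): string {
      switch (result.kind) {
        case "approved":
          return `ok: ${result.transactionId}`;
        case "declined":
          return `declined: ${result.reason}`;
        case "retry":
          return `retry in ${result.afterMs}ms`;
        default: {
          // Exhaustiveness check: if a new variant is added to
          // PaymentResult and not handled above, this assignment
          // no longer type-checks, flagging the gap as you edit
          // rather than when a test suite eventually fails.
          const unhandled: never = result;
          return unhandled;
        }
      }
    }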



