I used to use a very convoluted coverage setup with my Rails apps to ensure that coverage was only counted for the parts directly under test. To clarify: it's pretty easy in a Rails app to write an integration test that hits one endpoint and uses every model.
Because end-to-end integration tests don't actually assert anything about model properties, it's incorrect to record model coverage during such tests. So each controller test only recorded coverage for the controller it was testing (and nothing else), each model test only recorded coverage for the model it was testing (and nothing else), etc.
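A minimal sketch of the filtering idea, using Ruby's built-in `Coverage` module for context. The `coverage_for` helper and the file paths are hypothetical, not the actual setup described above:

```ruby
require "coverage"

# Hypothetical helper: given raw coverage results (the hash shape returned by
# Coverage.result after Coverage.start), keep only the files belonging to the
# component a given test actually targets, and discard everything else.
def coverage_for(results, path_fragment)
  results.select { |file, _lines| file.include?(path_fragment) }
end

# Stubbed result hash for illustration (real data comes from Coverage.result):
raw = {
  "app/controllers/users_controller.rb" => [1, 2, nil, 0],
  "app/models/user.rb"                  => [5, nil, 3],
}

# A controller test would record only its own controller's lines:
puts coverage_for(raw, "controllers/users_controller").keys
```

A real setup would hook this into the test runner so each test file maps to its component's path fragment automatically.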
Of course, in a more IoC/DI-type system (Spring, etc.) well-written tests don't interact with other objects in the first place: stubs or mocks are injected. In that case it's easier to ensure a test for a given component only exercises that one component, but you still have to verify that your assertions are meaningful for the coverage to ultimately mean anything.
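To illustrate the injection point, here's a toy Ruby sketch (the `Greeter` class and its collaborator are invented for this example): because the collaborator is passed in rather than hard-coded, a test can hand in a stub and exercise only the one component.

```ruby
# Hypothetical component that receives its collaborator via the constructor
# instead of reaching out to a real model or service itself.
class Greeter
  def initialize(name_source)
    @name_source = name_source
  end

  def greeting
    "Hello, #{@name_source.name}!"
  end
end

# In a test, a lightweight stub stands in for the real object, so no other
# production code runs and no coverage is recorded for it:
NameStub = Struct.new(:name)
puts Greeter.new(NameStub.new("Ada")).greeting
```

The point stands, though: the assertion on `greeting` still has to be meaningful, or the coverage it generates proves nothing.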
So I guess what I'm saying is even with all those precautions and thought, a tool like this is extremely useful to tell you "uh, hey, this method you think exercises all your code isn't actually asserting anything meaningful about your code's behavior". More of this, please!
Yes, writing tests is easy; however, writing rigorous tests ... that seems like one of those things where you can't really trust yourself not to say, 'Eh, it's rigorous enough.'