First off, I do put more than 1 assertion in a test. But doing that definitely leads to situations where you have to investigate why a test failed, instead of it just being obvious. Like the article, I test 1 thing per test, but sometimes that means multiple assertions about the outcome.
IMO there's no point in checking that you got a response in 1 test, and then checking the content/result of that response in another test. The useful portion of that test is the response bit.
IMO, the opposite also has to be considered. I've briefly worked with some code bases that absolutely did 1 assert per test. Essentially you'd have a helper method like "doCreateFooWithoutBarAttribute" and 3-4 tests around it - "check that the response code is 400", "check that the error message exists", and so on. A single change easily caused 4-5 tests to fail at once, for example because the POST now returned a 404, and the 404 response also didn't contain the error message, and so on.
This also wasted time, because you always had to look at all the failing tests and eventually realize that they failed from the same root cause. And sure, you can use test dependencies if your framework has that and do all manner of things... or you just put the asserts in the same test with good messages.
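To make the contrast concrete, here's a minimal sketch using Python's unittest; `create_foo_without_bar()` is a made-up stand-in for the real HTTP helper, and the hard-coded 404 simulates the regression described above:

```python
import unittest

def create_foo_without_bar():
    """Stand-in for a real POST /foo with the 'bar' attribute omitted.
    Returns (status_code, body); the 404 simulates the regression."""
    return 404, {}

# One-assert-per-test style: the single regression above fails all three.
class CreateFooOneAssertPerTest(unittest.TestCase):
    def test_returns_400(self):
        status, _ = create_foo_without_bar()
        self.assertEqual(status, 400, "bad status code")

    def test_has_error_message(self):
        _, body = create_foo_without_bar()
        self.assertIn("error", body, "missing error message")

    def test_error_mentions_bar(self):
        _, body = create_foo_without_bar()
        self.assertIn("bar", body.get("error", ""), "error does not name the missing attribute")

# Same checks in one test with descriptive messages: one failure to triage,
# and the summary still tells you exactly which assertion tripped.
class CreateFooCombined(unittest.TestCase):
    def test_create_foo_without_bar_is_rejected(self):
        status, body = create_foo_without_bar()
        self.assertEqual(status, 400, "bad status code")
        self.assertIn("error", body, "missing error message")
        self.assertIn("bar", body.get("error", ""), "error does not name the missing attribute")

if __name__ == "__main__":
    unittest.main()
```

With the combined version you get one failure whose message points straight at the broken check, instead of three or four failures that all trace back to the same root cause.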
Even with multiple assertions the failure reason should be quite clear, as most testing frameworks allow you to specify a message which is then output in the test summary.
E.g. `assertEqual(actual_return_code, 200, "bad status code")` should lead to output like `FAILED: test_when_delete_user_then_ok (bad status code, expected 200 got 404)`
Note that the output includes the message you put in the assert, which makes the failing assertion almost always uniquely identifiable within the test.
That's the bare minimum I'd expect of a testing framework - if it can't do that, then what's the point of having it? It's probably better to just write your own executable and throw exceptions in conditionals.
What I expect from a testing framework is at least that, i.e. that it also identifies the file and the line containing the failing assertion.
If your testing framework doesn't do that, then again, what's even the point of using it? Throwing an exception or calling the language's built-in assert() on a conditional will likely give you at least the file and line.
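For instance, even Python's built-in assert gets you that much. A toy sketch, where `delete_user()` is a made-up stand-in that returns the wrong status code; the resulting AssertionError traceback names the file and line:

```python
def delete_user(user_id):
    # Made-up stand-in for a real API call; pretend it returns the wrong code.
    return 404

status = delete_user(42)
# On failure this raises AssertionError with the message below, and the
# traceback points at this exact file and line.
assert status == 200, f"bad status code, expected 200 got {status}"
```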
Maybe it’s different in other languages, but in JS and .NET the failing assertion is reported and that’s the one you investigate. You wouldn’t ever have a situation that isn’t obvious.
If an assertion says “expected count to be 5 but got 4”, you wouldn’t be looking at the not-null check assertion, confused about why it isn’t null…
> IMO there's no point in checking that you got a response in 1 test, and then checking the content/result of that response in another test. The useful portion of that test is the response bit.
If I understood this part correctly, you are making the dangerous assumption that your tests will run in a particular order.
No, I definitely am not making that assumption. With a bad response body but a good response code, 1 test would fail and the other would succeed, no matter the order. I just don't think that a valid response code is a useful test on its own. It's much better to have both assertions in the same test, unless you have some reason to think a response-code failure would signify something special on its own.