
>> this is the smallest thing I _can_ test usefully

> Then you're testing useless things.

We'll have to agree to disagree then.

> Testing DB access layer and service layer separately (as units are often defined)

Not at all. For me, a unit is a small part of a layer; one method. Testing the various parts in one system/layer is another type of test. Testing that different systems work together is yet another.

I tend to think in terms of the following:

- Unit test = my code works

- Functional test = my design works

- Integration test = my code is using your 3rd party stuff correctly (databases, etc)

- Factory Acceptance Test = my system works

- Site Acceptance Test = your code sucks, this totally isn't what I asked for!?!

The "my code works" part is the smallest piece possible. Think "the sorting function" of a library that can return it's results sorted in a specific order.




And the only actually useful tests are functional (depending on how you write them) and above.

If those fail, it means that neither your design nor your code works.

The vast majority of unit tests are meaningless because you just repeat them again in the higher-level tests.


That seems like a silly opinion to me. I use unit tests to make sure that individual units work like I expect them to. And I use them to test edge cases that can be tested separately from their caller. If I had to test all the use cases for each function, all combined together, the number of tests would grow by the multiplication of the partitions of each one (N x M x O x P, ...) rather than the sum, plus a much smaller set of tests for how they work together (N + M + O + P + N_M + M_O + O_P, etc.). It's much simpler to thoroughly test each unit. Then test how they work together.
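To put rough numbers on it: if four functions each have five input partitions, exercising every combination through the top-level caller is 5 x 5 x 5 x 5 = 625 cases, while testing each unit on its own is 5 + 5 + 5 + 5 = 20 cases plus a few seam tests. A minimal sketch of that split, with hypothetical parse/discount helpers (not anyone's real API):

    # each unit gets its own edge-case tests...
    def parse_amount(raw):
        return float(raw)  # partitions: int-like, float-like, junk, empty, negative

    def apply_discount(amount, code):
        return amount * 0.5 if code == "HALFOFF" else amount  # partitions: valid, unknown, none

    def test_parse_amount_rejects_junk():
        try:
            parse_amount("not a number")
            assert False, "expected ValueError"
        except ValueError:
            pass

    # ...plus one seam test that they compose, instead of re-running
    # every parse partition through apply_discount
    def test_discount_applies_to_parsed_amount():
        assert apply_discount(parse_amount("100"), "HALFOFF") == 50.0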


> If I had to test all the use cases for each function, all combined together, the number of tests would grow by the multiplication of the partitions of each one

Why would they? Do these edge cases not appear when the caller is invoked? And don't you have to test those edge cases and that behavior when the caller is invoked anyway?

As an example: you tested that your db layer doesn't fail when getting certain data and returns response X (or throws exception Y). But your service layer has no idea what to do with this, and so simply fails or falls back to some generic handler.

Does this represent how the app should behave? No. You have to write a functional or an integration test for that exact same data to test that the response is correct. So why write the same thing twice (or more)?

You can see this with Twitter: the backend always returns a proper error description for any situation (e.g. "File too large", or "Video aspect ratio is incorrect"). However, all you see is "Something went wrong, try again later".
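A sketch of that failure mode (hypothetical names; the point is that the unit test on the lower layer passes while the observable behavior is still wrong):

    # db layer: unit-tested to raise a specific, meaningful error
    class RecordTooLargeError(Exception):
        pass

    def save_record(record):
        if len(record) > 1_000_000:
            raise RecordTooLargeError("record exceeds size limit")
        ...  # write to the database

    # service layer: a generic handler swallows it, so the carefully
    # tested db-layer behavior never reaches the user
    def handle_upload(record):
        try:
            save_record(record)
            return {"status": "ok"}
        except Exception:
            return {"status": "error", "message": "Something went wrong, try again later"}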

> It's much simpler to thoroughly test each unit. Then test how they work together.

Me, telling you: test how they work together, unit tests are usually useless

You: no, this increases the number of tests. Instead, you have to... write at least double the number of tests: first for the units, and then test the exact same scenarios for the combination of units.

----

Edit: what I'm writing is especially true for typical microservices. It's harder for monoliths, GUI apps etc. But even there: if you write a test for a unit, but then need to write the exact same test for the exact same scenarios to test a combination of units, then those unit tests are useless.


Unit one - returns a useful error for each type of error condition that can occur (N). Test that, for each type of error condition. One test per error condition.

Unit two - calls unit one - test that, if unit one returns an error, it is treated appropriately. One test covers all error conditions, because they're all returned the same way from unit one.

Unit three - same idea as unit one

If you were to test the behavior of unit one _through_ units 2 and 3, you'd need 2*N tests. If you were to test the behavior of unit one separately, you'd need N+2 tests.
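Roughly, in code (hypothetical units; the point is that every error leaves unit one the same way, so the caller needs only one test for it):

    # unit one: N error conditions, all surfaced as the same error type
    class FetchError(Exception):
        pass

    def fetch_user(user_id):
        if user_id is None:
            raise FetchError("missing id")       # condition 1
        if not isinstance(user_id, int):
            raise FetchError("malformed id")     # condition 2
        if user_id < 0:
            raise FetchError("unknown user")     # condition 3
        return {"id": user_id}

    # ...N tests against fetch_user, one per condition (elided)

    # unit two: one test suffices, because every FetchError reaches it the same way
    def show_profile(user_id):
        try:
            return fetch_user(user_id)
        except FetchError:
            return {"error": "user unavailable"}

    def test_show_profile_handles_any_fetch_error():
        assert show_profile(None) == {"error": "user unavailable"}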

You're missing the point that you don't need to test "the exact same scenarios for the combination of units", because the partitions of <inputs to outputs> are not the same as the partitions for <outputs>. And for each unit, you only need to test how it handles the partitions of <outputs> for the items it calls; not that of <inputs to outputs>.


> If you were to test the behavior of unit one _through_ units 2 and 3, you'd need 2*N tests.

There are only two possible responses to that:

1. No, there are not 2*N tests, because unit 3 does not cover, or need, all of the behavior and cases that flow through those units. In that case, unit testing those unneeded behaviors is unnecessary.

2. Unit 3 actually goes through all those 2*N cases. So, by not testing them at the unit 3 level you have no idea that the system behaves as needed. Literally this https://twitter.com/ThePracticalDev/status/68767208615275315...

> You're missing the point that you don't need to test "the exact same scenarios for the combination of units", because the partitions of <inputs to outputs>

This makes no sense at all. Yes, you've tested those "inputs/outputs" in isolation. Now, what tests the flow of data? That unit 1 outputs data required by unit 2? That unit 3 outputs data that is correctly propagated by unit 2 back to unit 1?

Once you start testing the actual flow... all your unit tests are immediately entirely unnecessary because you need to test all the same cases, and edge cases to ensure that everything fits together correctly.

So, where I would write a single functional test (and/or, hopefully, an integration test) that shows me how my system actually behaves, you will have multiple tests for each unit, and on top of that you will still need at least a functional test for the same scenarios.


> Once you start testing the actual flow... all your unit tests are immediately entirely unnecessary because you need to test all the same cases, and edge cases to ensure that everything fits together correctly.

You don't, but it's clear that I am unable to explain why to you. I apologize for not being better able to express what I mean.


> You don't

If you don't, then you have no idea if your units fit together properly :)

I've been bitten by this when developing microservices. And as I said in an edit above, it becomes less clear what to test in more monolithic apps and in GUIs, but in general the idea still holds.

Imagine a typical simple microservice. It will have many units working together:

- the controller that accepts an HTTP request

- the service layer that orchestrates data retrieved from various sources

- the wrappers for various external services that let you get data with a single method call

- a db wrapper that also lets you get necessary data with one method call

So you write extensive unit tests for your DB wrapper. You test every single edge case you can think of: invalid calls, incomplete data, etc.

Then you write extensive unit tests for your service layer. You test every single edge case you can think of: invalid calls, external services returning invalid data, etc.

Then you write extensive unit tests for your controller. Repeat above.

So now you have three layers of extensive tests, and that's just unit tests.

You'll find that most (if not all) of those are unnecessary for one simple reason: you never tested how they actually behave. That is, how they behave when the microservice is invoked with a real HTTP request.

And this is where it turns out that:

- those edge cases you so thoroughly tested for the DB layer? Unnecessary, because invalid and incomplete data is actually handled at the controller or service layer

- or errors raised or returned by the service wrappers or the db layer either don't get propagated up, or are handled by a generic catch-all, so the call returns something nonsensical like `HTTP 200: {error: "Server error"}`

- or those edge cases do exist, but since you tested them only in isolation and never tested the whole flow, the service just fails with an HTTP 500 error on invalid invocation

Or, instead, you can just write a single suite of functional tests that tests all of that for the actual controller<->service<->wrappers flow, covering the exact same scenarios.
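As a sketch of what that single suite might look like (assuming a Flask-style app here; create_app and the /uploads route are hypothetical names for this service's own code):

    from myservice import create_app  # hypothetical app factory for the service under test

    def test_rejects_oversized_upload_with_a_useful_error():
        client = create_app().test_client()
        resp = client.post("/uploads", json={"payload": "x" * 10_000_000})
        # the whole controller<->service<->wrappers flow is exercised:
        # no generic catch-all, no HTTP 200 wrapping an error body
        assert resp.status_code == 413
        assert resp.get_json()["error"] == "File too large"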



