
I've been writing tests for a long time, and I went through a phase where I depended heavily on DI. While it's still useful in some situations, I rarely make use of it now.

First, DI is simply the least subtle form of IoC available. When all else fails, you can always rely on DI. But this is something languages should be looking to help developers with. Elixir, for example, supports IoC as part of the language, and it's much more elegant than DI [1]. When monkey patching is available, that's also a possible solution (easy to abuse, yes, but suitable in simple cases). Functions as first-class values can also help: instead of passing a clock around, you can have a SystemClock.Time func that you override in your tests, similar to what Date.now() is in JavaScript.
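
To make the clock idea concrete, here's a rough Python sketch (SystemClock and make_token are made-up names):

    import time

    class SystemClock:
        # First-class function value; production code never touches this.
        time = time.time

    def make_token(user_id):
        # All time reads go through the overridable function.
        return "%s:%d" % (user_id, int(SystemClock.time()))

    def test_make_token():
        SystemClock.time = lambda: 1500000000  # frozen clock for the test
        try:
            assert make_token("bob") == "bob:1500000000"
        finally:
            SystemClock.time = time.time  # restore the real clock

No constructor parameter, no container, and the production code path is untouched.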

But, perhaps more importantly, and as much as I tried to deny it, integration tests are absolutely worth the trouble. If you unit test two parts independently, you run the very real risk of having them both pass under test but fail once deployed together. I've seen more production downtime caused by incorrect assumptions between services than by anything else.

Also, lately I've been writing more and more fuzz tests. I'm probably not very good at it yet, but for the couple of projects where we've done it, I think it's been a worthwhile effort (more so when we started... they barely catch anything now, since we're all coding much more defensively).
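
For what it's worth, the crudest version of what we do looks something like this (a hand-rolled Python sketch, not a real fuzzing framework):

    import json
    import random
    import string

    def fuzz_parse(parse, iterations=10000):
        # Throw random garbage at a parser and check it never fails with
        # anything other than its documented error type.
        rng = random.Random(42)  # seeded, so failures are reproducible
        for _ in range(iterations):
            s = "".join(rng.choice(string.printable)
                        for _ in range(rng.randint(0, 200)))
            try:
                parse(s)
            except ValueError:
                pass  # documented failure mode; anything else is a bug

    fuzz_parse(json.loads)  # json.JSONDecodeError is a ValueError subclass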

[1] http://openmymind.net/Dependency-Injection-In-Elixir/


Nobody denies integration tests are good. You just don't want your unit tests to be integration tests.

Unit tests are meant to be fast and to make it quick to isolate the failing code.


If you make your so-called 'unit' tests too closely coupled to the implementation, you don't actually work faster, because you constantly have to refactor them to match the internal implementation.

Your testing feedback might be quicker, but quickly getting useless information (i.e. that your implementation changed) is just as bad as slower tests, because you need to fix the test, recompile, and run again before you get useful information.

Also, not all 'integration tests' are slow. If you can test using SQLite in memory instead of pointing at a Postgres instance that needs to be provisioned, you can get a lot of your end-to-end testing running really quickly.
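
e.g. in Python the standard library is all you need (the schema here is made up):

    import sqlite3
    import unittest

    class UserStoreTest(unittest.TestCase):
        def setUp(self):
            # ":memory:" gives a fresh throwaway database per test:
            # nothing to provision, nothing to clean up, and it's fast.
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

        def test_insert_and_query(self):
            self.db.execute("INSERT INTO users (name) VALUES (?)", ("bob",))
            row = self.db.execute("SELECT name FROM users").fetchone()
            self.assertEqual(row[0], "bob")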


I agree. Unit tests can definitely slow you down. Testing against high-level interfaces, or having full integration tests, is better. Then a failure is much more likely to be a genuine failure rather than a legitimate change in implementation.

If you have good logging and debugging tools, the failure can be quite quick to pin down. James Coplien suggests liberal use of assertions in code, which is a really good idea and something C/C++ programmers used to do a lot. Assertions can then be as effective as unit tests in terms of localising failure.
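
Something like this, to sketch the idea in Python (the payment example is invented):

    def apply_payment(balance, amount):
        # Preconditions fail loudly at the call site, not three layers
        # deeper with a confusing symptom.
        assert amount > 0, "payment amount must be positive: %r" % amount
        assert balance >= amount, "insufficient balance: %r < %r" % (balance, amount)
        new_balance = balance - amount
        # Postcondition: documents the invariant and localises the failure.
        assert new_balance >= 0
        return new_balance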


Try CREATE UNLOGGED TABLE with Postgres. The diversity of RDBMSes and SQL dialects defeats the point of integration testing the persistence layer with SQLite.
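
i.e. something like this (Python/psycopg2 sketch, with a made-up test database):

    import psycopg2  # assumes a throwaway local Postgres for the test run

    conn = psycopg2.connect("dbname=test_db")
    with conn.cursor() as cur:
        # UNLOGGED skips the write-ahead log: a big speedup, at the cost
        # of losing the data on a crash, which is exactly what you want
        # for throwaway test data.
        cur.execute("CREATE UNLOGGED TABLE users (id serial PRIMARY KEY, name text)")
    conn.commit()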


Yes, but a large portion of the code that needs testing is simple inserts, queries, and joins, and SQLite can handle those without any problems.

You can push complex or db-specific work to slower test groups, but not everything requires it.


Even this is less clear: for some DBs it's normal to use a sequence for rowids (vs. some sort of autoincrement), and I don't think SQLite offers the same behaviour there.
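
e.g. (a Python sketch of the mismatch; the sequence name is made up):

    import sqlite3

    db = sqlite3.connect(":memory:")

    # Code written against a Postgres sequence works in production but
    # can't be exercised against SQLite at all:
    try:
        db.execute("SELECT nextval('users_id_seq')")
    except sqlite3.OperationalError as e:
        print(e)  # no such function: nextval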


Any good resources on fuzzing out there? I know the concept, but I could really use some best-practice guidelines and some examples.


Yes, integration tests are worth it, but to take your two-part example, testing only that those parts talk correctly together should be enough. You shouldn't need an end-to-end test just to cover the integration.