Logic errors are much trickier than type errors, so I focus tests on parts with tricky logic.
Long version:
I really like the BDD approach of thinking about tests as executable specs. Before I write any code, I need to think through how exactly I want my code to behave. While I'm doing this, I might as well capture it in a list of (empty) specs, so that I and others have a clear and concise summary of the code's declared behavior.
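As a sketch of what that capture step can look like in RSpec (the Invoice behaviors below are made up for illustration): a bodiless `it` is reported as pending, so the empty spec list doubles as that summary.

    # invoice_spec.rb -- hypothetical "empty specs first" example.
    # Examples with no body are reported by RSpec as pending, so this file
    # is an executable outline of the behavior I intend to implement.
    require "rspec"

    RSpec.describe "Invoice#total" do
      it "sums the line items"
      it "applies the customer's discount"
      it "is zero for an invoice with no line items"
    end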
A lot of web app code is fairly trivial / non-mission-critical, so I don't spend too much time testing it. Instead I focus my effort on the parts where it really matters (e.g. a permissions system that could expose private user information or lead to data loss if a bad code tweak goes unnoticed).
Some things I couldn't imagine writing without tests. Right now, I'm building a somewhat complex grammar using a parser generator library, and the only way to stay sane in this process is by building a test suite that alerts me whenever a subtle logic mistake in one of my rules breaks something somewhere else.
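For a flavor of what those tests look like, here is a rough sketch; the parse function, the AST shape, and ParseError are hypothetical stand-ins for whatever the parser generator actually produces.

    # arithmetic_grammar_spec.rb -- hypothetical grammar regression tests.
    # Each example pins down one rule, so a tweak that silently changes,
    # say, operator precedence shows up immediately as a failing spec.
    require "rspec"

    RSpec.describe "arithmetic grammar" do
      it "parses simple addition" do
        expect(parse("1 + 2")).to eq([:add, 1, 2])
      end

      it "gives multiplication higher precedence than addition" do
        expect(parse("1 + 2 * 3")).to eq([:add, 1, [:mul, 2, 3]])
      end

      it "rejects a dangling operator" do
        expect { parse("1 +") }.to raise_error(ParseError)
      end
    end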
"During their operations they seem to focus entirely on the process, but very little on the quality of the code. Sorry guys, but having a 1:4 code:test ratio is not focusing on code quality. It’s focusing on test quality."
I don't think it sounds healthy. But maybe writing tests is primarily a way to kill time? Like when I am not fully awake, I may not be able to concentrate hard enough to write some real code, but test code is less difficult.
I've found that Test Driven Development is particularly beneficial when tired. The test suite reduces the size of the program that I need to keep in short term memory, and the quick test-solve-refactor cycle keeps me focused. Later, when more awake, I review what I did by quickly scanning the test suite and looking for any obvious omissions.
The above isn't necessarily an argument for TDD, but I think you might find it useful to try to view building a test suite and building your codebase as complementary activities rather than mutually exclusive ones. If you use them together, the entire process becomes less difficult, and the code in your test suite is just as valuable as code anywhere else.
I think it would really have to depend on what the tests were doing. If the tests were just goldbricking code, testing random crap, then yeah, 1:4 C:T is crazy. If you really are validating functionality and business logic, is 1:4 C:T so crazy? I would feel a little more fearless about changing a system if the tests were that comprehensive.
One of the most important things I focus on is how easy it is to modify the original app once you get user feedback that indicates some of your original assumptions are wrong--that is, you need new features or to change the way implemented features work.
So I am very interested in how the test suite will help or hurt this. Will having the tests allow quick reworking with confidence? Or will changing the features break the tests so badly that the tests are effectively broken, causing more work rather than less?
First of all, it doesn't make much sense to test everything: every single method, every single LOC, etc. Your time is valuable enough that you should focus on the things that are critical to the application, or pretty complicated. Testing whether 1.equal?(1) is neither.
As you noticed, tests can limit your ability to incorporate changes quickly. I believe that the more tests you have, the less free you are to modify the code without risking breaking the test suite. It's about finding the right equilibrium.
In the case of Rails, I fanatically follow the Fat Model approach, so most of the hard code lives in the models. Since models change less often than the views or the controllers, their tests tend to live longer, delivering a better return on investment.
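A toy illustration of why that pays off (the Order model below is made up, and sketched as a plain Ruby object so the spec runs without Rails or a database): once the pricing rule lives in the model rather than a view or controller, a plain model spec covers it directly.

    # A hypothetical "fat model" method: in the real app this would sit on
    # the ActiveRecord model, but the logic itself needs no Rails at all.
    class Order
      attr_reader :line_items

      def initialize(line_items)
        @line_items = line_items
      end

      def total
        subtotal = line_items.sum { |item| item.price * item.quantity }
        discount_eligible? ? subtotal * 0.9 : subtotal
      end

      def discount_eligible?
        line_items.size >= 10
      end
    end

    LineItem = Struct.new(:price, :quantity)

    require "rspec"

    RSpec.describe Order do
      it "applies a 10% discount to orders of ten or more items" do
        order = Order.new(Array.new(10) { LineItem.new(5.0, 1) })
        expect(order.total).to eq(45.0)
      end

      it "charges full price for smaller orders" do
        order = Order.new([LineItem.new(5.0, 2)])
        expect(order.total).to eq(10.0)
      end
    end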
Personally, I believe that writing tests is often a very good excuse for not writing the real code, or improving the existing code, especially in the case of web applications, which are technically mostly trivial.
I often find that the line not covered by any tests is the line with 2 bugs in it. While I rarely get 100% test coverage, it is something I strive for. A line of code that never gets executed is useless.
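If you want to know which lines those are in a Ruby project, SimpleCov is one common way to measure it; the setup below is the usual pattern, with the entry point path being a placeholder for your own.

    # spec/spec_helper.rb
    # SimpleCov has to start before the application code is required, or
    # those requires won't be tracked. The report ends up in coverage/.
    require "simplecov"
    SimpleCov.start do
      add_filter "/spec/"   # don't count the spec files themselves
    end

    require_relative "../lib/my_app"  # placeholder for your entry point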
It hurts in that the tests add inertia to your codebase. Radical changes may require you to cut out portions of tests, for better or worse.
It helps in that, by writing tests first, your code ends up more testable, and thus less tightly coupled to the rest of your code and easier to change without trashing the whole system. This is the real reason to do TDD: it forces you into doing good design. This is less painful than you might expect in languages that are not Java.
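A small made-up example of the kind of decoupling that tends to fall out of test-first: because the collaborator is passed in rather than hard-coded, the spec can hand in a fake, and the class stays easy to change.

    # Hypothetical example: SignupGreeter doesn't care how messages are
    # delivered, so swapping the notifier later won't trash this class.
    class SignupGreeter
      def initialize(notifier)
        @notifier = notifier
      end

      def greet(user_email)
        @notifier.deliver(to: user_email, body: "Welcome aboard!")
      end
    end

    require "rspec"

    RSpec.describe SignupGreeter do
      it "sends a welcome message to the new user" do
        notifier = double("notifier")
        expect(notifier).to receive(:deliver)
          .with(to: "ada@example.com", body: "Welcome aboard!")
        SignupGreeter.new(notifier).greet("ada@example.com")
      end
    end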
Conversations like this without being able to see the code have about as much relevance as talking about total lines of code as it relates to the average carrying capacity of a swallow.
The spreadsheet mentioned is hands-down the most exciting program I've run into in recent months. They've built a usable, reliable, powerful system around a novel concept. The whole thing just works.
Looking at the program, you wouldn't guess it's only 30,000 lines of application code. They've obviously made all their code count, kept it short and maintainable. 110,000 lines of test code is a small price to pay.
[OK. correlation != causation, they might have made an even better program with less test code. Intuitively, I doubt it]
Actually, it's a little strange that they would do this for a spreadsheet program. Spreadsheets are, by nature, functional, and would lend themselves to approaches like QuickCheck.
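For the unfamiliar: QuickCheck-style testing asserts a law over many randomly generated inputs instead of a few hand-picked cases. Here is a hand-rolled Ruby sketch of the idea, with a toy SUM function standing in for a real spreadsheet engine.

    # property_sum_spec.rb -- a hand-rolled, QuickCheck-flavored property
    # test: generate random cell values and assert a law SUM must satisfy.
    require "rspec"

    def spreadsheet_sum(cells)
      cells.reduce(0) { |acc, value| acc + value }
    end

    RSpec.describe "SUM properties" do
      it "doesn't care about the order of its inputs" do
        100.times do
          cells = Array.new(rand(0..20)) { rand(-1_000..1_000) }
          expect(spreadsheet_sum(cells.shuffle)).to eq(spreadsheet_sum(cells))
        end
      end
    end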
My tests are about 4 times more verbose in comments than my code. Why? Because I do random crazy things that don't make sense in order to do negative testing.
Evidence of anecdotal evidence that really doesn't mean much?
... lack of sense for a good cost-benefit ratio?
Given that test code also needs to be maintained, blowing up the whole code base like that will probably make maintenance more expensive than needed.
I'd rather acknowledge a weakness and account for it moving forward than ignore it and have everything explode when it compounds a problem down the line.