The trouble is that the poll doesn't have a middle ground between "all functionality" and "a few critical things".
A full run of our test suite literally takes months on a cluster of hundreds of CPUs (obviously, there are also faster versions of the tests which are run frequently). While I have a long list of additional test coverage that I would like to add, what we test is much closer to "all functionality" than it is to "a few critical things".
Agreed. Missing the option of "Most functionality", or "All functionality within reason". Without that option anything I select would be misleading.
I think it's safe to assume that anyone who selected "All functionality" actually means "Most functionality". Also I think we can assume that a good proportion of people who selected "A few critical" would belong in the "Most" bucket.
Well, there's `all` and there's virtually all. All is 100% branch and statement coverage, and is a big waste of time.
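For anyone unclear on the distinction, here's a minimal Python sketch (the `clamp_non_negative` function is just made up for illustration): a single test can execute every statement while still leaving a branch untested.

```python
# Hypothetical function, used only to show the statement/branch coverage gap.
def clamp_non_negative(x):
    if x < 0:
        x = 0
    return x

# This one call executes every statement (100% statement coverage) because the
# condition is true, but the "condition false" branch is never taken.
assert clamp_non_negative(-5) == 0

# Full branch coverage needs a second case where the condition is false.
assert clamp_non_negative(3) == 3
```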
When I say my codebase has 'all' functionality tested, I mean we don't commit code without tests included. I think that's a pretty reasonable definition.
That's not entirely fair - you haven't tested the branches or statements inside the sin function.
However, even if the sin function is already tested elsewhere, you will still need further testing to ensure that you are calling it correctly (e.g. not confusing degrees and radians).
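Something like this hypothetical sketch (`heading_offset` is invented for the example) is the kind of caller-side test I mean: `math.sin` itself is presumably fine, but a test of your own code still catches the units mix-up.

```python
import math
import unittest

# Hypothetical function under test: the bug is passing degrees straight to
# math.sin, which expects radians.
def heading_offset(angle_degrees):
    return math.sin(angle_degrees)  # should be math.sin(math.radians(angle_degrees))

class TestHeadingOffset(unittest.TestCase):
    def test_ninety_degrees(self):
        # sin(90 degrees) should be 1.0; the buggy version returns
        # sin(90 radians), roughly 0.894, so this test fails and exposes the bug.
        self.assertAlmostEqual(heading_offset(90), 1.0, places=6)

if __name__ == "__main__":
    unittest.main()
```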
EDIT: Yes, I read it wrong - clearly need coffee...
The original poster gave an implementation of sin(), not a unit test. That implementation has no branches in the source and, for any decent compiler, will not have any branches on the machine, either.
Yep, you're absolutely right. You can get 100% test coverage when you define it as "percentage of code executed when tests are run".
That said... that kind of coverage isn't quite as useless as it might seem. If your tests do execute every line, even in a completely contrived way, you will catch a lot if you change your code. You just tend to catch more of the "wrong number of arguments passed to a method" kind of error than "you are allowing the autopilot to try to land the plane 100 feet below the runway" kind of error ;)
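A contrived Python sketch of what I mean (the autopilot-flavoured `descent_allowed` function is made up): the test below gives 100% line coverage and would fail loudly if the function's signature changed, but its assertion is far too weak to catch the actual logic bug.

```python
import unittest

# Hypothetical descent check with a deliberate logic bug: it happily allows
# descending up to 100 feet below the runway.
def descent_allowed(altitude_feet, runway_elevation_feet):
    margin = altitude_feet - runway_elevation_feet
    return margin > -100  # bug: should be margin >= 0

class TestDescentAllowed(unittest.TestCase):
    def test_every_line_executed(self):
        # Executes every line, so a changed parameter list would blow up here,
        # but the assertion says nothing about the safety logic itself.
        self.assertIsInstance(descent_allowed(5000, 0), bool)

if __name__ == "__main__":
    unittest.main()
```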
Careful though with tests that literally execute every line of code: you tie your tests to your implementation. That makes even the slightest refactoring difficult. Better to have unit tests that only care about the functional interface.
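A quick illustration of that, using a made-up `TaskQueue` class: the first test only exercises the public interface, so the internal list could be swapped for `collections.deque` without touching it, while the commented-out one is welded to the implementation.

```python
import unittest

# Hypothetical queue; the internal list is an implementation detail.
class TaskQueue:
    def __init__(self):
        self._items = []

    def push(self, task):
        self._items.append(task)

    def pop(self):
        return self._items.pop(0)

class TestTaskQueue(unittest.TestCase):
    def test_fifo_order(self):
        # Interface-level test: only observable push/pop behaviour is asserted.
        q = TaskQueue()
        q.push("a")
        q.push("b")
        self.assertEqual(q.pop(), "a")
        self.assertEqual(q.pop(), "b")

    # An implementation-coupled test like this breaks the moment you refactor:
    # def test_internal_list(self):
    #     q = TaskQueue()
    #     q.push("a")
    #     self.assertEqual(q._items, ["a"])

if __name__ == "__main__":
    unittest.main()
```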
Depends how you define functionality, I guess. If you are talking about high-level user functions (create a new user, modify user, delete user), then lots of organisations probably do have tests for all functions.
However, if you consider functions on the code level (e.g. Java methods) then organisations with 100% coverage will be thin on the ground. If you go further and consider line coverage, almost nobody will have 100% coverage.
A common problem with organisations claiming to test all functions is that they only test the happy path - there are few tests for unexpected or illegal input and the like.
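As a rough illustration (the `parse_age` function is invented for the example): the first test is the happy path that usually gets written; the other two are the unexpected-input cases that tend to be missing.

```python
import unittest

# Hypothetical input parser, used only to contrast happy-path and error-path tests.
def parse_age(text):
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

class TestParseAge(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_age("42"), 42)

    # The tests that are often missing: unexpected or illegal input.
    def test_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_age("forty-two")

    def test_out_of_range_age(self):
        with self.assertRaises(ValueError):
            parse_age("-1")

if __name__ == "__main__":
    unittest.main()
```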
Article about the group that writes the space shuttle software - sort of relevant? http://www.fastcompany.com/magazine/06/writestuff.html