On Friday, March 14, 2014 5:38:20 PM UTC-4, Herman Peeren wrote:
Private methods should not be unit-tested. Code quality would improve if we delete those tests.
It sounds weird, and I understand it feels like a waste to delete work you've done and lower the code coverage, but I'll explain. Testing is not an improvement of code robustness in itself. It is done to test whether the results are the same when you change the code.
Actually, that is only a third of it. First, there is unit testing to see if the expected result is achieved - what I like to think of as "sanity checks".
A method which performs addition is "sane" if given 2+2 it responds with 4. It is insane if it responds with 5.
The problem with such testing is that it is tedious to set up, especially if you get caught in the trap of trying to test every reasonable combination (i.e. do you test 1+1, 2+2, 3+3, 4+4, etc.? How many do you test before you feel complete?).
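To make the "how many combinations?" point concrete, here is a minimal sketch of a sanity check (Calculator is a made-up class). Rather than enumerating 1+1, 2+2, 3+3 and so on, a handful of representative and boundary cases in one table usually does the job; in a PHPUnit test class that table would become a @dataProvider method.

```php
<?php
// Hypothetical Calculator class for a sanity check.
class Calculator
{
    public function add($a, $b)
    {
        return $a + $b;
    }
}

// One row per case: operand A, operand B, expected result.
// In PHPUnit this table would be returned by a @dataProvider method.
$cases = array(
    array(2, 2, 4),                     // the "sane" example from above
    array(0, 0, 0),                     // identity
    array(-1, 1, 0),                    // mixed signs
    array(PHP_INT_MAX, 0, PHP_INT_MAX), // boundary value
);

$calc = new Calculator;
foreach ($cases as $case) {
    list($a, $b, $expected) = $case;
    assert($calc->add($a, $b) === $expected);
}
```

The table gives you a single place to stop and ask "are these cases representative?" instead of writing one test method per combination.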
Personally, I don't like TDD; I write code and then write tests. So I'm not a big fan of these tests.
Secondly, there are spec/documentation tests. For a lot of code, it's not always clear just what you expect some bit of code to return. A good example of this is the Registry class. I was working on extending the class with some additional functionality for a specific purpose. Here I /need/ to know what some of these private methods are expected to return so I can decide whether to use/extend them or not. So as I went through the documentation, I'd add a unit test to make sure the code matched the documentation - and fixed the documentation when it did not (and then submitted pull requests).
Basically, whenever you are adding documentation and you write "to use X, do this", you should write a unit test for that and provide linkage to the documentation. Otherwise you end up with docs that don't get updated when the code is updated.
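A sketch of what that linkage can look like (Config is a hypothetical stand-in for something like Registry, and the @link URL is made up): the docblock makes a promise, the test pins the promise down, and the @link tag points back at the documentation page it verifies.

```php
<?php
// Hypothetical Config class whose documentation promises a fallback value.
class Config
{
    private $data = array('debug' => true);

    /**
     * Get a value, or $default when the key does not exist.
     */
    public function get($key, $default = null)
    {
        return isset($this->data[$key]) ? $this->data[$key] : $default;
    }
}

/**
 * Doc test for: "to read a setting with a fallback, call $config->get('x', $fallback)".
 *
 * @link http://example.org/docs/config#get  (hypothetical documentation URL)
 */
function docTestGetReturnsDefaultForMissingKey()
{
    $config = new Config;
    assert($config->get('debug') === true);
    assert($config->get('missing', 'fallback') === 'fallback');
}

docTestGetReturnsDefaultForMissingKey();
```

When someone changes get() so it no longer honors the default, this test fails, and the @link tells them exactly which documentation page needs updating too.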
I personally keep my own repository of framework-related unit tests when I am dealing with a section of code that is new to me. E.g. if my code depends on Log throwing a RuntimeException when a message cannot be saved (so I can suppress it, since it's common when the file system is not readable), my code breaks if that exception changes to something else. So I'll keep that unit test in any log-related plugin I write, so if the framework changes, Travis can tell me.
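The kind of contract-pinning test described above can be sketched like this. FileLogger here is a hypothetical stand-in for the real Log class, not the framework's actual API; the point is only the shape of the test: provoke the failure, and assert the exception class you depend on.

```php
<?php
// Hypothetical stand-in for a framework logger that throws RuntimeException
// when the log file cannot be written.
class FileLogger
{
    private $path;

    public function __construct($path)
    {
        $this->path = $path;
    }

    public function add($message)
    {
        // @ suppresses the PHP warning; we only care about the failure itself.
        if (@file_put_contents($this->path, $message . PHP_EOL, FILE_APPEND) === false) {
            throw new RuntimeException('Cannot write log file: ' . $this->path);
        }
    }
}

// Pin the contract: an unwritable path must yield a RuntimeException.
// If the framework ever throws something else, this fails on Travis.
$logger = new FileLogger('/nonexistent-dir/app.log');
$caught = false;
try {
    $logger->add('hello');
} catch (RuntimeException $e) {
    $caught = true; // the exact exception class my plugin suppresses
}
assert($caught);
```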
Thirdly, and most commonly overlooked, there are regression tests. Regression tests are ugly little tests written just to demonstrate that a bug that was discovered stays fixed.
Personally, I've started to be explicit about these tests, i.e. for a given class I create three test files with three classes:
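For a hypothetical class "Foo", the three files might look like this (the one-line stand-in class only exists so the sketch parses without PHPUnit installed; in a real suite you extend the real base class):

```php
<?php
// Stand-in so this sketch runs without PHPUnit; normally this class comes
// from PHPUnit itself.
class PHPUnit_Framework_TestCase {}

// tests/FooSanityTest.php - expected-result checks; the only tests counted
// toward code coverage.
class FooSanityTest extends PHPUnit_Framework_TestCase
{
    // public function testAddReturnsSum() { ... }
}

// tests/FooDocTest.php - pins down what the documentation promises.
class FooDocTest extends PHPUnit_Framework_TestCase
{
    // public function testGetReturnsDefaultForMissingKey() { ... }
}

// tests/FooRegressionTest.php - one test per discovered bug, proving it
// stays fixed.
class FooRegressionTest extends PHPUnit_Framework_TestCase
{
    // public function testIssue1234NullKeyDoesNotFatal() { ... }
}
```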
I suppose I could use the @group tag instead; I just prefer this. Using the various other tags like @covers, I configure PHPUnit so that ONLY the SanityTests count for code coverage. Regression and doc tests do not count toward code coverage.
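One way to wire that up (paths and suite names are hypothetical) is to split the three kinds of tests into named suites by filename suffix in phpunit.xml, so a coverage report can be generated from the sanity suite alone:

```xml
<!-- phpunit.xml sketch: one suite per kind of test -->
<phpunit bootstrap="tests/bootstrap.php">
  <testsuites>
    <testsuite name="Sanity">
      <directory suffix="SanityTest.php">tests</directory>
    </testsuite>
    <testsuite name="Doc">
      <directory suffix="DocTest.php">tests</directory>
    </testsuite>
    <testsuite name="Regression">
      <directory suffix="RegressionTest.php">tests</directory>
    </testsuite>
  </testsuites>
</phpunit>
```

If your PHPUnit version supports selecting a suite from the command line, coverage then comes from running only the Sanity suite with a coverage flag, while Doc and Regression suites still run but leave the metric alone.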
Right now I'm trying to work out a way of using skip and ignore on tests to provide meaningful coverage reports, i.e. I want to assign three states to class methods: covered, missing, and ignored. For metrics, I only care about how many of the non-ignored methods are covered. If a method is ignored, I made the decision not to count it, so don't show it to me.
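PHPUnit's @codeCoverageIgnore annotation gets part of the way there: it excludes a method (or a whole class) from the coverage report, so the ignored methods drop out of the denominator and the remaining split is just covered vs. missing. A minimal sketch (Router and its methods are made-up names):

```php
<?php
// Hypothetical class showing the "ignored" coverage state.
class Router
{
    // Counts as covered or missing in the report, like any normal method.
    public function dispatch($route)
    {
        return 'dispatched: ' . $route;
    }

    /**
     * Deliberately excluded from coverage metrics.
     *
     * @codeCoverageIgnore
     */
    public function legacyRedirect($url)
    {
        return 'redirect: ' . $url;
    }
}
```

It doesn't render a three-state report directly, but since the ignored methods no longer count against you, the covered percentage only reflects decisions you haven't already made.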