I ended up publishing it in two parts :-) Here are the URLs:
http://hcoder.org/2012/02/13/unit-testing-advice-for-seasoned-hackers-12/
http://hcoder.org/2012/02/15/unit-testing-advice-for-seasoned-hackers-22/
Now I'll have a look at the coverage and see what I can do for the unit tests before I move on to Stem/chutney.
On 2/15/2012 6:07 PM, Esteban Manchado Velázquez wrote:
I ended up publishing it in two parts :-) Here are the URLs:
http://hcoder.org/2012/02/13/unit-testing-advice-for-seasoned-hackers-12/
http://hcoder.org/2012/02/15/unit-testing-advice-for-seasoned-hackers-22/
Now I'll have a look at the coverage and see what I can do for the unit tests before I move on to Stem/chutney.
Thanks for posting these, and for your work to make Tor and Tor's test suite better, Esteban! I found them very interesting and useful reminders.
Regards, Tim
-- Tim Wilde, Software Engineer, Team Cymru, Inc. twilde@cymru.com | +1-847-378-3333 | http://www.team-cymru.org/
Nice posts, and thanks for improving the tor tests! I'm not entirely in agreement with the last point (about tests covering all cases). If the test space is decently small, then exercising everything can better ensure that you don't violate a set of invariants. For example, one of stem's unit tests attempts every combination of authentication methods against every set of failures it can encounter, making sure that we properly report success/failure and never raise an unexpected type of exception... https://gitweb.torproject.org/stem.git/blob/HEAD:/test/unit/connection/authe...
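In rough Python, the idea is something like the following (a made-up sketch, not the actual stem test; all the names and failure modes here are invented):

    # Hypothetical sketch of the "exhaustive combinations" idea: try every
    # pairing of auth methods and simulated failures, and assert that only
    # the documented exception types can escape.

    import itertools
    import unittest

    class FakeAuthError(Exception): pass            # stand-in for a documented failure
    class FakeMissingPassword(FakeAuthError): pass
    class FakeIncorrectCookie(FakeAuthError): pass

    AUTH_METHODS = ('NONE', 'PASSWORD', 'COOKIE')             # hypothetical
    FAILURE_MODES = (None, 'missing_password', 'bad_cookie')  # hypothetical

    def fake_authenticate(methods, failure):
        """Stand-in for the function under test."""
        if failure == 'missing_password' and 'PASSWORD' in methods:
            raise FakeMissingPassword()
        if failure == 'bad_cookie' and 'COOKIE' in methods:
            raise FakeIncorrectCookie()
        return True

    class TestAuthCombinations(unittest.TestCase):
        def test_all_combinations(self):
            for r in range(1, len(AUTH_METHODS) + 1):
                for methods in itertools.combinations(AUTH_METHODS, r):
                    for failure in FAILURE_MODES:
                        try:
                            self.assertTrue(fake_authenticate(methods, failure))
                        except FakeAuthError:
                            pass  # a documented failure type: fine
                        # anything else (TypeError, KeyError, ...) propagates
                        # and fails the test, which is exactly the invariant
                        # being checked

    if __name__ == '__main__':
        unittest.main()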
But that said, in general you're probably right. -Damian
Hey Damian!
Wasn't sure whether to follow up privately or on the list; sending this to the list for now.
On Thu, 16 Feb 2012 22:09:44 +0100, Damian Johnson atagar1@gmail.com wrote:
Nice posts, and thanks for improving the tor tests! I'm not entirely in agreement with the last point (about tests covering all cases). If the test space is decently small, then exercising everything can better ensure that you don't violate a set of invariants. For example, one of stem's unit tests attempts every combination of authentication methods against every set of failures it can encounter, making sure that we properly report success/failure and never raise an unexpected type of exception... https://gitweb.torproject.org/stem.git/blob/HEAD:/test/unit/connection/authe...
But that said, in general you're probably right. -Damian
Thanks for the feedback. You're right, covering all cases is not bad in itself. The last point was the weakest, both because I had thought about it the least and because I didn't have an example to back it up.
The main thing I had in mind when I wrote that point was some test code I had recently reviewed at work. The production code makes some statistical computations, and the test *creates* hundreds of test cases in a loop (and calculates the expected values!), then checks that the actual values match the calculated expected ones. That makes the test code as complex as the production code, so it's very hard to know whether the tests themselves are correct, and once something fails, it's pretty hard to debug.
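To give a flavour of what I mean (a simplified, invented sketch, not the actual code I reviewed):

    # Invented sketch of the anti-pattern: the test re-derives the expected
    # values with (almost) the same logic as the production code.

    import unittest

    def mean(values):                      # "production" code under test
        return sum(values) / float(len(values))

    class TestMeanExhaustively(unittest.TestCase):
        def test_hundreds_of_generated_cases(self):
            for n in range(1, 200):
                values = [i * 0.5 for i in range(1, n + 1)]
                # The expected value is *calculated* here, duplicating the
                # production logic: if both share the same bug, the test
                # still passes, and a failure for, say, n == 137 is painful
                # to debug.
                expected = sum(values) / float(len(values))
                self.assertAlmostEqual(expected, mean(values))

    if __name__ == '__main__':
        unittest.main()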
I had seen this before, and my impression was that it happens because developers don't realise that test code is different from "production" code (among other things, it doesn't have to be "generic" as in "covering all cases"). In hindsight, after your comment, maybe the main (or only) bad thing about that test was *calculating* the expected values.
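For contrast, the same intent with a handful of hand-computed, hard-coded expectations would be much easier to trust and to debug (again an invented sketch):

    import unittest

    def mean(values):
        return sum(values) / float(len(values))

    class TestMeanWithKnownValues(unittest.TestCase):
        def test_hand_picked_cases(self):
            # Expected values worked out by hand, not by re-running the
            # algorithm inside the test.
            cases = [
                ([2.0], 2.0),
                ([1.0, 3.0], 2.0),
                ([0.0, 0.0, 9.0], 3.0),
                ([-1.0, 1.0, -1.0, 1.0], 0.0),
            ]
            for values, expected in cases:
                self.assertAlmostEqual(expected, mean(values))

    if __name__ == '__main__':
        unittest.main()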