The image above has historically been used for different topics; the current one is "how much we trust what we sell".
Do You Trust Your UT Framework?
I wasn't kidding that much when I wrote about "test the testing framework" in my previous wru post. The overall code coverage of Unit Test frameworks is poor, especially for those with all the magic behind the scenes, magic that does not come for free.

Use The Framework Itself To Test The Framework
This is a common technique that may result in a reliability deadlock. If we trust our UT Framework and we use it to test itself, the moment the framework itself has a problem we'll never know ... or even worse, we'll know when it's too late.

Don't Trust "Magic" Too Much
If the framework is simple and does not pollute each test with extra machinery, we can surely use it to test itself without problems. If, instead, the framework elaborates and transforms our tests, it may become impossible to test it via the framework itself, due to scope and possibly context conflicts for each executed test. This may produce a lot of false positives from something that is theoretically in place to make our code more robust.
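As a toy illustration of the deadlock (plain JavaScript, not wru code): a deliberately broken assert that ignores its condition will happily certify itself when it is used to test its own framework.

```javascript
// Deliberately broken "framework": the condition is ignored,
// so every assertion reports success.
function brokenAssert(description, condition) {
  console.log('PASS: ' + description);
  return true;
}

// Using the framework to test the framework itself:
// this assertion *should* fail, proving assert can detect failures ...
brokenAssert(
  'assert returns false for a falsy condition',
  brokenAssert('inner check', false) === false
);
// ... yet it prints PASS twice: the broken framework certifies itself,
// producing exactly the kind of false positive it was meant to catch.
```

This is why a framework under self-test must also prove it can fail: a suite where nothing can ever go red proves nothing.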
The wru KISS approach
With or without async.js, wru does few things, and now it's proved that it does those things properly. I don't want to spend extra time on a library that I should trust 100%: if I know that everything this library should do works as expected, I can simply "forget to maintain it" (improving it, eventually) and use it feeling safer.

99.9% Of Code And Cases Coverage
Loaded on top of whatever test, wru uses few native calls, and the assumption is that these already work as expected (according to the ES3 or ES5 specs). The new wru test against wru covers all possible test combinations, where setup and teardown could fail on wru itself or for each test, and assert should simply work within a test life-cycle. This means we can assert inside setups and teardowns as well, simply because these calls are fundamental for a single test and, if present, must be part of the test result. A problem in a setup could compromise the whole test, while a problem in a teardown could compromise other tests. Being sure that nothing went wrong is up to us but, at least, we can do it via wru.
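To make that life-cycle concrete, here is a hypothetical, framework-free sketch (not wru's actual implementation): assertions made during setup and teardown count toward the same test result as those made in the test body.

```javascript
// Minimal test life-cycle runner: setup, test, and teardown all share
// one assert, so a failure anywhere is part of the test result.
function runTest(t) {
  const result = { name: t.name, passed: 0, failed: 0 };
  const assert = (description, condition) => {
    condition ? result.passed++ : result.failed++;
  };
  if (t.setup) t.setup(assert);       // a failure here compromises this test ...
  t.test(assert);
  if (t.teardown) t.teardown(assert); // ... a failure here could compromise others
  return result;
}

const outcome = runTest({
  name: 'life-cycle',
  setup: assert => assert('environment ready', typeof [].push === 'function'),
  test: assert => assert('concat works', [1].concat([2]).length === 2),
  teardown: assert => assert('nothing leaked', true)
});
// outcome.passed === 3, outcome.failed === 0
```

The design choice mirrors the point above: setup and teardown are not bookkeeping outside the test, they are part of its verdict.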
Can you do the same with your UT Framework? Do you have the same coverage? Have fun with wru ;)
2 comments:
I'd love to say you've achieved what you wanted, but the failing tests are not successes. Simply stating that a test needs to fail in the test message, isn't enough. It still requires that a human being go through and compare results with expected results. Can that step also be automated?
... failing IS an expected result when things go wrong ... and yes, I have achieved what I was looking for ;-)