Model driven testing
Software QS TAG 2017 Conference Report
Hi folks, here is another piece of wisdom from the Software QS TAG 2017 conference, held last October in Frankfurt, where I attended Gregory's (Nokia) talk on Model Driven Testing.
The talk starts by defining test-friendly requirements, illustrating attributes of ideal tests that are in fact attributes of ideal requirements. It goes on to describe levels of test completeness using a UML diagram hierarchy, reviews, TMS and monitoring, and ends with examples of adding test-friendliness. The talk was interspersed with useful examples and exercises, which in my opinion should have been longer, giving participants more time to practice.
The goal of testing is clear: deliver products with a minimum of defects by the implementation date, and always try to shift left, detecting defects as early as possible. Requirements therefore play a big part in our success.
We all know Karl E. Wiegers' claim to be true: "If you don't get the requirements right, it doesn't matter how well you do anything else." We can never be sure that the requirements are correct, and we can never be sure that their implementation is correct. But it is absolutely certain that testable requirements save enormous effort and time in building complete tests and maintaining them, said Gregory, and I totally agree. Good requirements testability makes it possible for us to design better, create 'deep' tests (in a day...), find all (or at least the most critical) implementation defects, and have the grounds to maintain the mapping between tests and requirements.
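To make the testability point concrete, here is a minimal sketch (my own illustration, not from the talk; the requirement wording and function name are hypothetical). Once a requirement states measurable terms, the test follows almost mechanically from its text:

```python
import re

# Hypothetical requirement, written testably:
# "A valid username is 3-16 characters long and contains only
#  letters, digits, and underscores."
# Every term is measurable, so the check and its tests are mechanical.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,16}$")

def is_valid_username(name: str) -> bool:
    """Return True if the name satisfies the requirement above."""
    return USERNAME_RE.fullmatch(name) is not None
```

Compare that with "usernames should be reasonably short", which no test can be derived from.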
Requirements are often incomplete, not self-explanatory, and read differently by different people, leaving a lot of slack for errors by both coders and testers. Modeling requirements creates a single, simple language for everyone to create, understand, and maintain them, allowing business, product, development, and testing to stay aligned and focus on doing the right thing.
The argument then defined the ideal test as complete, effortless to create (from correctly written requirements), reviewable, executable, and of course maintainable.
The next phase presented was creating a structured hierarchy of UML diagrams that represent the requirements, using use case diagrams, activity diagrams, sequence diagrams, and, at the detailed level, individual activity diagrams, together building a well-structured, well-understood, easy-to-maintain set of requirements for all to use.
The next level was creating a testware hierarchy with a test case layer, a use case layer, a test suite layer, and a top-level test plan layer, representing the requirements in a way that can be extended when needed, changed when needed, and analyzed when requirements and specifications change later on.
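The layering described above can be sketched as a simple data structure. This is my own illustration under assumed names (the talk did not show code); the point is that once test cases carry requirement IDs, finding the tests impacted by a requirement change becomes a mechanical traversal:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    name: str
    requirement_ids: List[str]          # requirements this case verifies

@dataclass
class UseCase:
    name: str
    cases: List[TestCase] = field(default_factory=list)

@dataclass
class TestSuite:
    name: str
    use_cases: List[UseCase] = field(default_factory=list)

@dataclass
class TestPlan:
    suites: List[TestSuite] = field(default_factory=list)

    def impacted_cases(self, changed_req: str) -> List[str]:
        """Names of test cases to re-analyze when a requirement changes."""
        return [case.name
                for suite in self.suites
                for uc in suite.use_cases
                for case in uc.cases
                if changed_req in case.requirement_ids]
```

With this mapping in place, a change to, say, "REQ-1" immediately lists every test case that needs review, which is exactly the change analysis the hierarchy is meant to support.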
The next line of defense was described as E2E reviews, which enable developers to prepare their unit test harnesses before testing work begins (does that remind you of TDD?).
The last line of defense mentioned was monitoring (what else...).
"A system is only as good as its weakest link", as they say, and monitoring the quality of everything we do during the development process serves that purpose: eliciting and creating requirements, building the structure of UML and other diagrams, and writing test cases and suites with good coverage.
Gregory takes us through the different monitoring areas and gives examples of possible dashboards that light up our understanding of the matter.
Last but not least, Gregory shows examples of obscure requirements and how we can fix them to be better understood and a better fit. The use of formal techniques (various diagrams, state machine diagrams, etc.) helps create better, more coherent requirements and tests, assists in shifting left, and yields a better system with potentially fewer defects at implementation day.
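As one illustration of the state machine technique (my own example, with a hypothetical requirement, not one from the talk): an ambiguous sentence such as "lock the account after failed logins" leaves open whether failures must be consecutive and whether success resets the count. Writing it as an explicit state machine forces those decisions and makes the requirement directly testable:

```python
# Hypothetical requirement, disambiguated: "An account is locked after
# three consecutive failed logins; a successful login resets the count;
# a locked account stays locked."
class Account:
    MAX_FAILURES = 3

    def __init__(self):
        self.state = "active"
        self.failures = 0

    def login(self, password_ok: bool) -> str:
        """Process one login attempt and return the resulting state."""
        if self.state == "locked":
            return self.state              # locked accounts stay locked
        if password_ok:
            self.failures = 0              # success resets the counter
        else:
            self.failures += 1
            if self.failures >= self.MAX_FAILURES:
                self.state = "locked"
        return self.state
```

Every transition in the model corresponds to one test case, so the test set falls out of the diagram rather than out of each tester's private reading of the prose.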
Well said, Gregory.