In this article we give an overview of what test design is and how to perform it (and, at least as importantly, how not to perform it).
According to the ‘official’ definition, test design is ‘the activity of deriving and specifying test cases from test conditions’, where a test condition is ‘an aspect of the test basis that is relevant in order to achieve specific test objectives.’ Finally, the test basis is ‘the body of knowledge used as the basis for test analysis and design’.
Let’s make this clearer. Requirements or user stories with acceptance criteria (forms of the test basis) determine what you should test (the test objects and test conditions), and from this you have to figure out how to test it, i.e., design the test cases.
One of the most important questions is the following: what are the factors that influence successful test design? If you read various blogs, articles, or books, you will find more or less the following:
- The time and budget available for testing.
- Appropriate knowledge and experience of the people involved.
- The target coverage level to be reached (as a measure of confidence).
- The way the software development process is organized (for instance waterfall vs. Agile).
- The established ratio of test creation methods (e.g., manual vs. automated).
None of this is true! Without sufficient time and budget, you would probably not start the project at all. Likewise, if you do not have people qualified in software testing, including test design, you will probably not start the project either. Good test design involves three prerequisites:
- Complete specification (test basis).
- Risk and complexity analysis.
- Historical data from your previous developments.
Some explanation is needed. A complete specification doesn’t mean an error-free specification, as lots of problems can be found and fixed during test design (defect prevention). It only means that we have all the necessary requirements or, in Agile development, all the epics, themes, and user stories with their acceptance criteria.
In Chapter 3 of our book, we have shown that when the testing costs and the defect correction costs are considered together, their sum has a minimum, and the goal of good test design is to select appropriate testing techniques that approach this minimum. This can be done by analysing complexity and risk and by using historical data. Thus, risk analysis is indispensable for deciding the thoroughness of testing. The riskier the usage of a function/object, the more thorough the testing needs to be. The same holds for code complexity. For riskier or more complex code we should first apply several non-combinatorial test design techniques rather than one pure combinatorial technique.
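To make this cost trade-off concrete, here is a small sketch with entirely hypothetical numbers (not from the book): testing cost grows with thoroughness while the cost of correcting escaped defects shrinks, so their sum has a minimum at an intermediate thoroughness level.

```python
# Hypothetical cost model: thoroughness levels 0..5, where 0 means no testing.
testing_cost = [0, 10, 25, 45, 70, 100]   # cost of designing and running tests
defect_cost = [120, 70, 40, 25, 20, 18]   # cost of correcting escaped defects

# Total quality cost per thoroughness level, and the level that minimizes it.
total_cost = [t + d for t, d in zip(testing_cost, defect_cost)]
optimum = min(range(len(total_cost)), key=total_cost.__getitem__)

print(total_cost)  # [120, 80, 65, 70, 90, 118]
print(optimum)     # 2 -- an intermediate thoroughness level is optimal
```

The shape of the curves is the point, not the numbers: too little testing lets expensive defects escape, while exhaustive testing costs more than the defects it would catch.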
Our view, which differs from the above, is that if you have an appropriate specification (test basis) and a reliable risk and complexity analysis, then, knowing your historical data, you can perform test design in an optimal way.
In the beginning, you have no historical data, and you will probably not reach the optimum. No problem: make an initial assessment. For example, if the risk and complexity are low, use only exploratory testing. If they are somewhat higher, use exploratory testing together with simple specification-based techniques such as equivalence partitioning with boundary value analysis. If the risk is high, you can use exploratory testing, combinative testing (see Chapter 9), defect prevention, static analysis, and reviews.
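As an illustration of equivalence partitioning with boundary value analysis, the following sketch derives test values for a hypothetical input field accepting integers from 1 to 100 (the range and the stub under test are assumptions for the example, not from the book):

```python
# Hypothetical requirement: the input field accepts integers in [1, 100].
LOW, HIGH = 1, 100

# Equivalence partitions: one valid partition and two invalid ones.
# One representative per partition would suffice for pure partitioning;
# boundary value analysis adds the edges of the valid partition and
# their closest neighbours.
boundary_tests = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def accepts(value: int) -> bool:
    """Stub for the system under test: accepts values in the valid range."""
    return LOW <= value <= HIGH

expected = [False, True, True, True, True, False]
results = [accepts(v) for v in boundary_tests]
print(results == expected)  # True
```

Six test values thus cover all three partitions and both boundaries, which is the kind of small, systematic test set this technique is meant to produce.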
You may think that automation has a large influence on test design. It does not. Automated test design eventually leads to the same test cases; the difference is that the test cases are generated, which makes them easier to produce and maintain. You may also think that by applying automated test design you can generate more test cases at a lower cost. Unfortunately not: to reach the optimum you will design the same tests, though sometimes at a lower cost.
A similar argument applies to the different development processes: you should design the same tests. This remains valid even for exploratory testing, which you can apply even when using the waterfall model.
Another important remark: test selection criteria and test data adequacy criteria are different. The former are an organic part of any test design technique; the latter validate the test set. The test design process results in implementation-independent test cases that validate requirements or user stories. In contrast, tests generated to fill coverage gaps with respect to a selected test data adequacy criterion validate implementation-dependent issues. However, this is NOT test design; this is test creation.
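The following sketch (a hypothetical example, not from the book) illustrates this distinction: the requirement-based cases are the product of test design, while the extra case added only to cover an implementation-specific branch is test creation.

```python
# Hypothetical requirement: discount(total) gives 10% off for totals of 100
# or more, otherwise no discount. The implementation adds an internal fast
# path for total == 0 that no requirement mentions.

def discount(total: int) -> int:
    if total == 0:                    # implementation detail, absent from the requirement
        return 0
    if total >= 100:
        return total - total // 10    # 10% off, using integer arithmetic
    return total

# Test design: implementation-independent cases derived from the requirement
# (the boundary at 100 and its neighbours).
requirement_tests = {99: 99, 100: 90, 101: 91}

# Test creation: a case added solely to cover the otherwise-unexercised
# total == 0 branch; it validates the implementation, not a requirement.
coverage_gap_tests = {0: 0}

assert all(discount(t) == exp for t, exp in requirement_tests.items())
assert all(discount(t) == exp for t, exp in coverage_gap_tests.items())
print("all checks passed")
```

Note that the requirement-based cases would survive a rewrite of the implementation unchanged, whereas the coverage-gap case only makes sense for this particular code.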
It’s very important that you use the test-first approach, i.e., test design should be the starting point of development. Applied prior to implementation, test design is also very effective for defect prevention.
For other meaningful thoughts, please read the whole book.