A good test plan is kept short and focused. At a high level, you need to consider the purpose served by the testing work. It is therefore important to keep the following questions in mind while planning tests:
- What is in scope and what is out of scope for this testing effort?
- What are the test objectives?
- What are the important project and product risks? (Risks are discussed in detail in Section 5.5.)
- What constraints affect testing (e.g., budget limitations, hard deadlines, etc.)?
- What is most critical for this product and project?
- Which aspects of the product are more (or less) testable?
- What should be the overall test execution schedule and how should we decide the order in which to run specific tests? (Product and planning risks, discussed later in this chapter, will influence the answers to these questions.)
- How should the testing work be split into various levels (e.g., component, integration, system and acceptance)?
- If that decision has already been made, how should the testing work at the level you are responsible for fit with the testing work done at the other test levels?
- During the analysis and design of tests, you’ll want to reduce gaps and overlap between levels and, during test execution, you’ll want to coordinate between the levels. Such details dealing with inter-level coordination are often addressed in the master test plan.
- In addition to integrating and coordinating between test levels, you should also plan to integrate and coordinate all the testing work to be done with the rest of the project. For example, what items must be acquired for the testing?
- When will the programmers complete work on the system under test?
- What operations support is required for the test environment?
- What kind of information must be delivered to the maintenance team at the end of testing?
- How many resources are required to carry out the work?
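The planning questions above can be captured as a simple checklist record so that unanswered items are easy to spot. A minimal sketch in Python; the class and field names are illustrative assumptions, not a standard test plan schema:

```python
from dataclasses import dataclass

# Illustrative sketch: a test plan summarized as a checklist record.
# Every field name here is an assumption chosen for this example.
@dataclass
class TestPlan:
    in_scope: list      # what this testing effort covers
    out_of_scope: list  # what it explicitly does not cover
    objectives: list    # the test objectives
    risks: list         # important project and product risks
    constraints: list   # e.g., budget limits, hard deadlines
    levels: list        # e.g., component, integration, system, acceptance
    resources: int      # people needed to carry out the work

    def unanswered(self):
        """Return the names of planning questions still left empty."""
        return [name for name, value in vars(self).items() if not value]

plan = TestPlan(
    in_scope=["checkout flow"], out_of_scope=["legacy admin UI"],
    objectives=["find critical defects before release"],
    risks=[],  # not yet analysed
    constraints=["release date is fixed"],
    levels=["component", "system"], resources=3,
)
print(plan.unanswered())  # the risk analysis is still missing
```

A plan drafted this way makes gaps explicit before test execution begins, rather than discovering them mid-project.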
Now, think about what would have to be true of the project before test execution could start, and what would have to be true before test execution could be declared done. At what point can you safely start a particular test level or phase, test suite or test target? When can you finish it? The factors considered in such decisions are often called ‘entry criteria’ and ‘exit criteria.’ Typical factors for such criteria are:
- Acquisition and supply: the availability of staff, tools, systems and other materials required.
- Test items: the state that the items to be tested must be in to start and to finish testing.
- Defects: the number known to be present, the arrival rate, the number predicted to remain, and the number resolved.
- Tests: the number run, passed, failed, blocked, skipped, and so forth.
- Coverage: the portions of the test basis, the software code or both that have been tested and which have not.
- Quality: the status of the important quality characteristics for the system.
- Money: the cost of finding the next defect in the current level of testing compared to the cost of finding it in the next level of testing (or in production).
- Risk: the undesirable outcomes that could result from shipping too early (such as latent defects or untested areas) – or too late (such as loss of market share).
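Criteria like these are most useful when they can be checked mechanically against the metrics a team already collects. A hedged sketch of such a check, where all metric names and threshold values are invented for illustration:

```python
# Hedged sketch: checking exit criteria against collected metrics.
# The metric names and thresholds below are assumptions, not standards.
def unmet_exit_criteria(metrics, criteria):
    """Return the names of exit criteria not yet satisfied."""
    return [name for name, check in criteria.items() if not check(metrics)]

metrics = {
    "tests_run": 480,
    "tests_passed": 470,
    "open_critical_defects": 1,
    "statement_coverage": 0.87,
}

criteria = {
    "pass rate >= 95%": lambda m: m["tests_passed"] / m["tests_run"] >= 0.95,
    "no open critical defects": lambda m: m["open_critical_defects"] == 0,
    "statement coverage >= 85%": lambda m: m["statement_coverage"] >= 0.85,
}

print(unmet_exit_criteria(metrics, criteria))
# -> ['no open critical defects']
```

Expressing each criterion as a predicate over the metrics keeps the criteria explicit and reviewable, so the decision to stop testing rests on agreed thresholds rather than on a feeling that "enough" testing has been done.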
When writing exit criteria, remember that a successful project balances quality, budget, schedule and feature considerations. This is even more important when applying exit criteria at the end of the project.