Exploratory testing plays an important role in agile testing. It is an approach in which testers simultaneously learn about the system, design tests, and execute them. It may sometimes be combined with other testing approaches such as analytical risk-based testing, analytical requirements-based testing, model-based testing, and regression-averse testing.
During the iteration time box, the test charter provides the boundaries of exploratory testing, which follows an inspect-and-adapt model in which the next tests are planned based on the results of the previous ones. Black-box and white-box testing techniques may be used alongside exploratory testing.
A test charter may include the following information:
- Actor: intended user of the system
- Purpose: the theme of the charter, including the objective of each actor, i.e., the test conditions
- Setup: the environment of test execution
- Priority: relative importance of this charter with respect to the priority of the associated user story or the risk level
- Reference: specifications (e.g., user story), risks, or other information sources
- Data: any data gathered during the test, such as screen recordings or screenshots
- Activities: a list of ideas of what the actor may want to do with the system (e.g., “Log on to the system as a super user”) and what might be interesting to test (both positive and negative tests)
- Oracle notes: how to evaluate the product to determine correct results (e.g., to capture what happens on the screen and compare to what is written in the user’s manual)
- Variations: alternative actions and evaluations to complement the ideas described under activities
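The charter elements above can be sketched as a simple data structure. This is a minimal illustration, not a prescribed format; the class name, field names, and example values are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical representation of a test charter; each field mirrors one
# of the charter elements listed above.
@dataclass
class TestCharter:
    actor: str                                       # intended user of the system
    purpose: str                                     # theme / test conditions
    setup: str                                       # test execution environment
    priority: str                                    # relative importance
    reference: str                                   # user story, risks, other sources
    data: list = field(default_factory=list)         # recordings, screenshots
    activities: list = field(default_factory=list)   # test ideas for the actor
    oracle_notes: str = ""                           # how to judge correct results
    variations: list = field(default_factory=list)   # alternative actions/evaluations

# Example charter (all values invented for illustration)
charter = TestCharter(
    actor="super user",
    purpose="Explore role-based access to the admin functions",
    setup="Staging environment, current iteration build",
    priority="High (matches the priority of the associated user story)",
    reference="Associated user story and its identified risks",
    activities=["Log on to the system as a super user"],
    oracle_notes="Compare on-screen behavior with the user's manual",
)
print(charter.actor)  # → super user
```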
Exploratory testing may be combined with session-based test management, in which each session lasts 60-120 minutes; the fixed time box helps make this intangible activity more tangible.
Test sessions include the following:
- Survey session (to learn how it works)
- Analysis session (evaluation of the functionality or characteristics)
- Deep coverage session (corner cases, scenarios, interactions)
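The session types and the 60-120 minute time box can be sketched as a small planning helper. This is an illustrative sketch only; the function name and record layout are assumptions, not part of any session-based test management tool.

```python
from datetime import datetime, timedelta

# Session types taken from the list above
SESSION_TYPES = ("survey", "analysis", "deep coverage")

def plan_session(session_type: str, start: datetime, minutes: int) -> dict:
    """Hypothetical helper: build a session record, enforcing the
    60-120 minute time box of session-based test management."""
    if session_type not in SESSION_TYPES:
        raise ValueError(f"unknown session type: {session_type}")
    if not 60 <= minutes <= 120:
        raise ValueError("session time box must be 60-120 minutes")
    return {
        "type": session_type,
        "start": start,
        "end": start + timedelta(minutes=minutes),
    }

# Example: a 90-minute survey session starting at 09:00
session = plan_session("survey", datetime(2024, 5, 1, 9, 0), 90)
print(session["end"].strftime("%H:%M"))  # → 10:30
```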
The tester’s ability to ask the right, relevant questions determines the quality of the testing. A few examples are:
- What is the most important thing to find out about the system?
- Under what circumstances may the system fail?
- What happens if...?
- What should happen when...?
- Are customer needs, requirements, and expectations fulfilled?
- Can the system be installed (and removed, if necessary) along all supported upgrade paths?
A set of heuristics can be applied during testing to guide the tester in how to perform the testing and how to evaluate the results. Examples include:
- CRUD (Create, Read, Update, Delete)
- Configuration variations
- Interruptions (e.g., log off, shut down, or reboot)
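As a concrete illustration of the CRUD heuristic, the sketch below drives a full create-read-update-delete cycle against a hypothetical in-memory record store standing in for the system under test; in a real exploratory session these steps would be performed through the actual system.

```python
# Hypothetical in-memory store standing in for the system under test.
class RecordStore:
    def __init__(self):
        self._records = {}
        self._next_id = 1

    def create(self, data):
        rid = self._next_id
        self._next_id += 1
        self._records[rid] = data
        return rid

    def read(self, rid):
        return self._records[rid]

    def update(self, rid, data):
        self._records[rid] = data

    def delete(self, rid):
        del self._records[rid]

# Walk the CRUD heuristic end to end, checking the result at each step.
store = RecordStore()
rid = store.create({"name": "Alice"})           # Create
assert store.read(rid) == {"name": "Alice"}     # Read
store.update(rid, {"name": "Bob"})              # Update
assert store.read(rid)["name"] == "Bob"
store.delete(rid)                               # Delete
assert rid not in store._records
print("CRUD cycle passed")
```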
Session metrics are the primary means of reporting the status of exploratory testing. They may include some or all of the following:
- Number of sessions completed
- Number of problems found
- Function areas covered
- Percentage of session time spent setting up for testing
- Percentage of session time spent testing
- Percentage of session time spent investigating problems
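The metrics above can be summarized from per-session time breakdowns, as in this minimal sketch; the per-session minute figures are invented for illustration, and "function areas covered" is omitted since it is not numeric.

```python
# Hypothetical per-session data: minutes spent on setup, testing, and
# investigating problems, plus the number of problems found.
sessions = [
    {"setup": 15, "testing": 60, "investigating": 15, "problems": 2},
    {"setup": 10, "testing": 80, "investigating": 30, "problems": 4},
]

total_minutes = sum(s["setup"] + s["testing"] + s["investigating"] for s in sessions)

# Aggregate the session metrics listed above.
metrics = {
    "sessions_completed": len(sessions),
    "problems_found": sum(s["problems"] for s in sessions),
    "pct_setup": round(100 * sum(s["setup"] for s in sessions) / total_minutes),
    "pct_testing": round(100 * sum(s["testing"] for s in sessions) / total_minutes),
    "pct_investigating": round(100 * sum(s["investigating"] for s in sessions) / total_minutes),
}
print(metrics)
```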
It is important that the tester documents the results. The following list provides a few examples of the results to record:
- Test coverage: the input data used, the percentage covered, and the percentage remaining to be covered
- Evaluation notes: observations during testing, the stability of the system and the feature, and next steps
- Risk/strategy list: the risk matrix, its coverage to date, and any changes to the mitigation plan
- Issues, questions, and anomalies: any unforeseen behavior, questions about the efficiency of the approach, and any concerns about the test ideas/attempts, the test environment, the test data, misinterpretation of the function, the test script, or the system under test
- Actual behavior: the actual behavior of the system, recorded and saved (e.g., video, screen captures, output data files)
The information may be presented in a summarized form to the stakeholders and management so that it can be understood easily.