What is configuration management in software testing?

Configuration management is a topic that often confuses new practitioners. So, let me describe it briefly:

  • Configuration management clearly identifies the items that make up the software or system. These items include source code, test scripts, third-party software, hardware, data and both development and test documentation.
  • Configuration management is also about making sure that these items are managed carefully, thoroughly and attentively throughout the entire project and product life cycle.
  • Configuration management has a number of important implications for testing. For example, it allows testers to manage their testware and test results using the same configuration management mechanisms.
  • Configuration management also supports the build process, which is important for delivery of a test release into the test environment. Simply sending ZIP archives by e-mail will not be sufficient, because there are too many opportunities for such archives to become polluted with undesirable contents or to harbor left-over previous versions of items. Especially in later phases of testing, it is critical to have a solid, reliable way of delivering test items that work and are the proper version.
  • Last but not least, configuration management allows us to map what is being tested to the underlying files and components that make it up. This is very important. For example, when we report defects, we need to report them against something, something which is version controlled. If it is not clear what we found the defect in, the programmers will have a very tough time finding the defect in order to fix it. For the kind of test reports discussed earlier to have any meaning, we must be able to trace the test results back to what exactly we tested. (A minimal sketch of such traceability follows this list.)
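To make that traceability concrete, here is a minimal Python sketch of the idea. It is illustrative only: the record format, file name and test IDs are assumptions, not part of IEEE 829 or any particular tool, and it assumes the tests run inside a git working copy.

    import json
    import subprocess
    from datetime import datetime, timezone

    def version_under_test() -> str:
        """Ask git for the exact commit the test items came from."""
        result = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    def record_result(test_id: str, status: str,
                      path: str = "test_results.jsonl") -> None:
        """Append one version-stamped result, so any defect reported from
        this test can be traced back to exactly what was tested."""
        record = {
            "test_id": test_id,                # hypothetical test case ID
            "status": status,                  # e.g. "Pass", "Fail", "Warn"
            "version": version_under_test(),   # commit hash of the test item
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    record_result("ST-001", "Fail")

With every result stamped this way, a defect report can always name the exact version it was found in.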

Ideally, when testers receive an organized, version-controlled test release from a change-managed source code repository, it is accompanied by a test item transmittal report or release notes. [IEEE 829] provides a useful guideline for what goes into such a report. Release notes are not always so formal and do not always contain all the information shown.

Configuration management or item transmittal report template

Configuration management is a complex topic, so advance planning is essential to make it work. During the project planning stage – and perhaps as part of your own test plan – make sure that configuration management procedures and tools are selected. As the project proceeds, the configuration management process and mechanisms must be implemented, and the key interfaces to the rest of the development process should be documented.

What is test control?

Projects do not always unfold as planned. When the actual product differs from what was planned – risks become occurrences, stakeholder needs evolve, the world around us changes – we need to bring the project back under control. [Read more…]

What is test monitoring in software testing?

Test monitoring can serve various purposes during the project, including the following:

  • Give the test team and the test manager feedback on how the testing work is going, allowing opportunities to guide and improve the testing and the project.
  • Provide the project team with visibility about the test results.
  • Measure the status of the testing, test coverage and test items against the exit criteria to determine whether the test work is done.
  • Gather data for use in estimating future test efforts.

For small projects, the test leader or a delegated person can gather test progress monitoring information manually using documents, spreadsheets and simple databases. But for large teams, distributed projects and long-term test efforts, automated tools make data collection far more efficient and consistent.

One way to keep records of test progress information is to use the IEEE 829 test log template. While much of the information related to logging events can be usefully captured in a document, we prefer to capture the test-by-test information in spreadsheets (see Figure 5.1).

IEEE 829 standard test log template

As an example, consider Figure 5.1. Columns A and B show the test ID and the test case or test suite name. The state of the test case is shown in column C (‘Warn’ indicates a test that resulted in a minor failure). Column D shows the tested configuration, where the codes A, B and C correspond to test environments described in detail in the test plan. Columns E and F show the defect (or bug) ID number (from the defect-tracking database) and the risk priority number of the defect (ranging from 1, the worst, to 25, the least risky). Column G shows the initials of the tester who ran the test. Columns H through L capture data for each test related to dates, effort and duration (in hours). We have metrics for planned and actual effort and dates completed, which allow us to summarize progress against the planned schedule and budget. The spreadsheet can also be summarized in terms of the percentage of tests that have been run and the percentage of tests that have passed and failed (a small sketch of such a summary follows).
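The summary percentages at the bottom of such a spreadsheet are easy to compute from the rows themselves. A minimal Python sketch; the states and sample rows are invented for illustration and mirror column C of Figure 5.1:

    # Each dict mirrors one row of the Figure 5.1 worksheet (invented data).
    test_log = [
        {"id": "ST-001", "name": "Login",     "state": "Pass"},
        {"id": "ST-002", "name": "Checkout",  "state": "Fail"},
        {"id": "ST-003", "name": "Search",    "state": "Warn"},    # minor failure
        {"id": "ST-004", "name": "Reporting", "state": "Queued"},  # not yet run
    ]

    run_states = {"Pass", "Fail", "Warn"}
    total = len(test_log)
    run = sum(1 for t in test_log if t["state"] in run_states)
    passed = sum(1 for t in test_log if t["state"] == "Pass")
    failed = run - passed  # here 'Warn' counts as a (minor) failure

    print(f"Run:    {run}/{total} ({run / total:.0%})")
    print(f"Passed: {passed}/{run} ({passed / run:.0%})")
    print(f"Failed: {failed}/{run} ({failed / run:.0%})")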

System test case summary

Figure 5.1 might show a snapshot of test progress during the test execution period. During the analysis, design and implementation of the tests, such a worksheet would show the state of the tests in terms of their progress through development.

In addition to test case status, it is also common to monitor test progress during the test execution period by looking at the number of defects found and fixed. Figure 5.2 shows a graph that plots the total number of defects opened and closed over the course of the test execution so far. It also shows the planned test period end date and the planned number of defects that will be found. Ideally, as the project approaches the planned end date, the total number of defects opened will settle in at the predicted number and the total number of defects closed will converge with the total number opened. These two outcomes tell us that we have found enough defects to feel comfortable that we’re done testing, that we have no reason to think many more defects are lurking in the product, and that all known defects have been resolved.

Defects open and closed chart
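A chart like Figure 5.2 can be produced directly from the defect-tracking data. The sketch below uses matplotlib, with invented cumulative counts purely to show the shape of the plot:

    import matplotlib.pyplot as plt

    # Invented cumulative counts over a ten-day test execution period.
    days = range(1, 11)
    opened = [5, 12, 20, 27, 33, 37, 40, 42, 43, 44]  # total defects opened
    closed = [0, 3, 8, 15, 22, 28, 33, 37, 41, 43]    # total defects closed
    predicted_total = 45                              # planned number of defects

    plt.plot(days, opened, marker="o", label="Total opened")
    plt.plot(days, closed, marker="s", label="Total closed")
    plt.axhline(predicted_total, linestyle="--", label="Predicted total")
    plt.xlabel("Test execution day")
    plt.ylabel("Cumulative defects")
    plt.title("Defects opened and closed")
    plt.legend()
    plt.show()

As the two curves converge near the predicted total, the "done" signals described above become visible at a glance.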

Charts such as Figure 5.2 can also be used to show failure rates or defect density. When reliability is a key concern, we might be more concerned with the frequency with which failures are observed (called failure rates) than with how many defects are causing the failures (called defect density).

Organizations that aim to produce ultra-reliable software may plot the number of unresolved defects normalized by the size of the product, measured in thousands of source lines of code (KSLOC), function points (FP) or some other metric of code size. Once the number of unresolved defects falls below some predefined threshold – for example, three per million lines of code – the product may be deemed to have met the defect density exit criterion.
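The density check itself is simple arithmetic. A minimal sketch, assuming unresolved defects are counted from the tracking database and size is measured in KSLOC:

    def defect_density_met(unresolved: int, ksloc: float,
                           threshold_per_ksloc: float) -> bool:
        """True if the unresolved-defect density is at or below the threshold."""
        return (unresolved / ksloc) <= threshold_per_ksloc

    # Three per million lines of code = 0.003 defects per KSLOC.
    print(defect_density_met(unresolved=2, ksloc=1000.0,
                             threshold_per_ksloc=0.003))  # True (0.002/KSLOC)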

Test progress monitoring techniques thus vary considerably depending on the preferences of the testers and stakeholders, the needs and goals of the project, regulatory requirements, time and money constraints and other factors.

In addition to the kinds of information shown in the IEEE 829 test log template and Figures 5.1 and 5.2, other common metrics for test progress monitoring include:

  • The extent of completion of test environment preparation;
  • The extent of test coverage achieved, measured against requirements, risks, code, configurations or other areas of interest (requirements coverage, for instance, reduces to a simple ratio – see the sketch after this list);
  • The status of the testing (including analysis, design and implementation) compared to various test milestones;
  • The economics of testing, such as the costs and benefits of continuing test execution in terms of finding the next defect or running the next test.
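Requirements coverage, for example, is just the ratio of requirements exercised by executed tests to requirements in scope. A tiny sketch with invented requirement IDs:

    requirements = {"REQ-01", "REQ-02", "REQ-03", "REQ-04", "REQ-05"}
    covered = {"REQ-01", "REQ-02", "REQ-05"}  # exercised by at least one test

    coverage = len(covered & requirements) / len(requirements)
    print(f"Requirements coverage: {coverage:.0%}")  # 60%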

What is test strategy in software testing?

The choice of test approaches or test strategy is one of the most powerful factors in the success of the test effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and test leaders.

Let’s survey the major types of test strategies that are commonly found:

  • Analytical: Let us take an example to understand this. The risk-based strategy involves performing a risk analysis using project documents and stakeholder input, then planning, estimating, designing, and prioritizing the tests based on risk. Another analytical test strategy is the requirements-based strategy, where an analysis of the requirements specification forms the basis for planning, estimating and designing tests. Analytical test strategies have in common the use of some formal or informal analytical technique, usually during the requirements and design stages of the project.
  • Model-based: Let us take an example to understand this. You can build mathematical models for loading and response for e-commerce servers, and test based on that model. If the behavior of the system under test conforms to that predicted by the model, the system is deemed to be working. Model-based test strategies have in common the creation or selection of some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project.
  • Methodical: Let us take an example to understand this. You might have a checklist that you have put together over the years that suggests the major areas of testing to run, or you might follow an industry standard for software quality, such as ISO 9126, for your outline of major test areas. You then methodically design, implement and execute tests following this outline. Methodical test strategies have in common the adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas, and may have an early or late point of involvement for testing.
  • Process- or standard-compliant: Let us take an example to understand this. You might adopt the IEEE 829 standard for your testing, using books such as [Craig, 2002] or [Drabick, 2004] to fill in the methodological gaps. Alternatively, you might adopt one of the agile methodologies such as Extreme Programming. Process- or standard-compliant strategies have in common reliance upon an externally developed approach to testing, often with little – if any – customization, and may have an early or late point of involvement for testing.
  • Dynamic: Let us take an example to understand this. You might create a lightweight set of testing guidelines that focus on rapid adaptation or known weaknesses in software. Dynamic strategies, such as exploratory testing, have in common concentrating on finding as many defects as possible during test execution and adapting to the realities of the system under test as it is when delivered, and they typically emphasize the later stages of testing. See, for example, the attack-based approach of [Whittaker, 2002] and [Whittaker, 2003] and the exploratory approach of [Kaner et al., 2002].
  • Consultative or directed: Let us take an example to understand this. You might ask the users or developers of the system to tell you what to test or even rely on them to do the testing. Consultative or directed strategies have in common the reliance on a group of non-testers to guide or perform the testing effort and typically emphasize the later stages of testing simply due to the lack of recognition of the value of early testing.
  • Regression-averse: Let us take an example to understand this. You might try to automate all the tests of system functionality so that, whenever anything changes, you can re-run every test to ensure nothing has broken (a minimal sketch follows this list). Regression-averse strategies have in common a set of procedures – usually automated – that allow them to detect regression defects. A regression-averse strategy may involve automating functional tests prior to release of the function, in which case it requires early testing, but sometimes the testing is almost entirely focused on testing functions that have already been released, which is in some sense a form of post-release test involvement.
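As a minimal sketch of the regression-averse idea, here is an automated suite using Python's built-in unittest; the function under test is a stand-in invented purely for illustration:

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Toy stand-in for real system functionality."""
        return round(price * (1 - percent / 100), 2)

    class RegressionSuite(unittest.TestCase):
        # These tests encode previously verified behavior; any change that
        # breaks them shows up as a regression on the next run.
        def test_basic_discount(self):
            self.assertEqual(apply_discount(100.0, 10), 90.0)

        def test_no_discount(self):
            self.assertEqual(apply_discount(59.99, 0), 59.99)

    if __name__ == "__main__":
        unittest.main()

Because the whole suite re-runs unchanged after every modification, behavior that once worked cannot silently break.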

Some of these strategies are more preventive, others more reactive. For example, analytical test strategies involve upfront analysis of the test basis, and tend to identify problems in the test basis prior to test execution. This allows the early – and cheap – removal of defects. That is a strength of preventive approaches.

Dynamic test strategies focus on the test execution period. Such strategies allow the location of defects and defect clusters that might have been hard to anticipate until you have the actual system in front of you. That is a strength of reactive approaches.

Rather than see the choice of strategies, particularly the preventive or reactive strategies, as an either/or situation, we’ll let you in on the worst-kept secret of testing (and many other disciplines): There is no one best way. We suggest that you adopt whatever test approaches make the most sense in your particular situation, and feel free to borrow and blend.

How do you know which strategies to pick or blend for the best chance of success? There are many factors to consider, but let us highlight a few of the most important:

  • Risks: Risk management is very important during testing, so consider the risks and the level of risk. For a well-established application that is evolving slowly, regression is an important risk, so regression-averse strategies make sense. For a new application, a risk analysis may reveal different risks if you pick a risk-based analytical strategy.
  • Skills: Consider which skills your testers possess and lack, because strategies must not only be chosen, they must also be executed. A standard-compliant strategy is a smart choice when you lack the time and skills in your team to create your own approach.
  • Objectives: Testing must satisfy the needs and requirements of stakeholders to be successful. If the objective is to find as many defects as possible with a minimal amount of up-front time and effort invested – for example, at a typical independent test lab – then a dynamic strategy makes sense.
  • Regulations: Sometimes you must satisfy not only stakeholders, but also regulators. In this case, you may need to plan a methodical test strategy that satisfies these regulators that you have met all their requirements.
  • Product: Some products, such as weapons systems and contract-development software, tend to have well-specified requirements. This leads to synergy with a requirements-based analytical strategy.
  • Business: Business considerations and business continuity are often important. If you can use a legacy system as a model for a new system, you can use a model-based strategy.

You must choose testing strategies with an eye towards the factors mentioned earlier, the schedule, budget, and feature constraints of the project and the realities of the organization and its politics.

We mentioned above that a good team can sometimes triumph over a situation where materials, process and delaying factors are ranged against its success. However, talented execution of an unwise strategy is the equivalent of going very fast down a highway in the wrong direction. Therefore, you must make smart choices in terms of testing strategies.

What are the factors affecting test effort in software testing?

When you create test plans and estimate the testing effort and schedule, you must keep these factors in mind; otherwise, your plans and estimates will mislead you at the beginning of the project and betray you in the middle or at the end.

The test strategies or approaches you pick will have a major influence on the testing effort. In this section, let’s look at factors related to the product, the process and the results of testing. [Read more…]

What are the estimation techniques in software testing?

There are two techniques for estimation covered by the ISTQB Foundation Syllabus.

  1. One involves consulting the people who will do the work and other people with expertise on the tasks to be done.
  2. The other involves analyzing metrics from past projects and from industry data.

Let’s look at both of them one by one. [Read more…]
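As a taste of the metrics-based technique, here is a trivially small worked example (all numbers invented): if past projects tell you the average effort per test case, the planned case count extrapolates to an estimate.

    # Historical metric from past projects (invented figure for illustration).
    hours_per_test_case = 1.5   # design + execute + report, on average
    planned_test_cases = 320
    contingency = 0.20          # buffer for retests and defect isolation

    estimate = planned_test_cases * hours_per_test_case * (1 + contingency)
    print(f"Estimated test effort: {estimate:.0f} hours")  # 576 hours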

How do you estimate what testing will involve and what it will cost?

As we know, testing is a process rather than a single activity. Hence, we need to break a testing project down into phases using the fundamental test process identified in the ISTQB Syllabus: planning and control; analysis and design; implementation and execution; evaluating exit criteria and reporting; and test closure. [Read more…]

What things to keep in mind while planning tests?

A good test plan is always kept short and focused. At a high level, you need to consider the purpose served by the testing work. Hence, it is really very important to keep the following things in mind while planning tests:

  • What is in scope and what is out of scope for this testing effort?
  • What are the test objectives? [Read more…]

What is the purpose and importance of test plans in software testing?

A test plan is the project plan for the testing work to be done. It is not a test design specification, a collection of test cases or a set of test procedures; in fact, most of our test plans do not address that level of detail. Many people have different definitions for test plans. [Read more…]

What are the roles and responsibilities of a Tester?

The roles and responsibilities of a tester are as follows. In the test planning and preparation phases of testing, testers should review and contribute to test plans, as well as analyze, review and assess requirements and design specifications. They may be involved in or even be the primary people identifying test conditions and creating test designs, test cases, test procedure specifications and test data, and may automate or help to automate the tests. [Read more…]