What is a test design technique?

A test design technique helps us select a good set of tests from the total number of possible tests for a given system. There are many different types of test design technique, each with its own strengths and weaknesses. Each individual technique is good at finding particular types of defect and relatively poor at finding others.

What are test management tools?

Test management tools support a range of features. Some tools provide all of these features; others provide one or more of them.

What are test harness / unit test framework tools in software testing?

These tools are mostly used by developers. The two types of tool are grouped together because they provide similar kinds of support for developers testing individual components or units of software.

A test harness provides stubs and drivers, which are small programs that interact with the software under test (e.g. for testing middleware and embedded software). Some unit test framework tools provide support for object-oriented software, others for other development paradigms.
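The idea of stubs and drivers can be sketched in a few lines. Below is a minimal, hypothetical example using Python's `unittest` framework: the component, sensor, and class names are invented for illustration. The stub stands in for a sensor the unit test cannot reach, and the unit test framework acts as the driver that calls the component with controlled inputs.

```python
import unittest

# Hypothetical component under test: decides whether a fan should run
# based on a temperature reading. The real sensor is middleware we
# cannot call in a unit test, so the dependency is injected.
class FanController:
    def __init__(self, sensor):
        self.sensor = sensor  # a stub can stand in for the real sensor

    def fan_should_run(self):
        return self.sensor.read_temperature() > 30.0

# Stub: a small program that simulates the missing sensor and returns
# a controlled value.
class SensorStub:
    def __init__(self, fixed_temperature):
        self.fixed_temperature = fixed_temperature

    def read_temperature(self):
        return self.fixed_temperature

# Driver: the unit test framework calls the component under test.
# Run with: python -m unittest <this_file>
class FanControllerTest(unittest.TestCase):
    def test_fan_runs_when_hot(self):
        self.assertTrue(FanController(SensorStub(35.0)).fan_should_run())

    def test_fan_off_when_cool(self):
        self.assertFalse(FanController(SensorStub(20.0)).fan_should_run())
```

The key design point is dependency injection: because `FanController` receives its sensor rather than creating it, the stub can replace the real middleware without changing the component's code.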

What are test execution tools in software testing?

When people talk about a ‘testing tool’, it is usually a test execution tool they have in mind: a tool that can run tests. This type of tool is also known as a ‘test running tool’. Most tools of this type start by capturing or recording manual tests; hence they are also known as ‘capture/playback’, ‘capture/replay’ or ‘record/playback’ tools. It is similar to recording a television programme and playing it back.
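The capture/replay idea can be illustrated with a toy sketch (this is not how a real commercial tool is built; the `Recorder` and `CalculatorApp` classes are invented for illustration): during capture, each manual action is appended to a script; during playback, the script is re-executed against the application.

```python
# Minimal capture/replay sketch: actions are recorded once, then
# replayed automatically against the application under test.
class Recorder:
    def __init__(self):
        self.script = []  # the recorded test script

    def capture(self, action, *args):
        self.script.append((action, args))

    def playback(self, app):
        # re-execute each recorded action and collect the results
        return [getattr(app, action)(*args) for action, args in self.script]

# A stand-in "application" with actions we can record.
class CalculatorApp:
    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n
        return self.total

recorder = Recorder()
recorder.capture("add", 2)  # tester performs the actions once...
recorder.capture("add", 3)
print(recorder.playback(CalculatorApp()))  # ...the tool replays them: [2, 5]
```

A real tool records at the GUI or API level rather than method calls, but the principle is the same: the recorded script can be replayed unattended against later builds.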

What are test data preparation tools in software testing?

When an extensive range or volume of data is needed for testing, a test data preparation tool is of great help.

They are very useful for performance and reliability testing, where a large amount of realistic data is needed. They may be used by developers and may also be used during system or acceptance testing.
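A simple form of test data preparation can be sketched with the standard library alone. The field names and value ranges below are illustrative assumptions, not the output of any real tool; the point is generating a large, reproducible volume of plausible records for a performance test.

```python
import csv
import io
import random

# Sketch of a data preparation step: generate many plausible customer
# records. Seeding the generator makes runs reproducible, which matters
# when a failing performance test must be re-run with identical data.
def generate_customers(count, seed=42):
    rng = random.Random(seed)
    first_names = ["Asha", "Ben", "Carla", "Dev", "Elena"]
    cities = ["Pune", "Leeds", "Austin", "Nairobi"]
    return [
        {
            "id": i + 1,
            "name": rng.choice(first_names),
            "city": rng.choice(cities),
            "balance": round(rng.uniform(0, 10_000), 2),
        }
        for i in range(count)
    ]

def to_csv(rows):
    # serialize the generated rows so they can be loaded by the system
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "name", "city", "balance"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

data = generate_customers(10_000)
print(len(data))  # 10000 records, ready for a load test
```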

What are test design tools in software testing?

Test design tools help to create test cases, or at least test inputs (which are part of a test case). If an automated oracle is available, the tool can also generate the expected result, so it can in fact generate complete test cases (rather than just test inputs).
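The input-versus-test-case distinction can be made concrete with a small sketch. Here the system under test is a hypothetical rounding function, and the "oracle" is a trusted reference implementation (Python's `decimal` module); both the inputs and the function are assumptions for illustration.

```python
from decimal import Decimal, ROUND_HALF_UP

# Trusted reference implementation acting as the automated oracle:
# round half up to 2 decimal places.
def oracle_round(value):
    return float(Decimal(str(value)).quantize(Decimal("0.01"),
                                              rounding=ROUND_HALF_UP))

def generate_test_inputs():
    # boundary-style inputs a design tool might derive
    return [0.005, 1.004, 1.005, 2.675, -1.005]

def generate_test_cases():
    # With an oracle available, each input can be paired with its
    # expected result, yielding full test cases, not just inputs.
    return [(x, oracle_round(x)) for x in generate_test_inputs()]

for inp, expected in generate_test_cases():
    print(inp, "->", expected)
```

Without `oracle_round`, the tool could only emit the input list, and a human would have to supply the expected results by hand.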

What is test monitoring in software testing?

Test monitoring can serve various purposes during the project, including the following:

  • Give the test team and the test manager feedback on how the testing work is going, allowing opportunities to guide and improve the testing and the project.
  • Provide the project team with visibility into the test results.
  • Measure the status of the testing, test coverage and test items against the exit criteria to determine whether the test work is done.
  • Gather data for use in estimating future test efforts.

For small projects, the test leader or a delegated person can gather test progress monitoring information manually using documents, spreadsheets and simple databases. But for large teams, distributed projects and long-term test efforts, automated tools make the collection of data more efficient and consistent.

One way to keep the records of test progress information is by using the IEEE 829 test log template. While much of the information related to logging events can be usefully captured in a document, we prefer to capture the test-by-test information in spreadsheets (see Figure 5.1).

Let us take the example shown in Figure 5.1. Columns A and B show the test ID and the test case or test suite name. The state of the test case is shown in column C (‘Warn’ indicates a test that resulted in a minor failure). Column D shows the tested configuration, where the codes A, B and C correspond to test environments described in detail in the test plan. Columns E and F show the defect (or bug) ID number (from the defect-tracking database) and the risk priority number of the defect (ranging from 1, the worst, to 25, the least risky). Column G shows the initials of the tester who ran the test. Columns H through L capture data for each test related to dates, effort and duration (in hours). With metrics for planned and actual effort and dates completed, we can summarize progress against the planned schedule and budget. The spreadsheet can also be summarized in terms of the percentage of tests that have been run and the percentage that have passed and failed.

System test case summary

Figure 5.1 might show a snapshot of test progress during the test execution period. During the analysis, design and implementation of the tests, such a worksheet would instead show how far each test has progressed in its development.
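The pass/fail summary described above is simple arithmetic over the worksheet rows. A minimal sketch, assuming a row layout with a `state` column like column C in Figure 5.1 (the sample rows are invented):

```python
# Summarize a worksheet like Figure 5.1: percentage of tests run,
# passed and failed. 'Warn' counts as a minor failure; an empty state
# means the test has not yet been run.
tests = [
    {"id": "T-01", "state": "Pass"},
    {"id": "T-02", "state": "Fail"},
    {"id": "T-03", "state": "Warn"},  # minor failure
    {"id": "T-04", "state": ""},      # not yet run
]

def summarize(rows):
    total = len(rows)
    run = [r for r in rows if r["state"]]
    passed = [r for r in run if r["state"] == "Pass"]
    failed = [r for r in run if r["state"] in ("Fail", "Warn")]
    pct = lambda n: round(100.0 * n / total, 1)
    return {"run": pct(len(run)),
            "passed": pct(len(passed)),
            "failed": pct(len(failed))}

print(summarize(tests))  # {'run': 75.0, 'passed': 25.0, 'failed': 50.0}
```

In practice the same calculation would be a handful of spreadsheet formulas; the point is that the raw test-by-test rows are enough to derive the progress percentages.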

In addition to test case status, it is also common to monitor test progress during the test execution period by looking at the number of defects found and fixed. Figure 5.2 shows a graph that plots the total number of defects opened and closed over the course of the test execution so far. It also shows the planned test period end date and the planned number of defects that will be found. Ideally, as the project approaches the planned end date, the total number of defects opened will settle in at the predicted number and the total number of defects closed will converge with the total number opened. These two outcomes tell us that we have found enough defects to feel comfortable that we’re done testing, that we have no reason to think many more defects are lurking in the product, and that all known defects have been resolved.

Defects open and closed chart
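The data behind a chart like Figure 5.2 is a pair of cumulative counts. A minimal sketch, with invented daily figures, showing how the opened and closed totals are accumulated and how convergence (closed catching up to opened) can be checked:

```python
# Cumulative defects opened and closed per day of the test period.
daily = [
    # (day, opened_that_day, closed_that_day)
    ("day 1", 5, 0),
    ("day 2", 4, 2),
    ("day 3", 2, 4),
    ("day 4", 1, 6),
]

def cumulative(daily_counts):
    total_open = total_closed = 0
    series = []
    for day, opened, closed in daily_counts:
        total_open += opened
        total_closed += closed
        series.append((day, total_open, total_closed))
    return series

series = cumulative(daily)
_, opened, closed = series[-1]
print(series)
# Convergence: all known defects resolved once closed catches up.
print("converged:", closed >= opened)  # True here (12 opened, 12 closed)
```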

Charts such as Figure 5.2 can also be used to show failure rates or defect density. When reliability is a key concern, we might be more concerned with the frequency with which failures are observed (called failure rates) than with how many defects are causing the failures (called defect density).

Organizations looking to produce ultra-reliable software may plot the number of unresolved defects normalized by the size of the product, in thousands of source lines of code (KSLOC), function points (FP) or some other metric of code size. Once the number of unresolved defects falls below some predefined threshold – for example, three per million lines of code – the product may be deemed to have met the defect density exit criteria.
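The exit-criteria check described above is a one-line calculation. A minimal sketch using the three-per-million-lines threshold from the example (the defect and size figures are invented):

```python
# Defect density exit criterion: unresolved defects normalized by
# product size, compared against a threshold per million lines of code.
def defect_density_per_mloc(unresolved_defects, lines_of_code):
    return unresolved_defects / (lines_of_code / 1_000_000)

def meets_exit_criteria(unresolved_defects, lines_of_code, threshold=3.0):
    return defect_density_per_mloc(unresolved_defects, lines_of_code) < threshold

# e.g. 4 unresolved defects in a 2-million-line product = 2 per MLOC
print(defect_density_per_mloc(4, 2_000_000))  # 2.0
print(meets_exit_criteria(4, 2_000_000))      # True: below 3 per MLOC
```

The same function works unchanged for a KSLOC-based threshold by scaling the divisor, which is why teams state the threshold and the size metric together.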

This is why test progress monitoring techniques vary considerably, depending on the preferences of the testers and stakeholders, the needs and goals of the project, regulatory requirements, time and money constraints and other factors.

In addition to the kinds of information shown in the IEEE 829 Test Log Template, Figures 5.1 and Figure 5.2, other common metrics for test progress monitoring include:

  • The extent of completion of test environment preparation;
  • The extent of test coverage achieved, measured against requirements, risks, code, configurations or other areas of interest;
  • The status of the testing (including analysis, design and implementation) compared to various test milestones;
  • The economics of testing, such as the costs and benefits of continuing test execution in terms of finding the next defect or running the next test.