A test comparator helps to automate the comparison between the actual and the expected result produced by the software.
There are two ways in which the actual results of a test can be compared to the expected results for that test:
i. Dynamic comparison is performed while the test is executing. This type of comparison is well suited to checking, for example, that the wording of an error message displayed on screen matches the correct wording for that message. Dynamic comparison is also useful when an actual result fails to match the expected result in the middle of a test: the tool can be programmed to take a recovery action at that point or to move on to a different set of tests.
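The idea can be sketched in a few lines of Python. The functions `fetch_error_message` and `recover` below are hypothetical stand-ins for a real test-execution tool's hooks into the application under test:

```python
# Minimal sketch of dynamic comparison: the check runs while the test is
# still executing, and a mismatch triggers a recovery action instead of
# aborting the whole run.
EXPECTED_MESSAGE = "Error: invalid account number"

def fetch_error_message():
    # Stand-in for reading the message currently shown on screen.
    return "Error: invalid account number"

def recover():
    # Stand-in for the tool's recovery action, e.g. dismissing the
    # dialog and switching to a different set of tests.
    return "switched to alternate test set"

def dynamic_compare():
    actual = fetch_error_message()
    if actual == EXPECTED_MESSAGE:
        return "pass"
    return recover()

print(dynamic_compare())  # pass
```

The key point is that the comparison result is available mid-test, so the tool can branch on it rather than discovering the mismatch only after the run has finished.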
ii. Post-execution comparison is performed after the test has finished executing and the software under test is no longer running. Operating systems normally provide file comparison tools that can be used for post-execution comparison, and a comparison tool is often developed in-house for comparing a particular type of file or test result. Post-execution comparison is best suited to comparing large volumes of data, for example comparing the contents of an entire file with its expected contents, or comparing a large set of database records with the expected content of those records. Comparing the result of a batch run (e.g. overnight processing of the day's online transactions), for instance, is probably impossible to do without tool support.
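A post-execution comparison can be sketched with Python's standard `difflib` module, which behaves much like an operating system's file comparison tool. The record contents and file names below are purely illustrative:

```python
import difflib

# Post-execution comparison sketch: the software under test has finished
# running and written its output; we now compare the stored actual records
# against the expected baseline.
expected = ["ACC001,100.00", "ACC002,250.50", "ACC003,75.25"]
actual   = ["ACC001,100.00", "ACC002,999.99", "ACC003,75.25"]

diff = list(difflib.unified_diff(expected, actual,
                                 fromfile="expected.txt",
                                 tofile="actual.txt",
                                 lineterm=""))
for line in diff:
    print(line)
```

An empty diff means the test passed; any `-`/`+` lines pinpoint exactly which records differ, which matters when the comparison covers thousands of records from a batch run.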
Whether a comparison is dynamic or post-execution, the test comparator needs to know what the correct result is. This may be stored in the test case itself or it may be computed using a test oracle.
Features or characteristics of test comparators include:
• dynamic comparison of transient events that occur during test execution;
• post-execution comparison of stored data, e.g. in files or databases;
• masking or filtering of subsets of actual and expected results.
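Masking is needed because some fields legitimately differ on every run. A minimal sketch, with illustrative regular-expression patterns for timestamps and run identifiers, might look like this:

```python
import re

# Masking sketch: transient fields such as timestamps and run IDs differ
# on every run, so the comparator replaces them with fixed tokens before
# comparing actual and expected results.
MASKS = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"run-\d+"), "<RUN_ID>"),
]

def mask(line):
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line

expected = "run-1042 completed at 2023-01-05 09:14:22"
actual   = "run-2317 completed at 2024-06-30 17:45:01"

# After masking, the two lines compare as equal.
assert mask(expected) == mask(actual)
print(mask(actual))  # <RUN_ID> completed at <TIMESTAMP>
```

Without such masking, every run would report spurious failures on fields that are expected to vary.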