What is test design? How do we specify test cases?

Test design: specifying test cases

  • Test design is the act of creating and documenting test cases for testing software.
  • Test analysis and identifying test conditions give us a generic idea for testing that covers quite a large range of possibilities. But when we come to write a test case we need to be very specific: we need exact, detailed inputs. However, just having some values to put into the system is not a test. If you don’t know what the system is supposed to do with those inputs, you cannot tell whether your test has passed or failed.
  • Test cases can be documented as described in the IEEE 829 Standard for Test Documentation.
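As a rough illustration, a test case record could carry fields of the kind the IEEE 829 test case specification describes (identifier, input specifications, expected output specifications, environmental needs). This sketch is only an assumption about how one might structure such a record in code; the field names are illustrative, not mandated by the standard.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """Illustrative test case record with IEEE 829-style fields."""
    identifier: str          # unique test case identifier
    description: str         # what the test case exercises
    inputs: dict             # the exact, specific input values
    expected_output: object  # the predicted result, decided in advance
    environment: str = "default"  # environmental needs, if any


# Example: a fully specified test case, not just "some values to input".
tc = TestCase(
    identifier="TC-001",
    description="Add two positive integers",
    inputs={"a": 2, "b": 3},
    expected_output=5,
)
print(tc.identifier, tc.inputs, tc.expected_output)
```

The point of the structure is that the expected output is recorded alongside the inputs, so pass/fail can be judged objectively when the test is run.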
  • One of the most important aspects of a test is that it checks that the system does what it is supposed to do. Copeland puts it this way: ‘At its core, testing is the process of comparing “what is” with “what ought to be”’ [Copeland, 2003]. If we simply put in some inputs and conclude that the system is probably OK because it didn’t crash, are we actually testing it? We don’t think so. We have observed that the system does what the system does, but that is not a test. Boris Beizer refers to this as ‘kiddie testing’ [Beizer, 1990]. We may not know the right answer in detail every time, and we can still get some benefit from this approach at times, but it isn’t really testing. In order to know what the system should do, we need a source of information about the correct behavior of the system – this is called an ‘oracle’ or a test oracle.
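Copeland’s comparison of “what is” with “what ought to be” can be sketched in code: the oracle supplies the expected result independently of the system under test, and the test is the comparison of the two. Both `system_under_test` and `oracle` below are hypothetical stand-ins, not names from the source.

```python
def system_under_test(x: int) -> int:
    # "What is": the implementation whose behavior we are checking.
    return x * x


def oracle(x: int) -> int:
    # "What ought to be": an independent source of the correct answer,
    # e.g. derived from the specification.
    return x ** 2


def run_test(x: int) -> bool:
    expected = oracle(x)            # what ought to be
    actual = system_under_test(x)   # what is
    return actual == expected       # the test is the comparison


print(all(run_test(x) for x in range(-5, 6)))  # → True
```

Without the `oracle` function there is nothing to compare against, and running `system_under_test` alone only shows that the system does what the system does.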
  • Once a given input value has been chosen, the tester needs to determine the expected result of entering that input and document it as part of the test case. Expected results include the information displayed on screen in response to an input. If we don’t decide on the expected results before we run a test, we might still notice something wildly wrong, but we would probably not notice small differences in calculations, or results that merely seemed to look OK. We would then conclude that the test had passed when in fact the software had not given the correct result. Small differences in one calculation can add up to something very major later on, for example if results are multiplied by a large factor. Hence, ideally, expected results should be predicted before the test is run.
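A small invented example of why the expected result should be predicted in advance: a rate of 0.0501 instead of 0.05 produces an answer that “looks OK” at a glance, and only a pre-computed expected value exposes it. The defect, the function, and the figures here are all assumptions made up for illustration.

```python
def interest(principal: float) -> float:
    # Defect: the rate should be 0.05 (5%), but 0.0501 was coded.
    return principal * 0.0501


# Expected result predicted BEFORE running the test, from the
# specification: 5% of 1000 is 50.0.
EXPECTED = 50.0

actual = interest(1000.0)
passed = abs(actual - EXPECTED) < 0.01
print(f"actual={actual}, passed={passed}")  # actual=50.1, passed=False

# The same small error, multiplied by a large factor, becomes major:
print(interest(1_000_000.0) - 50_000.0)  # → 100.0 too much
```

Eyeballing `50.1` without the predicted `50.0` would likely pass the test; with the expected result documented first, the comparison fails and the defect is caught before it is amplified.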