Assume you are given the following description of a small program: "A user enters three integer values using a dialog box. Each value represents the length of one side of a triangle. The user clicks a button and the program analyzes the values and displays a message stating whether the triangle is scalene (no two sides are equal), isosceles (two sides are equal), or equilateral (all sides are equal)." How can you properly test such a program?
While the purpose of all these sections on testing is to answer the above question fully, for this example we can test the program by writing a set of test cases and then running them against it. A test case can be defined as a combination of specific input and expected results. At the very least, you should have the following test cases:
A valid scalene triangle. Note that a test case with input values of 1, 2, 3 does not warrant a 'yes' answer because no such triangle exists (1 + 2 is not greater than 3).
A valid equilateral triangle.
A valid isosceles triangle. Again, note that a test case with input values of 1, 1, 3 does not warrant a 'yes' answer because no such triangle exists (1 + 1 is less than 3).
A valid isosceles triangle such that you test all three permutations of two equal sides (i.e., 3,3,4; 3,4,3; and 4,3,3).
One side has a zero value.
All sides have a zero value.
One side has a negative value.
All sides have a negative value.
All sides are non-integer values (doubles, strings, etc.).
Wrong number of values (i.e., two rather than three values were specified).
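To make the exercise concrete, here is a hypothetical sketch of the triangle program's core logic (the description above gives no code, so the function name, return values, and validation rules are assumptions), followed by the test cases listed above expressed as input/expected-result pairs:

```python
# Hypothetical sketch of the triangle program's core logic; names and
# return values are assumptions made for illustration.

def classify_triangle(*sides):
    """Classify three sides as scalene, isosceles, equilateral, or invalid."""
    if len(sides) != 3:                          # wrong number of values
        return "invalid"
    if not all(isinstance(s, int) for s in sides):
        return "invalid"                         # non-integers (doubles, strings, ...)
    if any(s <= 0 for s in sides):               # zero or negative lengths
        return "invalid"
    a, b, c = sorted(sides)
    if a + b <= c:                               # fails the triangle inequality
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c:
        return "isosceles"
    return "scalene"

# The test cases from the list above: (input, expected result)
cases = [
    ((3, 4, 5), "scalene"),
    ((1, 2, 3), "invalid"),       # degenerate: 1 + 2 is not > 3
    ((5, 5, 5), "equilateral"),
    ((1, 1, 3), "invalid"),       # 1 + 1 is not > 3
    ((3, 3, 4), "isosceles"),     # all three permutations of the equal sides
    ((3, 4, 3), "isosceles"),
    ((4, 3, 3), "isosceles"),
    ((0, 4, 5), "invalid"),       # one zero side
    ((0, 0, 0), "invalid"),       # all zero sides
    ((-3, 4, 5), "invalid"),      # one negative side
    ((-1, -1, -1), "invalid"),    # all negative sides
    ((2.5, "x", 3), "invalid"),   # non-integer values
    ((3, 4), "invalid"),          # wrong number of values
]
for inputs, expected in cases:
    assert classify_triangle(*inputs) == expected, (inputs, expected)
```

Note that each pair records the expected result in advance, which is exactly the definition of a test case given above.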
The whole point of this exercise is to illustrate that even the testing of a very trivial program such as this is not an easy task. Consider testing a real-time trading application with tens of thousands of lines of code! Testing also becomes more difficult for object-oriented languages such as C++ and C#, where your test cases must expose errors associated with object instantiation and memory management.
The rest of this section and the subsequent chapters show that testing of trivial and even complex programs is non-trivial, very necessary, and very achievable.
What is your definition of "testing"? Most programmers begin with a wrong definition. For example, testing is often wrongly defined as:
"Testing is demonstrating that the program works".
"Testing is demonstrating that the program performs its intended functions correctly".
"Testing is establishing confidence that a program does what it is supposed to do."
These definitions are all wrong.
When you test a program, you want to add value to it. Adding value through testing means raising the quality and reliability of the program. Raising the reliability of the program means finding and removing errors. Therefore, an appropriate definition of testing is:
Testing is the process of executing a program with the intent of finding errors.
Although it may seem that there is a very subtle difference between the above proper definition and the other wrong definitions, there is really an important distinction. Understanding the true definition of software testing is the key to the success of your testing efforts.
This definition of testing has many implications. For example, it implies that testing is a destructive, even sadistic process. This may go against the grain, since most of us (hopefully) have a constructive rather than a destructive outlook on life. This definition of testing also has implications for how test cases (test data and expected results) should be designed and who should or should not test a program.
There is a profound psychological angle to testing: human beings tend to be highly goal-oriented, and establishing the proper goal has an important psychological effect. If your goal is to establish that a program has no errors, then you will subconsciously be steered towards that goal; in testing terminology, you will tend to select test data that has a low probability of causing the program to fail. On the other hand, if your goal is to establish that a program does indeed have errors, then you will subconsciously be steered towards that goal and will tend to select test data that has a high probability of causing the program to fail. Obviously, the second approach will add more value to the program.
Another way of reinforcing the proper meaning of testing is to be particularly aware of the meaning of "successful" and "unsuccessful" in categorising the results of test cases. Most would call a test case run that did not find an error a "successful test case", whereas a test case run that did find errors would be called "an unsuccessful test case". Again, this use of successful and unsuccessful is wrong. A test case that finds a new error is hardly unsuccessful; rather, it has proven to be a valuable instrument. An unsuccessful test case is one that causes the program to produce the correct result without finding any errors.
To summarize, program testing should be viewed as a destructive process of trying to find errors. A successful test case is one that furthers progress in this direction by causing the program to fail.
In general, it is impractical and often impossible to find all the errors in a program. This fundamental problem has implications for the economics of testing, assumptions that the tester will have to make about the program, and the manner in which test cases are designed. To address the challenges associated with testing economics, two testing strategies will be used: black-box testing and white-box testing.
Black-box testing is often referred to as data-driven, or input/output-driven testing. In black-box testing, you are completely unconcerned about the internal behaviour and structure of the program. Instead, you concentrate on finding circumstances in which the program does not behave according to its specifications. This implies that with this approach, test data are derived solely from the program specification without taking advantage of any knowledge of program internal structure and behaviour.
If you want to use this approach to find all possible errors in the program, then you would have to do exhaustive input testing, making use of every possible input condition as a test case. Why? Going back to the triangle program, running a test case with inputs of 10,10,10 does not guarantee the correct detection of all equilateral triangles. For example, the program could be using an unsigned byte to represent triangle lengths, and any values over 255 may cause the program to throw an overflow exception. Worse yet, the program could contain a subtle bug where values of 64, 64, 64 are classified as a scalene triangle! Since the program is a black box, the only way to be sure of detecting such subtle bugs is by trying every possible input condition!
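To make the overflow scenario concrete, the following illustrative sketch simulates the hypothetical unsigned-byte representation by forcing 8-bit wrap-around arithmetic (values reduced mod 256), roughly as a C program storing the sides in an unsigned char might behave; the function and its wrap behaviour are assumptions for demonstration only:

```python
# Simulates a hypothetical buggy implementation that stores side lengths
# and intermediate sums in an unsigned byte (values wrap modulo 256).

def classify_8bit(a, b, c):
    a, b, c = sorted(v & 0xFF for v in (a, b, c))   # wrap each side to 0..255
    if (a + b) & 0xFF <= c:                          # the sum wraps too
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c:
        return "isosceles"
    return "scalene"

print(classify_8bit(10, 10, 10))     # "equilateral" -- this test case passes
print(classify_8bit(200, 200, 200))  # "invalid" -- 200+200 wraps to 144, bug exposed
```

A black-box test of 10,10,10 succeeds and reveals nothing; only a test with larger values exposes the hidden representation bug.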
To test the triangle program exhaustively, you would have to create test cases for all valid triangles up to the maximum integer size of the development language. This is an impractically large number of test cases. Remember, we are talking about testing a very trivial triangle program. Consider attempting exhaustive black-box testing of a C# compiler. Not only would you have to create test cases representing all valid C# programs (an infinite number), but you would also have to create test cases for all invalid C# programs (again, an infinite number) to ensure that the compiler detects them as invalid. The problem is even worse for programs that have state (memory), such as operating systems, database applications, or real-time trading applications.
This discussion shows that exhaustive input testing is simply impossible. The implications of this are: 1) you cannot black-box test a program to ensure that it is error free, and 2) a fundamental consideration in program testing is one of economics. In other words, since exhaustive input testing is out of the question, the objective of black-box testing should be to maximize the yield of the testing investment by maximizing the number of errors found by a finite number of test cases.
Another testing strategy is white-box or logic-driven testing. White-box testing allows you to examine the internal structure of the program. This strategy derives test data from an examination of the program's logic (often, unfortunately, at the neglect of the specification).
The white-box testing analogue for exhaustive-input testing in the black-box approach is exhaustive-path testing. In other words, if you execute, via test cases, all paths of control flow through the program, then possibly the program has been completely tested. There are two flaws with this statement:
The number of unique logic paths through a program can be astronomically large.
Every path in a program could be tested, yet the program might still contain many errors. There are three explanations for this:
Exhaustive path testing does not mean that a program matches its specification. For example, you may test all paths in an ascending sort function, but the function still has a bug if it produces a descending sorted result.
A program may be incorrect because of missing paths. Exhaustive path testing does not detect the absence of required paths.
Exhaustive path testing may not uncover data-sensitive errors.
Although exhaustive input testing is superior to exhaustive path testing, neither proves useful because both are impossible to achieve. The best approach is to combine elements of black-box and white-box testing to derive a reasonable, but not air-tight, testing strategy. This approach is discussed in the Test Case Design section.
The following lists the most important testing principles:
A test case must include a definition of the expected output or result.
A programmer should not test his/her own program.
A programming organization should not test its own programs.
Fully inspect the results of each test.
Test cases must be written for input conditions that are valid and expected as well as for those that are invalid and unexpected.
Programs must be examined for unwanted side effects.
Avoid throwaway test cases unless the program is a throwaway program.
Do not plan a testing effort assuming that no errors will be found.
The probability of the existence of more errors in a module is proportional to the number of errors already found in that module.
A test case must include a definition of the expected output or result
Although this principle is very obvious, it is one of the most frequent mistakes in testing. The eye sees what it wants to see, and if output has not been predefined, chances are that a plausible but erroneous result can be interpreted as a correct result. A test case should therefore always include two components:
A description of the input data.
A precise description of the correct output for that set of input.
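A minimal sketch of this idea as data (the record type and field names are invented for illustration): every test case pairs its input with a precomputed expected result, so the outcome is never judged by eye after the fact.

```python
# A test case record: input data plus a precise, predefined expected result.
from dataclasses import dataclass

@dataclass(frozen=True)
class TestCase:
    inputs: tuple    # description of the input data
    expected: str    # precise description of the correct output

# The expected value is written down BEFORE the program is run.
case = TestCase(inputs=(3, 4, 5), expected="scalene")
actual = "scalene"                      # stand-in for the program's output
assert actual == case.expected          # comparison, not interpretation
```

Because the expected result is fixed in advance, a plausible but erroneous output cannot be rationalised as correct after the run.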
A programmer should not test his/her own program
After a programmer has constructively designed and coded a program, it becomes extremely difficult to change perspective and look at the program with a destructive eye. In addition to this psychological problem, another significant problem arises from the fact that the program may contain errors due to the programmer's own misunderstanding of the problem statement or requirements. If this is the case, the programmer will carry the same misunderstanding into the testing of his/her program.
A programming organization should not test its own programs
The argument here is similar to principle 2. A project or programming organization is really a living organism with psychological problems similar to those of individual programmers. Also, in most cases, a programming organization or a project manager is largely measured on the ability to deliver software by a given date for a certain cost, whereas it is extremely difficult to quantify the reliability of a program.
Fully inspect the results of each test
This principle is often overlooked. Errors found by later tests are often ones whose symptoms were already present, but missed, in the results of earlier tests.
Test cases must be written for input conditions that are valid and expected as well as for those that are invalid and unexpected
There is a psychological tendency when testing a program to concentrate on valid and expected input conditions at the neglect of invalid and unexpected conditions.
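A brief sketch of the principle in practice (the parse_side helper is hypothetical, invented to stand for the triangle program's input handling): invalid and unexpected inputs get their own test cases, and the expected result of each is an explicit rejection.

```python
# Hypothetical input-validation helper for the triangle program's dialog box.

def parse_side(text):
    """Parse one side length entered as text; reject anything invalid."""
    value = int(text)            # raises ValueError for "abc", "2.5", ""
    if value <= 0:
        raise ValueError("side must be a positive integer")
    return value

# Valid, expected input
assert parse_side("7") == 7

# Invalid, unexpected inputs: the expected result is a rejection
for bad in ["abc", "2.5", "", "0", "-3"]:
    try:
        parse_side(bad)
    except ValueError:
        pass                     # rejection is the correct behaviour
    else:
        raise AssertionError(f"{bad!r} was wrongly accepted")
```

Note that the invalid cases outnumber the valid one here, deliberately countering the tendency this principle warns about.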
Programs must be examined for unwanted side effects
This is a corollary of the previous principle. For example, a trading application that books trades made by existing traders is still an erroneous program if it can also book trades for non-existing traders.
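The trading example can be sketched as follows (TradeBook and its methods are invented for illustration, not taken from any real system): the test must probe for the unwanted side effect, not just the advertised behaviour.

```python
# Hypothetical trade-booking component used to illustrate side-effect testing.

class TradeBook:
    def __init__(self, traders):
        self._traders = set(traders)
        self.trades = []

    def book(self, trader, symbol, qty):
        if trader not in self._traders:       # guard against the side effect
            raise KeyError(f"unknown trader: {trader}")
        self.trades.append((trader, symbol, qty))

book = TradeBook(traders={"alice", "bob"})
book.book("alice", "ACME", 100)               # the advertised behaviour works

# The unwanted side effect to test for: booking under a non-existing trader
try:
    book.book("mallory", "ACME", 100)
except KeyError:
    pass                                      # correctly rejected
assert len(book.trades) == 1                  # no trade was silently booked
```

The final assertion is the important one: it checks that the rejected call left no trace, which is exactly the kind of side effect a test focused only on valid traders would never examine.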
Avoid throwaway test cases unless the program is a throwaway program
There is a natural tendency for the re-test of the program to be much less rigorous than the original test of the program. Saving test cases and running them again after changes to other components of the program is known as regression testing.
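A minimal sketch of regression testing under these assumptions (the classify function here is a small stand-in written for illustration): the test cases are saved alongside the program and replayed in full after every change.

```python
# Stand-in triangle classifier, written only so the regression run is concrete.
def classify(a, b, c):
    a, b, c = sorted((a, b, c))
    if a <= 0 or a + b <= c:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c:
        return "isosceles"
    return "scalene"

# Saved test cases: persisted with the source, not thrown away after one run.
SAVED_CASES = [
    ((3, 4, 5), "scalene"),
    ((5, 5, 5), "equilateral"),
    ((1, 2, 3), "invalid"),
]

def run_regression(program):
    """Replay every saved case; return the list of failures (empty = clean)."""
    return [(inp, exp, program(*inp))
            for inp, exp in SAVED_CASES
            if program(*inp) != exp]

assert run_regression(classify) == []   # rerun after every change to the program
```

Because the suite is cheap to rerun, the re-test after a change can be exactly as rigorous as the original test, which is the point of this principle.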
Do not plan a testing effort assuming that no errors will be found
This is a sign of incorrect understanding of testing. Once again, testing is the process of executing a program with the intent of finding errors.
The probability of the existence of more errors in a module is proportional to the number of errors already found in that module
In other words, some sections of the software seem to be much more prone to errors than others. Additional testing effort is best focused on these error-prone modules.