Test Terminology

This is a quick introduction to much of the terminology used in the science (and art) of testing. Do not worry if you do not understand it all fully. We will examine these topics more deeply by looking closely at test strategies (white-box, black-box, top-down, and bottom-up) and the testing levels (unit, integration, and system tests).

Testing is the process (and art) of finding errors; it is the ultimate review of specifications, design, and coding. The purpose of testing is to guarantee that all elements of an application mesh properly, function as expected, and meet performance criteria. 

Testing is a difficult activity to accept mentally because we deliberately analyze our own work, or that of our peers, to find fault. Thus, after working in groups and becoming productive teams, we must seek out each other's mistakes through testing. When the people conducting the test are not on the project team, as with acceptance testers, they can be viewed as adversaries.

Testing is a difficult activity for management to accept because it is costly, time consuming, and rarely finds all errors. Frequently, resources are difficult to obtain, and the risks of not testing are inadequately analyzed. The result is that most applications are not tested enough and are delivered with 'bugs'.

Research studies show that software errors tend to cluster in modules. As errors are found in a tested unit, the probability that more errors are present increases. Because of this phenomenon, the more severe the errors found, the lower the confidence in the overall quality and reliability of the tested unit should be.

In this chapter, we discuss useful strategies for testing and the strategies which are most applicable to each level of testing. Then, we discuss each level of testing and develop test plan examples for the ABC rental system. Finally, automated test support within CASE tools and independent test support tools are defined and examples listed. The next section defines testing terminology.


Testing Terminology

As stated above, testing is the process (and art) of finding errors. A good test has a high probability of finding undiscovered errors. A successful test is one that finds new errors; a poor test is one that never finds errors.

There are two types of errors in applications. A Type 1 error defines code that does not do what it is supposed to do; these are errors of omission. A Type 2 error defines code that does something it is not supposed to do; these are errors of commission. Type 1 errors are most prevalent in newly developed applications. Type 2 errors predominate in maintenance applications which have code 'turned off' rather than removed. Good tests identify both types of errors.
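
To make the distinction concrete, here is a minimal sketch in Python; the discount rule and all names are hypothetical, invented purely for illustration. One routine omits required behavior, the other adds unspecified behavior.

# Hypothetical specification: give 10% off orders of $100 or more,
# and do nothing else.

def discount_type1(total):
    """Type 1 error (omission): the required discount is never applied."""
    return total  # missing: the 10% discount for totals >= 100

def discount_type2(total):
    """Type 2 error (commission): an unspecified extra discount appears."""
    if total >= 100:
        total *= 0.90   # correct, specified behavior
    if total > 500:
        total *= 0.95   # wrong: no extra 5% discount was ever specified
    return total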

Testing takes place at different levels and is conducted by different individuals during application development. In this chapter we discuss the testing performed by the project team and the testing performed by outside agents for application acceptance. Project team tests are termed developmental tests. Developmental tests include unit, subsystem, integration, and system tests. Tests by outside agents are called quality assurance (QA) and acceptance tests. The relationship between testing levels and project life-cycle phases is summarized in Figure 17-1.

A unit test is performed for each of the smallest units of code. Subsystem and integration tests verify the logic and processing for suites of modules that perform some activity, verifying the communication between them. System tests verify that the functional specifications are met, that the human interface operates as desired, and that the application works in the intended operational environment, within its constraints. During maintenance, testers use a technique called regression testing in addition to the other types of tests. Regression tests are customized to verify that changes to an application have not caused it to regress to some state of unacceptable quality.
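
As an illustration, below is a minimal unit test sketch using Python's standard unittest module; the late-fee routine is a hypothetical unit invented for the ABC rental system example, not taken from the text. Rerunning such a suite after every maintenance change is the simplest form of regression testing.

import unittest

# Hypothetical unit under test: computes a late fee for the ABC rental system.
def late_fee(days_late, rate=1.50):
    if days_late <= 0:
        return 0.0
    return round(days_late * rate, 2)

class LateFeeUnitTest(unittest.TestCase):
    """Unit test: exercises the smallest unit of code in isolation."""

    def test_no_fee_when_on_time(self):
        self.assertEqual(late_fee(0), 0.0)

    def test_fee_accumulates_per_day(self):
        self.assertEqual(late_fee(3), 4.50)

if __name__ == "__main__":
    unittest.main()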

Finally, outside agents perform quality assurance (QA) and acceptance tests of the application. The outside agent is either the user or a user representative. The goal is to perform an objective, unbiased assessment of the application, and an outside agent is considered more objective than a team member. QA tests are similar to system tests in their makeup and objectives, but they differ in that they are beyond the control of the project team. QA test reports are usually sent to IS and user management in addition to the project manager. The QA testers plan their own strategy and conduct their own tests to ensure that the application meets all functional requirements. QA testing is the last testing done before an application is placed into production status.

Each test level requires the definition of a strategy for testing. Strategies are either white box or black box, and either top-down or bottom-up. Black-box strategies use a 'toaster mentality': you plug it in, and it is supposed to work (see Figure 17-2). Test input data are designed to generate variations of the outputs without regard to how the logic actually functions. The results are predicted and compared to the actual results to determine the success of the test.
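
The sketch below illustrates the black-box idea in Python; the late-fee function and its specification are hypothetical. The tester sees only the specification: each case pairs an input with the output predicted from the specification alone, and actual results are compared afterward.

# Hypothetical unit; the tester treats it as an opaque box.
def late_fee(days_late, rate=1.50):
    return 0.0 if days_late <= 0 else round(days_late * rate, 2)

# (input, predicted output) pairs derived from the spec, not the code.
cases = [
    (0, 0.0),     # boundary: exactly on time
    (1, 1.50),    # minimum late period
    (10, 15.00),  # typical late period
    (-2, 0.0),    # early return treated as no fee
]

for days, predicted in cases:
    actual = late_fee(days)
    print(f"input={days:>3}  predicted={predicted:>6}  actual={actual:>6}  "
          f"{'pass' if actual == predicted else 'FAIL'}")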


FIGURE 17-1 Correspondence between Project Life-Cycle Phases and Testing


FIGURE 17-2 Black Box Data Testing Strategy

White-box strategies open up the 'box' and look at specific logic of the application to verify how it works (see Figure 17-3). Tests use logic specifications to generate variations of processing and to predict the resulting outputs. Intermediate and final output results can be predicted and validated using white-box tests. 
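
A common white-box tactic is to derive one test case per branch of the code. The following Python sketch (the membership rule and all names are hypothetical) chooses inputs from the logic itself so that every branch executes at least once.

# Hypothetical logic under test: may a customer rent an item?
def rental_allowed(age, has_card, overdue_items):
    if age < 18:             # branch 1: minors may not rent
        return False
    if not has_card:         # branch 2: membership card required
        return False
    if overdue_items > 3:    # branch 3: too many overdue items
        return False
    return True              # branch 4: all checks pass

# One test case per branch, each with its predicted result.
assert rental_allowed(16, True, 0) is False   # exercises branch 1
assert rental_allowed(30, False, 0) is False  # exercises branch 2
assert rental_allowed(30, True, 5) is False   # exercises branch 3
assert rental_allowed(30, True, 1) is True    # exercises branch 4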

The second type of testing strategy defines how the test and code development will proceed. Top-down testing assumes that critical control code and functions will be developed and tested first (see Figure 17-4). These are followed by secondary functions and supporting functions. The theory is that the more often critical modules are exercised, the higher the confidence in their reliability can be.
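
In practice, top-down testing replaces not-yet-written lower-level modules with stubs that return canned values, so the control logic can be exercised early and often. Below is a minimal Python sketch; all module names are hypothetical.

# Stubs stand in for subordinate modules that do not exist yet.
def check_availability_stub(item_id):
    return True             # stub: real inventory lookup not yet written

def record_rental_stub(customer_id, item_id):
    return "RENT-0001"      # stub: real database write not yet written

def process_rental(customer_id, item_id,
                   check_availability=check_availability_stub,
                   record_rental=record_rental_stub):
    """Critical control module under test: coordinates the transaction."""
    if not check_availability(item_id):
        return None
    return record_rental(customer_id, item_id)

# The control logic can be tested repeatedly before its subordinates exist.
assert process_rental("C42", "TAPE-7") == "RENT-0001"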

Bottom-up testing assumes that the lower the number of incremental changes in modules, the lower the error rate. Complete modules are coded and unit tested (see Figure 17-5). Then the tested module is placed into integration testing. The test strategies are not mutually exclusive; any of them can be used individually or in combination.
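
Bottom-up testing works the other way around: a completed low-level module is exercised by a throwaway test driver before being promoted to integration testing. A minimal Python sketch, with hypothetical names and data:

# Completed low-level module: price of one rental.
def rental_total(daily_rate, days):
    if days < 1:
        raise ValueError("rental must be at least one day")
    return round(daily_rate * days, 2)

def driver():
    """Test driver: feeds the unit its inputs and reports results."""
    for rate, days, predicted in [(3.00, 1, 3.00), (2.50, 4, 10.00)]:
        actual = rental_total(rate, days)
        print(f"rate={rate} days={days} predicted={predicted} "
              f"actual={actual} {'pass' if actual == predicted else 'FAIL'}")

driver()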

The test strategy chosen constrains the type of errors that can be found, sometimes necessitating the use of more than one. Ideally, the test for the application combines several strategies to uncover the broadest range of errors.

After a strategy is defined, it is applied at each level of testing to develop actual test cases. Test cases are individual transactions or data records that cause logic to be tested. For every test case, all results of processing are predicted. For on-line and real-time applications, test scripts document the interactive dialogue that takes place between the user and the application, and the changes that result from the dialogue. A test plan documents the strategy, type, cases, and scripts for testing some component of an application. Together, these plans constitute the test plan for the application.
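
One simple way to make the predictions explicit is to record each test case and each script step as a data record before the test runs. The Python sketch below is illustrative only; all field names and values are hypothetical.

# A test case: inputs plus the predicted result of processing.
test_case = {
    "id": "TC-017",
    "component": "late-fee calculation",
    "input": {"days_late": 3, "rate": 1.50},
    "predicted_output": 4.50,
}

# A test-script step: the user-application dialogue and its predicted effect.
script_step = {
    "step": 1,
    "user_action": "enter customer id C42 on the rental screen",
    "predicted_response": "customer name and outstanding rentals displayed",
    "resulting_change": "none (read-only lookup)",
}

print(test_case["id"], "predicts", test_case["predicted_output"])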

Testing is iterative until no errors, or some acceptable number of errors, are found. The first step of the testing process is to conduct the actual test, which requires test inputs, a test configuration, and the application code. The second step is to compare the results of the test to the predicted results and evaluate the differences to find errors. The next step is to remove errors, or 'debug' the code. When recoding is complete, the changes are retested to ensure that each module works. The revised modules are then reentered into the testing cycle until a decision to end testing is made. This cycle of testing is depicted in Figure 17-6 for a top-down strategy.
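
The cycle can be pictured as a loop: conduct the test, compare against predictions, debug, and re-enter until the error count is acceptable. The following Python sketch is only a schematic of that process; run_tests and debug stand in for the real activities.

# Schematic of the iterative test cycle.
def testing_cycle(run_tests, debug, acceptable_errors=0, max_rounds=10):
    for round_number in range(1, max_rounds + 1):
        errors = run_tests()                  # step 1: conduct the test
        print(f"round {round_number}: {len(errors)} error(s) found")
        if len(errors) <= acceptable_errors:  # decision to end testing
            return True
        debug(errors)                         # remove errors, then retest
    return False

# Tiny demonstration with canned results: two rounds of errors, then clean.
rounds = [["E1", "E2"], ["E3"], []]
assert testing_cycle(lambda: rounds.pop(0), lambda errs: None) is True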


FIGURE 17-3 White Box Logic Testing Strategy


FIGURE 17-4 Top-Down Testing Strategy

The process of test development begins during design. The test coordinator assigned should be a capable programmer-analyst who understands the requirements of the application and knows how to conduct testing. The larger and more complex the application, the more senior and skilled the test coordinator should be. A test team may also be assigned to work with the coordinator on large, complex projects. The test team uses the functional requirements from the analysis phase and the design and program specifications from the design phase as input to begin developing a strategy for testing the system. As the strategy evolves, walk-throughs are held to verify it and communicate it to the entire test team.

Duties for all levels of testing are assigned, and time estimates for test development and completion are developed. The test team works independently of, and in parallel with, the development team. It works with the DBA to develop a test database that can support all levels of testing. For unit testing, the test team verifies results and accepts modules and programs for integration testing. The test team then conducts and evaluates the integration and system tests.


FIGURE 17-5 Bottom-Up Testing Strategy


Source: Sue Conger, https://resources.saylor.org/CS/CS302/OER/The_New_Software_Engineering.pdf
This work is licensed under a Creative Commons Attribution 3.0 License.
