Test Plan for ABC Video Order Processing

Site: Saylor Academy
Course: CS302: Software Engineering
Book: Test Plan for ABC Video Order Processing
Printed by: Guest user
Date: Friday, June 24, 2022, 6:39 PM

Description

In this section, see how ABC Video designs testing to validate that specification, design, and coding mesh with functional and non-functional requirements of the system.

Test Strategy

Developing a Test Strategy 

There are no rules for developing a test strategy. Rather, loose heuristics are provided. Testing, like everything else in software engineering, is a skill that comes with practice. Good testers are among the most highly skilled workers on a development team. A career can revolve around testing because skilled testers are in short supply.

As with all testing projects, the strategy should be designed to prove that the application works and that it is stable in its operational environment. Although schedule and time allotted are not the primary concern, one subgoal in devising the strategy is to minimize the amount of time and resources (both human and computer) devoted to testing.

The first decision is whether and what to test top-down and bottom-up. There are no rules, or even heuristics, for this decision. Commitment to top-down testing is as much cultural and philosophical as it is technical. To provide some heuristics, in general, the more critical, the larger, and the more complex an application, the more top-down benefits outweigh bottom-up benefits.

TABLE 17-2 Test Level and Test Strategy

Level                         General Strategy  Specific Strategy         Comments on Use

System/QA-Human Interface     White-Box         Condition Logic           May be used for critical logic.
                                                Multiple Condition Logic  May be used for critical logic.
System/QA-Constraints         Black-Box         Equivalence Partitioning  May be useful at the execute unit level.
                                                Boundary Value Analysis   Should not be required at this level but could be used.
                                                Cause-Effect Graphing     Might be useful for defining how to measure constraint compliance.
                              White-Box         Multiple Condition Logic  Could be used but generally is too detailed at this level of test.
                              Live-Data         Reality Test              Useful for black-box type tests of constraints after created data tests are successful.
System/QA-Peak Requirements   White-Box         Multiple Condition Logic  May be used for critical logic, but generally too detailed for this level of testing.
                              Live-Data         Reality Test              Most useful for peak testing.

The heuristics of testing depend on the language, timing, and operational environment of the application. Significantly different testing strategies are needed for third-generation (e.g., COBOL, PL/1), fourth-generation (e.g., Focus, SQL), and semantic (e.g., Lisp, PROLOG) languages. Application timing is either batch, on-line, or real-time. The operational environment includes hardware, software, and other co-resident applications. Heuristics for each of these are summarized in Table 17-3.

Package testing differs significantly from self-developed code. More often, when you purchase package software, you are not given the source code or the specifications. You are given user documentation and an executable code. By definition, you have to treat the software as a black box. Further, top-down testing does not make sense because you are presented with a complete, supposedly working, application. Testing should be at the system level only, including functional, volume, intermodular communications, and data-related black-box tests. Next, we consider the ABC test strategy.


ABC Video Test Strategy 

The ABC application will be developed using some SQL-based language. SQL is a fourth-generation language which simplifies the testing process and suggests certain testing strategies. The design from Chapter 10, Data-Oriented Design, is used as the basis for testing, although the arguments are the same for the other methodologies. 

First, we need to decide the major questions: Who? What? Where? When? How?


TABLE 17-3 Test Strategy Design Heuristics

                                                  Rule
Condition                   1      2      3      4      5      6      7        8        9       10

Critical                    Y      Y      -      -      N      N      N        N        N       N
Large                       Y      -      Y      -      N      N      N        N        N       N
Complex                     Y      -      -      Y      N      N      N        N        N       N
Timing                      -      -      -      -      BS     BE     BS       BE       BE      -
Language Generation         -      -      -      -      2      2      3/3      3        4

Test Strategy

Top-Down/Bottom-Up,         Both   Both   Either Either Either Either Cont: T  Either   Both    Cont: T
Both, or Either                                                       Mod: B                    Mod: B

Black/White/Both/Either     Both   Both   Cont: W Cont: W Both  Either Either   Either   Cont: W Bl
                                          Mod: Bl Mod: Bl             or Both  or Both  Mod: Bl

Legend

Y    = Yes                  N    = No                 Both = Both
BS   = Batch-stand-alone    BE   = Batch-execute unit
Cont = Control Structure    Mod  = Modules
T    = Top-down             B    = Bottom-up
W    = White                Bl   = Black


Who? The test coordinator should be a member of the team. Assume it is yourself. Put yourself into this role and think about the remaining questions and how you would answer them if you were testing this application. 

What? All application functions, constraints, user acceptance criteria, human interface, peak performance, recoverability, and other possible tests must be performed to exercise the system and prove its functioning. 

Where? The ABC application should be tested in its operational environment, in part to test that environment. This means that all hardware and software of the operational environment should be installed and tested. If Vic, or the responsible project team member, has not yet installed and tested the equipment, that delay now holds up application testing. 

When? Since a 4GL is being used, we can begin testing as soon as code is ready. An iterative, top-down approach will be used. This approach allows Vic and his staff early access to familiarize themselves with the application. Testing at the system level needs to include the scaffold code to support top-down testing. The schedule for module coding should identify and schedule all critical modules for early coding. The tasks identified so far are:

1. Build scaffold code and test it. 

2. Identify critical modules. 

3. Schedule coding of critical modules first. 

4. Test and validate modules as developed using the strategy developed.

How? Since a top-down strategy is being used, we should identify critical modules first. Since the application is completely on-line, the screen controls and navigation modules must be developed before anything else can be tested. Also, since the application is being developed specifically to perform rental/return processing, rental/return processing should be the second priority. Rental/return cannot be performed without a customer file and a video file, both of which are populated through the respective create modules. Therefore, the creation modules for the two files have a high priority.

The priority definition of create and rental/return modules provides a prioritized list for development. The scaffolding should include the test screens, navigation, and stubs for all other processing. The last item, backup and recovery testing, can be parallel to the others. 

Next, we want to separate the activities into parallel equivalent chunks for testing. By having parallel testing streams, we can work through the system tests for each parallel stream simultaneously, speeding the testing process. For ABC, Customer Maintenance, Video Maintenance, Rental/Return, and Periodic processing can all be treated as stand-alone processes. Notice that in Information Engineering (IE), this independence of processes is at the activity level. If we were testing an object-oriented design, we would look at processes from the Booch diagram as the independent and parallel test units. If we were testing process design, we would use the structure charts to decide parallel sets of processes. 

Of the ABC processes, Rental/Return is the most complex and is discussed in detail. Rental/Return assumes that all files are present, so the DBA must have files defined and populated with data before Rental/Return can be tested. Note that even though files must be present, it is neither important nor required that the file maintenance processes be present. For the two create processes that are called, program stubs that return only a new Customer ID, or Video ID/Copy ID, are sufficient for testing.
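The stub idea can be sketched as follows. This is a minimal illustration, not the ABC code: the function names, ID formats, and sequence counters are assumptions made for the example.

```python
# Minimal program stubs for top-down testing of Rental/Return.
# Real create-module logic is replaced by stand-ins that return
# only the new ID, which is all Rental/Return needs to proceed.
# (Names and ID formats here are illustrative assumptions.)
import itertools

_customer_seq = itertools.count(1)
_video_seq = itertools.count(1)

def create_customer_stub(_customer_data=None):
    """Stub: skip all real create processing; return only a new Customer ID."""
    return f"C{next(_customer_seq):08d}"        # 9-character Customer ID

def create_video_stub(_video_data=None):
    """Stub: return only a new Video ID and Copy ID pair."""
    return f"V{next(_video_seq):07d}", "01"     # 8-char Video ID, 2-char Copy ID
```

With stubs like these in place, Rental/Return can be exercised end to end before the real create modules exist; each stub is later replaced by the finished module without changing the callers.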

In addition to parallel streams of testing, we might also want to further divide Rental/Return into several streams of testing by level of complexity, by transaction type, or by equivalent processes to further subdivide the code generation and testing processes. We choose such a division so that the same person can write all of the code but testing can proceed without all variations completed. For example, we will divide Rental/Return by transaction type as we did in IE. The four transaction types are rentals with and without returns, and returns with and without rentals. This particular work breakdown allows us to test all major variations of all inputs and outputs, and allows us to proceed from simple to complex as well. In the next sections, we will discuss from bottom-up how testing at each level is designed and conducted using Rental/Return as the ABC example.

Next, we define the integration test strategy. The IE design resulted in small modules that are called for execution, some of which are used in more than one process. At the integration level, we define inputs and predict outputs of each module, using a black-box approach. Because SQL calls do not pass data, predicting SQL set output is more important than creating input. With this many modules, an important consideration is intermodular errors: errors created in one module may not be evidenced until the data is used in another module. The top-down approach should help focus attention on critical modules for this problem.

Because SQL is a declarative language, black-box testing at the unit level is also appropriate. The SQL code that provides the control structure is logic, however, and becomes an important test item. White-box tests are most appropriate for testing the control logic. Therefore, a mix of black- and white-box testing will be done at the unit level. 
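The black-box treatment of a complete SELECT, with its output set predicted before the statement is run, can be sketched as follows. Here sqlite3 merely stands in for the production SQL engine, and the OPENRENTAL schema and sample rows are illustrative assumptions, not the ABC design.

```python
# Black-box test of a complete SELECT statement: load known rows,
# run the statement, and compare its result set to the prediction.
# sqlite3 stands in for the production engine; the schema and data
# are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE openrental "
             "(custid TEXT, videoid TEXT, copyid TEXT, returndate TEXT)")
rows = [
    ("222123456", "12312312", "03", None),        # open rental
    ("222123456", "12341234", "05", "19940125"),  # already returned
    ("333999999", "55555555", "01", None),        # other customer
]
conn.executemany("INSERT INTO openrental VALUES (?, ?, ?, ?)", rows)

# The test unit is the whole SELECT; the output set is predicted first.
predicted = {("222123456", "12312312", "03")}
actual = set(conn.execute(
    "SELECT custid, videoid, copyid FROM openrental "
    "WHERE custid = ? AND returndate IS NULL", ("222123456",)))
assert actual == predicted, f"SELECT output {actual} != prediction {predicted}"
```

The point is the discipline, not the engine: each SELECT is tested as a unit against a predicted set, with no knowledge of how the engine produces it.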

To summarize, the top-down strategy for testing the application includes:

1. Test screen design and navigation, including validation of security and access controls. 

2. Test the call structure for all modules. 

3. Test rental/return processing. 

4. Test create processing for customers and videos. 

5. Test remaining individual processes and file contents as parallel streams. 

6. Test multiple processes and file manipulations together, including validation of response time and peak system performance. The test will use many users doing the same and different processes, simultaneously. 

7. Test backup and recovery strategies.

Now, we develop and try a unit test to test the strategy. If a small test of the strategy works, we implement the strategy.



Source: Sue Conger, https://learn.saylor.org/pluginfile.php/236045/mod_resource/content/2/The%20New%20Software%20Engineering.pdf
This work is licensed under a Creative Commons Attribution 3.0 License.

Unit Testing

Guidelines for Developing a Unit Test 

Unit tests verify that a specific program, module, or routine (all referred to as 'module' in the remaining discussion) fulfills its requirements as stated in related program and design specifications. The two primary goals of unit testing are conformance to specifications and processing accuracy. 

For conformance, unit tests determine the extent to which processing logic satisfies the functions assigned to the module. The logical and operational requirements of each module are taken from the program specifications. Test cases are designed to verify that the module meets the requirements. The test is designed from the specification, not the code. 

Processing accuracy has three components: input, process, and output. First, each module must process all allowable types of input data in a stable, predictable, and accurate manner. Second, all possible errors should be found and treated according to the specifications. Third, all output should be consistent with results predicted from the specification. Outputs might include hard copy, terminal displays, electronic transmissions, or file contents; all are tested. 

There is no one strategy for unit testing. For input/output-bound applications, black-box strategies are normally used. For process logic, either or both strategies can be used. In general, the more critical the process is to the organization, or the more damaging the possible errors, the more detailed and extensive the white-box testing. For example, organizationally critical processes might be defined as any process that affects the financial books of the organization, meets legal requirements, or deals with client relationships. Examples of application damage might include life-threatening situations such as in nuclear power plant support systems, life support systems in hospitals, or test systems for car or plane parts.

Since most business applications combine approaches, an example combining black- and white-box strategies is described here. Using a white-box approach, each program specification is analyzed to identify the distinct logic paths which serve as the basis for unit test design. This analysis is simplified by the use of tables, lists, matrices, diagrams, or decision tables to document the logic paths of the program. Then, the logic paths most critical in performing the functions are selected for white-box testing. Next, to verify that all logic paths not white-box tested are functioning at an acceptable level of accuracy, black-box testing of input and output is designed. This is a common approach that we will apply to ABC Video. 
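The combined approach can be made concrete with a small routine. The late-fee rule below is invented for illustration (it is not ABC's actual pricing); the point is identifying the distinct logic paths from the specification and covering each with a white-box case.

```python
# Combining strategies on one routine: enumerate the distinct logic
# paths, white-box test each path, then rely on black-box input/output
# cases for the rest. The fee rule and its parameters are illustrative
# assumptions, not taken from the ABC specification.

def late_fee(days_late: int, daily_rate: float = 2.00, cap: float = 20.00) -> float:
    if days_late < 0:
        raise ValueError("days_late cannot be negative")  # path 1: error path
    if days_late == 0:
        return 0.0                                        # path 2: on time
    return min(days_late * daily_rate, cap)               # path 3: fee, path 4: capped

# White-box: one test case per identified logic path.
assert late_fee(0) == 0.0        # path 2: on-time return
assert late_fee(2) == 4.0        # path 3: fee below cap
assert late_fee(50) == 20.0      # path 4: fee hits cap
try:
    late_fee(-1)                 # path 1: invalid input rejected
except ValueError:
    pass
```

A decision table listing the conditions (`days_late < 0`, `== 0`, fee above/below cap) against the expected action yields exactly these four cases, which is the tabular documentation step described above.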

When top-down unit testing is used, control structure logic paths are tested first. When each path is successfully tested, combinations of paths may be tested in increasingly complex relationships until all possible processing combinations are satisfactorily tested. This process of simple-to-complex testing ensures that all logic paths in a module are performing both individually and collectively as intended.

Similarly, unit testing of multiuser applications also uses the simple-to-complex approach. Each program is tested first for single users. Then multiuser tests of the single functions follow. Finally, multiuser tests of multiple functions are performed.

Unit tests of relatively large, complex programs may be facilitated by reducing them to smaller, more manageable equivalent components such as 

  • transaction type: e.g., Debit/Credit, Edit/Update/Report/Error 
  • functional component activity: e.g., Preparing, Sending, Receiving, Processing 
  • decision option: e.g., If true ... If false ...

When the general process of reduction is accomplished, both black-box and white-box approaches are applied to the process of actually defining test cases and their corresponding output. The black-box approach should provide both good and bad data inputs and examine the outputs for correctness of processing. In addition, at least one white-box strategy should be used to test specific critical logic of the tested item. 

Test cases should be both exhaustive and minimal. This means that test cases should test every condition or data domain possible, but that no extra tests are necessary. For example, the most common errors in data inputs are for edit/validate criteria. Boundary conditions of fields should be tested. Using equivalence partitioning of the sets of allowable values for each field, we develop the test for a date formatted YYYYMMDD (that is, 4-digit year, 2-digit month, and 2-digit day). A good year test will test last year, this year, next year, change of century, all zeros, and all nines. A good month test will test zeros, 1, 2, 4 (representative of months with 30 days), 12, and 13. Only 1 and 12 are required for the boundary month test, but the other months are required to test day boundaries. A good day test will test zeros, 1, 28, 29, 30, 31, and 32, depending on the final day of each month. Only one test for zero and one is required, based on the assumption that if one month processes correctly, all months will. Leap years and nonleap years should also be tested. An example of test cases for these date criteria is presented. Figure 17-14 shows the equivalent sets of data for each domain. Table 17-4 lists exhaustive test cases for each set in the figure. Table 17-5 lists the reduced set after extra tests are removed.


FIGURE 17-14 Unit Test Equivalent Sets for a Date


TABLE 17-4 Exhaustive Set of Unit Test Cases for a Date

Test Case  YYYY   MM  DD  Comments

1          aaaa   0   Aa  Tests actions against garbage input
2          1992*  13  0   Tests all incorrect lower bounds
3          2010   1   32  Tests all incorrect upper bounds
4          1993   12  31  Tests correct upper day bound
4a         1994   1   31  Not required ... could be optional test of upper month/day bound. Assumption is that if month = 1 works, all valid, equivalent months will work.
5          1995   12  1   Tests correct lower day bound
6          1996   12  1   Not required ... could be optional test of upper month/lower day bound. Assumption is that if month = 1 works, all valid, equivalent months will work.
7          1997   1   32  Tests upper day bound error
8          1998   12  32  Not required ... could be optional test of upper month/upper day bound error. Assumption is that if month = 1 works, all valid, equivalent months will work.
9          1999   12  0   Retests lower bound day error with otherwise valid data ... Not strictly necessary but could be used.
10         2000   2   1   Tests lower bound ... not strictly necessary
11         2000   2   29  Tests leap year upper bound
12         2000   2   30  Tests leap year upper bound error
13         1999   2   28  Tests nonleap year upper bound
14         1999   2   29  Tests nonleap year upper bound error
15         1999   2   0   Tests lower bound error ... not strictly necessary
16         2001   4   30  Tests upper bound
17         2001   4   31  Tests upper bound error
18         2002   4   1   Tests lower bound ... not strictly necessary
19         2003   4   0   Tests lower bound error ... not strictly necessary

TABLE 17-5 Minimal Set of Unit Test Cases for a Date

Test Case  YYYY  MM  DD  Comments

1          aaaa  aa  aa  Tests actions against garbage input
2          1992  0   0   Tests all incorrect lower bounds
3          2010  13  32  Tests all incorrect upper bounds
4          1993  1   31  Tests correct upper day bound
5          1995  1   1   Tests correct lower day bound
6          1997  1   32  Tests upper day bound error
7(9)       2000  2   29  Tests leap year upper bound
8(10)      2000  2   30  Tests leap year upper bound error
9(11)      1999  2   28  Tests nonleap year upper bound
10(12)     1999  2   29  Tests nonleap year upper bound error
10(14)     2001  4   30  Tests upper bound
11(15)     2001  4   31  Tests upper bound error
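The minimal test set can be executed mechanically against a date routine. The validator below is only a sketch of the specification under test (it checks format, month, day, and leap year but deliberately omits any year-range rule, which is an assumption of this example); each case pairs an input with its predicted result.

```python
# Running the minimal unit test cases for a YYYYMMDD date against a
# sketch of the routine under test. The validator is illustrative;
# a year-range check, if the specification required one, is omitted.

def valid_date(yyyymmdd: str) -> bool:
    if len(yyyymmdd) != 8 or not yyyymmdd.isdigit():
        return False                       # garbage input
    y, m, d = int(yyyymmdd[:4]), int(yyyymmdd[4:6]), int(yyyymmdd[6:])
    if not 1 <= m <= 12:
        return False                       # month out of bounds
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return 1 <= d <= days[m - 1]           # day valid for that month

# Minimal test cases (after Table 17-5): (input, predicted result)
cases = [
    ("aaaaaaaa", False),  # garbage input
    ("19920000", False),  # all incorrect lower bounds
    ("20101332", False),  # all incorrect upper bounds
    ("19930131", True),   # correct upper day bound
    ("19950101", True),   # correct lower day bound
    ("19970132", False),  # upper day bound error
    ("20000229", True),   # leap year upper bound
    ("20000230", False),  # leap year upper bound error
    ("19990228", True),   # nonleap year upper bound
    ("19990229", False),  # nonleap year upper bound error
    ("20010430", True),   # 30-day month upper bound
    ("20010431", False),  # 30-day month upper bound error
]
for value, predicted in cases:
    assert valid_date(value) == predicted, value
```

Any mismatch between an actual result and its prediction must then be reconciled: either the routine or the prediction is wrong.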


Other frequently executed tests are for character, field, batch, and control field checks. Table 17-6 lists a sampling of errors found during unit tests. Character checks include tests for blanks, signs, length, and data types (e.g., numeric, alpha, or other). Field checks include sequence, reasonableness, consistency, range of values, or specific contents. Control fields are most common in batch applications and are used to verify that the file being used is the correct one and that all records have been processed. Usually the control field includes the last execution date and file name which are both checked for accuracy. Record counts are only necessary when not using a declarative language. 
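These checks can be expressed as small predicates. The sketch below shows one of each kind named above; the field names, lengths, and ranges are illustrative assumptions, not ABC specifications.

```python
# Character, field, and control-field checks of the kinds listed
# above, as small predicates. Names and ranges are illustrative.

def check_numeric(field: str) -> bool:
    """Character check: digits only, no blanks or signs."""
    return field.isdigit()

def check_length(field: str, length: int) -> bool:
    """Character check: exact field length."""
    return len(field) == length

def check_range(value: int, low: int, high: int) -> bool:
    """Field check: value falls within the allowed range."""
    return low <= value <= high

def check_control(record_count: int, expected_count: int,
                  file_name: str, expected_name: str) -> bool:
    """Control-field check: correct file and all records processed."""
    return file_name == expected_name and record_count == expected_count

assert check_numeric("2221234") and not check_numeric("22A1234")
assert check_length("123412345", 9)      # 9-character customer ID
assert check_range(5, 1, 12) and not check_range(13, 1, 12)
assert check_control(100, 100, "OPENRENTAL", "OPENRENTAL")
```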

Once all test cases are defined, tests are run and results are compared to the predictions. Any result that does not exactly match the prediction must be reconciled. The only possible choices are that the tested item is in error or the prediction is in error. If the tested item is in error, it is fixed and retested. Retests should follow the approach used in the first tests. If the prediction is in error, the prediction is researched and corrected so that specifications are accurate and documentation shows the correct predictions. 

Unit tests are conducted and reviewed by the author of the code item being tested, with final test results approved by the project test coordinator. 

How do you know when to stop unit testing? While there is no simple answer to this question, there are practical guidelines. When testing, each tester should keep track of the number of errors found (and resolved) in each test. The errors should be plotted by test shot to show the pattern. A typical module test curve is skewed left with a decreasing number of errors found in each test (see Figure 17-15). When the number of errors found approaches zero, or when the slope is negative and approaching zero, the module can be moved forward to the next level of testing. If the number of errors found stays constant or increases, you should seek help either in interpreting the specifications or in testing the program.
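The stopping guideline can be sketched as a simple rule over the error counts recorded per test shot. The threshold value here is an assumption for illustration; the book gives the criterion only qualitatively (errors falling and approaching zero).

```python
# Sketch of the unit-test stopping rule: track errors found per test
# shot and promote the module when the curve is falling and near zero.
# The threshold of 1 error is an illustrative assumption.

def ready_to_promote(errors_per_shot, threshold=1):
    """True when the last shot found no more errors than the one
    before it (slope not increasing) and at most `threshold` errors."""
    if len(errors_per_shot) < 2:
        return False
    falling = errors_per_shot[-1] <= errors_per_shot[-2]
    return falling and errors_per_shot[-1] <= threshold

# Typical left-skewed curve: many errors early, few late.
assert not ready_to_promote([9, 6, 4])      # still finding errors
assert ready_to_promote([9, 6, 4, 2, 1])    # falling and near zero
assert not ready_to_promote([3, 3, 3])      # constant: seek help instead
```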


ABC Video Unit Test 

Above, we said we would use a combination of black- and white-box testing for ABC unit tests. The application is being implemented using a SQL software package; therefore, all code is assumed to be in SQL. The control logic and non-SELECT code are subject to white-box tests, while the SELECT modules will be subject to black-box tests.

TABLE 17-6 Sample Unit Test Errors

Edit/Validate
  Transaction rejected when valid
  Error accepted as valid
  Incorrect validation criteria applied

Screen
  Navigation faulty
  Faulty screen layout
  Spelling errors on screen
  Inability to call screen

Data Integrity
  Transaction processed when inconsistent with other information
  Interfile matching not correct
  File sequence checking not correct

File Processing
  File, segment, relation, or field not correctly processed
  Read/write data format error
  Syntax incorrect but processed by interpreter

Report
  Format not correct
  Totals do not add/crossfoot
  Wrong field(s) printed
  Wrong heading, footing, or other cosmetic error
  Data processing incorrect

In Chapter 10, we defined Rent/Return processing as an execute unit with many independent code units. Figure 17-16 shows partial SQL code from two Rent/Return modules. Notice that most of the code is defining data and establishing screen addressability. As soon as two or three modules with such strikingly similar characteristics are built, the need to further consolidate the design to accommodate the implementation language should be obvious. With the current design, more code is spent on overhead tasks than on application tasks. Overhead code means that users will have long wait times while the system changes modules. The current design also means that debugging the individual modules would require considerable work to verify that the modules perform collectively as expected. Memory locations would need to be printed many times in such testing.


FIGURE 17-15 Unit Test Errors Found Over Test Shots


To restructure the code, we examine what all of the Rent/Return modules have in common: Open Rentals data. We can redefine the data in terms of Open Rentals, with a single user view used for all Rent/Return processing. This simplifies the data part of the processing but increases the vulnerability of the data to integrity problems. Problems might increase because the global view of data violates the principle of information hiding. The risk must be taken, however, to accommodate reasonable user response time. 

The common format of the restructured SQL code is shown in Figure 17-17. In the restructured version, data is defined once at the beginning of Rent/Return processing. The cursor name is declared once and the data is retrieved into memory based on the data entered through the Get Request module. The remaining Rent/Return modules are called in sequence. The modules have a similar structure for handling memory addressing. The problems with many prints of memory are reduced because once the data is brought into memory, no more retrievals are necessary until updates take place at the end of the transaction. Processing is simplified by unifying the application's view of the data.

          ADD RETURN DATE (Boldface code is redundant)
                    DCL   INPUT_VIDEO_ID     CHAR(8);
                    DCL   INPUT_COPY_ID      CHAR(2);
                    DCL   INPUT_CUST_ID      CHAR(9);
                    DCL   AMT_PAID           DECIMAL(4,2);
                    DCL   CUST_ID            CHAR(9);
                    ...
                       CONTINUE UNTIL ALL FIELDS USED ON THE SCREEN OR USED TO
                    CONTROL SCREEN PROCESSING ARE DECLARED
                    DCL   TOTAL_AMT_DUE      DECIMAL(5,2);
                    DCL   CHANGE             DECIMAL(4,2);
                    DCL   MORE_OPEN_RENTALS  BIT(1);
                    DCL   MORE_NEW_RENTALS   BIT(1);
                    EXEC  SQL INCLUDE SQLCA; /*COMMUNICATION AREA*/
                    EXEC  SQL DECLARE CUSTOMER TABLE
                          (FIELD DEFINITIONS FOR CUSTOMER RELATION);
                    EXEC SQL DECLARE VIDEO TABLE
                          (FIELD DEFINITIONS FOR VIDEO RELATION);
                    EXEC SQL DECLARE COPY TABLE
                          (FIELD DEFINITIONS FOR COPY RELATION);
                    EXEC SQL DECLARE OPENRENTAL TABLE
                          (FIELD DEFINITIONS FOR OPENRENTAL RELATION);
                    EXEC SQL DECLARE SCREEN_CURSOR CURSOR FOR
                          SELECT * FROM OPEN_RENTAL
                                   WHERE VIDEOID = ORVIDEOID
                                   AND COPYID = ORCOPYID;
                    EXEC SQL OPEN SCREEN_CURSOR
                    GOTOLABEL
                    EXEC SQL FETCH SCREEN_CURSOR INTO TARGET
                          :CUSTID
                          :VIDEOID
                          :COPYID
                          :RENTALDATE
                    IF SQLCODE = 100 GOTO GOTOEXIT;
                    EXEC SQL SET :RETURNDATE = TODAYS_DATE
                    WHERE CURRENT OF SCREEN_CURSOR;
                    EXEC SQL UPDATE OPEN_RENTAL
                          SET ORRETURNDATE = TODAYS_DATE
                          WHERE CURRENT OF SCREEN_CURSOR;
                    GOTO GOTOLABEL;
                    GOTOEXIT;
                    EXEC SQL CLOSE SCREEN_CURSOR;


      

FIGURE 17-16 Two Modules Sample Code


The restructuring now requires a change to the testing strategy for Rent/Return. A strictly top-down approach cannot work because the Rent/Return modules are no longer independent. Rather, a combined top-down and bottom-up approach is warranted. A sequential bottom-up approach is more effective for the functional Rent/Return processing. Top-down, black-box tests of the SELECT code are done before being embedded in the execute unit. Black-box testing for the SELECT is used because SQL controls all data input and output. Complete SELECT statements are the test unit.

        DCL   INPUT_VIDEO_ID      CHAR(8);
        DCL   INPUT_COPY_ID       CHAR(2);
        DCL   INPUT_CUST_ID       CHAR(9);
        DCL   AMT_PAID            DECIMAL(4,2);
        DCL   CUST_ID             CHAR(9);
        ...
        continue until all fields used on the screen or used to control screen processing are 
        declared...
        DCL   TOTAL_AMT_DUE       DECIMAL(5,2);
        DCL   CHANGE              DECIMAL(4,2);
        DCL   MORE_OPEN_RENTALS   BIT(1);
        DCL   MORE_NEW_RENTALS    BIT(1);
        EXEC SQL INCLUDE SQLCA; /*COMMUNICATION AREA*/
        EXEC SQL DECLARE RENTRETURN TABLE
        (field definitions for user view including all fields from customer, video, copy,
        open rental, and customer history relations);
        EXEC SQL DECLARE SCREEN_CURSOR CURSOR FOR
                  SELECT * FROM RENTRETURN
                  WHERE (:videoid = orvideoid AND :copyid = orcopyid)
                  OR (:custid = orcustid);
        EXEC SQL OPEN SCREEN_CURSOR
        EXEC SQL FETCH SCREEN_CURSOR INTO TARGET
                 :Request
        If :request eq "C?" set :custid = :request
        else      set :videoid = :request and
                  set :copyid = :request;

                  (At this point the memory contains the related relation data
                  and the remaining rent/return processing can be done.)

        All the other modules are called and contain the following common format:
        GOTOLABEL
        EXEC SQL FETCH SCREEN_CURSOR INTO TARGET
        :screen fields

        IF SQLCODE = 0 next step; (return code of zero means no errors)
        IF SQLCODE = 100 (not found condition) CREATE DATA or CALL END PROCESS;
        IF SQLCODE < 0 CALL ERROR PROCESS, ERROR-TYPE;
        Set screen variables (which displays new data)
        Prompt next action

        GOTO GOTOLABEL;
        GOTOEXIT;
        EXEC SQL CLOSE SCREEN_CURSOR;
      

FIGURE 17-17 Restructured SQL Code-Common Format


Test                                              Type

 1. Test SQL SELECT statement                     Black Box
 2. Verify SQL cursor and data addressability     White Box
 3. Test Get Request                              White Box
 4. Test Get Valid Customer, Get Open Rentals     Black Box for embedded SELECT statement, White Box for other logic
 5. Test Get Valid Video                          White Box for logic, Black Box for embedded SELECT statement
 6. Test Process Payment and Make Change          White Box
 7. Test Update Open Rental                       Black Box for Update, White Box for other logic
 8. Test Create Open Rental                       Black Box for Update, White Box for other logic
 9. Test Update Item                              Black Box for Update, White Box for other logic
10. Test Update/Create Customer History           Black Box for Update, White Box for other logic
11. Test Print Receipt                            Black Box for Update, White Box for other logic

FIGURE 17-18 Unit Test Strategy


The screen interaction and module logic can be tested as either white box or black box. At the unit level, white-box testing will be used to test intermodule control logic. A combination of white-box and black-box testing should be used to test intramodule control and process logic. 

The strategy for unit testing, then, is to test data retrievals first, to verify screen processing, including SQL cursor and data addressability second, and to sequentially test all remaining code last (see Figure 17-18). 

Because all processing in the ABC application is on-line, an interactive dialogue test script is developed. All file interactions predict data retrieved and written, as appropriate. The individual unit test scripts begin processing at the execute unit boundary. This means that menus are not necessarily tested. A test script has three columns of information developed. The first column shows the computer messages or prompts displayed on the screen. The second column shows data entered by the user. The third column shows comments or explanations of the interactions taking place.

A partial test script for Rent/Return processing is shown in Figure 17-19. The example shows the script for a return with rental transaction. Notice that the test begins at the Rent/Return screen and that both error and correct data are entered for each field. After all errors are detected and dispatched properly, only correct data is required. This script shows one of the four types of transactions. It shows only one return and one rental, however, and should be expanded in another transaction to do several rentals and several returns; returns should include on-time and late videos and should not include all tapes checked out. This type of transaction represents the requisite variety to test returns with rentals. Of course, other test scripts for the other three types of transactions should also be developed. This is left as an extra-credit activity.
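A three-column script of this kind can be run mechanically: each step pairs a user entry with the predicted system response. In the sketch below, a fake handler function stands in for the real Rent/Return screen; the prompt strings and handler logic are illustrative assumptions, not ABC's actual screen code.

```python
# Running a dialogue test script: feed each scripted user entry to a
# dialogue function and compare the returned prompt to the script's
# prediction. The fake handler below is an illustrative stand-in for
# the real screen processing.

def run_script(dialogue, script):
    """Return a list of (step, predicted, actual) for every mismatch."""
    failures = []
    for step, (entry, predicted_prompt) in enumerate(script, start=1):
        actual = dialogue(entry)
        if actual != predicted_prompt:
            failures.append((step, predicted_prompt, actual))
    return failures

def fake_rent_return(entry):
    """Stand-in for Rent/Return request handling (assumed behavior)."""
    valid_ids = {"2221234"}
    if entry in valid_ids:
        return "Customer data displayed"
    return "Illegal Customer or Video Code, Type Request"

script = [
    ("1234567", "Illegal Customer or Video Code, Type Request"),  # dummy code
    ("2221234", "Customer data displayed"),                       # legal code
]
assert run_script(fake_rent_return, script) == []
```

An empty failure list means every system prompt matched its prediction; any mismatch is reconciled the same way as other test results, by correcting either the code or the prediction.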

Subsystem or Integration Testing

Guidelines for Integration Testing 

The purpose of integration testing is to verify that groups of interacting modules that comprise an execute unit perform in a stable, predictable, and accurate manner that is consistent with all related program and systems design specifications. 

Integration tests are considered distinct from unit tests. That is, as unit tests are successful, integration testing for the tested units can begin. The two primary goals of integration testing are compatibility and intermodule processing accuracy.


System Prompt | User Action | Explanation

Menu | Press mouse, move to Rent/Return, and release | Select Rent/Return from menu
Rent/Return screen, cursor at request field | Scan customer bar code 1234567 | Dummy bar code
Error Message 1: Illegal Customer or Video Code, Type Request | Enter: 1234567 | Dummy bar code
Customer Data Entry Screen with message: Illegal Customer ID, enter new customer | <cr> | Carriage return entered to end Create Customer process
Rent/Return screen, cursor at request field | Scan customer bar code 2221234 | Legal customer ID. System should return customer and rental information for M. A. Jones, Video 12312312, Copy 3, Terminator 2, Rental date 1/23/94, not returned.
Cursor at request field | Scan 123123123 | Cursor moves to rented video line
Cursor at return date field | Enter yesterday's date | Error message: Return date must be today's date.
Cursor at return date field | Enter today's date | Late fee computed and displayed ... should be $4.00.
Cursor at request field | Scan new tape ID 123412345 | New tape entered and displayed. Video #12341234, Copy 5, Mary Poppins, Rental date 1/25/94, Charge $2.00.
Cursor at request field | Press <cr> | System computes and displays Total Amount Due ... should be $6.00.
Cursor at Total Amount Paid field | Enter <cr> | Error Message: Amount paid must be numeric and equal or greater than Total Amount Due.
Cursor at Total Amount Paid field | Enter 10 <cr> | System computes and displays Change Due ... should be $4.00. Cash drawer should open.
Cursor at request field | Enter <cr> | Error Message: You must enter P or F5 to request print.
Cursor at request field | Enter P <cr> | System prints transaction

Go to SQL Query and verify Open Rental and Copy contents:

Open Rental tuple for Video 123123123 contents should be:
     22212341231231230123940200012594040000000000000

Open Rental tuple for Video 123412345 should be:
     22212341234123450125940200000000000000000000000

Copy tuple for Video 12312312, Copy 3 should be:
     12312312311019200103

Copy tuple for Video 12341234, Copy 5 should be:
     12341234511319010000

Verify the contents of the receipt.

FIGURE 17-19 ABC Video Unit Test Example: Rent/Return
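A three-column script like the one above lends itself to automation. The following Python sketch replays a table of (expected prompt, user action, note) rows against a dialogue handler and reports mismatches. The `run_script` helper, the `toy_dialogue` stand-in, and its canned responses are hypothetical illustrations, not ABC's actual screen code.

```python
# A minimal sketch of a table-driven dialogue test script, following the
# three-column (prompt, user action, explanation) layout of Figure 17-19.
# `toy_dialogue` is a hypothetical stand-in for the real screen handler.

def run_script(dialogue, script):
    """Replay (expected_prompt, user_input, note) rows and report mismatches."""
    failures = []
    for step, (expected_prompt, user_input, note) in enumerate(script, 1):
        actual_prompt = dialogue(user_input)
        if expected_prompt not in actual_prompt:
            failures.append((step, note, expected_prompt, actual_prompt))
    return failures

def toy_dialogue(user_input):
    # Canned responses standing in for the Rent/Return screen process.
    if user_input == "1234567":                 # dummy bar code
        return "Illegal Customer or Video Code, Type Request"
    if user_input == "2221234":                 # legal customer ID
        return "Rent/Return screen: M. A. Jones"
    return "Rent/Return screen, cursor at request field"

script = [
    ("Illegal Customer or Video Code", "1234567", "Dummy bar code"),
    ("M. A. Jones", "2221234", "Legal customer ID"),
]
print(run_script(toy_dialogue, script))   # [] means every step matched
```

The same script rows a human tester follows can thus double as machine-checkable test data.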


Compatibility relates to the calling of modules in an operational environment. The test verifies first that all modules are called correctly and that, even with errors, they do not cause abends. Intermodule tests check that data transfers between modules operate as intended within constraints of CPU time, memory, and response time. Data transfers tested include sorted and extracted data provided by utility programs, as well as data provided by other application modules.

Test cases developed for integration testing should be sufficiently exhaustive to test all possible interactions and may include a subset of unit test cases as well as special test cases used only in this test. The integration test does not test logic paths within the modules as the unit test does. Instead, it tests interactions between modules only. Thus, a black-box strategy works well in integration testing. 

If modules are called in a sequence, checking of inputs and outputs to each module simplifies the identification of computational and data transfer errors. Special care must be taken to identify the source of errors, not just the location of bad data. Frequently, in complex applications, errors may not be apparent until several modules have touched the data and the true source of problems can be difficult to locate. Representative integration test errors are listed in Table 17-7.
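One way to simplify locating the true source of bad data is to log each module's input and output as data passes down the call sequence. The sketch below is a hypothetical illustration: the `traced` wrapper and the three toy modules stand in for real application modules.

```python
# Sketch: instrument each module in a call chain so the first bad
# input/output pair can be pinpointed, rather than just the module
# where bad data finally surfaces. Module names are hypothetical.

def traced(name, fn, log):
    def wrapper(data):
        result = fn(data)
        log.append((name, data, result))   # record input and output
        return result
    return wrapper

def extract(d):     return {"amount": d["amount"]}
def compute_fee(d): return {"fee": d["amount"] * 2}   # $2.00 per rental
def format_out(d):  return f"Due: ${d['fee']:.2f}"

log = []
pipeline = [traced(n, f, log) for n, f in
            [("extract", extract), ("compute_fee", compute_fee),
             ("format", format_out)]]

data = {"amount": 3}
for step in pipeline:
    data = step(data)

print(data)          # Due: $6.00
for name, inp, out in log:
    print(name, inp, "->", out)
```

Reviewing the log from the top identifies the first module whose output diverges from its prediction, which is the error's source.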

Integration testing can begin as soon as two or more modules are successfully unit tested. When to end integration tests is more subjective. When exceptions are detected, the results of all other test processing become suspect. Depending on the severity and criticality of the errors to overall process integrity, all previous levels of testing might be reexecuted to reverify processing. Changes in one module may cause tests of other modules to become invalid. Therefore, integration tests should be considered successful only when the entire group of modules in an execute unit are run individually and collectively without error. Integration test curves usually start low, increase and peak, then decrease (see Figure 17-20). If there is pressure to terminate integration testing before all errors are found, the rule of thumb is to continue testing until fewer errors are found on several successive test runs.


TABLE 17-7 Sample Integration Test Errors

Intermodule communication
Called module cannot be invoked
Calling module does not invoke all expected modules
Message passed to module contains extraneous information
Message passed to module does not contain correct information
Message passed contains wrong (or inconsistent) data type
Return of processing from called module is to the wrong place
Module has no return
Multiple entry points in a single module
Multiple exit points in a single module
Process errors
Input errors not properly disposed
Abend on bad data instead of graceful degradation
Output does not match predicted results
Processing of called module produces unexpected results that do not match predictions
Time constrained process is over the limit
Module causes time-out in some other part of the application


ABC Video Integration Test 

Because of the redesign of execute units for more efficient SQL processing, integration testing can be concurrent with unit code and test work, and should integrate and test the unit functions as they are complete. The application control structure for screen processing and for calling modules is the focus of the test.


FIGURE 17-20 Integration Test Errors Found Over Test Shots


Black-box, top-down testing is used for the integration test. Because SQL does not pass data as input, we predict the sets that SQL will generate during SELECT processing. The output sets are then passed to the control code and used for screen processing, both of which have been unit tested and should work. To verify the unit tests at the integration level, we should: 

1. Ensure that the screen control structure works and that execute units are invoked as intended. 

2. Ensure that screens contain expected data from SELECT processing. 

3. Ensure that files contain all updates and created records as expected. 

4. Ensure that printed output contains expected information in the correct format.
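Steps 2 and 3 amount to predicting a SELECT's result set and comparing it with what the screen or file actually contains. A minimal sketch, using Python's sqlite3 module as a stand-in for the SQL engine and a hypothetical, simplified OpenRental table:

```python
import sqlite3

# Sketch: predict the set a SELECT should produce, then compare it with
# the actual result, as in integration-test steps 2 and 3. The OpenRental
# schema here is a hypothetical simplification of ABC's files.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE OpenRental (cust_id TEXT, video_id TEXT, copy INTEGER)")
conn.executemany("INSERT INTO OpenRental VALUES (?, ?, ?)",
                 [("2221234", "12312312", 3), ("2221234", "12341234", 5),
                  ("5550001", "99999999", 1)])

predicted = {("12312312", 3), ("12341234", 5)}
actual = set(conn.execute(
    "SELECT video_id, copy FROM OpenRental WHERE cust_id = ?", ("2221234",)))

print(actual == predicted)   # True when the file holds exactly the expected set
```

Predicting the set before the test run, rather than inspecting output afterward, is what keeps the comparison objective.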

First, we want to define equivalent sets of processes and, for each set, equivalent sets of data inputs. For instance, the high-level processes from IE analysis constitute approximately equivalent sets. These were translated into modules during design and, with the exception of integrating data access and use across modules, have not changed. These processes include Rent/Return, Customer Maintenance, Video Maintenance, and Other processing. If the personnel are available, four people could be assigned to develop one script each for these equivalent sets of processing. Since we named Rent/Return the highest priority for development, its test should be developed first. The others can follow in any order, although the start-up and shutdown scripts should be developed soon after Rent/Return to allow many tests of the entire interface.

First, we test screen process control, then individual screens. Since security and access control are embedded in the screen access structure, this test should be white box and should test every possible access path, including invalid ones. Each type of access rights and screen processing should be tested. For the individual screens, spelling, positioning, color, highlighting, message placement, consistency of design, and accuracy of information are all validated (see Figure 17-21).

The integration test example in Figure 17-22 is the script for testing the start-up procedure and security access control for the application. This script would be repeated for each valid and invalid user including the other clerks and accountant. The startup should only work for Vic, the temporary test account, and the chief clerk. The account numbers that work should not be documented in the test script. Rather, a note should refer the reader to the person responsible for maintaining passwords.
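The behavior this script exercises, three illegal password attempts forcing a special start-up, can be sketched as follows. The format rule comes from the script's error message; the `start_up` function, the six-character temporary password, and the legality check are hypothetical stand-ins, and a real system would never hard-code a password.

```python
# Sketch of the start-up security behavior exercised by the integration
# script: three illegal password attempts force a system shutdown and a
# special start-up. All names and the sample password are hypothetical.

def validate(password):
    """Password must be alphanumeric and six characters (per the script)."""
    return len(password) == 6 and password.isalnum()

def start_up(attempts, is_legal):
    """Return the final state after a sequence of password attempts."""
    for tries, pw in enumerate(attempts, 1):
        if validate(pw) and is_legal(pw):
            return "signed-on"
        if tries == 3:
            return "shutdown"   # special start-up now required
    return "awaiting-password"

legal = lambda pw: pw == "VAC528"   # hypothetical temporary test account
print(start_up(["", "123456", "Abcdef"], legal))   # shutdown
print(start_up(["", "VAC528"], legal))             # signed-on
```

The white-box test must drive both branches: every path that reaches sign-on, and every path that reaches shutdown.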

1. Define equivalent sets of processes and data inputs. 

2. Define the priorities of equivalent sets for testing. 

3. Develop test scripts for Rent/Return, Other processing, Customer Maintenance, and Video Maintenance. 

4. For each of the above scripts, the testing will proceed as follows: 

a. Test screen control, including security of access to the Rent/Return application. 

b. Evaluate accuracy of spelling, format, and consistency of each individual screen. 

c. Test access rights and screen access controls. 

d. Test information retrieval and display. 

e. For each transaction, test processing sequence, dialogue, error messages, and error processing. 

f. Review all reports and file contents for accuracy of processing, consistency, format, and spelling.

FIGURE 17-21 ABC Integration Test Plan

System Prompt | User Action | Explanation

C:> | StRent<cr> | StRent is Exec to start up the Rental/Return Processing application
Enter password: | <cr> | Error
Password must be alphanumeric and six characters. | |
Enter Password: | 123456<cr> | Error: illegal password
Password illegal, try again. | |
Enter Password: | Abcdefg | Error: illegal password
Three illegal attempts at password. System shutdown | |
G:> | StRent<cr> | Error: 3 illegal attempts requires special start-up. Illegal start-up attempt
System begins to beep continuously until stopped by system administrator. No further prompts. | |

Single User Sign-on

C:> | StRent<cr> | StRent is Exec to start up the Rental/Return Processing application
Enter Password: | <cr> | Error
Password illegal, try again. | |
Enter Password: | VAC5283 | Temporary legal entry
User Sign-on menu | |
Enter Initials: | <cr> | Error
You must enter your initials. | |
Enter initials: | VAV | Error
Initials not authorized, try again. | |
Enter initials: | VAC | Legal entry (VAC is Vic)
Main Menu with all Options | | Begin Main Menu Test

FIGURE 17-22 ABC Video Integration Test Script


In the integration portion of the test, multiuser processing might take place, but it is not necessarily fully tested at this point. File contents are verified after each transaction is entered to ensure that file updates and additions are correct. If the integration test is approached as iteratively adding modules for testing, the final run-through of the test script should include all functions of the application, including start-up, shutdown, generation and printing of all reports, queries on all files, all file maintenance, and all transaction types. At least several days and one monthly cycle of processing should be simulated for ABC's test to ensure that end-of-day and end-of-month processing work. 

Next, we discuss system testing and continue the example from ABC with a functional test that is equally appropriate at the integration, system, or QA levels.

System and Quality Assurance Testing

Guidelines for Developing System and Quality Assurance Tests 

The system test is used to demonstrate an application's ability to operate satisfactorily in a simulated production environment using its intended hardware and software configuration. The quality assurance (QA) test is both a system test and a documentation test. Both tests also verify the following:

1. The system's interacting modules fulfill the user's functional requirements as contained in the business system design specification, as translated into design requirements in the design specification, and as stated in any documents controlling interfaces to other systems. 

2. The human interface works as intended. Screen design, navigation, and work interruptibility are the test objects for human interface testing. All words on screens should be spelled properly. All screens should share a common format that is presented consistently throughout the application; this format includes the assignment of program function keys as well as the physical screen layout. Navigation is the movement between screens: all menu selections should bring up the correct next screen, and all screens should return to a location designated somewhere on the screen. If direct navigation from one screen to any other is provided, the syntax for that movement should be consistent and correct. If transactions are to be interruptible, the manner of saving partial transactions and calling them back should be the same for all screens. System-level testing should exercise all of these capabilities. 

3. All processing is within constraints. General constraints can relate to prerequisites, postrequisites, time, structure, control, and inferences (see Chapter 1). Constraints can be internally controlled by the application or can be externally determined, with the application simply meeting the constraint. Internally controlled constraints are tested through test cases specifically designed for that purpose. For instance, if response-time limits have been stated, the longest possible transaction with the most possible errors or other delays should be designed to test response. If response time for a certain number of users is limited, then the test must have all users doing the most complex of actions to prove the response-time constraint is met. Externally controlled constraints are those that the application either meets or does not; if they are not met, some redesign is probably required. 

4. All modules are compatible and, in the event of failures, degrade gracefully. System tests of compatibility prove that all system components are capable of operating together as designed. System components include programs, modules, utilities, hardware, database, network, and other specialized software. 

5. The application has sufficient procedures and code to provide disaster, restart, and application error recovery in both the designed and host software (e.g., DB2). 

6. All operations procedures for the system are useful and complete. Operations procedures include start-up, shutdown, normal processing, exception processing, special operator interventions, periodic processing, system-specific errors, and the three types of recovery.
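As an illustration of testing an internally controlled constraint (item 3), the sketch below drives a worst-case transaction and checks that it finishes within a stated response-time limit. The two-second limit and the transaction body are hypothetical.

```python
import time

# Sketch of a response-time constraint test: run the longest possible
# transaction and verify it completes within the stated limit. The limit
# and the stand-in transaction are hypothetical.

RESPONSE_LIMIT_SECONDS = 2.0

def longest_transaction():
    """Stand-in for the worst-case transaction (most rentals, most retries)."""
    total = 0
    for _ in range(10):            # e.g., ten rentals in one transaction
        total += 2                 # $2.00 per rental
    return total

start = time.perf_counter()
result = longest_transaction()
elapsed = time.perf_counter() - start

print(result)                                 # 20
print(elapsed < RESPONSE_LIMIT_SECONDS)
```

In a real system test the transaction body would be the actual worst-case dialogue, timed while the full complement of simultaneous users is active.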

In addition, the QA test evaluates the accuracy, consistency, format, and content of application documentation, including technical, user, on-line, and operations documentation. Ideally, the individual performing the QA test does not work on the project team but can deal with them effectively in the adversarial role of QA. Quality assurance in some companies is called the acceptance test and is performed by the user. In other companies, QA is performed within the IS department and precedes the user acceptance test. 

The system test is the final developmental test under the control of the project team and is considered distinct from integration tests. That is, the successful completion of integration testing of successively larger groups of programs eventually leads to a test of the entire system. The system test is conducted by the project team and is analogous to the quality assurance acceptance test which is conducted by the user (or an agent of the user). Sample system test errors are shown in Table 17-8.

Test cases used in both QA and system testing should include as many normal operating conditions as possible. System test cases may include subsets of all previous test cases created for unit and integration tests as well as global test cases for system level requirements. The combined effect of test data used should be to verify all major logic paths (for both normal and exception processing), protection mechanisms, and audit trails. 

QA tests are developed completely from analysis and design documentation. The goal of the test is to verify that the system does what the documentation describes and that all documents, screens, and processing are consistent. Therefore, QA tests go beyond system testing by specifically evaluating application information consistency across environments in addition to testing functional software accuracy. QA tests find a broader range of errors than system tests; a sampling of QA errors is in Table 17-9.

System testing affords the first opportunity to observe the system's hardware components operating as they would in a production mode. This enables the project's test coordinator to verify that response time and performance requirements are satisfied.

Since system testing is used to check the entire system, any errors detected and corrected may require retesting of previously tested items. The system test, therefore, is considered successful only when the entire system runs without error for all test types.

TABLE 17-8 Sample System Test Errors

Functional

Application does not perform a function in the functional specification

Application does not meet all functional acceptance criteria

Human Interface

Screen format, spelling, content errors

Navigation does not meet user requirements

Interruption of transaction processing does not meet user requirements

Constraints

Prerequisites treated as sequential and should be parallel ... must all be checked by (x) module

Prerequisite not checked

Response Time/Peak Performance

Response time not within requirements for file updates, start-up, shutdown, query, etc.

Volume of transactions expected cannot be processed within the specified run-time intervals

Batch processing cannot be completed in the time allotted

Expected number of peak users cannot be accommodated

Restart/Recovery

Program: Interrupted printout fails to restart at the point of failure (necessary for check processing and some confidential/financial reporting)

Software: Checkpoint/restart routine is not called properly

Hardware: Printer cannot be accessed from main terminal

Switches incorrectly set

System re-IPL called for in procedures cannot be done without impacting users of other applications

Expected hardware configuration has incompatible components



TABLE 17-9 Sample QA Acceptance Test Errors

Documentation

Two or more documents inconsistent

Document does not accurately reflect system feature

Edit/Validate

Invalid transaction accepted

Valid transaction rejected

Screen

Navigation, format, content, processing inconsistent with functional specification

Data Integrity

Multifile, multitransaction, multimatches are incorrect

File

File create, update, delete, query not present or not working

Sequence, data, or other criteria for processing not checked

Report specification

Navigation, format, content, processing inconsistent with functional specification

Recovery

Printer, storage, memory, software, or application recovery not correct

Performance

Process, response, user, peak, or other performance criteria not met

User Procedures

Do not match processing

Incomplete, inconsistent, incomprehensible

On-line help differs from paper documents

Operations Procedures

Do not match processing

Incomplete, inconsistent, incomprehensible


The test design should include all possible legal and illegal transactions, good and bad data in transactions, and enough volume to measure response time and peak transaction processing performance. As the test proceeds, each person notes on the test script whether an item worked or not. If a tested interaction had unexpected results, the result obtained is marked in the margin and noted for review.

The first step is to list all actions, functions, and transactions to be tested. The information for this list is developed from the analysis document for all required functions in the application and from the design document for security, audit, backup, and interface designs.

The second step is to design transactions to test all actions, functions, and transactions. Third, the transactions are developed into a test script for a single user as a general test of system functioning. This test proves that the system works for one user and all transactions. Fourth, the transactions are interleaved across the participating number of users for multiuser testing. In general, the required transactions are only a subset of the total transactions included in the multiuser test. Required transactions test the variations of processing and should be specifically designed to provide exhaustive transaction coverage. The other transactions can be a mix of simple and complex transactions at the designer's discretion. If desired, the same transaction can be repeated with variations to allow multiple use. Fifth, test scripts for each user are then developed. Last, the test is conducted. These steps in developing system/QA tests are summarized as follows:

  1. List all actions, functions, and transactions to be tested.
  2. Design transactions to test all actions, functions, and transactions.
  3. Develop a single-user test script for above.
  4. Interleave the tests across the users participating in the test to fully test multiuser functioning of the application.
  5. Develop test scripts for each user.
  6. Conduct the test.
  7. Review test results and reconcile anomalous findings.
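Step 4, interleaving the single-user transactions across the participating users, can be sketched as a simple round-robin assignment. The `interleave` helper is a hypothetical illustration; a real schedule (as in Figure 17-25) also weaves in deliberate conflicts between users.

```python
# Sketch of step 4: interleave a single-user transaction list across the
# participating testers so every required transaction is still covered.
# Transaction IDs follow the Txyz convention; the user count is ABC's six.

def interleave(transactions, num_users):
    """Assign transactions round-robin; returns {user: [transactions]}."""
    scripts = {u: [] for u in range(1, num_users + 1)}
    for i, t in enumerate(transactions):
        scripts[i % num_users + 1].append(t)
    return scripts

required = ["T111", "T112", "T113", "T121", "T122", "T141", "T151"]
scripts = interleave(required, 6)
print(scripts[1])   # ['T111', 'T151']

# Every required transaction appears in exactly one user's script.
assert sorted(t for s in scripts.values() for t in s) == sorted(required)
```

The assertion at the end is the coverage check: no required transaction may be dropped when the single-user script is split up.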

Designing multiuser test scripts is a tedious and lengthy process. Doing multiuser tests is equally time-consuming. Batch test simulator (BTS) software is an on-line test aid available in mainframe environments. BTSs generate data transactions based on designer-specified attribute domain characteristics. Some BTSs can read data dictionaries and can directly generate transactions. The simulation portion of the software executes the interactive programs using the automatically generated transactions and can, in seconds, perform a test that might take people several hours. BTSs are not generally available on PCs or LANs yet, but they should be in the future.
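The core of what a BTS does can be approximated in a few lines: generate transactions from designer-specified attribute domains. The domain-specification format and field names below are hypothetical, not any real BTS product's interface.

```python
import random

# Sketch of batch-test-simulator style data generation: transactions are
# built from designer-specified attribute domains. The domain spec and
# field names are hypothetical.

DOMAINS = {
    "cust_id":  lambda r: "".join(r.choice("0123456789") for _ in range(7)),
    "video_id": lambda r: "".join(r.choice("0123456789") for _ in range(8)),
    "action":   lambda r: r.choice(["RENT", "RETURN"]),
}

def generate(n, seed=42):
    rng = random.Random(seed)   # seeded so a test run is reproducible
    return [{field: gen(rng) for field, gen in DOMAINS.items()}
            for _ in range(n)]

batch = generate(1000)
print(len(batch))                                   # 1000
print(all(len(t["cust_id"]) == 7 for t in batch))   # True
```

Generating a thousand such transactions takes milliseconds, which is the point: the simulator performs in seconds a test that might take people several hours.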

Finally, after the system and QA tests are successful, the minimal set of transactions to test the application are compiled into test scripts for a regression test package. A regression test package is a set of tests that is executed every time a change is made to the application. The purpose of the regression test is to ensure that the changes do not cause the application to regress to a nonfunctional state, that is, that the changes do not introduce errors into the processing.

Deciding when to stop system testing is as subjective as the same decision for other tests. Unlike module and integration tests, system tests might have several peaks in the number of errors found over time (see Figure 17-23). Each peak might represent new modules or subsystems introduced for testing or might demonstrate application regression due to fixes of old errors that cause new errors. Because of this multipeak phenomenon, system testing is the most difficult to decide to end. If a decreasing number of errors have not begun to be found, that is, the curve is still rising, do not stop testing. If all modules have been through the system test at least once, and the curve is moving toward zero, then testing can be stopped if the absolute number of errors is acceptable. Testing should continue with a high number of errors regardless of the slope of the line. What constitutes an acceptable number of errors, however, is decided by the project manager, user, and IS managers; there is no right number.
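The stopping rule above can be made concrete. This sketch encodes the two conditions, a falling error curve and an acceptably low latest error count, as a stopping check; the threshold and window values are hypothetical management choices.

```python
# Sketch of the stopping heuristic: testing may stop only when the error
# curve is falling toward zero AND the latest error count is under an
# agreed threshold. The threshold and window values are hypothetical.

def may_stop_testing(errors_per_shot, acceptable=5, window=3):
    """errors_per_shot: errors found on each successive test run."""
    if len(errors_per_shot) < window + 1:
        return False               # not enough history to judge the curve
    recent = errors_per_shot[-window:]
    falling = all(a >= b for a, b in zip(recent, recent[1:]))
    return falling and recent[-1] <= acceptable

print(may_stop_testing([12, 30, 22, 9, 4, 2]))   # True: falling and low
print(may_stop_testing([12, 30, 22, 9, 14, 2]))  # False: a new peak appeared
```

A multipeak history, as in Figure 17-23, fails the monotonic-decrease check and correctly keeps testing open.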

QA testing is considered complete when the errors do not interfere with application functioning. A complete list of errors to be fixed is developed and given to the project manager and his or her manager to track. In addition, a QA test report is developed to summarize the severity and types of errors found over the testing cycle. Errors that are corrected before the QA test completes are noted as such in the report.

The QA report is useful for several purposes. The report gives feedback to the project manager about the efficacy of the team-testing effort and can identify weaknesses that need correcting. The reports are useful for management to gain confidence (or lose it) in project managers and testing groups. Projects that reach the QA stage and are then stalled for several months because of errors identify training needs that might not otherwise surface.


FIGURE 17-23 System Test Errors Found Over Test Shots


ABC Video System Test 

Because ABC's application is completely on-line, the system test is essentially a repeat of the integration test for much of the functional testing. The system test, in addition, evaluates response time, audit, recovery, security, and multiuser processing. The functional tests do not duplicate the integration test exactly, however. The first user might use the integration test scripts. The other users' dialogues are designed to try to corrupt the processing of the first user's data and processes and to do other independent processing. If the total number of expected system users is six people simultaneously, then the system test should be designed for six simultaneous users.

Trans # | Rents | Returns | Late Fees | Payment | Receipt

T111 | 2 | 0 | - | Exact | Automatic
T112 | 1 | 0 | - | Over | Automatic
T113 | 1 | 1 (Total) | No | Over | Automatic
T121 | 10 | 0 | - | Over | Automatic
T122 | 0 | 2 (From 121) | No | - | No
T141 | 0 | 2 (From 121) | 2, 4 days | Over | Automatic
T151 | 4 | 2 (From 121) | 2, 5 days | Over | Automatic
T211 | 1 | 1 (Total) | 1 day | Exact | Automatic
T212 | 0 | 1 (Total) | No | - | No
T213 | 0 | 1 (Total) | No | - | Requested
T214 | 0 | 1 (Total) | 2 days | Under, then exact | Automatic
T221 | 2 | 0 | - | Under-abort | No
T222 (Wait required) | 0 | 2 (From T121) | No | - | Requested
T311 | 0 | 1 (Total) | 10 days | Over | Automatic
T312 | 1 (with other open rentals) | 0 | - | Over | Automatic
T313 | 6 (with other open rentals), error then rent 5 | 1 | 0 | Exact | Automatic
T411 = T311 Err | 0 | 1 (Total) | 10 days | Over | Automatic
T412 = T312 Err | 1 (with other open rentals) | 0 | - | Over | Automatic
T413 = T313 Err | 6 (with other open rentals), error then rent 5 | 1 | 0 | Exact | Automatic
T331 | 0 | 2 (From 121) | 2, 2 days | Exact | Automatic
T322 | 2 | 0 | - | Under-abort | No
T511 | 5 (with other open rentals) | 2 | 1 tape, 3 days | Over | Automatic

NOTE: Txyz Transaction ID: x = User, y = Day, z = Transaction number

FIGURE 17-24 ABC Video System Test Overview: Rent/Return Transactions


The first step is to list all actions, functions, and transactions to be tested. For example, Figure 17-24 lists required transactions to test multiple days and all transaction types for each major file and processing activity for Rent/Return. These transactions would be developed into a test script for a single user test of the application. 

User 1 | User 2 | User 3 | User 4 | User 5 | User 6

Start-up, success | Start-up, Err | Start-up, Err | Password, Err | Logon, Err |
Logon | Logon | Logon | Logon | Logon | Logon
Rent T111, Errs + Good data | Rent T211, Errs + Good data | Cust Add, Errs + Good data | Cust Change, Err, Abort | Video Add, Errs + Good data | Shutdown, Err
Rent T112 | Rent T111 | Rent T311 | Cust Change | Copy Change, Errs + Good data | Try to crash system with bad trans
Rent T113 | Rent T112, Err | Rent T312 | Rent T411 | Rent T511 | Delete Cust, Errs + Good data
Rent T114 | Rent T213 | Rent T313 | Rent T412 | Rent, any trans | Delete Video, Errs
Rent, any trans | Rent, any trans | Rent, any trans | Rent, any trans | Rent, any trans | Delete Copy, Errs + Good data

END OF DAY, SHUT-DOWN, and STARTUP

Rent T121 | Rent T221 | Rent, any trans | Rent, any trans | Rent, any trans | Rent, any trans
Rent T122 | Rent T111 | Rent, any trans | Rent, any trans | Rent, any trans | Rent, any trans

END OF DAY, SHUT-DOWN, and STARTUP

Cust Add, Errs + Good data | Cust Change, Err, Abort | Rent T331 | Copy Change, Errs + Good data | Try to crash system with bad trans | Rent, any trans
Delete Cust, Errs + Good data | Delete Video, Errs | Rent T332 | Cust Change | Video Add | Rent, any trans

END OF DAY, SHUT-DOWN, and STARTUP

END OF MONTH

NOTE: Txyz Transaction ID: x = User, y = Day, z = Transaction number

FIGURE 17-25 ABC Video System Test Overview: Test Schedule

Then, the transactions are interleaved with other erroneous and legal transactions for the other ABC processes as planned in Figure 17-25. Notice that the required transactions are only a subset of the total transactions included in the test. The required transactions provide for exhaustive transaction coverage. The other transactions in Figure 17-25 are a mix of simple and complex transactions. Test scripts to follow the plan for each user are then developed; this is left as a student exercise.

Last, the test is conducted. During each shutdown procedure, the end-of-day reports are generated and reset. The data may or may not be checked after the first day to verify that they are correct. If errors are suspected, the files and reports should be checked to verify accuracy. When one whole day runs through without errors, the entire set of test scripts can be executed. After an entire execution of each test script completes, the test team convenes and reviews all test scripts together to discuss unexpected results. All data from the files are verified against their predicted final contents; that is, unless a problem is suspected, intermediate intraday results are not verified during system testing. Errors that are found are reconciled and fixed as required. The test scripts are run repeatedly until no errors are generated. Then the test team should take real transactions from several days of activity and do the same type of test all over again. These transactions should also have file and report contents predicted. This "live-data" test should be successful if system testing has been successful. If it is not, the errors found should be corrected, and transactions that cause the same errors should be added to the system test. After the test is complete, the regression test package is developed for use during application maintenance.

Automated Support Tools for Testing

Many CASE tools now support the automatic generation of test data for the specifications in their design products. There are also hundreds of different types of automated testing support tools that are not related to CASE. Some of the functions of these tools include

  • static code analyzers 
  • dynamic code analyzers 
  • assertion generators and processors 
  • test data generators 
  • test drivers 
  • output comparators

In Table 17-10, several examples of CASE testing tools are presented. Many other types of testing support tools are available for use outside of a CASE environment. The most common test support tools are summarized below and sample products are listed in Table 17-11.

A code analyzer can range from simple to complex. In general, static code analyzers evaluate the syntax and executability of code without ever executing it. They cross-reference all references to each line of code. Analyzers can determine code that is never executed, infinite loops, files that are only read once, data type errors, global, common, or parameter errors, and other common problems. Another output of some static analyzers is a cross-reference of all variables and the lines of code on which they are referenced. Static analyzers are useful tools, but they cannot determine the worth or reliability of the code, which are desired functions. 
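As a concrete illustration of one static-analyzer output, the sketch below builds a cross-reference of variable names to the lines on which they appear, using Python's ast module and without ever executing the analyzed code. The sample source under analysis is hypothetical.

```python
import ast
from collections import defaultdict

# Sketch of a static-analyzer output: a cross-reference of every variable
# name and the lines on which it is referenced, built without executing
# the code under analysis.

SOURCE = """\
fee = 2
days = 3
total = fee * days
unused = 99
"""

xref = defaultdict(list)
for node in ast.walk(ast.parse(SOURCE)):
    if isinstance(node, ast.Name):
        xref[node.id].append(node.lineno)

print(dict(xref))

# Names referenced on only one line (like 'unused') are candidates for
# a dead-code report.
single = sorted(n for n, lines in xref.items() if len(lines) == 1)
print(single)   # ['total', 'unused']
```

Real analyzers build the same kind of table over whole programs and add control-flow analysis to find unreachable code and infinite loops.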

A special type of code analyzer audits code for compliance to standards and structured programming (or other) guidelines. Auditors can be customized by each using company to check their conventions for code structure.

A more complex type of code analyzer is a dynamic tool. Dynamic code analyzers run while the program is executing, hence the term dynamic. They can determine one or more of: coverage, tracing, tuning, timing, resource use, symbolic execution, and assertion checking. Coverage analysis of test data determines how much of the program is exercised by the set of test data. Tracing shows the execution path by statement of code. Some tools list values of key variables identified by the programmer. Languages on PCs usually have dynamic tracers as an execute option. Tuning analyzers identify the parts of the program executed most frequently, thus identifying code for tuning should a timing problem occur. Timing analysis reports CPU time used by a module or program. Resource usage software reports physical I/Os, CPU time, number of database transactions, and other hardware and software utilization. Symbolic executors run with symbolic, rather than real, data to identify the logic paths and computations for programmer-specified levels of coverage.
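
In contrast with the static approach, a dynamic analyzer must watch the program run. The sketch below (illustrative only; the function names are invented) uses Python's `sys.settrace` hook to record which lines of a unit execute under a given test input, which is the core of a coverage analysis:

```python
import sys

def trace_lines(func, *args):
    """Run func under a line tracer and report which of its lines executed.

    A toy dynamic analyzer: coverage is observed while the code runs,
    not deduced from its text.
    """
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            # record line offsets relative to the def statement
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer

    old = sys.gettrace()
    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(old)  # restore any pre-existing tracer
    return result, executed

def absval(n):       # offset 0
    if n < 0:        # offset 1
        return -n    # offset 2
    return n         # offset 3

result, covered = trace_lines(absval, 5)
# with n = 5, the 'return -n' line (offset 2) is never exercised,
# so this one test input does not achieve full statement coverage
```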

TABLE 17-10 CASE Test Tools

Tool Name Vendor Features and Functions
Teamwork Cadre Technologies, Inc., Providence, RI Testing Software
Telon and other products Pansophic Systems, Inc., Lisle, IL Code Generation, Test Management


An assertion is a statement of fact about the state of some entity. An assertion generator derives facts about the state the data in a program should be in, based on test data supplied by the programmer. If the assertions fail based on program performance, an error is generated. Assertion generators are useful testing tools for artificial intelligence programs and any programming language with which a generator can work. Assertion checkers evaluate the truth of programmer-coded assertions within code. For instance, the statement 'Assert make-buy = 0' might be evaluated as true or false.
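
A minimal assertion checker can be sketched in a few lines. In this illustration (the program state and predicates are invented for the example), each programmer-coded assertion is a labeled predicate evaluated against the program state, and the checker reports which ones fail:

```python
def check_assertions(state, assertions):
    """Evaluate programmer-coded assertions against a program state.

    Each assertion is a (label, predicate) pair; the checker returns
    the labels of the assertions that fail.
    """
    failures = []
    for label, predicate in assertions:
        if not predicate(state):
            failures.append(label)
    return failures

# invented program state, echoing the 'Assert make-buy = 0' example
state = {"make_buy": 0, "balance": -5}
assertions = [
    ("make-buy = 0", lambda s: s["make_buy"] == 0),
    ("balance >= 0", lambda s: s["balance"] >= 0),
]
failed = check_assertions(state, assertions)  # only the balance check fails
```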

A test data generator (TDG) is a program that can generate any volume of data records based on programmer specifications. There are four kinds of test data generators: static, pathwise, data specification, and random. A static TDG requires programmer specification for the type, number, and data contents of each field. A simple static TDG, the IEBDG utility from IBM, generates letters or numbers in any number of fields with some specified number of records output. It is useful for generating volumes of test data for timing tests as long as the records contain mostly zeros and ones. Unless the test data generator is easy to use, it quickly becomes more cumbersome than self-made test data.
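
The behavior of a simple static TDG can be sketched as follows. This Python fragment (an illustration, not IEBDG itself) takes a programmer specification of field names, widths, and fill types and emits fixed-format records; seeding the generator makes runs repeatable:

```python
import random
import string

def generate_records(field_specs, count, seed=0):
    """Generate `count` fixed-format records from per-field specs.

    A toy static TDG: each spec names a field, its width, and a fill
    type ('alpha' or 'digit'), all supplied by the programmer.
    """
    rng = random.Random(seed)  # seeded so test runs are repeatable
    records = []
    for _ in range(count):
        fields = []
        for name, width, kind in field_specs:
            pool = string.ascii_uppercase if kind == "alpha" else string.digits
            fields.append("".join(rng.choice(pool) for _ in range(width)))
        records.append("".join(fields))
    return records

# invented record layout for a video order file
specs = [("cust_id", 6, "digit"), ("name", 10, "alpha"), ("amount", 5, "digit")]
batch = generate_records(specs, 3)
# each record is 6 + 10 + 5 = 21 characters wide
```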

TABLE 17-11 Other Testing Support Tools

Tool Name Vendor Features and Functions
Assist (vendor not listed) Coverage analysis, logic flow tracing, tracing, symbolic execution
Attest University of Massachusetts, Amherst, MA Coverage analysis, test data generation, data flow analysis, automatic path selection, constraint analysis
Automatic Test Data Generator (ATDG) TRW Systems, Inc., Redondo Beach, CA Test data generation, path analysis, anomaly detection, variable analysis, constraint evaluation
Autoretest TRW, Defense Systems Dept., Redondo Beach, CA Comparator, test driver, test data management, automated comparison of test parameters
C/Spot/Run Procase Corp., Santa Clara, CA Syntax analysis, dependency analysis, source code filtering, source code navigation, graphical representation of function calls, error filtering
COBOL Optimizer Instrumentor Softool Corp., Goleta, CA COBOL testing, path flow tracing, tracing, tuning
Cotune Softool Corp., Goleta, CA Coverage analysis, timing
Datamacs Management & Computer Services, Inc., Valley Forge, PA Test file generation, I/O specification analysis, file structure testing
DAVE Leon Osterweil, University of Colorado, Boulder, CO Static analyzer, diagnostics, data flow analysis, interface analysis, cross-reference, standards enforcer, documentation aid
DIFF Software Consulting Services, Allentown, PA File comparison
FACOM and Fadebug Fujitsu, Ltd. Output comparator, anomaly detector
Fortran Optimizer Instrumentor Softool Corp., Goleta, CA Coverage analysis, Fortran testing, path flow tracing, tracing, tuning
McCabe Tools M. McCabe & Associates, Columbia, MD Specification analysis, visual path testing, generates conditions for untested paths, computes metrics
MicroFocus COBOL Workbench MicroFocus, Palo Alto, CA Source navigation, interactive dynamic debugging, structure analysis, regression testing, tuning
Softool 80 Softool Corp., Goleta, CA Coverage analysis, tuning, timing, tracing
UX-Metric Quality Tools for Software Craftsmen, Mulino, OR Static analyzer, syntax checking, path analysis, tuning, volume testing, cyclic tests

Pathwise TDGs use input domain definitions to exercise specific paths in a program. These TDGs read the program code, create a representation of the control flow, select domain data to create representative input for a programmer-specified type of test, and execute the test. The possible programmer choices for test type include all feasible paths, statement coverage, or branch coverage. Since these are white-box techniques, unless a programmer is careful, a test can run for excessively long times.
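
The selection step of a pathwise TDG can be approximated with a crude sketch. Instead of analyzing control flow, the illustration below (all names invented) observes which branch each candidate input exercises and keeps one input per branch until branch coverage is reached:

```python
def classify(n):
    # toy unit under test with a single two-way branch
    return "neg" if n < 0 else "nonneg"

def inputs_for_branch_coverage(candidates):
    """Select a small input set that exercises both branches of classify.

    A crude stand-in for a pathwise TDG's selection step: it keeps the
    first input observed for each branch outcome and stops as soon as
    both branches have been hit.
    """
    chosen = {}
    for n in candidates:
        outcome = classify(n)
        if outcome not in chosen:
            chosen[outcome] = n
        if len(chosen) == 2:  # both branches covered
            break
    return chosen

picked = inputs_for_branch_coverage([3, 7, -2, 9])
# keeps 3 for the 'nonneg' branch and -2 for the 'neg' branch
```

A real pathwise TDG works from the control-flow representation of the code rather than from observed outcomes, which is precisely why careless test-type choices (such as all feasible paths) can make a run excessively long.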

Test drivers are software that simulate the execution of module tests. The tester writes code in the test driver language to provide for other module stubs, test data input, input/output parameters, files, messages, and global variable areas. The driver uses the test data input to execute the module. The other tester-defined items are used during the test to execute pieces of code without needing physical interfaces to any of the items. The major benefits of test drivers are the ease of developing regression test packages from the individual tests, and the forced standardization of test cases. The main problem with drivers is the need to learn another language to use the driver software.
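
In a language with a built-in test framework, the flavor of a test driver is easy to sketch. The Python fragment below (illustrative only; `price_order` and its collaborator are invented) uses a mock object as a module stub, supplies the test data input, and runs the case through a driver:

```python
import io
import unittest
from unittest import mock

def price_order(video_id, lookup):
    """Unit under test: look up a base price via a collaborator, add 8% tax."""
    base = lookup(video_id)
    return round(base * 1.08, 2)

class PriceOrderDriver(unittest.TestCase):
    def test_price_includes_tax(self):
        # the mock stands in for the real pricing interface (a module stub)
        stub_lookup = mock.Mock(return_value=10.00)
        self.assertEqual(price_order("V42", stub_lookup), 10.80)
        stub_lookup.assert_called_once_with("V42")

# drive the test without any physical interface to the real module
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PriceOrderDriver)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

Because each case is written in the framework's standard form, collecting the cases into a regression test package later is straightforward, which is the benefit the text describes.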

On-line test drivers are of several types. Batch simulators generate transactions in batch-mode processing to simulate multi-user, on-line processing. Transaction simulators copy a test script as entered in single-user mode for later re-execution with other copied test scripts to simulate multi-user interactions.

Output comparators compare two files and identify differences. This makes checking of databases and large files less time-consuming than it would otherwise be.
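
An output comparator is straightforward to sketch with a standard diff library. The Python fragment below (file contents invented for the example) compares an expected output against an actual one and reports only the differing lines:

```python
import difflib

def compare_outputs(expected_lines, actual_lines):
    """Report only the lines that differ between two output files."""
    diff = difflib.unified_diff(expected_lines, actual_lines,
                                fromfile="expected", tofile="actual",
                                lineterm="")
    # keep changed lines, drop the '---'/'+++' file headers
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

expected = ["header", "total=100", "trailer"]
actual = ["header", "total=101", "trailer"]
differences = compare_outputs(expected, actual)
# only the mismatched total line is reported
```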

Summary

Testing is the process of finding errors in an application's code and documentation. Testing is a difficult activity because it is a high-cost, time-consuming activity for which the returns diminish upon success. As such, it is frequently difficult for managers to understand the importance of testing in application development.

The levels of developmental testing include unit, integration, and system. In addition, an agent, who is not a project team member, performs quality assurance testing to validate the documentation and processing for the user. Code tests are on subroutines, modules, and programs to verify that individual code units work as expected. Integration tests verify the logic and processing for suites of modules, verifying intermodular communications. Systems tests verify that the application operates in its intended environment and meets requirements for constraints, response time, peak processing, backup and recovery, and security, access, and audit controls.

Strategies of testing are white-box, black-box, top-down, or bottom-up. White-box tests verify that specific logic of the application works as intended. White-box strategies include logic tests, mathematical proof tests, and cleanroom tests. Black-box strategies include equivalence partitioning, boundary value analysis, and error guessing. Heuristics for matching the test level to the strategy were provided.