
SOFTWARE TESTING 17IS63

MODULE -5
Syllabus

Integration and Component-Based Software Testing: Overview, Integration testing


strategies, Testing components and assemblies. System, Acceptance and Regression
Testing: Overview, System testing, Acceptance testing, Usability, Regression testing,
Regression test selection techniques, Test case prioritization and selective execution.
Levels of Testing, Integration Testing: Traditional view of testing levels, Alternative
life-cycle models, The SATM system, Separating integration and system testing, A
closer look at the SATM system, Decomposition-based, call graph-based, Path-based
integrations.

Integration and Component-Based Software Testing


Overview
The V model divides testing into four main levels of granularity:
 Module test
- checks the behavior of a module against its specifications or expectations;
 Integration test
- checks compatibility among modules;
 System and acceptance test
- check the behavior of the whole system with respect to specifications and user needs, respectively.

An effective integration test is built on a foundation of thorough module testing and inspection. Module test maximizes controllability and observability of an individual unit, and is more effective in exercising the full range of module behaviors, rather than just those that are easy to trigger and observe in a particular context of other modules. While integration testing may to some extent act as a process check on module testing (i.e., faults revealed during integration test can be taken as a signal of unsatisfactory unit testing), thorough integration testing cannot fully compensate for sloppiness at the module level.

Integration testing strategies


What is integration testing?

Integration versus Unit Testing


•Unit (module) testing is a necessary foundation
– Unit level has maximum controllability and visibility


– Integration testing can never compensate for inadequate unit testing


• Integration testing may serve as a process check
– If module faults are revealed in integration testing, they signal inadequate unit testing
– If integration faults occur in interfaces between correctly implemented modules, the errors can be traced to the module breakdown (decomposition) and interface specifications

Integration Faults

•Inconsistent interpretation of parameters or values


– Example: Mixed units (meters/yards) in Martian Lander
• Violations of value domains, capacity, or size limits
– Example: Buffer overflow
• Side effects on parameters or resources
– Example: Conflict on (unspecified) temporary file
• Omitted or misunderstood functionality
– Example: Inconsistent interpretation of web hits
• Nonfunctional properties
– Example: Unanticipated performance issues
• Dynamic mismatches
– Example: Incompatible polymorphic method calls

Example: A Memory Leak

Apache web server, version 2.0.48
Response to a normal page request on the secure (https) port:

/* Faulty version: no obvious error, but Apache leaked memory slowly
 * (in normal use) or quickly (if exploited for a DOS attack). */
static void ssl_io_filter_disable(ap_filter_t *f)
{
    bio_filter_in_ctx_t *inctx = f->ctx;
    inctx->ssl = NULL;
    inctx->filter_ctx->pssl = NULL;
}

/* Corrected version: the missing call frees a structure defined and created
 * elsewhere, accessed here only through an opaque pointer. The omission is
 * almost impossible to find with unit testing; inspection and some dynamic
 * techniques could have found it. */
static void ssl_io_filter_disable(ap_filter_t *f)
{
    bio_filter_in_ctx_t *inctx = f->ctx;
    SSL_free(inctx->ssl);
    inctx->ssl = NULL;
    inctx->filter_ctx->pssl = NULL;
}


Integration Plan + Test Plan


• Integration test plan drives and is driven by the project “build plan”
–A key feature of the system architecture and project plan

Big Bang Integration Test


An extreme and desperate approach: test only after integrating all modules.
+ Does not require scaffolding (the only excuse, and a bad one)
- Minimum observability, diagnosability, efficacy, feedback
- High cost of repair
• Recall: the cost of repairing a fault rises as a function of the time between error and repair

Structural and Functional Strategies


• Structural orientation:


Modules constructed, integrated and tested based on a hierarchical project structure


–Top-down, Bottom-up, Sandwich, Backbone
• Functional orientation:
Modules integrated according to application characteristics or features
–Threads, Critical module

Top-down integration works from the top level (in terms of the "use" or "include" relation) toward the bottom. No drivers are required if the program is tested from its top-level interface (e.g., GUI, CLI, web application).


Working Definition of Component


• Reusable unit of deployment and composition


– Deployed and integrated multiple times


– Integrated by different teams (usually)
• Component producer is distinct from component user
• Characterized by an interface or contract
•Describes access points, parameters, and all functional and non-functional behavior and
conditions for using the component
•No other access (e.g., source code) is usually available
• Often larger grain than objects or packages
– Example: A complete database system may be a component

Testing Components and Assemblies


Many software products are constructed, partly or wholly, from assemblies of prebuilt software
components. A key characteristic of software components is that the organization that develops a
component is distinct from the (several) groups of developers who use it to construct systems. The
component developers cannot completely anticipate the uses to which a component will be put, and
the system developers have limited knowledge of the component. Testing components (by the
component developers) and assemblies (by system developers) therefore brings some challenges and
constraints that differ from testing other kinds of module.

Terminology for Components and Frameworks


Component A software component is a reusable unit of deployment and composition that is deployed
and integrated multiple times and usually by different teams. Components are characterized by a
contract or interface and may or may not have state.
Components are often confused with objects, and a component can be encapsulated by an
object or a set of objects, but they typically differ in many respects:
 Components typically use persistent storage, while objects usually have only local state.


 Components may be accessed by an extensive set of communication mechanisms, while objects are activated through method calls.
 Components are usually larger grain subsystems than objects.

Component contract or interface The component contract describes the access points and
parameters of the component, and specifies functional and nonfunctional behavior and any conditions
required for using the component.
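As an illustration (not part of the original text), the contract of a hypothetical cash-dispenser component could be published as a C header: it lists the access points and parameters and documents the functional and nonfunctional conditions of use, while the implementation behind it remains inaccessible to the system developer.

#ifndef DISPENSER_H
#define DISPENSER_H

/* dispenser.h - hypothetical contract for a cash-dispenser component.
 * System developers use only these declarations; the component's source
 * code and internal state are not accessible.                            */

typedef struct Dispenser Dispenser;   /* opaque type: no internal access  */

/* Access point: obtain a handle, or NULL if the device is unavailable.   */
Dispenser *dispenser_open(const char *device_name);

/* Functional condition: amount must be positive and a multiple of 20.
 * Returns 0 on success; a nonzero error code (no cash moved) on failure.
 * Nonfunctional condition (an assumed figure): completes within 2 s.     */
int dispenser_dispense(Dispenser *d, int amount);

/* Condition of use: each successful open is matched by exactly one close. */
void dispenser_close(Dispenser *d);

#endif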

Framework A framework is a micro-architecture or a skeleton of an application, with hooks for


attaching application-specific functionality or configuration-specific components. A framework can
be seen as a circuit board with empty slots for components.

Frameworks and design patterns Patterns are logical design fragments, while frameworks are
concrete elements of the application. Frameworks often implement patterns.
Component-based system A component-based system is a system built primarily by assembling
software components (and perhaps a small amount of application specific code) connected through a
framework or ad hoc "glue code."

COTS The term commercial off-the-shelf, or COTS, indicates components developed for sale to other organizations.


System, Acceptance, and Regression Testing


Overview
System, acceptance, and regression testing are all concerned with the behavior of a software system
as a whole, but they differ in purpose.

System testing is a check of consistency between the software system and its specification (it is a
verification activity). Like unit and integration testing, system testing is primarily aimed at
uncovering faults, but unlike testing activities at finer granularity levels, system testing focuses on
system-level properties. System testing together with acceptance testing also serves an important role
in assessing whether a product can be released to customers, which is distinct from its role in exposing
faults to be removed to improve the product.

System Testing
The essential characteristics of system testing are that it is comprehensive, based on a specification
of observable behavior, and independent of design and implementation decisions. System testing can
be considered the culmination of integration testing, and passing all system tests is tantamount to being complete and free of known bugs. The system test suite may share some test cases with test
suites used in integration and even unit testing, particularly when a thread-based or spiral model of
development has been taken and subsystem correctness has been tested primarily through externally
visible features and behavior. However, the essential characteristic of independence implies that test
cases developed in close coordination with design and implementation may be unsuitable. The
overlap, if any, should result from using system test cases early, rather than reusing unit and
integration test cases in the system test suite. Independence in system testing avoids repeating
software design errors in test design. This danger exists to some extent at all stages of development,
but always in trade for some advantage in designing effective test cases based on familiarity with the
software design and its potential pitfalls. The balance between these considerations shifts at different
levels of granularity, and it is essential that independence take priority at some level to obtain a
credible assessment of quality.


Unit, Integration, and System Testing

Acceptance Testing
The purpose of acceptance testing is to guide a decision as to whether the product in its current
state should be released. The decision can be based on measures of the product or process. Measures
of the product are typically some inference of dependability based on statistical testing. Measures of
the process are ultimately based on comparison to experience with previous products.


Although system and acceptance testing are closely tied in many organizations, fundamental
differences exist between searching for faults and measuring quality. Even when the two activities
overlap to some extent, it is essential to be clear about the distinction, in order to avoid drawing
unjustified conclusions.
Quantitative goals for dependability include reliability, availability, and mean time between failures. These are essentially statistical measures and depend on a statistically valid
approach to drawing a representative sample of test executions from a population of program
behaviors. Systematic testing, which includes all of the testing techniques presented heretofore in this
book, does not draw statistically representative samples. Their purpose is not to fail at a "typical"
rate, but to exhibit as many failures as possible. They are thus unsuitable for statistical testing.

Usability
A usable product is quickly learned, allows users to work efficiently, and is pleasant to use. Usability
involves objective criteria such as the time and number of operations required to perform tasks and
the frequency of user error, in addition to the overall, subjective satisfaction of users.

Even if usability is largely based on user perception and thus is validated based on user feedback, it
can be verified early in the design and through the whole software life cycle. The process of verifying
and validating usability includes the following main steps:
Inspecting specifications with usability checklists. Inspection provides early feedback on usability.
Testing early prototypes with end users to explore their mental model (exploratory test), evaluate
alternatives (comparison test), and validate software usability. A prototype for early assessment of
usability may not include any functioning software; a cardboard prototype may be as simple as a
sequence of static images presented to users by the usability tester.
Testing incremental releases with both usability experts and end users to monitor progress and
anticipate usability problems.


System and acceptance testing that includes expert-based inspection and testing, user-based testing, comparison testing against competitors, and analysis and checks often done automatically, such as a check of link connectivity and verification of browser compatibility.

Regression Testing
When building a new version of a system (e.g., by removing faults, changing or adding
functionality, porting the system to a new platform, or extending interoperability), we may also
change existing functionality in unintended ways. Sometimes even small changes can produce
unforeseen effects that lead to new failures. For example, a guard added to an array to fix an overflow
problem may cause a failure when the array is used in other contexts, or porting the software to a new
platform may expose a latent fault in creating and modifying temporary files.
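A small hypothetical C sketch (not from the original text) of this kind of unintended change: a guard added to fix a buffer overflow silently truncates long input, which regresses a different caller that relied on the full value.

#include <stdio.h>
#include <string.h>

#define BUF_SIZE 16

/* New version: a guard added to fix a reported buffer overflow. The copy
 * is now truncated to at most BUF_SIZE - 1 characters.                    */
static void store_name(char dest[BUF_SIZE], const char *src)
{
    strncpy(dest, src, BUF_SIZE - 1);    /* the added guard                */
    dest[BUF_SIZE - 1] = '\0';
}

/* A different context in which the array is used: identifiers longer than
 * 15 characters are now silently truncated, so this comparison, which
 * happened to work in the previous version, begins to fail.               */
static int same_identifier(const char *id)
{
    char stored[BUF_SIZE];
    store_name(stored, id);
    return strcmp(stored, id) == 0;
}

int main(void)
{
    /* Prints 0 after the fix: the overflow is gone, but a regression
       appears in this other use of the routine.                           */
    printf("%d\n", same_identifier("ACCT-0001-0002-0003"));
    return 0;
}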
When a new version of software no longer correctly provides functionality that should be
preserved, we say that the new version regresses with respect to former versions. The nonregression
of new versions (i.e., preservation of functionality) is a basic quality requirement. Disciplined design and development techniques, including precise specification and modularity that encapsulates independent design decisions, improve the likelihood of achieving nonregression. Testing activities
that focus on regression problems are called (non) regression testing. Usually "non" is omitted and
we commonly say regression testing.
 A simple approach to regression testing consists of reexecuting all test cases designed for
previous versions.
 Even this simple retest all approach may present nontrivial problems and costs.
 Former test cases may not be reexecutable on the new version without modification, and
rerunning all test cases may be too expensive and unnecessary.
 A good quality test suite must be maintained across system versions.


Regression Test Selection Techniques

Even when we can identify and eliminate obsolete test cases, the number of tests to be reexecuted
may be large, especially for legacy software. Executing all test cases for large software products may
require many hours or days of execution and may depend on scarce resources such as an expensive
hardware test harness.
For example, some mass market software systems must be tested for compatibility with hundreds of
different hardware configurations and thousands of drivers. Many test cases may have been designed
to exercise parts of the software that cannot be affected by the changes in the version under test. Test
cases designed to check the behavior of the file management system of an operating system are unlikely
to provide useful information when reexecuted after changes of the window manager. The cost of
reexecuting a test suite can be reduced by selecting a subset of test cases to be reexecuted, omitting
irrelevant test cases or prioritizing execution of subsets of the test suite by their relation to changes.

Regression test selection techniques are based on either code or specifications. Code-based selection techniques select a test case for execution if it exercises a portion of the code that has been modified. Specification-based criteria select a test case for execution if it is relevant to a portion of the specification that has been changed. Code-based regression test techniques can be supported by
relatively simple tools. They work even when specifications are not properly maintained. However,
like code-based test techniques in general, they do not scale well from unit testing to integration and
system testing. In contrast, specification-based criteria scale well and are easier to apply to changes
that cut across several modules. However, they are more challenging to automate and require
carefully structured and well-maintained specifications.
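A minimal sketch in C of the code-based idea (the test names, covered functions, and change list are hypothetical): record, for each test case, the functions it exercised in the previous version, and select for reexecution every test that touches a function modified in the new version.

#include <stdio.h>
#include <string.h>

#define MAX_COVER 4

struct test_case {
    const char *name;
    const char *covers[MAX_COVER];   /* functions exercised by this test  */
};

/* Coverage recorded for the previous version (hypothetical data).         */
static const struct test_case suite[] = {
    { "T1", { "validate_pin", "get_pin", NULL } },
    { "T2", { "dispense_cash", "control_door", NULL } },
    { "T3", { "report_balance", NULL } },
};

/* Functions modified in the new version (e.g., taken from version control). */
static const char *changed[] = { "get_pin", NULL };

static int exercises_changed_code(const struct test_case *t)
{
    for (int i = 0; i < MAX_COVER && t->covers[i]; i++)
        for (int j = 0; changed[j]; j++)
            if (strcmp(t->covers[i], changed[j]) == 0)
                return 1;
    return 0;
}

int main(void)
{
    /* Select only tests whose covered code was modified: here, T1. */
    for (size_t k = 0; k < sizeof suite / sizeof suite[0]; k++)
        if (exercises_changed_code(&suite[k]))
            printf("re-execute %s\n", suite[k].name);
    return 0;
}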


Test Case Prioritization and Selective Execution

Regression testing criteria may select a large portion of a test suite. When a regression test suite is
too large, we must further reduce the set of test cases to be executed.
Random sampling is a simple way to reduce the size of the regression test suite. Better
approaches prioritize test cases to reflect their predicted usefulness. In a continuous cycle of retesting
as the product evolves, high-priority test cases are selected more often than low-priority test cases.
With a good selection strategy, all test cases are executed sooner or later, but the varying periods
result in an efficient rotation in which the cases most likely to reveal faults are executed most
frequently.
Priorities can be assigned in many ways. A simple priority scheme assigns priority according
to the execution history: Recently executed test cases are given low priority, while test cases that have
not been recently executed are given high priority. In the extreme, heavily weighting execution
history approximates round robin selection.
Other history-based priority schemes predict fault detection effectiveness. Test cases that have
revealed faults in recent versions are given high priority. Faults are not evenly distributed, but tend
to accumulate in particular parts of the code or around particular functionality. Test cases that
exercised faulty parts of the program in the past often exercise faulty portions of subsequent revisions.
Structural coverage leads to a set of priority schemes based on the elements covered by a test case.
We can give high priority to test cases that exercise elements that have not recently been exercised.
Both the number of elements covered and the "age" of each element (time since that element was
covered by a test case) can contribute to the prioritization.
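A minimal sketch in C combining these history-based ideas (the weights and data are illustrative, not prescribed by the text): tests that have not run recently, tests that revealed faults in recent versions, and tests covering long-unexercised elements receive higher priority.

#include <stdio.h>

struct test_info {
    const char *name;
    int builds_since_last_run;    /* execution history                     */
    int faults_found_recently;    /* fault detection effectiveness         */
    int max_element_age;          /* "age" of the oldest element it covers */
};

/* Simple weighted score; the weights are arbitrary illustration values.   */
static int priority(const struct test_info *t)
{
    return 3 * t->builds_since_last_run
         + 5 * t->faults_found_recently
         + 1 * t->max_element_age;
}

int main(void)
{
    struct test_info tests[] = {
        { "T1", 0, 0,  2 },   /* ran in the last build, no recent faults   */
        { "T2", 4, 1,  7 },   /* stale and recently fault-revealing        */
        { "T3", 2, 0, 12 },   /* covers elements not exercised for a while */
    };
    for (int i = 0; i < 3; i++)
        printf("%s priority %d\n", tests[i].name, priority(&tests[i]));
    /* Execute the highest-priority tests first; lower-priority tests still
       run eventually, so the rotation covers the whole suite over time.   */
    return 0;
}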


Levels of Testing, Integration Testing


Traditional view of testing levels
Traditional Waterfall Testing
The traditional model of software development is the waterfall model, which is
illustrated in Figure. It is sometimes drawn as a V, as in Figure, to emphasize how the basic levels of testing reflect the early waterfall phases. (In ISTQB circles, this is
known as the “V-Model.”) In this view, information produced in one of the
development phases constitutes the basis for test case identification at that level.
Nothing controversial here: we certainly would hope that system test cases are clearly
correlated with the requirements specification, and that unit test cases are derived from
the detailed design of the unit. On the upper left side of the waterfall, the tight what/
how cycles are important. They underscore the fact that the predecessor phase defines
what is to be done in the successor phase. When complete, the successor phase states
how it accomplishes “what” was to be done. These are also ideal points at which to
conduct software reviews. Some humorists assert that these phases are the fault
creation phases, and those on the right are the fault detection phases.


Two observations: a clear presumption of functional testing is used here, and an implied bottom–up
testing order is used. Here, “bottom–up” refers to levels of abstraction—unit first, then integration,
and finally, system testing. Of the three main levels of testing (unit, integration, and system), unit
testing is best understood. System testing is understood better than integration testing, but both need
clarification. The bottom–up approach sheds some insight: test the individual components, and then
integrate these into subsystems until the entire system is tested. System testing should be something that the customer (or end user) can understand, since system test cases are correlated with the requirements specification.

Waterfall Spin-Offs

There are three mainline derivatives of the waterfall model: incremental development, evolutionary
development, and the spiral model (Boehm, 1988). Each of these involves a series of increments or
builds as shown in Figure 11.3. It is important to keep preliminary design as an integral phase rather
than to try to amortize such high-level design across a series of builds. (To do so usually results in
unfortunate consequences of design choices made during the early builds that are regrettable in later

PREPARED BY, ASSIT. PROF. RANGANATHA K, DEPT. OF ISE, CEC 19


SOFTWRAE TESTING 17IS63

builds.) This single design step cannot be done in the evolutionary and spiral models. This is also a
major limitation of the bottom–up agile methods.
Within a build, the normal waterfall phases from detailed design through testing occur with one
important difference: system testing is split into two steps—regression and progression testing. The
main impact of the series of builds is that regression testing becomes necessary. The goal of
regression testing is to ensure that things that worked correctly in the previous build still work with
the newly added code. Regression testing can either precede or follow integration testing, or possibly
occur in both places. Progression testing assumes that regression testing was successful and that the
new functionality can be tested. Regression testing is an absolute necessity in a series of builds
because of the well-known ripple effect of changes to an existing system.

Specification-Based Life Cycle Models

When systems are not fully understood (by either the customer or the developer), functional
decomposition is perilous at best. Barry Boehm jokes when he describes the customer who says “I
don’t know what I want, but I’ll recognize it when I see it.” The rapid prototyping life cycle (Figure
11.4) deals with this by providing the “look and feel” of a system. Thus, in a sense, customers can
recognize what they “see.” In turn, this drastically reduces the specification-to-customer feedback
loop by producing very early synthesis. Rather than build a final system, a "quick and dirty" prototype is built and then used to elicit customer feedback. Depending on the feedback, more prototyping
cycles may occur. Once the developer and the customer agree that a prototype represents the desired
system, the developer goes ahead and builds to a correct specification. At this point, any of the
waterfall spin-offs might also be used. The agile life cycles are the extreme of this pattern.

Rapid prototyping has no new implications for integration testing; however, it has very
interesting implications for system testing. Where are the requirements? Is the last prototype the
specification?
How are system test cases traced back to the prototype? One good answer to questions such as these
is to use the prototyping cycles as information-gathering activities and then produce a requirements
specification in a more traditional manner. Another possibility is to capture what the customer does
with the prototypes, define these as scenarios that are important to the customer, and then use these
as system test cases. These could be precursors to the user stories of the agile life cycles. The main
contribution of rapid prototyping is that it brings the operational (or behavioral) viewpoint to the
requirements specification phase. Usually, requirements specification techniques emphasize the
structure of a system, not its behavior. This is unfortunate because most customers do not care about
the structure, and they do care about the behavior.
Fig. Rapid prototyping life cycle.


The SATM System


The Simple Automatic Teller Machine (SATM) system, in the version developed here, is built around the fifteen screens shown in Figure 4.6. This is a greatly reduced system; commercial ATM systems have hundreds of screens and numerous time-outs.

The SATM terminal is sketched in Figure 4.7 below. In addition to the display screen, there are function buttons B1, B2, and B3, a digit keypad with a cancel key, slots for printer receipts and ATM cards, and doors for deposits and cash withdrawals.


The SATM system is described here with a traditional, structured analysis approach in Figure.

The structured analysis approach to requirements specification is the most widely used method in the
world. It enjoys extensive CASE tool support as well as commercial training. The technique is based
on three complementary models: function, data, and control. Here we use data flow diagrams for the

PREPARED BY, ASSIT. PROF. RANGANATHA K, DEPT. OF ISE, CEC 23


SOFTWRAE TESTING 17IS63

functional models, entity/relationship models for data, and finite state machine models for the control
aspect of the SATM system.

The functional and data models were drawn with the Deft CASE tool from Sybase Inc. That tool
identifies external devices with lower case letters, and elements of the functional decomposition with
numbers. The open and filled arrowheads on flow arrows signify whether the flow item is simple or
compound. The portions of the SATM system shown here pertain generally to the personal
identification number (PIN) verification portion of the system. The Deft CASE tool distinguishes
between simple and compound flows, where compound flows may be decomposed into other flows, which may themselves be compound. The graphic appearance of this choice is that simple flows have
filled arrowheads, while compound flows have open arrowheads. As an example, the compound flow
“screen” has the following decomposition: screen is comprised of
screen1 welcome
screen2 enter PIN
screen3 wrong PIN
screen4 PIN failed, card retained
screen5 select trans type
screen6 select account type
screen7 enter amount
screen8 insufficient funds
screen9 cannot dispense that amount
screen10 cannot process withdrawals
screen11 take your cash
screen12 cannot process deposits
screen13 put dep envelop in slot
screen14 another transaction?
screen15 Thanks; take card and receipt
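As a small illustration (not part of the original text), this decomposition of the compound flow "screen" could be written in code as a C enumeration of the fifteen simple screens:

/* Hypothetical representation of the SATM screens listed above. */
enum satm_screen {
    SCREEN_WELCOME = 1,          /* screen1  */
    SCREEN_ENTER_PIN,            /* screen2  */
    SCREEN_WRONG_PIN,            /* screen3  */
    SCREEN_PIN_FAILED_CARD_KEPT, /* screen4  */
    SCREEN_SELECT_TRANS_TYPE,    /* screen5  */
    SCREEN_SELECT_ACCOUNT_TYPE,  /* screen6  */
    SCREEN_ENTER_AMOUNT,         /* screen7  */
    SCREEN_INSUFFICIENT_FUNDS,   /* screen8  */
    SCREEN_CANNOT_DISPENSE,      /* screen9  */
    SCREEN_NO_WITHDRAWALS,       /* screen10 */
    SCREEN_TAKE_CASH,            /* screen11 */
    SCREEN_NO_DEPOSITS,          /* screen12 */
    SCREEN_PUT_ENVELOPE_IN_SLOT, /* screen13 */
    SCREEN_ANOTHER_TRANSACTION,  /* screen14 */
    SCREEN_THANKS_TAKE_CARD      /* screen15 */
};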

Figure 4.10 is an Entity/Relationship diagram of the major data structures in the SATM system:
Customers, Accounts, Terminals, and Transactions. Good data modeling practice dictates postulating
an entity for each portion of the system that is described by data that is retained (and used by
functional components).


Among the data the system would need for each customer are the customer’s identification and
personal account number (PAN); these are encoded into the magnetic strip on the customer’s ATM
card. We would also want to know information about a customer’s account(s), including the account
numbers, the balances, the type of account (savings or checking), and the Personal Identification
Number (PIN) of the account. At this point, we might ask why the PIN is not associated with the
customer, and the PAN with an account. Some design has crept into the specification at this point: if
the data were as questioned, a person’s ATM card could be used by anyone; as it is, the present
separation predisposes a security checking procedure. Part of the E/R model describes relationships
among the entities: a customer HAS account(s), a customer conducts transaction(s) in a SESSION,
and, independent of customer information, transaction(s) OCCUR at an ATM terminal. The single
and double arrowheads signify the singularity or plurality of these relationships: one customer may
have several accounts and may conduct none or several transactions. Many transactions may occur at
a terminal, but one transaction never occurs at a multiplicity of terminals.
The dataflow diagrams and the entity/relationship model contain information that is
primarily structural. This is problematic for testers, because test cases are concerned with behavior,
not with structure. As a supplement, the functional and data information are linked by a control model; here we use a finite state machine. The upper level finite state machine in Figure divides the system
into states that correspond to stages of customer usage.

The decomposition of the Await PIN state is shown in Figure 4.12. In both of these figures, state
transitions are caused either by events at the ATM terminal or by data conditions (such as the
recognition that a PIN is correct).


The function, data, and control models are the basis for design activities in the waterfall model (and
its spin-offs). During design, some of the original decisions may be revised based on additional
insights and more detailed requirements. The end result is a functional decomposition such as the
partial one shown in the structure chart in Figure 4.13. Notice that the original first level
decomposition into four subsystems is continued: the functionality has been decomposed to lower
levels of detail.

Fig. 4.13 Partial functional decomposition of the SATM system (the units are listed, with abbreviated numbers, in Table 1 below).

A Closer Look at the SATM System


We described the SATM system in terms of its output screens (Figure 4.6), the terminal itself (Figure
4.7), its context and partial dataflow (Figures 4.8 and 4.9), an entity/relationship model of its data
(Figure 4.10), finite state machines describing some of its behavior (Figures 4.11 and 4.12), and a partial functional decomposition (Figure 4.13). We also developed a PDL description of the main
program and two units, ValidatePIN and GetPIN. We begin here by expanding the functional
decomposition that was started in Figure 4.13; the numbering scheme preserves the levels of the
components in that figure. For easier reference, each component that appears in our analysis is given
a new (shorter) number; these numbers are given in Table 1. (The only reason for this is to make the
figures and spreadsheet more readable.)
Table 1 SATM Units and Abbreviated Names
Unit Number Unit Name
1 SATM System
A Device Sense & Control
D Door Sense & Control
2 Get Door Status
3 Control Door
4 Dispense Cash
E Slot Sense & Control
5 WatchCardSlot
6 Get Deposit Slot Status
7 Control Card Roller
8 Control Envelope Roller
9 Read Card Strip
10 Central Bank Comm.
11 Get PIN for PAN
12 Get Account Status
13 Post Daily Transactions
B Terminal Sense & Control
14 Screen Driver
15 Key Sensor
C Manage Session
16 Validate Card
17 Validate PIN
18 GetPIN
F Close Session
19 New Transaction Request
20 Print Receipt
21 Post Transaction Local
22 Manage Transaction
23 Get Transaction Type
24 Get Account Type
25 Report Balance
26 Process Deposit
27 Process Withdrawal

Decomposition-Based Integration
Mainline introductory software engineering texts, for example, Pressman (2005) and Schach
(2002), typically present four integration strategies based on the functional decomposition tree of the
procedural software: top–down, bottom–up, sandwich, and the vividly named “big bang.”

We can dispense with the big bang approach most easily: in this view of integration, all the
units are compiled together and tested at once. The drawback to this is that when (not if!) a failure is
observed, few clues are available to help isolate the location(s) of the fault.
The functional decomposition tree is the basis for integration testing because it is the main
representation, usually derived from final source code, which shows the structural relationship of the
system with respect to its units. All these integration orders presume that the units have been
separately tested; thus, the goal of decomposition-based integration is to test the interfaces among
separately tested units. A functional decomposition tree reflects the lexicological inclusion of units, in terms of the order in which they need to be compiled, to assure the correct referential scope of
variables and unit names.
Test the interfaces and interactions among separately tested units
 Three different approaches
1. Based on functional decomposition
2. Based on call graphs
3. Based on paths
Functional Decomposition
 Create a functional hierarchy for the software
 Problem is broken up into independent task units, or functions
 Units can be run either
Sequentially and in a synchronous call-reply manner
Or simultaneously on different processors
Used during planning, analysis and design

Four strategies
Top-down
Bottom-up
Sandwich
Big bang


TOP – DOWN APPROACH


 Top-down integration strategy
 Focuses on testing the top layer or the controlling subsystem first (i.e. the main, or the root of
the call tree)
 The general process in top-down integration strategy is
 To gradually add more subsystems that are referenced/required by the already tested
subsystems when testing the application
 Do this until all subsystems are incorporated into the test
 Special code is needed to do the testing
Test stub
 A program or a method that simulates the input-output functionality of a missing subsystem by answering the calling sequence of the calling subsystem and returning simulated data
Example of Top-down approach
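A minimal sketch in C of the idea, with hypothetical units rather than the SATM code: the top-level unit under test calls a lower-level unit that has not been integrated yet, so a test stub stands in for it and returns canned data.

#include <stdio.h>

/* Stub for a lower-level unit that is not integrated yet: it simulates the
 * unit's input-output behavior by returning fixed, simulated data.        */
static int get_account_balance_stub(int account_id)
{
    printf("stub called for account %d\n", account_id);  /* observability  */
    return 500;                          /* canned balance for this test   */
}

/* Top-level unit under test: in top-down integration it is exercised
 * against the stub before the real balance routine is integrated.         */
static int can_withdraw(int account_id, int amount)
{
    return amount > 0 && amount <= get_account_balance_stub(account_id);
}

int main(void)
{
    printf("withdraw 200: %d\n", can_withdraw(1234, 200));   /* expect 1 */
    printf("withdraw 900: %d\n", can_withdraw(1234, 900));   /* expect 0 */
    return 0;
}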


Top-Down integration issues


 Writing stubs can be difficult
o Especially when parameter passing is complex.
o Stubs must allow all possible conditions to be tested
 Possibly a very large number of stubs may be required
o Especially if the lowest level of the system contains many functional units
 One solution to avoid too many stubs
o Modified top-down testing strategy
o Test each layer of the system decomposition individually before merging the layers
o Disadvantage of modified top-down testing
Both stubs and drivers are needed

Bottom-Up integration
 Bottom-Up integration strategy
 Focuses on testing the units at the lowest levels first
 Gradually includes the subsystems that reference/require the previously tested subsystems
 Do until all subsystems are included in the testing
 Special driver code is needed to do the testing
 The driver is a specialized routine that passes test cases to a subsystem
Subsystem is not everything below current root module, but a sub-tree down to the leaf
level


Example of bottom–up integration
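A minimal sketch in C, again with hypothetical units: the low-level unit is tested first, and a driver takes the place of the not-yet-integrated caller by passing test cases to the unit and checking the results.

#include <stdio.h>

/* Low-level unit, tested first in bottom-up integration.                  */
static int compute_fee(int amount)
{
    return amount < 100 ? 2 : 0;         /* flat fee for small withdrawals */
}

/* Driver: a specialized routine that passes test cases to the subsystem
 * in place of the real, not-yet-integrated calling unit.                  */
int main(void)
{
    struct { int amount; int expected; } cases[] = {
        { 50, 2 }, { 100, 0 }, { 300, 0 }
    };
    int failures = 0;
    for (int i = 0; i < 3; i++) {
        int got = compute_fee(cases[i].amount);
        if (got != cases[i].expected) {
            printf("FAIL: compute_fee(%d) = %d, expected %d\n",
                   cases[i].amount, got, cases[i].expected);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures;
}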


Bottom-Up Integration Issues
 Not an optimal strategy for functionally decomposed systems
o Tests the most important subsystem (user interface) last
 More useful for integrating object-oriented systems
 Drivers may be more complicated than stubs
 Fewer drivers than stubs are typically required


Sandwich Integration
 Combines top-down strategy with bottom-up strategy
 Less stub and driver development effort
 Added difficulty in fault isolation
 Doing big-bang testing on sub-trees
 Sandwich example is as shown in the figure.

Integration test metrics


 The number of integration tests for a decomposition tree is the following

Sessions = nodes – leaves + edges


 For SATM, there are 42 integration test sessions, which correspond to 42 separate sets of test cases
 For top-down integration, (nodes – 1) stubs are needed


 For bottom-up integration, (nodes – leaves) drivers are needed


 For SATM, 32 stubs and 10 drivers are needed
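Worked out with the unit counts from Table 1: the SATM decomposition tree has 33 units (nodes), of which 23 are leaves (consistent with the driver count above), and, being a tree, it has 33 – 1 = 32 edges. The formula then gives sessions = 33 – 23 + 32 = 42; top-down integration needs 33 – 1 = 32 stubs, and bottom-up integration needs 33 – 23 = 10 drivers.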

Call Graph-Based Integration


The basic idea is to use the call graph instead of the decomposition tree
The call graph is a directed, labeled graph
Vertices are program units; e.g. methods
A directed edge joins calling vertex to the called vertex
Adjacency matrix is also used
Do not scale well, although some insights are useful
Nodes of high degree are critical


Call graph integration strategies


Two types of call graph based integration testing
Pair-wise Integration Testing
Neighborhood Integration Testing

Pair-Wise Integration
 The idea behind Pair-Wise integration testing
Eliminate need for developing stubs / drivers
Use actual code instead of stubs/drivers
 In order not to deteriorate the process to a big-bang strategy
Restrict a testing session to just a pair of units in the call graph
Results in one integration test session for each edge in the call graph
 Pair-wise integration session example


Neighbourhood integration
 The neighbourhood of a node in a graph
The set of nodes that are one edge away from the given node
 In a directed graph
All the immediate predecessor nodes and all the immediate successor nodes of a given node
 Neighborhood Integration Testing
Reduces the number of test sessions
Fault isolation is more difficult
 Neighbourhood integration example


Pros and Cons of Call-Graph Integration


 Aim to eliminate / reduce the need for drivers / stubs
Development effort is a drawback
 Closer to a build sequence
 Neighborhoods can be combined to create “villages”
 Suffer from fault isolation problems
Specially for large neighborhoods
 Redundancy
Nodes can appear in several neighborhoods
 Assumes that correct behaviour follows from correct units and correct interfaces
Not always the case
 Call-graph integration is well suited to devising a sequence of builds with which to implement
a system

Path-Based Integration
 Motivation
Combine structural and behavioral type of testing for integration testing as we did for unit
testing
 Basic idea
Focus on interactions among system units
Rather than merely to test interfaces among separately developed and tested units
 Interface-based testing is structural while interaction-based is behavioral
 Source node
A program statement fragment at which program execution begins or resumes.
For example the first “begin” statement in a program.


Also, immediately after nodes that transfer control to other units.


 Sink node
A statement fragment at which program execution terminates.
The final “end” in a program as well as statements that transfer control to other units.
 Module execution path
A sequence of statements that begins with a source node and ends with a sink node with no
intervening sink nodes.
 Message
A programming language mechanism by which one unit transfers control to another unit.
Usually interpreted as subroutine invocations
The unit which receives the message always returns control to the message source.

MM-Path
 An interleaved sequence of module execution paths and messages.
 Describes sequences of module execution paths that include transfers of control among
separate units.
 MM-paths always represent feasible execution paths, and these paths cross unit boundaries.
 There is no correspondence between MM paths and DD paths.
 The intersection of a module execution path with a unit is the analog of a slice with respect to
the MM-path function.


MM-Path Example
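A minimal sketch in C with three hypothetical units A, B, and C: one MM-path begins at the source node of unit A, follows A's module execution path to the message (call) to B, continues through B into C, and ends at C's sink node, where message quiescence occurs because C sends no messages.

#include <stdio.h>

/* Unit C: sends no messages, so reaching it gives message quiescence.     */
static int unit_c(int x)
{                                     /* source node of C                  */
    return x * x;                     /* sink node of C (returns to B)     */
}

/* Unit B: one module execution path runs from its source node to the
 * message to C; a second path resumes after C returns and ends at B's
 * final sink node.                                                        */
static int unit_b(int x)
{                                     /* source node of B                  */
    int y = x + 1;
    int z = unit_c(y);                /* sink node: message to C; the next
                                         statement is a new source node    */
    return z - 1;                     /* sink node of B (returns to A)     */
}

/* Unit A (main): the MM-path interleaves the module execution paths of
 * A, B, and C with the messages A->B and B->C and the returns back to A.  */
int main(void)
{                                     /* source node of A                  */
    int result = unit_b(3);           /* message to B                      */
    printf("result = %d\n", result);
    return 0;                         /* sink node of A                    */
}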

MM-path Graph
 Given a set of units their MM-path graph is the directed graph in which
Nodes are module execution paths
Edges correspond to messages and returns from one unit to another
 The definition is with respect to a set of units
It directly supports composition of units and composition based integration testing


MM-path guidelines
 How long, or deep, is an MM-path? What determines the end points?
Message quiescence
Occurs when a unit that sends no messages is reached
Module C in the example
 Data quiescence
Occurs when a sequence of processing ends in the creation of stored data that is not
immediately used (path D1 and D2)

 Quiescence points are natural endpoints for MM-paths


MM-Path metric
 How many MM-paths are sufficient to test a system
Should cover all source-to-sink paths in the set of units
 What about loops?
Use condensation graphs to get directed acyclic graphs
Avoids an excessive number of paths

Pros and cons of path-based integration


 Hybrid of functional and structural testing
Functional – represent actions with input and output
Structural – how they are identified
 Avoids the pitfalls of purely structural testing
 Fairly seamless union with system testing
 Path-based integration is closely coupled with actual system behaviour
Works well with OO testing
 No need for stub and driver development
 There is a significant effort involved in identifying MM-paths
MM-path compared to other methods
