STM Unit-4

UNIT-4

CONTENTS

Progressive vs. Regressive Testing
Objectives of regression testing
When is regression testing done?
Regression testing types
Regression testing techniques
Debugging: debugging process, techniques
Correcting bugs
INTRODUCTION

When a new module is added as part of integration
testing, the software changes: new data-flow paths are
established, new I/O may occur, and so on.
These changes, caused by the addition of new modules,
may affect the functioning of the earlier integrated
modules.
Similarly, whenever a bug appears in a working system,
fixing it requires changes to the software, and the new
modifications may affect other parts of the software
too.
It therefore becomes important to test the whole software
again with all the test cases, so that a new modification
does not affect other parts of the software.
PROGRESSIVE VS. REGRESSIVE TESTING

From verification to validation, the testing process
progresses towards the release of the product.
A system under test (SUT) is said to regress if
1. A modified component fails, or
2. A new component, when used with unchanged
components, causes failures in the unchanged
components by generating side effects.
PROGRESSIVE VS. REGRESSIVE TESTING

Baseline version: The version of a component
(system) that has passed a test suite.
Delta version: A changed version that has not yet
passed a regression test.
The purpose of regression testing is to ensure that
bug fixes and new functionality introduced in a
new version of the software do not adversely affect
the correct functionality inherited from the previous
version.
Definition of Regression Testing

The IEEE software glossary defines regression testing
as follows:
Regression testing is the selective retesting of a
system or component to verify that modifications
have not caused unintended effects and that the
system or component still complies with its specified
requirements.
OBJECTIVES OF REGRESSION TESTING

It checks that the bug has been addressed: The
first objective is to check whether the bug fix has
worked. Run the same test that was executed when
the problem was first found. If the program fails this
test, the bug has not been fixed correctly.
It finds other related bugs: It is possible that the
developer has fixed only the symptoms of the
reported bug without fixing the underlying bug.
OBJECTIVES OF REGRESSION TESTING

Therefore, regression tests are necessary to validate
that the system does not have any related bugs.
It checks the effect on other parts of the
program: Bug fixing may have unwanted
consequences on other parts of a program.
Therefore, regression tests are necessary to check the
influence of changes in one part on other parts of the
program.
WHEN IS REGRESSION TESTING DONE?

Software Maintenance
Corrective maintenance
Changes made to correct a system after a failure has
been observed (usually after general release).
Adaptive maintenance
Changes made to achieve continuing compatibility
with the target environment or other systems.
Perfective maintenance
Changes designed to improve or add capabilities.
WHEN IS REGRESSION TESTING DONE?

Rapid Iterative Development
The extreme programming approach requires that a
test be developed for each class and that this test be
re-run every time the class changes.
First Step of Integration
Re-running test suites as new components are
added to successive test configurations builds the
regression suite incrementally and reveals regression
bugs.
REGRESSION TESTING TYPES

Bug-fix regression
This testing is performed after a bug has been
reported and fixed. Its goal is to repeat the test cases
that exposed the problem in the first place.
Side-effect regression/stability regression
It involves retesting a substantial part of the product.
The goal is to prove that the change has no
detrimental effect on something that was working
correctly before. It tests the overall integrity of the
program.
REGRESSION TESTING TECHNIQUES

There are three different techniques for regression
testing.
Regression test selection technique
This technique attempts to reduce the time required to
retest a modified program by selecting some subset of
the existing test suite.
Test case prioritization technique
It attempts to reorder a regression test suite so that
tests with the highest priority, according to some
established criterion, are executed earlier in the
regression testing process than those with lower
priority.
REGRESSION TESTING TECHNIQUES

There are two types of prioritization:
(a) General test case prioritization
For a given program P and test suite T, we prioritize
the test cases in T that will be useful over
successive modified versions of P, without any
knowledge of the modified versions.
(b) Version-specific test case prioritization
We prioritize the test cases in T, when P is modified
to P′, with the knowledge of the changes made in P.
REGRESSION TESTING TECHNIQUES

Test suite reduction technique
It reduces testing costs by permanently eliminating
redundant test cases from test suites, in terms of the
code or functionality exercised.
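A common way to carry out this reduction is the greedy set-cover heuristic: repeatedly keep the test that covers the most still-uncovered requirements. A minimal sketch in Python, with an invented suite and invented requirement names:

```python
# Toy suite: each test case covers a set of requirements (names invented).
suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2"},
    "t3": {"r3"},
    "t4": {"r1", "r3"},
}

# Greedy reduction: keep the test covering the most still-uncovered
# requirements, until every requirement is covered.
uncovered = set().union(*suite.values())
reduced = []
while uncovered:
    best = max(suite, key=lambda t: len(suite[t] & uncovered))
    reduced.append(best)
    uncovered -= suite[best]

print(sorted(reduced))  # a 2-test subset that still covers r1, r2, r3
```

Here t2 is redundant (t1 already covers r2) and either t3 or t4 suffices for r3, so half the suite can be dropped without losing coverage.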
SELECTIVE RETEST TECHNIQUE

Selective retest techniques attempt to reduce the
cost of testing by identifying the portions of P′ that
must be exercised by the regression test suite.
Selective retesting is different from a retest-all
approach, which always executes every test in an
existing regression test suite.
It is the process of selecting a subset of the
regression test suite that tests the changes.
SELECTIVE RETEST TECHNIQUE

Following are the characteristic features of the selective
retest technique:
1. It minimizes the resources required to regression test a
new version.
2. This is achieved by minimizing the number of test cases
applied to the new version.
3. It is needed because a regression test suite grows with
each version, resulting in broken, uncontrollable, and
redundant test cases.
4. It uses information about the changes to select test
cases.
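As a minimal sketch of point 4 (all test and function names here are invented), selection can be driven by a coverage mapping from each test to the parts of P it exercises; a test is re-run only if it touches something that changed in P′:

```python
# Coverage mapping: which functions of P each regression test exercises
# (hypothetical names, for illustration only).
coverage = {
    "t1": {"login", "audit"},
    "t2": {"report"},
    "t3": {"login", "report"},
}

changed = {"login"}  # functions modified when P became P'

# Select only the tests whose coverage intersects the changed set.
selected = sorted(t for t, funcs in coverage.items() if funcs & changed)
print(selected)  # t2 never executes changed code, so it is skipped
```

Real selection tools compute the coverage mapping automatically by instrumenting P, but the selection step itself is this simple intersection.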
SELECTIVE RETEST TECHNIQUE

Strategy for Test Case Selection
A decision procedure for selecting the test cases is
provided by a test criterion.
SELECTIVE RETEST TECHNIQUE

Selection Criteria Based on Code
The motivation for code-based testing is that
potential failures can only be detected if the parts of
code that can cause faults are executed.
The following tests are based on these criteria.
Modification-revealing test cases
A test case t is modification-revealing for P
and P′ if and only if it causes the outputs of P and P′
to differ.
SELECTIVE RETEST TECHNIQUE

Fault-revealing test cases
A test case t detects a fault in P′ if it causes P′ to fail.
Hence t is fault-revealing for P′.
There is no effective procedure to find the tests in T
that are fault-revealing for P′.
Under these conditions, such a technique omits no
tests in T that can reveal faults in P′.
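The modification-revealing definition can be illustrated directly: run each test input against both versions and keep the inputs on which the outputs differ. P and P′ below are toy stand-ins, not real programs:

```python
def P(x):        # baseline version
    return abs(x) * 2

def P_prime(x):  # delta version: a (deliberately buggy) modification
    return x * 2

tests = [3, 0, -1]

# t is modification-revealing iff P and P' disagree on its output.
revealing = [t for t in tests if P(t) != P_prime(t)]
print(revealing)  # only the negative input exposes the change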
REGRESSION TEST PRIORITIZATION

The prioritization methods order the execution of the
regression test suite with the intention that faults are
detected earlier.
In other words, regression test prioritization attempts
to reorder a regression test suite so that tests
with the highest priority, according to some established
criterion, are executed earlier in the regression testing
process than those with lower priority.
By prioritizing the execution of a regression test suite,
testers can reveal important defects in a software system
earlier in the regression testing process.
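One simple established criterion is total coverage: tests that exercise more of the program run first, in the hope of hitting a fault sooner. A sketch with invented test and function names:

```python
# How much of the program each test covers (hypothetical data).
coverage = {
    "t1": {"f1"},
    "t2": {"f1", "f2", "f3"},
    "t3": {"f2", "f4"},
}

# Total-coverage prioritization: most-covering tests first
# (sorted() is stable, so ties keep the original suite order).
order = sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)
print(order)  # t2 (3 functions) before t3 (2) before t1 (1)
```

Other criteria plug into the same sort key: additional coverage, historical fault-detection rate, or cost per test.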
Debugging

Debugging is not testing, and testing is not debugging.
The testing phase in the SDLC aims to find more and
more bugs.
Debugging is the process of identifying the
symptoms of failures, tracing the bug, locating the
errors that caused the bug, and correcting these errors.
It consists of two steps:
1. Determining the nature of the bug and the location of
the suspected error within the program.
2. Fixing or repairing the error.
DEBUGGING PROCESS

Check the results of the output produced by executing
the test cases prepared in the testing process. If the
actual output matches the expected output, the
results are successful. Otherwise, there is a failure in
the output, which needs to be analysed.
Debugging is performed to analyse the failure:
we identify the cause of the problem and correct it.
It may be possible that the symptoms associated with
the present failure are not sufficient to find the bug.
Therefore, some additional testing is required so that
we may get more clues to analyse the causes of failures.
DEBUGGING PROCESS

If the symptoms are sufficient to provide clues about the
bug, then the cause of failure is identified. The bug is
traced to find the actual location of the error.
Once we find the actual location of the error, the bug
is removed with corrections.
Regression testing is performed after the bug has been
removed with corrections in the software. Thus, to
validate the corrections, regression testing is
necessary after every modification.
DEBUGGING TECHNIQUES

DEBUGGING WITH MEMORY DUMP
In the memory-dump technique, a
printout of all registers and relevant memory
locations is obtained and studied.
The relevant data of the program is observed
through these memory locations and registers to find
any bug in the program.
DEBUGGING TECHNIQUES

DEBUGGING WITH MEMORY DUMP
Drawbacks of this method:
1. It is difficult to establish the correspondence
between storage locations and the variables in one's
source program.
2. A massive amount of data is produced, most of which
is irrelevant.
3. It is limited to the static state of the program, as it
shows the state of the program at only one instant of
time.
DEBUGGING TECHNIQUES

DEBUGGING WITH WATCH POINTS
At a particular point of execution in the program, the
value of variables or other actions can be verified.
These particular points of execution are known as
watch points.
Debugging with watch points can be implemented
with the following methods:
Output statements
Breakpoint execution
Single stepping
DEBUGGING TECHNIQUES

DEBUGGING WITH WATCH POINTS
1. Output statements
In this method, output statements can be used to
check the state of a condition or a variable at some
watch point in the program.
Therefore, output statements are inserted at various
watch points; the program is compiled and executed
with these output statements.
Execution of the output statements may give some clues
to find the bug.
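A minimal sketch of the idea (the function and data are invented): a print statement placed at a watch point inside the loop exposes the running state at each iteration.

```python
def average(values):
    total = 0
    for v in values:
        total += v
        # watch point: print the state after each accumulation
        print(f"DEBUG v={v} total={total}")
    return total / len(values)

result = average([2, 4, 6])
print(result)
```

Watching `total` grow step by step makes an off-by-one or wrong-accumulation bug visible immediately, at the cost of editing the source, which is exactly the drawback discussed next.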
DEBUGGING TECHNIQUES

Output statements
This method has the following drawbacks:
(i) It may require changes in the code. These changes
may mask an error or introduce new errors into the
program.
(ii) After analysing the bug, we may forget to remove
the added statements, which may cause other
failures.
DEBUGGING TECHNIQUES

2. Breakpoint execution
A breakpoint is actually a watch point inserted at
various places in the program. But these insertions
are not placed in the actual user program and
therefore need not be removed manually like output
statements.
The program is executed up to the inserted breakpoint.
At that point, you can examine whatever is desired.
Afterwards, the program resumes and executes
up to the next breakpoint.
DEBUGGING TECHNIQUES

Breakpoints can be categorized as follows:
(i) Unconditional breakpoint: It is a simple breakpoint
without any condition to be evaluated. It is simply
inserted at a watch point, and reaching it stops the
execution of the program.
(ii) Conditional breakpoint: On the activation of this
breakpoint, an expression is evaluated for its
Boolean value. If true, the breakpoint causes a
stop; otherwise, execution continues.
DEBUGGING TECHNIQUES

(iii) Temporary breakpoint: This breakpoint is used
only once in the program. When it is set, the
program starts running, and once it stops the
execution, the temporary breakpoint is removed.
(iv) Internal breakpoint: These are invisible to the
user but are key to the debugger's correct handling
of its algorithms. They are breakpoints set by the
debugger itself for its own purposes.
DEBUGGING TECHNIQUES

3. Single stepping
After every instruction execution, the user can
watch the condition or status of variables.
Single stepping is implemented with the help of
internal breakpoints.
Step-into
Execution proceeds into any function called in the
current source statement and stops at the first
executable source line in that function.
DEBUGGING TECHNIQUES

Step-over
It is also called 'skip' rather than 'step'.
It treats a call to a function as an atomic operation
and proceeds to the textually succeeding source line
in the current scope.
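Debuggers implement single stepping through the interpreter's or processor's trace hook; Python exposes the same mechanism as sys.settrace. The sketch below records a watched variable after every executed line of a toy function (the function and variable are invented for illustration):

```python
import sys

def traced_run(func, watch_var):
    """Single-step through func, recording watch_var at each 'line' event."""
    history = []

    def tracer(frame, event, arg):
        # 'line' fires before each source line of the traced frame executes
        if event == "line" and frame.f_code is func.__code__:
            if watch_var in frame.f_locals:
                history.append((frame.f_lineno, frame.f_locals[watch_var]))
        return tracer  # keep tracing line-by-line

    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)  # always detach the trace hook
    return history

def buggy():
    total = 0
    for i in range(3):
        total += i * i
    return total

for lineno, value in traced_run(buggy, "total"):
    print(lineno, value)
```

Watching `total` change line by line (0, then 1, then 5) is exactly the single-stepping view a debugger presents, built on the same internal-breakpoint machinery described above.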
DEBUGGING TECHNIQUES

Debugging with backtracking
(a) Observe the symptom of the failure at the output
side and reach the site where the symptom can be
uncovered.
(b) Once you have reached the site where the symptom
has been uncovered, trace the source code
backwards, moving to the highest level of
abstraction of the design. The bug will be found on
this path.
DEBUGGING TECHNIQUES

(c) Slowly isolate the module wherein the bug resides,
through logical backtracking using the data flow
diagrams (DFDs) of the modules.
(d) Logical backtracking in the isolated module will
lead to the actual bug, and the error can thus be
removed.
Backtracking requires that the person debugging
has knowledge of the design of the system, so that
he can understand the logical flow of the program
through the DFDs.
CORRECTING THE BUGS

The second phase of the debugging process is to correct
the error once it has been uncovered. But this is not as
easy as it seems.
The design of the software should not be affected by
correcting the bug or by the new modifications. Before
correcting the errors, we should concentrate on the
following points:
(a) Evaluate the coupling of the logic and data structures
where corrections are to be made. Correcting a highly
coupled module can introduce many other bugs; that
is why a loosely coupled module is easier to debug.
CORRECTING THE BUGS

(b) After recognizing the influence of the corrections on
other modules or parts, plan the regression test cases
to perform regression testing.
(c) Perform regression testing with every correction in
the software, to ensure that the corrections have not
introduced new bugs in other parts of the software.
