
Software Testing Methodologies

Unit 1

Purpose of Testing.

What is testing? (just for intro)


There could be various opinions & definitions; we follow the text & references for testing and methodology-related concepts.

Testing is verification against given specifications. For software, it is verification of the functionality of a software product, by executing the software, for conformance to the given specifications.

Bug / Defect / Fault: a deviation from the expected functionality. However, it is not always obvious to determine that an observation is a bug.

Purpose of Testing
1. It is done to catch bugs.

Bugs arise due to imperfect communication among the members of the development team regarding specifications, design and low-level functionality. Statistics show that around 3 bugs per 100 programming statements exist.

Testing also tries to break the software and drive it to its limits. (Note: this should not be misunderstood as destruction for its own sake.)

2. Productivity-related reasons

If insufficient effort (cost) is spent on QA (which includes testing), the rejection ratio will be high; rework & recycling will be high, and hence so will the net cost. (Usually the testing & QA costs are about 2% for a consumer product and about 80% for critical software such as spaceship/aircraft/nuclear/defense/life-saving medical software.)

The biggest part of software cost is the cost of bugs and the corresponding rework. Quality and productivity are almost indistinguishable for software.

3. Goals for testing:

Testing, as part of QA, should focus on bug prevention. The art of test design is one of the best bug preventers known. Test design and tests should provide a clear diagnosis so that bugs can be easily corrected. If a bug is prevented, the corresponding rework is saved. (Rework includes bug reporting, debugging, corrections, retesting the corrections, re-distributions, re-installations etc.)
Test-design thinking (writing the test specification from the requirement specification first, and only then writing the code) can discover and eliminate bugs at every stage of the SDLC. To the extent that testing fails to achieve its primary goal (bug prevention), testing must reach its secondary goal: bug discovery.

4. Five phases in a tester's thinking - relating to the purpose of testing

Phase 0: sees no difference between debugging & testing. Today, phase 0 thinking is a barrier to good testing and quality software.

Phase 1: says testing is done to show that the software works. But a single failed test shows that the software does not work, in spite of many tests passing; the phase 1 objective is not totally achievable.

Phase 2: says testing is done to show that the software does not work. One failed test satisfies the phase 2 goal, and tests can be redesigned to test the corrected software again. However, we do not know when to stop testing.

Phase 3: says testing is for risk reduction. We accept the principles of statistical quality control: when a test passes or fails, our perception of software quality changes and, more importantly, our perception of risk about the product reduces. The product is released when the risk is under a predetermined limit. (Statistics are used here.)

Phase 4: says testing is a state of mind regarding what testing can and cannot do, and what makes software testable. Applying that knowledge reduces the amount of testing effort; also, testable code tends to have fewer bugs than code that is hard to test.

Cumulative goal of all phases:

The above goals are cumulative: one leads to the next and they are complementary. Phase 2 tests will not show that the software works; use statistical methods on the test design to achieve good testing (& a good product) at an acceptable risk. The most testable software must be debugged, must work and must be hard to break.

5. Testing & Inspection (inspection is also called static testing)

The purposes of testing & inspection are different: testing is to catch bugs and inspection is to prevent them, and each targets different kinds of bugs. To prevent and catch most bugs, we must review, inspect, read, do walkthroughs on the code, and then test the code.

Test Design
After testing & corrections, redesign the tests and test again with the redesigned tests.

Bug Prevention

To prevent bugs we need to employ a mix of the following approaches, depending on factors such as culture, development environment, application, project size, history and programming language:

Inspection Methods: walkthroughs, formal inspections, code reading etc.

Design Style: adopting stylistic objectives such as testability, openness, clarity and so on.

Static Analysis: anything that can be done by formal analysis of the source code during, or in conjunction with, compilation - strong syntax checks, data-flow anomaly detection & other controls.

Languages: languages continue to evolve, and preventing bugs is the main driving force for that evolution. However, programmers may find newer bugs.

Design methodologies & development environment:
a. The design methodology (the development process used and the environment in which it is embedded) can prevent bugs.
b. Configuration control and automatic distribution of change information can prevent bugs that result from a programmer's unawareness that there were changes.

Two laws regarding the limitations w.r.t. subtler bugs

a) Pesticide Paradox: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. As we progress through enhancements & bug corrections we may say the software gets better; however, it may not be quite that.

b) Complexity Barrier: software complexity grows to the limits of our ability to manage that complexity. Simpler bugs are corrected and the software is enhanced with new features, increasing the complexity; we then face subtler bugs in order to retain the same reliability. The user pushes the builder to add more and more features, ever nearer to the complexity barrier that can be managed. Only the strength of the techniques in the whole development environment can arm the builder against the subtler bugs.
Dichotomies

A dichotomy is the division of important terms related to testing into two mutually exclusive or contradictory groups or entities. There is, for example, a dichotomy between theory and practice.

Let us look at six of them:

1. Testing vs Debugging
2. Functional vs Structural Testing
3. Designer vs Tester
4. Modularity (Design) vs Efficiency
5. Programming in SMALL vs Programming in BIG
6. Buyer vs Builder

1. Testing vs Debugging

Testing is to find bugs. Debugging is to find the cause of, or the misconception leading to, the bug.

Their roles are often confused to be the same, but there are differences in the goals, methods and psychology applied to each:

1. Testing starts with known conditions, uses predefined procedures and has predictable outcomes. Debugging starts with possibly unknown initial conditions, and its end cannot be predicted.
2. Testing is planned, designed and scheduled. Debugging procedures & duration are not so constrained.
3. Testing is a demonstration of an error or of apparent correctness. Debugging is a deductive process.
4. Testing proves the programmer's success or failure. Debugging is the programmer's vindication.
5. Testing should be predictable, dull, constrained, rigid & inhuman. Debugging involves intuitive leaps, conjectures, experimentation & freedom.
6. Much of testing can be done without design knowledge. Debugging is impossible without detailed design knowledge.
7. Testing can be done by an outsider to the development team. Debugging must be done by an insider (the development team).
8. A theory establishes what testing can or cannot do. For debugging there are only rudimentary results (how much can be done, and the time, effort etc. required, depend on human ability).
9. Test execution and test design can be automated. Automated debugging is still a dream.

2. Functional vs Structural Testing

Functional Testing: treats a program as a black box. Outputs are verified for conformance to specifications from the user's point of view.

Structural Testing: looks at the implementation details: programming style, control method, source language, database & coding details.

Interleaving of functional & structural testing:

A good program is built in layers from the outside. The outside layer is pure system function from the user's point of view. Each layer is a structure (an implementation) whose outer layer is its function (its specification); an inside layer provides the implementation for the specs of the layer outside it.

Two examples (figure): an O.S. whose outer layer serves the User, Devices, Application1 and Application2; and malloc(), whose implementation uses link_block().

For a given model of a program, structural tests may be done first and then the functional tests, or vice-versa; the choice depends on which seems the natural one. Both are useful, both have limitations, and both target different kinds of bugs. Functional tests can in principle detect all bugs, but would take an infinite amount of time. Structural tests are inherently finite, but cannot detect all bugs. The art of testing lies in how much of the allocation goes to structural testing versus functional testing.

3. Designer vs Tester

They are completely separated in black-box testing; unit testing may be done by either. The artistry of testing is to balance knowledge of the design and its biases against ignorance & inefficiency. Tests are more efficient if the designer, programmer & tester are independent in all of unit, unit integration, component, component integration, system, and formal system feature testing. The extent to which the test designer & programmer are separated or linked depends on the testing level and the context.

1. Programmer/Designer: tests designed by designers are more oriented towards structural testing and are limited to its limitations. Tester: with knowledge of the internal design, the tester can eliminate useless tests and optimize for an efficient test design.
2. Programmer/Designer: likely to be biased. Tester: tests designed by independent testers are bias-free.
3. Programmer/Designer: tries to do the job in the simplest & cleanest way, trying to reduce complexity. Tester: needs to be suspicious, uncompromising, hostile and obsessed with destroying the program.

4. Modularity (Design) vs Efficiency

Both the system design and the test design can be modular. A module implies a size, an internal structure and an interface; in other words, a module (a well-defined, discrete component of a system) has internal complexity, interface complexity and a size.
1. Modularity: the smaller the component, the easier it is to understand. Efficiency: more components mean more interfaces, which increase complexity & reduce efficiency (more bugs likely).
2. Modularity: small components/modules can be retested independently with less rework (to check whether a bug is fixed). Efficiency: when a bug occurs, efficiency at the module level is higher with small components.
3. Modularity: microscopic test cases need individual setups of data, systems & software, and hence can themselves have bugs. Efficiency: more test cases imply a higher possibility of bugs in the test cases, hence more rework and less efficiency with microscopic test cases.
4. Modularity: it is easier to design large modules & small interfaces at a higher level. Efficiency: less complex & more efficient, but the design may not be enough to understand and implement; it may have to be broken down to implementation level.

So:
Optimize the size, and balance internal & interface complexity, to increase efficiency.
Optimize the test design by setting the scopes of tests & groups of tests (modules) to minimize the cost of test design, debugging, execution & organizing, without compromising effectiveness.

5. Programming in SMALL vs Programming in BIG

This dichotomy concerns the impact on the development environment of the volume of customer requirements.

1. Small: done more efficiently by informal, intuitive means; the lack of formality is fine if the work is done by 1 or 2 persons for a small & intelligent user population. Big: a large number of programmers & a large number of components.
2. Small: done, e.g., for oneself, for one's office or for the institute. Big: program size implies non-linear effects (on complexity, bugs, effort, rework, quality).
3. Small: complete test coverage is easily done. Big: the acceptance level could be, e.g., 100% test coverage for unit tests and 80% for overall tests.
6. Buyer vs Builder

If the buyer and builder are the same organization, accountability in the software development process is clouded. So separate them, just enough and into groups, to make accountability clear; accountability increases the motivation for quality.

The roles of all parties in software development and usage:

Builder: designs for, and is accountable to, the Buyer.

Buyer: pays for the system and hopes to get profits from the services given to the User.

User: the ultimate beneficiary of the system. The User's interests are guarded by the Tester.

Tester: works towards the destruction of the software. The Tester tests the software in the interests of the User & the Operator.

Operator: lives with the mistakes of the Builder, the murky specs of the Buyer, the oversights of the Tester and the complaints of the User.
A MODEL FOR TESTING

We want to look at a model for testing in a software project, within a specific environment and with tests done at various levels. First we understand what a project is, and then we look at the roles of the testing models in a project.

PROJECT:

An archetypal system (product) lets us discuss testing without complications (even for a large project). Testing a one-shot routine and testing a very regularly used routine are different things. A model project in the real world consists of the following 8 components:

1 Application: an online real-time system (with remote terminals) providing timely responses to user requests (for services).

2 Staff: a manageable-sized programming staff with some specialists in systems design (staff with skills in the programming/development domain).

3 Acceptance test: the application is accepted after a formal acceptance test. Responsibility for it is at first the customer's and then the software design team's.

4 Personnel: from the personnel of the project team, the technical staff comprises a combination of experienced professionals, junior programmers (1-3 yrs) and some with no experience, with varying degrees of knowledge of the application.

5 Standards: programming, test and interface standards (documented and followed). A centralized standards database is developed & administered.

6 Objectives (of a project): the system is expected to operate profitably for more than 10 years after installation. Similar systems, with up to 75% of the code in common, may be implemented in future.

7 Source (for a new project): usually a combination of up to one-third new code, up to one-third from a previous reliable system, and one-third re-hosted from another language & O.S.

8 History: typically, some developers quit before their components are tested; there is excellent but poorly documented work; unexpected changes (major & minor) come in from the customer; important milestones slip but the delivery date is met; there are problems in integration, with some hardware, with redoing some component, etc.

Finally, the model project is a well-run & successful project with a combination of glory and catastrophe.

(Figure: A model for testing. The world - environment, program, nature & psychology - is mirrored by the model - environment model, program model, bug model. Tests built from the models produce outcomes that are either expected or unexpected.)

2. Roles of Models for Testing

1) Overview:
The testing process starts with a program embedded in an environment. Human susceptibility to error leads us to three models (environment, program, bug). We create tests out of these models & execute them. If the result is expected, it's okay; if unexpected, we revise the tests, the bug model & possibly the program.
2) Environment: includes all hardware & software (firmware, O.S., linkage editor, loader, compiler, utilities, libraries) required to make the program run. With established hardware & software, bugs usually do not result from the environment itself, but from our understanding of the environment.

3) Program:
1) Too complicated to understand in detail.
2) Deal with a simplified overall view.
3) Focus on the control structure ignoring processing, and on processing ignoring the control structure.
4) If bugs are not solved, modify the program model to include more facts, & if that fails, modify the program.

4) Bugs: (bug model)
1) Categorize bugs as initialization, call sequence, wrong variable etc.
2) An incorrect specification may lead us to mistake something else for a program bug.
3) There are 9 hypotheses regarding bugs:

a. Benign Bug Hypothesis:
The belief that bugs are tame & logical. Only weak bugs are logical & exposed by logical means; subtle bugs have no definable pattern.

b. Bug Locality Hypothesis:
The belief that a bug is localized to the component in which it is found. Subtle bugs affect not only that component but also those external to it.

c. Control Dominance Hypothesis:
The belief that most errors are in control structures; but data-flow & data-structure errors are common too. Subtle bugs are not detectable through the control structure alone (they come from violations of data-structure boundaries & of data-code separation).

d. Code/Data Separation Hypothesis:
The belief that bugs respect the separation of code & data in HOL programming. In real systems the distinction is blurred, and hence such bugs exist.

e. Lingua Salvator Est Hypothesis:
The belief that language syntax & semantics eliminate most bugs. Such features may not eliminate the subtle bugs.

f. Corrections Abide Hypothesis:
The belief that a corrected bug remains corrected. Subtle bugs may not; e.g. a correction in a data structure DS, made for a bug in the interface between modules A & B, could impact module C which also uses DS.

g. Silver Bullets Hypothesis:
The belief that some language, design method, representation, environment etc. grants immunity from bugs. Not from subtle bugs - remember the pesticide paradox.

h. Sadism Suffices Hypothesis:
The belief that a sadistic streak, low cunning & intuition (of independent testers) are sufficient to extirpate most bugs. Subtle & tough bugs may not be caught this way - these need methodology & techniques.

i. Angelic Testers Hypothesis:
The belief that testers are better at test design than programmers are at code design.

5) Tests:
1) Formal procedures.
2) Input preparation, outcome prediction and observation, and the documentation of the test, its execution & the observation of the outcome, are all subject to errors.
3) An unexpected test result may lead us to revise the test and the test model.

6) Testing & Levels:

There are 3 kinds of tests (with different objectives):

1) Unit & Component Testing
a. A unit is the smallest piece of software that can be compiled/assembled, linked, loaded & put under the control of a test harness/driver.
b. Unit testing verifies the unit against its functional specs & also the implementation against the design structure.
c. Problems revealed are unit bugs.
d. A component is an integrated aggregate of one or more units (up to an entire system).
e. Component testing verifies the component against its functional specs and the implemented structure against the design.
f. Problems revealed are component bugs. (A minimal test-harness sketch follows this list.)
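The following is a minimal sketch of a unit under a test harness/driver, not from the text. The unit under test, apply_discount(), is a hypothetical stand-in for any smallest separately testable piece of software; Python's unittest plays the role of the harness.

```python
import unittest

def apply_discount(price, percent):
    """Unit under test (hypothetical): returns price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    def test_typical_value(self):
        # Verify the unit against its functional spec.
        self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        # The spec says out-of-range percentages are errors.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()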

2) Integration Testing:
Integration is the process of aggregating components into larger components. Integration testing verifies the consistency of the interactions in the combination of components. Examples of integration bugs: improper call or return sequences, inconsistent data validation criteria & inconsistent handling of data objects.

Integration testing & testing integrated objects are different. (A sketch of an integration bug follows below.)

(Figure: units A & B are integrated into a component (A,B), which is further combined with C into a larger component D.)

Sequence of testing: unit/component tests for A and for B; integration tests for A & B; component testing for the (A,B) component.
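Here is a small sketch, not from the text, of one integration bug named above: inconsistent data validation criteria. Both function names and the age rule are invented; each unit passes its own unit tests, yet the combination violates the system-level spec.

```python
def parse_age(text):
    # Component A: accepts any non-negative integer age.
    age = int(text)
    if age < 0:
        raise ValueError("age must be non-negative")
    return age

def register_user(name, age):
    # Component B: silently assumes callers only send adults.
    return {"name": name, "group": "adult" if age >= 18 else "minor"}

# Each unit is correct against its own spec, but integrating them lets a
# 12-year-old register in a system whose spec may require adults only.
print(register_user("pat", parse_age("12")))
```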

3) System Testing
a. A system is one big component.
b. System testing concerns issues & behaviors that can only be tested at the level of the entire, or a major part of the, integrated system.
c. It includes testing for performance, security, accountability, configuration sensitivity, start-up & recovery.

-----------------------------------------------
Having understood a project and the testing model, let us finally see the role of the model of testing:

A model is used for the testing process until the system behavior is correct, or until the model is insufficient (for testing). Unexpected results may force a revision of the model. The art of testing consists of creating, selecting, exploring and revising models. The model should be able to express the program.

We will now look at:

1. Importance of bugs - statistical quantification of impact
2. Consequences of bugs - nightmares, when to stop testing
3. Taxonomy of bugs - along with some remedies

in order to be able to create an organization's own bug importance model for the sake of controlling the associated costs.

Importance of Bugs

The importance of a bug depends on its frequency, correction cost, installation cost & consequences.

1. Frequency

Statistics from different sources are in table 2.1 (Beizer). Note the bug types with higher frequency, in this order: control structures, data structures, features & functionality, coding, integration, requirements & others. Higher frequency means higher rework & other consequences. Frequency may not depend on the application in context or on the environment.

2. Correction Cost

The sum of detection & correction costs. It is high if a bug is detected late in the SDLC. It also depends on system size, the application and the environment.

3. Installation Cost

Depends on the number of installations, and may dominate all other costs, as we need to distribute bug fixes across all installations. It also depends on the application and environment.

4. Consequences (effects)

Measured by the mean size of the awards given to the victims of the bug. They depend on the application and environment.
A metric for the importance of bugs:

Importance($) = frequency * (correction_cost + installation_cost + consequential_cost)

Bug importance is more important than the raw frequency. An organization may need to create its own importance model, since the above costs depend on the application and the environment. (A small computational sketch follows.)
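The following is a minimal sketch of the metric above, not a prescribed tool. The frequencies and dollar figures are invented placeholders; a real model would use measured data such as that in table 2.1.

```python
def importance(frequency, correction, installation, consequential):
    # importance = frequency * (correction + installation + consequential cost)
    return frequency * (correction + installation + consequential)

# Hypothetical per-bug-type statistics: (frequency, correction $, install $, consequence $)
bug_types = {
    "control structure": (0.162, 400, 50, 1000),
    "data structure":    (0.135, 300, 50, 2500),
}
for name, stats in bug_types.items():
    print(name, "importance:", round(importance(*stats), 2))
```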

Hence we look at consequences and taxonomy in detail.

Consequences: (how bugs may affect users)

These range from mild to catastrophic on a 10-point scale:

Mild
An aesthetic bug, such as misspelled output or a misaligned print-out.

Moderate
Outputs are misleading or redundant, impacting performance.

Annoying
The system's behavior is dehumanizing, e.g. names are truncated/modified arbitrarily, bills for $0.00 are sent. Until the bugs are fixed, operators must use unnatural command sequences to get a proper response.

Disturbing
Legitimate transactions are refused; e.g. an ATM may refuse to honor a valid ATM or credit card.

Serious
Losing track of transactions & transaction events; hence accountability is lost.

Very serious
The system does another transaction instead of the requested one, e.g. credits another account, converts withdrawals to deposits.

Extreme
The above failures are frequent & arbitrary, not sporadic & unusual.

Intolerable
Long-term, unrecoverable corruption of the database (not easily discovered, and possibly leading to system shutdown).

Catastrophic
The system fails and shuts down.

Infectious
Corrupts other systems, even when it may not fail itself.

Assignment of severity

Assign flexible & relative, rather than absolute, values to the bug types. The number of bugs and their severity are factors in determining quality quantitatively. Organizations design & use quantitative quality metrics based on the above, with the parts weighted depending on environment, application, culture, correction cost, current SDLC phase & other factors.

Nightmares

Define the nightmares that could arise from bugs, for the context of the organization and the application. Quantified nightmares help calculate the importance of bugs, and that helps in deciding when to stop testing & release the product.

When to stop testing

1. List all nightmares in terms of the symptoms & the reactions of the user to their consequences.

2. Convert each consequence into a cost. There could be rework cost (but if the scope extends to the public, there could be the cost of lawsuits, lost business, nuclear reactor meltdowns).

3. Order these from the costliest to the cheapest, and discard those you can live with.

4. Based on experience, measured data, intuition and published statistics, postulate the kinds of bugs causing each symptom. This is called the bug design process. A bug type can cause multiple symptoms.

5. Order the causative bugs by decreasing probability (judged by intuition, experience, statistics etc.). Calculate the importance of a bug type as:

Importance of bug type j = Σ (over all nightmares k) C_jk × P_jk

where
C_jk = cost due to bug type j causing nightmare k
P_jk = probability of bug type j causing nightmare k
(Cost due to all bug types = Σ (over all j) Σ (over all k) C_jk × P_jk)

6. Rank the bug types in order of decreasing importance.

7. Design tests & a QA inspection process that are most effective against the most important bugs.

8. When a test passes, or when a correction is made for a failed test, some nightmares disappear. As testing progresses, revise the probabilities & the nightmares list as well as the test strategy.

9. Stop testing when the probability (importance & cost) proves to be inconsequential.

This procedure could be implemented formally in the SDLC. (A computational sketch follows.)
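Here is a small sketch of steps 5-9 above: rank bug types by importance(j) = Σ_k C_jk × P_jk and stop when the residual expected cost is inconsequential. All bug types, nightmares and numbers are invented for illustration.

```python
C = {  # C[j][k]: cost if bug type j causes nightmare k (hypothetical $)
    "interface": {"lost txn": 5000, "corrupt db": 80000},
    "logic":     {"lost txn": 5000, "corrupt db": 20000},
}
P = {  # P[j][k]: probability that bug type j causes nightmare k
    "interface": {"lost txn": 0.02, "corrupt db": 0.001},
    "logic":     {"lost txn": 0.01, "corrupt db": 0.0005},
}

def importance(j):
    # Step 5: importance of bug type j = sum over nightmares k of C_jk * P_jk
    return sum(C[j][k] * P[j][k] for k in C[j])

ranked = sorted(C, key=importance, reverse=True)   # step 6: rank bug types
total_risk = sum(importance(j) for j in C)         # cost over all bug types
print(ranked, total_risk)
# Steps 8-9: as tests pass (or corrections hold), revise P downward and
# re-rank; stop testing once total_risk falls below a predetermined limit.
```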

Important points to note:

Design a reasonable, finite number of tests with a high probability of removing the nightmares.
Test suites wear out: as programmers improve their programming style, QA improves. Hence, know and update test suites as required.

We have seen:

1. Importance of bugs - statistical quantification of impact
2. Consequences of bugs - causes, nightmares, when to stop testing

We will now see:

1. Taxonomy of bugs - along with some remedies

Reason: to be able to create an organization's own bug importance model for the sake of controlling the associated costs.

Reference for the taxonomy: IEEE 87B.

Why a taxonomy?
To study the consequences, nightmares, probability, importance, impact and the methods of prevention and correction. Adopt a known taxonomy and use it as the statistical framework on which your testing strategy is based.

There are 6 main categories, with sub-categories:

1) Requirements, Features, Functionality Bugs - 24.3% of bugs
2) Structural Bugs - 25.2%
3) Data Bugs - 22.3%
4) Coding Bugs - 9.0%
5) Interface, Integration and System Bugs - 10.7%
6) Testing & Test Design Bugs - 2.8%


1) Requirements, Features, Functionality Bugs

There are 3 types: requirements & specification bugs, feature bugs, & feature interaction bugs.

I. Requirements & Specs
Incompleteness, ambiguity or self-contradiction.
The analyst's assumptions are not known to the designer.
Something may be missed when specs change.
These bugs are expensive: they are introduced early in the SDLC and removed at the end of it.

II. Feature Bugs
Specification problems create feature bugs.
A wrong-feature bug has design implications.
A missing feature is easy to detect & correct.
Gratuitous enhancements can accumulate bugs if they increase complexity.
Removing features may foster bugs.

III. Feature Interaction Bugs
Arise from unpredictable interactions between feature groups or individual features. The earlier they are removed the better, as they are costly when detected at the end. Examples: call forwarding & call waiting; federal, state & local tax laws. There is no magic remedy: explicitly state & test the important combinations.

Remedies

Use high-level formal specification languages to eliminate errors arising from human-to-human communication. This is only short-term support & not a long-term solution.

Short-term support:
Specification languages formalize requirements & so make automatic test generation possible. This is cost-effective.

Long-term support:
Even with a great specification language, the problem is not eliminated but shifted to a higher level. Simple ambiguities & contradictions may be removed, leaving the tougher bugs.

Testing Techniques

Functional test techniques - transaction-flow testing, syntax testing, domain testing, logic testing, and state testing - can eliminate requirements & specification bugs.

2. Structural Bugs

We look at the 5 types, their causes and remedies:

I. Control & Sequence bugs
II. Logic bugs
III. Processing bugs
IV. Initialization bugs
V. Data-flow bugs & anomalies

I. Control & Sequence Bugs:

Paths left out, unreachable code, spaghetti code, pachinko code.
Improper nesting of loops, incorrect loop termination or loop-back, ill-conceived switches.
Missing process steps, duplicated or unnecessary processing, rampaging GOTOs.
Common with novice programmers and in old code (assembly language & COBOL).

Prevention and Control:
Theoretical treatment, and unit, structural, path & functional testing. (A small sketch of a loop-termination bug follows.)
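The following is an illustrative sketch, not from the text, of one control bug named above: incorrect loop termination. The function name is invented; the off-by-one bound silently skips the final element.

```python
def sum_all(values):
    total = 0
    for i in range(len(values) - 1):   # BUG: should be range(len(values))
        total += values[i]
    return total

assert sum_all([1, 2, 3]) == 3         # last element silently dropped
# A functional test against the spec (expected sum == 6) exposes this at once.
```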

II. Logic Bugs

Misunderstanding of the semantics of the control structures & logic operators.
Improper layout of cases, including impossible cases & ignoring necessary cases.
Using a look-alike operator, improper simplification, confusing exclusive-OR with inclusive-OR.
Deeply nested conditional statements & using many logical operations in one statement.

Prevention and Control:
Logic testing, careful checks, functional testing. (A sketch of an operator mix-up follows.)
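Here is a small sketch, not from the text, of the look-alike operator bug above: exclusive-OR used where the spec meant inclusive-OR. The flags and function name are invented.

```python
def needs_review(flagged_by_user, flagged_by_scanner):
    # Intended spec: review if flagged by either source (inclusive OR).
    return flagged_by_user ^ flagged_by_scanner   # BUG: XOR, not OR

print(needs_review(True, True))    # False: both flags set, review skipped
print(needs_review(True, False))   # True: masks the bug in casual testing
```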

III. Processing Bugs

Arithmetic, algebraic & mathematical function evaluation, algorithm selection & general processing; data-type conversion, ignoring overflow, improper use of relational operators.

Prevention:
These are caught in unit testing & have only localized effects. Domain testing methods apply. (A sketch follows.)
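The following is an illustrative sketch, not from the text, of two processing bugs named above: an improper relational operator, and ignored overflow (simulated as 16-bit arithmetic, since native Python ints do not overflow). Names and limits are invented.

```python
def in_band(reading, low, high):
    return low < reading < high        # BUG if the spec meant inclusive: <=

def add16(a, b):
    # Models a 16-bit signed add that ignores overflow, as in C or assembly.
    s = (a + b) & 0xFFFF
    return s - 0x10000 if s >= 0x8000 else s

print(in_band(100, 100, 200))          # False, though 100 may be in-spec
print(add16(30000, 10000))             # -25536: the overflow went unchecked
```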

IV. Initialization Bugs

Forgetting to initialize workspace, registers, or data areas.
A wrong initial value for a loop-control parameter.
Accepting a parameter without a validation check.
Initializing to the wrong data type or format.
Very common.

Remedies (prevention & correction):
Programming tools; explicit declaration & type checking in the source language; preprocessors. Data-flow test methods help in the design of tests and in debugging.
V. Data-flow Bugs & Anomalies

Running into an un-initialized variable.
Not storing modified data.
Re-initialization without an intermediate use.
Detected mainly by execution (testing).

Remedies (prevention & correction):
Data-flow testing methods & matrix-based testing methods. (A sketch of the three anomalies follows.)
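Here is a deliberately buggy sketch, not from the text, packing all three anomalies above into one invented function. Data-flow analyzers and data-flow testing are designed to flag exactly these patterns.

```python
def report(items):
    if not items:
        print(total)        # use of an un-initialized variable:
                            # raises UnboundLocalError at run time
    subtotal = 0
    for x in items:
        subtotal = x * 2    # defined and immediately redefined each pass:
                            # the modified value is never stored or used
    subtotal = 0            # re-initialization without an intermediate use
    return subtotal

print(report([1, 2, 3]))    # 0, not 12: the loop's work was thrown away
```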

3. Data Bugs

These depend on the types of data or on the representation of data. There are 4 sub-categories:

I. Generic Data Bugs
II. Dynamic Data vs Static Data
III. Information, Parameter, and Control Bugs
IV. Contents, Structure & Attributes related Bugs

I. Generic Data Bugs

Due to data-object specifications, formats, the number of objects & their initial values.
As common as bugs in code, especially as code migrates to data.
A data bug is the equivalent of an operative statement bug & is harder to find.
Arise in generalized, reusable components that are customized from large parametric data for a specific installation.

Remedies (prevention & correction):
Using control tables in lieu of code lets software handle many transaction types with fewer data bugs. But control tables amount to a hidden programming language in the database. Caution: there's no compiler for that hidden control language in the data tables.
II. Dynamic Data vs Static Data

Dynamic data bugs:
Transitory and difficult to catch.
Due to an error in a shared storage object's initialization.
Due to unclean/leftover garbage in a shared resource.
Examples: generic & shared variables; shared data structures.
Prevention: data validation, unit testing.

Static data bugs:
Fixed in form & content.
Appear in the source code or database, directly or indirectly.
Software that produces object code can itself create static data-table bugs.
Examples: telecom system software - a generic large program plus a site-adapter program that sets parameter values, builds data declarations etc.; a postprocessor that installs software packages, with data initialized at run time and configuration handled by tables.
Prevention: compile-time processing; source-language features.

III. Information, Parameter, and Control Bugs

Static or dynamic data can serve in any of the three forms; it is a matter of perspective. What is information in one place can be a data parameter or control data elsewhere in a program. Examples: a name, a hash code, a function using these - the same variable in different contexts.

Information: dynamic, local to a single transaction or task.
Parameter: data passed to a call.
Control: data used in a control structure for a decision.

Bugs
Usually simple bugs, easy to catch. A typical case: a subroutine (with good data-validation code) is modified, and the data-validation code is not updated to match.

Preventive measures (prevention & correction):
Proper data-validation code.

IV. Contents, Structure & Attributes related Bugs

Contents: a pure bit pattern; bugs are due to misinterpretation or corruption of it.
Structure: the size, shape & alignment of the data object in memory. A structure may have substructures.
Attributes: the semantics associated with the contents (e.g. integer, string, subroutine).

Bugs
Severity & subtlety increase from contents to attributes, as they get less formal.
Structural bugs may be due to a wrong declaration, or arise when the same contents are interpreted differently via multiple structures (different mappings) - see the sketch below.
Attribute bugs are due to misinterpretation of the data type, probably at an interface.

Preventive measures (prevention & correction):
Good source-language documentation & coding style (incl. a data dictionary).
Data structures should be globally administered, as local data migrates to global.
Strongly typed languages prevent mixed manipulation of data.
In an assembly-language program, use field-access macros rather than accessing any field directly.
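The following is an illustrative sketch, not from the text, of a structure/attribute bug: the same contents (a pure bit pattern) read through two different structures. The record layout is invented.

```python
import struct

record = struct.pack("<hh", 1, 2)       # writer: two 16-bit integers
wrong = struct.unpack("<i", record)     # reader: one 32-bit integer
print(wrong)                            # (131073,): same bits, wrong meaning
```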

4. Coding Bugs

Coding errors create other kinds of bugs. Syntax errors are removed when the compiler checks the syntax. Coding errors are typographical, stem from misunderstanding of operators or statements, or can be just arbitrary.

Documentation Bugs
Erroneous comments can lead to incorrect maintenance. Testing techniques cannot eliminate documentation bugs.
Solution: inspections, QA, automated data dictionaries & specification systems.

5. Interface, Integration and System Bugs

There are 9 types:

i. External interface bugs
ii. Internal interface bugs
iii. Hardware architecture bugs
iv. Operating system bugs
v. Software architecture bugs
vi. Control & sequence bugs
vii. Resource management bugs
viii. Integration bugs
ix. System bugs

(Figure: a system - components, application software, drivers, O.S., hardware - and its interfaces with the User.)

5. Interface, Integration and System Bugs, contd.

1) External Interfaces

The means to communicate with the world: drivers, sensors, input terminals, communication lines. The primary design criterion should be robustness.

Bugs: invalid timing or sequence assumptions related to external signals, misunderstanding of external formats, and non-robust coding. Domain testing, syntax testing & state testing are suited to testing external interfaces.

2) Internal Interfaces

Must adapt to the external interface, and have bugs similar to those of external interfaces. Bugs come from improper protocol design, input-output formats, protection against corrupted data, subroutine call sequences, and call parameters.

Remedies (prevention & correction):
Test methods: domain testing & syntax testing.
Good design & standards: a good trade-off between the number of internal interfaces & the complexity of the interfaces.
Good integration testing tests the internal interfaces as thoroughly as those with the external world.

3) Hardware Architecture Bugs:

A software programmer may not see the hardware layer/architecture. Software bugs originating from hardware architecture are due to misunderstanding of how the hardware works.

Bugs are due to errors in/with:
The paging mechanism, address generation.
I/O device instructions, device status codes, device protocols.
Expecting a device to respond too quickly, or waiting too long for a response; assuming a device is initialized; interrupt handling; I/O device addresses.
Hardware simultaneity assumptions, ignored hardware race conditions, device data format errors etc.

Remedies (prevention & correction):
Good software programming & testing; centralization of the hardware-interface software. Nowadays hardware has special test modes & test instructions to test the hardware function. An elaborate hardware simulator may also be used.

4) Operating System Bugs:

Due to:
Misunderstanding of the hardware architecture & interface by the O.S.
The O.S. not handling all hardware issues.
Bugs in the O.S. itself, where some corrections may leave quirks.
Bugs & limitations in the O.S. that may be buried somewhere in the documentation.

Remedies (prevention & correction):
Same as those for hardware bugs.
Use O.S. interface specialists.
Use explicit interface modules or macros for all O.S. calls.
These may localize bugs and make testing simpler.

5) Software Architecture Bugs: (also called interactive bugs)

These bugs pass through unit and integration tests without detection, and show up when the system is stressed, since they depend on the load. They are the most difficult to find and correct.

Due to:
Assuming there are no interrupts, or failing to block or unblock an interrupt.
Assuming code is re-entrant, or not re-entrant.
Bypassing data interlocks, or failing to open an interlock.
Assuming a called routine is memory-resident, or not.
Assuming the registers and memory are initialized, or that their content did not change.
Local setting of global parameters & global setting of local parameters.

Remedies:
Good design of the software architecture.

Test Techniques
All test techniques are useful in detecting these bugs, stress tests in particular. (A small re-entrancy sketch follows.)
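Here is an illustrative sketch, not from the text, of one architecture bug named above: code wrongly assumed to be re-entrant because it leans on a shared global. Threads stand in for interrupt-driven re-entry; all names are invented.

```python
import threading

counter = 0                     # shared, unprotected global state

def record_events(n):
    global counter
    for _ in range(n):
        counter += 1            # read-modify-write race under concurrency

threads = [threading.Thread(target=record_events, args=(100_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # often < 400000: passes unit tests, fails under load/stress
```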

6) Control & Sequence Bugs:

Due to:
Ignored timing.
Assuming events occur in a specified sequence.
Starting a process before its prerequisites are met.
Waiting for an impossible combination of prerequisites.
Not recognizing when prerequisites are met.
Specifying the wrong priority, program state or processing level.
Missing, wrong, redundant, or superfluous process steps.

Remedies:
Good design.
Highly structured sequence control is useful.
Specialized internal sequence-control mechanisms, such as an internal job control language, are useful.
Storing sequence steps & prerequisites in a table, with interpretive processing by a control processor or dispatcher, makes bugs easier to test for & to correct.

Test Techniques
Path testing, as applied to transaction flow graphs, is effective.

7) Resource Management Problems:

Resources - internal: memory buffers, queue blocks etc.; external: discs etc.

Due to:
The wrong resource being used (when several resources have similar structures, or different kinds of resources share the same pool).
A resource already in use, or deadlock.
A resource not returned to the right pool; failure to return a resource; use of a resource forbidden to the caller.

Remedies:
Design: keep the resource structure simple, with the fewest kinds of resources, the fewest pools, and no private resource management. Designing a complicated resource structure that handles all kinds of transactions just to save memory is not right. Centralize the management of all resource pools through managers, subroutines, macros etc.

Test Techniques
Path testing, transaction-flow testing, data-flow testing & stress testing. (A sketch of a leaked resource follows.)
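The following is an illustrative sketch, not from the text, of one resource-management bug named above: a resource acquired from a pool is never returned on the error path, eventually exhausting the pool. The pool API is invented.

```python
class BufferPool:
    def __init__(self, size):
        self.free = [bytearray(1024) for _ in range(size)]
    def acquire(self):
        if not self.free:
            raise RuntimeError("pool exhausted")   # denial of service
        return self.free.pop()
    def release(self, buf):
        self.free.append(buf)

pool = BufferPool(size=2)

def handle(request, fail=False):
    buf = pool.acquire()
    if fail:
        raise ValueError("bad request")            # BUG: buf never released
    pool.release(buf)

for i in range(3):
    try:
        handle(i, fail=True)
    except ValueError:
        pass                                       # leak goes unnoticed...
    except RuntimeError as e:
        print("request", i, "->", e)               # ...until the pool drains
```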

8) Integration Bugs:

These are detected late in the SDLC and involve several components, and hence are very costly.

Due to:
Inconsistencies or incompatibilities between components.
Errors in a method used to transfer data, directly or indirectly, between components. Some communication methods: data structures, call sequences, registers, semaphores, communication links, protocols etc.

Remedies:
Employ good integration strategies.

Test Techniques
Those aimed at interfaces: domain testing, syntax testing, and data-flow testing, applied across components.

9) System Bugs:

Infrequent, but costly.

Due to:
Bugs not ascribable to a particular component, but resulting from the totality of interactions among many components: programs, data, hardware & the O.S.

Remedies:
Thorough testing at all levels, with the test techniques below.

Test Techniques
Transaction-flow testing. All kinds of tests at all levels, as well as integration tests, are useful.

6. Testing & Test Design Bugs

Bugs in testing (scripts or process) are not software bugs. It is difficult, and takes time, to identify whether a bug is from the software or from the test script/procedure.

1) Bugs could be due to:
Tests that require code using complicated scenarios & databases in order to be executed.
Independent functional testing provides an unbiased point of view, but this lack of bias may lead to an incorrect interpretation of the specs.

Test Criteria
The testing process may be correct, but the criterion for judging the software's response to tests may be incorrect or impossible. And if a criterion is quantitative (throughput or processing time), the measurement test itself can perturb the actual value.

Remedies:

1. Test Debugging:
Testing & debugging the tests, test scripts etc. This is simpler when tests have localized effects.

2. Test Quality Assurance:
To monitor quality in independent testing and test design.

3. Test Execution Automation:
Test execution bugs are largely eliminated by using test execution automation tools rather than manual testing.

4. Test Design Automation:
Test design is automated, like the automation of software development. For a given productivity rate, it reduces the bug count.

A word on productivity

At the end of this long study of the taxonomy, we can say: good design inhibits bugs and is easy to test. The two factors are multiplicative, and result in high productivity. Good tests work best on good code and good design; good tests cannot work magic on badly designed software.

(Figure: bar chart of bug percentages (0-30%) by detailed bug category, corresponding to the taxonomy statistics above.)

Activity
QUESTIONS FROM PREVIOUS EXAMS ON UNIT 1

Q. Give the differences between functional and structural testing.
Ans: Dichotomies, item 2.

Q. Differentiate between function and structure.
Ans: Dichotomies, item 2.

Q. Specify the factors on which the importance of bugs depends. Give the metric for importance.
Ans: Importance of bugs, as discussed in chapter 2.

Q. Briefly explain the various consequences of bugs.
Ans: Consequences, as seen from the user's point of view.

Q. What are the different types of testing? Explain them briefly.
Ans: Levels of testing, as mentioned in a model for testing: unit, component, integration, system (possibly adding functional & structural).

Q. Give a brief explanation of white-box testing & black-box testing and give the differences between them.
Ans: Same as for Dichotomies, item 2: function vs structure.

Q. What are the differences between static data and dynamic data?
Ans: 2nd point under Data bugs in the taxonomy of bugs.

Q. What are the principles of test case design? Explain.
Ans: Dichotomies, item 4.

Q. What are the remedies for test bugs?
Ans: 6th and last point in the taxonomy of bugs: Remedies.
