
SOFTWARE TESTING

INTRODUCTION
Syllabus Unit 1 – Fundamentals of
test and analysis
 Software test and analysis in a nutshell – Engineering process and verification. Basic questions: When do verification and validation start and end? What techniques should be applied? How can we assess the readiness of a product? How can we ensure the quality of successive releases?
 A framework for test and analysis – Validation and verification, Degrees of freedom, Varieties of software
 Basic principles – Sensitivity, Redundancy, Restriction, Partition, Visibility and Feedback
 Test and Analysis activities within a software process –
The quality process, planning and monitoring, quality
goals, dependability properties, analysis, testing,
improving the process and organizational factors
Engineering Processes and
Verification
Construction of high-quality engineering
products requires complementary
pairing of design and verification
activities throughout development.
Appropriate verification activities depend
on
the engineering discipline,
the construction process,
the final product,
and quality requirements.
 Challenges in identifying a verification process for software
Each software package is at least partly unique in its design
and functionality. Such products are verified individually both
during and after production to identify and eliminate faults.
Verification of a unique product requires the design of a specialized set of tests and analyses to assess the quality of that product.
The relationship between the test and analysis results and
the quality of the product cannot be defined once for all
items, but must be assessed for each product.
Verification grows more difficult with the complexity and
variety of the products. Software is among the most
variable and complex of artifacts engineered on a regular
basis.
Quality requirements of software used in one
environment may be quite different and
incompatible with quality requirements of a different
environment or application domain.
The structure of software evolves, and often deteriorates, as the software system grows.
The inherent nonlinearity of software systems and
uneven distribution of faults complicates
verification.
New development approaches, such as distributed computing and object-oriented programming, introduce new and subtle kinds of faults, which may be more difficult to reveal and remove than classic faults.
The variety of problems and the richness of
approaches make it challenging to choose and
schedule the right blend of techniques to reach the
required level of quality within cost constraints.
 The cost of software verification often exceeds half the
overall cost of software development and
maintenance.
There are no fixed recipes or pre-cooked solutions for attacking the problem of verifying a software product. Instead, we need to design a solution that suits the problem, the requirements, and the development environment.
Basic Questions
When do verification and validation start?
When are they complete?
 What particular techniques should be
applied during development of the product
to obtain acceptable quality at an
acceptable cost?
 How can we assess the readiness of a
product for release?
 How can we control the quality of
successive releases?
 How can the development process itself be improved over the course of the current and future projects?
When do verification and validation
start? When are they complete?
 Verification and validation start as soon as we decide to build a software
product, or even before.
 During feasibility study, the IT manager considers not only functionality
and development costs, but also the required qualities and their impact
on the overall cost.
 Opportunities and obstacles for cost-effective verification are important considerations in factoring the development effort into subsystems and phases, and in defining major interfaces.
 The quality manager steers the early design toward a separation of
concerns that will facilitate test and analysis.
 The initial build plan also includes some preliminary decisions about test
and analysis techniques to be used in development.
 Execute test cases during development and test phases
 If the feasibility study leads to a project commitment, verification and validation (V&V) activities commence alongside other development activities and, like development itself, continue long past the initial delivery of the product.
What techniques should be
applied?
Why Combine Techniques?
No single test or analysis technique can serve all purposes. The
primary reasons for combining techniques, rather than choosing a
single “best” technique, are
 Effectiveness for different classes of faults. For example, race
conditions are very difficult to find with conventional testing, but they
can be detected with static analysis techniques.
 Applicability at different points in a project. For example, we can
apply inspection techniques very early to requirements and design
representations that are not suited to more automated analyses.
 Differences in purpose. For example, systematic (nonrandom) testing
is aimed at maximizing fault detection, but cannot be used to
measure reliability; for that, statistical testing is required.
 Trade-offs in cost and assurance. For example, one may use a
relatively expensive technique to establish a few key properties of
core components (e.g., a security kernel) when those techniques
would be too expensive for use throughout a project.
The choice of the set of test and analysis
techniques depends on
quality,
cost,
scheduling,
and resource constraints in development of a
particular product
 For the business logic subsystem, the quality team plans to use a
preliminary prototype for validating requirements specifications.
 They plan to use automatic tools for simple structural checks of the
architecture and design specifications.
 They will train staff for design and code inspections, which will be
based on company checklists that identify deviations from design rules
for ensuring maintainability, scalability, and correspondence between
design and code.
 The analysis and test plan requires inspection of requirements
specifications, design specifications, source code, and test
documentation.
 Most source code and test documentation inspections are a simple matter of soliciting an off-line review by one other developer.
 Component interface specifications are inspected by small groups, again mostly off-line.
 A larger group and more involved process, including a moderated inspection meeting with three or four participants, is used for inspection of a requirements specification.
 Developers produce functional unit tests with each development
work assignment, as well as test oracles and any other scaffolding
required for test execution.
 If unit tests do not sufficiently exercise the control structure of a program, additional test cases will be devised.
 Integration and system tests are generated by the quality team,
working from a catalog of patterns and corresponding tests.
 The behavior of some subsystems or components is modeled as finite state machines, so the quality team creates test suites that exercise program paths corresponding to each state transition in the models (a sketch follows this list).
 The human factors team will produce look-and-feel guidelines for the Web purchasing system and will also produce and execute a usability testing plan.
 The number of faults found in the process will be recorded, and if the number of faults found in a component during design inspections is high, additional dynamic test time will be planned for that component.
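
To make the state-transition testing described above concrete, the following minimal Java sketch (the states, events, and model are invented for illustration; it assumes Java 16+ for records) derives one test case per modeled transition, i.e., transition coverage:

import java.util.List;

public class FsmTestGenerator {
    // A transition: from a source state, on an event, to a target state.
    record Transition(String from, String event, String to) {}

    public static void main(String[] args) {
        // Hypothetical finite state model of an order-processing component.
        List<Transition> model = List.of(
            new Transition("CREATED", "pay", "PAID"),
            new Transition("CREATED", "cancel", "CANCELLED"),
            new Transition("PAID", "ship", "SHIPPED"),
            new Transition("SHIPPED", "deliver", "DELIVERED"));

        // Transition coverage: emit one test case per modeled transition.
        for (Transition t : model) {
            System.out.printf("Test: start in %s, apply '%s', expect %s%n",
                              t.from, t.event, t.to);
        }
    }
}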
How Can We Assess the Readiness
of a Product?
 Products must be delivered when they meet an adequate level of functionality and quality. We must have some way to specify the required level of dependability, by specifying the availability, MTBF, and reliability of the product, and to determine when that level has been attained.
 Availability- measures the quality of service in
terms of running versus down time
 MTBF - measures the quality of the service in terms
of time between failures
 Reliability - indicates the fraction of all attempted
operations (program runs, or interactions, or
sessions) that complete successfully.
Once the goals are set, we need to monitor whether they are achieved by
measuring reliability during debug testing within the organization and comparing it against the goals
checking usage data from operational profiles
verifying reliability using a sample of real users in a controlled environment, observed by the development organization, which is called alpha testing
or verifying reliability using a sample of real users in their own environment, which is called beta testing
Once the desired dependability is achieved, the product can be released.
How Can We Ensure the Quality of
Successive Releases?
 The organization maintains a database for tracking problems. This database serves the dual purpose of tracking and prioritizing actual, known program faults and their resolution, and managing communication with users who file problem reports.
 Major revisions, involving several developers, are called point releases, and smaller revisions are called patch level releases.
A point release includes inspection of revised requirements and design, and the design and execution of new unit, integration, system, and acceptance test cases. A major point release is likely even to repeat a period of beta testing.
Patch level revisions are often urgent for at least some
customers. Test and analysis for patch level revisions is
abbreviated, and automation is particularly important for
obtaining a reasonable level of assurance with very fast
turnaround.
The organization maintains an extensive suite of regression tests, with support for recording, classification, and automatic re-execution of test cases. Each point release must undergo complete regression testing before release, but patch level revisions may be released with a subset of regression tests that run unattended overnight.
When fixing one fault, it is all too easy to introduce a new fault or re-introduce faults that have occurred in the past. A separate set of regression test cases is therefore maintained and executed, with new cases added as faults are discovered and repaired.
How Can the Development Process
Be Improved?
The organization implements a quality improvement program, with group members drawn from developers and quality specialists on several project teams.
This team tracks and classifies faults to
identify the human errors that cause them
and weaknesses in test and analysis that
allow them to remain undetected.
The group produces recommendations that
may include modifications to development
and test practices, tool and technology
support, and management practices.
Fault analysis and process improvement
comprise four main phases:
Defining the data to be collected and
implementing procedures for collecting it;
analyzing collected data to identify important
fault classes;
 analyzing selected fault classes to identify
weaknesses in development and quality
measures;
and adjusting the quality and development
process
A Framework for Test and
Analysis
There are no perfect test or analysis techniques, nor a single “best” technique for all circumstances. Every technique has its own strengths and weaknesses.
The best approach will not be exclusive
reliance on one technique, but careful
choice of a portfolio of test and analysis
techniques selected to obtain acceptable
results at acceptable cost, and addressing particular challenges posed by characteristics of the application domain or software.
Validation & Verification
Assessing the degree to which a software
system actually fulfills its requirements, in
the sense of meeting the user’s real needs,
is called validation.
Verification is checking the consistency of
an implementation with a specification.
Verification is a check of consistency between two descriptions, in contrast to validation, which compares a description (whether a requirements specification, a design, or a running system) against actual needs.
 The verification and validation process is primarily required to assess the dependability of a system. Dependability
properties include correctness, reliability, robustness,
and safety.
Correctness is absolute consistency with a specification,
always and in all circumstances.
Reliability is a statistical approximation to correctness,
expressed as the likelihood of correct behavior in
expected use.
 Robustness distinguishes which properties should be maintained even under exceptional circumstances in which full functionality cannot be maintained.
Safety is a kind of robustness in which the critical
property to be maintained is avoidance of particular
hazardous behaviors.
Degrees of Freedom
In practice, a perfectly accurate verification technique is impossible, and we are forced to accept a significant degree of inaccuracy. The level of inaccuracy we decide to accept for a testing process is called a degree of freedom.
A technique for verifying a property can be inaccurate
in one of two directions
It may be pessimistic, meaning that it is not guaranteed to accept a program even if the program does possess the property being analyzed. A software verification technique that errs only in the pessimistic direction is called a conservative analysis. It may appear that a conservative analysis would always be preferable to one that could accept a faulty program, but a conservative analysis will often produce a very large number of spurious error reports in addition to a few accurate ones.
Or it can be optimistic if it may accept some programs
that do not possess the property (i.e., it may not detect
all violations).
In addition to pessimistic and optimistic
inaccuracy, a third dimension of
compromise is possible: substituting a
property that is more easily checked, or
constraining the class of programs that can
be checked.
Varieties of software
Software comes in a wide variety.
Based on the development approach, software could be procedural or object oriented.
It varies based on application domains, such as real-time applications or safety-critical software.
Based on construction methods, it could be a physically distributed system, a single-threaded application, or a simple graphical user interface.
Typically a software system does not fall neatly into one category but rather has a number of relevant characteristics that must be considered when planning verification.
Basic Principles
The six principles that characterize various
approaches and techniques for analysis and
testing are:
Sensitivity: better to fail every time than
sometimes,
Redundancy: making intentions explicit,
Restriction: making the problem easier,
Partition: divide and conquer,
Visibility: making information accessible, and
Feedback: applying lessons from experience
in process and techniques.
Sensitivity
 The sensitivity principle states that it is better to fail every time than sometimes.
 Reliable criteria require that inputs belonging to the same class produce the same test results: they all fail or they all succeed. When this happens, we can infer the correctness of a program with respect to a whole class of inputs from a single execution.
 A fault that results in failure randomly but very rarely, for example a race condition that only occasionally causes data corruption, may likewise escape detection until the software is in use by thousands of customers, and even then be difficult to diagnose and correct.
 The sensitivity principle says that we should try to make these faults
easier to detect by making them cause failure more often.
 It can be applied in three main ways: at the design level, changing the
way in which the program fails; at the analysis and testing level,
choosing a technique more reliable with respect to the property of
interest; and at the environment level, choosing a technique that
reduces the impact of external factors on the results
 Replacing strcpy and strncpy with a checked stringCopy function is a simple example of applying the sensitivity principle in design.
 Run-time array bounds checking in many programming languages (including Java, but not C or C++) is an example of the sensitivity principle applied at the language level (a sketch follows these bullets).
 A variety of tools and replacements for the standard memory
management library are available to enhance sensitivity to memory
allocation and reference faults in C and C++.
 The sensitivity principle can also be applied to test and analysis
techniques.
 In scenarios where normal testing cannot reliably detect errors such as deadlocks or race conditions, we may use model-based testing and reachability analysis.
 Code inspection can reveal many subtle faults.
 The use of detailed checklists and a disciplined review process may reduce the influence of external factors.
 Skilled test designers can derive excellent test suites.
 Systematic testing criteria will also help in the process.
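
As a concrete illustration of the language-level point above, the following minimal Java sketch contains an off-by-one fault; Java's run-time bounds checking makes it fail immediately and on every run, instead of silently corrupting memory as equivalent C or C++ code might:

public class BoundsDemo {
    public static void main(String[] args) {
        int[] buffer = new int[4];
        // Fault: the loop bound should be i < buffer.length.
        // Java throws ArrayIndexOutOfBoundsException on the first bad
        // access, turning a potentially rare, hard-to-diagnose failure
        // into one that occurs reliably.
        for (int i = 0; i <= buffer.length; i++) {
            buffer[i] = i;
        }
    }
}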
Redundancy
 In software test and analysis, we wish to detect faults that could lead to differences between intended behavior and actual behavior; to that end we add redundant statements of intent.
 Redundancy is most effective when paired with an automatic, algorithmic check for consistency.
 Such checks are much cheaper and more thorough than dynamic testing or manual inspection.
 Static type checking is a classic application of this principle; e.g., Java enforces rules about explicitly declaring each exception that can be thrown by a method.
 Software design tools and other software artifacts also provide ways
to check consistency between different design views or artifacts.
 Defensive programming, explicit run-time checks for conditions that
should always be true if the program is executing correctly, is
another application of redundancy in programming.
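
The following minimal Java sketch (method names invented for illustration) shows both forms of redundancy mentioned above: the throws clause is a redundant, compiler-checked declaration of intent, and the argument check is defensive programming:

import java.io.FileReader;
import java.io.IOException;

public class RedundancyDemo {
    // Redundant statement of intent: the compiler rejects any caller
    // that neither catches IOException nor declares it.
    static int firstChar(String path) throws IOException {
        try (FileReader reader = new FileReader(path)) {
            return reader.read();
        }
    }

    // Defensive programming: an explicit run-time check for a condition
    // that should always hold if the program is executing correctly.
    static double average(int[] values) {
        if (values == null || values.length == 0) {
            throw new IllegalArgumentException("values must be non-empty");
        }
        long sum = 0;
        for (int v : values) sum += v;
        return (double) sum / values.length;
    }
}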
Restriction
Restriction makes the testing problem simpler by imposing restrictions at the programming language, architectural design, or detailed design level.
Programming-language-level restriction example: Java refuses to compile a program in which a local variable might be used before it is initialized; this rule of initialization before use is a restriction built into the Java language (a sketch follows).
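
A minimal sketch of the rule in action (the method is invented for illustration); the Java compiler deliberately rejects this code, reporting that k might not have been initialized, even though some executions would assign it:

public class RestrictionDemo {
    static int demo(boolean condition) {
        int k;
        if (condition) {
            k = 10;
        }
        // Compile-time error: "variable k might not have been initialized".
        // The restriction trades some programmer freedom for a property
        // that is easy to check without executing the program.
        return k;
    }
}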
Detailed design restriction example – implementing serializability of transactions using locking or versioning.
Architectural design restriction example – stateless component interfaces are an example of restriction applied at the architectural level. A famous example of simplifying component interfaces by making them stateless is the Hypertext Transfer Protocol (HTTP) 1.0 of the World Wide Web, which made Web servers simpler, more robust, and also much easier to test.
Partition
 The partition, or divide and conquer, principle divides a complex problem into subproblems that can be attacked and solved independently.
 Partitioning can be applied both at testing process and testing
technique levels.
 At the process level, testing is usually divided into unit, integration, subsystem, and system testing. In this way, we can focus on different sources of faults at different steps, and at each step we can take advantage of the results of the earlier steps.
 At the technique level, consider an example where we construct a model and then analyze the system using the model. In this way we divide the overall analysis into two subtasks: first simplify the system to make the proof of the desired properties feasible, and then prove the property with respect to the simplified model.
Visibility
 Visibility means the ability to measure progress or status
against goals.
 The principle of visibility involves setting goals that can be
assessed as well as devising methods to assess their
realization.
 In testing and analysis visibility means
 process visibility – measuring achieved quality against quality
goals and
 schedule visibility - ability to judge the state of development
against a project schedule.
 One way of achieving schedule visibility is through documentation. All test plans (unit test plan, integration test plan, system test plan, user acceptance test plan) are documented, and success/failure status is updated during the testing phase. All detected defects are tracked in a separate defect tracker, and their status is promptly updated.
Visibility is closely related to observability, the ability to extract useful information from a software artifact. The choice of simple, human-readable text rather than a more compact binary encoding has a small cost in performance and a large payoff in observability; examples include the HTTP and SMTP protocols and the construction of test drivers and oracles that work with human-readable text.
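
A small Java sketch of this observability payoff (the request and response strings are invented, in the spirit of text protocols such as HTTP): because the protocol is human-readable text, the test oracle is a plain string comparison, with no binary decoding required:

public class ObservabilityDemo {
    // Hypothetical text-based request handler.
    static String handleRequest(String request) {
        if (request.equals("GET /status")) return "200 OK";
        return "404 NOT FOUND";
    }

    public static void main(String[] args) {
        // The oracle compares readable text directly.
        String actual = handleRequest("GET /status");
        if (!actual.equals("200 OK")) {
            throw new AssertionError("expected '200 OK' but got '" + actual + "'");
        }
        System.out.println("PASS");
    }
}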
The architectural design and build plan of a system
determines what will be observable at each stage of
development, which in turn largely determines the
visibility of progress against goals at that stage.
Feedback
 Feedback is another classic engineering principle that
applies to analysis and testing.
 Feedback is collected from all categories of participants: developers, testers, quality analysts, technical experts, managers, users, and so on.
 Feedback applies both to the process itself (process
improvement) and to individual techniques (e.g., using test
histories to prioritize regression testing).
 Systematic inspections and walkthroughs are conducted, and refined based on the feedback.
 Participants in inspections are guided by checklists, and checklists are revised and refined based on experience.
 New checklist items may be derived from root cause analysis, analyzing previously observed failures to identify the initial errors that led to them.
Test and Analysis activities within
a software process
The quality process should be structured
for
Completeness
Timeliness
Cost effectiveness
The activities include
Planning and monitoring
Quality Goals
Dependability properties
Analysis
Testing
Improving the process
Planning and monitoring
Visibility
A&T strategy
A&T plan
 Visibility
 Visibility in software quality analysis is about checking “How does our
progress compare to our plan?” and also checking progress against
quality goals. If one cannot gain confidence in the quality of the
software system long before it reaches final testing, the quality
process has not achieved adequate visibility.
 A well-designed quality process balances several activities across the
whole development process, selecting and arranging them to be as
cost-effective as possible, and to improve early visibility.
 Visibility is particularly challenging and is one reason that quality
activities are usually placed as early in a software process as possible.
For example, one designs test cases at the earliest opportunity and
uses both automated and manual static analysis techniques on
software artifacts that are produced before actual code.
 Early visibility also motivates the use of “proxy” measures, that is,
use of quantifiable attributes that are not identical to the properties
that one really wishes to measure, but that have the advantage of
being measurable earlier in development.
A&T strategy
The overall analysis and test strategy identifies company- or project-wide standards that must be satisfied, such as
 procedures for obtaining quality certificates required for
certain classes of products,
 techniques and tools that must be used, and
 documents that must be produced.
Some companies develop and certify procedures following international standards such as ISO 9000 or the SEI Capability Maturity Model, which require detailed documentation and management of analysis and test activities and well-defined phases, documents, techniques, and tools.
 A&T plan
A complete analysis and test plan is a comprehensive description of the quality process and includes several items:
 it indicates the objectives and scope of the test and analysis activities;
 it describes documents and other items that must be available for
performing the planned activities,
 integrating the quality process with the software development
process;
 it identifies items to be tested, thus allowing for simple
completeness checks and detailed planning;
 it distinguishes features to be tested from those not to be tested;
 it selects analysis and test activities that are considered essential for
success of the quality process; and finally
 it identifies the staff involved in analysis and testing and their
respective and mutual responsibilities.
The final analysis and test plan includes additional
information that illustrates
 constraints - Constraints indicate deadlines and limits that may be
derived from the hardware and software implementation of the system
under analysis and the tools available for analysis and testing
 pass and fail criteria - Pass and fail criteria indicate when a test or
analysis activity succeeds or fails, thus supporting monitoring of the
quality process
 Schedule - The schedule describes the individual tasks to be performed
and provides a feasible schedule
 Deliverables - Deliverables specify which documents, scaffolding and
test cases must be produced, and indicate the quality expected from
such deliverables.
 hardware and software requirements - Hardware, environment and tool
requirements indicate the support needed to perform the scheduled
activities
 risks, and contingencies - The risk and contingency plan identifies the
possible problems and provides recovery actions to avoid major
failures.
Quality goals
Software product qualities can be divided
into two categories
 Internal - Properties that are not directly visible to
end users, such as maintainability, reusability, and
traceability, are called internal properties, even
when their impact on the software development and
evolution processes may indirectly affect users.
 External - Properties that are directly visible to users of a software product, such as dependability, latency, usability, and throughput, are called external properties.
The external properties of software can ultimately
be divided into dependability and usefulness.
Dependability
 Does the software do what it is intended to do?
 When software is not dependable, we say it has a fault, or a
defect, or a bug, resulting in an undesirable behavior or
failure.
Usability
 It is quite possible to build systems that are very reliable, relatively free from hazards, and completely useless. They may be unbearably slow, or have terrible user interfaces and unfathomable documentation, or they may be missing several crucial features.
 As with dependability, it is good practice to make usability requirements explicit.
Dependability properties
Correctness
 A program or system is correct if it is consistent with its specification. By definition, a specification divides all possible system behaviors into two classes, successes and failures. All of the possible behaviors of a correct system are successes.
Reliability
 Reliability is a measure of the likelihood of correct function for
some “unit” of behavior, for a period of time.
 Like correctness, reliability is relative to a specification (which
determines whether a unit of behavior is counted as a
success or failure).
 Unlike correctness, reliability is also relative to a particular
usage profile. The same program can be more or less reliable
depending on how it is used.
Availability
 The availability of a system is the time in which the system is
“up” (providing normal service) as a fraction of total time.
 E.g., a network router that averages 1 hour of down time in each 24-hour period would have an availability of 23/24, or 95.8%.
MTBF (Mean Time Between Failures) –
 It is the mean time interval between two successive system failures.
 E.g., a system that fails once per day and takes one hour to recover has an MTBF of 23 hours.
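
The figures quoted above can be checked with a few lines of arithmetic; this small Java sketch reproduces both examples:

public class DependabilityArithmetic {
    public static void main(String[] args) {
        // Availability: 1 hour of down time in each 24-hour period.
        double availability = (24.0 - 1.0) / 24.0;
        System.out.printf("Availability = %.1f%%%n", availability * 100); // 95.8%

        // MTBF: one failure per day with a 1-hour recovery leaves
        // 23 hours of operation between failures.
        double mtbfHours = 24.0 - 1.0;
        System.out.println("MTBF = " + mtbfHours + " hours"); // 23.0 hours
    }
}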
Safety
 Software safety is concerned with preventing certain
undesirable behaviors, called hazards.
 Software safety is typically a concern in “critical” systems
such as avionics and medical systems, but the basic principles
apply to any system.
Robustness
Software that gracefully degrades or fails
“softly” outside its normal operating
parameters is robust
E.g., it is acceptable for a database system to cease functioning when the power is cut, but unacceptable for it to leave the database in a corrupt state.
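
A minimal Java sketch of failing "softly" (the service and its failure are hypothetical): when a noncritical dependency is unavailable, the method degrades to a safe default instead of propagating the failure:

import java.util.List;

public class RobustnessDemo {
    static List<String> recommendations() {
        try {
            return fetchPersonalizedRecommendations(); // may fail
        } catch (RuntimeException unavailable) {
            // Graceful degradation: reduced functionality, not total failure.
            return List.of("bestseller-1", "bestseller-2");
        }
    }

    // Stand-in for a remote call that is currently failing.
    static List<String> fetchPersonalizedRecommendations() {
        throw new RuntimeException("recommendation service unreachable");
    }

    public static void main(String[] args) {
        System.out.println(recommendations()); // [bestseller-1, bestseller-2]
    }
}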
Analysis – Analysis techniques that do not involve actual execution of program source code play a prominent role in overall software quality processes.
Manual inspection
 It can be applied to essentially any document including
requirements documents, architectural and more detailed
design documents, test plans and test cases, and even program
source code.
 Inspection may also have secondary benefits, such as spreading
good practices and instilling shared standards of quality.
 However, inspection takes a considerable amount of time and requires meetings, which can become a scheduling bottleneck.
 Also, re-inspecting a changed component can be as expensive as the initial inspection.
Automated analysis
Automated static analyses are more limited in applicability but are selected when applicable because they are more cost-effective.
They check thoroughly for particular classes of faults.
For example, finite state verification techniques for concurrent systems require construction and careful structuring of a formal design model, and address faulty synchronization structure.
Hybrid approach
A hybrid approach combines manual and automated analysis, selecting the suitable approach for each class of faults.
 Testing
Tests are executed when the corresponding code is available,
but testing activities start earlier, as soon as the artifacts
required for designing test case specifications are available.
Thus, acceptance and system test suites should be generated
before integration and unit test suites, even if executed in the
opposite order.
Early test design has several advantages.
 Tests are specified independently from code, when the corresponding software specifications are fresh in the minds of analysts and developers, facilitating review of test design.
 Test cases may highlight inconsistencies and incompleteness in the
corresponding software specifications.
 Early test design helps in early repair of software specifications, preventing specification faults from propagating to later stages in development.
 Programmers may use test cases to illustrate and clarify the software specifications, especially for errors and unexpected conditions.
 Improving the process
 The goal of quality process improvement is to find cost-effective
countermeasures for classes of faults that are expensive because
they occur frequently
 The quality process can be improved by gathering, analyzing,
and acting on data regarding faults and failures.
 The first part of a process improvement feedback loop, and often the most
difficult to implement, is gathering sufficiently complete and accurate raw
data about faults and failures.
 Raw data on faults and failures must be aggregated into categories and
prioritized
 The analysis step consists of tracing several instances of an observed fault or failure back to the human error from which it resulted, or even further to the factors that led to that human error. This process is known as “root cause analysis.”
 The countermeasure could involve differences in programming methods
(e.g., requiring use of certified “safe” libraries for buffer management), or
improvements to quality assurance activities (e.g., additions to inspection
checklists), or sometimes changes in management practices.
Organizational factors
Many people and teams will be involved in
the quality and analysis process of a system.
The quality of a product can be affected by poor allocation of responsibilities.
Separation of roles helps resolve conflicts.
Mobility of people and roles is helpful as well.
Short answer questions
1. When do verification and validation start? When are they complete?
2. What particular techniques should be applied during development of the product to obtain
acceptable quality at an acceptable cost?
3. How can we assess the readiness of a product for release?
4. How can we control the quality of successive releases?
5. How can the development process itself be improved over the course of the current and
future projects to improve products and make verification more cost effective?
6. What is meant by verification and validation?
7. What is meant by degree of freedom in a software testing process?
8. What are the pessimistic and optimistic approaches based on the degrees of freedom of a testing process?
9. List the six principles that characterize various approaches and techniques for analysis and testing.
10. Define availability and MTBF of a system.

11. Define internal and external quality.
Long answer questions

1. Explain the challenges in identifying a verification process for software.
2. List down the basic questions with answers with respect to
verification process
3. Distinguish between validation and verification
4. Write a note on Degree of Freedom in software testing processes
5. Write a note on Varieties of software which identifies the
appropriate testing process of a product
6. Write a note on sensitivity of testing processes
7. Write a note on redundancy of testing processes
8. Write a note on restriction of testing processes
9. Write a note on partition of testing processes
10. Write a note on visibility of testing processes
11. Write a note on feedback in testing processes
12. Explain the testing and analysis activities within a software process
13. Write a note on dependability properties
14. What are the organizational factors that affect the testing and analysis process?
