Testing Fundamentals
Contents
What is Quality?
What Is Software Testing?
Why Testing?
Testing Lifecycle
Testing Principles
When to Start Testing
Quality Assurance vs Testing
Complete Testing – Is it possible?
What is Quality?
The totality of features and characteristics of a product or service that bear on its
ability to satisfy stated or implied needs - ISO 8402
Quality is conformance to requirements - Crosby
Quality is compliance to a standard - W. E. Perry
“The degree to which a system, component or process meets requirements”
or
“The degree to which a system, component or process meets customer or
user needs or expectations” - IEEE
Dimensions / Attributes of Software Quality
Portability
Efficiency
Reliability
Usability
Testability
Understandability
Modifiability
(Boehm, 1978)
What and Why Testing?
What Is Software Testing?
Testing is the process of executing a program with the intent of finding errors
Assume that the program contains errors, and then test the program to find as many of
the errors as possible
Testing is a destructive yet creative process which aims at establishing confidence
that a program does what it is supposed to do
Why Testing?
To Ensure Quality
Verifies that all requirements are implemented correctly (both for positive and
negative conditions)
Identifies defects before software deployment
Helps improve quality and reliability.
Makes software predictable in behavior.
Reduces incompatibility and interoperability issues.
Helps marketability and retention of customers.
Gives programmers information they can use to prevent bugs and create useful
software
Are we playing with fire? Cost of Ineffective Testing
Time
Late Releases
Projects need to be reworked or abandoned
Money
Budget over-runs
Defects are 100 to 1000 times more costly to find and repair after
deployment
Quality
Developers unsure of product quality
Products released with undiscovered or unresolved defects
The Cost of Quality
Poor quality has costs: many IT failures can be equated to lost business.
There is, however, also a cost to ensuring quality.
Data from the Software Engineering Institute at Carnegie Mellon University
underscores the importance of identifying software errors before the testing phase.
[Chart: relative cost of a defect by the phase in which it is found - Software
Engineering Institute, Carnegie Mellon University]
The data show a sharp jump in the cost of a defect if it is not identified until testing.
If it is then not identified during testing either, the cost again increases substantially.
The Cost of Quality (contd.)
In January 2001 the Center for Empirically Based Software Engineering republished
its analysis of software defect reduction metrics, which includes these two findings:
"Finding and fixing a software problem after delivery is often 100
times more expensive than finding and fixing it during the
requirements and design phase."
"Disciplined personal practices can reduce defect introduction rates
by up to 75 percent…When you couple this [Personal Software
Process] with the strongly compatible Team Software Process,
defect reduction rates can soar to factors of 10 or higher for an
organization that operates at a modest maturity level."
Testing Lifecycle
[Diagram: testing lifecycle flow]
Requirements Capture → Analysis → Test Planning & Scenario Design → Test Case Development → Test Execution → Test Result Analysis → Test Cycle Closure
Defect fixing cycle: defects found during test execution are fixed, producing a new version that re-enters test execution.
Notes:
Scenario design and test case development could normally start in parallel with the development cycle.
Test execution synchs up with the development cycle during the functional testing phases.
Development Process and Testing Process go on in parallel
Development Process
Business Analysis → Detailed Requirements* → High Level Design → Detailed Design → Development and Unit Testing → Testing (Systems, SI & UAT) → Transition / Rollout
* Requirements could be defined along many dimensions, e.g. functional, usability, system, performance, quality and technical requirements.
Testing Process
[Diagram: testing process flow, running in parallel with the development process; the boxes shown in blue represent the tasks undertaken by the testing team]
Requirement capture and testing strategy
System test planning & design; test data preparation and test bed setup
Unit test planning, unit testing review and unit test results
Integration test (IT) planning, integration testing and IT results review
Test analysis, functional testing and non-functional testing
Functional and non-functional results review
Defect tracking throughout
Different stages of Testing Lifecycle
Test requirements identification
Validate for testability
Test Planning and Scenario design
Develop Test Objectives
Identify Test Items
Resources and Schedules
Test Case Development
Test Case Specification (a minimal sketch follows this list)
Pre-requisites, post-requisites and acceptance criteria
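As a rough illustration of what a test case specification can capture, here is a minimal sketch using a Python dataclass. The field names and the login example are hypothetical assumptions for illustration, not a prescribed template.

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    """One test case specification, mirroring the items listed above."""
    case_id: str
    objective: str                      # the test objective this case covers
    prerequisites: list[str]            # conditions that must hold before execution
    steps: list[str]                    # ordered actions the tester performs
    expected_result: str                # what the system should do
    acceptance_criteria: str            # when the case counts as passed
    post_requisites: list[str] = field(default_factory=list)  # cleanup after the run

# Hypothetical example: a login test case
login_case = TestCaseSpec(
    case_id="TC-001",
    objective="Verify login with a valid user account",
    prerequisites=["Test user 'demo' exists", "Application server is running"],
    steps=["Open the login page", "Enter valid credentials", "Click 'Sign in'"],
    expected_result="User lands on the home page within 3 seconds",
    acceptance_criteria="Pass if the home page is shown and no error is logged",
    post_requisites=["Log the user out"],
)
```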
Different stages of Testing Lifecycle (Contd.)
Prepare Test Bed
Test Data Preparation
Test Environment Setup
Test Execution
Run Tests and Validate Results
Bug Reporting
Bug fixes and retesting
Test Result Analysis
Defect Analysis
Determining Test Set Coverage and Effectiveness
Testing Principles
Plan and test to find errors and NOT to prove that the program works fine.
Be creative and have an attitude to HACK / BREAK.
All tests should be traceable to the requirements
Exhaustive/complete testing is not feasible, so optimize tests based on priority and
criticality
Test Early and Iteratively as it evolves
“Divide and Conquer” – Begin “in the small” and progress towards testing “in the large”
The Pareto principle holds true - the 80-20 rule
The more errors already found in a section, the more likely it is that further errors remain
Automate tests wherever feasible (see the sketch after this list)
Create reusable test artifacts/scenarios
Fix only when a test cycle is completed
Testing is more efficient if done by an Unbiased third party
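To make the automation principle concrete, below is a minimal sketch of an automated, repeatable test using Python's standard unittest module. The add function and its expected behaviour are assumptions made purely for illustration.

```python
import unittest

def add(a, b):
    """Hypothetical unit under test: adds two numbers."""
    return a + b

class AddTests(unittest.TestCase):
    # Reusable, automated checks that can be re-run in every test cycle
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```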
Testing Principles - continued
Testing must be planned – Testing Engineering
Good testing requires thinking about the overall approach, designing test cases,
and establishing the expected result for each case
The planning and care we expend on that case selection accounts for much
of the difference between good and poor testers
Examining a program to see if it does not do what it is supposed to do is only half of the
battle. The other half is seeing whether the program does what it is not supposed to do
Test cases must be written not only for valid and expected input conditions, but also for
invalid and unexpected ones (see the sketch after this list)
Errors seem to come in clusters, and some sections seem to be more error-prone than
other sections.
E.g. a program with modules A and B. Five errors found in A and one in B. It is
likely that A still has more errors if A has not been purposely subjected to a more
rigorous test
Testing efforts may be focused against error-prone sections
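The following sketch illustrates writing test cases for both valid and invalid input conditions. parse_age is a hypothetical function, assumed to accept whole numbers from 0 to 130 and to raise ValueError otherwise; the point is that invalid and unexpected inputs get explicit test cases of their own.

```python
import unittest

def parse_age(text):
    """Hypothetical unit under test: convert text to an age in years."""
    age = int(text)                 # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

class ParseAgeTests(unittest.TestCase):
    def test_valid_expected_input(self):
        self.assertEqual(parse_age("42"), 42)

    def test_invalid_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_age("forty-two")

    def test_unexpected_out_of_range_input(self):
        with self.assertRaises(ValueError):
            parse_age("-1")

if __name__ == "__main__":
    unittest.main()
```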
Actors in Testing
Testers
• Role - Execute tests, record test scripts, maintain statistics and
metrics, check test data setup, test environment setup, execute re-tests
• Knowledge - Understanding the system being tested, awareness of
tools, how to progress against a plan
• Skills - Observation, accuracy, a methodical approach, co-ordination, problem
solving, avoiding egos
• Experience - Following instructions, problem reporting and solving,
and relevant testing tools.
Developers
Users
Test Analyst
Test Manager
Test Artifacts
Test Plan : A formal test plan is a document that provides and records important information
about a test project, for example: Resources, Schedule & Timeline, Test Milestones, Use
cases and/or Test cases
Test Environment/Bed: An environment containing the hardware, instrumentation,
simulators, software tools, and other support elements needed to conduct a test
Test Case: A set of test inputs, execution conditions and expected results (a sketch follows this list).
Test Data: The actual (set of) values used in the test or that are necessary to execute the
test.
Test Tools: WinRunner, LoadRunner, TestDirector, WebLoad, etc. are widely used tools.
Test Script: Used to test a particular functionality (business rule); it may consist of one or
more test cases.
Test Log: A chronological record of all relevant details about the execution of a test
Bug Reports: Contains a summary of the bug, its priority and other details regarding the
bug.
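As a minimal sketch of two of the artifacts above, the snippet below models a test log entry and a bug report as simple Python records. The field names and sample values are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TestLogEntry:
    """Chronological record of one test execution."""
    timestamp: datetime
    test_case_id: str
    tester: str
    outcome: str          # e.g. "pass" or "fail"
    notes: str = ""

@dataclass
class BugReport:
    """Summary of a defect found during testing."""
    bug_id: str
    summary: str
    priority: str         # e.g. "high", "medium", "low"
    steps_to_reproduce: list[str]
    linked_test_case: str

# Hypothetical example entries
log = TestLogEntry(datetime.now(), "TC-001", "asha", "fail", "login took 12 s")
bug = BugReport("BUG-101", "Login exceeds 3 s response target", "high",
                ["Open login page", "Sign in as 'demo'"], "TC-001")
```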
When to Start Testing?
Quality Assurance vs Testing
Quality Assurance - Preventing Bugs
The earlier in the process a bug is discovered and corrected, the
cheaper the correction.
Testing Isn’t Everything
Non-testing Activities
Prototyping – incomplete implementation that mimics the behavior we
think the users need.
Requirement analysis – checking the requirements for logical self-
consistency, for testability and for feasibility. Users can’t be expected
to provide valid requirements because they are not trained to do so.
Formal analysis, possibly mathematical
Design – ill-considered design may destroy the best requirements
Formal inspection – a process without formal inspection is seriously
flawed and depends too much on testing.
Complete Testing – Is it possible?
Impossibility of Complete Testing
If we could test completely, then at the end of testing there would be no
undiscovered errors - which is impossible.
Main Reasons
The domain of possible inputs is too large to test
There are too many paths through the program to test
The user interface (and thus design) issues are too complex to completely
test
Difficult to find all design errors
From Kaner’s lecture notes on Black Box Software Testing
Too Many Inputs
Test all valid inputs
Test all invalid inputs – check everything you can enter at the keyboard
Test all edited inputs if the program lets you edit/change numbers - make
sure editing works
Test all variations on input timing
Try testing one event just before, just after, and in the middle of processing a second
event. Will they interfere with each other?
Don’t wait to enter numbers until the computer has printed a question mark and started
flashing its cursor at you
Enter numbers when it’s trying to display others, when it is adding them up, when it is
printing a message, whenever it is busy
As soon as you skip any input value, you have abandoned complete
testing
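To put a number on how quickly the input domain explodes, the sketch below counts the keyboard entries a single text field could receive, assuming only the 95 printable ASCII characters and a maximum length of ten; both assumptions are illustrative.

```python
# Size of the input domain for one text field of up to 10 characters,
# drawn from the 95 printable ASCII characters.
PRINTABLE = 95
MAX_LEN = 10

total = sum(PRINTABLE ** length for length in range(1, MAX_LEN + 1))
print(f"{total:,} possible entries for a single field")
# roughly 6 x 10**19 possible entries - far too many to test exhaustively
```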
Too Many Combinations
Variables Interact
E.g. a program fails when the sum of a series of variables is too large.
Suppose the number of choices for the N variables are V1, V2, ..., VN. The total
number of possible combinations is V1 × V2 × ... × VN.
There are 39,601 combinations of just two variables whose values could range only
between -99 and 99 (199 possible values each, so 199 × 199 = 39,601).
A case that isn’t so trivial - 318,979,564,000 possible combinations of the first
four moves in chess.
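The multiplication rule above is easy to check in a few lines; the 39,601 figure follows from each variable having 199 possible values (-99 through 99). The extra variables in the second example below are hypothetical, and the chess figure is quoted from the slide rather than recomputed.

```python
from math import prod

# Each variable ranges over the integers -99..99, i.e. 199 possible values.
values_per_variable = len(range(-99, 100))      # 199

# Combinations for two interacting variables: V1 x V2
print(values_per_variable ** 2)                  # 39601

# General rule for N variables with V1, V2, ..., VN choices: V1 x V2 x ... x VN
choices = [199, 199, 10, 4]                      # hypothetical third and fourth variables
print(prod(choices))                             # 1584040 combinations already
```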
Too Many Paths
A program path can be traced through the code from the start of the
program to the program termination.
Two paths differ if the program executes different statements in each or executes the
same statements but in a different order
You have not completely tested the program unless you have exercised every path
Example 1: From The Art of Software Testing by Myers
Program starts at A.
From A it can go to B or C
From B it goes to X
From C it can go to D or E
From D it can go to F or G
From F or from G it goes to X
From E it can go to H or I
From H or from I it goes to X
From X the program can go to EXIT or back to A. It can go back to A no more than
19 times.
Too Many Paths (contd.)
[Diagram: flow graph of the example - from A control can reach X via B, or via C-D-F, C-D-G, C-E-H or C-E-I; from X the program either exits or loops back to A, going through the loop fewer than 20 times]
One path is ABX-Exit. There are 5 ways to get to X and then to the EXIT in one pass.
Another path is ABXACDFX-Exit. There are 5 ways to get to X the first time, 5 more to get
back to X the second time, so there are 5 x 5 = 25 cases like this.
There are 5^1 + 5^2 + ... + 5^19 + 5^20 ≈ 10^14 = 100 trillion paths through the program to test, or
approximately one billion years to try every path (if one could write, execute and verify a
test case every five minutes).
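The path count and the one-billion-year estimate can be reproduced with a short calculation, using the five-minutes-per-test figure quoted above.

```python
# Paths through Myers' example: 5 ways to reach X per pass, up to 20 passes.
paths = sum(5 ** k for k in range(1, 21))        # 5^1 + 5^2 + ... + 5^20
print(f"{paths:,} paths")                        # about 1.19e14, i.e. ~10^14

# Time to try every path at one hand-written, executed and verified
# test case every five minutes.
minutes = paths * 5
years = minutes / (60 * 24 * 365)
print(f"about {years:,.0f} years")               # roughly one billion years
```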
Difficult to Find Every Design Error
Specifications often contain errors
Accidents, e.g. 2+2=5
Deliberate – the designer thought he had a better idea, but didn’t
implement it
Many user interface failings are design errors – being in the specification
does not make them right
In summary – it is hard to find all errors
Q&A
Thank You
References
http://www.geocities.com/xtremetesting/
http://www.sqatester.com/
http://www.ontariofamilyhealthnetwork.gov.on.ca/
http://www.Kaner.com
http://www.testingFaqs.com