Testing
Fozia Abako
Introduction
Cont.…
The role of Quality Assurance may cover aspects such as the design and
monitoring of documentation systems,
• approval and monitoring of written procedures to produce the product,
• approval of written records of the processing operations,
• approval and monitoring of cleaning systems,
• regulatory control,
• batch or lot review,
• release of product, etc.
Quality Control
• Simplistically, quality is an attribute of software that implies the software meets its
specification
• This definition is too simple for ensuring quality in software systems
• Software specifications are often incomplete or ambiguous
• Some quality attributes are difficult to specify
• Tension exists between some quality attributes, e.g. efficiency vs. reliability
• Software requirements are the foundation from which quality is measured.
• Lack of conformance to requirements is lack of quality.
• Specified standards define a set of development criteria that guide the manner in which
software is engineered.
• If the criteria are not met, lack of quality will almost surely result.
• There is a set of implicit requirements that often goes unmentioned.
• If software conforms to its explicit requirements but fails to meet its implicit
requirements, software quality is suspect.
Cont.…
• Explicit requirements: the specific requirements stated by the customer.
• Implicit requirements: requirements added (analyzed) by the business analyst that increase the value of the application without disturbing the original customer requirements.
• Without quality, your company will not survive.
• Use a written quality program to ensure you can offer your customers consistent products.
• Provide consistent products to keep production costs down and increase revenue.
Effective quality process
An effective quality process must focus on:
• Paying close attention to customer requirements
• Making efforts to continuously improve quality
• Integrating measurement processes with product design and development
• Pushing the quality concept down to the lowest level of the organization
• Developing a system-level perspective with an emphasis on methodology and process
• Eliminating waste through continuous improvement.
Quality priorities differ across scenarios: an online banking system emphasizes security, correctness, and reliability; an air traffic control system emphasizes robustness and real-time response; an educational game for children emphasizes user-friendliness.
QA versus QC
• Some people may confuse the term quality assurance with quality
control (QC).
• Although the two concepts share similarities, there are important distinctions
between them.
• In effect, QA provides the overall guidelines used anywhere, and QC is a
production-focused process – for things such as inspections.
• QA is any systematic process for making sure a product meets specified
requirements, whereas QC addresses other issues, such as individual
inspections or defects.
• In terms of software development, QA practices seek to prevent
malfunctioning code or products, while QC implements testing and
troubleshooting and fixes code.
Quality assurance in software
• Software quality assurance (SQA) systematically finds patterns and the
actions needed to improve development cycles.
• Finding and fixing coding errors can carry unintended consequences; it is
possible to fix one thing, yet break other features and functionality at the
same time.
• SQA has become important for developers as a means of avoiding errors
before they occur, saving development time and expenses.
• Even with SQA processes in place, an update to software can break other
features and cause defects -- commonly known as bugs.
• There have been numerous SQA strategies.
Cont....
• Software development methodologies that rely on SQA, such as Waterfall, Agile, and Scrum, have evolved over time.
• Each development process seeks to optimize work efficiency.
• Waterfall is the traditional linear approach to software development.
• It's a step-by-step process that typically involves gathering requirements, formalizing a
design, implementing code, code testing and remediation, and release.
• It is often seen as too slow, which is why alternative development methods were
constructed.
• Agile is a team-oriented software development methodology where each step in the work
process is approached as a sprint (race/run).
• Agile software development is highly adaptive, but it is less predictive because the scope of
the project can easily change.
• Scrum is a combination of both processes where developers are split into teams to handle
specific tasks, and each task is separated into multiple sprints.
Software Quality Assurance
• To ensure quality in a software product, an organization must take the following approach to quality management:
• 1. Organization-wide policies, procedures and standards must be established.
• 2. Project-specific policies, procedures and standards must be tailored from
the organization-wide templates.
• 3. Quality must be controlled; that is, the organization must ensure that the
appropriate procedures are followed for each project
• Standards exist to help an organization draft an appropriate software quality
assurance plan.
• External entities can be contracted to verify that an organization is standard-
compliant.
What is Testing?
• Testing is a process of executing a software application with the intent of finding errors and of verifying that it satisfies specified requirements.
• Testing is the process of exercising or evaluating a system or a system component by manual
or automated means to verify that it satisfies specified requirements or to identify differences
between expected and actual results.
• Testing is a measurement of software quality in terms of defects found, for both functional
and non-functional software requirements and characteristics.
• Testing is the process of executing the program with the intent of finding faults.
❑ Who should do this testing and when should it start are very important questions.
❑ As we know, software testing is the fourth phase of the software development life cycle (SDLC). About 70% of development time is spent on testing.
What is Software Testing?
• Testers used to be relied upon to test everything, but as the whole software delivery life cycle
has evolved, code-specific tests (unit tests, which test individual units of computer code, and
integration tests, which test multiple modules of code) have been taken on by the software
developer and the software tester has been allowed to focus on adding value into the process
with:
Functional Testing - testing the function of a system, or part of an overall system, to ensure
that the specification is met, usually based on acceptance criteria.
Integration Testing - testing a number of software modules in an integrated fashion as a group.
User Experience Testing - testing the main graphical user interface between the software
system under development and the end user.
Cont…
• Exploratory Testing - manually exploring the developed system while simultaneously
learning about its functionality, designing tests as you go and observing any testable or
unexpected behavior.
• Regression Testing - testing designed to uncover any new bugs introduced to existing
functionality as a result of changes to the code currently being deployed or environmental
changes.
• Acceptance Testing - testing usually, although not always, performed on the customer side to
ensure that the code/system being delivered meets acceptable standards and is as defined
prior to commencing the project.
• Smoke & Sanity (rational/stability) Testing - testing performed at a high level to ensure that
the main functionality of a system is as required prior to moving onto more granular tests.
Cont…
• Performance Testing - the process of determining the performance of a system (speed, load,
stress points, bottlenecks, etc.) based on a number of metrics and quite often measured
against a baseline measurement prior to development.
• Security Testing - ensuring that the system security and data integrity are of the highest
possible standard and any weaknesses or vulnerabilities are spotted prior to release.
• Internationalization and Localization Testing - testing that all local languages, symbols,
currencies and other regional variations are correctly applied to each local site.
IO Mapping
[Diagram: the software maps an input set to an output set; a subset of the inputs causes erroneous outputs.]
Software Faults and Failures?
• A failure corresponds to erroneous/unexpected runtime behavior observed by a user.
• A fault is a static software characteristic that can cause a failure to occur.
• The presence of a fault doesn’t necessarily imply the occurrence of a failure.
[Diagram: within the input set, the inputs exercised by User A, User B, and User C overlap differently with the subset of erroneous inputs, so the same fault may produce a failure for one user but not for another.]
What is Software Testing?
❑ The concept of software testing has evolved from simple program “check-out” to a broad set of activities that
cover the entire software life-cycle.
There are five distinct levels of testing that are given below:
2. Demonstrate: The process of showing that major features work with typical input.
3. Verify: The process of finding as many faults in the application under test (AUT) as possible.
4. Validate: The process of finding as many faults as possible in the requirements, design, and AUT.
Cont.…
“Testing is the process of exercising or evaluating a system or system components by manual or automated
means to verify that it satisfies specified requirements.” [IEEE]
“Software testing is the process of executing a program or system with the intent of finding errors.”
[Myers]
“It involves any activity aimed at evaluating an attribute or capability of a program or system and
determining that it meets its required results.” [Hetzel]
What is NOT Software Testing?
❑ The process of demonstrating that errors are not present.
❑ The process of showing that a program performs its intended functions correctly.
❑ The process of establishing confidence that a program does what it is supposed to do.
What is Positive testing?
❑ Operate the application as it should be operated (normal/common workflows).
❑ Does it behave normally? Use a proper variety of legal test data, including data values at the boundaries, to test whether it fails. Compare actual test results with the expected results.
What is Negative testing?
❑ Test for abnormal operations. Does the system fail/crash?
❑ Test with illegal or abnormal data or invalid values. Intentionally attempt to make things go wrong in order to discover and detect defects; a short sketch contrasting positive and negative tests follows below.
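To make the contrast concrete, here is a minimal sketch (my own illustration, not part of the slides) that pairs one positive test with several negative tests for a hypothetical withdraw function, using pytest.

```python
import pytest

def withdraw(balance: float, amount: float) -> float:
    """Hypothetical function under test: returns the new balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Positive test: normal workflow with legal data; compare actual with expected.
def test_withdraw_valid_amount():
    assert withdraw(100, 40) == 60

# Negative tests: illegal or abnormal data; the system should fail gracefully
# with a clear error instead of crashing or corrupting the balance.
@pytest.mark.parametrize("amount", [0, -5, 150])
def test_withdraw_rejects_invalid_amount(amount):
    with pytest.raises(ValueError):
        withdraw(100, amount)
```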
What is Positive View of Negative Testing?
❑ Positive view of negative testing: The job of testing is to discover errors before the user does.
❑ The mentality of the tester has to be destructive--opposite to that of the creator/author/developer, which should be constructive.
What is Software Testing?
❑ One very popular equation of software testing is: Software testing = Software verification + Software validation.
(OR)
“It is the process of evaluating, reviewing, inspecting and doing desk checks of work products such as
requirement specifications, design specifications and code.”
(OR)
What is Software Testing?
❑ As per IEEE definition(s):
❑ Software validation:
❑ “It is defined as the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies the specified requirements.”
❑ As mentioned earlier, good testing expects more than just running a program.
Why Should We Test? What is the Purpose?
Testing is necessary - reasons from various perspectives:
1. The Technical Case:
a. Competent developers are not always effective.
b. The implications of requirements are not always predictable.
c. The behaviour of a system is not necessarily predictable from its components.
d. Languages, databases, user interfaces, and operating systems have bugs that can cause application failures.
e. Reusable classes and objects must be trustworthy.
2. The Business Case:
a. If you don’t find bugs your customers or users will.
b. Post-release debugging is the most expensive form of development.
c. Buggy software hurts operations, sales, and reputation.
d. Buggy software can be hazardous to life and property.
3. The Professional Case:
a. Test case design is a challenging and rewarding task.
b. Good testing allows confidence in your work.
c. Systematic testing allows you to be most effective.
d. Your credibility is increased and you have pride in your efforts.
Why Should We Test? What is the Purpose?
4. The Economics Case: Practically speaking, defects get introduced in every phase of the SDLC. Pressman has described a defect amplification model, which says that errors get amplified by a certain factor if they are not removed in the phase in which they were introduced.
❑ This may increase the cost of defect removal. This principle of detecting errors as close to their point of
introduction as possible is known as phase containment of errors.
Efforts During SDLC
Why Should We Test? What is the Purpose?
5. To Improve Quality: As computers and software are used in critical applications, the outcome of a bug can be
severe.
❑ Bugs can cause huge losses.
❑ Bugs in critical systems have caused airplane crashes, allowed space shuttle systems to go awry, and halted
trading on the stock market. Bugs can kill.
❑ Bugs can cause disasters.
❑ In a computerized embedded world, the quality and reliability of software is a matter of life and death. This
can be achieved only if thorough testing is done.
Why Should We Test? What is the Purpose?
7. For Reliability Estimation: Software reliability has important relationships with many aspects of software,
including the structure and the amount of testing done to the software.
❑ Based on an operational profile (an estimate of the relative frequency of use) of various inputs to the program,
testing can serve as a statistical sampling method to gain failure data for reliability estimation.
Who Should do Testing?
❑ As mentioned earlier, testing starts right from the very beginning. This implies that testing is everyone’s
responsibility.
❑ By “everyone,” we mean all project team members. So, we cannot rely on one person only. Naturally, it is a
team effort.
❑ We cannot designate only the tester as responsible; even the developers are responsible.
❑ Developers build the code but often cannot see its own errors, because they have written the code themselves.
How Much Should We Test?
❑ Consider that there is a while loop that has three paths. If this loop is executed twice, we have (3 × 3) paths
and so on. So, the total number of paths through such code will be:
= 1 + 3 + (3 × 3) + (3 × 3 × 3) + ...
= 1 + ∑ 3^n (where n > 0)
Note: This means an infinite number of test cases. Thus, testing is not 100% exhaustive.
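A quick back-of-the-envelope sketch (assuming the three-path loop described above) shows how fast the number of paths grows with the iteration count, which is why exhaustive path testing is impractical:

```python
def total_paths(branches_per_iteration: int, max_iterations: int) -> int:
    # 1 path for zero iterations, plus branches^n paths for n = 1..max_iterations
    return 1 + sum(branches_per_iteration ** n for n in range(1, max_iterations + 1))

for k in (1, 5, 10, 20):
    print(k, total_paths(3, k))
# Already at 20 iterations there are more than five billion distinct paths.
```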
❑ If multiple testing of the functionality of the software is not showcasing any defects, then it is the right time
to stop testing the software.
❑ If you found defects with a small portion of overall functionality, you should continue the testing.
Selection of Good Test Cases
Designing a good test case is a complex art. It is complex because:
a. Different types of test cases are needed for different classes of information.
b. Not all test cases within a test suite will be good. Test cases may be good in a variety of ways.
c. People create test cases according to certain testing styles like domain testing or risk-based testing. And good
domain tests are different from good risk-based tests.
Brian Marick coined a new term for a lightly documented test case—the test idea. According to Brian, “A test idea
is a brief statement of something that should be tested.” For example, if we are testing a square-root function, one
test idea would be—“test a number less than zero.” The idea here is again to check if the code handles an error
case.
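As an illustration (assuming a hypothetical safe_sqrt wrapper, my own example rather than the slide's), that test idea translates directly into an executable test case:

```python
import math
import pytest

def safe_sqrt(x: float) -> float:
    """Hypothetical square-root wrapper that must handle the error case."""
    if x < 0:
        raise ValueError("square root of a negative number is undefined")
    return math.sqrt(x)

# Test idea: "test a number less than zero" -> verify the error case is handled.
def test_sqrt_rejects_negative_input():
    with pytest.raises(ValueError):
        safe_sqrt(-1.0)
```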
Cem Kaner said—“The best test cases are the ones that find bugs.” Our efforts should focus on the test cases that find issues. Do broad or deep coverage testing on the trouble spots.
A test case is a question that you ask of the program. The point of running the test is to gain information like
whether the program will pass or fail the test.
Measurement of Testing
❑ There is no single scale that is available to measure the testing progress.
❑ A good project manager (PM) wants the worst conditions to occur at the very beginning of the project rather than in the later phases.
❑ If errors are large in numbers, we can say either testing was not done thoroughly or it was done so
thoroughly that all errors were covered. So there is no standard way to measure our testing process. But metrics
can be computed at the organizational, process, project, and product levels. Each set of these measurements
has its value in monitoring, planning, and control.
Note: Metrics are assisted by four core components—schedule, quality, resources, and size.
Incremental Testing Approach
To be effective, a software tester should be knowledgeable in two key areas:
1. Software testing techniques
2. The application under test (AUT)
For each new testing assignment, a tester must invest time in learning about the application. A tester with no
experience must also learn testing techniques, including general testing concepts and how to define test cases.
Our goal is to define a suitable list of tests to perform within a tight deadline.
There are 8 stages for this approach:
Stage 1: Exploration
Purpose: To gain familiarity with the application
Stage 2: Baseline test
Purpose: To devise and execute a simple test case
Stage 3: Trends analysis
Purpose: To evaluate whether the application performs as expected when actual output cannot be predetermined
Stage 4: Inventory
Purpose: To identify the different categories of data and create a test for each category item
Incremental Testing Approach
Stage 5: Inventory combinations
Purpose: To combine different input data
Stage 6: Push the boundaries
Purpose: To evaluate application behaviour at data boundaries
Stage 7: Devious data
Purpose: To evaluate system response when specifying bad data
Stage 8: Stress the environment
Purpose: To attempt to break the system
The schedule is tight, so we may not be able to perform all of the stages. The time permitted by the delivery
schedule determines how many stages one person can perform. After executing the baseline test, later stages could
be performed in parallel if more testers are available.
Basic Terminology Related to Software Testing
We must define the following terminologies one by one:
1. Error (or mistake or bugs): People make errors. When people make mistakes while coding, we call these
mistakes bugs. Errors tend to propagate.
A requirements error may be magnified during design and still amplified
during coding. So, an error is a mistake during SDLC.
2. Fault (or defect): A missing or incorrect statement in a program resulting from an error is a fault. So, a fault is
the representation of an error.
Representation here means the mode of expression, such as a narrative text, data flow diagrams, hierarchy charts,
etc. Defect is a good synonym for fault. Faults can be elusive, and they require fixes.
3. Failure: A failure occurs when a fault executes. The manifested inability of a system or component to perform a
required function within specified limits is known as a failure. A failure is evidenced by incorrect output, abnormal
termination, or unmet time and space constraints. It is a dynamic process.
Cont..
4. Incident: When a failure occurs, it may or may not be readily apparent to the user. An incident is the symptom
associated with a failure that alerts the user to the occurrence of a failure. It is an unexpected occurrence that
requires further investigation. It may not need to be fixed.
5. Test: Testing is concerned with errors, faults, failures, and incidents. A test is the act of exercising software with
test cases. A test has two distinct goals—to find failures or to demonstrate correct execution.
6. Test case: A test case has an identity and is associated with program behaviour. A test case also has a set of
inputs and a list of expected outputs. The essence of software testing is to determine a set of test cases for the item
to be tested.
Test Case ID, Purpose, Preconditions, Inputs, Expected Outputs, Postconditions, Execution History, Date, Result, Version, Run By
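The fields above can be captured in a simple record. The dataclass below is only an illustrative sketch of one possible representation, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str
    purpose: str
    preconditions: list[str]
    inputs: dict
    expected_outputs: dict
    postconditions: list[str] = field(default_factory=list)
    execution_history: list[str] = field(default_factory=list)  # date, result, version, run by

# Hypothetical example record for a login feature.
example = TestCase(
    test_case_id="TC-042",
    purpose="Verify login with a valid user",
    preconditions=["user 'alice' exists and is active"],
    inputs={"username": "alice", "password": "correct-horse"},
    expected_outputs={"status": "logged in"},
)
```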
Cont.…
There are two types of inputs:
a. Preconditions: circumstances that hold prior to test case execution.
b. Actual inputs: the inputs identified by some testing method.
Expected outputs are also of two types:
a. Postconditions
b. Actual outputs
7. Test suite: A collection of test scripts or test cases that is used for validating bug fixes (or finding new bugs)
within a logical or physical area of a product. For example, an acceptance test suite contains all of the test cases that
were used to verify that the software has met certain predefined acceptance criteria.
8. Test script: The step-by-step instructions that describe how a test case is to be executed. It may contain one or
more test cases.
9. Testware: This includes all of the testing documentation created during the testing process, for example, test specifications, test scripts, test cases, test data, and the environment specification.
10. Test oracle: Any means used to predict the outcome of a test.
11. Test log: A chronological record of all relevant details about the execution of a test.
12. Test report: A document describing the conduct and results of testing carried out for a system.
Origin of Software Defect
• Software reliability improves when faults which are present in the most frequently used
portions of the software are removed.
• A removal of X% of faults doesn’t necessarily mean an X% improvement in reliability.
• In a study by Mills et al. in 1987 removing 60% of faults resulted in a 3% improvement in
reliability.
• Removing faults with the most serious consequences is the primary objective.
Common Errors
• These are the common error categories: boundary-related, calculation/algorithmic, control flow, errors in handling/interpreting data, user interface, exception handling errors, version control errors
• We make mistakes: unclear requirements, wrong assumptions, design errors, implementation errors
• Some aspects of a system are hard to predict: for a large system no one understands the whole, some behaviours are hard to predict, sheer complexity
• Evidence (if any is needed!): the widely accepted failure of “n-version programming”
Debugging
Thank You
Chapter Two
Basics of Software Testing
By Haimanot D.
Introduction
❑ Software testing is the process of evaluating a software application to
identify any defects, errors, or gaps in the expected functionality.
❑ It ensures that the software meets specified requirements and performs
as intended.
❑ Testing can be performed manually or using automated tools to verify
the following points of software products
▪ Correctness,
▪ Completeness
▪ Reliability
Objectives of Software Testing
❑ Identify defects before deployment to avoid failures in production.
❑ Ensure software reliability, efficiency, and performance.
❑ Validate that software meets business and technical requirements.
❑ Improve the quality of the software by finding and fixing bugs early.
❑ Enhance security by identifying vulnerabilities that could be exploited.
❑ Reduce maintenance costs by detecting defects early in the
development process.
Cont..,
❑ Ensure customer satisfaction by delivering a high-quality product.
❑ Verify the software's compatibility with different environments and
platforms.
❑ Validate data integrity and security measures.
❑ Improve user experience by identifying usability issues.
Types of Software Testing
❑ Manual Testing:
▪ Testers manually execute test cases without using automation tools.
▪ It is useful for exploratory, usability, and ad-hoc testing.
❑ Automated Testing:
❑ Uses automation tools to execute test scripts and compare actual outcomes
with expected results, improving efficiency and accuracy.
❑ Functional Testing:
❑ Validates that the software performs according to the defined functional
requirements.
❑ Non-Functional Testing:
❑ Focuses on aspects such as performance, usability, compatibility, and security.
Conti..,
❑ Unit Testing:
▪ Tests individual components or modules to verify that each works correctly in
isolation.
❑ Integration Testing:
▪ Ensures that different modules work together correctly by testing their interactions.
❑ System Testing:
▪ Tests the entire system as a whole to verify that all integrated components function
correctly.
❑ Acceptance Testing:
▪ Confirms that the software meets business requirements and is ready for deployment.
Conti..,
❑ Regression Testing:
▪ Ensures that new changes or updates do not negatively impact existing
functionality.
❑ Performance Testing:
▪ Evaluates how the software performs under different loads and stress
conditions.
❑ Security Testing:
▪ Identifies vulnerabilities and ensures the application is protected against
attacks.
❑ Usability Testing:
▪ Assesses how user-friendly and intuitive the software is.
Conti..,
❑ Compatibility Testing:
▪ Ensures that software functions properly on different devices, browsers, and
operating systems.
❑ Load Testing:
▪ Assesses how well the system handles a specified volume of transactions.
❑ Stress Testing:
▪ Determines the system's ability to handle extreme conditions.
❑ Exploratory Testing:
▪ Performed without predefined test cases to uncover unexpected issues.
Principles of Software Testing
❑ Testing Shows Presence of Defects:
▪ Testing can demonstrate that defects exist, but it cannot prove the absence of
defects.
❑ Exhaustive Testing is Impossible:
▪ It is impossible to test all possible inputs and scenarios, so risk-based and
prioritized testing is necessary.
❑ Early Testing:
▪ Defects should be identified as early as possible in the development lifecycle
to reduce costs and effort.
❑ Defect Clustering:
▪ A small number of modules usually contain the majority of defects, following
the Pareto Principle (80/20 rule).
Conti..,
❑ Pesticide Paradox:
▪ Repeating the same test cases will eventually stop finding new defects, so test
cases should be updated regularly.
❑ Testing is Context Dependent:
▪ Different software applications require different testing approaches based on
their industry, risks, and usage.
❑ Absence of Errors is a Fallacy:
▪ Even if no defects are found, the software might not meet user expectations or
business needs.
❑ Quality is Subjective:
▪ What is considered "high quality" varies depending on user expectations and
industry standards.
Software Testing Process
❑ Requirement Analysis
▪ Understand the project requirements, both functional and non-functional.
▪ Identify testable features and clarify ambiguities with stakeholders.
▪ Define acceptance criteria for testing success.
▪ Gather information about system architecture and dependencies.
▪ Identify potential risks and create a risk mitigation plan.
Cont..,
❑ Test Planning
▪ Develop a test strategy, scope, and approach.
▪ Identify required resources, tools, and responsibilities.
▪ Define test schedules, test deliverables, and success criteria.
▪ Establish risk analysis and mitigation strategies.
▪ Allocate test cases to team members based on expertise.
▪ Plan for both manual and automated testing approaches.
Cont..,
❑ Test Case Development
▪ Design test cases based on business and technical requirements.
▪ Prepare test data and define expected results.
▪ Review and refine test cases to ensure coverage and effectiveness.
▪ Develop test scripts for automated testing if applicable.
▪ Implement boundary value analysis and equivalence partitioning.
▪ Create positive and negative test cases to ensure robustness.
Cont..,
❑ Test Environment Setup
▪ Configure the necessary hardware, software, and network settings.
▪ Ensure test data availability and database readiness.
▪ Verify that the test environment mirrors the production
environment as closely as possible.
▪ Install and configure automation tools, if required.
▪ Establish a continuous integration/continuous deployment (CI/CD)
pipeline for testing.
Cont..,
❑ Test Execution
▪ Execute test cases as per the test plan.
▪ Document actual results and compare them with expected
outcomes.
▪ Log defects in a tracking system and categorize them based on
severity and priority.
▪ Conduct retesting and regression testing when necessary.
▪ Perform exploratory testing to identify additional issues.
▪ Validate performance and security requirements through
specialized testing.
Cont..,
❑ Defect Reporting & Tracking
▪ Identify, log, and document defects with detailed information (e.g.,
steps to reproduce, screenshots, severity, priority).
▪ Assign defects to developers for resolution.
▪ Retest fixed defects and conduct regression testing to ensure no
new issues arise.
▪ Track defect lifecycle from discovery to closure.
▪ Generate defect trend reports and analyze recurring issues.
Cont..,
❑ Test Closure
▪ Evaluate test completion criteria (e.g., test case execution, defect
resolution, coverage achieved).
▪ Document test results, lessons learned, and best practices for future
projects.
▪ Conduct a retrospective meeting to discuss testing process
improvements.
▪ Provide a final test summary report to stakeholders.
▪ Archive test artifacts for future reference.
▪ Assess overall testing effectiveness and recommend improvements.
Advanced Testing Techniques
❑ White-Box Testing
▪ Examines the internal structure, design, and implementation of the
software.
▪ Used for unit testing and structural validation.
▪ Examples include statement coverage, branch coverage, and path
coverage.
▪ Key Features of White-Box Testing
▪ Requires knowledge of programming and internal system logic.
▪ Focuses on improving the efficiency and security of the code.
Conti..,
▪ Ensures that all statements, branches, and paths in the code are
tested.
▪ Detects hidden errors in logical structures.
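As a small illustration (my own example, assuming a trivial grading function), the two tests below exercise both outcomes of the single decision point, which is what branch coverage asks for; a tool such as coverage.py can confirm that every statement and branch was executed.

```python
def classify_grade(score: int) -> str:
    # One decision point -> two branches that white-box tests must cover.
    if score >= 50:
        return "pass"
    return "fail"

# Tests chosen from knowledge of the internal logic so that every branch runs.
def test_branch_true():
    assert classify_grade(75) == "pass"

def test_branch_false():
    assert classify_grade(30) == "fail"
```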
❑ Black-Box Testing
▪ Focuses on input and output without considering internal code
structure.
▪ Includes functional, usability, and security testing.
▪ Used for validating business logic and user experience.
▪ Key Characteristics of Black-Box Testing:
Conti..,
▪ Testers do not need programming knowledge.
▪ It is based on requirements and specifications.
▪ Test cases are designed to check expected outcomes based on
inputs.
▪ It is primarily used for functional, usability, and security testing.
Conti..,
❑ Grey-Box Testing
▪ A combination of white-box and black-box testing techniques.
▪ Useful when partial knowledge of the internal system is available.
▪ Often used for security and penetration testing.
❑ Model-Based Testing
▪ Uses models to represent the expected behavior of the system.
▪ Generates test cases from these models for automated validation.
Chapter 3
Functional (Black Box) Testing
By Haimanot D.
Introduction
❑ Functional testing is a type of software testing that primarily focuses
on testing the functionalities of an application to ensure that it behaves
as expected.
❑ The key characteristic of functional testing is that the tester does not
have to know the internal workings of the application.
❑ The testing is based on the requirements and specifications of the
software, and the main objective is to verify whether the system
behaves as intended for a given input.
Key Features of Functional Testing
❑ Black Box Testing:
▪ The tester is concerned only with the inputs and outputs.
▪ They do not need knowledge of the internal code structure.
❑ Focus on Functional Specifications:
▪ Functional testing checks the functions of the system, such as processing transactions,
performing calculations, and interacting with external systems.
❑ User Behavior Simulation:
▪ Test cases are designed to simulate how the end-user will interact with the system.
❑ Validation Against Requirements:
▪ The goal is to confirm whether the system is meeting its functional requirements.
❑ User-centered approach:
▪ It is designed to ensure that user expectations are met.
Types of Functional Testing:
❑ Unit Testing:
❑ Focuses on individual components or modules in isolation to
ensure they work correctly.
❑ Integration Testing:
❑ Verifies that multiple components or systems work together as
expected.
❑ System Testing:
❑ A complete test of the entire system to ensure everything works as
a whole.
Cont..,
❑ Sanity Testing:
❑ Conducted to verify if the basic functionalities of the application
are working after a change or update.
❑ Smoke Testing:
❑ A high-level test to ensure that the application is stable enough for
further testing.
❑ Regression Testing:
❑ Ensures that new changes or fixes do not break existing
functionality.
Importance of Functional Testing
❑ Functional testing is crucial for ensuring:
Reliability:
▪ The application functions correctly under expected conditions.
Compliance:
▪ The system meets industry standards and regulatory requirements.
User Satisfaction:
▪ The application delivers the expected functionality and user
experience.
Error-Free Performance:
▪ Identifies defects before deployment, reducing potential failures.
Conti..,
Security:
Ensures that the system is protected from unauthorized access
and vulnerabilities.
Interoperability:
Confirms that the application works well with other software
and systems.
Functional Testing Approaches
❑ Functional testing can be approached in different ways depending on
the scope and depth of the testing.
❑ The following are some common approaches:
❖ Black Box Testing
▪ Black Box Testing refers to testing an application without
knowledge of its internal structures or code.
▪ The focus is solely on what the system is supposed to do (inputs
and outputs).
▪ Testers will interact with the system’s user interface and verify
that it behaves as expected.
Conti..,
▪ Key Characteristics:
✔ No knowledge of internal code is needed.
✔ Focuses on functional requirements.
✔ Tests are based on specifications, use cases, and user stories.
▪ Advantages:
✔ Helps ensure that the system meets user expectations.
✔ It can be performed by individuals who are not involved in the
software development process.
✔ Provides a more user-centric view of testing.
Conti..,
▪ Disadvantages:
✔ Test coverage can be incomplete if the functional specifications
are vague or incomplete.
✔ Tests tend to focus only on "happy path" scenarios and might
miss edge cases or deeper system issues.
❑ Manual Functional Testing
▪ This approach involves human testers executing test cases
without using automation tools.
▪ It is beneficial for exploratory testing, usability testing, and
testing small applications.
Conti..,
❖ Boundary Value Analysis (BVA)
▪ Boundary Value Analysis is a technique used to test the
boundaries of input values.
▪ It ensures that values at the boundaries of valid input ranges are
handled correctly by the system.
▪ Test Case Design:
✔ For an input range of 1 to 10, the boundary values would be
1, 10, and values just outside these boundaries, such as 0 and
11.
Conti..,
▪ Key Points:
▪ Identifying boundaries and testing them ensures that
boundary-related defects are caught.
▪ This approach often uncovers edge-case issues that could
lead to unexpected behavior
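A minimal sketch of the 1-to-10 example, assuming a hypothetical accept validator: the parametrised pytest test exercises both boundaries and the values just outside them.

```python
import pytest

def accept(value: int) -> bool:
    """Hypothetical validator for the input range 1..10."""
    return 1 <= value <= 10

@pytest.mark.parametrize("value, expected", [
    (0, False),   # just below the lower boundary
    (1, True),    # lower boundary
    (10, True),   # upper boundary
    (11, False),  # just above the upper boundary
])
def test_boundary_values(value, expected):
    assert accept(value) is expected
```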
Conti..,
❖ Equivalence Class Partitioning (ECP)
▪Equivalence Class Partitioning divides input data into
different equivalence classes, where each class is treated as
equivalent for testing purposes.
▪Rather than testing every possible input value, testers focus
on representative values from each class.
▪If an input field accepts integers between 1 and 100, the input
can be divided into three equivalence classes:
▪ Valid inputs (1 to 100)
▪ Invalid inputs (less than 1 or greater than 100)
▪ Boundary values (1 and 100)
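Under the same assumption of a hypothetical validator, one representative value per class is enough; the sketch below picks one value from each class plus the two boundaries called out above.

```python
import pytest

def accept(value: int) -> bool:
    """Hypothetical validator for integers between 1 and 100."""
    return 1 <= value <= 100

@pytest.mark.parametrize("value, expected", [
    (50, True),    # representative of the valid class (1 to 100)
    (-7, False),   # representative of the invalid class (less than 1)
    (150, False),  # representative of the invalid class (greater than 100)
    (1, True),     # boundary value
    (100, True),   # boundary value
])
def test_equivalence_classes(value, expected):
    assert accept(value) is expected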
Conti..,
❖ Integration Testing
▪Integration testing ensures that multiple components or
systems interact correctly.
▪It can be performed using:
Top-down approach: Tests higher-level modules first.
Bottom-up approach: Tests lower-level modules first.
Big Bang approach: All modules are tested together.
Hybrid approach: A combination of top-down and bottom-up
testing.
Conti..,
❖ System Testing
▪System testing validates the complete application, ensuring
all modules work together as expected.
▪It includes:
End-to-end testing: Simulating real-world user scenarios.
Performance testing: Assessing speed, stability, and scalability.
Security testing: Ensuring data protection and access control.
Compatibility testing: Verifying software functionality across
different devices, operating systems, and browsers.
Conti..,
❖ Regression Testing
▪ Regression testing ensures that new updates do not break existing
functionality.
▪ Automated tools are commonly used to speed up this process.
▪ It is particularly important in agile and DevOps environments, where
software is frequently updated.
❖ Acceptance Testing
❖ Acceptance testing determines whether the application meets business
requirements and is ready for deployment.
❖ Alpha Testing: Conducted in a controlled environment by internal testers.
❖ Beta Testing: Conducted by real users in a production-like environment.
❖ User Acceptance Testing (UAT): Ensures the system meets end-user
requirements before final release.
Conti..,
❖ Smoke and Sanity Testing
▪ Smoke Testing
Purpose: Ensures that the critical functionalities of an application
are working before proceeding with more in-depth testing.
When It's Performed: After a new build is deployed to check if it's
stable enough for further testing.
Scope: Broad and shallow (covers major functionalities but doesn’t
go into details).
Example: Checking if the application launches successfully, login
works, and key features are accessible.
Conti..,
▪ Sanity Testing
Purpose: Validates that specific functionality is working correctly
after a minor change or bug fix.
When It's Performed: After a small change in the codebase, such as
a bug fix or minor enhancement.
Scope: Narrow and deep (focuses on a specific functionality
without checking the entire system).
Example: If a bug in the loan interest calculation was fixed, sanity
testing ensures that the fix works and hasn’t introduced new issues.
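One common way to keep the broad smoke pass separate from the narrower sanity checks is to tag tests with markers. The sketch below assumes pytest, a custom smoke marker (registered in pytest.ini to avoid warnings), and trivial stand-in helpers in place of a real application.

```python
import pytest

# Trivial stand-ins for the real application under test.
def app_is_running() -> bool:
    return True

def calculate_interest(principal: float, rate: float) -> float:
    return principal * rate

# Smoke test: broad and shallow -- just confirm the system is alive.
@pytest.mark.smoke
def test_smoke_application_is_up():
    assert app_is_running()

# Sanity test: narrow and deep -- re-check the specific area that was just fixed,
# e.g. the loan interest calculation mentioned above.
def test_sanity_interest_calculation_after_fix():
    assert calculate_interest(1000.0, 0.05) == pytest.approx(50.0)
```

Running pytest -m smoke then executes only the tagged smoke checks before the deeper suites are started.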
Tools for Functional Testing
▪ Functional Testing is conducted using various manual and automated
tools to ensure software behaves as expected.
▪ The choice of a testing tool depends on factors like project size,
application type, budget, and team expertise.
▪ Below is a detailed breakdown of the most commonly used Functional
Testing tools.
Manual Functional Testing Tools
✔ TestRail: test case management tool.
✔ JIRA with Xray or Zephyr: bug tracking and test management.
✔ qTest: bug tracking and test management.
Conti..,
Automated Functional Testing Tools
▪ Automation tools help speed up functional testing by executing pre-written
test scripts without manual intervention.
Selenium: web application testing.
Appium: mobile application testing (iOS and Android).
Katalon Studio: low-code automation for web, API, mobile, and desktop testing.
Postman: API testing.
Cypress: end-to-end testing for web applications.
TestComplete: UI testing for desktop, web, and mobile applications.
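Whatever tool is chosen, the core of an automated functional check is the same: drive an input through the system's public interface and compare the output with the specification. The sketch below uses Python's requests library against a hypothetical REST endpoint purely as an illustration; it is not tied to any specific tool listed above.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical system under test

def test_get_account_returns_expected_fields():
    # Exercise the system through its public API only (black-box view).
    response = requests.get(f"{BASE_URL}/accounts/42", timeout=10)

    # Compare the actual result with the expected outcome from the specification.
    assert response.status_code == 200
    body = response.json()
    assert body["id"] == 42
    assert "balance" in body
```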
Best Practices in Functional Testing
❑ Functional Testing plays a critical role in ensuring software reliability.
❑ However, to achieve effective and efficient testing, teams must follow
best practices that
❑ Enhance accuracy,
❑ Reduce testing time, and
❑ Improve overall software quality.
❑ Below are the best practices for conducting Functional Testing.
✔ Understand the Business Requirements Clearly
✔ Create Well-Defined Test Cases
✔ Automate Repetitive Functional Tests
✔ Perform Both Positive and Negative Testing
Conti..,
✔ Use Realistic Test Data
✔ Implement Regression Testing Regularly
✔ Ensure Cross-Browser and Cross-Device Testing
✔ Prioritize Exploratory Testing for Unscripted Scenarios
Challenges in Functional Testing
❑ Test Case Design Complexity
▪ Possible Solutions:
Collaborate with stakeholders to clarify business logic before writing test
cases.
Use Equivalence Partitioning & Boundary Value Analysis to minimize
redundant test cases.
Maintain a well-structured test case repository to avoid duplication.
❑ Frequent Changes in Requirements
▪ Possible Solutions:
▪ Maintain dynamic and modular test scripts to adapt to changes quickly.
▪ Use version control (Git, SVN) to track changes in test cases.
▪ Involve testers early in requirement discussions to anticipate changes.
Conti..,
❑ Test Data Management Issues
▪ Possible Solutions:
Use data-driven testing (DDT) to test multiple inputs dynamically.
Mask or anonymize sensitive production data before using it.
Utilize test data generation tools like Mockaroo, Faker.js, or Test Data
Manager.
❑ Lack of Skilled Testers
▪ Possible Solutions:
Provide regular training on automation, performance, and security testing.
Use low-code/no-code automation tools (Katalon, TestComplete) for beginners.
Encourage cross-training between developers and testers.
❑ Incomplete Test Coverage
Conti..,
▪ Possible Solutions:
Use traceability matrices to ensure each requirement has
corresponding test cases.
Perform exploratory testing to uncover unexpected issues.
Automate regression testing to cover older functionalities.
❑ Time Constraints and Deadlines
▪ Possible Solutions:
Automate frequent test cases to save time.
Prioritize critical functionalities for testing when time is limited.
Use parallel testing to execute multiple test cases simultaneously.
Chapter 5: Levels of Testing
Haimanot D.
System decomposition tree
Different Levels of testing
• A process where every unit or component of a software/system is tested
• Defined by a given environment
• Environment is a collection of attributes:
• People or responsible individual
• Strategy or the element to be tested
• Testing goal
Levels of testing
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
• Regression Testing
1. Unit Testing
• A unit is the smallest testable piece of the system
– can be compiled, linked, loaded, and executed
– e.g., functions/procedures, classes, interfaces
• Testing of individual components separately
• Normally done by the programmer
• Find unit bugs
– Wrong implementation of functional specs
• Better to use “Buddy Testing”
1. Unit Testing
Buddy Testing
• A collaborative approach to software testing where a developer and a tester pair up
• Team approach to coding and testing
• One programmer codes, the other tests, and vice versa
– Test cases are written by the tester (before coding starts); better than the single-worker approach
2. Integration testing
• Systems built by merging existing libraries
• Modules coded by different people
• Mainly tests the interfaces among units by checking
the data flow from one module to other modules
• Test for correct interaction between system units
• Performed by developers/testers on the
programmer’s workbench
2. Integration testing…
• Top down integration testing
• Units at the top level are tested first, and
then units at low levels are tested one by
one.
• Use of stubs : to allow testing of the upper levels
of the code, when the lower levels of the code are
not yet developed
• Bottom up integration testing
• The reverse of the top-down approach.
• Use of drivers: to allow testing of the lower levels
of the code, when the upper levels of the code are
not yet developed.
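A minimal sketch of the stub idea, with hypothetical module names (my own example): the upper-level ordering logic is integrated and tested before the real payment module exists, with a stub standing in for it. A driver works the same way in the opposite direction.

```python
# Lower-level module not yet developed: a stub stands in for it during
# top-down integration testing.
class PaymentGatewayStub:
    """Returns canned answers in place of the real payment module."""
    def charge(self, account_id: str, amount: float) -> bool:
        return True  # pretend every charge succeeds

def place_order(gateway, account_id: str, amount: float) -> str:
    # Upper-level unit under test: its interaction with the gateway interface
    # can be exercised even though the real gateway does not exist yet.
    if amount <= 0:
        return "rejected"
    return "confirmed" if gateway.charge(account_id, amount) else "failed"

def test_place_order_with_stubbed_gateway():
    assert place_order(PaymentGatewayStub(), "acct-1", 25.0) == "confirmed"
```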
3. System Testing
Types of System Testing
Read about:
4. Acceptance Testing
5. Regression Testing
• This testing is required when there is any:
• Change in requirements, where the code is modified as per the changed requirements
• Addition of new features to the product
• Bug fixing
• Fixing of performance-related issues
Factors influencing test scope
• Size of project
• Complexity of project
• Budget for project
• Time scope for project
• Number of staff
Why test at different levels
• Easily track bugs
• Ensures a working subsystem/ component/ library
• Software reuse more practical
• Software development naturally split to phases
The “V” model and test levels