CCS366 - Software Testing and Automation Notes

Software testing is the process of verifying and validating software for correctness and quality, aiming to find defects and ensure the product meets user requirements. It involves various testing levels, including unit, integration, and system testing, and employs techniques like black-box and white-box testing. Effective testing not only enhances product quality and security but also leads to cost savings and improved customer satisfaction.

UNIT-1

FOUNDATIONS OF SOFTWARE TESTING

Software testing:

 Software testing is defined as performing verification and validation of the software product for its correctness and accuracy of working.
 Software testing is the process of executing a program with the intent of finding errors.
 A successful test is one that uncovers an as-yet-undiscovered error.
 Testing can show the presence of bugs but never their absence.
 Testing is a support function that helps developers look good by finding their mistakes before anyone else does.

Role of testing / Objectives of testing:


1. Finding defects created by the programmer while developing the software.
2. Gaining confidence in, and providing information about, the level of quality.
3. Preventing defects.
4. Making sure that the end result meets the business and user requirements.
5. Ensuring that it satisfies the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).
6. Gaining the confidence of customers by providing them a quality product.

What is Software testing?


• Finding defects
• Trying to break the system
• Finding and reporting defects
• Demonstrating correct functionality
• Demonstrating incorrect functionality
• Demonstrating robustness, reliability, security, maintainability, …
• Measuring performance, reliability, …
• Evaluating and measuring quality
• Proving the software correct
• Executing pre-defined test cases
• Automatic error detection

Skills Required for Tester


• Communication skills
• Domain knowledge
• Desire to learn
• Technical skills
• Analytical skills
• Planning
• Integrity
• Curiosity
• Think from the user's perspective
• Be a good judge of your product
Bug, Fault & Failure
• A person makes an Error
• That creates a fault in software
• That can cause a failure in operation
• Error: An error is a human action that produces an incorrect result, which in turn results in a fault.
• Bug: The presence of an error at the time of execution of the software.
• Fault: The state of the software caused by an error.
• Failure: A deviation of the software from its expected result. It is an event.
• Defect: A defect is an error or bug in the application. A programmer can make mistakes or errors while designing and building the software. These mistakes mean that there are flaws in the software; these flaws are called defects.
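The chain from error to failure can be illustrated with a small, hypothetical Python sketch: the programmer's mistake (the error) leaves a wrong constant in the code (the fault), and the fault surfaces as a wrong result at run time (the failure).

```python
# A hypothetical example of the error -> fault -> failure chain.

def average(a, b):
    # Error: the programmer meant to divide by 2 but typed 3.
    # The wrong constant below is the FAULT (defect) in the code.
    return (a + b) / 3

# The fault causes a FAILURE when the program runs:
# the observed output deviates from the expected result.
result = average(4, 6)
expected = 5.0
print(result == expected)  # False -> the failure is observed
```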

Why do defects occur in software?


Software is written by human beings
□ Who know something, but not everything
□ Who have skills, but aren't perfect
□ Who don't usually use rigorous methods
□ Who do make mistakes (errors)
Under increasing pressure to deliver to strict deadlines
□ No time to check, assumptions may be wrong
□ Systems may be incomplete
Software is complex, abstract and invisible
□ Hard to understand
□ Hard to see if it is complete or working correctly
□ No one person can fully understand large systems
□ Numerous external interfaces and dependencies

Why do we test Software?

Software testing is the process of evaluating and verifying that a software product or application
does what it is supposed to do. The benefits of testing include preventing bugs, reducing
development costs and improving performance.

Software runs the world now. In almost any industry, software does much of the core work. For instance, in science and technology, software can be seen in space machines, aircraft, drones, etc. In this virtual world, any industry you can imagine has software products running its business behind the scenes.

Now we can understand the importance of testing our newly developed software products. Testing not only cuts costs in the initial stages but also helps the applications run efficiently to suit the business needs.

There are some other major benefits of testing a software product which helps businesses to use software
applications in a productive way.
 Security: Even an ordinary person doesn't want any risk occurring in their mobile device due to the apps they use. In the same way, big firms don't like to be exposed to the risks and hazards a software product may cause. Therefore, testing a product helps avoid these uncertainties and deliver a reliable product.

 Product quality: Of course, when we test a product, its quality is maintained. The quality of the product is what ensures a brand's growth and reputation in the IT market.

 Cost-effective: Testing a product in the initial stage cuts costs and also helps deliver a quality product on time.

 Customer Satisfaction: User experience is very important in this digitized world. Giving the
satisfaction of using a hassle-free product is the best result of testing.

Testing Levels

Testing levels are the stages a program goes through during the testing phase to ensure that it is error-free before moving into the next development stage.

 Unit testing: Unit testing is done by the programmers while coding to check whether an
individual unit of the program is error-free.

 Integration testing: As the name suggests, integration testing is done when individual units of the program are integrated. In other words, it focuses on the structure and design of the software.

 System testing: Here, the entire program is compiled as software and tested as a whole.
This tests all the features of a program including functionality, performance, security,
portability, etc.
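As a hedged illustration of the unit-testing level, here is a minimal Python sketch in the style of the built-in unittest framework (the function and test names are invented for the example):

```python
import unittest

# A hypothetical unit under test: one small, individually testable function.
def is_even(n):
    return n % 2 == 0

class TestIsEven(unittest.TestCase):
    # Unit testing checks an individual unit in isolation from the rest
    # of the program, typically while the programmer is still coding.
    def test_even_number(self):
        self.assertTrue(is_even(4))

    def test_odd_number(self):
        self.assertFalse(is_even(7))

if __name__ == "__main__":
    unittest.main(exit=False)
```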

Principles of software testing

There are some principles maintained while testing software. A tester cannot keep testing the product until it shows zero errors; that is not possible. Therefore, some principles are followed while testing programs.

 Exhaustive testing is not possible: No tester can repeat the testing process over and over until the program is error-free. It would be exhausting for the tester, and repeating the same test cases every time stops revealing new errors. If the testing process is instead based on an assessment of the risks the software can produce, testing professionals can concentrate on the important functions of a program.

 Defect Clustering: The defect clustering principle states that most of the defects are found in a small number of modules of the program, and only experienced professionals can deal with such risky modules.
Black-Box Testing
🞂 Black Box Testing, also known as Behavioral Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. It covers functional and non-functional testing and is also used for regression testing. These tests can be functional or non-functional, though usually functional.

This method attempts to find errors in the following categories:

• Incorrect or missing functions


• Interface errors
• Errors in data structures or external database access
• Behavior or performance errors
• Initialization and termination errors
• EXAMPLE: A tester, without knowledge of the internal structures of a website, tests the web pages by using a browser, providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.
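The same idea can be sketched in Python: the test knows only the specification (inputs and expected outputs), not the implementation. The function below is a hypothetical example, not code from the text.

```python
# Specification (all the tester sees): leap years are divisible by 4,
# except century years, which must be divisible by 400.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box tests derived purely from the specification:
assert is_leap_year(2024) is True    # divisible by 4
assert is_leap_year(2023) is False   # not divisible by 4
assert is_leap_year(1900) is False   # century not divisible by 400
assert is_leap_year(2000) is True    # century divisible by 400
```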

Advantages of black box testing

🞂 Tests are done from a user's point of view and will help in exposing discrepancies in the specifications.

🞂 The tester need not know programming languages or how the software has been implemented.

🞂 Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer bias.

🞂 Test cases can be designed as soon as the specifications are complete.

Disadvantages of black box testing

🞂 Only a small number of possible inputs can be tested, and many program paths will be left untested.

🞂 Without clear specifications, which is the situation in many projects, test cases will be difficult to design.

🞂 Tests can be redundant if the software designer/developer has already run a test case.

🞂 Ever wondered why a soothsayer closes the eyes when foretelling events? Such is almost the case in Black Box Testing.

Techniques for black box testing

1. Requirement Based Testing


2. Boundary Value Analysis
3. Equivalence Partitioning

1) Requirement based testing


 Requirements-based testing is a testing approach in which test cases, conditions and data are derived from requirements. It includes functional tests and also non-functional attributes such as performance, reliability or usability.

Stages in Requirements based Testing:

 Defining Test Completion Criteria - Testing is completed only when all the functional and non-functional testing is complete.
 Design Test Cases - A test case has five parameters, namely the initial state or precondition, data setup, the inputs, expected outcomes and actual outcomes.
 Execute Tests - Execute the test cases against the system under test and document the results.
 Verify Test Results - Verify if the expected and actual results match each other.
 Verify Test Coverage - Verify if the tests cover both functional and non-functional aspects of the requirement.
 Track and Manage Defects - Any defects detected during the testing process go through the defect life cycle and are tracked to resolution. Defect statistics are maintained, which give us the overall status of the project.

2) Boundary Value Analysis

 For the most part, errors are observed at the extreme ends of the input values, so these extreme values like start/end or lower/upper values are called boundary values, and analysis of these boundary values is called "boundary value analysis". It is also sometimes known as 'range checking'.

 Boundary value analysis is used to find the errors at the boundaries of the input domain rather than finding those errors in the center of the input.

 This is one of the software testing techniques in which the test cases are designed to include values at the boundary. If the input data is used within the boundary value limits, then it is said to be positive testing. If the input data is picked outside the boundary value limits, then it is said to be negative testing.


 Each boundary has a valid boundary value and an invalid boundary value. Test cases are designed based on both valid and invalid boundary values. Typically, we choose one test case from each boundary.

 Boundary value analysis is a black-box testing technique but also applies to white-box testing. Internal data structures like arrays, stacks and queues need to be checked for boundary or limit conditions; when linked lists are used as internal structures, the behavior of the list at the beginning and end has to be tested thoroughly.
 Boundary value analysis helps identify the test cases that are most likely to uncover defects.

🞂 For example: Suppose a very important tool at the office requires a valid User Name and Password to work, and accepts a minimum of 8 characters and a maximum of 12 characters. Valid range: 8-12. Invalid ranges: 7 characters or fewer, and 13 characters or more.

🞂 Test Cases 1: Consider password length less than 8.

🞂 Test Cases 2: Consider password of length exactly 8.

🞂 Test Cases 3: Consider password of length between 9 and 11.

🞂 Test Cases 4: Consider password of length exactly 12.

🞂 Test Cases 5: Consider password of length more than 12.
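The five boundary-value test cases above can be sketched as a small Python example; the validator below is a hypothetical stand-in for the tool's real password check.

```python
# Hypothetical password-length validator: valid range is 8-12 characters.
def is_valid_password(password):
    return 8 <= len(password) <= 12

# Boundary value analysis: test at and around the boundaries 8 and 12.
assert not is_valid_password("a" * 7)   # Test case 1: length < 8  -> invalid
assert is_valid_password("a" * 8)       # Test case 2: length == 8 -> valid
assert is_valid_password("a" * 10)      # Test case 3: length 9-11 -> valid
assert is_valid_password("a" * 12)      # Test case 4: length == 12 -> valid
assert not is_valid_password("a" * 13)  # Test case 5: length > 12 -> invalid
```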

Test cases for the application whose input box accepts numbers between 1-1000.
Valid range 1-1000, Invalid range 0 and Invalid range 1001 or more.
• Test Cases 1: Consider test data exactly as the input boundaries of
input domain i.e. values 1 and 1000.

• Test Cases 2: Consider test data with values just below the extreme
edges of input domains i.e. values 0 and 999.

• Test Cases 3: Consider test data with values just above the extreme
edges of input domain i.e. values 2 and 1001.

3) Equivalence Partitioning

 Equivalence partitioning is a software testing technique that involves identifying a small set of representative input values that produce as many different output conditions as possible.

 This reduces the number of permutations and combinations of input and output values used for testing, thereby increasing the coverage and reducing the effort involved in testing.

 The set of input values that generates one single expected output is called a partition.

 When the behavior of the software is the same for a set of values, then the set is termed an equivalence class or partition.
 Example: Consider an insurance company that has the following premium rates based on age group. The life insurance company has a base premium of $0.50 for all ages. Based on the age group, an additional monthly premium has to be paid, as listed in the table below. For example, a person aged 34 has to pay a premium = $0.50 + $1.65 = $2.15.

Age Group Additional Premium


Under 35 $1.65
35-59 $2.87
60+ $6.00

 Based on the equivalence partitioning technique, the equivalence partitions that are based on age are given below:

1. Below 35 years of age (valid input)
2. Between 35 and 59 years of age (valid input)
3. Above 60 years of age (valid input)
4. Negative age (invalid input)
5. Age as 0 (invalid input)
6. Age as any three-digit number (valid input)
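The premium table and partitions above can be sketched in Python. The function name, and the choice to reject ages of zero or below and to work in cents (avoiding floating-point comparison issues), are assumptions made for this example.

```python
# Hypothetical premium calculator; amounts are in cents so that the
# equality checks below are exact (no floating-point rounding).
def monthly_premium_cents(age):
    base = 50                   # $0.50 base premium for all ages
    if age <= 0:
        raise ValueError("invalid age")  # partitions 4 and 5 (invalid inputs)
    if age < 35:
        return base + 165       # partition: under 35 -> +$1.65
    if age <= 59:
        return base + 287       # partition: 35-59    -> +$2.87
    return base + 600           # partition: 60+      -> +$6.00

# One representative value per equivalence partition is enough.
assert monthly_premium_cents(34) == 215   # matches the worked example ($2.15)
assert monthly_premium_cents(40) == 337   # 35-59
assert monthly_premium_cents(65) == 650   # 60+
```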
WHITE BOX TESTING

White box testing is also known as glass box testing, structural testing, clear box testing, open box testing and transparent box testing. It tests the internal coding and infrastructure of a software product, focusing on checking predefined inputs against expected and desired outputs. It is based on the inner workings of an application and revolves around internal structure testing. In this type of testing, programming skills are required to design test cases. The primary goal of white box testing is to focus on the flow of inputs and outputs through the software and to strengthen the security of the software.

The term 'white box' is used because of the internal perspective of the system. The names clear box, white box and transparent box denote the ability to see through the software's outer shell into its inner workings.

Developers do white box testing. In this, the developer tests every line of the code of the program. The developers perform the white-box testing and then send the application or the software to the testing team, where the testers perform the black box testing, verify the application against the requirements, identify the bugs, and send it back to the developer.

The developer fixes the bugs, does one more round of white box testing, and sends it back to the testing team. Here, fixing the bugs implies that the bug is removed and the particular feature is working fine in the application.

Here, the test engineers are not involved in fixing the defects, for the following reasons:

o Fixing a bug might break other features. Therefore, the test engineer should always find the bugs, and developers should do the bug fixes.

o If the test engineers spend most of their time fixing defects, they may be unable to find the other bugs in the application.


Desk Checking:
 Checking done manually by the author
 Verifies the code for correctness
 Catches and corrects errors before compiling and executing
 No structural method/formalism
 No log/checklist

Code Walkthrough
 In a walkthrough, the author guides the review team through the document to build a common understanding and collect feedback.
 A walkthrough is not a formal process.
 In a walkthrough, the review team does not need to do a detailed study before the meeting, while the author is already well prepared.
 Walkthroughs are useful for higher-level documents, i.e. requirement specifications and architectural documents.

Goals of Walkthrough

 Make the document available to stakeholders both outside and inside the software discipline for collecting information about the topic under documentation.
 Describe and evaluate the content of the document.
 Study and discuss the validity of possible alternatives and proposed solutions.

Participants of Structured Walkthrough

🞂 Author - The author of the document under review.

🞂 Presenter - The presenter usually develops the agenda for the walkthrough and presents the output being reviewed.
🞂 Moderator - The moderator facilitates the walkthrough session, ensures the walkthrough agenda is followed, and encourages all the reviewers to participate.
🞂 Reviewers - The reviewers evaluate the document under test to determine if it is technically accurate.
🞂 Scribe - The scribe is the recorder of the structured walkthrough outcomes, who records the issues identified and any other technical comments, suggestions, and unresolved questions.

Benefits of Structured Walkthrough

🞂 Saves time and money, as defects are found and rectified very early in the lifecycle.
🞂 Provides value-added comments from reviewers with different technical backgrounds and experience.
🞂 Notifies the project management team about the progress of the development process.
🞂 Creates awareness about different development or maintenance methodologies, which can provide professional growth to participants.

Code Inspection
 A trained moderator guides the inspection. It is the most formal type of review.
 The reviewers are prepared and check the documents before the meeting.
 In an inspection, a separate preparation step is carried out in which the product is examined and defects are found. These defects are documented in an issue log.
 In an inspection, the moderator performs a formal follow-up by applying exit criteria.

Goals of Inspection
 Assist the author in improving the quality of the document under inspection.
 Efficiently and rapidly remove the defects.
 Generate documents with a higher level of quality, which helps to improve the product quality.
 Learn from the defects found previously and prevent the occurrence of similar defects.
 Generate common understanding by exchanging information.

Difference between Inspection and Walkthrough

Inspection | Walkthrough
Formal | Informal
Initiated by the project team | Initiated by the author
Planned meeting with fixed roles assigned to all the members involved | Unplanned
A reader reads the product code; everyone inspects it and comes up with defects | The author reads the product code and his teammates come up with defects or suggestions
A recorder records the defects | The author makes a note of defects and suggestions offered by teammates
The moderator makes sure that the discussions proceed along productive lines | Informal, so there is no moderator

Technical Review
 A technical review is a discussion meeting that focuses on the technical content of the document. It is a less formal review.
 It is guided by a trained moderator or a technical expert.

Goals of Technical Review


🞂 The goal is to evaluate the value of the technical concepts in the project environment.
🞂 Build consistency in the use and representation of technical concepts.
🞂 Ensure, at an early stage, that technical concepts are used correctly.
🞂 Notify the participants regarding the technical content of the document.

Code Functional Testing:

i. Code functional testing involves tracking a piece of data completely through the software.
ii. At the unit test level this would just be through an individual module or function.
iii. The same tracking could be done through several integrated modules or even through the entire software product, although it would be more time-consuming to do so.
iv. During data flow, a check is made that variables are properly declared and that loops are declared and used properly.

Code Coverage Testing:

i. The logical approach is to divide the code, just as you did in black-box testing, into its data and its states (or program flow).
ii. By looking at the software from the same perspective, you can more easily map the white-box information you gain to the black-box cases you have already written.
iii. Consider the data first. Data includes all the variables, constants, arrays, data
structures, keyboard and mouse input, files and screen input and output, and I/O
to other devices such as modems, networks, and so on.

Program Statements and Line Coverage (Code Complexity Testing)

i. The most straightforward form of code coverage is called statement coverage or line coverage.
ii. If you're monitoring statement coverage while you test your software, your goal is to make sure that you execute every statement in the program at least once.
iii. With line coverage, the tester tests the code line by line, checking for the relevant output.
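As a hedged illustration, consider the small Python function below. A single test call achieves 100% statement coverage here only because the function has no branching; in practice a tool such as coverage.py would measure this.

```python
# A branch-free function: every statement executes on any call,
# so a single test case achieves full statement (line) coverage.
def fahrenheit_to_celsius(f):
    offset = f - 32
    celsius = offset * 5 / 9
    return celsius

# One test executes every statement in the function at least once.
assert fahrenheit_to_celsius(212) == 100.0
assert fahrenheit_to_celsius(32) == 0.0
```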
Branch Coverage (Code Complexity Testing)
i. Attempting to cover all the paths in the software is called path testing.
ii. The simplest form of path testing is called branch coverage testing.
iii. The aim is to check all the possibilities of the boundary and sub-boundary conditions and the branching on those values.
iv. Test coverage criteria require enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once.
v. Every branch (decision) is taken each way, true and false.
vi. It helps in validating all the branches in the code, making sure that no branch leads to abnormal behavior of the application.

Condition Coverage (Code Complexity Testing)


i. Just when you thought you had it all figured out, there's yet another complication to path testing.
ii. Condition coverage testing takes the extra conditions on the branch statements into account.
Software Testing Life Cycle (STLC)
The Software Testing Life Cycle (STLC) is a systematic approach to testing a
software application to ensure that it meets the requirements and is free of defects. It is
a process that follows a series of steps or phases, and each phase has specific objectives
and deliverables. The STLC is used to ensure that the software is of high quality,
reliable, and meets the needs of the end-users.
The main goal of the STLC is to identify and document any defects or issues in the
software application as early as possible in the development process. This allows for
issues to be addressed and resolved before the software is released to the public.
The stages of the STLC include Test Planning, Test Analysis, Test Design, Test
Environment Setup, Test Execution, Test Closure, and Defect Retesting. Each of these
stages includes specific activities and deliverables that help to ensure that the software
is thoroughly tested and meets the requirements of the end users.
Overall, the STLC is an important process that helps to ensure the quality of software
applications and provides a systematic approach to testing. It allows organizations to
release high-quality software that meets the needs of their customers, ultimately leading
to customer satisfaction and business success.
Characteristics of STLC
 STLC is a fundamental part of the Software Development Life Cycle (SDLC) but
STLC consists of only the testing phases.
 STLC starts as soon as requirements are defined or software requirement document
is shared by stakeholders.
 STLC yields a step-by-step process to ensure quality software.
In the initial stages of STLC, while the software product or the application is being
developed, the testing team analyzes and defines the scope of testing, entry and exit
criteria, and also test cases. It helps to reduce the test cycle time and also enhances
product quality. As soon as the development phase is over, the testing team is ready
with test cases and starts the execution. This helps in finding bugs in the early phase.
Phases of STLC
1. Requirement Analysis: Requirement Analysis is the first step of the Software Testing Life Cycle (STLC). In this phase the quality assurance team understands the requirements, i.e. what is to be tested. If anything is missing or not understandable, the quality assurance team meets with the stakeholders to gain detailed knowledge of the requirements.
The activities that take place during the Requirement Analysis stage include:
 Reviewing the software requirements document (SRD) and other related documents
 Interviewing stakeholders to gather additional information
 Identifying any ambiguities or inconsistencies in the requirements
 Identifying any missing or incomplete requirements
 Identifying any potential risks or issues that may impact the testing process
 Creating a requirement traceability matrix (RTM) to map requirements to test cases
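The requirement traceability matrix can be represented minimally in Python; the requirement and test-case IDs below are invented for illustration.

```python
# A minimal requirement traceability matrix (RTM): a mapping from
# requirement IDs to the test cases that cover them (IDs are hypothetical).
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # not yet covered by any test case
}

# The RTM makes coverage gaps easy to spot:
uncovered = [req for req, tcs in rtm.items() if not tcs]
print(uncovered)  # ['REQ-003']
```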
At the end of this stage, the testing team should have a clear understanding of the
software requirements and should have identified any potential issues that may impact
the testing process. This will help to ensure that the testing process is focused on the
most important areas of the software and that the testing team is able to deliver high-
quality results.
2. Test Planning: Test Planning is the phase of the software testing life cycle where all testing plans are defined. In this phase, the manager of the testing team calculates the estimated effort and cost for the testing work. This phase starts once the requirement-gathering phase is completed.
The activities that take place during the Test Planning stage include:
 Identifying the testing objectives and scope
 Developing a test strategy: selecting the testing methods and techniques that will be
used
 Identifying the testing environment and resources needed
 Identifying the test cases that will be executed and the test data that will be used
 Estimating the time and cost required for testing
 Identifying the test deliverables and milestones
 Assigning roles and responsibilities to the testing team
 Reviewing and approving the test plan
At the end of this stage, the testing team should have a detailed plan for the testing
activities that will be performed, and a clear understanding of the testing objectives,
scope, and deliverables. This will help to ensure that the testing process is well-
organized and that the testing team is able to deliver high-quality results.
3. Test Case Development: The test case development phase starts once the test planning phase is completed. In this phase, the testing team writes down the detailed test cases. The testing team also prepares the required test data for the testing. When the test cases are prepared, they are reviewed by the quality assurance team.
The activities that take place during the Test Case Development stage include:
 Identifying the test cases that will be developed
 Writing test cases that are clear, concise, and easy to understand
 Creating test data and test scenarios that will be used in the test cases
 Identifying the expected results for each test case
 Reviewing and validating the test cases
 Updating the requirement traceability matrix (RTM) to map requirements to test
cases
At the end of this stage, the testing team should have a set of comprehensive and
accurate test cases that provide adequate coverage of the software or application. This
will help to ensure that the testing process is thorough and that any potential issues are
identified and addressed before the software is released.
4. Test Environment Setup: Test environment setup is a vital part of the STLC. Basically, the test environment decides the conditions under which the software is tested. This is an independent activity and can be started along with test case development. The testing team is not involved in this process; either the developer or the customer creates the testing environment.
5. Test Execution: After test case development and test environment setup, the test execution phase starts. In this phase, the testing team starts executing the test cases prepared in the earlier step.
The activities that take place during the test execution stage of the Software Testing Life
Cycle (STLC) include:
 Test data preparation: Test data is prepared and loaded into the system for test execution.
 Test environment setup: The necessary hardware, software, and network configurations are set up for test execution.
 Test execution: The test cases and scripts created in the test design stage are run against the software application, and the results are collected and analyzed to identify any defects or issues.
 Defect logging: Any defects or issues found during test execution are logged in a defect tracking system, along with details such as the severity, priority, and description of the issue.
 Test result analysis: The results of the test execution are analyzed to determine the software's performance and identify any defects or issues.
 Defect retesting: Any defects identified during test execution are retested to ensure that they have been fixed correctly.
 Test reporting: Test results are documented and reported to the relevant stakeholders.
It is important to note that test execution is an iterative process and may need to be repeated
multiple times until all identified defects are fixed and the software is deemed fit for release.
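A defect logged during test execution can be sketched as a small Python record. The fields follow the severity/priority/description details mentioned above; the class itself and its field values are invented for the example.

```python
from dataclasses import dataclass

# A minimal defect-tracking record, as logged during test execution.
@dataclass
class Defect:
    defect_id: str
    description: str
    severity: str        # e.g. "critical", "major", "minor"
    priority: str        # e.g. "high", "medium", "low"
    status: str = "open" # tracked until resolution

d = Defect("DEF-101", "Login fails for 12-character passwords",
           severity="major", priority="high")
d.status = "fixed"  # updated once the developer resolves it and it is retested
print(d.status)     # fixed
```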
6. Test Closure: Test closure is the final stage of the Software Testing Life Cycle (STLC)
where all testing-related activities are completed and documented. The main objective of the
test closure stage is to ensure that all testing-related activities have been completed and that the
software is ready for release.
At the end of the test closure stage, the testing team should have a clear understanding of the
software’s quality and reliability, and any defects or issues that were identified during testing
should have been resolved. The test closure stage also includes documenting the testing
process and any lessons learned so that they can be used to improve future testing processes. The main activities that take place during the test closure stage include:
 Test summary report: A report is created that summarizes the overall testing process,
including the number of test cases executed, the number of defects found, and the overall
pass/fail rate.
 Defect tracking: All defects that were identified during testing are tracked and managed
until they are resolved.
 Test environment clean-up: The test environment is cleaned up, and all test data and test
artifacts are archived.
 Test closure report: A report is created that documents all the testing-related activities
that took place, including the testing objectives, scope, schedule, and resources used.
 Knowledge transfer: Knowledge about the software and testing process is shared with the
rest of the team and any stakeholders who may need to maintain or support the software in
the future.
 Feedback and improvements: Feedback from the testing process is collected and used to
improve future testing processes.
It is important to note that test closure is not just about documenting the testing process, but
also about ensuring that all relevant information is shared and any lessons learned are captured
for future reference. The goal of test closure is to ensure that the software is ready for release
and that the testing process has been conducted in an organized and efficient manner.
SDLC - V-Model
The V-model is an SDLC model where execution of processes happens in a sequential manner in
a V-shape. It is also known as Verification and Validation model.
The V-Model is an extension of the waterfall model and is based on the association of a testing
phase for each corresponding development stage. This means that for every single phase in the
development cycle, there is a directly associated testing phase. This is a highly-disciplined model
and the next phase starts only after completion of the previous phase.
V-Model - Design
Under the V-Model, the corresponding testing phase of the development phase is planned in
parallel. So, there are Verification phases on one side of the ‘V’ and Validation phases on the
other side. The Coding Phase joins the two sides of the V-Model.
The following illustration depicts the different phases in a V-Model of the SDLC.
V-Model - Verification Phases
There are several Verification phases in the V-Model, each of these are explained in detail
below.
Business Requirement Analysis
This is the first phase in the development cycle where the product requirements are understood
from the customer’s perspective. This phase involves detailed communication with the customer
to understand their expectations and exact requirements. This is a very important activity that
needs to be managed well, as most customers are not sure about what exactly they need.
The acceptance test design planning is done at this stage as business requirements can be used
as an input for acceptance testing.
System Design
Once you have the clear and detailed product requirements, it is time to design the complete
system. The system design involves understanding and detailing the complete hardware and
communication setup for the product under development. The system test plan is developed
based on the system design. Doing this at an earlier stage leaves more time for the actual test
execution later.
Architectural Design
Architectural specifications are understood and designed in this phase. Usually more than one
technical approach is proposed and based on the technical and financial feasibility the final
decision is taken. The system design is broken down further into modules taking up different
functionality. This is also referred to as High Level Design (HLD).
The data transfer and communication between the internal modules and with the outside world
(other systems) is clearly understood and defined in this stage. With this information, integration
tests can be designed and documented during this stage.
Module Design
In this phase, the detailed internal design for all the system modules is specified, referred to
as Low Level Design (LLD). It is important that the design is compatible with the other modules
in the system architecture and the other external systems. Unit tests are an essential part of
any development process and help eliminate the maximum number of faults and errors at a very early
stage. These unit tests can be designed at this stage based on the internal module designs.
Coding Phase
The actual coding of the system modules designed in the design phase is taken up in the Coding
phase. The best suitable programming language is decided based on the system and architectural
requirements.
The coding is performed based on the coding guidelines and standards. The code goes through
numerous code reviews and is optimized for best performance before the final build is checked
into the repository.
Validation Phases
The different Validation Phases in a V-Model are explained in detail below.
Unit Testing
Unit tests designed in the module design phase are executed on the code during this validation
phase. Unit testing is the testing at code level and helps eliminate bugs at an early stage, though
all defects cannot be uncovered by unit testing.
Integration Testing
Integration testing is associated with the architectural design phase. Integration tests are
performed to test the coexistence and communication of the internal modules within the system.
System Testing
System testing is directly associated with the system design phase. System tests check the entire
system functionality and the communication of the system under development with external
systems. Most of the software and hardware compatibility issues can be uncovered during this
system test execution.
Acceptance Testing
Acceptance testing is associated with the business requirement analysis phase and involves
testing the product in user environment. Acceptance tests uncover the compatibility issues with
the other systems available in the user environment. It also discovers the non-functional issues
such as load and performance defects in the actual user environment.
V-Model ─ Application
V- Model application is almost the same as the waterfall model, as both the models are of
sequential type. Requirements have to be very clear before the project starts, because it is usually
expensive to go back and make changes. This model is used in the medical development field, as
it is strictly a disciplined domain.
The following pointers are some of the most suitable scenarios to use the V-Model application.
 Requirements are well defined, clearly documented and fixed.
 Product definition is stable.
 Technology is not dynamic and is well understood by the project team.
 There are no ambiguous or undefined requirements.
 The project is short.
V-Model - Pros and Cons
The advantage of the V-Model method is that it is very easy to understand and apply. The
simplicity of this model also makes it easier to manage. The disadvantage is that the model is not
flexible to changes and just in case there is a requirement change, which is very common in
today’s dynamic world, it becomes very expensive to make the change.
The advantages of the V-Model method are as follows −
 This is a highly-disciplined model and Phases are completed one at a time.
 Works well for smaller projects where requirements are very well understood.
 Simple and easy to understand and use.
 Easy to manage due to the rigidity of the model. Each phase has specific deliverables and
a review process.
The disadvantages of the V-Model method are as follows −
 High risk and uncertainty.
 Not a good model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Not suitable for the projects where requirements are at a moderate to high risk of
changing.
 Once an application is in the testing stage, it is difficult to go back and change a
functionality.
 No working software is produced until late during the life cycle.
Software Testing – Bug vs Defect vs Error vs Fault vs Failure

Software Testing defines a set of procedures and methods that check whether the actual
software product matches with expected requirements, thereby ensuring that the product is
Defect free. There are a set of procedures that needs to be in mind while testing the software
manually or by using automated procedures. The main purpose of software testing is to
identify errors, deficiencies, or missing requirements with respect to actual requirements.
Software Testing is Important because if there are any bugs or errors in the software, they can
be identified early and can be solved before the delivery of the software product. The article
focuses on discussing the difference between bug, defect, error, fault, and failure.
What is a Bug?
A bug refers to a defect, meaning that the software product or application is not working as per
the agreed requirements. When any type of logical error causes the code to break, the result is a
bug. It is at this point that the automation/manual test engineers describe the situation as a bug.
 A bug once detected can be reproduced with the help of standard bug-reporting templates.
 Major bugs are treated as prioritized and urgent especially when there is a risk of user
dissatisfaction.
 The most common type of bug is a crash.
 Typos are also bugs that seem tiny but are capable of creating disastrous results.
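The notes mention that a detected bug can be reproduced with the help of standard bug-reporting templates. As a minimal sketch of what such a template might capture, here is one possible structure; the field names and values are illustrative, not a mandated format:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Illustrative fields commonly found in bug-reporting templates."""
    bug_id: str
    title: str
    steps_to_reproduce: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    severity: str = "Major"   # e.g. Critical / Major / Minor / Trivial
    priority: str = "High"    # e.g. High / Medium / Low

# A hypothetical report for a crash (the most common type of bug)
report = BugReport(
    bug_id="BUG-101",
    title="App crashes on login with empty password",
    steps_to_reproduce=["Open login page", "Leave password empty", "Click Login"],
    expected_result="Validation message is shown",
    actual_result="Application crashes",
    severity="Critical",
)
print(report.bug_id, report.severity)
```

Capturing the exact steps to reproduce is what lets a developer replay the failure; severity and priority fields support the triage of urgent bugs described above.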
What is a Defect?
A defect refers to a situation when the application is not working as per the requirement and
the actual and expected result of the application or software are not in sync with each other.
 The defect is an issue in application coding that can affect the whole program.
 It represents the inefficiency and inability of the application to meet the criteria, and it
prevents the software from performing the desired work.
 The defect can arise when a developer makes major or minor mistakes during the
development phase.
What is an Error?
Error is a situation that happens when the Development team or the developer fails to
understand a requirement definition and hence that misunderstanding gets translated into
buggy code. This situation is referred to as an Error and is mainly a term coined by the
developers.
 Errors are generated due to wrong logic, syntax, or loop that can impact the end-user
experience.
 It is calculated by differentiating between the expected results and the actual results.
 It raises due to several reasons like design issues, coding issues, or system specification
issues and leads to issues in the application.
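As a minimal, hypothetical illustration of the point above that an error is observed by comparing the expected results against the actual results, consider a developer who misunderstood the requirement "compute the average" and wrote faulty logic:

```python
def average(values):
    # Logic error: the developer misunderstood the requirement and
    # divides by a hard-coded 2 instead of len(values).
    return sum(values) / 2

expected = 3.0                 # correct average of [2, 3, 4]
actual = average([2, 3, 4])    # faulty logic yields 4.5
print(expected, actual, expected == actual)
```

The mismatch between `expected` and `actual` is exactly how such an error surfaces during testing, even though the code compiles and runs without crashing.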
What is a Fault?
Sometimes, due to certain factors such as a lack of resources or not following proper steps, a
fault occurs in software, which means that the logic required to handle errors was not
incorporated into the application. This is an undesirable situation, but it mainly happens due to
invalid documented steps or a lack of data definitions.
 It is an unintended behavior by an application program.
 It causes a warning in the program.
 If a fault is left untreated it may lead to failure in the working of the deployed code.
 A minor fault may, in some cases, lead to a severe error.
 There are several ways to prevent faults like adopting programming techniques,
development methodologies, peer review, and code analysis.
What is a Failure?
Failure is the accumulation of several defects that ultimately lead to Software failure and
results in the loss of information in critical modules thereby making the system unresponsive.
Generally, such situations happen very rarely because before releasing a product all possible
scenarios and test cases for the code are simulated. Failure is detected by end-users once they
face a particular issue in the software.
 Failure can happen due to human errors or can also be caused intentionally in the system by
an individual.
 It is a term that comes after the production stage of the software.
 It can be identified in the application when the defective part is executed.
A simple diagram depicting Bug vs Defect vs Fault vs Failure:
Bug vs Defect vs Error vs Fault vs Failure
Some of the vital differences between bug, defect, error, fault, and failure are listed below,
basis by basis:

Definition
 Bug: A bug refers to a defect, meaning the software product or application is not working as
per the agreed requirements.
 Defect: A defect is a deviation between the actual and the expected output.
 Fault: A fault is a state that causes the software to fail and therefore not achieve its
necessary function.
 Error: An error is a mistake made in the code, due to which compilation or execution fails.
 Failure: Failure is the accumulation of several defects that ultimately leads to the loss of
information in critical modules, making the system unresponsive.

Raised by
 Bug: Raised by automation/manual test engineers.
 Defect: Identified by testers and resolved by developers during the development phase of the
SDLC.
 Fault: Human mistakes lead to faults.
 Error: Raised by developers and automation test engineers.
 Failure: Found by the test engineer during the development cycle of the SDLC.

Different types
 Bug: Logical bugs, algorithmic bugs, resource bugs.
 Defect: Classified by priority (high, medium, low) and by severity (critical, major, minor,
trivial).
 Fault: Business logic faults, functional and logical faults, graphical user interface (GUI)
faults, performance faults, security faults, hardware faults.
 Error: Syntactic errors, UI screen errors, error-handling errors, flow-control errors,
calculation errors, hardware errors.
 Failure: Not applicable.

Reasons behind
 Bug: Receiving and providing incorrect input, missing logic, erroneous logic, redundant code.
 Defect: Errors in code; coding/logical errors leading to the breakdown of the software.
 Fault: Wrong design of the data definition processes; an irregularity in logic or gaps in the
software leading to its non-functioning.
 Error: Inability to compile/execute a program, ambiguity in code logic, misunderstanding of
requirements, faulty design and architecture, logical errors.
 Failure: Environment variables, system errors, human error.

Ways to prevent
 Bug: Implementing test-driven development; adopting enhanced development practices and
evaluating the cleanliness of the code.
 Defect: Implementing out-of-the-box programming methods; proper usage of primary and correct
software coding practices.
 Fault: Peer review of the test documents and requirements; verifying the correctness of
software design and coding.
 Error: Conducting peer reviews and code reviews.
 Failure: Validation of bug fixes and enhancing the overall quality of the software.
Levels of Testing in Software Testing

There are mainly four Levels of Testing in software testing:

1. Unit Testing : checks if software components are fulfilling functionalities or not.


2. Integration Testing : checks the data flow from one module to other modules.
3. System Testing : evaluates both functional and non-functional needs for the testing.
4. Acceptance Testing : checks the requirements of a specification or contract are met as
per its delivery.

Unit Testing
Unit testing is the first level of testing performed on individual modules, components, or pieces
of code. In unit testing, the individual modules are tested as independent components to ensure
that they work correctly and are fit to be assembled/integrated with other components.

This testing is performed by developers. The developers usually write unit tests for the piece of
code written by them.

As stated before, it is the first level of testing. Once individual components are unit
tested, integration testing is carried out.

Unit testing is rarely performed manually. Unit tests are usually automated and, more
specifically, use the white-box testing technique, as knowledge of the piece of code and its
internal architecture is required to test it. The developers create unit tests by passing the
required inputs to the test script and asserting the actual output against the expected results.
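As a minimal illustration of this idea, the sketch below uses Python's standard `unittest` module; the function under test (`apply_discount`) is a hypothetical example, not code from these notes. The test passes known inputs and asserts the actual output against the expected result:

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: returns the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # Assert the actual output against the expected result
        self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        # A unit test also checks error handling within the module
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Because the test isolates a single function, a failure here points directly at that unit, which is what makes defects cheap to locate and fix at this level.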

Advantages of Unit Testing


 Defects within a module can be detected at earlier stages of development. Hence the cost
of fixing the defects greatly reduces.
 Unit testing improves testing efficiency and better resource utilization as testing of a
module can be started without having to wait for other modules to finish.

 Exhaustive testing focusing on individual functionality is possible in unit testing.

 Unit tests aid in faster development and debugging as the impact of new changes can be
easily detected by running the unit tests.

 A successful unit test report generates a sense of confidence about the quality of the code.
Modules successfully unit tested can be easily merged with other modules.

Limitations of Unit Testing


 Unit testing cannot detect integration or interfacing issues between two modules.

 It cannot catch complex errors in the system ranging from multiple modules.

 It cannot test non-functional attributes like usability, scalability, the overall performance
of the system, etc.

 Unit tests cannot guarantee functional correctness or conformance of application with its
business requirements.

Integration testing:

 Integration testing is the second level of testing in which we test a group of related
modules.
 It aims at finding interfacing issues b/w the modules i.e. if the individual units can be
integrated into a sub-system correctly.
 It is of four types – Big-bang, top-down, bottom-up, and Hybrid.
1. In big bang integration, all the modules are first required to be completed and then
integrated. After integration, testing is carried out on the integrated unit as a whole.

2. In top-down integration testing, the testing flow starts from top-level modules that
are higher in the hierarchy towards the lower-level modules. There is a possibility
that the lower-level modules have not yet been developed when testing begins with the
top-level modules.

So, in those cases, stubs are used which are nothing but dummy modules or functions
that simulate the functioning of a module by accepting the parameters received by the
module and giving an acceptable result.

3. Bottom-up integration testing is also based on an incremental approach but it starts


from lower-level modules, moving upwards to the higher-level modules. Again the
higher-level modules might not have been developed by the time lower modules are
tested. So, in those cases, drivers are used. These drivers simulate the functionality of
higher-level modules in order to test lower-level modules.

4. Hybrid integration testing is also called the Sandwich integration approach. This
approach is a combination of both top-down and bottom-up integration testing. Here,
the integration starts from the middle layer, and testing is carried out in both
directions, making use of both stubs and drivers, whenever necessary.
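The stubs and drivers described in the top-down and bottom-up approaches above can be sketched as follows. The module names and values are invented purely for illustration: the stub stands in for a lower-level module that is not ready yet, while the driver simulates a higher-level caller that has not been built:

```python
# --- Stub: dummy replacement for an undeveloped lower-level module ---
def get_exchange_rate_stub(currency):
    # Accepts the same parameter the real module would and returns a
    # fixed, acceptable result instead of performing a live lookup.
    return 1.10 if currency == "EUR" else 1.00

# --- Module under top-down integration test, wired to the stub ---
def convert(amount, currency, rate_provider=get_exchange_rate_stub):
    return round(amount * rate_provider(currency), 2)

# --- Driver: simulates a higher-level module calling the unit below ---
def billing_driver():
    # Exercises convert() the way the (not yet built) billing module would.
    return convert(100.0, "EUR")

print(billing_driver())  # 110.0
```

Once the real exchange-rate module and the real billing module exist, the stub and driver are discarded and the genuine interfaces are integrated and retested.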
System Testing
System testing is a type of software testing that evaluates a software product as a whole against
functional and non-functional requirements. It determines the overall performance and
functionality of a fully integrated software product.
The primary goal of this testing type is to check that all software components work together
without any flaws and function as intended while meeting all the specified requirements. It is
concerned with verifying the software product’s design, behavior, and compliance with customer
requirements.
A QA team carries out system testing after the integration testing and before acceptance testing.
They choose a testing environment that closely resembles the actual production environment.
Since the QA team tests the entire system without knowing its internal workings, it falls
under black-box testing.
Integrated modules that have passed integration testing serve as the input to system testing.
Integration testing uncovers defects or irregularities between the integrated units. However,
system testing discovers defects between integrated units and the whole system.
In a nutshell, this software testing type involves performing a series of tests to exercise the entire
software.

System Testing Example


Let us take a real-world example to understand this. Consider the car manufacturing process.
Initially, a car manufacturer produces all the essential components, such as brakes, an engine,
seats, steering, wheels, etc. After manufacturing these components, it’s time to test them
individually, which we call unit testing in software development.
Once the functionality of all these individual components is confirmed, the manufacturer
assembles them together. The next step is to check whether the assembled combination does not
result in any error or has no side effects on the functionality of each component. We refer to this
as integration testing.
After ensuring no defects between the assembled combination, the manufacturer checks this
combination as a whole, which is system testing. The car as a whole undergoes multiple checks
to verify it meets the specified requirements, like the car running smoothly, all other components
(brakes, gears, wheels, etc.) working correctly, etc.
When the car meets customers’ expectations, they are more likely to buy it.

Why do we Need System Testing in Software Testing?


 Even after successful unit and integration testing, many complex scenarios may have
undiscovered issues. System testing helps in uncovering those defects.
 It tests the software against functional and non-functional requirements. This happens for
the first time in the entire software development life cycle. Hence, it verifies the
software’s architecture or design and business requirements.
 The testing environment closely matches the production environment. Hence, successful
system testing brings a sense of confidence in the final delivered product. Also,
stakeholders can understand how end users react to the software.
 It minimizes post-deployment issues, troubleshooting, and support calls.

What to Verify in System Testing?


This testing type assesses the software product for the following:
 The interaction between software components, including external peripherals, to verify
how the software works as a whole. This is the scenario of end-to-end testing.
 Inputs given to the software produce the expected outcomes.
 Functional and non-functional requirements are met.
 End users’ experience with the software.
We have listed some of the most important parameters here. However, system testing involves
validating many other aspects. It requires detailed test cases and test suites that cover every
aspect of the software from the outside, without peeking into its internal details.

Entry and Exit Criteria


Each software testing level has entry and exit criteria. The following are the entry and exit
criteria for system testing:
Entry Criteria
 The software should pass all the exit criteria of integration testing, i.e., the execution of
all test cases should be finished, and there should be no critical or priority bug in an open
state.
 The test plan should be approved and signed off.
 Test cases, test scenarios, and test scripts should be ready for execution.
 The testing environment should be ready.
 Non-functional requirements should be at hand.
Exit Criteria
 All the intended test cases for system testing should be executed.
 No priority or critical bug should be in an open state.
 Even if medium- or low-priority bugs are open, the software can pass on to the next level of
testing – acceptance testing.
 The exit report should be ready.

System Testing Types


As stated earlier, this testing type evaluates the software for functional and non-functional
requirements. Hence, the software must undergo various testing techniques to assess the whole
system and its different aspects.

The different types of system testing are –


1. Functional testing: It assesses the software to check whether each functionality works as
intended and meets the specified requirements. If testers find some functionalities
missing, they list them for the development team for implementation. Also, they suggest
additional functionality to enhance the software.
2. Performance testing: It is a non-functional testing type that validates the system for
stability, scalability, responsiveness, and speed under the given load.
3. Usability testing: It is also a non-functional testing that checks the user-friendliness and
effectiveness of the software. Simply put, it determines how easily users can manage and
operate the software and access its features.
4. Reliability testing: This testing technique evaluates the software to check whether it
functions correctly and consistently under a specific condition for a given period.
5. Security testing: It discovers all security risks and vulnerabilities in the software and
ensures that it does not allow any unauthorized access to data and other resources.
6. Scalability testing: This type of load testing evaluates the software for its performance
when the number of users scales up and down.
7. Recoverability testing: This testing technique determines the software’s ability to
recover from failures and crashes.
8. Interoperability testing: It analyzes how well the software interacts with its components
and third-party products.
9. Regression testing: It ensures that any new changes in the software’s source code do not
affect the existing functionality. It assures the stability of the software as it integrates
subsystems and maintenance procedures.
System Testing Process
Here are the different steps of the system testing:
1. Setting Test Environment – The first step is to create a test environment that matches
the production environment for quality testing. A test environment includes selecting
programming languages, frameworks, and testing tools and establishing necessary
dependencies and configurations.
2. Creating Test Cases – The next step is to create test cases for an exhaustive testing
process. It also involves creating a test document containing the count of passed and
failed test cases.
3. Developing Test Data – This step involves collecting test data. It should include all
necessary information and fields. Identify favorable and unfavorable input/output
combinations.
4. Executing Test Cases – Use the created test cases and test data to execute them. This
helps you know whether the test cases are successful or unsuccessful.
 Detecting Bugs/Defects – If any bugs or errors are encountered, testers should report
them in the test document created in the second step.
6. Regression Testing – To fix the encountered errors, developers make changes to the
source code. So, testers perform regression testing to ensure that the most recent
modifications to the source code do not affect its existing functionality.
7. Retest – If errors are found during regression testing, the testing team reports them to the
development team. The testing cycle continues until the software is ready to go into the
production stage.
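Steps 4 and 5 above (executing test cases and detecting defects) can be sketched as a simple harness. The test-case structure, the system under test, and all data below are hypothetical, meant only to show the execute/compare/record loop:

```python
def execute_test_cases(test_cases, system_under_test):
    """Run each case, compare actual vs expected, log failures as defects."""
    test_document = {"passed": 0, "failed": 0, "defects": []}
    for case in test_cases:
        actual = system_under_test(*case["inputs"])
        if actual == case["expected"]:
            test_document["passed"] += 1
        else:
            test_document["failed"] += 1
            # Failed cases are recorded in the test document as defects
            test_document["defects"].append(
                {"case": case["id"], "expected": case["expected"], "actual": actual}
            )
    return test_document

def sut(a, b):
    # Hypothetical system under test: adds two numbers
    return a + b

cases = [
    {"id": "TC-1", "inputs": (2, 3), "expected": 5},
    {"id": "TC-2", "inputs": (2, 2), "expected": 5},  # will fail -> defect
]
print(execute_test_cases(cases, sut))
```

The resulting document feeds the pass/fail counts used in test reporting, and each recorded defect is what triggers the regression-and-retest cycle described in steps 6 and 7.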

Advantages and Disadvantages of System Testing


Here are some significant advantages and disadvantages of system testing:
Advantages
 System testing does not require testers to have programming knowledge.
 It validates the entire software product and uncovers errors and defects that unit and
integration testing cannot.
 The testing environment is similar to the actual production environment.
 The thorough testing of the software product ultimately results in high quality.
 It improves the overall performance, maintenance, and reliability of the system.
 As it uncovers all possible bugs and errors, the development and testing teams are
confident enough to release products to users.

Disadvantages
 System testing is time-consuming.
 It requires highly skilled testers.
 It is challenging for large and complex projects.
 Testers do not have visibility into the software’s source code.
 No testing uncovers 100% of the bugs. Even after system testing validates every aspect of
the software, bugs may still exist.

Acceptance testing:
Acceptance testing is the final level of software testing where the system is tested for compliance
to its business requirements. It is performed by the client or the end users with the intent to see if
the product is fit for delivery. It can be both formal as well as informal.

Formal acceptance testing is carried out by the client’s representatives and the informal or Adhoc
one is carried out by a subset of potential users who check functionality as well as features like
the usability of the product.
It is carried out after system testing and before the final delivery to the client.
Types of Acceptance Testing
Alpha Testing
 Alpha testing is the form of acceptance testing that takes place at the developer’s site.

 It can be carried out by in-house developers and QAs, as well as by potential end-users.

 Alpha testing is not open to the world.

 These tests can also be white box along with black-box tests.

Beta Testing
 Beta testing is the form of acceptance testing that takes place at the customer’s or the end
user’s site.

 It is performed after alpha testing and in the real-world environment without the presence
or control of developers.

 Beta tests or the beta version of the application are normally open to the whole world (or
client).

 These tests are only black-box.

Along with Alpha and beta testing, we can also classify acceptance testing into the following
different types-
User Acceptance Testing – In user acceptance testing, the developed application is assessed from
the end-users’ perspective, i.e., whether or not it works for the end-users as per the
requirements. It is performed by the intended end-users of the application rather than by
employees of the developer organization. It is also known as ‘End User Testing’ and follows a
black-box testing mode.
Business Acceptance Testing – Business acceptance testing assesses the developed application
from the perspective of business goals and processes. It is to make sure the system is ready for
the operational challenges and needs of the real world. It is a superset of user acceptance testing.
BAT is performed by an independent testing team. Every member of the team should have
precise knowledge of the client’s domain and business.
Contract Acceptance Testing – This type of testing involves checking the developed system
against pre-defined criteria or specifications in the contract. The contract would have been signed
by the client and the development party.
Regulations Acceptance Testing – Regulations Acceptance testing is also known as
Compliance acceptance testing. It checks whether the system complies with the rules and
regulations of the country where the software will be released. Usually, a product or application
that is being released internationally, will require such testing as different countries have
different rules and laws.
Operational Acceptance Testing – It is non-functional testing. It makes sure that the
application is ready operationally. Operational acceptance testing involves testing the backup or
restore facilities, user manuals, maintenance tasks, and security checks.

Importance of Acceptance Testing


Before acceptance testing, the application has been tested by the QA team i.e. Internal testing
team. QA team will test, and developers will develop the application based on the requirement
documents given to them.

They may have their own understanding of the requirements due to a lack of domain knowledge.
It is possible that their understanding is different from that of the business users. During acceptance
testing, business users have a chance to check if everything matches their expectations.
During acceptance testing, business users (clients) get to see the final product. Users can check
whether the system works according to the given requirements. UAT also ensures that the
requirements have been communicated and implemented effectively. Business users can gain
confidence in showing the application in the market i.e. to the end-users.
As acceptance testing will be done by users from the business side, they will have more ideas of
what end-users want. Thus, feedback/ suggestions given during acceptance testing can be
helpful in the next releases. The development team can avoid the same mistakes in future
releases.
Also, an application may have some major or critical issues; such issues should be identified during testing, not when the system is live. These issues can then be resolved before the code goes to the production environment, which reduces the effort and time of developers.
Conclusion:

 A level of software testing is a process where every unit or component of a software/system is tested.
 The primary goal of system testing is to evaluate the system’s compliance with the
specified needs.
 In Software Engineering, four main levels of testing are Unit Testing, Integration Testing,
System Testing and Acceptance Testing.
UNIT II

TEST PLANNING

The Goal of Test Planning, High Level Expectations, Intergroup Responsibilities, Test Phases, Test Strategy, Resource Requirements, Tester Assignments, Test Schedule, Test Cases, Bug Reporting, Metrics and Statistics.

The Goal of Test Planning

What is a Test Plan in Software Testing?


A Test Plan is a detailed document that describes the test strategy, objectives, schedule, estimation, deliverables, and resources required to perform testing for a software product. The test plan helps us determine the effort needed to validate the quality of the application under test. It serves as a blueprint for conducting software testing activities as a defined process, closely monitored and controlled by the test manager.

The goal of the test plan document is to give readers a thorough overview of the testing strategy for a project. It outlines the features and specifications to be evaluated within the project's scope, the entry and exit criteria for each phase, and the dependencies that go along with them.
Types of Test Plans in Software Testing
1. Master Test Plan
The master test plan is a document that goes into great depth on the planning
and management of testing at various test levels. It provides a bird's eye view of the important
choices made, the tactics used, and the testing effort put forth for the project. The master test
plan includes the list of tests that must be run. Test coverage, connections between various test
levels and associated code tasks, test execution strategies, etc.
2. Test Phase Plan
A test phase plan (also called a level test plan) describes in detail the testing to be carried out at each test level, or occasionally for each test type. It typically expands on the levels listed in the master test plan, providing the testing schedule, benchmarks, activities, templates, and other information not included in the master plan.
3. Specific Test Plans
Plans for conducting particular testing, such as performance and security tests. For instance,
performance testing is software testing that aims to ascertain how a system responds and
performs under a specific load. Security testing is software testing that aims to ascertain the
system's vulnerabilities and whether its data and resources are safe from potential intruders.

Importance of Test Plan


Making Test Plan document has multiple benefits

 Helps people outside the test team, such as developers, business managers, and customers, understand the details of testing.
 Test Plan guides our thinking. It is like a rule book, which needs to be followed.
 Important aspects like test estimation, test scope, Test Strategy are documented in Test Plan,
so it can be reviewed by Management Team and re-used for other projects.

Components of the Test Plan in Software Testing

1. Resource Allocation: This component specifies which tester will work on which test.
2. Training Needs: The staff and skill levels required to carry out test-related duties should be
specified by the test planner. Any specialized training needed to complete a task should also be
indicated.
3. Scheduling: A task networking tool should be used to determine and record task durations.
Establish, keep track of, and plan test milestones.
4. Tools: Specifies the tools used for testing, problem reporting, and other related tasks.
5. Risk Management: Describes the risks that could arise during software testing, as well as the problems the software itself might face if it is released without adequate testing.
6. Approach: The concerns to be addressed when testing the target program are thoroughly
covered in this portion of the test plan.

How to Write an Effective Test Plan [Step-by-Step]


The sections below should be covered for an effective test plan. Follow the eight steps below to create a test plan:

1. Analyze the product


2. Design the Test Strategy
3. Define the Test Objectives
4. Define Test Criteria
5. Resource Planning
6. Plan Test Environment
7. Schedule & Estimation
8. Determine Test Deliverables
Step 1) Analyze the product

How can you test a product without any information about it? You cannot. You must learn a product thoroughly before testing it.

The product under test is Guru99 banking website. You should research clients and the end users
to know their needs and expectations from the application

 Who will use the website?


 What is it used for?
 How will it work?
 What are software/ hardware the product uses?

You can use the following approach to analyze the site

Step 2) Develop Test Strategy

Test Strategy is a critical step in making a Test Plan in Software Testing. A Test Strategy document is a high-level document, usually developed by the Test Manager. This document defines:

 The project’s testing objectives and the means to achieve them


 Determines testing effort and costs

Back to your project, you need to develop Test Strategy for testing that banking website. You
should follow steps below
2.1) Define Scope of Testing
Before the start of any test activity, scope of the testing should be known. You must think hard
about it.
Defining the scope of your testing project is very important for all stakeholders. A precise scope
helps you
 Gives everyone confidence in, and accurate information about, the testing you are doing
 All project members will have a clear understanding of what is tested and what is not
How do you determine the scope of your project?
To determine scope, you must consider:
 Precise customer requirements
 Project budget
 Product specification
 Skills & talent of your test team
Step 2.2) Identify Testing Type
A Testing Type is a standard test procedure that gives an expected test outcome.
Each testing type is formulated to identify a specific type of product bug, but all testing types are aimed at achieving one common goal: "Early detection of all the defects before releasing the product to the customer".

Step 2.3) Document Risk & Issues

Risk is a future uncertain event with a probability of occurrence and a potential for loss. When a risk actually happens, it becomes an ‘issue’. Example risks for this project:

 Team members lack the required skills for website testing
 The project schedule is too tight; it is hard to complete this project on time
 The Test Manager has poor management skills
 A lack of cooperation negatively affects your employees’ productivity
 Wrong budget estimate and cost overruns

Step 2.4) Create Test Logistics

In Test Logistics, the Test Manager should answer the following questions:

 Who will test?


 When will the test occur?

Who will test?

You may not know the exact names of the testers who will test, but the type of tester can be defined.

To select the right member for a specified task, you have to consider whether their skills suit the task, and also the project budget. Selecting the wrong member for a task may cause the project to fail or be delayed.

A person with the following skills is most suited to performing software testing:

 Ability to understand the customer's point of view


 Strong desire for quality
 Attention to detail
 Good cooperation

When will the test occur?

Test activities must be matched with associated development activities.

You will start to test when you have all required items shown in following figure

Step 3) Define Test Objective


The test objective is the overall goal and achievement of the test execution. The objective of the testing is to find as many software defects as possible and to ensure that the software under test is as bug-free as possible before release.

To define the test objectives, you should do the following 2 steps:

1. List all the software features (functionality, performance, GUI…) which may need to be tested.
2. Define the target or goal of the test based on the above features.

Step 4) Define Test Criteria


Test criteria are standards or rules on which a test procedure or test judgment can be based. There are two types of test criteria:

Suspension Criteria

Specify the critical suspension criteria for a test. If the suspension criteria are met during testing,
the active test cycle will be suspended until the criteria are resolved.

Test Plan Example: If your team members report that 40% of test cases have failed, you should suspend testing until the development team fixes all the failed cases.

Exit Criteria

It specifies the criteria that denote a successful completion of a test phase. The exit criteria are
the targeted results of the test and are necessary before proceeding to the next phase of
development. Example: 95% of all critical test cases must pass.
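The suspension and exit thresholds above can be checked mechanically. A minimal sketch in Python, using the 40% suspension and 95% exit figures from the examples above (the function name and return strings are illustrative):

```python
def evaluate_cycle(total, failed, critical_total, critical_passed):
    """Apply the example suspension and exit criteria to a test cycle."""
    fail_rate = failed / total
    critical_pass_rate = critical_passed / critical_total
    if fail_rate >= 0.40:            # suspension criterion from the example above
        return "SUSPEND: wait for the development team to fix the failed cases"
    if critical_pass_rate >= 0.95:   # exit criterion from the example above
        return "EXIT: phase complete, proceed to the next phase"
    return "CONTINUE: keep executing the test cycle"

print(evaluate_cycle(total=200, failed=90, critical_total=50, critical_passed=40))
# 90/200 = 45% failed, which meets the suspension criterion -> SUSPEND
```

In practice such checks are wired into the test management tool's dashboards, but the decision rule itself is this simple.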
Step 5) Resource Planning
A resource plan is a detailed summary of all types of resources required to complete the project's tasks. Resources could be human, equipment, and materials needed to complete a project.

Resource planning is an important part of test planning because it helps determine the number of resources (employees, equipment…) to be used for the project. The Test Manager can therefore make a correct schedule and estimation for the project.

Step 6) Plan Test Environment


What is the Test Environment

A testing environment is a setup of software and hardware on which the testing team is going to execute test cases. The test environment consists of the real business and user environment, as well as physical environments such as the server and the front-end running environment.

Following figure describes the test environment of the banking website
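As an illustration, such an environment setup is often captured as a structured description that the team can review and version. A hypothetical Python sketch for a banking-website example (every component name and value below is made up for illustration):

```python
# Hypothetical test-environment description for a banking website.
# All components and values are illustrative, not a real configuration.
test_environment = {
    "server": {"os": "Linux", "web_server": "Apache", "database": "MySQL"},
    "front_end": {"browsers": ["Chrome", "Firefox", "Safari"]},
    "test_data": {"accounts": "anonymized sample accounts",
                  "refresh": "before each test cycle"},
}

# Print the setup so the team can verify each component before execution
for component, details in test_environment.items():
    print(component, "->", details)
```

Keeping the environment description in one reviewable place helps ensure test results are reproducible across cycles.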

Step 7) Schedule & Estimation


In test estimation, you use techniques to estimate the effort required to complete the project. That estimation, as well as the schedule, should now be included in the test plan.

In the Test Estimation phase, suppose you break out the whole project into small tasks and add
the estimation for each task as below
Task                           | Members                    | Estimated effort
Create the test specification  | Test Design                | 170 man-hours
Perform Test Execution         | Tester, Test Administrator | 80 man-hours
Test Report                    | Tester                     | 10 man-hours
Test Delivery                  |                            | 20 man-hours
Total                          |                            | 280 man-hours
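The total is simply the sum of the per-task estimates. A small sketch of such a roll-up (task names and figures taken from the table above):

```python
# Effort estimates per task, in man-hours (figures from the table above)
estimates = {
    "Create the test specification": 170,
    "Perform Test Execution": 80,
    "Test Report": 10,
    "Test Delivery": 20,
}

total = sum(estimates.values())
print(f"Total estimated effort: {total} man-hours")  # 170 + 80 + 10 + 20 = 280
```

Breaking the project into small tasks like this makes the total estimate easier to defend and easier to re-plan when one task's estimate changes.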
To create the project schedule, the Test Manager needs several types of input as below:

 Employee availability and project deadline: working days, the project deadline, and resource availability are factors that affect the schedule
 Project estimation: based on the estimation, the Test Manager knows how long it takes to complete the project, and so can make an appropriate project schedule
 Project risk: understanding the risks helps the Test Manager add enough extra time to the project schedule to deal with them

Step 8) Test Deliverables


Test deliverables are a list of all the documents, tools, and other components that have to be developed and maintained in support of the testing effort.

There are different test deliverables at every phase of the software development lifecycle.

Test deliverables provided before the testing phase:

 Test plan document
 Test case documents
 Test design specifications

Test deliverables provided during the testing phase:

 Test Scripts
 Simulators.
 Test Data
 Test Traceability Matrix
 Error logs and execution logs.

Test deliverables provided after the testing cycle is over:


 Test Results/reports
 Defect Report
 Installation/ Test procedures guidelines
 Release notes
High Level Expectations

Software Testing: Meeting The Customer Expectation


Organizations worldwide now invest billions in software quality assurance. Proficient testing not only meets the client's needs; it also delivers quality and reduces cost. Software testing therefore needs to achieve both.

Customer satisfaction thus becomes the key requirement.

Let's look at the most important qualities that will make your client happy.

 Agree on Plan, Objectives, and Timelines

Until you and your customer agree on the plan, objectives, and timelines, you are constantly at risk of them not understanding what success is and how it should be measured.

We generally propose creating a scope-of-work document that outlines the details, budget, and metrics of the software testing. This will ease any confusion over expectations and, ideally, prevent a difficult conversation.

 Availability
By availability, we don't mean a round-the-clock support system. It simply means clear and upfront communication about time off and alternative plans, and being reachable rather than going missing in action. Whether you are a sole individual or a team supporting the client, your availability ought to be checked frequently.

 First Be a Good Listener, then Counsel

Listening is among the most underrated and poorly used tools in managing customer expectations. Many clients are unsure of what they are trying to achieve, or are not good at explaining it. In that situation, you need sharp intuition and listening skills in order to pick out the key information being conveyed.

One standout way to compensate for a customer who presents poorly is to restate what you have understood and ask them to confirm the accuracy of the key takeaways, which will ultimately affect expectations. When you offer your client guidance, counsel, input, and business advice, you become a truly valuable partner. This style of open discussion establishes the trust necessary to ensure better project administration.

 Reviewing Client Demands

One thing all customer-driven organizations know is that customer expectations change. What may have been sufficient a year ago is not sufficient this year. To capture this information, they have to research and record these expectations as standard practice.

Thus, customer-driven organizations should review their customers' expectations consistently and frequently, and at the very least every year.

Internal research and reviews can be carried out to ensure that procedures are duly followed in the company. Testing procedures use strategies to convert customer expectations into required outputs.


 Quality as an Expense Versus Quality as a Profit

Earlier, obtaining quality was considered an expense: any investment in techniques, tools, and procedures to achieve higher quality was treated as a cost, and management was not persuaded to invest too much in quality. Today, quality is increasingly viewed as an investment that pays for itself through reduced rework and greater customer loyalty.

 Constant Improvement

It might surprise you, but in the past organizations aimed to build products that met specific benchmarks. An acceptable deviation range was defined for a product, which meant that a certain level of errors was permitted in the software. If the organization was already meeting the benchmarks, they saw no need to improve the process any further.

On the contrary, the contemporary approach to quality seeks constant improvement. It is customer-focused and takes action on the basis of feedback received from customers.

This feedback can include requests for new features, complaints, and praise. Consequently, today our software industry has also become customer-driven.

While creating and releasing a product, we do not just observe conformance to requirements; rather, we try to exceed expectations to satisfy clients' demands.

Constant improvement means you regularly check your practices and processes for any opportunity for improvement. This further involves working on removing the root causes of obstacles, to stop them from recurring in the future.

Why Meeting Customer Expectations are Vital?


1. To Maintain Reputation
Quality impacts your product's and your company's reputation. The speed and reach of social media mean that your clients, and prospective clients, can easily share both positive reviews and negative feedback about your software quality on product review sites, forums, and social networking channels. A strong reputation for quality can be a vital differentiator in highly competitive markets.

In the extreme, low quality or a product failure that leads to a product recall campaign can generate negative publicity and damage your reputation.

2. Long-Term Profitability
If customers receive defective products and are unsatisfied, you will have to pay for returns and possibly legal charges for failing to comply with client or industry standards. Having adequate quality controls in place is therefore important to reduce cost.


Intergroup Responsibilities
Software Testing is an interesting but challenging field. At one time, a career in software testing was considered the last option for joining the IT industry. However, automation and the technological revolution have changed the scenario completely. Software testing is now seen as a respectable job, and people see themselves growing in the field.

Typically, a career in the software testing field consists of 7 major roles, based on knowledge and experience:

 Junior Software Tester / QA Engineer


 Senior Software Tester / Senior QA Engineer
 Test Architect
 QA Lead / Test Lead
 QA Manager / Test Manager
 Quality Head
 Delivery Head

Junior Software Tester / QA Engineer

Skills required :

1. Communication
 Communication is the key factor to be successful in IT industry for any
role.
 As a software tester, you are expected to communicate with team
members, client and stakeholders of the project. For that, good
communication skills are important.
 Testers are supposed to prepare different artifacts like Test plan, Test
strategy, Test cases, Test data, Test results etc. and to effectively
prepare them, written communication skills should be good.
 As a tester, you are expected to send daily status report about what you
did, which bugs or defects you found and what is the plan of work next
day. For this, understanding of point-to-point communication, what to
include and what not, is very important.

2. Curiosity
 Being in software testing means asking lots of questions.
 Testers have to deal with bad or incomplete requirements. And when
requirements are not enough to clarify things, testers have to ask
meaningful questions.
 Testers should be curious about things like why, what, when and How.
More questions yield more information and that helps testers to perform
testing effectively.
3. Grasping abilities
 Testing almost always gets the least time. In less time, testers are
expected to perform effective testing. Understanding the requirements in
short time is therefore very important.
 To grasp the purpose of software, how it will be used, what all changes
have been applied etc. is necessary.
 Sharp grasping abilities make the task easier and efficient.

4. Team work
 The Tester is supposed to work as a team with developers and other
stakeholders.
 Right attitude with attention towards quality of product is very important
to have in any tester.
 Being a Junior Tester, it is expected that you execute assigned work on time, report to seniors, and support each other when facing deadlines.

5. Basic knowledge about software testing


 Some definitions and terms of software testing is important to know,
before entering into software testing field.
 Anyone who wants to work as a software tester should have knowledge
of software testing concepts and processes given below
 Bug life cycle
 Different types of testing, like regression testing, integration testing, functional testing, performance testing etc., and when each type of testing is performed
 Terms like test plans, test strategies, test estimates, and their importance

6. Basic knowledge about computer


 Knowledge of the basics mentioned below is helpful, because as a tester you are going to work with a computer every day.
 Any computer operating system (Windows or Linux)
 Different Browsers (IE, FF, Safari, Opera) etc.
 Any mobile operating system (Android/iOS/Windows)

7. User’s perspective
 As a tester, you need to understand following points about end user’s
perspective because it helps in defining more real-time test scenarios.
 Who is going to use the product
 What purpose the product will resolve
 How the customer might handle the product

Roles and responsibilities of a Junior Tester

Reports to : Mostly, Test Lead (depends on organization structure)

Role and Responsibilities


1. Requirement Analysis

As a tester, analysis of the requirements provided by the customer is the main starting point.

The tester is supposed to understand the requirements and the relevant domain of work, prepare a query list, and share it with the Test Lead.

2. Test Effort Estimation

During the test planning meeting, the tester is supposed to understand the details of the tasks to be performed.

He/she is also supposed to come up with the estimated effort required to complete the task efficiently.

3. Test cases documentation

Based on the tasks defined and modules assigned, the tester is expected to document test cases for them.

Depending on the organization and development method, test cases or test scenarios are prepared in a specific format.

4. Reporting and tracking Defect

Reporting bugs observed while executing testing tasks is important. Each organization uses different tools/templates to report and track defects.

The tester needs to understand how the specific tool works and is supposed to submit a detailed defect report.

The tester is also supposed to track the reported defect and, according to its criticality, make sure that the defect gets resolved.
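Whatever tool is used, a detailed defect report carries a common core of fields. A minimal sketch of such a record (the class and field names are illustrative; real trackers such as Jira or Bugzilla define their own schemas):

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    """Illustrative defect record; field names are hypothetical."""
    summary: str                # one-line description of the problem
    steps_to_reproduce: list    # exact steps the developer can replay
    expected_result: str        # behavior per the requirements
    actual_result: str          # behavior actually observed
    severity: str = "Medium"    # impact on the product
    status: str = "New"         # tracked through the bug life cycle

bug = DefectReport(
    summary="Login fails with valid credentials",
    steps_to_reproduce=["Open login page", "Enter valid user/password", "Click Login"],
    expected_result="User is logged in",
    actual_result="Error 500 page is shown",
    severity="Critical",
)
print(bug.status)  # newly reported defects start in the 'New' state
```

Capturing the expected versus actual result and exact reproduction steps is what makes a defect report actionable rather than a vague complaint.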

5. Listing improvement areas

A junior tester is a fresh pair of eyes on the product, and is therefore expected to come up with suggestions to improve the product in terms of usability.

6. Reporting to Test Lead / QA Lead

The tester is supposed to send a daily status report to the test lead, describing the testing activities performed and their status.
The daily status report is the junior tester's tool for communicating with the test lead about the work done.

Senior Software Tester / Senior QA Engineer


Senior software tester is the role that comes with

 Few years of experience (typically 4-5 years)


 Multiple achievements (multiple projects, strict timelines, identification of
critical bugs)
 Constant learning (different tools, processes, methods)

Roles and responsibilities of a Senior Software Tester

Reports to : Test Lead

Role and Responsibilities

1. Participation in Test planning, designing and estimation – After gaining


few years of experience, the tester is expected to participate in Test planning
meeting and contribute in same.
 Test planning meeting is the meeting where high level test scenarios,
challenges, risks, resources etc. are discussed and the Senior Tester can
provide his inputs.
 Test designing is the process where high level test scenarios are broken
into medium/minute level test cases. Which kind of test cases to write,
what to focus on, what risk factors to be considered etc. are the points
where Senior Tester is expected to help.
 Test estimation is a very important part of project planning. After years
of experience, the Senior Tester can easily gauge how much time a
particular task might take, considering all relevant factors.

2. Review of test artifacts – Junior testers document test cases and submit them to the Senior Tester for review.

Based on experience, he/she is supposed to check them for completeness and effectiveness.

3. Test automation – The Senior Tester is expected to be good at one test


automation tool.

He/She is expected to:

 identify test cases to be automated.


 Automate them
 Provide results on timely basis
 Train junior team members for the same
4. Collaboration with development team – With experience, the Senior Tester is expected to know the communication tactics needed to work with the development team.
 From reproducing a reported issue, to pushing for fixes to critical bugs, to understanding how a bug was fixed and knowing the change requests and their impacts, the tester has to work closely with developers.
 As a Senior Tester, one is supposed to be able to work with the development team comfortably, in any situation and under great pressure.

5. Reporting and tracking defects – With experience, what remains constant for
a tester is to identify defects in software, report them and track them till they are
satisfactorily fixed.
6. Training need identification – The Senior Tester knows the weaknesses of the junior testers in the team. He knows what is stopping testers from performing their tasks efficiently. He identifies training needs for the team and conveys them to the Test Lead.

The training need can be anything like,

 Communication training
 Process training
 Effective reporting training
 Tools training

Test Architect
Test Architect is a senior position that looks after solutions for problems faced while testing. The role requires deep technical knowledge and up-to-date knowledge of the latest tools and technologies. It does not require people/team management skills.

Test Architect is not a common role. It is only found in organizations that focus heavily on the use of automation and technology in testing.

Test Architects must generally fulfill the following criteria

 Significant years of experience (minimum of 8 years)


 Must be a technical specialist

Skills required

1. Experience in creating test automation framework, using latest and relevant


tools.
2. Sound knowledge of different technologies and approach to automation.
3. Deep understanding about which test cases should be selected for automation.
4. Strong technical leadership abilities
Roles and responsibilities of a Test Architect

Reports to : Test Manager / Quality head

Role and Responsibilities

1. Helps in defining automation approach for testing.


2. Supports test manager in fulfilling strategic goals by providing technical
support and help in terms automation.
3. Identifies effective technologies and tools aligned with what being already in
use.
4. Helps and designs test automation framework based on past experience and
demand of current project.
5. Works in identifying best suitable test cases for automation.
6. Helps in creating Test environment and test data.
7. Co-ordinates for testing via automation.
8. Monitors, enhances, improves automation activities as per demand of project.
9. Collaborates with team in mentoring and training team members for
automation.

QA Lead / Test Lead


Every company has different criteria for defining designations, but by default some positions come with team management responsibilities. Test Lead is one of those positions.
Test Lead is a role that comes with:

 Significant years of experience (typically 5-7 years)


 Team management capabilities

Skills required

1. Negotiation: The test lead is the communication bridge between management and individual team members. Conveying the right things at the right time, convincing team members about tasks and the relevant effort estimation, and communicating with the project manager about timelines all require negotiation skills.

Also, negotiation skills are mandatory for resolving interpersonal problems in the team and convincing team members to put in extra effort.

2. Collaboration: As a test lead, one has to deal with cross-functional teams on


daily basis. Collaboration skills are important to
 Understand specific business scenarios included by business analysis
team.
 Realize the technical limitations
 Understand why specific defects had been rejected or deferred
 Know how particular defect was fixed and its impacts
3. Technical abilities: The test lead needs to be technically well versed so that no one can mislead him/her when it comes to estimation, bug fixing, or execution of testing tasks. A technically strong test lead can also guide test team members in automation and in finding the root cause of a defect.
4. Communication and reporting: As a test lead, you are supposed to present a clear picture of the overall progress of the product, its quality, and the performance of the test team. To do this effectively, strong communication and reporting skills are required, along with the ability to push back on invalid matters.
5. Leadership: As a test lead, you are supposed to set an example. The lead
 should be ready to jump into execution, whenever required,
 should be available to help to any team member
 should be able to stand-by the team for valid issues
 should be able to deliver best quality
 should be able to help the team fulfill its commitments.

Roles and responsibilities of a Test Lead

Reports to : Test Manager / Project Manager

Role and Responsibilities

1. Defining Test strategy and test metrics :


 Understands the requirements and defines test artifacts required for the
project.
 Defines test strategy based on requirements, resources, information and
timeline.
 Works with the team to estimate testing tasks, scope of each task and
relevant details.
 Decides on matrices which should be maintained so that testing progress
can be effectively tracked.
 Helps team members by scheduling tasks and guiding them about how to
efficiently test.

2. Managing Test Team :


 Build a team of variety of skilled people, who can make a productive
team, overall.
 Assigns tasks to team members and makes sure that task pipeline for
each team member is full enough.
 Motivates team to apply right attitude and focus on quality.
 Identifies training needs for team members and conveys it to
management.
 Maintains good relationship with cross functional teams as testing team
has to work with all the other teams.
 Resolves disputes between team members.

3. Being point of contact :


 Communicates with client about changes in requirement and overall
quality concerns.
 Single point of contact for development team, to communicate relevant
queries/concerns.
 Conveys message from management to the team about process
improvement, performance improvement, expectations to meet etc.

4. Review and feedback :


 Reviews test artifacts prepared by testing team and approves them.
 Reviews defects logged by testing team and provide relevant feedback, if
any.
 Identifies weak/strong point of team members and accordingly suggests
improvement areas.
 Reviews automation framework prepared by senior testers and provides
feedback to make it effective.

QA Manager / Test Manager

QA Manager is a managerial position that looks after management aspects more than technical ones.

The role requires a proven record of successful management of teams and projects.

Skills required

1. Team management skills, including tactics for communicating with all team members, who carry different attitudes and skill sets.
2. Detailed understanding about project lifecycle and each phase of it.
3. Understanding and proven track record of experience working on different
aspects of testing like Requirement analysis, Test efforts and relevant
estimation, Test reports etc.
4. Preferred to have domain knowledge and understanding of application under
test.
5. Ability to plan and manage testing life cycle independently.

Roles and responsibilities

1. Looks after QA processes implementation and execution in organization.


2. Interviews, identifies and trains relevant skilled people to be part of QA team.
3. Defines the risks involved in specific projects and helps mitigate them through QA activities.
4. Reviews and ensures all the deliverables, including documentations are
complete and concise before delivering it to customer.
5. Manages multiple projects simultaneously and works with Test leads to make
sure every project is running smoothly.
6. Helps in setting up the environment by adding necessary tools and processes to
control the quality of the project.
7. Evaluates performance of QA team members on timely basis and guide them to
grow.
8. Reports stakeholders on regular interval about the progress of project and
decisions made/modified.

Quality Head

Quality Head is the highest position in the Quality department. The role is a
combination of technical and managerial skills, and is the result of years of
experience along with a proven track record of handling multiple teams and
projects/programs successfully.

Skills required

1. 15+ years of industry experience and a proven track record of successfully
managing the quality aspects of multiple products.
2. Expertise in implementing industry best practices for quality assurance.
3. Experience in delivering multiple projects, by managing time and resources
successfully.
4. Experience in working with different stakeholders in hierarchy, from a
developer to business partner and customer.
5. Excellent hands-on experience in manual and automation testing.
6. Solid communication skills.
7. Experience working in challenging environments, with a demonstrated result-
oriented attitude.
8. Knowledge of the best applicable techniques for testing and quality improvement.
9. Experience in establishing quality as a culture in organization.
10. Knowledge of best supportive tools to make testing more effective.
11. Experience in implementing best policies/processes to maintain quality
standards of the products / services / organization.
12. An attitude inclined toward defining, working with, and improving processes.

Roles and responsibilities of a Quality Head

1. Manages multiple QA projects simultaneously with a detailed level of
involvement.
2. Works with QA managers and QA lead to keep the team morale high by
resolving high level problems.
3. Heads the QA practices successfully by implementing highest level of planning
and co-ordination.
4. Establishes and accomplishes quality standards by understanding products and
expectations of stakeholders.
5. Imposes relevant policies and procedures to maintain quality standards
organization-wide.
6. Works with senior management to understand expectations from the QA
department. Based on those expectations, strategizes QA processes and the
delivery plan and implements them successfully.
7. Shows effective leadership in addition to planning and management abilities to
achieve defined goals.
8. Suggests changes in project/product, with relevant data, to stakeholders.
9. Measures and shares progress of project via different metrics and processes
implemented.
10. Looks after every associated factor of the project, such as cost, time, resources,
and risks, and also proposes a risk mitigation plan.
11. Carries expertise and experience in working with different software
development models.
12. Is highly skilled and able to guide the team, if needed, in the following areas of
testing: test design, test execution, defect management, data analysis, and reporting.
13. Possesses excellent level of communication skills, which can help in gaining
acceptance from all levels of team.
14. Establishes and maintains professional relationship with stakeholders,
customers and partners.
15. Plays a major role in achieving and maintaining CMMI certification for the
organization by maintaining quality standards.

Delivery Head

Delivery Head is the position that covers all aspects of software development life
cycle. Delivery Head should have experience in

1. Handling a large department / business unit


2. Management of multiple portfolios, programs and projects
3. Using project, program and portfolio management tools
4. Software development and delivery management

Roles and Responsibilities of Delivery Head

1. Building and maintaining a large business unit or department of technically
sound people
2. Identifying future leaders from the team and mentor them
3. Bringing strong technological leadership and raising quality standards
4. Planning for best utilization of resources available
5. Ensuring timely and flawless delivery
6. Working on scope and cost of delivery aspect of project
7. Balancing expectations of customer and team
8. Encouraging QA team members to work as gate keepers for the quality of the
product to be delivered.
9. Maintaining good relationship with existing and potential new customers.
10. Keeping himself/herself updated with latest technologies and tools and
deciding on how to adopt the change.
Test Phases
The six phases of testing are Requirement analysis, Test planning, Test case development,
Test environment setup, Test execution, and Test closure.

1. Requirement analysis

Requirement analysis involves identifying, analyzing, and documenting the requirements of a
software system.

 During requirement analysis, the software testing team works closely with the
stakeholders to gather information about the system’s functionality, performance, and
usability.
 The requirements document serves as a blueprint for the software development team,
guiding them in creating the software system.
 It also serves as a reference point for the testing team, helping them design and execute
effective test cases to ensure the software meets the requirements.

In summary, by conducting thorough requirement analysis, software testing teams can help
ensure the software system’s success and user satisfaction.
2. Test planning

During the test planning phase, the team develops a complete plan outlining each testing process
step, including identifying requirements, determining the target audience, selecting appropriate
testing tools and methods, defining roles and responsibilities, and defining timelines. This phase
aims to ensure that all necessary resources are in place and everyone on the team understands
their roles and responsibilities. A well-designed test plan minimizes risks by ensuring that
potential defects are identified early in the development cycle when they are easier to fix. Also,
adhering to the plan throughout the testing process fosters thoroughness and consistency in
testing efforts which can save time and cost down the line.
3. Test case development
During the test case development phase, the team designs test cases that thoroughly exercise the
software and cover all possible scenarios.
This phase involves multiple steps, including test design, test case creation, and test case review:

 Test design involves identifying the test scenarios and defining the steps to be followed
during testing.
 Test case creation involves writing test cases for each identified scenario, including input
data, expected output, and the steps to be followed.
 Test case review involves reviewing the test cases to ensure they are complete and cover
all possible scenarios.

This is also the phase where test automation can begin: candidate test cases for automation are
selected here, and if automation is already part of the STLC and the product is ready for it,
automating those test cases can be started as well.
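The bullet points above can be made concrete with a small automated test case. The sketch below uses Python's built-in unittest module; the validate_login function and its acceptance rules are hypothetical, invented purely to illustrate how a scenario, its input data, and its expected result map onto code.

```python
import unittest


def validate_login(username: str, password: str) -> bool:
    """Hypothetical function under test: a login is accepted only when
    the username is non-empty and the password has 8+ characters."""
    return bool(username) and len(password) >= 8


class TestLoginScenarios(unittest.TestCase):
    """Each method corresponds to one identified test scenario,
    mirroring a written test case's input data and expected result."""

    def test_valid_credentials_are_accepted(self):
        self.assertTrue(validate_login("alice", "s3cretpass"))

    def test_short_password_is_rejected(self):
        self.assertFalse(validate_login("alice", "short"))

    def test_empty_username_is_rejected(self):
        self.assertFalse(validate_login("", "s3cretpass"))
```

Saved as, say, test_login.py, such a suite would later be executed with `python -m unittest test_login.py` during the test execution phase.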

4. Test environment setup

Test environment setup in the software testing life cycle refers to creating an environment that
simulates the production system where the software application is deployed. Designing the test
environment correctly ensures efficient and effective testing activities.
The setup includes

 hardware,
 software,
 networks, and
 databases.

When setting up test environments, we consider network bandwidth, server capabilities, and
storage capacity. A properly set-up test environment aims to replicate real-world scenarios to
identify potential issues before deployment in production systems. Testers can perform
functional, performance, or load testing during this phase. Automating your Test environment
setup can make your work easier. You can set up automated tests to run on the configured setups
here.
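Parts of such a setup can be verified automatically before execution starts. The sketch below is a minimal pre-flight check using only the Python standard library; the 5 GB free-disk threshold is an assumed figure, not a requirement from the text.

```python
import platform
import shutil


def check_environment(min_free_gb: float = 5.0) -> dict:
    """Collect basic facts about the test machine and flag whether the
    assumed minimum free disk space is available."""
    total, used, free = shutil.disk_usage("/")
    free_gb = free / (1024 ** 3)
    return {
        "os": platform.system(),              # e.g. "Linux" or "Windows"
        "python": platform.python_version(),  # interpreter available to test tools
        "free_disk_gb": round(free_gb, 1),
        "disk_ok": free_gb >= min_free_gb,
    }


report = check_environment()
print(report)
```

A real environment check would extend this with network reachability, database connectivity, and tool-license checks of the kind listed above.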
5. Test execution

Test execution refers to the software testing life cycle phase where created test cases are
executed on the actual system being tested. At this stage, testers verify whether features,
functions, and requirements prescribed in earlier phases perform as expected. The test execution
also involves the execution of automated test cases.

6. Test closure

Test closure is integral to the STLC and includes completing all planned testing activities. It
includes

 reviewing and analyzing test results,


 reporting defects,
 identifying achieved or failed test objectives,
 assessing test coverage, and
 evaluating exit criteria.
Test Strategy

Test Strategy in software testing is defined as a set of guiding principles that determines the
test design and regulates how the software testing process will be done. The objective of the
test strategy is to provide a systematic approach to the software testing process in order to
ensure quality, traceability, reliability, and better planning.

 A test strategy is carried out by the project manager. It says what type of technique to
follow and which module to test.
 Test strategy narrates the general approaches.
 Test strategy cannot be changed.
 It is a long-term plan of action. You can abstract information that is not project specific
and put it into the test approach.
 In smaller projects, the test strategy is often found as a section of the test plan.
 It is set at the organization level and can be used by multiple projects.

On the basis of the development design papers, we may write the test strategy.
The following documents are included in the development design document :
 Documents pertaining to the system design: These documents will mostly be used to
build the test strategy.
 Design Documents: These are used to outline the software features that will be enabled in
a future version.
 Documents relating to conceptual design: These are the documents that we don’t utilize
very often.
Here, we will discuss the following points:
1. Components of Test Strategy.
2. Test Strategy vs Test Plan.
3. Types of Test Strategies.
4. Test Strategy Selection.
5. Details Included in Test Strategy Document.
6. Conclusion.

Components of a Test Strategy

The test effort, test domain, test setups, and test tools used to verify and validate a set of
functions are all outlined in a test strategy. It also includes schedules, resource allocations,
and employee utilization information. This data is essential for the test team to be as
structured and efficient as possible. A test strategy differs from a test plan, which is a
document that gathers and organizes test cases by functional areas and/or types of testing in a
format that can be presented to other teams and/or customers. Both are critical components of
the Quality Assurance process since they aid in communicating the breadth of the test method
and ensuring test coverage while increasing the testing effort's efficiency.
The following are the components of the test strategy:
1. Scope and Overview.
2. Testing Methodology.
3. Testing Environment Specifications.
4. Testing Tools.
5. Release Control.
6. Risk Analysis.
7. Review and Approvals.

Let’s discuss each of these in detail.

1. Scope and Overview: Scope and Overview is the first section of the test strategy document.
Any product's overview includes information about who should approve, review, and use the
document. The testing activities and phases that must be approved are also described in the
test strategy document.
 An overview of the project, as well as information on who should utilize this page.
 Include information such as who will evaluate and approve the document.
 Define the testing activities and phases that will be performed, as well as the timetables
that will be followed in relation to the overall project timelines stated in the test plan.
2. Testing Methodology: Testing methodology is the next module in the test strategy
document, and it is used to specify the degrees of testing, testing procedures, roles, and duties
of all team members. The change management process, which includes the modification
request submission, pattern to be utilized, and activity to manage the request, is also included
in the testing strategy. Above all, if the test plan document is not properly established, it may
result in future errors or blunders. This module is used to specify the following information-
 Define the testing process, testing level, roles, and duties of each team member.
 Describe why each test type is defined in the test plan (for example, unit, integration,
system, regression, installation/uninstallation, usability, load, performance, and security
testing) should be performed, as well as details such as when to begin, test owner,
responsibilities, testing approach, and details of automation strategy and tool (if
applicable).
3. Testing Environment Specifications: Testing Environment Specification is another section
of the test strategy document. The specification of the test data requirements, as we know, is
quite important. As a result, the testing environment specification in the test strategy document
includes clear instructions on how to produce test data. This module contains information on
the number of environments and the required setup. The strategies for backup and restoration
are equally important.
 The information about the number of environments and the needed configuration for each
environment should be included in the test environment setup.
 For example, the functional test team might have one test environment and the UAT team
might have another.
 Define the number of users supported in each environment, as well as each user’s access
roles and software and hardware requirements, such as the operating system, RAM, free
disc space, and the number of systems.
 It’s just as crucial to define the test data needs.
 Give specific instructions on how to generate test data (either generate data or use
production data by masking fields for privacy).
 Define a backup and restoration strategy for test data.
 Due to unhandled circumstances in the code, the test environment database may encounter
issues.
 The backup and restoration method should state who will take backups, when backups
should be taken, what should be included in backups, when the database should be restored,
who will restore it, and what data masking procedures should be implemented if the
database is restored.
4. Testing Tools: Testing tools are an important part of the test strategy document since it
contains all of the information on the test management and automation tools that are required
for test execution. The necessary approaches and tools for security, performance, and load
testing are dictated by the details of the open-source or commercial tool and the number of
users it can support.
 Define the tools for test management and automation that will be utilized to
execute the tests.
 Describe the test approach and tools needed for performance, load, and security testing.
 Mention whether the product is open-source or commercial, as well as the number of
individuals it can accommodate, and make suitable planning.
5. Release Control: Release Control is a crucial component of the test strategy document. It’s
used to make sure that test execution and release management strategies are established in a
systematic way. It specifies the following information-
 Different software versions in test and UAT environments can occur from unplanned
release cycles.
 All adjustments in that release will be tested using the release management strategy, which
includes a proper version history.
 Set up a build management process that answers questions such as where the new build
should be made available, where it should be deployed, when to receive the new build,
where to acquire the production build, and who will give the go signal for the production
release.
6. Risk Analysis: Risk Analysis is the next section of the test strategy document. All potential
risks associated with the project that could become an issue during test execution are
described here. Furthermore, a defined strategy is established for handling these risks in
order to ensure that they are managed appropriately, and a contingency plan is prepared in
case the team is confronted with these risks in real time. Make a list of all the potential
risks, and provide a detailed plan to manage them, as well as a backup plan in case the
risks materialize.
7. Review and Approval: Review and Approval is the last section of the Testing strategy
paper.
When all of the testing activities are stated in the test strategy document, it is evaluated by the
persons that are involved, such as:

 System Administration Team.


 Project Management Team.
 Development Team.
 Business Team.

Starting the document with the right date, approver name, comment, and summary of the
reviewed modifications should be followed.

It should also be evaluated and updated on a regular basis as the testing procedure improves.

Types of Test Strategies

The following are the different types of test strategies:

1. Analytical strategy: For instance, risk-based testing and requirements-based testing are
two types of testing. After examining the test premise, such as risks or requirements, the
testing team sets the testing circumstances to be covered. In the instance of requirements-
based testing, the requirements are examined to determine the test circumstances. Then
tests are created, implemented, and run to ensure that the requirements are met. Even the
findings are tracked in terms of requirements: which requirements were tested and passed,
which were tested but failed, which were not fully tested, and so on.
2. Model-based strategy: The testing team selects an actual or anticipated circumstance and
constructs a model for it, taking into account inputs, outputs, processes, and possible
behavior. Models are also created based on existing software, technology, data speeds,
infrastructure, and other factors. Let’s look at a case where you’re testing a mobile app.
Models to simulate outgoing and receiving traffic on a mobile network, the number of
active/inactive users, predicted growth, and other factors may be constructed to conduct
performance testing.
3. Methodical strategy: In this case, test teams adhere to a quality standard (such as
ISO25000), checklists, or just a set of test circumstances. Specific types of testing (such as
security) and application domains may have standard checklists. For example, while
performing maintenance testing, a checklist describing relevant functions, their properties,
and so on is sufficient.
4. Standards or process compliant strategy: This method is well-exemplified by medical
systems that adhere to US Food and Drug Administration (FDA) guidelines. The testers
follow the methods or recommendations established by the standards committee or a panel
of enterprise specialists to determine test conditions, identify test cases, and assemble the
testing team. In the case of an Agile program, testers will create a complete test strategy for
each user story, starting with establishing test criteria, developing test cases, conducting
tests, reporting status, and so on.
5. Reactive strategy: Only when the real program is released are tests devised and
implemented. As a result, testing is based on faults discovered in the real system. Consider
the following scenario: you’re conducting exploratory testing. Test charters are created
based on the features and functionalities that already exist. The outcomes of the testing by
testers are used to update these test charters. Agile development initiatives can also benefit
from exploratory testing.
6. Consultative strategy: In the same way that user-directed testing uses input from key
stakeholders to set the scope of test conditions, this testing technique does as well. Let’s
consider a scenario in which the browser compatibility of any web-based application is
being evaluated. In this section, the app’s owner would provide a list of browsers and their
versions in order of preference. They may also include a list of connection types, operating
systems, anti-malware software, and other requirements for the program to be tested
against. Depending on the priority of the items in the provided lists, the testers can use
various strategies such as pairwise or equivalence splitting.
7. Regression averse strategy: In this case, the testing procedures are aimed at lowering the
risk of regression for both functional and non-functional product aspects. Using the web
application as an example, if the program needs to be tested for regression issues, the
testing team can design test automation for both common and unusual use cases. They can
also employ GUI-based automation tools to conduct tests every time the application is
updated. Note that no single strategy outlined above has to be used exclusively for a
testing job; two or more strategies may be combined depending on the needs of the
product and the organization.

Test Strategy Selection

The following factors may influence test strategy selection:

 The test strategy chosen is determined by the nature and size of the organization.
 The test strategy can be chosen based on the project needs; for example, safety and security
applications necessitate a more rigorous, well-thought-out approach.
 The test strategy can be chosen based on the product development model.
 Whether it is a short-term or a long-term strategy.
Details Included in Test Strategy Document

The test strategy document includes the following important details:

 Overview and Scope.


 Software and testing work products that can be reused.
 Details about the various test levels, their relationships, and the technique for integrating
the various test levels.
 Techniques for testing the environment.
 Level of testing automation.
 Various testing tools.
 Risk Assessment.
 For each level of the test Conditions for both entry and exit.
 Reports on test results.
 Each test’s degree of independence.
 During testing, metrics and measurements will be analyzed.
 Regression and confirmation testing.
 Taking care of discovered flaws.
 Configuring and managing test tools and infrastructure.
 Members of the Test team’s roles and responsibilities.

Resource Requirements
Resources include human effort, equipment, and all infrastructure needed for accurate and
comprehensive testing. This part of test planning decides the project's required measure of
resources (number of testers and equipment).
Resource requirement is a detailed summary of all types of resources required to complete
project task. Resource could be human, equipment and materials needed to complete a project.
Resource requirements and planning are an important part of test planning because they help in
determining the number of resources (employees, equipment, etc.) to be used for the project.
With this, the Test Manager can make a correct schedule and estimation for the project.
Some of the following factors need to be considered:
 Machine configuration (RAM, processor, disk) needed to run the product under test
 Overheads required by test automation tools, if any
 Supporting tools such as compilers, test data generators, configuration management tools
 The different configurations of the supporting software (e.g., OS) that must be present
 Special requirements for running machine-intensive tests such as load tests and
performance tests
 Appropriate number of licenses of all the software
 Human Resource: The following list represents the various members of a project test team
and their tasks:

1. Test Manager: Manages the whole project, defines project directions, and acquires
appropriate resources.
2. Tester: Identifies and describes appropriate test techniques/tools/automation architecture;
verifies and assesses the test approach; executes the tests, logs results, and reports the
defects. Testers could be in-sourced or out-sourced members, based on the project budget;
for tasks requiring lower skill, outsourced members can save project cost.
3. Developer in Test: Implements the test cases, test programs, test suites, etc.
4. Test Administrator: Builds up and ensures the test environment and assets are managed and
maintained; supports testers in using the test environment for test execution.
5. SQA members: Take charge of quality assurance; check to confirm whether the testing
process is meeting specified requirements.

 System Resource: For testing a web application, plan the resources as follows:

1. Server: Hosts the web application under test; this includes a separate web server, database
server, and application server if applicable.
2. Test tool: Automates the testing, simulates user operations, and generates the test results;
many tools are available for this purpose, such as Selenium and QTP.
3. Network: A network, including LAN and Internet, to simulate the real business and user
environment.
4. Computer: The PCs that users typically use to connect to the web server.

Test schedule
A test schedule includes the testing steps or tasks, the target start and end dates, and
responsibilities. It should also describe how the test will be reviewed, tracked, and approved.

Test cases
“A test case is a set of input values, execution preconditions, expected results, and execution
postconditions, developed for a particular objective or test condition, such as to exercise a
particular program path or to verify compliance with a specific requirement.” It’s one of the key
instruments used by testers. The standard test case includes the following information:

 The test case ID


 Test case description
 Prerequisites
 Test steps
 Test data
 Expected result
 Actual result
 Status
 Created by
 Date of creation
 Executed by
 Date of execution
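The standard fields listed above can be captured in a simple record type. The sketch below models them as a Python dataclass; the field values are illustrative, not prescribed by any standard.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """One entry in a test case log, mirroring the standard fields."""
    case_id: str
    description: str
    prerequisites: str
    steps: list[str]
    test_data: str
    expected_result: str
    actual_result: str = ""
    status: str = "Not Run"   # e.g. Pass / Fail / Blocked / Not Run
    created_by: str = ""
    executed_by: str = ""


tc = TestCase(
    case_id="TC-001",
    description="Verify login with valid credentials",
    prerequisites="A user account exists",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    test_data="user=alice, password=s3cretpass",
    expected_result="User lands on the dashboard",
    created_by="QA tester",
)

# After execution, the tester records the outcome:
tc.actual_result = "User lands on the dashboard"
tc.status = "Pass"
```

Keeping test cases as structured records like this makes them easy to classify, track, and review at later stages.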

Use the following practices to write effective test cases:

Identify testable requirements. Identify the scope and purpose of testing before starting the test
process.

Customer requirement. The specialist who writes the test case must have a good understanding
of the features and user requirements. Each test case should be written keeping the client’s
requirements in mind.

Write on time. The best time to write test cases is the early requirement analysis and design
phases. That way QA specialists can understand whether all requirements are testable or not.

Simple and clear. Test cases should be simple and easy to understand. Every test case should
include only the necessary and relevant steps. No matter how many times and by whom it will be
used, a test case must have a single expected result rather than multiple expected results.

Unique test cases. Each test case must have a unique name. This will help classify, track, and
review test cases at later stages.

Test cases should be maintainable. If requirements change, a tester must be able to adjust a test
case.

Bug Reporting

Bug reporting is an integral part of software testing as it helps to identify and document any

issues that arise during the process. By using a Bug report, testers can track the progress of their

work and compare results over time. This allows them to change their test plans and strategies if

needed.

What is a Bug Report/Defect Report?


A bug report is a document that communicates information regarding a software or
hardware fault or malfunction. It typically includes details such as the steps necessary to
reproduce the issue, the expected behavior, and the observed behavior. The primary purpose of a
bug report is to provide an accurate description of the problem to the development team to
facilitate its resolution. Bug reports must be clear, concise, and correct to assist developers in
understanding and quickly resolving the issue. All bugs must be documented in a bug-reporting
system to identify, prioritize, and fix them promptly. Failure to do so may lead to the developer
not understanding or disregarding the issue, as well as management not recognizing the severity
of it and leaving it in production until customers make them aware.

Benefits of a Good Software Bug Report?


A good bug report should provide clear and detailed information about the issue, enabling the
development team to understand and reproduce it. It should include details such as an accurate
description of the problem, steps taken to reproduce it, expected results, actual results,
screenshots or video recordings, if applicable, device configuration, and other relevant data. Such
information allows for a more efficient resolution of the issue.

1. It can help you figure out precisely what’s wrong with a bug, so you can find the best
way to fix it.
2. Saves you time and money by helping you catch the bug before it worsens.
3. Stops bugs from making it into the final product and ruining someone’s experience.
4. Plus, it helps ensure the same bug doesn’t appear again in future versions.
5. Finally, everyone involved will know what’s happening with the bug so they can do
something about it.

How to Report a Bug?


Effectively reporting a bug is essential for the development team to resolve the issue promptly
and accurately. A well-constructed bug report should be concise, comprehensive, and
comprehensible. The following steps can be taken to submit a bug report:

1. Attempt to replicate the bug consistently and systematically.


2. Gather data on the environment, such as the browser type, operating system, and
applicable software versions.
3. Construct explicit instructions outlining how to reproduce the bug.
4. Include screenshots or videos that may assist in illustrating the issue to developers.
5. Articulate what outcome was anticipated and differentiate it from what occurred in
reality.
6. Outline the severity and priority of the bug: Describe how the bug impacts the software’s
functionality and determine its level of urgency.
7. Check for duplicates: Investigate the bug tracking system to ascertain if it has already
been reported.
8. Assign the bug to a relevant developer or team and follow up
9. Monitor progress on the bug to ensure it is being addressed and provide any extra
information that may be necessary.
How to Write a Bug Report?
A good bug report should enable the developer and management to comprehend the issue.
Guidelines to consider include:
1. All the relevant information must be provided with the bug report

Simple sentences should be used to describe the bug. Expert testers consider bug reporting
nothing less than a skill. We have compiled some tips that will help testers master it better:
2. Report reproducible bugs:

While reporting a bug, the tester must ensure that the bug is reproducible. The steps to reproduce
the bug must be mentioned. All the prerequisites for the execution of steps and any test data
details should be added to the bug.
3. Be concise and clear:

Try to summarize the issue in a few words, brief but comprehensive. Avoid writing lengthy
descriptions of the problem.
Describe the issue in pointers and avoid paragraphs. It’s essential to provide all the relevant
information, and it helps the developers to understand the issue without any additional to and fro
of the bug. The developer must clearly understand the underlying problem with the bug report.
4. Report bugs early:

It is important to report bugs as soon as you find them. Reporting the bug early will help the
team to fix the bug early and will help to deliver the product early.
5. Avoid Spelling mistakes and language errors:

Proofread all the sentences and check the issue description for spelling and grammatical errors. If required, a third-party tool such as Grammarly can be used. This helps the developer understand the bug without ambiguity or misrepresentation.
6. Documenting intermittent issues:

Sometimes all bugs are not reproducible. You must have observed that sometimes a mobile app
crashes, and you must restart the app to continue. These types of bugs are not reproducible every
time.
Testers must try to make a video of the bug in such scenarios and attach it to the bug report. A
video is often more helpful than a screenshot because it will include details of steps that are
difficult to document.
For example, a mobile app crashes while switching between applications or sending an app to
the background and bringing it to the front.
7. Avoid duplication of bugs:

While raising a bug, one must ensure that it does not duplicate an already-reported bug. Check the list of known and open issues before raising new bugs. Duplicate bug reports waste developer effort and disrupt the testing life cycle.
8. Create separate bugs for unrelated issues:

If multiple issues are reported in the same bug, it can’t be closed unless all the issues are
resolved. So, separate bugs should be created if issues are not related to each other.
For example, Let’s say a tester comes across two issues in an application in different modules.
One issue is in compose email functionality, where the user cannot compose an email, and
another issue is that the user cannot print an email. These issues must be raised separately as they
are independent of each other.
9. Don’t use an authoritative tone:

While documenting the bug, avoid using a commanding tone, harsh words, or making fun of the
developer.
The objective of a good bug report is to help the developer and the management to understand
the bug and its impact on the system. The more accurate and detailed the bug report is, the more
quickly and effectively the bug can be resolved.
A software bug follows a life cycle, and a status is assigned according to where the bug is in that cycle. For example, when a new bug is created, its status is Open. Later it moves through stages such as In Progress, Fixed, Won't Fix, Accepted, Reopened, and Verified. The exact stages vary between bug reporting tools.
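These stages can be modeled as a small state machine. The sketch below is illustrative only; the status names and allowed transitions are simplified assumptions, not the workflow of any particular tracker:

```python
from enum import Enum

class BugStatus(Enum):
    OPEN = "Open"
    IN_PROGRESS = "In Progress"
    FIXED = "Fixed"
    WONT_FIX = "Won't Fix"
    REOPENED = "Reopened"
    VERIFIED = "Verified"
    CLOSED = "Closed"

# Simplified transition table: which statuses a bug may move to next.
ALLOWED = {
    BugStatus.OPEN:        {BugStatus.IN_PROGRESS, BugStatus.WONT_FIX},
    BugStatus.IN_PROGRESS: {BugStatus.FIXED, BugStatus.WONT_FIX},
    BugStatus.FIXED:       {BugStatus.VERIFIED, BugStatus.REOPENED},
    BugStatus.VERIFIED:    {BugStatus.CLOSED, BugStatus.REOPENED},
    BugStatus.REOPENED:    {BugStatus.IN_PROGRESS},
    BugStatus.WONT_FIX:    {BugStatus.REOPENED},
    BugStatus.CLOSED:      set(),
}

def advance(current, new):
    """Move a bug to a new status, rejecting transitions the table forbids."""
    if new not in ALLOWED[current]:
        raise ValueError(f"Cannot move from {current.value} to {new.value}")
    return new
```

Rejecting illegal jumps (for example, Open straight to Closed) keeps the report history consistent with the life cycle described above.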

Testers must create comprehensive bug reports for practical bug analysis and resolution. Testers should
incorporate all pertinent information to ensure the highest quality of reports and communicate clearly
with developers and managers. Best practices for bug reporting should be shared to optimize report
accuracy. Ultimately, well-crafted bug reports foster positive collaboration between teams and reduce
costs related to fixing bugs.
Metrics and Statistics.
Software testing metrics are quantifiable indicators of the software testing process
progress, quality, productivity, and overall health. The purpose of software testing metrics is to
increase the efficiency and effectiveness of the software testing process while also assisting in
making better decisions for future testing by providing accurate data about the testing process.
Statistics can help map out outliers, identify levels of uncertainty in results, and deal fairly with measurement errors. No statistical test is perfect, and neither is any dataset; acknowledging these limitations from the start allows conclusions to be drawn openly.
A metric expresses the degree to which a system, system component, or process
possesses a certain attribute in numerical terms. A weekly mileage of an automobile compared
to its ideal mileage specified by the manufacturer is an excellent illustration of metrics. Here,
we discuss the following points:
1. Importance of Metrics in Software Testing.
2. Types of Software Testing Metrics.
3. Manual Test Metrics: What Are They and How Do They Work?
4. Other Important Metrics.
5. Test Metrics Life Cycle.
6. Formula for Test Metrics.

1.Importance of Metrics in Software Testing

Test metrics are essential in determining the software’s quality and performance. Developers
may use the right software testing metrics to improve their productivity.
 Test metrics help to determine what types of enhancements are required in order to create a
defect-free, high-quality software product.
 Make informed judgments about the testing phases that follow, such as project schedule
and cost estimates.
 Examine the current technology or procedure to see if it needs any further changes.

2.Types of Software Testing Metrics

Software testing metrics are divided into three categories:


1. Process Metrics: A project’s characteristics and execution are defined by process metrics.
These features are critical to the improvement and maintenance of the SDLC (Software
Development Life Cycle) process.
2. Product Metrics: A product’s size, design, performance, quality, and complexity are
defined by product metrics. Developers can improve the quality of their software
development by utilizing these features.
3. Project Metrics: Project Metrics are used to assess a project’s overall quality. It is used to
estimate a project’s resources and deliverables, as well as to determine costs, productivity,
and flaws.
3.Manual Test Metrics: What Are They and How Do They Work?

Manual testing is carried out in a step-by-step manner by quality assurance experts. Test
automation frameworks, tools, and software are used to execute tests in automated testing.
There are advantages and disadvantages to both manual and automated testing. Manual testing
is a time-consuming technique, but it allows testers to handle more complex
scenarios. There are two sorts of manual test metrics:
1. Base Metrics: Analysts collect data throughout the development and execution of test cases
to produce base metrics. These metrics are reported to test leads and project managers
through a project status report, and they feed into the calculated metrics. Examples include:
 The total number of test cases
 The total number of test cases completed
2. Calculated Metrics: Data from base metrics are used to create calculated metrics. The test
lead collects this information and transforms it into more useful information for tracking
project progress at the module, tester, and other levels. It’s an important aspect of the SDLC
since it allows developers to make critical software changes.

4.Other Important Metrics

The following are some of the other important software metrics:


 Defect metrics: Defect metrics help engineers understand the many aspects of software
quality, such as functionality, performance, installation stability, usability, compatibility,
and so on.
 Schedule Adherence: Schedule Adherence’s major purpose is to determine the time
difference between a schedule’s expected and actual execution times.
 Defect Severity: The severity of the problem allows the developer to see how the defect
will affect the software’s quality.
 Test case efficiency: Test case efficiency is a measure of how effective test cases are at
detecting problems.
 Defects finding rate: It is used to determine the pattern of flaws over a period of time.
 Defect Fixing Time: The amount of time it takes to remedy a problem is known as defect
fixing time.
 Test Coverage: It specifies how much of the program is exercised by the assigned test cases.
This metric helps ensure that the testing is thorough, and it aids in the verification of code flow
and the testing of functionality.
 Defect cause: It’s utilized to figure out what’s causing the problem.

5.Test Metrics Life Cycle

The test metrics life cycle consists of the following stages:

1. Analysis:
 The metrics must be recognized.
 Define the QA metrics that have been identified.
2. Communicate:
 Stakeholders and the testing team should be informed about the requirement for
metrics.
 Educate the testing team on the data points that must be collected in order to process
the metrics.
3. Evaluation:
 Data should be captured and verified.
 Using the data collected to calculate the value of the metrics
4. Report:
 Prepare the report with an effective conclusion.
 Distribute the report to the appropriate stakeholders and representatives.
 Gather feedback from stakeholder representatives.

6.Formula for Test Metrics

To get the percentage execution status of the test cases, the following formula can be used:
Percentage test cases executed = (No of test cases executed / Total no of test cases written) x
100
Similarly, it is possible to calculate for other parameters also such as test cases that were not
executed, test cases that were passed, test cases that were failed, test cases that were blocked,
and so on. Below are some of the formulas:

1. Test Case Effectiveness:


Test Case Effectiveness = (Number of defects detected / Number of test cases run) x 100
2. Passed Test Cases Percentage: This metric indicates the proportion of executed test cases
that passed.
Passed Test Cases Percentage = (Total number of tests passed / Total number of tests executed)
x 100
3. Failed Test Cases Percentage: This metric measures the proportion of all failed test cases.
Failed Test Cases Percentage = (Total number of failed test cases / Total number of tests
executed) x 100
4. Blocked Test Cases Percentage: During the software testing process, this parameter
determines the percentage of test cases that are blocked.
Blocked Test Cases Percentage = (Total number of blocked tests / Total number of tests
executed) x 100
5. Fixed Defects Percentage: Using this measure, the team may determine the percentage of
defects that have been fixed.
Fixed Defects Percentage = (Total number of flaws fixed / Number of defects reported) x
100
6. Rework Effort Ratio: This measure helps to determine the rework effort ratio.
Rework Effort Ratio = (Actual rework efforts spent in that phase/ Total actual efforts spent
in that phase) x 100
7. Accepted Defects Percentage: This measures the percentage of defects that are accepted
out of the total accepted defects.
Accepted Defects Percentage = (Defects Accepted as Valid by Dev Team / Total Defects
Reported) x 100
8. Defects Deferred Percentage: This measures the percentage of the defects that are deferred
for future release.
Defects Deferred Percentage = (Defects deferred for future releases / Total Defects
Reported) x 100
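Most of these formulas share the same (part / whole) x 100 shape, so a test lead can compute them in a few lines. A minimal sketch, with made-up cycle figures (all numbers below are illustrative):

```python
def pct(part, whole):
    """Generic (part / whole) x 100, rounded; guards against an empty run."""
    return round(part / whole * 100, 2) if whole else 0.0

# Hypothetical figures from one test cycle (illustrative only).
written, executed = 200, 180
passed, failed, blocked = 150, 20, 10
defects_detected, defects_reported, defects_fixed = 36, 40, 30

execution_pct = pct(executed, written)            # % of test cases executed
pass_pct      = pct(passed, executed)             # passed test cases %
fail_pct      = pct(failed, executed)             # failed test cases %
blocked_pct   = pct(blocked, executed)            # blocked test cases %
effectiveness = pct(defects_detected, executed)   # test case effectiveness
fixed_pct     = pct(defects_fixed, defects_reported)  # fixed defects %
```

With these figures, 180 of 200 written cases executed gives 90% execution, and 30 of 40 reported defects fixed gives 75%.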
Unit III TEST DESIGN AND EXECUTION

Test Objective Identification, Test Design Factors, Requirement Identification, Testable
Requirements, Modeling a Test Design Process, Modeling Test Results, Boundary Value Testing,
Equivalence Class Testing, Path Testing, Data Flow Testing, Test Design Preparedness Metrics,
Test Case Design Effectiveness, Model-Driven Test Design, Test Procedures, Test Case
Organization and Tracking, Bug Reporting, Bug Life Cycle.

Test Objective Identification


The main objectives of software testing

 To find any defects or bugs that may have been created when the software was being developed.
 To increase confidence in the quality of the software.

 To prevent defects in the final product.


 To ensure the end product meets customer requirements as well as the company specifications.

Software testing and the 7 testing principles

There are seven testing principles which are common in the software industry.

1. Optimal testing – it's not possible to test everything so it's important to determine the optimal
amount. The decision is made using a risk assessment. This assessment will uncover the area
that is most likely to fail and this is where testing should take place.
2. Pareto Principle – this principle states that approximately 80% of problems will be found in
20% of tests. However, there is a flaw in this principle in that repeating the same tests over and
over again will mean no new bugs will be found.
3. Review and Revise – repeating the same tests means the methods eventually become useless
for uncovering new defects. To prevent this, the tests must be reviewed and revised on a regular
basis. Adding new tests will help to find more defects.
4. Defects that are present – testing reduces the probability of there being a defect in the final
product but does not guarantee that no defect remains. And even if you manage to make a
product that's 99% bug-free, the testing won't have shown whether the software meets the needs
of clients.
5. Meeting customer needs – testing a product for the wrong requirements is foolhardy. Even if it
is bug free it may still fail to meet customer requirements.
6. Test early – it's imperative that testing starts as soon as possible in the development of a
product.
7. Test in context – test a product in accordance with how it will be used. Software is not identical:
it is developed to meet a certain need rather than a general one. Different techniques,
methodologies, approaches, and types of testing can be used depending on the application's
planned use.

Typical Objectives of Testing


Delivering quality products is the ultimate objective of testing. The various objectives of testing include:

 Identification of bugs and errors
 Quality product
 Justification with requirements
 Offers confidence
 Enhances growth

There are three major categories of testing:

Functional Testing:
The purpose of this testing method is to verify each function of an application. During functional
testing, the QA team verifies each module’s output by inserting various inputs.

Technically, Functional Testing is a kind of testing through which the testing team verifies the software
system against the specification document.

However, the testing method does not do anything with the source code as it only validates functioning.

Furthermore, functional testing is the backbone of the entire testing process. Also, if your software
generates an accurate output only then, users will like it.

You can perform functional testing either by following the manual or automation testing approaches.

Example: If you test whether a user able to login into a system or not, after registration, you are doing
functional testing

During functional testing, we ensure


 Accessibility of an application
 Main Functions
 Usability
 Conditions of errors

Non-Functional Testing:
As its name says, the testing method verifies the non-functional part of an application such as reliability,
response, speed, etc.

It is entirely the opposite of functional testing, which we have explained above. Issues that testers do not
address during functional testing are tested here.

The QA team examines the overall functioning of the software. They highlight the concerns that affect the
accomplishments and usability of the application.

Example: If you test an application by checking how many users can log in simultaneously, you are
doing non-functional testing.

One should always verify software from the perspective of functional and non-functional testing.

During Non-Functional testing, we ensure:

 Efficiency
 Portability
 Optimization
 Performance

Regression & Maintenance Testing:


The development of any software is a continuous process, which means that every now
and then there are updates.

Hence rather than testing the entire system again & again, we use regression testing. So, through this
testing, testers validate whether the newly written code will affect the existing feature or not.

Now you must be thinking, what is this Regression Testing? It is the collection of already executed test
cases. Hence it helps in getting the effect of any code change in the existing features.
Example: Suppose there is an application with the feature of “ADD DATA” and “EDIT DATA”. Now
the developer has introduced one more feature, “DELETE Data”, Under Regression testing, the tester
will ensure that the new feature must not affect the existing characteristics.
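The ADD/EDIT/DELETE example can be sketched as a tiny regression suite. The in-memory store and test names below are hypothetical; the point is that the existing ADD and EDIT tests are re-run after the new DELETE feature ships:

```python
# Hypothetical in-memory data store with the three features from the example.
store = []

def add_data(value):
    store.append(value)

def edit_data(index, value):
    store[index] = value

def delete_data(index):          # the newly introduced feature
    store.pop(index)

def test_add():                  # existing test, re-run for regression
    store.clear(); add_data("a")
    assert store == ["a"]

def test_edit():                 # existing test, re-run for regression
    store.clear(); add_data("a"); edit_data(0, "b")
    assert store == ["b"]

def test_delete():               # new test for the new feature
    store.clear(); add_data("a"); delete_data(0)
    assert store == []

# Regression run: the new feature's test plus all existing tests.
for test in (test_delete, test_add, test_edit):
    test()
```

If DELETE had accidentally broken ADD or EDIT, one of the re-run assertions would fail, which is exactly the effect regression testing exists to catch.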


Test Design Factors, Requirement Identification, Testable Requirements
Software testing is a vital activity performed during software development to release bug-free
software that meets customer requirements. Testability is the ease with which a system can be tested.
It enables easy assessment and determination of the overall effort required to perform the testing
activities on the software. This section focuses on the following topics of Software
Testability:
1. What is Software Testability?
2. Factors of Software Testability
3. How to Measure Software Testability?
4. Requirements of Software Testability
5. Types of Software Testability
6. Improving Software Testability
7. Benefits of Software Testability

What is Software Testability?

Software testability is measured with respect to the efficiency and effectiveness of testing. Efficient
software architecture is very important for software testability. Software testing is a time-consuming,
necessary activity in the software development lifecycle, and making this activity easier is one of the
important tasks for software companies as it helps to reduce costs and increase the probability of
finding bugs. There are certain metrics that could be used to measure testability in most of its aspects.
Sometimes, testability is used to mean how adequately a particular set of tests will cover the product.
 Testability helps to determine the effort required to execute test activities.
 The lower the testability, the larger the effort required for testing, and vice versa.

Factors of Software Testability

Below are some of the metrics to measure software testability:


1. Operability: “The better it works, the more efficiently it can be tested.”
 The system has few bugs (bugs add analysis and reporting overhead to the test process).
 No bugs block the execution of tests.
 The product evolves in functional stages (allows simultaneous development testing).
2. Observability: “What you see is what you test.”
 Distinct output is generated for each input.
 System states and variables are visible or queryable during execution.
 Past system states and variables are visible or queryable (for example, transaction logs).
 All factors affecting the output are visible.
 Incorrect output is easily identified.
 Internal errors are automatically detected through self-testing mechanisms.
 Internal errors are automatically reported.
 Source code is accessible.
3. Controllability: “The better we can control the software, the more the testing can be automated
and optimized.”
 All possible outputs can be generated through some combination of inputs.
 All code is executable through some combination of inputs.
 Software and hardware states and variables can be controlled directly by the test engineer.
 Input and output formats are consistent and structured.
4. Decomposability: “By controlling the scope of testing, we can more quickly isolate problems
and perform smarter retesting.”
 The software system is built from independent modules.
 Software modules can be tested independently.
5. Simplicity: “The less there is to test, the more quickly we can test it.”
 Functional simplicity (e.g., the feature set is the minimum necessary to meet requirements).
 Structural simplicity (e.g., architecture is modularized to limit the propagation of faults).
 Code simplicity (e.g., a coding standard is adopted for ease of inspection and maintenance).
6. Stability: “The fewer the changes, the fewer the disruptions to testing.”
 Changes to the software are infrequent.
 Changes to the software are controlled.
 Changes to the software do not invalidate existing tests.
 The software recovers well from failures.
7. Understandability: “The more information we have, the smarter we will test.”
 The design is well understood.
 Dependencies between internal, external, and shared components are well understood.
 Changes to the design are communicated.
 Technical documentation is instantly accessible, well organized, specific, detailed, and accurate.
8. Availability: “The more accessible the objects, the easier it is to design test cases.” This concerns the
accessibility of objects or entities needed for performing the testing, including bugs, source code, etc.
9. Testing Tools: Testing tools that are easy to use will reduce the staff size and less training will be
required.
10. Documentation: Specifications and requirements documents should be according to the client’s
needs and fully featured.

How to Measure Software Testability?

Software testability evaluates how easy it is to test the software and how likely software testing will
find the defects in the application. Software testability assessment can be accomplished through
software metrics assessment:
 Depth of Inheritance Tree.
 Fan Out (FOUT).
 Lack Of Cohesion Of Methods (LCOM).
 Lines Of Code per Class (LOCC).
 Response For Class (RFC).
 Weighted Methods per Class (WMC).
During software launch, it is crucial to determine which components may be more challenging to test.
Software testability assessment is crucial during the start of the testing phase as it affects the
efficiency of the planning process.

Requirements of Software Testability

The attributes suggested by Bach can be used by a software engineer to develop a software
configuration (i.e., programs, data, and documents) that is amenable to testing. Below are some of the
capabilities that are associated with software testability requirements:
 Module capabilities: Software is developed in modules and each module will be tested
separately. Test cases will be designed for each module and then the interaction between the
modules will be tested.
 Testing support capabilities: The entry points to test drivers and stubs must be preserved for
each test interface, since during increment-level testing the correctness of test stubs and
drivers should be given high priority and importance.
 Defects disclosure capabilities: The system should contain few errors so that they do not block
software testing. The requirements document should also satisfy the following parameters to be
testable:
 The requirement must be accurate, correct, concise, and complete.
 The requirement should be unambiguous i.e it should have one meaning for all staff
members.
 A requirement should not contradict other requirements.
 Priority-based ranking of requirements should be implemented.
 A requirement must be domain-based so that the changing requirements won’t be a
challenge to implement.
 Observation capabilities: Observing the software to monitor the inputs, their outcomes, and the
factors influencing them.

Types of Software Testability

Below are the different types of software testability:


1. Object-oriented programs testability: Testing object-oriented software is done at three
levels: unit, integration, and system testing. Unit testing is the most accessible level at which to improve
software testability, as testability examination can be applied earlier in the development life cycle.
2. Domain-based testability: Software products that are developed with the concept of domain-
driven development are easy to test and changes can be also done easily. The domain testable
software is modifiable to make it observable and controllable.
3. Testability based on modules: The module-based approach for software testability consists of
three stages:
 Normalize program: In this stage, the program needs to be normalized using some semantic and
systematic tools to make it more reasonable for testability measures.
 Recognize testable components: In this stage, the testable components are recognized based on
the demonstrated normalized data flow.
 Measure program testability: Program testability is measured based on the information stream
testing criteria.

Improving Software Testability

Below are some of the parameters that can be implemented in practice to improve software testability:
 Appropriate test environment: If the test environment corresponds to the production
environment then testing will be more accurate and easier.
 Adding tools for testers: Building special instruments for manual testing helps to make the
process easier and simpler.
 Consistent element naming: If the developers can ensure that they are naming the elements
correctly, consistently, logically, and uniquely then it makes testing more convenient. Although
this approach is difficult in larger projects with multiple developers and engineers.
 Improve observability: Improving observability provides unique outputs for unique inputs for
the Software Under Test.
 Adding assertions: Adding assertions to the units in the software code helps to make the software
more testable and find more defects.
 Manipulating coupling: Reducing coupling between components, relative to the domain,
increases the testability of the code.
 Internal logging: If the software accurately logs its internal state, manual testing can be
streamlined, and it becomes possible to check what is happening during any test.
 Consistent UI design: Consistent UI design also helps to improve software testability as the
testers can easily comprehend how the user interface principles work.

Benefits of software testability

 Minimizes testers’ efforts: Testability calculates and minimizes the testers’ efforts to perform
testing as improved software testability facilitates estimating the difficulty in finding the software
flaws.
 Determines the volume of automated testing: Software testability determines the volume of
automated testing based on the software product’s controllability.
 Early detection of bugs: Software testability helps in the early and effortless detection of bugs
and thus saves time, cost, and effort required in the software development process.

Modeling a Test Design Process, Modeling Test Results


What is Test Design? When to create Test Design?

Test design is a process that describes “how” testing should be done. It includes processes for
identifying test cases by enumerating the steps of the defined test conditions. The testing techniques
defined in the test strategy or plan are used for enumerating these steps.

The test cases may be linked to the test conditions and project objectives directly or indirectly
depending upon the methods used for test monitoring, control and traceability.
The objectives consist of test objectives, strategic objectives, and the stakeholders' definition of success.

When to create test design?

After the test conditions are defined and sufficient information is available to create the test cases of
high or low level, test design for a specified level can be created.

For lower level testing, test analysis and design are combined activity. For higher level testing, test
analysis is performed first, followed by test design.

There are some activities that routinely take place when the test is implemented. These activities may
also be incorporated into the design process when the tests are created in an iterative manner.

An example of such a case is creation of test data.

Test data will definitely be created during the test implementation. So it is better to incorporate it in the
test design itself.

This approach enables optimization of test condition scope by creating low or high level test cases
automatically.

Modeling a Test Design Process


1. One test case is created for each test objective.
2. Each test case is designed as a combination of modular components called test steps.
3. Test cases are clearly specified so that testers can quickly understand, borrow, and re-use the test cases.
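These three principles can be sketched as follows. This is a minimal, hypothetical illustration: each test step is a small reusable function, and a test case (one per objective) is just an ordered list of steps:

```python
# Modular test steps; the login flow and context dict are illustrative.
def open_login_page(ctx):
    ctx["page"] = "login"            # step 1: navigate to the login page

def enter_credentials(ctx):
    ctx["user"] = "demo"             # step 2: fill in the form

def submit_form(ctx):
    # step 3: submit; the landing page depends on whether credentials were set
    ctx["page"] = "home" if ctx.get("user") else "error"

def run_test_case(objective, steps):
    """One test case per test objective, built from modular steps."""
    ctx = {}
    for step in steps:
        step(ctx)
    return ctx

# Two test cases re-using the same steps in different combinations.
valid_login = run_test_case("TC-01 valid login",
                            [open_login_page, enter_credentials, submit_form])
missing_creds = run_test_case("TC-02 missing credentials",
                              [open_login_page, submit_form])
```

Because the steps are modular, a new test case (for instance, a logout flow) can borrow and re-order existing steps rather than duplicating them.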

Boundary Value Testing, Equivalence Class Testing, Path Testing, Data Flow Testing

Test design is done using several test design techniques. The following is a list of some of the top
design techniques,

 Equivalence Class Testing


 State Transition
 Exploratory Testing
 Boundary value analysis
 Pairwise test design
 Error guessing test design

Top Test Design Techniques


Let’s discuss the top techniques in detail.
1. Equivalence Class Testing

Equivalence class testing, also known as Equivalence Class Partitioning, is a test design technique that
lets you partition your test data into equivalent classes or sections.
It aims to reduce the number of test cases required to test a product by dividing the input domain into a
set of equivalence classes. You can use this whenever an input field has a range like age.
Example:
Consider a gaming website has a form that requires users to enter their age. And the form specifies that
the age has to be between 18 and 60. Now, using the Equivalence Class Partitioning technique, you can
divide the input range into three partitions, as follows,

 1 to 17 (invalid)
 18 to 60 (valid)
 >60 (invalid)

By partitioning the input range into equal partitions, you can create test cases that cover all possible
scenarios. This way, you can easily make sure that your form is working correctly without testing every
possible number between 18 and 60.
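This example can be sketched in code. The `classify_age` validator and the class labels below are hypothetical; the key idea is that one representative value per partition covers the whole input domain:

```python
def classify_age(age):
    """Classify an age into the three equivalence classes described above."""
    if 18 <= age <= 60:
        return "valid"       # partition 2: accepted range
    if 1 <= age <= 17:
        return "invalid"     # partition 1: below the minimum
    return "invalid"         # partition 3: above the maximum

# One representative value per partition instead of every possible age.
expected = {10: "invalid", 35: "valid", 75: "invalid"}
results = {age: classify_age(age) for age in expected}
```

Three test values here stand in for the entire 1-to-120-ish input range; boundary value analysis would additionally pick the edge values 17, 18, 60, and 61.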

2. State Transition Testing

It is a type of black box testing that is performed to check the change in the application’s state under
various inputs. This testing is used where different system transitions have to be tested.

A state transition diagram depicts how a system’s state changes on specific inputs. The four main
components of a state transition diagram are
as follows:

1. States
2. Transition
3. Events
4. Actions

State transition testing helps understand the system’s behavior and covers all the conditions. Let’s try to
understand this with an example.
Example:
Consider a bank application that allows users to log in with valid credentials. But, if the user doesn’t
remember the credentials, the application allows them to retry with up to three attempts. If they provide
valid credentials within those three attempts, it will lead to a successful login. In case of three
unsuccessful attempts, the application will have to block the account.
In short: a valid login within three attempts leads to the logged-in state, while a third failed attempt moves the account to the blocked state.
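The login scenario can be sketched as a small state machine. The state names, `MAX_ATTEMPTS`, and the password `"secret"` below are illustrative assumptions:

```python
class LoginSession:
    """Simplified model of the three-attempt login flow described above."""
    MAX_ATTEMPTS = 3

    def __init__(self, correct_password="secret"):
        self.correct_password = correct_password
        self.failed_attempts = 0
        self.state = "AWAITING_LOGIN"

    def login(self, password):
        """Event: one login attempt. Returns the resulting state."""
        if self.state != "AWAITING_LOGIN":
            return self.state          # no further transitions once logged in or blocked
        if password == self.correct_password:
            self.state = "LOGGED_IN"
        else:
            self.failed_attempts += 1
            if self.failed_attempts >= self.MAX_ATTEMPTS:
                self.state = "BLOCKED"
        return self.state
```

State transition test cases then exercise each transition: a correct password from any remaining attempt must reach `LOGGED_IN`, and exactly three wrong passwords must reach `BLOCKED`.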
3. Exploratory Testing

Exploratory testing is primarily used in Agile methods and involves outside-the-box thinking.
This process does not follow pre-defined test cases but instead involves testers using their knowledge,
skills, and creativity to identify issues that may not have been anticipated by developers.
During exploratory testing, the tester will explore the software application in an unstructured way,
creating and executing tests on the fly. The goal of this testing is to uncover bugs that may have been
missed by other traditional testing methods.
Test Design Concepts You Must Be Familiar With
As a software tester, there are several test design concepts that you must be familiar with to create
effective test cases. Here are some of the most important test design concepts,
Test Automation Pyramid

Test Automation Pyramid is a testing framework that helps developers and testers to build high-quality
products. It emphasizes the importance of automation at each level. The pyramid consists of three
levels, each representing a different type of testing as follows,

 Unit Tests
 Integration Tests
 End-to-End Tests

 Unit testing: Here, testing is done on individual units or software modules. Each unit is tested separately to ensure that it behaves as intended and meets its requirements.
 Integration testing: It helps verify that different modules of a software application work as intended when they are integrated together.
 End-to-End testing: It involves evaluating the entire software application, from start to finish, to ensure that all components work together as expected and meet the requirements and business objectives.
The pyramid also enables quick feedback cycles and helps developers fix bugs in a short time. It helps save time, reduce costs, and improve the overall quality of an application.
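The difference between the two lower pyramid levels can be illustrated with a minimal Python `unittest` sketch; the functions under test (`add_tax`, `checkout`) are hypothetical stand-ins.

```python
import unittest

# Hypothetical functions under test, used to illustrate the pyramid levels.
def add_tax(price, rate):
    return round(price * (1 + rate), 2)

def checkout(prices, rate):
    # Integrates two units: summation and the tax calculation.
    return add_tax(sum(prices), rate)

class PyramidExamples(unittest.TestCase):
    def test_unit_add_tax(self):
        # Unit test: one function, tested in isolation.
        self.assertEqual(add_tax(100, 0.1), 110.0)

    def test_integration_checkout(self):
        # Integration test: units verified working together.
        self.assertEqual(checkout([50, 50], 0.1), 110.0)

unittest.main(argv=["ignored"], exit=False)
```

End-to-end tests would sit above these, driving the whole application (for example through its UI or API) rather than individual functions.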
Test Coverage and Code Coverage

Both test coverage and code coverage are related but distinct concepts in software testing. Test coverage refers to the extent to which an application's functionality has been tested.
Test coverage is calculated as follows,

Test Coverage Percentage = (Number of lines of code tested)/(Total number of lines of code in an application) * 100

For example, if you have 2000 lines of code, your test cases should be able to test the entire codebase. If only 1000 lines of code are tested out of 2000, the test coverage is 50%. Aim for 100% test coverage, which means that your entire application functionality is tested, to ensure a high-quality product.
Code coverage specifically refers to the percentage of the code that has been covered by tests. Simply put, code coverage tells how much code is tested, and test coverage tells whether your tests cover the application's functionality or not.
Code coverage is calculated as follows,
Code Coverage Percentage = (Number of lines of code executed)/(Total Number of lines of code in an
application) * 100
If the entire piece of code is tested, then you may consider that the code coverage is 100%. Good code
coverage is considered a good metric for testing.
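The coverage formula above can be expressed directly in code; the numbers mirror the 1000-of-2000-lines example from the text.

```python
# Sketch of the code-coverage formula from the text:
# coverage % = (lines executed / total lines) * 100
def code_coverage(lines_executed, total_lines):
    return lines_executed / total_lines * 100

# 1000 of 2000 lines exercised -> 50% coverage, as in the example above.
print(code_coverage(1000, 2000))  # 50.0
print(code_coverage(2000, 2000))  # 100.0
```

In practice a tool such as coverage.py computes these counts automatically by instrumenting the code while the tests run.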
Test Suites and Test Cases

Test Suites and Test cases are interrelated terms. A test suite can be defined as a collection of test cases
designed to test a specific functionality of the software. They are typically created and managed by
testers.A test case is a set of instructions that defines the steps to be taken and the expected results for
testing specific software functionality. Simply put, test cases are individual tests.
When you automate your test cases, Testsigma – a no-code test automation tool – also supports the addition, updating, and deletion of test cases and test suites. It is very easy to create test cases using its NLP approach. It also lets you easily manage and run automated test cases on the cloud.
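In Python's built-in `unittest` framework, the relationship between test cases and a test suite looks like this; the `PasswordTests` checks and the function under test are hypothetical placeholders.

```python
import unittest

# Hypothetical function under test.
def is_valid_password(password):
    return len(password) >= 8

# Test cases: individual tests for one piece of functionality.
class PasswordTests(unittest.TestCase):
    def test_accepts_long_password(self):
        self.assertTrue(is_valid_password("longenough"))

    def test_rejects_short_password(self):
        self.assertFalse(is_valid_password("short"))

# Test suite: a collection of related test cases, run together.
suite = unittest.TestSuite()
suite.addTest(PasswordTests("test_accepts_long_password"))
suite.addTest(PasswordTests("test_rejects_short_password"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```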
Test Design Preparedness Metrics, Test Case Design Effectiveness, Model-Driven
Test Design

Basically, test design is the act of creating and writing test suites for testing a software product.

Test analysis and identifying test conditions give us a generic idea for testing, covering quite a large range of possibilities. But when we come to write a test case we need to be very specific: we need exact, detailed inputs. Just having some values to input to the system is not a test; if you don't know what the system is supposed to do with those inputs, you will not be able to tell whether your test has passed or failed.


Automated Testing is a technique where the Tester writes scripts on their own and uses suitable
Software or Automation Tool to test the software. It is an Automation Process of a Manual Process. It
allows for executing repetitive tasks without the intervention of a Manual Tester.
 It is used to automate the testing tasks that are difficult to perform manually.
 Automation tests can be run at any time of the day as they use scripted sequences to examine the
software.
 Automation tests can also enter test data, compare the expected result with the actual result, and
generate detailed test reports.
 The goal of automation tests is to reduce the number of test cases to be executed manually but not to
eliminate manual testing.
 It is possible to record the test suit and replay it when required.
Why Transform From Manual to Automated Testing?
In 1994, an aircraft completing a routine flight crashed just before landing due to a bug in its software; inadequate final testing contributed to the accident. Incidents like this show why mandatory manual tests need to be supplemented with automation testing. Below are some of the reasons for using automation testing:
 Quality Assurance: Manual testing is a tedious task that can be boring and at the same time error-
prone. Thus, using automation testing improves the quality of the software under test as more test
coverage can be achieved.
 Error or Bug-free Software: Automation testing is more efficient for detecting bugs in comparison
to manual testing.
 No Human Intervention: Manual testing requires huge manpower in comparison to automation
testing which requires no human intervention and the test cases can be executed unattended.
 Increased test coverage: Automation testing ensures more test coverage in comparison to manual
testing where it is not possible to achieve 100% test coverage.
 Testing can be done frequently: Automation testing means that the testing can be done frequently
thus improving the overall quality of the software under test.
Manual Testing vs Automated Testing
Below are some of the differences between manual testing and automated testing:

Parameters: Manual Testing vs Automated Testing

 Reliability: Manual testing is not accurate at all times due to human error, thus it is less reliable. Automated testing is performed by third-party tools and/or scripts, therefore it is more reliable.
 Investment: Manual testing requires heavy investment in human resources. Automated testing requires investment in tools rather than human resources.
 Time efficiency: Manual testing is time-consuming due to human intervention, where test cases are generated manually. Automation testing is time-saving, as the use of tools makes execution faster in comparison to manual testing.
 Programming knowledge: In manual testing, there is no need to have programming knowledge to write the test cases. In automated testing, it is important to have programming knowledge to write test cases.
 Regression testing: In manual testing, there is a possibility that test cases executed the first time will not be able to catch regression bugs due to frequently changing requirements. In automated testing, when there are changes in the code, regression testing is done to catch the bugs due to those changes.
Test Automation Frameworks


Some of the most common types of automation frameworks are:
 Linear framework: This is the most basic form of framework and is also known as the record and
playback framework. In this testers create and execute the test scripts for each test case. It is
mostly suitable for small teams that don’t have a lot of test automation experience.
 Modular-Based Framework: This framework organizes each test case into small individual units known as modules. Each module is independent of the others, covering a different scenario, but all modules are handled by a single master script. This approach requires a lot of pre-planning and is best suited for testers who have experience with test automation.
 Library Architecture Framework: This framework is the expansion of a modular-based
framework with few differences. Here, the task is grouped within the test script into functions
according to a common objective. These functions are stored in the library so that they can be
accessed quickly when needed. This framework allows for greater flexibility and reusability but
creating scripts takes a lot of time so testers with experience in automation testing can benefit from
this framework.
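The modular-based approach described above can be sketched as independent modules driven by one master script; all module names and return values here are hypothetical.

```python
# Minimal sketch of a modular-based framework: each module tests one
# scenario independently; a single master script orchestrates them all.
def test_login_module():
    # Independent module: login scenario (hypothetical checks).
    assert "user".isalpha()
    return "login ok"

def test_search_module():
    # Independent module: search scenario (hypothetical checks).
    assert len([]) == 0
    return "search ok"

def master_script():
    # The master script runs every module and gathers the results;
    # modules can be added or replaced without touching each other.
    return [test_login_module(), test_search_module()]

print(master_script())  # ['login ok', 'search ok']
```

A library architecture framework would go one step further, moving shared steps (login, navigation, data setup) into common functions that every module reuses.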
Which Tests to Automate?
Below are some of the parameters to decide which tests to automate:
 Monotonous test: Repeatable and monotonous tests can be automated for further use in the future.
 A test requiring multiple data sets: Extensive tests that require multiple data sets can be
automated.
 Business critical tests: High-risk business critical test cases can be automated and can be
scheduled to run regularly.
 Determinant test: Determinant test cases where it is easy for the computer to decide whether the
test is failed or not can be automated.
 Tedious test: Test cases that involve repeatedly doing the same action can be automated so that the computer performs the repetitive task; humans are poor at performing repetitive tasks efficiently, which increases the chance of error.
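A test requiring multiple data sets, one of the automation candidates listed above, is typically written as a single data-driven test; the function under test and its data are hypothetical.

```python
# Data-driven sketch: one test body, many data sets.
# The function under test is a hypothetical age check.
def is_adult(age):
    return age >= 18

# Each tuple is one data set: (input, expected result).
test_data = [
    (17, False),  # just below the boundary
    (18, True),   # on the boundary
    (30, True),   # well above the boundary
]

for age, expected in test_data:
    assert is_adult(age) == expected, f"failed for age={age}"
print(f"all {len(test_data)} data sets passed")
```

Adding a new scenario is then just adding a row of data, which is exactly why such tests are cheap to automate and expensive to repeat by hand.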
Automation Testing Process
1. Test Tool Selection: There will be some criteria for the selection of the tool. The majority of the criteria include: do we have skilled resources to allocate for automation tasks, what are the budget constraints, and does the tool satisfy our needs?
2. Define Scope of Automation: This includes a few basic points such as the Framework should
support Automation Scripts, Less Maintenance must be there, High Return on Investment, Not
many complex Test Cases
3. Planning, Design, and Development: For this, we need to Install particular frameworks or
libraries, and start designing and developing the test cases such as NUnit, JUnit, QUnit, or
required Software Automation Tools
4. Test Execution: Final Execution of test cases will take place in this phase and it depends on
Language to Language for .NET, we’ll be using NUnit, for Java, we’ll be using JUnit, for
JavaScript, we’ll be using QUnit or Jasmine, etc.
5. Maintenance: Creation of Reports generated after Tests and that should be documented so as to
refer to that in the future for the next iterations.
Criteria to Select Automation Tool
Following are some of the criteria for selecting the automation tool:
 Ease of use: Some tools have a steep learning curve, they may require users to learn a completely
new scripting language to create test cases and some may require users to maintain a costly and
large test infrastructure to run the test cases.
 Support for multiple browsers: Cross-browser testing is vital for acceptance testing. Users must
check how easy it is to run the tests on different browsers that the application supports.
 Flexibility: No single tool framework can support all types of testing, so it is advisable to
carefully observe what all tool offers and then decide.
 Ease of analysis: Not all tools provide the same sort of analysis. Some tools have a nice
dashboard feature that shows all the statistics of the test like which test failed and which test
passed. On the other hand, there can be some tools that will first request users to generate and
download the test analyses report thus, not much user-friendly. It depends entirely on the tester,
project requirement, and budget to decide which tool to use.
 Cost of tool: There are some tools that are free and some are commercial tools but there are many
other factors that need to be considered before making a decision whether to use free or paid tools.
If a tool takes a lot of time to develop test cases and it is a business-critical process that is at stake
then it is better to use paid tool that can generate test cases easily and at a faster rate.
 Availability of support: Free tools mostly provide community support; on the other hand, commercial tools provide customer support and training material like tutorials, videos, etc. Thus, it is very important to keep in mind the complexity of the tests before selecting the appropriate tool.
Best Practices for Test Automation
Below are some of the best practices for test automation that can be followed:
 Plan self-contained test cases: It is important to ensure that the test is clearly defined and well-
written. The test cases should be self-contained and easy to understand.
 Plan the order to execute tests: Planning the test in the manner that the one test creates the state
for the second test can be beneficial as it can help to run test cases in order one after another.
 Use tools with automatic scheduling: If possible use tools that can schedule testing automatically
according to a schedule.
 Set up an alarm for test failure: If possible select a tool that can raise an alarm when a test
failure occurs. Then a decision needs to be made whether to continue with the test or abort it.
 Reassess test plans as the app develops and changes: It is important to continuously reassess the
test plan as there is no point in wasting resources in testing the legacy features in the application
under test.
Popular Automation Tools
 Selenium: Selenium is an automated testing tool that is used for Regression testing and provides a
playback and recording facility. It can be used with frameworks like JUnit and Test NG. It
provides a single interface and lets users write test cases in languages like Ruby, Java, Python, etc.
 QTP: Quick Test Professional (QTP) is an automated functional testing tool to test both web and
desktop applications. It is based on the VB scripting language and it provides functional and
regression test automation for software applications.
 Sikuli: It is a GUI-based test automation tool that is used for interacting with elements of web
pages. It is used to search and automate graphical user interfaces using screenshots.
 Appium: Appium is an open-source test automation framework that allows QAs to conduct automated app testing on different platforms like iOS, Android, and Windows SDK.
 Jmeter: Apache JMeter is an open-source Java application that is used to load test the functional
behavior of the application and measure the performance.
Advantages of Automation Testing
 Simplifies Test Case Execution: Automation testing can be left virtually unattended, giving an opportunity to monitor the results at the end of the process, thus simplifying overall test execution and increasing the efficiency of the application.
 Improves Reliability of Tests: Automation testing ensures that there is equal focus on all the
areas of the testing, thus ensuring the best quality end product.
 Increases amount of test coverage: Using automation testing, more test cases can be created and
executed for the application under test. Thus, resulting in higher test coverage and the detection of
more bugs. This allows for the testing of more complex applications and more features can be
tested.
 Minimizing Human Interaction: In automation testing, everything is automated from test case creation to execution, so there is no chance of human error due to neglect. This reduces the necessity for fixing glitches in the post-release phase.
 Saves Time and Money: The initial investment for automation testing is on the higher side but it
is cost-efficient and time-efficient in long run. This is due to the reduction in the amount of time
required for test case creation and execution which contributes to the high quality of work.
 Earlier detection of defects: Automation testing documents the defects, thus making it easier for the development team to fix them and give a faster output. The earlier a defect is identified, the easier and more cost-efficient it is to fix.
Disadvantages of Automation Testing
 High initial cost: Automation testing in the initial phases requires a lot of time and money
investment. It requires a lot of effort for selecting the tool and designing customized software.
 100% test automation is not possible: Generally, the effort is to automate all the test cases, but in practice not all test cases can be automated; some require human intervention for careful observation. There is always a human factor, i.e., automation can't test everything the way humans can (design, usability, etc.).
 Not possible to automate all testing types: It is not possible to automate tests that verify the
user-friendliness of the system. Similarly, if we talk about the graphics or sound files, even their
testing cannot be automated as automated tests typically use textual descriptions to verify the
output.
 Programming knowledge is required: Every automation testing tool uses any one of the
programming languages to write test scripts. Thus, it is mandatory to have programming
knowledge for automation testing.
 False positives and negatives: Automation tests may sometimes fail and indicate an issue in the system when no issue is actually present; in other cases, they may produce false negatives if tests are designed only to verify that some functionality exists, not that it works as expected.

Test Procedures, Test Case Organization and Tracking, Bug Reporting, Bug Life
Cycle

There are 5 levels for achieving test process maturity. They are as follows:


1. Level 1 to Level 2: Level 1 is characterized by inconsistency. The testing procedures are not
coherent and not systematic and there is no control in the entire operations. There is a need for a
proper structure and better project management operations for proceeding to Level 2. At level 2, step
definition and their implementations are finalized and documented. This level is known as “Defined”
due to the fact that proper rules are put in place and they are abided by.
2. Level 2 to Level 3: After Level 2 has been achieved, the testing team is provided with all the
resources required for testing. New methods and activities required for the completion of testing are
documented and the resources are trained accordingly. These new methods are applied in the
upcoming sprints and the software lifecycle. For proceeding to level 3, the documentation process,
standardization techniques, and the number of integrations of people are gradually increased.
3. Level 3 to Level 4: At level 4, all the processes and methods from level 3 are used. The aim of this
level is to take control of the components and tasks and manage the resources effectively. Any
manager who wants to adjust some procedures can enquire about it and can be done without affecting
software quality. At this level, To make methods more productive, large methods are broken down
into smaller units and then proper metrics are assigned for evaluation of the smaller parts.
Level 4, also known as “management and measurement” strives to maximize the performance of
resources using necessary tools and defined processes.
4. Level 4 to Level 5: Level 5 is the final and the peak level of achieving test maturity. Here,
innovation is the key to driving new changes and improving existing methodologies and processes.
This level inculcates an Agile mindset among the QA team members. New methods, tools, and technologies incorporated at Level 4 are checked to see whether they produce better outputs or not, and the team always stays aware of upcoming tools and technologies.
The process of software testing aims not only at finding faults in the existing software but also at
finding measures to improve the software in terms of efficiency, accuracy, and usability. It mainly
aims at measuring the specification, functionality, and performance of a software program or
application.
Software testing can be divided into two steps:
1. Verification: it refers to the set of tasks that ensure that the software correctly implements a
specific function.
2. Validation: it refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.
Software Testing can be broadly classified into two types:
1. Manual Testing: Manual testing includes testing software manually, i.e., without using any
automation tool or any script. In this type, the tester takes over the role of an end-user and tests the
software to identify any unexpected behavior or bug. There are different stages for manual testing
such as unit testing, integration testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test software to ensure the completeness of
testing. Manual testing also includes exploratory testing, as testers explore the software to identify
errors in it.
2. Automation Testing: Automation testing, which is also known as Test Automation, is when the
tester writes scripts and uses another software to test the product. This process involves the
automation of a manual process. Automation Testing is used to re-run the test scenarios quickly and
repeatedly, that were performed manually in manual testing.
Apart from regression testing, automation testing is also used to test the application from a load, performance, and stress point of view. It increases the test coverage, improves accuracy, and saves time and money when compared to manual testing.
1. Black Box Testing: The technique of testing in which the tester doesn’t have access to the source
code of the software and is conducted at the software interface without any concern with the internal
logical structure of the software is known as black-box testing.
2. White-Box Testing: The technique of testing in which the tester is aware of the internal workings
of the product, has access to its source code, and is conducted by making sure that all internal
operations are performed according to the specifications is known as white box testing.
Black Box Testing vs White Box Testing

 In black-box testing, knowledge of the internal workings of an application is not required; in white-box testing, knowledge of the internal workings is a must.
 Black-box testing is also known as closed box/data-driven testing; white-box testing is also known as clear box/structural testing.
 Black-box testing can be done by end users, testers, and developers; white-box testing is normally done by testers and developers.
 In black-box testing, testing can only be done by a trial and error method; in white-box testing, data domains and internal boundaries can be better tested.
What are different levels of software testing?
Software level testing can be majorly classified into 4 levels:
1. Unit Testing: A level of the software testing process where individual units/components of a
software/system are tested. The purpose is to validate that each unit of the software performs as
designed.
2. Integration Testing: A level of the software testing process where individual units are combined
and tested as a group. The purpose of this level of testing is to expose faults in the interaction between
integrated units.
3. System Testing: A level of the software testing process where a complete, integrated
system/software is tested. The purpose of this test is to evaluate the system’s compliance with the
specified requirements.
4. Acceptance Testing: A level of the software testing process where a system is tested for
acceptability. The purpose of this test is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery.
UNIT IV ADVANCED TESTING CONCEPTS

Performance Testing
Performance Testing is a type of software testing that ensures software applications to
perform properly under their expected workload. It is a testing technique carried out to
determine system performance in terms of sensitivity, reactivity and stability under a
particular workload.
Performance testing is a type of software testing that focuses on evaluating the performance
and scalability of a system or application. The goal of performance testing is to identify
bottlenecks, measure system performance under various loads and conditions, and ensure
that the system can handle the expected number of users or transactions.

There are several types of performance testing, including:

 Load testing: Load testing simulates a real-world load on the system to see how it
performs under stress. It helps identify bottlenecks and determine the maximum number
of users or transactions the system can handle.
 Stress testing: Stress testing is a type of load testing that tests the system’s ability to
handle a high load above normal usage levels. It helps identify the breaking point of the
system and any potential issues that may occur under heavy load conditions.
 Spike testing: Spike testing is a type of load testing that tests the system’s ability to
handle sudden spikes in traffic. It helps identify any issues that may occur when the
system is suddenly hit with a high number of requests.
 Soak testing: Soak testing is a type of load testing that tests the system’s ability to
handle a sustained load over a prolonged period of time. It helps identify any issues that
may occur after prolonged usage of the system.
 Endurance testing: This type of testing is similar to soak testing, but it focuses on the
long-term behavior of the system under a constant load.
Performance Testing is the process of analyzing the quality and capability of a product. It is a testing method performed to determine the system performance in terms of speed, reliability and stability under varying workload. Performance testing is also known as Perf Testing.
Performance Testing Attributes:
 Speed:
It determines whether the software product responds rapidly.
 Scalability:
It determines amount of load the software product can handle at a time.
 Stability:
It determines whether the software product is stable in case of varying workloads.
 Reliability:
It determines whether the software product can perform without failure under the specified conditions.
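The speed attribute above is typically measured by timing an operation's response; this minimal sketch uses a stand-in computation rather than a real application call.

```python
import time

# Sketch: measuring the "speed" attribute -- how quickly an operation
# responds. The operation here is a hypothetical stand-in for real work.
def operation():
    return sorted(range(100_000), reverse=True)

start = time.perf_counter()
operation()
elapsed = time.perf_counter() - start

print(f"response time: {elapsed:.4f} s")
assert elapsed < 5  # generous threshold for this illustrative sketch
```

Real performance tests would repeat the measurement many times and report percentiles (e.g. median and 95th percentile) rather than a single sample.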
Objective of Performance Testing:
1. The objective of performance testing is to eliminate performance congestion.
2. It uncovers what is needed to be improved before the product is launched in market.
3. The objective of performance testing is to make software rapid.
4. The objective of performance testing is to make software stable and reliable.
5. The objective of performance testing is to evaluate the performance and scalability of a
system or application under various loads and conditions. It helps identify bottlenecks,
measure system performance, and ensure that the system can handle the expected
number of users or transactions. It also helps to ensure that the system is reliable, stable
and can handle the expected load in a production environment.
Types of Performance Testing:
1. Load testing:
It checks the product’s ability to perform under anticipated user loads. The objective is
to identify performance congestion before the software product is launched in market.
2. Stress testing:
It involves testing a product under extreme workloads to see whether it handles high
traffic or not. The objective is to identify the breaking point of a software product.
3. Endurance testing:
It is performed to ensure the software can handle the expected load over a long period of
time.
4. Spike testing:
It tests the product’s reaction to sudden large spikes in the load generated by users.
5. Volume testing:
In volume testing large number of data is saved in a database and the overall software
system’s behavior is observed. The objective is to check product’s performance under
varying database volumes.
6. Scalability testing:
In scalability testing, software application’s effectiveness is determined in scaling up to
support an increase in user load. It helps in planning capacity addition to your software
system.
Performance Testing Tools:
1. Jmeter
2. Open STA
3. Load Runner
4. Web Load
Advantages of Performance Testing :
 Performance testing ensures the speed, load capability, accuracy and other performance characteristics of the system.
 It identifies, monitors and resolves issues if anything occurs.
 It ensures good optimization of the software and also allows a large number of users to use it at the same time.
 It ensures client as well as end-customer satisfaction.
Performance testing has several advantages that make it an important aspect of software testing:
 Identifying bottlenecks: Performance testing helps identify bottlenecks in the system
such as slow database queries, insufficient memory, or network congestion. This helps
developers optimize the system and ensure that it can handle the expected number of
users or transactions.
 Improved scalability: By identifying the system’s maximum capacity, performance
testing helps ensure that the system can handle an increasing number of users or
transactions over time. This is particularly important for web-based systems and
applications that are expected to handle a high volume of traffic.
 Improved reliability: Performance testing helps identify any potential issues that may
occur under heavy load conditions, such as increased error rates or slow response times.
This helps ensure that the system is reliable and stable when it is deployed to
production.
 Reduced risk: By identifying potential issues before deployment, performance testing
helps reduce the risk of system failure or poor performance in production.
 Cost-effective: Performance testing is more cost-effective than fixing problems that
occur in production. It is much cheaper to identify and fix issues during the testing phase
than after deployment.
 Improved user experience: By identifying and addressing bottlenecks, performance
testing helps ensure that users have a positive experience when using the system. This
can help improve customer satisfaction and loyalty.
 Better Preparation: Performance testing can also help organizations prepare for
unexpected traffic patterns or changes in usage that might occur in the future.
 Compliance: Performance testing can help organizations meet regulatory and industry
standards.
 Better understanding of the system: Performance testing provides a better understanding
of how the system behaves under different conditions, which can help in identifying
potential problem areas and improving the overall design of the system.
Disadvantages of Performance Testing :
 Sometimes, users may find performance issues in the real time environment.
 Team members who are writing test scripts or test cases in the automation tool should
have high-level of knowledge.
 Team members should have high proficiency to debug the test cases or test scripts.
 Low performance in the real environment may lead to losing a large number of users.
Performance testing also has some disadvantages, which include:
 Resource-intensive: Performance testing can be resource-intensive, requiring significant
hardware and software resources to simulate a large number of users or transactions.
This can make performance testing expensive and time-consuming.
 Complexity: Performance testing can be complex, requiring specialized knowledge and
expertise to set up and execute effectively. This can make it difficult for teams with
limited resources or experience to perform performance testing.
 Limited testing scope: Performance testing is focused on the performance of the system
under stress, and it may not be able to identify all types of issues or bugs. It’s important
to combine performance testing with other types of testing such as functional testing,
regression testing, and acceptance testing.
 Inaccurate results: If the performance testing environment is not representative of the
production environment or the performance test scenarios do not accurately simulate
real-world usage, the results of the test may not be accurate.
 Difficulty in simulating real-world usage: It’s difficult to simulate real-world usage, and
it’s hard to predict how users will interact with the system. This makes it difficult to
know if the system will handle the expected load.
 Complexity in analyzing the results: Performance testing generates a large amount of
data, and it can be difficult to analyze the results and determine the root cause of
performance issues.
Load Testing
Load Testing is a type of Performance Testing that determines the performance of a system,
software product, or software application under real-life based load conditions. Basically, load
testing determines the behavior of the application when multiple users use it at the same time.
It is the response of the system measured under varying load conditions. The load testing is
carried out for normal and extreme load conditions.
Load testing is a type of performance testing that simulates a real-world load on a system or
application to see how it performs under stress. The goal of load testing is to identify
bottlenecks and determine the maximum number of users or transactions the system can
handle. It is an important aspect of software testing as it helps ensure that the system can
handle the expected usage levels and identify any potential issues before the system is
deployed to production.
During load testing, various scenarios are simulated to test the system’s behavior under
different load conditions. This can include simulating a high number of concurrent users,
simulating a large number of requests, and simulating heavy network traffic. The system’s
performance is then measured and analyzed to identify any bottlenecks or issues that may
occur.
Some common load testing techniques include:
 Stress testing: Testing the system’s ability to handle a high load above normal usage
levels
 Spike testing: Testing the system’s ability to handle sudden spikes in traffic
 Soak testing: Testing the system’s ability to handle a sustained load over a prolonged
period of time
 Tools such as Apache JMeter, LoadRunner, Gatling, and Grinder can be used to simulate
load and measure system performance. It’s important to ensure that the load testing is done
in an environment that closely mirrors the production environment to get accurate results.
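The idea of simulating many concurrent users can be sketched with a minimal load generator. This is an illustrative Python sketch only, not a substitute for tools like JMeter or LoadRunner; `timed_call` and `run_load_test` are hypothetical names introduced here:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_call(request_fn):
    """Invoke one simulated request, recording its latency and outcome."""
    start = time.perf_counter()
    try:
        request_fn()
        ok = True
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

def run_load_test(request_fn, concurrent_users, requests_per_user):
    """Fire requests from several concurrent 'users' and collect
    (latency, succeeded) samples for later analysis."""
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(timed_call, request_fn) for _ in range(total)]
        return [f.result() for f in futures]

# A dummy request that sleeps briefly stands in for real network I/O.
samples = run_load_test(lambda: time.sleep(0.001),
                        concurrent_users=4, requests_per_user=3)
print(len(samples))  # 12 samples collected
```

A real load test would replace the dummy request with actual HTTP calls against a test environment that mirrors production.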
Objectives of Load Testing: The objective of load testing is:
 To maximize the operating capacity of a software application.
 To determine whether the current infrastructure is capable of running the software application.
 To determine the sustainability of the application with respect to extreme user load.
 To find out the total number of users that can access the application at the same time.
 To determine the scalability of the application.
 To allow more users to access the application.

Load Testing Process:

1. Test Environment Setup: First, create a dedicated test environment for performing the load testing. This ensures that testing is done properly.
2. Load Test Scenario: In the second step, load test scenarios are created. Load testing transactions are then determined for the application, and data is prepared for each transaction.
3. Test Scenario Execution: The load test scenarios created in the previous step are executed. Different measurements and metrics are gathered to collect the information.
4. Test Result Analysis: The results of the testing are analyzed and various recommendations are made.
5. Re-test: If the test fails, it is performed again in order to obtain correct results.
Metrics of Load Testing :
Metrics are used to understand the performance of the system under different load conditions and test cases. They are usually gathered after the load test scripts/cases have been prepared. Some common metrics for evaluating load testing are listed below.
1. Average Response Time : The average time taken to respond to requests generated by clients, customers, or users. It indicates the speed of the application based on the time taken to respond to all generated requests.
2. Error Rate : Expressed as a percentage, the error rate is the number of errors that occurred during the requests divided by the total number of requests. These errors are usually raised when the application can no longer handle a request at the given time, or because of some other technical problem. A rising error rate makes the application less efficient.
3. Throughput : The amount of bandwidth consumed during the load tests, i.e. the amount of data flowing between the user and the application server per unit of time. It is measured in kilobytes per second.
4. Requests Per Second : It tells that how many requests are being generated to the
application server per second. The requests could be anything like requesting of images,
documents, web pages, articles or any other resources.
5. Concurrent Users : The number of users actively using the application at a given time, including those connected without currently issuing a request. From this metric, we can easily see at which times the highest number of users visit the application or website.
6. Peak Response Time : The longest time taken to handle a request. It helps identify the peak duration of the request-response cycle and which resource is taking the longest to respond.
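The metrics above can be derived from raw per-request samples collected during a test run. A hedged sketch in Python; the `summarize` helper and its field names are assumptions made for illustration, not a standard API:

```python
def summarize(samples, duration_s, bytes_transferred):
    """Derive common load-test metrics from raw per-request samples.
    samples: list of (latency_seconds, succeeded) tuples."""
    latencies = [lat for lat, _ in samples]
    errors = sum(1 for _, ok in samples if not ok)
    return {
        "average_response_time_s": sum(latencies) / len(latencies),
        "peak_response_time_s": max(latencies),
        "error_rate_pct": 100.0 * errors / len(samples),
        "requests_per_second": len(samples) / duration_s,
        "throughput_kb_per_s": bytes_transferred / 1024 / duration_s,
    }

# Four requests observed over a 2-second window, one of which failed.
samples = [(0.10, True), (0.20, True), (0.30, False), (0.40, True)]
metrics = summarize(samples, duration_s=2.0, bytes_transferred=4096)
print(round(metrics["average_response_time_s"], 2))  # 0.25
print(metrics["error_rate_pct"])                     # 25.0
print(metrics["requests_per_second"])                # 2.0
```

Commercial tools report the same quantities; the point of the sketch is only to show how each metric follows from the raw samples.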
Load Testing Tools:
1. Apache Jmeter
2. WebLoad
3. NeoLoad
4. LoadNinja
5. HP Performance Tester
6. LoadUI Pro
7. LoadView
Advantages of Load Testing:
 Load testing enhances the sustainability of the system or software application.
 It improves the scalability of the system or software application.
 It helps in the minimization of the risks related to system downtime.
 It reduces the costs of failure of the system.
 It increases customer’s satisfaction.
 Identifying bottlenecks: Load testing helps identify bottlenecks in the system such as
slow database queries, insufficient memory, or network congestion. This helps
developers optimize the system and ensure that it can handle the expected number of
users or transactions.
 Improved scalability: By identifying the system’s maximum capacity, load testing helps
ensure that the system can handle an increasing number of users or transactions over
time. This is particularly important for web-based systems and applications that are
expected to handle a high volume of traffic.
 Improved reliability: Load testing helps identify any potential issues that may occur
under heavy load conditions, such as increased error rates or slow response times. This
helps ensure that the system is reliable and stable when it is deployed to production.
 Reduced risk: By identifying potential issues before deployment, load testing helps
reduce the risk of system failure or poor performance in production.
 Cost-effective: Load testing is more cost-effective than fixing problems that occur in
production. It is much cheaper to identify and fix issues during the testing phase than
after deployment.
 Improved user experience: By identifying and addressing bottlenecks, load testing helps
ensure that users have a positive experience when using the system. This can help
improve customer satisfaction and loyalty.

Disadvantages of Load Testing:
 To perform load testing, there is a need for programming knowledge.
 Load testing tools can be costly.
 Resource-intensive: Load testing can be resource-intensive, requiring significant
hardware and software resources to simulate a large number of users or transactions.
This can make load testing expensive and time-consuming.
 Complexity: Load testing can be complex, requiring specialized knowledge and
expertise to set up and execute effectively. This can make it difficult for teams with
limited resources or experience to perform load testing.
 Limited testing scope: Load testing is focused on the performance of the system under
stress, and it may not be able to identify all types of issues or bugs. It’s important to
combine load testing with other types of testing such as functional testing, regression
testing, and acceptance testing.
 Inaccurate results: If the load testing environment is not representative of the production
environment or the load test scenarios do not accurately simulate real-world usage, the
results of the test may not be accurate.
 Difficulty in simulating real-world usage: It’s difficult to simulate real-world usage, and
it’s hard to predict how users will interact with the system. This makes it difficult to
know if the system will handle the expected load.
 Complexity in analyzing the results: Load testing generates a large amount of data, and
it can be difficult to analyze the results and determine the root cause of performance
issues.
 It’s important to keep in mind that load testing is one aspect of software testing, and it
should be combined with other types of testing to ensure that the system is thoroughly
tested and that any issues are identified and addressed before deployment.
Stress Testing
Stress Testing is a software testing technique that determines the robustness of software by
testing beyond the limits of normal operation. Stress testing is particularly important for
critical software but is used for all types of software. Stress testing emphasizes robustness,
availability, and error handling under a heavy load rather than what is correct behavior
under normal situations. Stress testing is defined as a type of software testing that verifies
the stability and reliability of the system. This test particularly determines the system on its
robustness and error handling under extremely heavy load conditions. It even tests beyond
the normal operating point and analyses how the system works under extreme conditions.
Stress testing is performed to ensure that the system would not crash under crunch
situations. Stress testing is also known as Endurance Testing or Torture Testing.
Characteristics of Stress Testing
1. Stress testing analyzes the behavior of the system after a failure.
2. Stress testing makes sure that the system recovers after failure.
3. It checks whether the system works under abnormal conditions.
4. It ensures the display of appropriate error messages when the system is under stress.
5. It verifies that unexpected failures do not cause security issues.
6. It verifies whether the system has saved the data before crashing or not.
Need For Stress Testing
 To accommodate the sudden surges in traffic: It is important to perform stress testing
to accommodate abnormal traffic spikes. For example, when there is a sale
announcement on the e-commerce website there is a sudden increase in traffic. Failure
to accommodate such needs may lead to a loss of revenue and reputation.
 Display error messages in stress conditions: Stress testing is important to check
whether the system is capable to display appropriate error messages when the system is
under stress conditions.
 The system works under abnormal conditions: Stress testing checks whether the
system can continue to function in abnormal conditions.
 Prepared for stress conditions: Stress testing helps to make sure there are sufficient
contingency plans in case of sudden failure due to stress conditions. It is better to be
prepared for extreme conditions by executing stress testing.
Purpose of Stress Testing
 Analyze the behavior of the application after failure: The purpose of stress testing is
to analyze the behavior of the application after failure and the software should display
the appropriate error messages while it is under extreme conditions.
 System recovers after failure: Stress testing aims to make sure that there are plans for
recovering the system to the working state so that the system recovers after failure.
 Uncover Hardware issues: Stress testing helps to uncover hardware issues and data
corruption issues.
 Uncover Security Weakness: Stress testing helps to uncover the security
vulnerabilities that may enter into the system during the constant peak load and
compromise the system.
 Ensures data integrity: Stress testing helps to determine the application’s data
integrity throughout the extreme load, which means that the data should be in a
dependable state even after a failure.
Stress Testing Process
The stress testing process is divided into 5 steps:
1. Planning the stress test: This step involves gathering the system data, analyzing the
system, and defining the stress test goals.
2. Create Automation Scripts: This step involves creating the stress testing automation
scripts and generating the test data for the stress test scenarios.
3. Script Execution: This step involves running the stress test automation scripts and
storing the stress test results.
4. Result Analysis: This phase involves analyzing stress test results and identifying the
bottlenecks.
5. Tweaking and Optimization: This step involves fine-tuning the system and optimizing the code with the goal of meeting the desired benchmarks.
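The process above, in particular finding the breaking point, can be sketched as a step-wise ramp-up of load. This is a simplified illustration; `stress_until_failure` and `handle_load` are hypothetical names, and a real stress test would drive actual traffic rather than call a function:

```python
def stress_until_failure(handle_load, start=10, step=10, limit=200):
    """Raise the load level step by step until the system under test fails,
    returning the highest load it survived (its breaking point)."""
    survived = 0
    load = start
    while load <= limit:
        if not handle_load(load):   # False means the system failed here
            break
        survived = load
        load += step
    return survived

# Hypothetical system that copes with at most 70 units of concurrent load.
print(stress_until_failure(lambda load: load <= 70))  # 70
```

The breaking point found this way feeds the result-analysis and optimization steps: after tuning, the ramp is rerun to confirm the benchmark has improved.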
Types of Stress Testing
1. Server-client Stress Testing: Server-client stress testing also known as distributed
stress testing is carried out across all clients from the server.
2. Product Stress Testing: Product stress testing concentrates on discovering defects
related to data locking and blocking, network issues, and performance congestion in a
software product.
3. Transactional Stress Testing: Transaction stress testing is performed on one or more
transactions between two or more applications. It is carried out for fine-tuning and
optimizing the system.
4. Systematic Stress Testing: Systematic stress testing is integrated testing that is used to
perform tests across multiple systems running on the same server. It is used to discover
defects where one application data blocks another application.
5. Analytical Stress Testing: Analytical or exploratory stress testing is performed to test
the system with abnormal parameters or conditions that are unlikely to happen in a real
scenario. It is carried out to find defects in unusual scenarios like a large number of
users logged at the same time or a database going offline when it is accessed from a
website.
6. Application Stress Testing: Application stress testing also known as product stress
testing is focused on identifying the performance bottleneck, and network issues in a
software product.
Stress Testing Tools
1. Jmeter: Apache JMeter is an open-source, pure Java-based stress testing tool used to stress test websites. It is an Apache project and can also be used for load testing, to analyze and measure the performance of a variety of services.
2. LoadNinja: LoadNinja is a stress testing tool developed by SmartBear that enables
users to develop codeless load tests, substitutes load emulators with actual browsers,
and helps to achieve high speed and efficiency with browser-based metrics.
3. WebLoad: WebLoad is a stress testing tool that combines performance, stability, and
integrity as a single process for the verification of mobile and web applications.
4. Neoload: Neoload is a powerful performance testing tool that simulates large numbers
of users and analyzes the server’s behavior. It is designed for both mobile and web
applications. Neoload supports API testing and integrates with different CI/ CD
applications.
5. SmartMeter: SmartMeter is a user-friendly tool that helps to create simple tests
without coding. It has a graphical user interface and has no necessary plugins. This tool
automatically generates advanced test reports with complete and detailed test results.
Metrics of Stress Testing
Metrics are used to evaluate the performance of the system under stress, and they are usually collected at the end of the stress scripts or tests. Some of the metrics are given below.
1. Pages Per Second: Number of pages requested per second and number of pages loaded
per second.
2. Pages Retrieved: Average time taken to retrieve all information from a particular page.
3. Byte Retrieved: Average time taken to retrieve the first byte of information from the page.
4. Transaction Response Time: Average time taken to load or perform transactions between the applications.
5. Transactions per Second: The number of transactions completed per second, along with the number of failures that occurred.
6. Failure of Connection: The number of times clients faced a connection failure in their system.
7. Failure of System Attempts: The number of failed attempts in the system.
8. Rounds: The number of test or script conditions executed successfully by clients, along with the number of rounds that failed.
Benefits of Stress Testing
 Determines the behavior of the system: Stress testing determines the behavior of the
system after failure and ensures that the system recovers quickly.
 Ensure failure does not cause security issues: Stress testing ensures that system
failure doesn’t cause security issues.
 Makes system function in every situation: Stress testing makes the system work in
normal as well as abnormal conditions in an appropriate way.
Limitations of Stress Testing
1. Manual stress testing is complicated: The manual process of stress testing takes a
longer time to complete and it is a complicated process.
2. Good scripting knowledge required: Good scripting knowledge for implementing the
script test cases for the particular tool is required.
3. Need for external resources: There is a need for external resources to implement stress
testing. It leads to an extra amount of resources and time.
4. Costly licensed tools: A licensed stress testing tool can charge more than the average cost.
5. Additional tool required in case of open-source stress testing tools: With some open-source tools, a load testing tool is additionally needed to set up the stress testing environment.
6. Improper test script implementation results in wastage: If proper stress scripts or test cases are not implemented, there is a chance of failing resources and wasted time.

Volume Testing
Volume Testing is a type of software testing which is carried out to test a software
application with a certain amount of data. The amount used in volume testing could be a
database size or it could also be the size of an interface file that is the subject of volume
testing.
While testing the application with a specific database size, the database is extended to that size and the performance of the application is then tested. When the application needs to interact with an interface file (either reading from or writing to it), a sample file of the required size is created, and the functionality of the application is tested with that file to measure performance.
In volume testing, the software is subjected to a huge volume of data. It is basically performed to analyze the performance of the system as the volume of data in the database increases. Volume testing studies the impact on response time and the behavior of the system when the volume of data in the database grows.
Volume Testing is also known as Flood Testing.
Characteristics of Volume Testing:
Following are the characteristics of the Volume Testing:
 Performance of the software may decline over time as the amount of data grows.
 The test data is typically created by a test data generator.
 Only a small amount of data is tested during the development phase.
 The test data needs to be logically correct.
 The test data is used to assess the performance of the system.
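The test data generator mentioned above can be sketched in a few lines. This is a minimal illustration; `generate_test_rows` and the row schema (`id`, `name`, `balance`) are invented for the example, and a real volume test would bulk-load millions of rows into the actual database:

```python
import csv
import io
import random
import string

def generate_test_rows(n_rows, seed=42):
    """Yield logically consistent synthetic records, the kind a test data
    generator would bulk-load into the database under test."""
    rng = random.Random(seed)           # fixed seed -> reproducible data
    for i in range(n_rows):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        yield {"id": i, "name": name, "balance": rng.randint(0, 10_000)}

# Write a small sample to an in-memory CSV as a stand-in for a bulk load.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "balance"])
writer.writeheader()
for row in generate_test_rows(1000):
    writer.writerow(row)
print(buf.getvalue().count("\n"))  # 1001: header line plus 1000 data rows
```

Fixing the random seed keeps the generated data reproducible, which matters when comparing response times across repeated volume test runs.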
Objectives of Volume Testing:
The objectives of volume testing is:
 To recognize the problems that may be created with large amount of data.
 To check The system’s performance by increasing the volume of data in the database.
 To find the point at which the stability of the system reduces.
 To identify the capacity of the system or application.
Volume Testing Attributes:
Following are the important attributes that are checked during the volume testing:
 System’s Response Time:
During the volume testing, the response time of the system or the application is tested.
It is also tested whether the system responses within the finite time or not. If the
response time is large then the system is redesigned.
 Data Loss:
During the volume testing, it is also tested that there is no data loss. If there is data loss
some key information might be missing.
 Data Storage:
During the volume testing, it is also tested that the data is stored correctly or not. If the
data is not stored correctly then it is restored accordingly in proper place.
 Data Overwriting:
In volume testing, it is tested that whether the data is overwritten without giving prior
information to the developer. If it so then developer is notified.
Volume Testing is a type of Performance Testing.
Advantages of Volume Testing:
 Volume testing helps save costs that would otherwise be spent on application maintenance.
 Volume testing also provides a rapid start for scalability plans.
 Volume testing helps in the early identification of bottlenecks.
 Volume testing ensures that the system is capable of real-world usage.
Disadvantages of Volume Testing:
 A larger number of skilled resources is needed to carry out this testing.
 It is sometimes difficult to prepare test cases for the required volume of data.
 It is a time-consuming technique, since it takes a lot of time to decide on the volume of data and the test scenarios.
 It is somewhat costly compared to other testing techniques.
 It is not possible to obtain an exact breakdown of the memory used in the real-world application.

Fail-Over Testing
Software products/services are tested multiple times before delivery to ensure that they provide the required service. Testing before delivery doesn’t guarantee that no problem will occur in the future: sometimes the software application still fails due to an unwanted event such as a network issue or a server-related problem. Failover testing aims to respond to these types of failures.
Suppose the PC shuts down due to some technical issue, and on restarting we open the browser; a pop-up is shown saying “Do you want to restore all pages?” On clicking restore, all tabs are restored. The process of ensuring such restorations is known as failover testing.
Failover Testing :
Failover testing is a technique that validates if a system can allocate extra resources and
back up all the information and operations when a system fails abruptly due to some reason.
This test determines the ability of a system to handle critical failures and handle extra
servers. So, the testing is independent of the physical hardware component of a server.
Testing is typically performed at the server level. Active-active and active-passive standby are the two most common configurations. The two techniques achieve failover in very different ways, but both are used to improve the server’s reliability.
For example, if we have three servers, one of them fails due to heavy load, and then two
situations occur. Either that failed server will restart on its own or in another situation when
the failed server cannot be restarted, the remaining servers will handle the load. Such
situations are tested during this test.
Considerable Factors Before Performing Failover Testing :
1. The budget has to be the first thing to be taken into consideration before thinking about
performing the Failover test.
2. The budget is connected to the frameworks that might crash or break down under
pressure/load.
3. Always keep in mind how much time it will take to fix all of the issues caused by the
failure of the system.
4. Note down the most likely failures and organize the outcomes according to how much
harm is caused by the failure.
Considerable Factors While Performing Failover Testing :
1. Keep a plan of measures to be taken after performing a test.
2. Focus on the execution of the test plan.
3. Set up a benchmark so that performance requirements can be achieved.
4. Prepare a report concerning issue requirements and/or requirements of the asset.
Working of Failover testing :
1. Consider the factors before performing failover testing like budget, time, team,
technology, etc.
2. Perform analysis on failover reasons and design solutions.
3. Develop test cases to test failover scenarios.
4. Based on the result execute the test plan.
5. Prepare a detailed report on failover.
6. Take necessary actions based on the report.
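The failover behavior being tested can be sketched from the client side: try the primary server, and on a failure hand the request to a standby. A simplified Python illustration; `call_with_failover`, the server names, and `fake_send` are all hypothetical stand-ins for real network calls:

```python
def call_with_failover(servers, send_request):
    """Try each server in order; on a connection failure, fail over to the
    next one. Returns (serving_server, response), or raises if all fail."""
    last_error = None
    for server in servers:
        try:
            return server, send_request(server)
        except ConnectionError as exc:
            last_error = exc            # record it, then try the next server
    raise RuntimeError("all servers failed") from last_error

# Hypothetical cluster in which the primary is down and a standby answers.
def fake_send(server):
    if server == "primary":
        raise ConnectionError("primary unreachable")
    return "ok"

server, response = call_with_failover(["primary", "standby-1"], fake_send)
print(server, response)  # standby-1 ok
```

A failover test would deliberately take the primary down (as `fake_send` simulates) and verify that requests are still served, and that an error is raised only when every server is unavailable.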
Benefits of Failover Testing :
1. Allows users to configure everything like user access network settings and so on.
2. Ensures that the configuration made is working properly.
3. All the faults are easily resolved in the system’s server beforehand.
4. Provides better services so that users’ servers can run smoothly.
5. Ensures no loss during downtime.
Examples of Failover Testing :
1. Banking and Financial applications
2. Telecom applications
3. Visa applications
4. Trading applications
5. Emergency service business applications
6. Government applications
7. Defense service-related applications

Recovery Testing
Recovery testing is a type of system testing which aims at testing whether a system can recover from failures or not. The technique involves failing the system and then verifying that the system recovery is performed properly.
To ensure that a system is fault-tolerant and can recover well from failures, recovery testing
is important to perform. A system is expected to recover from faults and resume its work
within a pre-specified time period. Recovery testing is essential for any mission-critical
system, for example, the defense systems, medical devices, etc. In such systems, there is a
strict protocol that is imposed on how and within what time period the system should
recover from failure and how the system should behave during the failure.
A system or software should be recovery tested for failures like :
 Power supply failure
 The external server is unreachable
 Wireless network signal loss
 Physical conditions
 The external device not responding, or not responding as expected, etc.
Steps to be performed before executing a Recovery Test :
A tester must ensure that the following steps are performed before carrying out the
Recovery testing procedure :
1. Recovery Analysis –
It is important to analyze the system’s ability to allocate extra resources like servers or
additional CPUs. This would help to better understand the recovery-related changes that
can impact the working of the system. Also, each of the possible failures, their possible
impact, their severity, and how to perform them should be studied.
2. Test Plan preparation –
Designing the test cases keeping in mind the environment and results obtained in
recovery analysis.
3. Test environment preparation –
Designing the test environment according to the recovery analysis results.
4. Maintaining Back-up –
Information related to the software, like various states of the software and database
should be backed up. Also, if the data is important, then the backing up of the data at
multiple locations is important.
5. Recovery personnel Allocation –
For the recovery testing process, it is important to allocate recovery personnel who are aware of and trained for the recovery testing being conducted.
6. Documentation –
This step emphasizes on documenting all the steps performed before and during the
recovery testing so that the system can be analyzed for its performance in case of a
failure.
Example of Recovery Testing :
 When a system is receiving data over a network for processing purposes, we can simulate a software failure by unplugging the system power. After a while, we can plug the system in again and test its ability to recover and continue receiving the data from where it stopped.
 Another example: when a browser is working on multiple sessions, we can simulate a software failure by restarting the system. After restarting the system, we can check whether it recovers from the failure and reloads all the sessions it was previously working on.
 While downloading a movie over a Wifi network, if we move to a place where there is
no network, then the downloading process will be interrupted. Now to check if the
process recovers from the interruption and continues working as before, we move back
to a place where there is a Wifi network. If the downloading resumes, then the software
has a good recovery rate.
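The download example above amounts to checkpoint-and-resume logic: progress saved before the failure lets the process continue rather than restart. A minimal sketch; `resumable_download` and the chunk naming are invented for illustration:

```python
def resumable_download(fetch_chunk, total_chunks, checkpoint):
    """Resume a chunked download from the last checkpoint after a failure,
    instead of restarting it from scratch."""
    downloaded = list(checkpoint)       # chunks saved before the failure
    for i in range(len(downloaded), total_chunks):
        downloaded.append(fetch_chunk(i))
    return downloaded

# Simulate a failure after 3 of 6 chunks, then recovery from the checkpoint.
checkpoint = [f"chunk-{i}" for i in range(3)]
result = resumable_download(lambda i: f"chunk-{i}", 6, checkpoint)
print(len(result), result[-1])  # 6 chunk-5
```

A recovery test would interrupt the transfer (e.g. by dropping the network), restore connectivity, and then assert that exactly the missing chunks are fetched and the final file is complete.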
Advantages of Recovery Testing :
 Improves the quality of the system by eliminating the potential flaws in the system so
that the system works as expected.
 Recovery testing is also referred to as Disaster Recovery Testing. A lot of companies
have disaster recovery centers to make sure that if any of the systems is damaged or fails
due to some reason, then there is back up to recover from the failure.
 Risk elimination is possible as the potential flaws are detected and removed from the
system.
 Improved performance as faults are removed and the system becomes more reliable
and performs better in case a failure occurs.
Disadvantages of Recovery testing :
 Recovery testing is a time-consuming process as it involves multiple steps and
preparations before and during the process.
 The recovery personnel must be trained as the process of recovery testing takes place
under his supervision. So, the tester needs to be trained to ensure that recovery testing is
performed in the proper way. For performing recovery testing, he should have enough
data and back up files to perform recovery testing.
 In a few cases, the potential flaws or issues are unpredictable and it is difficult to point out their exact cause. However, since the quality of the software must be maintained, randomized test cases are created and executed to ensure such potential flaws are removed.
Configuration Testing
Configuration Testing is the type of Software Testing which verifies the performance of
the system under development against various combinations of software and hardware to
find out the best configuration under which the system can work without any flaws or
issues while matching its functional requirements.
Configuration Testing is the process of testing the system under each configuration of the
supported software and hardware. Here, the different configurations of hardware and
software means the multiple operating system versions, various browsers, various
supported drivers, distinct memory sizes, different hard drive types, various types of CPU
etc.
Various Configurations:
 Operating System Configuration:
Win XP, Win 7 32/64 bit, Win 8 32/64 bit, Win 10 etc.
 Database Configuration:
Oracle, DB2, MySql, MSSQL Server, Sybase etc.
 Browser Configuration:
IE 8, IE 9, FF 16.0, Chrome, Microsoft Edge etc.
Objectives of Configuration Testing:
The objective of configuration testing is:
 To determine whether the software application fulfills the configurability requirements.
 To identify the defects that were not efficiently found during different testing processes.
 To determine an optimal configuration of the application under test.
 To analyze the performance of the software application when the hardware and software resources are changed.
 To analyze system efficiency based on prioritization.
 To verify how easily bugs are reproducible irrespective of configuration changes.
Types of Configuration Testing:
Configuration testing is of 2 types:
1. Software Configuration Testing:
Software configuration testing is done over the Application Under Test with various
operating system versions, various browser versions, etc. It is time-consuming, as it takes a long time to install and uninstall the various software used for testing. When a build is released, software configuration testing begins after the build passes the unit and integration tests.
2. Hardware Configuration Testing:
Hardware configuration testing is typically performed in labs where physical machines
are used with various hardware connected to them.
When a build is released, the software is installed in all the physical machines to which
the hardware is attached and the test is carried out on each and every machine to
confirm that the application works fine. When doing a hardware configuration test, the kind of hardware to be tested must be specified, and since there are so many kinds of computer hardware and peripherals, it is next to impossible to execute all possible tests.
Configuration Testing can also be classified into following 2 types:
1. Client level Testing:
Client level testing is associated with the usability and functionality testing. This testing
is done from the point of view of its direct interest of the users.
2. Server level Testing:
Server level testing is carried out to determine the communication between the software
and the external environment when it is planned to be integrated after the release.
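Because configuration testing runs the same checks across many OS/browser combinations, it helps to enumerate the configuration matrix up front. The sketch below does this with the standard library; the OS and browser names are illustrative assumptions, not a prescribed support list.

```python
from itertools import product

# Hypothetical configuration matrix: OS versions x browsers (names are illustrative).
operating_systems = ["Windows 10", "Windows 11", "Ubuntu 22.04"]
browsers = ["Chrome", "Firefox", "Edge"]

def build_config_matrix(oses, browsers):
    """Enumerate every OS/browser pair the application must be verified on."""
    return [{"os": o, "browser": b} for o, b in product(oses, browsers)]

matrix = build_config_matrix(operating_systems, browsers)
print(len(matrix))  # 3 OSes x 3 browsers = 9 configurations
```

In practice each entry of this matrix would drive one run of the test suite on the corresponding environment.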

Compatibility Testing
Compatibility testing :
Compatibility testing is a type of software testing in the non-functional testing category. It is performed on an application to check its compatibility (its ability to run) on different platforms and environments. This testing is done only once the application becomes stable. Simply put, a compatibility test checks the developed software application's functionality on various software and hardware platforms, networks, browsers, etc. Compatibility testing is very important from a product production and implementation point of view, as it is performed to avoid future compatibility issues.
Types of Compatibility Testing :
Several examples of compatibility testing are given below.
1. Software :
 Testing the compatibility of an application with operating systems such as Linux, Mac, and Windows.
 Testing compatibility with databases such as Oracle SQL Server and MongoDB.
 Testing compatibility on different devices such as mobile phones and computers.
Types based on Version Testing :
There are two types of compatibility testing based on version testing
1. Forward compatibility testing: When the behavior and compatibility of software or hardware is checked against its newer version, it is called forward compatibility testing.
2. Backward compatibility testing: When the behavior and compatibility of software or hardware is checked against its older version, it is called backward compatibility testing.
2. Hardware :
Checking compatibility with a particular size of
 RAM
 ROM
 Hard Disk
 Memory Cards
 Processor
 Graphics Card
3. Smartphones :
Checking compatibility with different mobile platforms like android, iOS etc.
4.Network :
Checking compatibility with different :
 Bandwidth
 Operating speed
 Capacity
Along with these, other types of compatibility testing are also performed, such as browser compatibility (checking software compatibility with different browsers like Google Chrome and Internet Explorer), device compatibility, software version compatibility, and others. Now that we have seen where compatibility matters, the question arises: how do we perform a compatibility test?
How to perform Compatibility testing ?
1. Test the same application version in different environment versions. For example, to test the compatibility of the Facebook application on an Android phone, first check it on Android 9.0 and then on Android 10.0, keeping the version of the Facebook app the same.
2. Test different application versions in the same environment. For example, first check a lower version of the Facebook application on Android 10.0 (or a version of your choice), then check a higher version of the Facebook application on the same Android version.
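The two strategies above (hold the app version fixed and vary the environment, or hold the environment fixed and vary the app version) can be sketched as simple pair generators; the app and platform names below are illustrative, not real version identifiers.

```python
# Hypothetical version lists; the app and platform names are illustrative.
app_versions = ["FB 1.0", "FB 2.0"]
platform_versions = ["Android 9.0", "Android 10.0"]

def environment_variation_pairs(fixed_app, platforms):
    """Strategy 1: same app version tested across different environment versions."""
    return [(fixed_app, p) for p in platforms]

def app_variation_pairs(apps, fixed_platform):
    """Strategy 2: different app versions tested on the same environment version."""
    return [(a, fixed_platform) for a in apps]

print(environment_variation_pairs("FB 1.0", platform_versions))
print(app_variation_pairs(app_versions, "Android 10.0"))
```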
Why compatibility testing is important ?
1. It ensures complete customer satisfaction.
2. It provides service across multiple platforms.
3. It identifies bugs during the development process.
Compatibility testing defects :
1. Variety of user interface.
2. Changes with respect to font size.
3. Alignment issues.
4. Issues related to existence of broken frames.
5. Issues related to overlapping of content.

Usability Testing

Several tests are performed on a product before deploying it. You need to collect qualitative and quantitative data and satisfy customers' needs with the product. A proper final report is made mentioning the changes required in the product (software). Usability testing in software testing is a type of testing done from an end user's perspective to determine whether the system is easily usable. Usability testing is generally the practice of testing how easy a design is to use with a group of representative users. A very common mistake is conducting a usability study too late in the design process: if you wait until right before your product is released, you won't have the time or money to fix any issues, and you'll have wasted a lot of effort developing your product the wrong way.

This testing follows a cycle:

1. the product is ready,
2. customers are asked to test it,
3. if any further changes are found,
4. the product (software) is returned to the development team with feedback so the changes can be made,
5. the software goes through usability testing again,
6. if no more changes are required,
7. the software is launched in the market.

Steps 1 to 5 are repeated until the software is completely ready and no further changes are required. This process helps you meet customers' needs and identify the problems customers face while using the software. Usability testing is also referred to as User Experience testing.
Phases of Usability Testing
There are five phases in usability testing which are followed by the system when usability
testing is performed. These are given below:
1. Prepare your product or design to test: The first phase of usability testing is choosing a product and making it ready for usability testing. The product must expose the functions and operations the test will exercise, which makes this one of the most important phases of usability testing.
2. Find your participants: The second phase of usability testing is finding an employee
who is helping you with performing usability testing. Generally, the number of
participants that you need is based on a number of case studies. Generally, five
participants are able to find almost as many usability problems as you’d find using many
more test participants.
3. Write a test plan: This is the third phase of usability testing. The plan is one of the first
steps in each round of usability testing is to develop a plan for the test. The main
purpose of the plan is to document what you are going to do, how you are going to
conduct the test, what metrics you are going to find, the number of participants you are
going to test, and what scenarios you will use.
4. Take on the role of the moderator: This is the fourth phase of usability testing and
here the moderator plays a vital role that involves building a partnership with the
participant. Most of the research findings are derived by observing the participant’s
actions and gathering verbal feedback to be an effective moderator, you need to be able
to make instant decisions while simultaneously overseeing various aspects of the
research session.
5. Present your findings/ final report: This phase generally involves combining your
results into an overall score and presenting it meaningfully to your audience. An easy
method to do this is to compare each data point to a target goal and represent this as one
single metric based on a percentage of users who achieved this goal.
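The "percentage of users who achieved the goal" metric from phase 5 is straightforward to compute; the session data below is invented for illustration.

```python
def task_success_rate(results):
    """Fraction of participants who achieved the target goal, as a percentage."""
    if not results:
        return 0.0
    return 100.0 * sum(1 for r in results if r) / len(results)

# Five participants; True means the participant completed the task.
session = [True, True, False, True, False]
print(task_success_rate(session))  # 60.0
```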
Techniques/Methods of Usability Testing
There are various types of usability testing that when performed lead to efficient software.
But few of them which are the most widely used have been discussed here.

1. Guerilla Testing

It is a type of testing where testers go to public places and ask random users to try the prototype. A thank-you gift is offered to the users as a token of appreciation. It is the best way to perform usability testing during the early phases of the product development process. Users generally spare 5-10 minutes and give instant feedback on the product. The cost is comparatively low, as you don't need to hire participants. It is also known as corridor or hallway testing.

2. Usability Lab

Usability lab testing is conducted in a lab environment where moderators (who ask for
feedback on the product) hire participants and ask them to take a survey on the product.
This test is performed on a tablet/desktop. The participant count can be 8-10 which is a bit
costlier than Guerilla testing as you need to hire participants, arrange a place, and conduct
testing.

3. Screen or Video Recording

In screen or video recording testing, the screen is recorded while the user acts (navigates and uses the product). This testing reveals how the user thinks while using the product. It typically involves around 10 users for 15 minutes each, and it helps surface the issues users may face while interacting with the product.
Generally, there are two studies in usability testing –
1. Moderated – A moderator guides the participant through the changes required in the product (software).
2. Unmoderated – There is no moderator (no human guidance); participants get a set of questions to work through.
While performing usability testing, all kinds of biases (be it friendly bias, social bias, etc.)
by the participants are avoided to have honest feedback on the product so as to improve
its durability.
Need for Usability Testing
The main benefit and purpose of usability testing is to identify usability problems with a design as early as possible, so they can be fixed before the design is implemented or mass-produced. As such, usability testing is often conducted on prototypes rather than finished products, with different levels of fidelity depending on the development phase.
Why Usability Testing?
When software is nearly ready, it is important to make sure that the user experience with the product is seamless. It should be easy to navigate, and all functions should work properly; otherwise the competitor's website will win the race. Therefore, usability testing is performed. The objective of usability testing is to understand customers' needs and requirements and how users interact with the product (software). The test checks all the features, functions, and purposes of the software.
The primary goals of usability testing are discovering problems (hidden issues) and opportunities, comparing against benchmarks, and comparing against other websites. The parameters tested during usability testing are efficiency, effectiveness, and satisfaction. It should be performed before any new design is finalized, and it should be iterated until all the necessary changes have been made. Improving the site consistently through usability testing enhances its performance, which in turn makes it the best website it can be.
Pros and Cons of Usability Testing
As every coin has two sides, usability testing has pros and cons. Some of the pros it has are:
 Gives excellent features and functionalities to the product
 Improves user satisfaction and fulfills requirements based on user’s feedback
 The product becomes more efficient and effective
The biggest cons of usability testing are the cost and time. The more usability testing is
performed, the more cost and time is being used.

Testing the Documentation


Testing documentation is the documentation of artifacts that are created during or before the testing of
a software application. Documentation reflects the importance of processes for the customer,
individual and organization.

Projects which contain all documents have a high level of maturity. Careful documentation can save

the time, efforts and wealth of the organization.


The test document is a necessary reference document, prepared by every test engineer before starting the test execution process. Generally, the test document is written while the developers are busy writing the code.

Once the test document is ready, the entire test execution process depends on it. The primary objective of writing a test document is to reduce or eliminate doubts about the testing activities.

Types of test document

In software testing, we have various types of test document, which are as follows:

o Test scenarios
o Test case
o Test plan
o Requirement traceability matrix(RTM)
o Test strategy
o Test data
o Bug report
o Test execution report

Test Scenarios

It is a document that defines the multiple ways or combinations of testing the application. Generally,
it is prepared to understand the flow of an application. It does not consist of any inputs and navigation
steps.

Test case

It is a detailed document that describes the step-by-step procedure for testing an application. It consists of the complete navigation steps and inputs and all the scenarios that need to be tested for the application. Test cases are written to maintain consistency, so that every tester follows the same approach when organizing the test document.

Test plan

It is a document that is prepared by the managers or test lead. It consists of all information about the
testing activities. The test plan consists of multiple components such as Objectives, Scope,
Approach, Test Environments, Test methodology, Template, Role & Responsibility, Effort
estimation, Entry and Exit criteria, Schedule, Tools, Defect tracking, Test Deliverable,
Assumption, Risk, and Mitigation Plan or Contingency Plan.

Requirement Traceability Matrix (RTM)

The Requirement Traceability Matrix (RTM) is a document which ensures that every requirement is covered by test cases. This document is created before the test execution process to verify that no test case was missed for any particular requirement.
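An RTM is essentially a mapping from requirements to the test cases that cover them, and the coverage gap it exposes can be computed mechanically. The requirement and test-case IDs below are made up for illustration.

```python
# Hypothetical RTM: requirement id -> list of covering test-case ids.
rtm = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # no test case written yet
}

def uncovered_requirements(matrix):
    """Return requirements with no covering test case, the gap an RTM exposes."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered_requirements(rtm))  # ['REQ-3']
```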
Test strategy

The test strategy is a high-level document that specifies the test types (levels) to be executed for the product, the kinds of techniques to be used, and the modules to be tested. The Project Manager approves it. It includes components such as documentation formats, objectives, test processes, scope, and customer communication strategy. Once approved, the test strategy cannot be modified.

Test data

It is the data prepared before the test is executed, and it is mainly used when implementing the test cases. Mostly, test data is kept in an Excel sheet and entered manually while performing a test case.

Test data is used to check the expected result (i.e., when the test data is entered, the expected outcome should match the actual result) and also to check the application's behavior with incorrect input data.

Bug report

The bug report is a document where we maintain a summary of all the bugs which occurred during the
testing process. This is a crucial document for both the developers and test engineers because, with
the help of bug reports, they can easily track the defects, report the bug, change the status of bugs
which are fixed successfully, and also avoid their repetition in further process.

Test execution report

It is the document prepared by the test leads after the entire test execution process is completed. The test summary report describes the stability of the product and contains information such as the modules, the number of test cases written, executed, passed, and failed, and their percentages. Each module has a separate spreadsheet.
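The per-module percentages in a test execution report are simple arithmetic over the raw counts; the module names and numbers below are invented for illustration.

```python
# Hypothetical per-module counts: (written, executed, passed, failed).
modules = {
    "Login":  {"written": 20, "executed": 20, "passed": 18, "failed": 2},
    "Search": {"written": 15, "executed": 12, "passed": 10, "failed": 2},
}

def summarize(modules):
    """Compute the pass percentage per module, as in a test summary report."""
    report = {}
    for name, m in modules.items():
        pct = 100.0 * m["passed"] / m["executed"] if m["executed"] else 0.0
        report[name] = round(pct, 1)
    return report

print(summarize(modules))  # {'Login': 90.0, 'Search': 83.3}
```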

Security testing

Security Testing is a type of Software Testing that uncovers vulnerabilities of the system
and determines that the data and resources of the system are protected from possible
intruders. It ensures that the software system and application are free from any threats or
risks that can cause a loss. Security testing of any system is focused on finding all possible
loopholes and weaknesses of the system which might result in the loss of information or
repute of the organization. Security testing is a type of software testing that focuses on
evaluating the security of a system or application. The goal of security testing is to identify
vulnerabilities and potential threats, and to ensure that the system is protected against
unauthorized access, data breaches, and other security-related issues.
Goal of Security Testing: The goal of security testing is to:
 To identify the threats in the system.
 To measure the potential vulnerabilities of the system.
 To help in detecting every possible security risks in the system.
 To help developers in fixing the security problems through coding.
 The goal of security testing is to identify vulnerabilities and potential threats in a system
or application, and to ensure that the system is protected against unauthorized access,
data breaches, and other security-related issues. The main objectives of security testing
are to:
 Identify vulnerabilities: Security testing helps identify vulnerabilities in the system, such
as weak passwords, unpatched software, and misconfigured systems, that could be
exploited by attackers.
 Evaluate the system’s ability to withstand an attack: Security testing evaluates the
system’s ability to withstand different types of attacks, such as network attacks, social
engineering attacks, and application-level attacks.
 Ensure compliance: Security testing helps ensure that the system meets relevant security
standards and regulations, such as HIPAA, PCI DSS, and SOC2.
 Provide a comprehensive security assessment: Security testing provides a
comprehensive assessment of the system’s security posture, including the identification
of vulnerabilities, the evaluation of the system’s ability to withstand an attack, and
compliance with relevant security standards.
 Help organizations prepare for potential security incidents: Security testing helps
organizations understand the potential risks and vulnerabilities that they face, enabling
them to prepare for and respond to potential security incidents.
 Identify and fix potential security issues before deployment to production: Security
testing helps identify and fix security issues before the system is deployed to production.
This helps reduce the risk of a security incident occurring in a production environment.
Principle of Security Testing: Below are the six basic principles of security testing:
 Confidentiality
 Integrity
 Authentication
 Authorization
 Availability
 Non-repudiation
Major Focus Areas in Security Testing:
 Network Security
 System Software Security
 Client-side Application Security
 Server-side Application Security
 Authentication and Authorization: Testing the system’s ability to properly authenticate
and authorize users and devices. This includes testing the strength and effectiveness of
passwords, usernames, and other forms of authentication, as well as testing the system’s
access controls and permission mechanisms.
 Network and Infrastructure Security: Testing the security of the system’s network and
infrastructure, including firewalls, routers, and other network devices. This includes
testing the system’s ability to defend against common network attacks such as denial of
service (DoS) and man-in-the-middle (MitM) attacks.
 Database Security: Testing the security of the system’s databases, including testing for
SQL injection, cross-site scripting, and other types of attacks.
 Application Security: Testing the security of the system’s applications, including testing
for cross-site scripting, injection attacks, and other types of vulnerabilities.
 Data Security: Testing the security of the system’s data, including testing for data
encryption, data integrity, and data leakage.
 Compliance: Testing the system’s compliance with relevant security standards and
regulations, such as HIPAA, PCI DSS, and SOC2.
 Cloud Security: Testing the security of cloud-based systems and services.
Types of Security Testing:
1. Vulnerability Scanning: Vulnerability scanning is performed with the help of
automated software to scan a system to detect the known vulnerability patterns.
2. Security Scanning: Security scanning is the identification of network and system
weaknesses. Later on it provides solutions for reducing these defects or risks. Security
scanning can be carried out in both manual and automated ways.
3. Penetration Testing: Penetration testing is the simulation of the attack from a malicious
hacker. It includes an analysis of a particular system to examine for potential
vulnerabilities from a malicious hacker that attempts to hack the system.
4. Risk Assessment: In risk assessment testing security risks observed in the organization
are analyzed. Risks are classified into three categories i.e., low, medium and high. This
testing endorses controls and measures to minimize the risk.
5. Security Auditing: Security auditing is an internal inspection of applications and
operating systems for security defects. An audit can also be carried out via line-by-line
checking of code.
6. Ethical Hacking: Ethical hacking is different from malicious hacking. The purpose of
ethical hacking is to expose security flaws in the organization’s system.
7. Posture Assessment: It combines security scanning, ethical hacking and risk assessments to provide the overall security posture of an organization.
8. Application security testing: Application security testing is a type of testing that
focuses on identifying vulnerabilities in the application itself. It includes testing the
application’s code, configuration, and dependencies to identify any potential
vulnerabilities.
9. Network security testing: Network security testing is a type of testing that focuses on
identifying vulnerabilities in the network infrastructure. It includes testing firewalls,
routers, and other network devices to identify potential vulnerabilities.
10. Social engineering testing: Social engineering testing is a type of testing that simulates
phishing, baiting, and other types of social engineering attacks to identify vulnerabilities
in the system’s human element.
11. Tools such as Nessus, OpenVAS, and Metasploit can be used to automate and simplify the process of security testing. It is important to perform security testing regularly and to fix any vulnerabilities or threats identified during testing immediately, to protect the system from potential attacks.
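One class of finding mentioned above, weak passwords, can be illustrated with a toy rule-based check. The rules below are illustrative assumptions, not a real password policy or security standard.

```python
import re

def weak_password_findings(password):
    """Toy vulnerability check for the 'weak password' class of findings.

    The rules are illustrative, not a real security standard.
    """
    findings = []
    if len(password) < 8:
        findings.append("too short")
    if not re.search(r"[A-Z]", password):
        findings.append("no uppercase letter")
    if not re.search(r"\d", password):
        findings.append("no digit")
    return findings

print(weak_password_findings("admin"))       # ['too short', 'no uppercase letter', 'no digit']
print(weak_password_findings("S3curePass"))  # []
```

Real vulnerability scanners apply the same idea, matching a system's state against a catalogue of known weakness patterns, at a much larger scale.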

Advantages

1. Identifying vulnerabilities: Security testing helps identify vulnerabilities in the system


that could be exploited by attackers, such as weak passwords, unpatched software, and
misconfigured systems.
2. Improving system security: Security testing helps improve the overall security of the
system by identifying and fixing vulnerabilities and potential threats.
3. Ensuring compliance: Security testing helps ensure that the system meets relevant
security standards and regulations, such as HIPAA, PCI DSS, and SOC2.
4. Reducing risk: By identifying and fixing vulnerabilities and potential threats before the
system is deployed to production, security testing helps reduce the risk of a security
incident occurring in a production environment.
5. Improving incident response: Security testing helps organizations understand the
potential risks and vulnerabilities that they face, enabling them to prepare for and
respond to potential security incidents.

Disadvantages:

1. Resource-intensive: Security testing can be resource-intensive, requiring significant


hardware and software resources to simulate different types of attacks.
2. Complexity: Security testing can be complex, requiring specialized knowledge and
expertise to set up and execute effectively.
3. Limited testing scope: Security testing may not be able to identify all types of
vulnerabilities and threats.
4. False positives and negatives: Security testing may produce false positives or false
negatives, which can lead to confusion and wasted effort.
5. Time-consuming: Security testing can be time-consuming, especially if the system is
large and complex.
6. Difficulty in simulating real-world attacks: It’s difficult to simulate real-world attacks,
and it’s hard to predict how attackers will interact with the system.

Testing in the Agile Environment


Agile Testing is a type of software testing that follows the principles of agile software
development to test the software application. All members of the project team along with
the special experts and testers are involved in agile testing. Agile testing is not a separate
phase and it is carried out with all the development phases i.e. requirements, design and
coding, and test case generation. Agile testing takes place simultaneously throughout the
Development Life Cycle. Agile testers participate in the entire development life cycle along
with development team members and the testers help in building the software according to
the customer requirements and with better design and thus code becomes possible. The
agile testing team works as a single team towards the single objective of achieving quality.
Agile Testing has shorter time frames called iterations or loops. This methodology is also
called the delivery-driven approach because it provides a better prediction on the workable
products in less duration time.
 Agile testing is an informal process that is specified as a dynamic type of testing.
 It is performed regularly throughout every iteration of the Software Development
Lifecycle (SDLC).
 Customer satisfaction is the primary concern for agile test engineers at some stage in
the agile testing process.
Features of Agile Testing
Some of the key features of agile software testing are:
 Simplistic approach: In agile testing, testers perform only the necessary tests but at the
same time do not leave behind any essential tests. This approach delivers a product that
is simple and provides value.
 Continuous improvement: In agile testing, agile testers depend mainly on feedback
and self-learning for improvement and they perform their activities efficiently
continuously.
 Self-organized: Agile testers are highly efficient and tend to solve problems by
bringing teams together to resolve them.
 Testers enjoy work: In agile testing, testers enjoy their work and thus will be able to
deliver a product with the greatest value to the consumer.
 Encourage Constant communication: In agile testing, efficient communication
channels are set up with all the stakeholders of the project to reduce errors and
miscommunications.
 Constant feedback: Agile testers need to constantly provide feedback to the developers
if necessary.
Agile Testing Principles
 Shortening feedback iteration: In Agile Testing, the testing team gets to know the
product development and its quality for each and every iteration. Thus continuous
feedback minimizes the feedback response time and the fixing cost is also reduced.
 Testing is performed alongside development: Agile testing is not a separate phase. It is performed alongside the development phase and ensures that the features implemented during an iteration are actually done. Testing is not deferred to a later phase.
 Involvement of all members: Agile testing involves each and every member of the
development team and the testing team. It includes various developers and experts.
 Documentation is weightless: In place of global test documentation, agile testers use
reusable checklists to suggest tests and focus on the essence of the test rather than the
incidental details. Lightweight documentation tools are used.
 Clean code: The defects that are detected are fixed within the same iteration. This
ensures clean code at any stage of development.
 Constant response: Agile testing helps to deliver responses or feedback on an ongoing
basis. Thus, the product can meet the business needs.
 Customer satisfaction: In agile testing, customers are exposed to the product
throughout the development process. Throughout the development process, the
customer can modify the requirements, and update the requirements and the tests can
also be changed as per the changed requirements.
 Test-driven: In agile testing, the testing needs to be conducted alongside the
development process to shorten the development time. But testing is implemented after
the implementation or when the software is developed in the traditional process.
Agile Testing Methodologies
Some of the agile testing methodologies are:
1. Test-Driven Development (TDD): TDD is the software development process relying
on creating unit test cases before developing the actual code of the software. It is an
iterative approach that combines 3 operations, programming, creation of unit tests, and
refactoring.
2. Behavior Driven Development (BDD): BDD is agile software testing that aims to
document and develop the application around the user behavior a user expects to
experience when interacting with the application. It encourages collaboration among the
developer, quality experts, and customer representatives.
3. Exploratory Testing: In exploratory testing, the tester has the freedom to explore the
code and create effective and efficient software. It helps to discover the unknown risks
and explore each aspect of the software functionality.
4. Acceptance Test-Driven Development (ATDD): ATDD is a collaborative process
where customer representatives, developers, and testers come together to discuss the
requirements, and potential pitfalls and thus reduce the chance of errors before coding
begins.
5. Extreme Programming (XP): Extreme programming is a customer-oriented
methodology that helps to deliver a good quality product that meets customer
expectations and requirements.
6. Session-Based Testing: It is a structured and time-based approach that involves the
progress of exploratory testing in multiple sessions. This involves uninterrupted testing
sessions that are time-boxed with a duration varying from 45 to 90 minutes. During the
session, the tester creates a document called a charter document that includes various
information about their testing.
7. Dynamic Software Development Method (DSDM): DSDM is an agile project
delivery framework that provides a framework for building and maintaining systems. It
can be used by users, developers, and testers.
8. Crystal Methodologies: This methodology focuses on people and their interactions
when working on the project instead of processes and tools. The suitability of the
crystal method depends on three dimensions, team size, criticality, and priority of the
project.
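The TDD cycle from methodology 1 (write a failing unit test first, then the simplest code that makes it pass) can be sketched with Python's standard `unittest` module; the `add` function and test names are illustrative.

```python
import unittest

# TDD sketch: the tests below are written first; add() is the minimal
# implementation that makes them pass. Names are illustrative.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negatives(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    # exit=False so the test run does not terminate the interpreter.
    unittest.main(argv=["tdd"], exit=False, verbosity=0)
```

In a real TDD loop the test would initially fail (red), the implementation would be written to pass it (green), and the code would then be refactored with the test as a safety net.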
Testing Web and Mobile Applications
Mobile App Testing:
Mobile app testing refers to the process of testing application software built for handheld mobile devices. It checks the mobile app for functionality, usability, compatibility, etc.
Web App Testing:
Web app testing refers to the process of testing application software hosted on the web to ensure its quality, functionality, usability, etc. It is also known as web testing or website testing.
Mobile App Testing vs Web App Testing
1. Mobile apps are software programs used on mobile devices, whereas web apps are software programs used on a computer.
2. Mobile applications are developed for a broader range of users, whereas web applications are developed for a comparatively narrower range of users.
3. New mobile applications are downloaded from an app store, whereas web applications are updated on the website itself.
4. It is not easy to create a responsive design for small-screen devices such as mobile phones and tablets, whereas it is easier to code a responsive design for large-screen devices such as desktops and laptops.
5. Mobile storage capacity for downloading apps and multimedia is smaller than that of a desktop or laptop, which can make mobile app testing difficult, whereas desktops and laptops have larger storage capacity.
6. Mobile apps sometimes do not require an internet connection, but connection speed and quality (e.g., of an LTE connection) matter, whereas web apps generally require an internet connection to perform any task.
7. Testing mobile apps is complex and complicated because different mobile devices have many different functionalities, whereas testing web applications against desktop functionality is easier and simpler.
8. The testing team has to check the performance of mobile apps on both fully charged and low-charged devices, because an application that drains battery life is quickly deleted, whereas web apps have no such battery-life problem.
9. Mobile app testing has to consider different screen sizes, different OEMs (original equipment manufacturers), storage capacities, etc., whereas web app testing does not.
10. The testing team has to focus on the interaction of mobile devices with the user's movements, voice, environment, eye movements, etc., as mobiles offer a variety of ways to perform operations, whereas web testing offers fewer such interaction options.
11. Tools or frameworks for mobile app testing include Appium, Espresso, XCUITest, Xamarin, Robotium, and more, whereas tools for web app testing include Selenium (the most popular), WebLOAD, Acunetix, Netsparker, and more.
12. Mobile testing may extend to tablets and peripheral devices like smartwatches, fitness trackers, and even medical devices like heart pacemakers, whereas web testing covers mice, webcams, game controllers, keyboards, and other peripheral devices.
UNIT V TEST AUTOMATION AND TOOLS

Automated Software Testing

WHAT IS TEST AUTOMATION?


Developing software to test the software is called test automation.

Test automation can help address several problems.

 Automation saves time, as software can execute test cases faster than a human can.
The time thus saved can be used by test engineers to
1. develop additional test cases to achieve better coverage;
2. perform some esoteric or specialized tests like ad hoc testing; or
3. perform some extra manual testing.

 Test automation can free test engineers from mundane tasks and let them focus on
more creative ones. For example, ad hoc testing requires intuition and creativity to
test the product from perspectives that planned test cases may have missed. If too
many planned test cases must be run manually and adequate automation does not
exist, the test team may spend most of its time on test execution.
Automating the more mundane tasks gives test engineers time for creative and
challenging tasks.
 Automated tests can be more reliable. When an engineer executes a particular test
case many times manually, there is a chance of human error. As with all machine-
oriented activities, automation can be expected to produce more reliable results every
time, and it eliminates boredom and fatigue.
 Automation helps in immediate testing -automation reduces the time gap between
development and testing as scripts can be executed as soon as the product build is
ready. Automated testing need not wait for the availability of test engineers.
 Automation can protect an organization against attrition of test engineers
Automation can also be used as a knowledge transfer tool to train test engineers on
the product as it has a repository of different tests for the product.
 Test automation opens up opportunities for better utilization of global resources
Manual testing requires the presence of test engineers, but automated tests can be
run round the clock, twenty- four hours a day and seven days a week.

This will also enable teams in different parts of the world, in different time zones, to
monitor and control the tests, thus providing round the- clock coverage.

 Certain types of testing cannot be executed without automation-For example, if


we want to study the behavior of a system with thousands of users logged in, there is
no way one can perform these tests without using automated tools.
 Automation means end-to-end, not test execution alone -Automation should
consider all activities such as picking up the right product build, choosing the right
configuration, performing installation, running the tests, generating the right test data,
analyzing the results, and filing the defects in the defect repository. When talking
about automation, this large picture should always be kept in mind.

Automate Testing of Web Applications

A test case is a set of sequential steps to execute a test operating on a set of


predefined inputs to produce certain expected outputs. There are two types of test cases –
automated and manual. A manual test case is executed manually while an automated test
case is executed using automation.

As we have seen earlier, testing involves several phases and several types of testing.
Some test cases are repeated several times during a product release, because the product is
built several times. The table below describes some test cases for the login example and how
login can be tested under different types of testing.

S.No | Test case | Type of testing
1 | Check whether login works | Functionality
2 | Repeat login operation in a loop for 48 hours | Reliability
3 | Perform login from 10,000 clients | Load/stress testing
4 | Measure time taken for login operations in different conditions | Performance
5 | Run login operation from a machine running the Japanese language | Internationalization

From the above table, it is observed that there are two important dimensions:
1) What operations have to be tested.
2) How the operations have to be tested.

The "how" portion is called scenarios.

What an operation has to do is a product-specific feature; how the tests are to be run
is a framework-specific requirement. Framework requirements are the generic requirements
for all products being tested in an organization.

When a set of test cases is combined and associated with a set of scenarios, they are
called “test suite”. A test suite is nothing but a set of test cases that are automated and
scenarios that are associated with the test cases.

Figure: framework for test automation. User-defined scenarios tell the framework/harness
(test tool) how to execute the tests; the test cases define what each test should do.

SKILLS NEEDED FOR AUTOMATION


There are different “Generations of Automation”. The skills required for automation
depends on what generation automation the company is in or desires to be in the near future.

The automation of testing is broadly classified into three generations.


a. First generation – Record and playback
 Record and playback avoids the repetitive nature of executing tests. Almost
all the test tools available in the market have the record and playback
feature.
 A test engineer records the sequence of actions by keyboard characters or
mouse clicks and those recorded scripts are played back later, in the same
order as they were recorded. But this generation of tools has several
disadvantages.
 The scripts may contain hard-coded values, thereby making it difficult to
perform general types of tests.
 When the application changes, all the scripts have to be re-recorded,
thereby increasing the test maintenance costs.
b. Second generation-Data-driven
 This method helps in developing test scripts that generate the set of input
conditions and corresponding expected outputs.
 This enables the tests to be repeated for different input and output conditions.
The approach takes as much time and effort as the product itself.

c. Third generation-Action-driven
 This technique enables a layman to create automated tests. There are no input
and expected output conditions required for running the tests.
 All actions that appear on the application are automatically tested, based on a
generic set of controls defined for automation.
 The set of actions are represented as objects and those objects are reused. The
user needs to specify only the operations and everything else that is needed for
those actions are automatically generated.
 Hence, automation in the third generation involves two major aspects- “test case
automation” and “framework design”.
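The data-driven (second-generation) approach described above can be sketched in a few lines of Python. The login() function and the data table are hypothetical stand-ins for a real product operation and its test data:

```python
# Data-driven testing: the same test logic runs once per row of input/expected data,
# so adding a new input condition means adding a row, not writing a new script.
# login() and the data table below are hypothetical illustrations.

def login(user, password):
    # Stand-in for the operation under test.
    return user == "admin" and password == "hello123"

# Each row: (input user, input password, expected output).
test_data = [
    ("admin", "hello123", True),
    ("admin", "wrong", False),
    ("", "", False),
]

def run_data_driven():
    results = []
    for user, password, expected in test_data:
        actual = login(user, password)
        results.append("PASS" if actual == expected else "FAIL")
    return results
```

The test logic is written once; repeating the tests for different input and output conditions only requires extending the data table.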

Classification of skills for automation:

First generation — skills for test case automation: scripting languages; record-playback tool usage.
Second generation — skills for test case automation: scripting languages; programming languages; knowledge of data generation techniques; usage of the product under test.
Third generation — skills for test case automation: scripting languages; programming languages.
Third generation — skills for framework design: programming languages; design and architecture skills for framework creation; generic test requirements for multiple products; usage of the framework.

Selenium: Introducing Web Driver and Web Elements

1. Identifying the Types of Testing Amenable to Automation


Certain types of tests automatically lend themselves to automation
a. Stress, reliability, scalability, and performance testing These types of testing
require the test cases to be run from a large number of different machines for an
extended period of time, such as 24 hours, 48 hours, and so on. Test cases belonging
to these testing types become the first candidates for automation.
b. Regression tests Regression tests are repetitive in nature. These test cases are
executed multiple times during the product development phases.

c. Functional tests These kinds of tests may require a complex setup and thus require
specialized skill, which may not be available on an ongoing basis. Automating these
once, using the expert skill sets, enables less-skilled people to run these tests
on an ongoing basis.
2. Automating Areas Less Prone To Change
Automation should consider those areas where requirements go through lesser or
no changes. Normally change in requirements cause scenarios and new features to be
impacted, not the basic functionality of the product.
3. Automate Tests That Pertain to Standards
One of the tests that product may have to undergo is compliance to standards. For
example, a product providing a JDBC interface should satisfy the standard JDBC tests.
Automating for standards provides a dual advantage. Test suites developed for
standards are not only used for product testing but can also be sold as test tools for
the market.

4. Management Aspects in Automation


Prior to starting automation, adequate effort has to be spent to obtain management
commitment. It involves significant effort to develop and maintain automated tools; obtaining
management commitment is an important activity. Return on investment is another aspect
to be considered seriously.

DESIGN AND ARCHITECTURE FOR AUTOMATION


Design and architecture are important aspects of automation. As in product development,
the design has to represent all requirements in modules and in the interactions between
modules.

Both internal interfaces and external interfaces have to be captured by the design
and architecture. In the figure, thin arrows represent the internal interfaces and the
direction of flow, and thick arrows show the external interfaces. All the modules, their
purpose, and the interactions between them are described in the subsequent sections.
Architecture for test automation involves two major heads: a test infrastructure that
covers a test case database and a defect database or defect repository. Using this
infrastructure, the test framework provides a backbone that ties the selection and
execution of test cases.
1. External Modules

 There are two modules that are external modules to automation-TCDB and defect
DB. All the test cases, the steps to execute them, and the history of their execution
are stored in the TCDB.
 The test cases in TCDB can be manual or automated. The interface shown by thick
arrows represents the interaction between TCDB and the automation framework only
for automated test cases.
 Defect DB (defect database or defect repository) contains details of all the defects
found in the various products tested in a particular organization. It contains the
defects and all related information; test engineers submit defects manually for
manual test cases.
 For automated test cases, the framework can automatically submit defects to the
defect DB during execution.

2. Scenario and Configuration File Modules.


Scenarios are nothing but information on “how to execute a particular test case”.
A configuration file contains a set of variables that are used in automation. A configuration
file is important for running the test cases for various execution conditions and for
running the tests for various input and output conditions and states.

3. Test Cases and Test Framework Modules


Test case is an object for execution for other modules in the architecture and does not
represent any interaction by itself.
A test framework is a module that combines “what to execute” and “how they have to
be executed”. It picks up the specific test cases that are automated from TCDB and picks
up the scenarios and executes them.
The test framework is considered the core of automation design. It subjects the test
cases to different scenarios, and contains the main logic for interacting with,
initiating, and controlling all modules.
A test framework can be developed by the organization internally or can be bought from
the vendor.

4. Tools and Result Modules


 When a test framework performs its operations, there are a set of tools that may be
required. For example, when test cases are stored as source code files in TCDB, they
need to be extracted and compiled by build tools. In order to run the compiled code,
certain runtime tools and utilities may be required.
 For example, IP packet simulators. The results that come out of the tests run by the
test framework should not overwrite the results from previous test runs; the history
of all previous test runs should be recorded and kept as archives.

2020-2021 Jeppiaar Institute of Technology


5. Report Generator and Reports / Metrics Modules
 Once the results of a test run are available, the next step is to prepare the test reports
and metrics. Preparing reports is a complex and time- consuming effort and hence it
should be part of the automation design.
 There should be customized reports such as an executive report, which gives very high
level status; technical reports, which give a moderate level of detail of the test run; and
detailed or debug reports which are generated for developers to debug the failed test
cases and the product.
 The module that takes the necessary inputs and prepares a formatted report is called
a report generator. Once the results are available, the report generator can generate
metrics.
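A report generator of the kind described can be sketched as a single function that folds the same raw results into reports at different levels of detail. The result-record format (name, status, message) is an assumed convention, not a prescribed one:

```python
# Minimal report generator sketch: one set of raw results, three report levels.
# The record shape {"name", "status", "message"} is an assumed convention.

def generate_report(results, level="executive"):
    passed = [r for r in results if r["status"] == "pass"]
    failed = [r for r in results if r["status"] == "fail"]
    if level == "executive":            # very high-level status
        return f"{len(passed)}/{len(results)} test cases passed"
    if level == "technical":            # moderate detail: names of failures
        return {"passed": len(passed),
                "failed": [r["name"] for r in failed]}
    # "debug": full detail for developers debugging failed test cases
    return {"failures": [(r["name"], r["message"]) for r in failed]}

results = [
    {"name": "tc1", "status": "pass", "message": ""},
    {"name": "tc2", "status": "fail", "message": "timeout waiting for login"},
]
```

The same results feed all three audiences, which is why report generation belongs in the automation design rather than being assembled by hand per release.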

Locating Web Elements, Actions on Web Elements

Requirement 1: No hard coding in the test suite


 The variables for the test suite are called configuration variables. The file in which all
variable names and their associated values are kept is called configuration file.
 The variables belonging to the test tool and the test suite need to be separated so that
the user of the test suite need not worry about test tool variables.
 Changing test tool variables, without knowing their purpose, may impact the results
of the tests.
 Providing inline comment for each of the variables will make the test suite more
usable and may avoid improper usage of variables.

Ex: well documented config file


#Test Framework Configuration Parameter
TOOL_PATH =/tools
COMMONLIB_PATH =/tools/crm/lib
SUITE_PATH =/tools/crm

#Parameters common to all the test cases in the test suite


VERBOSE_LEVEL =3
MAX_ERRORS=200
USER_PASSWD =hello123

#Test Case1 Parameter


TC1_USR_CREATE =0 # 1=yes 0=no
TC1_USR_PASSWD=hello123
TC1_MAX_USRS =200
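A configuration file in the format above can be loaded with a small parser; the sketch below reuses keys from the sample file and strips both whole-line and inline `#` comments:

```python
# Parse a KEY=VALUE configuration file, ignoring blank lines and '#' comments
# (including inline comments such as "TC1_USR_CREATE =0  # 1=yes 0=no").

def load_config(text):
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = """
#Test Framework Configuration Parameter
TOOL_PATH =/tools
VERBOSE_LEVEL =3
TC1_USR_CREATE =0 # 1=yes 0=no
"""
```

Keeping all such variables in one file, rather than hard-coded in scripts, is exactly what Requirement 1 asks for.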

Requirement 2: Test case/ suite expandability


Points to considered during expansion are

 Adding a test case should not affect other test cases


 Adding a test case should not result in retesting the complete test suite
 Adding a new test suite to the framework should not affect existing test suites


Requirement 3: Reuse of code for different types of testing, test cases
Points to be considered during Reuse of codes are:
1) The test suite should only do what a test is expected to do; the test framework
takes care of the "how".

2) The test programs need to be modular to encourage reuse of code.

Requirement 4: Automatic setup and cleanup


When test cases expect a particular setup to run the tests, it will be very difficult to
remember each one of them and do the setup accordingly in the manual method. Hence, each
test program should have a “setup” program that will create the necessary setup before
executing the test cases. The test framework should have the intelligence to find out what
test cases are executed and call the appropriate setup program.

A setup for one test case may work negatively for another test case. Hence, it is
important not only to create the setup but also to clean it up once the test case completes.
Requirement 5: Independent test cases
Each test case should be executed alone; there should be no dependency between test
cases such as test case-2 to be executed after test case-1 and so on. This requirement enables
the test engineer to select and execute any test case at random without worrying about other
dependencies.
Requirement 6: Test case dependency
Making a test case dependent on another makes it necessary for a particular test case
to be executed before or after a dependent test case is selected for execution

Requirement 7: Insulating test cases during execution


Insulating test cases from the environment is an important requirement for the
framework or test tool. At the time of test case execution, there could be some events or
interrupts or signals in the system that may affect the execution.

Requirement 8: Coding standards and directory structure


Coding standards and proper directory structures for a test suite may help the new
engineers in understanding the test suite fast and help in maintaining the test suite.
Incorporating the coding standards improves portability of the code.
Requirement 9: Selective execution of test cases
A test framework contains many test suites.
A test suite contains many test programs.
A test program contains many test cases.
The selection of test cases need not be in any order and any combination should be
allowed. Allowing test engineers to select test cases reduces the time. These selections are
normally done as part of the scenario file. The selection of test cases can be done dynamically
just before running the test cases, by editing the scenario file.

Example scenario file — meaning:

test-pgm-name 2,4,1,7-10
    — test cases 2, 4, 1, 7-10 are selected for execution.
test-pgm-name
    — executes all test cases.
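The selection syntax shown in the scenario file (e.g. "2,4,1,7-10") can be expanded with a short parser, sketched here:

```python
# Expand a scenario-file selection such as "2,4,1,7-10" into the ordered
# list of test case numbers to execute. Order is preserved, since the
# requirement says selection need not be in any particular order.

def expand_selection(spec):
    cases = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cases.extend(range(int(lo), int(hi) + 1))
        else:
            cases.append(int(part))
    return cases
```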



Requirement 10: Random execution of test cases
Test engineer may sometimes need to select a test case randomly from a list of test
cases. Giving a set of test cases and expecting the test tool to select the test case is called
random execution of test cases. A test engineer selects a set of test cases from a test suite;
selecting a random test case from the given list is done by the test tool.

Ex: scenario file.


Random
test-pgm-name 2,1,5
    — the test tool selects one of test cases 2, 1, 5 for execution.

Random
test-pgm-name1 (2,1,5)
test-pgm-name2
test-pgm-name3
    — the test engineer wants one of test programs 1, 2, 3 to be randomly executed;
    if test-pgm-name1 is selected, one of test cases 2, 1, 5 is randomly executed;
    if test program 2 or 3 is selected, all test cases in that program are executed.
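Random execution can be sketched by letting the tool pick one case from the engineer's list. The seed parameter is only there to make the sketch reproducible; a real tool would not fix it:

```python
import random

# The engineer supplies a list of candidate test cases; the tool picks one
# at random. A local Random instance avoids disturbing global random state.

def pick_random_case(cases, seed=None):
    rng = random.Random(seed)
    return rng.choice(cases)

chosen = pick_random_case([2, 1, 5], seed=42)
```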

Requirement 11: parallel execution of test cases


In a multi-tasking and multi processing operating systems it is possible to make several
instances of the tests and make them run in parallel. Parallel execution simulates the
behavior of several machines running the same test and hence is very useful for performance
and load testing.

Ex: scenario file.


Instance, 5
test-pgm-name1 (3)
    — 5 instances of test case 3 in test-pgm-name1 are executed.

Instance, 5
test-pgm-name1 (2,1,5)
test-pgm-name2
test-pgm-name3
    — 5 instances of the test programs are created; within each of the five
    instances, test programs 1, 2, 3 are executed in sequence.

Requirement 12: Looping the test cases


Reliability testing requires the test cases to be executed in a loop. There are two types
of loops that are available.
1) iteration loop - gives the number of iterations of a particular test case to be
executed.
2) timed loop - which keeps executing the test cases in a loop till the specified time
duration is reached.

Ex: scenario file.


Repeat_loop, 50
test-pgm-name1 (3)
    — test case 3 in test-pgm-name1 is repeated 50 times.

Time_loop, 5 hours
test-pgm-name1 (2,1,5)
test-pgm-name2
test-pgm-name3
    — test cases 2, 1, 5 from test-pgm-name1 and all test cases from test
    programs 2 and 3 are executed in order, in a loop, for 5 hours.



Requirement 13: Grouping of test scenarios
Group scenarios allow the selected test cases to be executed in order, at random, in
a loop, or all at the same time. Grouping scenarios allows several tests to be executed
in a predetermined combination of scenarios.

Ex: scenario file.


Group_scenario1
Parallel, 2 AND repeat, 10@scen1
    — a group scenario created to execute 2 parallel instances of the
    individual scenario "scen1" in a loop 10 times.

Scen1
test-pgm1 (2,1,5)
test-pgm2
test-pgm3

Requirement 14: Test case execution based on previous results


One of the effective practices is to select the test cases that are not executed and test
cases that failed in the past and focus more on them. Some of the common scenarios that
require test cases to be executed based on the earlier results are

1. Rerun all test cases which were executed previously;


2. Resume the test cases from where they were stopped the previous time;
3. Rerun only failed/not run test cases; and
4. Execute all test cases that were executed previously.

Requirement 15: Remote execution of test cases


The central machine that allocates tests to multiple machines and co-ordinates the execution
and result is called test console or test monitor. In the absence of a test console, not only
does executing the results from multiple machines become difficult, collecting the results
from all those machines also becomes difficult.

Role of test console and multiple execution machine.



Requirement 16: Automatic archival of test data
The test cases have to be repeated the same way as before, with the same scenarios,
same configuration variables and values, and so on. This requires that all the related
information for the test cases have to be archived. It includes

1) What configuration variables were used


2) What scenario was used
3) What program were executed and from what path

Requirement 17: Reporting scheme


Every test suite needs to have a reporting scheme from where meaningful reports can be
extracted. As we have seen in the design and architecture of framework, the report generator
should have the capability to look at the results file and generate various reports. Audit logs
are very important to analyze the behavior of a test suite and a product. A reporting scheme
should include
1. When the framework, scenario, test suite, test program, and each test case were
started/ completed;
2. Result of each test case;
3. Log messages;
4. Category of events and log of events; and
5. Audit reports

Requirement 18: Independent of languages


A framework or test tool should provide a choice of languages and scripts that are
popular in the software development area.

 A framework should be independent of programming languages and scripts.


 A framework should provide choice of programming languages, scripts, and
their combinations.
 A framework or test suite should not force a language/script.
 A framework or test suite should work with different test programs written
using different languages, and scripts.
 The internal scripts and options used by the framework should allow the
developers of a test suite to migrate to better framework.

Requirement 19: portability to different platforms


With the advent of platform-independent languages and technologies, there are many
products in the market that are supported in multiple OS and language platforms.

 The framework and its interfaces should be supported on various platforms.


 Portability to different platforms is a basic requirement for test tool/ test suite.
 The language/script used in the test suite should be selected carefully so that
it runs on different platforms.
 The language/ script written for the test suite should not contain platform-
specific calls.
CHALLENGES IN AUTOMATION
 Test automation presents some very unique challenges. The most important of
these challenges is management commitment.
 Automation should not be viewed as a panacea for all problems nor should it be
perceived as a quick-fix solution for all the quality problems in a product.
 The main challenge here is because of the heavy front-loading of costs of test
automation, management starts to look for an early payback.
 Successful test automation endeavors are characterized by unflinching
management commitment, a clear vision of the goals, and the ability to set
realistic short-term goals that track progress with respect to the long-term vision.

Different Web Drivers, Understanding Web Driver Events

Web Driver events:

Step 1: Decide what measurements are important and collect data accordingly.
Example measurements: effort spent on testing, number of defects, number of test cases.

Step 2: Define how to combine data points or measurements to provide meaningful
metrics. A particular metric can use one or more measurements.

Step 3: Work out the operational requirements for measurements: who should collect
the measurements, who should receive the analysis, and so on. This step helps decide
the appropriate periodicity for the measurements and assigns operational responsibility
for collecting, recording, and reporting them.
Daily measurements: number of test cases executed, number of defects found, defects fixed.
Weekly measurements: for example, how many test cases produced 40 defects.

Step 4: Analyze the metrics to identify both positive areas and improvement areas in
product quality.

Step 5: Take the necessary action and follow up on it.

Step 6: Continue with the next iteration of the metrics program, measuring a different
set of measurements, leading to more refined metrics that address different issues.

Knowing only how much testing got completed does not answer the question of when the
testing will get completed and when the product will be ready for release. To answer
these questions, one needs to estimate the following:

Days needed to complete testing = (total test cases yet to be executed)
                                  / (test case execution productivity)

where test case execution productivity = test cases executed per person-day.

Days needed for defect fixes = (outstanding defects yet to be fixed
                                + defects expected in future test cycles)
                               / (defect fixing capability)

Days needed for release = max(days needed for testing, days needed for defect fixes)

A more accurate estimate, accounting for regression testing:

Days needed for release = max(days needed for testing,
                              days needed for defect fixes
                              + days needed for regressing outstanding defect fixes)
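The estimates above are straightforward arithmetic; the sketch below applies the formulas to made-up inputs (all numbers are illustrative, not data from any real project):

```python
# Estimate days to release from the formulas above. All inputs are
# illustrative numbers, not data from a real project.

def days_for_testing(pending_test_cases, cases_per_person_day):
    return pending_test_cases / cases_per_person_day

def days_for_fixes(outstanding, expected_future, fixes_per_day):
    return (outstanding + expected_future) / fixes_per_day

def days_to_release(test_days, fix_days, regression_days=0):
    # With regression testing, fix time also includes re-verification time.
    return max(test_days, fix_days + regression_days)

t = days_for_testing(pending_test_cases=300, cases_per_person_day=20)   # 15.0
f = days_for_fixes(outstanding=40, expected_future=20, fixes_per_day=6)  # 10.0
```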

1. Effort Variance (Planned Vs Actual)

When the baselined effort estimates, revised effort estimates, and actual effort are
plotted together for all the phases of SDLC, it provides many insights about the estimation
process. As different set of people may get involved in different phases, it is a good idea to
plot these effort numbers phase-wise. A sample data for each of the phase is plotted in the
chart.
If there is a substantial difference between the baselined and revised effort, it points to
incorrect initial estimation. Calculating effort variance for each of the phases provides a
quantitative measure of the relative difference between the revised and actual efforts:

Effort variance % = ((actual effort − revised estimate) / revised estimate) × 100

Figure: phase-wise effort variation (person-days).

Sample variance percentage by phase:

Phase      | Req | Design | Coding | Testing | Doc | Defect fixing
Variance % | 7.1 | 8.7    | 5      | 0       | 40  | 15

 All the baseline estimates, revised estimates, and actual effort are plotted together for
each of the phases. The variance can be consolidated as shown in the above table.
 A variance of more than 5% in any SDLC phase indicates scope for improvement in the
estimation. In the sample, the variance is acceptable only for the coding and testing phases.
 The variance can also be negative; a negative variance indicates an over-estimate.
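The variance formula and the 5% threshold can be applied to the sample table as follows (a sketch; the phase names and values come from the table above):

```python
# Effort variance % = (actual effort - revised estimate) / revised estimate * 100.
# A phase whose variance magnitude exceeds 5% signals scope for better estimation.

def effort_variance_pct(actual, revised):
    return (actual - revised) / revised * 100

def flag_phases(variances, threshold=5.0):
    return [phase for phase, v in variances.items() if abs(v) > threshold]

# Sample variance percentages by phase, from the table above.
sample = {"Req": 7.1, "Design": 8.7, "Coding": 5, "Testing": 0,
          "Doc": 40, "Defect fixing": 15}
```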

2. Schedule Variance (Planned vs Actual)


Schedule variance is calculated at the end of every milestone to find out how well the
project is doing with respect to the schedule.

Figure: schedule variance chart — estimated days vs remaining days (y-axis: number of days).
To get a real picture of the schedule in the middle of project execution, it is important
to calculate the "remaining days yet to be spent" on the project and plot it along with the
"actual schedule spent", as in the above chart. "Remaining days yet to be spent" can be
calculated by adding up all remaining activities. If it is not calculated and plotted, the
chart gives no value in the middle of the project, because the deviation cannot be inferred
visually from the chart. The remaining days in the schedule become zero when the release
is met.

Effort and schedule variance have to be analyzed in totality, not in isolation, because
while effort is a major driver of cost, schedule determines how well a product can exploit
market opportunities. Variance can be classified into negative variance, zero variance,
acceptable variance, and unacceptable variance.

3. Effort Distribution Across Phases


Adequate and appropriate effort needs to be spent in each of the SDLC phase for a
quality product release.
The distribution percentage across the different phases can be estimated at the time of
planning and these can be compared with the actual at the time of release for getting a
comfort feeling on the release and estimation methods. A sample distribution of effort across
phases is given in figure.
Figure: actual effort distribution across phases (Req > Testing > Design > Bug fixing > Coding > Doc).
Mature organizations spend at least 10-15 % of the total effort in requirements and
approximately the same effort in the design phase. The effort percentage for testing depends
on the type of release and amount of change to the existing code base and functionality.
Typically, organizations spend about 20 -50 % of their total effort in testing.

II ) PROGRESS METRICS
One of the main objectives of testing is to find as many defects as possible before any
customer finds them. The number of defects that are found in the product is one of the main
indicators of quality.
Defects get detected by the testing team and get fixed by the development team.
Defect metrics are further classified in to
1. test defect metrics
2. development defect metrics

The progress chart gives


 pass rate
 fail rate of executed test cases
 pending test cases
 test cases that are waiting for defects to be fixed.

Figure: weekly progress chart showing the percentage of test cases passed, failed, blocked, and not run.
A scenario represented by such a progress chart shows that not only is testing progressing well, but also that the product quality is improving. Had the chart shown a trend that, as the weeks progress, the "not run" cases are not reducing in number, "blocked" cases are increasing in number, or "pass" cases are not increasing, it would clearly point to quality problems in the product that prevent the product from being ready for release.
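The pass, fail, blocked, and not-run figures behind such a chart can be tallied directly from raw test-case statuses. A small illustrative sketch (status values and counts are invented):

```python
# Hypothetical sketch: summarising one week of test execution the way
# the progress chart does. Status names and counts are sample data.
from collections import Counter

statuses = ["pass"] * 70 + ["fail"] * 10 + ["blocked"] * 5 + ["not run"] * 15
counts = Counter(statuses)

executed = counts["pass"] + counts["fail"]        # only executed cases count
pass_rate = counts["pass"] / executed * 100       # pass rate of executed cases
fail_rate = counts["fail"] / executed * 100
print(f"pass rate {pass_rate:.1f}%, fail rate {fail_rate:.1f}%, "
      f"blocked {counts['blocked']}, not run {counts['not run']}")
```

Plotting these four numbers week by week reproduces the stacked progress chart described above.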
1. Test Defect Metrics
Some organizations classify defects by assigning a defect priority (for example P1, P2, P3, and so on). Some organizations use defect severity levels (for example S1, S2, S3, and so on). The priority of a defect can change dynamically once assigned. Severity is absolute and does not change often, as it reflects the state and quality of the product.
Table -Defect priority and defect severity – sample interpretation.
Defect priority reflects the urgency of fixing the defect, while defect severity reflects the impact of the defect at the functionality level.

Priority  What it means

1  Fix the defect on highest priority; fix it before the next build
2  Fix the defect on high priority before the next test cycle
3  Fix the defect on moderate priority when time permits, before the release
4  Postpone this defect for the next release or live with this defect
Severity  What it means

1  The basic product functionality failing or product crashes
2  Unexpected error condition or a functionality not working
3  A minor functionality is failing or behaves differently than expected
4  Cosmetic issue and no impact on the users

This defect classification is based on priority and severity.


Defect Classification What it Means
Extreme  Product crashes or unusable
 Need to be fixed immediately
Critical  Basic functionality of the product not working
 Needs to be fixed before next test cycle starts
Important  Extended functionality of the product not working
 Does not affect the progress of testing
 Fix it before the release
Minor  Product Behaves differently
 No impact on test team or customer
 Fix it when time permits
Cosmetic  Minor Irritant
 Need not be fixed for this release

a) Defect Find Rate

The purpose of testing is to find as many defects as possible early in the test cycle. However, this may not be possible for two reasons. First, not all features of a product may become available early; because of scheduling of resources, the features of a product arrive in a particular sequence. Second, some of the test cases may be blocked by show-stopper defects.
Once a majority of the modules become available and the defects that are blocking the
tests are fixed, the defect arrival rate increases. After a certain period of defect fixing and
testing, the arrival of defects tends to slow down and a continuation of that enables
product release. This results in a “bell curve” as shown in figure.

b) Defect fix rate

The purpose of development is to fix defects as soon as they arrive. If the goal of testing is to find defects as early as possible, it is natural to expect that the goal of development should be to fix defects as soon as they arrive. The defect fixing rate should keep pace with the defect arrival rate: if more defects are fixed late in the cycle, they may not get tested properly for all possible side-effects.

c) Outstanding defects rate


In a well executed project, the number of outstanding defects is very close to zero all the
time during the test cycle. The number of defects outstanding in the product is calculated by
subtracting the total defects fixed from the total defects found in the product.
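A minimal sketch of this calculation over a few weeks of invented find/fix numbers:

```python
# Sketch of the outstanding-defects calculation described above:
# outstanding = cumulative defects found - cumulative defects fixed.
# Weekly numbers are invented sample data.
found_per_week = [10, 25, 40, 30, 15]    # defects found each week
fixed_per_week = [5, 20, 38, 32, 20]     # defects fixed each week

outstanding = []
cum_found = cum_fixed = 0
for found, fixed in zip(found_per_week, fixed_per_week):
    cum_found += found
    cum_fixed += fixed
    outstanding.append(cum_found - cum_fixed)

print(outstanding)   # the trend should approach zero near release
```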

d) Priority outstanding rate


The curve obtained by modifying the outstanding defects rate curve to plot only the high-priority defects, filtering out the low-priority ones, is called the priority outstanding defects rate. This is an important metric because, closer to product release, the product team would not want to fix the low-priority defects.

Normally only high-priority defects are tracked during the period closer to release. Some high-priority defects may require a change in design or architecture and must be fixed immediately.

e) Defect trend
The effectiveness analysis increases when several perspectives of find rate, fix rate,
outstanding, and priority outstanding defects are combined.
Defect trend

The following observations can be made.


1. The find rate, fix rate, outstanding defects, and priority outstanding follow a bell
curve pattern, indicating readiness for release at the end of the 19th week.
2. A sudden downward movement, as well as an upward spike, in the defect fix rate needs analysis (13th to 17th week in the chart above).
3. By looking at the priority outstanding which shows close to zero defects in the 19th
week, it can be concluded that all outstanding defects belong to low priority.
4. A smooth priority outstanding rate suggests that priority defects were closely tracked
and fixed.

f) Defect Classification trend


Providing the perspective of defect classification in the chart helps in finding out release
readiness of the product. When talking about the total number of outstanding defects, some
of the questions that can be asked are

 How many of them are extreme defects?


 How many are critical?
 How many are important?
These questions require the charts to be plotted separately based on defect classification.
The sum of extreme, critical, important, minor, and cosmetic defects is equal to the total
number of defects.

Pie chart of defect distribution

g) Weighted defects trend

Weighted defects help in quick analysis of defects, without worrying about the classification of individual defects.

Weighted defects = (Extreme × 5) + (Critical × 4) + (Important × 3) + (Minor × 2) + (Cosmetic × 1)

Both “large defects” and “large number of small defects” affect product release.
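The weighted-defects formula above is straightforward to compute; the sketch below uses illustrative weekly counts (the weight of 1 for cosmetic defects is implied by the formula).

```python
# Sketch of the weighted-defects formula; the weekly counts are sample data.
WEIGHTS = {"extreme": 5, "critical": 4, "important": 3, "minor": 2, "cosmetic": 1}

def weighted_defects(counts):
    """Sum the defect counts, each multiplied by its classification weight."""
    return sum(WEIGHTS[cls] * n for cls, n in counts.items())

week9 = {"extreme": 2, "critical": 5, "important": 10, "minor": 8, "cosmetic": 4}
print(weighted_defects(week9))   # 2*5 + 5*4 + 10*3 + 8*2 + 4*1 = 80
```

Because a few extreme defects and many cosmetic ones can yield the same weighted total, the metric captures the combined effect of "large defects" and a "large number of small defects" on release readiness.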

Weighted defects trend.


From the figure it can be noted that
1. The ninth week has more weighted defects, which means the existence of a "large number of small defects," a "significant number of large defects," or a combination of the two. This is consistent with our interpretation of the same data using the stacked area chart.
2. The tenth week has a significant (more than 50) number of weighted defects
indicating the product is not ready for release.

h) Defect cause distribution

Logical questions that would arise are:


1. Why are those defects occurring and what are the root causes?
2. What areas must be focused for getting more defects out of testing?
Finding the root causes of the defects help in identifying more defects and
sometimes help in even preventing the defects.

Defect cause distribution chart

2. Development Defect Metrics


To map the defects to different components of the product, the parameter used is lines of code (LOC). The development defect metrics are
a) Component-wise defect distribution
b) Defect density and defect removal rate
c) Age analysis of outstanding defects
d) Introduced and reopened defects trend

a) Component wise Defect Distribution

When module-wise defect distribution is done, modules like install, reports, client, and database have more than 20 defects each, indicating that more focus and resources are needed for these components.
Knowing which components produce more defects helps in planning defect fixes and in deciding what to release.

b) Defect Density & defect removal rate


Defect density maps the defects in the product with the volume of code that is produced for the
product.

Defects per KLOC = Total defects found in the product / Total executable lines of code (in KLOC)

A variant of this metric calculates the denominator from only the added, modified, and deleted (AMD) code, to find how a particular release affects product quality.
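A one-line sketch of the defects-per-KLOC formula above, with invented figures:

```python
# Sketch of the defects-per-KLOC metric defined above (figures are sample data).
def defects_per_kloc(total_defects, executable_loc):
    """Defects found per thousand executable lines of code."""
    return total_defects / (executable_loc / 1000)

# e.g. 150 defects found in 50,000 executable lines of code
print(defects_per_kloc(150, 50_000))   # -> 3.0 defects per KLOC
```

For the AMD variant mentioned above, the second argument would be the count of added, modified, and deleted lines in the release rather than the full code base.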

c) Age Analysis of outstanding defect
The time needed to fix a defect may be proportional to its age. It helps in finding out whether
the defects are fixed as soon as they arrive and to ensure that long pending defects are given
adequate priority.

d) Introduced and reopened defects trend

Introduced defects (ID): when adding new code or modifying code to provide a defect fix, something that was working earlier may stop working. Such a defect is called an introduced defect.

Reopened defects: a fix provided in the code may not have fixed the problem completely, or some other modification may have reproduced a defect that was fixed earlier. These are called reopened defects.

Testing is not meant to find the same defects again; release readiness should consider the quality of defect fixes.

Test Reports

Productivity metrics combine several measurements and parameters with the effort spent on the product. They help in finding out the capability of the team and serve other purposes, such as

1. Estimating for the new release.


2. Finding out how well the team is progressing, and understanding the reasons for variations (both positive and negative) in results.
3. Estimating the number of defects that can be found
4. Estimating release data and quality
5. Estimating the cost involved in the release.

a) Defects per 100 Hours of Testing


Defects per 100 hours of testing= (Total defects found in the product for a period /
Total hours spent to get those defects) * 100

Test Cases Executed per 100 Hours of Testing


Test cases executed per 100 hours of testing = (Total test cases executed for a period /
Total hours spent in test execution) * 100

b) Test cases Developed per 100 Hours of Testing


Test cases developed per 100 hours of testing= (Total test cases developed for a period /
Total hours spent in test case development) * 100

c) Defects per 100 Test Cases


Defects per 100 test cases = (Total defects found for a period / Total test cases
executed for the same period) * 100

d) Defects per 100 Failed Test Cases
Defects per 100 failed test cases = (Total defects found for a period/Total test cases
failed due to those defects) * 100
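All of these "per 100" metrics share the same shape, so they can be computed with one helper; the figures below are invented sample data.

```python
# Sketch of the "per 100" productivity metrics listed above.
# All input figures are invented sample data for illustration.
def per_100(numerator, denominator):
    return numerator / denominator * 100

defects_found, test_hours = 45, 300
executed, failed = 900, 60

print(per_100(defects_found, test_hours))  # defects per 100 hours of testing -> 15.0
print(per_100(defects_found, executed))    # defects per 100 test cases -> 5.0
print(per_100(defects_found, failed))      # defects per 100 failed test cases -> 75.0
```

The same helper covers test cases executed (or developed) per 100 hours: pass the test-case count and the hours spent for that activity.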

e) Test Phase Effectiveness

The following few observations can be made

1. A good proportion of defects were found in the early phases of testing (UT and CT).
2. Product quality improved from phase to phase (shown by a lower percentage of defects found in the later test phases – IT and ST).

Test phase effectiveness

f) Closed Defect Distribution

The closed defect distribution helps in this analysis as shown in the figure below. From the
chart, the following observations can be made.

1. Only 28% of the defects found by the test team were fixed in the product. This suggests that product quality needs improvement before release.
2. Of the defects filed, 19% were duplicates. This suggests that the test team needs to check existing defects before new defects are filed.
3. Non-reproducible defects amounted to 11%. This means that the product has some random defects or the defects were not filed with reproducible test cases. This area needs further analysis.

4. Close to 40% of defects were not fixed for reasons “as per design,” “will not fix,” and
“next release.” These defects may impact the customers. They need further discussion
and defect fixes need to be provided to improve release quality.
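The percentages in these observations come from a simple distribution over closed-defect resolutions; a sketch with invented counts chosen to match the figures quoted above:

```python
# Sketch: deriving the closed-defect distribution percentages discussed
# above from raw resolution counts. The counts are invented sample data.
closed = {
    "fixed": 140, "duplicate": 95, "non-reproducible": 55,
    "as per design": 80, "will not fix": 70, "next release": 60,
}
total = sum(closed.values())                              # 500 closed defects
pct = {r: round(c / total * 100) for r, c in closed.items()}
print(pct)   # fixed -> 28%, duplicate -> 19%, non-reproducible -> 11%
```

Summing the "as per design," "will not fix," and "next release" buckets gives the roughly 40% of unfixed defects flagged in observation 4.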

