CCS366 - Software Testing and Automation Notes
Software testing:
Software testing is the process of evaluating and verifying that a software product or application
does what it is supposed to do. The benefits of testing include preventing bugs, reducing
development costs and improving performance.
Software runs the world now. In almost any industry, software does a major part of the job. For instance, in science and technology, software can be seen in space machines, aircraft, drones, etc. Any industry you can imagine has software products running its business behind the scenes.
Now we can understand the importance of testing our newly developed software products. Testing not only cuts costs in the initial stages but also helps the application run efficiently and suit the business needs. There are some other major benefits of testing a software product that help businesses use software applications productively.
Security: Even an ordinary user does not want the apps on their mobile device to put it at risk. In the same way, big firms do not want to be exposed to the risks and hazards a software product may cause. Testing a product therefore removes much of the uncertainty and delivers a reliable product.
Product quality: Of course, when we test a product, its quality is maintained. The quality of the product is what ensures a brand's growth and reputation in the IT market.
Cost-effectiveness: Testing a product at an early stage cuts costs and helps deliver a quality product on time.
Customer Satisfaction: User experience is very important in this digitized world. Giving the
satisfaction of using a hassle-free product is the best result of testing.
Testing Levels
Testing levels are the stages a program goes through during the testing phase to ensure it is error-free before moving into the next development stage.
Unit testing: Unit testing is done by the programmers while coding to check whether an
individual unit of the program is error-free.
Integration testing: As the name suggests, integration testing is done when individual
units of the program are integrated together. In other words, it focuses on the structure
and design of the software.
System testing: Here, the entire program is compiled as software and tested as a whole.
This tests all the features of a program including functionality, performance, security,
portability, etc.
There are some principles maintained while testing software. A tester cannot keep testing a product until it gives zero errors, because that is not possible. Therefore some principles are followed while testing programs.
Exhaustive testing is not possible: No tester can repeat the testing process over and over until the program is error-free. It would be exhausting for both the tester and the program, and repeating the same test cases every time stops revealing new errors. If the testing process is instead based on a risk assessment of the software, testing professionals can concentrate on the most important functions of the program.
Defect Clustering: The defect clustering principle states that most of the defects are found in a small number of modules of the program, and experienced professionals are usually needed to deal with such risky modules.
Black-Box Testing
🞂 Black Box Testing, also known as Behavioral Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional.
🞂 Tests are done from a user's point of view and will help in exposing discrepancies in the specifications.
🞂 Only a small number of possible inputs can be tested, and many program paths will be left untested.
🞂 Tests can be redundant if the software designer/developer has already run a test case.
🞂 Ever wondered why a soothsayer closes the eyes when foretelling events? That is almost the case in Black Box Testing.
For the most part, errors are observed at the extreme ends of the input values, so these extreme values like start/end or lower/upper values are called boundary values, and the analysis of these boundary values is called "Boundary Value Analysis". It is also sometimes known as 'range checking'.
This is one of the software testing techniques in which the test cases are designed to include values at the boundary. If the input data is used within the boundary value limits, it is said to be Positive Testing. If the input data is picked outside the boundary value limits, it is said to be Negative Testing.
Boundary value analysis is another black box test design technique, and it is used to find errors at the boundaries of the input domain rather than in the center of the input.
Each boundary has a valid boundary value and an invalid boundary value. Test cases are designed based on both valid and invalid boundary values. Typically, we choose one test case from each boundary.
Boundary value analysis is a black box testing technique, but it also applies to white box testing. Internal data structures like arrays, stacks, and queues need to be checked for boundary or limit conditions; when linked lists are used as internal structures, the behavior of the list at the beginning and end has to be tested thoroughly.
Boundary value analysis helps identify the test cases that are most likely to uncover defects.
🞂 For example: Suppose a very important tool at the office accepts a valid User Name and Password to work, where each field accepts a minimum of 8 characters and a maximum of 12 characters. Valid range: 8-12 characters. Invalid ranges: 7 or fewer characters, and 13 or more characters.
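The 8-12 character rule above translates directly into boundary-value test cases. A minimal sketch in Python (the function name `is_valid_password` and its implementation are assumptions for illustration; only the 8-12 rule comes from the example):

```python
def is_valid_password(password: str) -> bool:
    # Accept passwords of 8 to 12 characters, inclusive
    # (boundary rule taken from the example above).
    return 8 <= len(password) <= 12

# Boundary-value test data: just below, on, and just above each boundary.
cases = {
    7:  False,  # below lower boundary -> invalid (negative test)
    8:  True,   # lower boundary       -> valid
    9:  True,   # just above lower     -> valid
    11: True,   # just below upper     -> valid
    12: True,   # upper boundary       -> valid
    13: False,  # above upper boundary -> invalid (negative test)
}

for length, expected in cases.items():
    assert is_valid_password("x" * length) == expected
print("all boundary cases passed")
```

Note that only six inputs are needed: one on each boundary, plus one on each side of each boundary.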
Test cases for an application whose input box accepts numbers between 1 and 1000: valid range 1-1000, invalid range 0 or less, and invalid range 1001 or more.
• Test Cases 1: Consider test data exactly as the input boundaries of
input domain i.e. values 1 and 1000.
• Test Cases 2: Consider test data with values just below the extreme
edges of input domains i.e. values 0 and 999.
• Test Cases 3: Consider test data with values just above the extreme
edges of input domain i.e. values 2 and 1001.
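The three groups of boundary values above can be generated mechanically from the range limits. A small sketch, assuming an inclusive valid range of 1-1000:

```python
def boundary_values(lo: int, hi: int) -> dict:
    """Return the classic boundary-value test inputs for an
    inclusive integer range [lo, hi], grouped as in the test
    cases above."""
    return {
        "on_boundary": [lo, hi],          # Test Cases 1: 1 and 1000
        "just_below":  [lo - 1, hi - 1],  # Test Cases 2: 0 and 999
        "just_above":  [lo + 1, hi + 1],  # Test Cases 3: 2 and 1001
    }

print(boundary_values(1, 1000))
# e.g. {'on_boundary': [1, 1000], 'just_below': [0, 999], 'just_above': [2, 1001]}
```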
3) Equivalence Partitioning
The set of input values that generates one single expected output is called a partition. When the behavior of the software is the same for a set of values, the set is termed an equivalence class or partition.
Example: An insurance company has premium rates based on age group. A life insurance company has a base premium of $0.50 for all ages. Based on the age group, an additional monthly premium has to be paid, as listed in the table below. For example, a person aged 34 has to pay a premium = $0.50 + $1.65 = $2.15.
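The age-based premium lookup lends itself to one test value per equivalence class. In the sketch below, only the $0.50 base and the $1.65 rate for the 34-year-old come from the worked example; the other age brackets and rates are invented for illustration:

```python
BASE_PREMIUM = 0.50

# Hypothetical age partitions -> additional monthly premium.
# Only the 25-40 bracket ($1.65) matches the example above.
RATES = [
    (0, 24, 1.00),
    (25, 40, 1.65),
    (41, 60, 2.80),
]

def monthly_premium(age: int) -> float:
    for lo, hi, extra in RATES:
        if lo <= age <= hi:
            return round(BASE_PREMIUM + extra, 2)
    raise ValueError("age outside insurable range")

# One representative test value per equivalence class:
assert monthly_premium(34) == 2.15   # matches the example: 0.50 + 1.65
assert monthly_premium(10) == 1.50
assert monthly_premium(50) == 3.30
print("one value per partition verified")
```

Because every age inside a bracket behaves the same, one representative per bracket is enough to cover the partition.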
White box testing is also known as glass box testing, structural testing, clear box testing, open box testing, and transparent box testing. It tests the internal coding and infrastructure of a software product, checking predefined inputs against expected outputs, and revolves around internal structure testing. In this type of testing, programming skills are required to design test cases. The primary goal of white box testing is to focus on the flow of inputs and outputs through the software and to strengthen the security of the software.
The term 'white box' is used because of the internal perspective of the system. The clear box, white box, or transparent box name denotes the ability to see through the software's outer shell into its inner workings.
Developers do white box testing. In this, the developer tests every line of the code of the program. The developers perform the white-box testing and then send the application or the software to the testing team, where they perform the black box testing, verify the application against the requirements, and identify the bugs.
The developer fixes the bugs, does one more round of white box testing, and sends it back to the testing team. Here, fixing the bugs implies that the bug is removed and the particular feature works fine on the application.
Here, the test engineers are not involved in fixing the defects, for the following reasons:
o Fixing the bug might interrupt the other features. Therefore, the test engineer should always find the bugs, and developers should do the bug fixes.
o If the test engineers spend most of their time fixing defects, they may not have enough time to test the application.
Code Walkthrough
In a walkthrough, the author guides the review team through the document to build a common understanding and collect feedback.
Walkthrough is not a formal process.
In a walkthrough, the review team does not need to do a detailed study before the meeting, while the author is already well prepared.
Walkthrough is useful for higher-level documents, i.e., requirement specifications and architectural documents.
Goals of Walkthrough
Make the document available to stakeholders both outside and inside the software discipline to collect information about the topic under documentation.
Describe and evaluate the content of the document.
Study and discuss the validity of possible alternatives and proposed solutions.
🞂 Saves time and money as defects are found and rectified very early in the lifecycle.
🞂 This provides value-added comments from reviewers with different technical backgrounds and experience.
🞂 It notifies the project management team about the progress of the development process.
🞂 It creates awareness of different development or maintenance methodologies, which can provide professional growth to participants.
Code Inspection
A trained moderator guides the inspection. It is the most formal type of review.
The reviewers are prepared and check the documents before the meeting.
In an inspection, a separate preparation is done during which the product is examined and defects are found. These defects are documented in an issue log.
In an inspection, the moderator performs a formal follow-up by applying exit criteria.
Goals of Inspection
Assist the author in improving the quality of the document under inspection.
Efficiently and rapidly remove defects.
Generate documents with a higher level of quality, which helps to improve the product quality.
Learn from the defects found previously and prevent the occurrence of similar defects.
Generate common understanding by exchanging information.
Inspection is a formal review; walkthrough is an informal one.
Technical Review
Technical review is a discussion meeting that focuses on the technical content of the document. It is a less formal review.
It is guided by a trained moderator or a technical expert.
i. Code functional testing involves tracking a piece of data completely through the software.
ii. At the unit test level this would just be through an individual module or function.
iii. The same tracking could be done through several integrated modules or even through the entire software product, although it would be more time-consuming to do so.
iv. During data flow, a check is made that variables are properly declared and that loops are declared and used properly.
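Tracking a piece of data through a single unit can be as simple as following each variable from its definition to its use and asserting on the result. A sketch (the `discount` function is invented for illustration):

```python
def discount(price: float, rate: float) -> float:
    # Data-flow check: each variable is declared, assigned,
    # and then used, so its path can be followed explicitly.
    amount = price * rate    # definition of 'amount'
    final = price - amount   # use of 'amount', definition of 'final'
    return round(final, 2)   # use of 'final'

# Follow one datum (price=200.0) completely through the function:
assert discount(200.0, 0.1) == 180.0
print("data tracked through the unit")
```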
i. The logical approach is to divide the code just as you did in black-box testing
into its data and its states (or program flow).
ii. By looking at the software from the same perspective, you can more easily
map the white-box information you gain to the black-box case you have already
written.
iii. Consider the data first. Data includes all the variables, constants, arrays, data
structures, keyboard and mouse input, files and screen input and output, and I/O
to other devices such as modems, networks, and so on.
Software Testing defines a set of procedures and methods that check whether the actual software product matches the expected requirements, thereby ensuring that the product is defect-free. There are a set of procedures that need to be kept in mind while testing the software manually or using automated procedures. The main purpose of software testing is to identify errors, deficiencies, or missing requirements with respect to the actual requirements.
Software Testing is Important because if there are any bugs or errors in the software, they can
be identified early and can be solved before the delivery of the software product. The article
focuses on discussing the difference between bug, defect, error, fault, and failure.
What is a Bug?
A bug refers to a defect, which means that the software product or application is not working as per the agreed requirements. When we have any type of logical error, it causes our code to break, which results in a bug. Automation/manual test engineers describe this situation as a bug.
A bug once detected can be reproduced with the help of standard bug-reporting templates.
Major bugs are treated as prioritized and urgent especially when there is a risk of user
dissatisfaction.
The most common type of bug is a crash.
Typos are also bugs that seem tiny but are capable of creating disastrous results.
What is a Defect?
A defect refers to a situation when the application is not working as per the requirement and
the actual and expected result of the application or software are not in sync with each other.
The defect is an issue in application coding that can affect the whole program.
It reflects the inability of the application to meet the criteria and prevents the software from performing the desired work.
The defect can arise when a developer makes major or minor mistakes during the
development phase.
What is an Error?
Error is a situation that happens when the Development team or the developer fails to
understand a requirement definition and hence that misunderstanding gets translated into
buggy code. This situation is referred to as an Error and is mainly a term coined by the
developers.
Errors are generated due to wrong logic, syntax, or loops and can impact the end-user experience.
An error is measured by the difference between the expected results and the actual results.
It arises due to several reasons, such as design issues, coding issues, or system specification issues, and leads to issues in the application.
What is a Fault?
Sometimes, due to factors such as a lack of resources or not following proper steps, a fault occurs in software, meaning that the logic to handle errors was not incorporated into the application. This is an undesirable situation, but it mainly happens due to invalid documented steps or a lack of data definitions.
It is an unintended behavior by an application program.
It causes a warning in the program.
If a fault is left untreated it may lead to failure in the working of the deployed code.
A minor fault in some cases may lead to high-end error.
There are several ways to prevent faults like adopting programming techniques,
development methodologies, peer review, and code analysis.
What is a Failure?
Failure is the accumulation of several defects that ultimately lead to Software failure and
results in the loss of information in critical modules thereby making the system unresponsive.
Generally, such situations happen very rarely because before releasing a product all possible
scenarios and test cases for the code are simulated. Failure is detected by end-users once they
face a particular issue in the software.
Failure can happen due to human errors or can also be caused intentionally in the system by
an individual.
It is a term that comes after the production stage of the software.
It can be identified in the application when the defective part is executed.
A simple comparison depicting Bug vs Defect vs Error vs Fault vs Failure:
Some of the vital differences between bug, defect, error, fault, and failure are listed below.
Definition:
- Bug: A bug refers to defects, which means that the software product or application is not working as per the adhered requirement set.
- Defect: A defect is a deviation between the actual and expected output.
- Error: An error is a mistake made in the code, due to which compilation or execution fails.
- Fault: A fault is a state that causes the software to fail and therefore not achieve its necessary function.
- Failure: Failure is the accumulation of several defects that ultimately lead to software failure, resulting in the loss of information in critical modules and making the system unresponsive.
Raised by:
- Bug: Test engineers.
- Defect: Identified by the testers and resolved by developers in the development phase of the SDLC.
- Error: Developers and automation test engineers.
- Fault: Human mistakes lead to faults.
- Failure: Found by the test engineer during the development cycle of the SDLC.
Types:
- Bug: Critical, Major, Minor, Trivial.
- Fault: GUI faults, Performance faults, Security faults, Hardware faults.
Reasons behind:
- Bug: Receiving and providing incorrect input; missing logic; erroneous logic; redundant code.
- Defect: A coding/logical error leading to the breakdown of the software.
- Error: Errors in code; inability to compile or execute a program; ambiguity in code logic; misunderstanding of the requirements.
- Fault: Wrong design of the data definition processes; an irregularity in logic or gaps in the software leading to its non-functioning; faulty design and architecture; logical errors.
- Failure: Environment variables; system errors; human error.
Way to prevent:
- Bug: Implementing test-driven development; adopting enhanced development practices and evaluating the cleanliness of the code.
- Defect: Implementing out-of-the-box programming methods; proper usage of primary and correct software coding practices.
- Error: Conducting peer reviews and code reviews; validating bug fixes and enhancing the overall quality of the software.
- Fault: Peer review of the test documents and requirements; verifying the correctness of software design and coding.
Levels of Testing in Software Testing
Unit Testing
Unit testing is the first level of testing performed on individual modules, components, or pieces
of code. In unit testing, the individual modules are tested as independent components to ensure
that they work correctly and are fit to be assembled/integrated with other components.
This testing is performed by developers. The developers usually write unit tests for the piece of
code written by them.
As stated before, it is the first level of testing. Once individual components are unit
tested, integration testing is carried out.
Unit testing is rarely performed manually. Unit tests are usually automated and use the white-box testing technique, as knowledge of the piece of code and the internal architecture is required to test it. The developers create unit tests by passing the required inputs to the test script and asserting the actual output against the expected results.
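As a concrete sketch, here is a developer-written unit test using Python's built-in unittest framework (the `add` function under test is hypothetical):

```python
import unittest

def add(a, b):
    """Unit under test: a single, independent piece of code."""
    return a + b

class TestAdd(unittest.TestCase):
    """Developer-written unit tests: pass required inputs and
    assert the actual output against the expected results."""

    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("unit tests passed:", result.wasSuccessful())
```

In practice such tests run automatically on every change, which is how the impact of new changes is detected early.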
Advantages:
Unit tests aid in faster development and debugging, as the impact of new changes can be easily detected by running the unit tests.
A successful unit test report generates confidence in the quality of the code.
Modules that are successfully unit tested can be easily merged with other modules.
Limitations:
It cannot catch complex errors in the system spanning multiple modules.
It cannot test non-functional attributes like usability, scalability, or the overall performance of the system.
Unit tests cannot guarantee the functional correctness or conformance of the application with its business requirements.
Integration testing:
Integration testing is the second level of testing, in which we test a group of related modules.
It aims at finding interfacing issues between the modules, i.e., whether the individual units can be integrated into a sub-system correctly.
It is of four types: big-bang, top-down, bottom-up, and hybrid.
1. In big bang integration, all the modules are first required to be completed and then
integrated. After integration, testing is carried out on the integrated unit as a whole.
2. In top-down integration testing, the testing flow starts from the top-level modules that are higher in the hierarchy and proceeds towards the lower-level modules. There is a possibility that the lower-level modules have not yet been developed when beginning with the top-level modules. In those cases, stubs are used, which are nothing but dummy modules or functions that simulate the functioning of a module by accepting the parameters received by the module and giving an acceptable result.
3. In bottom-up integration testing, the flow is reversed: the lower-level modules are integrated and tested first, moving upwards through the hierarchy. Since the higher-level (calling) modules may not have been developed yet, drivers are used as dummy calling modules that invoke the module under test and pass it the required inputs.
4. Hybrid integration testing is also called the Sandwich integration approach. This
approach is a combination of both top-down and bottom-up integration testing. Here,
the integration starts from the middle layer, and testing is carried out in both
directions, making use of both stubs and drivers, whenever necessary.
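A stub can be sketched as a dummy function standing in for an unfinished lower-level module during top-down integration (all names and figures here are illustrative, continuing the banking-website theme):

```python
# During top-down integration, the higher-level module is ready but a
# lower-level module it depends on may not exist yet.
def interest_rate_stub(account_type: str) -> float:
    # Stub: accepts the same parameter as the real module would and
    # returns a fixed, acceptable result instead of real logic.
    return 0.05

def monthly_interest(balance: float, account_type: str,
                     rate_lookup=interest_rate_stub) -> float:
    # Higher-level module under test, integrated against the stub.
    return round(balance * rate_lookup(account_type) / 12, 2)

# Integration test of the top-level module using the stub:
assert monthly_interest(1200.0, "savings") == 5.0
print("top-down integration with stub passed")
```

A driver is the mirror image: a dummy caller used in bottom-up integration to exercise a finished lower-level module.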
System Testing
System testing is a type of software testing that evaluates a software product as a whole against
functional and non-functional requirements. It determines the overall performance and
functionality of a fully integrated software product.
The primary goal of this testing type is to check that all software components work together
without any flaws and function as intended while meeting all the specified requirements. It is
concerned with verifying the software product’s design, behavior, and compliance with customer
requirements.
A QA team carries out system testing after the integration testing and before acceptance testing.
They choose a testing environment that closely resembles the actual production environment.
Since the QA team tests the entire system without knowing its internal workings, it falls
under black-box testing.
Integrated modules that have passed integration testing serve as the input to system testing.
Integration testing uncovers defects or irregularities between the integrated units, whereas system testing discovers defects both between the integrated units and across the system as a whole.
In a nutshell, this software testing type involves performing a series of tests to exercise the entire
software.
Disadvantages
System testing is time-consuming.
It requires highly skilled testers.
It is challenging for large and complex projects.
Testers do not have visibility into the software’s source code.
No testing uncovers 100% of bugs. Even after system testing validates every aspect of the software, bugs may still exist.
Acceptance testing:
Acceptance testing is the final level of software testing, where the system is tested for compliance with its business requirements. It is performed by the client or the end users with the intent to see if the product is fit for delivery. It can be both formal and informal.
Formal acceptance testing is carried out by the client's representatives, and the informal or ad hoc version is carried out by a subset of potential users who check functionality as well as features like the usability of the product.
It is carried out after system testing and before the final delivery to the client.
Types of Acceptance Testing
Alpha Testing
Alpha testing is the form of acceptance testing that takes place at the developer’s site.
It can be carried out by in-house developers and QAs as well as by potential end-users.
These tests can be white box along with black-box tests.
Beta Testing
Beta testing is the form of acceptance testing that takes place at the customer’s or the end
user’s site.
It is performed after alpha testing and in the real-world environment without the presence
or control of developers.
Beta tests or the beta version of the application are normally open to the whole world (or
client).
Along with Alpha and beta testing, we can also classify acceptance testing into the following
different types-
User Acceptance Testing – In user acceptance testing, the developed application is assessed from the end-users' perspective: whether it works for the end users as per the requirements. It is done by the end users or employees of the client organization. It is also known as 'End User Testing' and follows a black box testing mode.
Business Acceptance Testing – Business acceptance testing assesses the developed application
from the perspective of business goals and processes. It is to make sure the system is ready for
the operational challenges and needs of the real world. It is a superset of user acceptance testing.
BAT is performed by an independent testing team. Every member of the team should have
precise knowledge of the client’s domain and business.
Contract Acceptance Testing – This type of testing involves checking the developed system
against pre-defined criteria or specifications in the contract. The contract would have been signed
by the client and the development party.
Regulations Acceptance Testing – Regulations Acceptance testing is also known as
Compliance acceptance testing. It checks whether the system complies with the rules and
regulations of the country where the software will be released. Usually, a product or application
that is being released internationally, will require such testing as different countries have
different rules and laws.
Operational Acceptance Testing – It is non-functional testing. It makes sure that the
application is ready operationally. Operational acceptance testing involves testing the backup or
restore facilities, user manuals, maintenance tasks, and security checks.
The development team may have their own understanding of the requirements due to a lack of domain knowledge, and it is possible that their understanding differs from that of the business users. During acceptance testing, business users have a chance to check whether everything matches their expectations.
During acceptance testing, business users (clients) get to see the final product. Users can check
whether the system works according to the given requirements. UAT also ensures that the
requirements have been communicated and implemented effectively. Business users can gain
confidence in showing the application in the market i.e. to the end-users.
As acceptance testing will be done by users from the business side, they will have more ideas of
what end-users want. Thus, feedback/ suggestions given during acceptance testing can be
helpful in the next releases. The development team can avoid the same mistakes in future
releases.
Also, an application may have some major or critical issues; such issues should be identified during testing, not when the system is live. These issues can then be resolved before the code goes to the production environment, which reduces the effort and time of the developers.
TEST PLANNING
A test plan helps people outside the test team, such as developers, business managers, and customers, understand the details of testing.
Test Plan guides our thinking. It is like a rule book, which needs to be followed.
Important aspects like test estimation, test scope, Test Strategy are documented in Test Plan,
so it can be reviewed by Management Team and re-used for other projects.
1. Resource Allocation: This component specifies which tester will work on which test.
2. Training Needs: The staff and skill levels required to carry out test-related duties should be
specified by the test planner. Any specialized training needed to complete a task should also be
indicated.
3. Scheduling: A task networking tool should be used to determine and record task durations.
Establish, keep track of, and plan test milestones.
4. Tools: Specifies the tools used for testing, problem reporting, and other pertinent tasks.
5. Risk Management: Describes the risks that could arise during software testing, as well as the problems the software itself might face if it is released without enough testing.
6. Approach: The concerns to be addressed when testing the target program are thoroughly
covered in this portion of the test plan.
How can you test a product without any information about it? You cannot; you must learn a product thoroughly before testing it.
The product under test is the Guru99 banking website. You should research the clients and the end users to know their needs and expectations of the application.
Test Strategy is a critical step in making a Test Plan in Software Testing. A Test Strategy
document, is a high-level document, which is usually developed by Test Manager. This
document defines:
Back to your project: you need to develop a Test Strategy for testing that banking website. You should follow the steps below.
2.1) Define Scope of Testing
Before the start of any test activity, scope of the testing should be known. You must think hard
about it.
Defining the scope of your testing project is very important for all stakeholders. A precise scope helps you:
Give everyone confidence in, and accurate information about, the testing you are doing
Ensure all project members have a clear understanding of what is tested and what is not
How do you determine the scope of your project?
To determine scope, you must consider:
Precise customer requirements
Project budget
Product specification
Skills and talent of your test team
Step 2.2) Identify Testing Type
A testing type is a standard test procedure that gives an expected test outcome.
Each testing type is formulated to identify a specific type of product bug, but all testing types are aimed at one common goal: early detection of all defects before releasing the product to the customer.
Risk is a future uncertain event with a probability of occurrence and a potential for loss. When the risk actually happens, it becomes an 'issue'.
The project schedule is too tight; it’s hard to complete this project on time
Test Manager has poor management skill
A lack of cooperation negatively affects your employees’ productivity
Wrong budget estimate and cost overruns
Step 2.4) Create Test Logistics
In Test Logistics, the Test Manager should answer the following questions:
You may not know the exact names of the testers who will test, but the type of tester can be defined.
To select the right member for a specified task, you have to consider whether their skills are qualified for the task, and also estimate the project budget. Selecting the wrong member for a task may cause the project to fail or be delayed.
A person with the following skills is most ideal for performing software testing:
You will start to test when you have all the required items shown in the following figure.
1. List all the software features (functionality, performance, GUI, ...) that may need to be tested.
2. Define the target or the goal of the test based on the above features.
Suspension Criteria
Specify the critical suspension criteria for a test. If the suspension criteria are met during testing,
the active test cycle will be suspended until the criteria are resolved.
Test Plan Example: If your team members report that 40% of test cases have failed, you should suspend testing until the development team fixes all the failed cases.
Exit Criteria
It specifies the criteria that denote a successful completion of a test phase. The exit criteria are
the targeted results of the test and are necessary before proceeding to the next phase of
development. Example: 95% of all critical test cases must pass.
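Both criteria can be checked mechanically from test-run counts. A sketch using the figures from the examples above (a 40% failure rate suspends testing; a 95% critical pass rate satisfies the exit criteria):

```python
def should_suspend(failed: int, total: int, threshold: float = 0.40) -> bool:
    # Suspension criterion from the example: 40% or more of the
    # executed test cases have failed.
    return failed / total >= threshold

def exit_met(critical_passed: int, critical_total: int,
             required: float = 0.95) -> bool:
    # Exit criterion from the example: at least 95% of all
    # critical test cases must pass.
    return critical_passed / critical_total >= required

assert should_suspend(failed=40, total=100) is True
assert should_suspend(failed=10, total=100) is False
assert exit_met(critical_passed=96, critical_total=100) is True
print("criteria checks behave as specified")
```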
Step 5) Resource Planning
A resource plan is a detailed summary of all types of resources required to complete a project task. Resources could be the human, equipment, and materials needed to complete the project.
Resource planning is an important factor in test planning because it helps in determining the number of resources (employees, equipment, etc.) to be used for the project. Therefore, the Test Manager can make the correct schedule and estimation for the project.
A testing environment is a setup of software and hardware on which the testing team is going to execute test cases. The test environment consists of the real business and user environment, as well as physical environments such as the server and the front-end running environment.
In the Test Estimation phase, suppose you break out the whole project into small tasks and add
the estimation for each task as below
Task | Members | Estimated effort
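The per-task estimates can then simply be summed to get the total test effort. A minimal sketch (the task names and man-day figures are illustrative assumptions, not values from the course notes):

```python
# Hypothetical breakdown of the test project into small tasks,
# each with an estimated effort in man-days.
estimates = {
    "Create the test specification": 30,
    "Perform test execution": 10,
    "Write the test report": 5,
    "Test delivery": 2,
}

total = sum(estimates.values())
print(f"Total estimated effort: {total} man-days")
```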
Employees and project deadline: The working days, the project deadline, and resource availability are factors that affect the schedule.
Project estimation: Based on the estimation, the Test Manager knows how long it will take to complete the project, so he can make an appropriate project schedule.
Project risk: Understanding the risks helps the Test Manager add enough extra time to the project schedule to deal with them.
There are different test deliverables at every phase of the software development lifecycle.
Test Scripts
Simulators
Test Data
Test Traceability Matrix
Error logs and execution logs
Let’s have a look at the most important qualities that will make your client happy.
Client engagements are constantly in danger of the client not understanding what success is and
how it ought to be measured.
We generally propose creating a scope-of-work document that outlines the details, budget, and
metrics of the software testing. This will ease any confusion over expectations and ideally head
off a difficult discussion later.
Availability
By availability, we don’t mean a round-the-clock support system. It just means clear and
upfront communication about time off and backup plans, and being reachable rather than going
missing in action, regardless of whether you are a sole individual or a team supporting the client.
Another key quality is managing customer expectations. Numerous clients are uncertain of what
they are attempting to achieve, or not good at explaining it. In that position, you should have
good instincts: summarize what you have understood and ask the client to confirm the accuracy
of the key takeaways, which will ultimately affect expectations. When you offer your client
direction, counsel, input, and business advice, you become a truly valuable partner. This style
of open discussion sets up a collaborative long-term relationship.
What may have been sufficient a year ago isn’t sufficient this year. To get this data,
organizations have to keep evaluating themselves: internal research and reviews can be carried
out to ensure that procedures are duly followed in the company. Testing procedures use
strategies to convert customer expectations into verifiable requirements.
Earlier, achieving quality was considered an expense. Any investment in techniques, tools,
and procedures to accomplish higher quality was considered a cost, and management was not
inclined to make it.
Constant Improvement
It might surprise you. In the past, organizations aimed to build products that met specific
benchmarks. A satisfactory deviation scope was defined for a product, which means that a
specific level of errors was permitted in the software. If an organization was already meeting
the benchmarks, it saw no need to improve the process any further.
On the contrary, the contemporary approach to quality seeks constant
improvement. It is customer-focused and acts on the basis of feedback received from
clients.
This feedback can include requests for new features, complaints, and praise.
Consequently, today, the software industry has also become customer-driven.
While creating and releasing a product, we do not just check conformance to
requirements; rather, we attempt to exceed expectations to satisfy the clients’ demands.
Constant improvement means you regularly check your practices and processes for any
opportunity to improve. This further involves working on the removal of the root causes of defects.
Product review sites, forums, and social networking mean that your clients (and prospective
clients) can easily share both positive reviews and criticism of your software quality.
In extreme cases, low quality or failure of the product can lead to a product recall.
2. Long-Term Profitability
If customers receive your defective products and are unsatisfied, you will have to compensate
for returns and possibly pay legal charges for failure to comply with client or industry
requirements.
Skills required:
1. Communication
Communication is the key factor for success in any role in the IT industry.
As a software tester, you are expected to communicate with team
members, clients, and stakeholders of the project. For that, good
communication skills are important.
Testers are supposed to prepare different artifacts like the test plan, test
strategy, test cases, test data, test results, etc., and to prepare them
effectively, written communication skills should be good.
As a tester, you are expected to send a daily status report about what you
did, which bugs or defects you found, and the plan of work for the next
day. For this, an understanding of point-to-point communication (what to
include and what not) is very important.
2. Curiosity
Being in software testing means asking lots of questions.
Testers have to deal with bad or incomplete requirements. And when
requirements are not enough to clarify things, testers have to ask
meaningful questions.
Testers should be curious about the why, what, when, and how of things.
More questions yield more information, and that helps testers perform
testing effectively.
3. Grasping abilities
Testing almost always gets the least time. In that limited time, testers are
expected to perform effective testing, so understanding the requirements in
a short time is very important.
It is necessary to grasp the purpose of the software, how it will be used,
what changes have been applied, and so on.
Sharp grasping abilities make the task easier and more efficient.
4. Team work
The tester is supposed to work as a team with developers and other
stakeholders.
The right attitude, with attention to product quality, is very important
in any tester.
As a junior tester, you are expected to execute assigned work on time,
report to seniors, and support one another when facing deadlines.
7. User’s perspective
As a tester, you need to understand the following points about the end user’s
perspective, because it helps in defining more realistic test scenarios:
Who is going to use the product
What purpose the product will serve
How the customer might use the product
During test planning meetings, the tester is supposed to understand the details of
the tasks to be performed.
Depending on the organization and development method, test cases or test scenarios are
prepared in a specific format.
Reporting bugs observed while executing testing tasks is important. Again, each
organization uses different tools/templates to report and track defects.
The tester needs to understand how the specific tool works and is supposed to file
detailed defect reports.
The tester is also supposed to track the reported defects and, according to their
criticality, make sure they get resolved.
A junior tester brings a fresh eye to the product, and the expectation is therefore
that he/she comes up with suggestions to improve the product in terms
of usability.
The tester is supposed to send a daily status report to the test lead, describing the
testing activities performed and their status.
The daily status report is the junior tester’s tool for communicating with the test
lead about the work done.
2. Review of test artifacts – Junior testers document test cases and submit them for
review to a senior tester.
5. Reporting and tracking defects – With experience, what remains constant for
a tester is identifying defects in software, reporting them, and tracking them until
they are satisfactorily fixed.
6. Training need identification – The senior tester knows the weaknesses of the
junior testers in the team and what is stopping them from performing their
tasks efficiently. He identifies training needs for the team and conveys them to the
test lead.
Communication training
Process training
Effective reporting training
Tools training
Test Architect
Test Architect is a senior position that looks after solutions for problems faced
while testing. The role requires deep technical knowledge and up-to-date knowledge
of the latest tools and technologies. It does not ask for people/team
management skills.
Test Architect is not a common role. It is only found in organizations that focus
heavily on the use of automation and technology in testing.
Skills required
Also, to resolve interpersonal problems in the team and to convince team members
to put in extra effort, negotiation skills are mandatory.
QA Manager is a managerial position that looks after management
aspects more than technical ones.
The role requires a proven record of successful management of teams and projects.
Skills required
Quality Head
Quality Head is the highest position in the Quality department. This role is a
combination of technical and managerial skills, and is the result of years of
experience along with a proven track record of handling multiple teams and projects/
programs successfully.
Skills required
1. 15+ years of industry experience and a proven track record of successfully
managing the quality aspects of multiple products.
2. Expertise in implementing industry best practices for quality assurance.
3. Experience in delivering multiple projects by managing time and resources
successfully.
4. Experience in working with different stakeholders across the hierarchy, from
developers to business partners and customers.
5. Excellent hands-on experience in manual and automation testing.
6. Solid communication skills.
7. Experience working in challenging environments with a demonstrated result-
oriented attitude.
8. Knowledge of the best applicable techniques for testing and quality improvement.
9. Experience in establishing quality as a culture in the organization.
10. Knowledge of the best supporting tools to make testing more effective.
11. Experience in implementing the best policies/processes to maintain the quality
standards of the products/services/organization.
12. An attitude inclined to defining, following, and improving processes.
Delivery Head
Delivery Head is a position that covers all aspects of the software development life
cycle. A Delivery Head should have experience in:
1. Requirement analysis
During requirement analysis, the software testing team works closely with the
stakeholders to gather information about the system’s functionality, performance, and
usability.
The requirements document serves as a blueprint for the software development team,
guiding them in creating the software system.
It also serves as a reference point for the testing team, helping them design and execute
effective test cases to ensure the software meets the requirements.
In summary, by conducting thorough requirement analysis, software testing teams can help
ensure the software system’s success and user satisfaction.
2. Test planning
During the test planning phase, the team develops a complete plan outlining each testing process
step, including identifying requirements, determining the target audience, selecting appropriate
testing tools and methods, defining roles and responsibilities, and defining timelines. This phase
aims to ensure that all necessary resources are in place and everyone on the team understands
their roles and responsibilities. A well-designed test plan minimizes risks by ensuring that
potential defects are identified early in the development cycle when they are easier to fix. Also,
adhering to the plan throughout the testing process fosters thoroughness and consistency in
testing efforts which can save time and cost down the line.
3. Test case development
During the test case development phase, the team thoroughly tests the software and considers all
possible scenarios.
This phase involves multiple steps, including test design, test case creation, and test case review:
Test design involves identifying the test scenarios and defining the steps to be followed
during testing.
Test case creation involves writing test cases for each identified scenario, including
input data, expected output, and the steps to be followed.
Test case review involves reviewing the test cases to ensure they are complete and cover
all possible scenarios.
Also, this is the phase where test automation can be introduced. You can select the
test cases for test automation here, and if automation is already part of the STLC and the
product is suitable for it, automating those test cases can start too.
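To illustrate how a reviewed test case (input data, expected output, steps) turns into an automated test, here is a minimal sketch using Python’s unittest module. The apply_discount function is a hypothetical system under test, invented for this example, not part of any real project.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical system under test: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class TestApplyDiscount(unittest.TestCase):
    """Each method mirrors one test case: input data plus expected output."""

    def test_typical_discount(self):
        # TC-01: nominal scenario
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        # TC-02: boundary case -- no discount leaves the price unchanged
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # TC-03: negative scenario -- invalid input must be rejected
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Such a class can be executed with `python -m unittest` once the real system under test is imported in place of the placeholder function.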
4. Test environment setup
Test environment setup in the software testing life cycle refers to creating an environment that
simulates the production system where the software application will be deployed. Designing the
test environment correctly ensures efficient and effective testing activities.
The setup includes
hardware,
software,
networks, and
databases.
When setting up test environments, we consider network bandwidth, server capabilities, and
storage capacity. A properly set-up test environment aims to replicate real-world scenarios to
identify potential issues before deployment in production systems. Testers can perform
functional, performance, or load testing during this phase. Automating your Test environment
setup can make your work easier. You can set up automated tests to run on the configured setups
here.
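A small automated readiness check is one way to verify a test environment before execution. This is a sketch: the service names, hosts, and ports in SERVICES are placeholders you would replace with your own environment details.

```python
import socket

def check_endpoint(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def environment_ready(services):
    """Check every (name, host, port) triple; report anything unreachable."""
    missing = [name for name, host, port in services
               if not check_endpoint(host, port)]
    return (len(missing) == 0, missing)

# Placeholder service list -- replace with your real test-environment hosts.
SERVICES = [
    ("web server", "test-web.example.internal", 80),
    ("database", "test-db.example.internal", 5432),
]
```

Running such a check at the start of every cycle catches a broken environment before any test case is wrongly marked as failed.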
5. Test execution
Test execution refers to the software testing life cycle phase where created test cases are
executed on the actual system being tested. At this stage, testers verify whether features,
functions, and requirements prescribed in earlier phases perform as expected. The test execution
also involves the execution of automated test cases.
6. Test closure
Test closure is integral to the STLC and includes completing all planned testing activities:
preparing the test summary report, archiving test artifacts, and recording lessons learned.
Test Strategy
A test strategy determines the test design and regulates how the software testing process will be
done. The objective of the test strategy is to provide a systematic approach to the software
testing process.
A test strategy is carried out by the project manager. It says what type of
technique to follow and which module to test.
It is a long-term plan of action. Information that is not project-specific can be
abstracted into it and reused across projects.
We may write the test strategy on the basis of the development design documents.
The following documents are included in the development design documents:
Documents pertaining to the system design: These documents will mostly be used to
build the test strategy.
Design Documents: These are used to outline the software features that will be enabled in
a future version.
Documents relating to conceptual design: These are the documents that we don’t utilize
very often.
Here, we will discuss the following points:
1. Components of Test Strategy.
2. Test Strategy vs Test Plan.
3. Types of Test Strategies.
4. Test Strategy Selection.
5. Details Included in Test Strategy Document.
6. Conclusion.
The test effort, test domain, test setups, and test tools used to verify and validate a set of
functions are all outlined in a Test Strategy. It also includes schedules, resource allocations,
and employee utilization information. This data is essential for the test team (Test) to be as
structured and efficient as possible. A Test Strategy differs from a Test Plan, which is a
document that gathers and organizes test cases by functional areas and/or types of testing in a
format that can be presented to other teams and/or customers. Both are critical components of
the Quality Assurance process since they aid in communicating the breadth of the test method
and ensuring test coverage while increasing the testing effort’s efficiency.
The following are the components of the test strategy:
1. Scope and Overview.
2. Testing Methodology.
3. Testing Environment Specifications.
4. Testing Tools.
5. Release Control.
6. Risk Analysis.
7. Review and Approvals.
1. Scope and Overview: Scope and Overview is the first section of the test strategy document.
Any product’s overview includes information about who should approve, review, and use the
document. The testing activities and phases to be approved are also described in the
test strategy document.
An overview of the project, as well as information on who should use this document.
Include information such as who will evaluate and approve the document.
Define the testing activities and phases that will be performed, as well as the timetables
that will be followed in relation to the overall project timelines stated in the test plan.
2. Testing Methodology: Testing methodology is the next module in the test strategy
document, and it is used to specify the degrees of testing, testing procedures, roles, and duties
of all team members. The change management process, which includes the modification
request submission, pattern to be utilized, and activity to manage the request, is also included
in the testing strategy. Above all, if the test plan document is not properly established, it may
lead to errors later. This module specifies the following information:
Define the testing process, testing level, roles, and duties of each team member.
Describe why each test type defined in the test plan (for example, unit, integration,
system, regression, installation/uninstallation, usability, load, performance, and security
testing) should be performed, as well as details such as when to begin, the test owner,
responsibilities, the testing approach, and details of the automation strategy and tool (if
applicable).
3. Testing Environment Specifications: Testing Environment Specification is another section
of the test strategy document. The specification of test data requirements, as we know, is
quite important. As a result, the testing environment specification in the test strategy document
includes clear instructions on how to produce test data. This module contains information on
the number of environments and the required setup. The strategies for backup and restoration
are equally important.
The information about the number of environments and the needed configuration for each
environment should be included in the test environment setup.
For example, the functional test team might have one test environment and the UAT team
might have another.
Define the number of users supported in each environment, as well as each user’s access
roles and software and hardware requirements, such as the operating system, RAM, free
disc space, and the number of systems.
It’s just as crucial to define the test data needs.
Give specific instructions on how to generate test data (either generate data or use
production data by masking fields for privacy).
Define a backup and restoration strategy for test data.
Due to unhandled circumstances in the code, the test environment database may encounter
issues.
The backup and restoration method should state who will take backups, when backups
should be taken, what should be included in them, when the database should be restored,
who will restore it, and what data-masking procedures should be followed if the
database is restored.
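The masking of production fields mentioned above can be sketched as follows. The field names and hashing scheme are assumptions for illustration; a real masking policy would follow your organization’s privacy rules and schema.

```python
import hashlib

# Fields assumed sensitive in this sketch -- adjust to your own schema.
MASKED_FIELDS = {"email", "phone", "ssn"}

def mask_value(value):
    """Replace a sensitive value with a stable, irreversible token.

    Hashing keeps masked values consistent across rows (so joins and
    lookups still work) without exposing the original data.
    """
    digest = hashlib.sha256(str(value).encode("utf-8")).hexdigest()
    return "masked-" + digest[:10]

def mask_record(record):
    """Return a copy of a production row safe to load into a test database."""
    return {k: mask_value(v) if k in MASKED_FIELDS else v
            for k, v in record.items()}
```

Because the same input always yields the same token, two rows referring to the same email still match after masking, which keeps the test data realistic.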
4. Testing Tools: Testing tools are an important part of the test strategy document since this
section contains all of the information on the test management and automation tools required
for test execution. The details of the open-source or commercial tool and the number of users
it can support dictate the necessary approaches and tools for security, performance, and load
testing.
Define the tools for test management and automation that will be utilized to
execute the tests.
Describe the test approach and tools needed for performance, load, and security testing.
Mention whether the product is open-source or commercial, as well as the number of
individuals it can accommodate, and make suitable planning.
5. Release Control: Release Control is a crucial component of the test strategy document. It’s
used to make sure that test execution and release management strategies are established in a
systematic way. It specifies the following information-
Unplanned release cycles can result in different software versions in the test and UAT
environments.
All adjustments in a release will be tested using the release management strategy, which
includes a proper version history.
Set up a build management process that answers questions like where the new build should
be made available, where it should be deployed, when to receive the new build, where to
acquire the production build, who will give the go signal for the production release, and so
on.
6. Risk Analysis: Risk Analysis is the next section of the test strategy document. All potential
risks associated with the project that can become an issue during test execution are described
in the test strategy document. Furthermore, a defined plan is established for handling these
risks to ensure they are managed appropriately, and a contingency plan is prepared in case the
team is confronted with these risks in real time. Make a list of all
the potential risks and provide a detailed plan to manage them, as well as a backup plan in
case the risks materialize.
7. Review and Approval: Review and Approval is the last section of the test strategy
document.
When all of the testing activities are stated in the test strategy document, it is reviewed by the
people involved.
The document should begin with the review date, approver name, comments, and a summary of
the reviewed modifications.
It should also be reviewed and updated regularly as the testing process improves.
Types of Test Strategies
1. Analytical strategy: risk-based testing and requirements-based testing are two examples.
After examining the test basis, such as risks or requirements, the testing team sets the
test conditions to be covered. In requirements-based testing, the requirements are
analyzed to determine the test conditions; tests are then designed, implemented, and run
to ensure that the requirements are met. Even the findings are tracked in terms of
requirements: those that were tested and passed, those that were tested but failed, those
that were not fully tested, and so on.
2. Model-based strategy: The testing team selects an actual or anticipated circumstance and
constructs a model for it, taking into account inputs, outputs, processes, and possible
behavior. Models are also created based on existing software, technology, data speeds,
infrastructure, and other factors. Let’s look at a case where you’re testing a mobile app.
Models may be constructed to simulate outgoing and incoming traffic on a mobile network,
the number of active/inactive users, predicted growth, and other factors in order to conduct
performance testing.
3. Methodical strategy: In this case, test teams adhere to a quality standard (such as
ISO25000), checklists, or just a set of test circumstances. Specific types of testing (such as
security) and application domains may have standard checklists. For example, while
performing maintenance testing, a checklist describing relevant functions, their properties,
and so on is sufficient.
4. Standards or process compliant strategy: This method is well-exemplified by medical
systems that adhere to US Food and Drug Administration (FDA) guidelines. The testers
follow the methods or recommendations established by the standards committee or a panel
of enterprise specialists to determine test conditions, identify test cases, and assemble the
testing team. In the case of an Agile program, testers will create a complete test strategy for
each user story, starting with establishing test criteria, developing test cases, conducting
tests, reporting status, and so on.
5. Reactive strategy: Only when the real program is released are tests devised and
implemented. As a result, testing is based on faults discovered in the real system. Consider
the following scenario: you’re conducting exploratory testing. Test charters are created
based on the features and functionalities that already exist. The outcomes of the testing by
testers are used to update these test charters. Agile development initiatives can also benefit
from exploratory testing.
6. Consultative strategy: like user-directed testing, this technique uses input from key
stakeholders to set the scope of test conditions. Let’s
consider a scenario in which the browser compatibility of any web-based application is
being evaluated. In this section, the app’s owner would provide a list of browsers and their
versions in order of preference. They may also include a list of connection types, operating
systems, anti-malware software, and other requirements for the program to be tested
against. Depending on the priority of the items in the provided lists, the testers can use
techniques such as pairwise testing or equivalence partitioning.
7. Regression averse strategy: In this case, the testing procedures are aimed at lowering the
risk of regression for both functional and non-functional product aspects. Using the web
application as an example, if the program needs to be tested for regression issues, the
testing team can design test automation for both common and unusual use cases. They can
also employ GUI-based automation tools to conduct tests every time the application is
updated. Any of the strategies outlined above does not have to be used for any testing job.
Two or more strategies may be integrated depending on the needs of the product and the
organization.
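To make one of the techniques above concrete, here is a minimal equivalence-partitioning sketch. The classify_age function is a hypothetical system under test invented for this example; the partitions and boundary values are derived from its stated rules.

```python
def classify_age(age):
    """Hypothetical system under test: map an age to a ticket category."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 13:
        return "child"
    if age < 65:
        return "adult"
    return "senior"

# One representative value per equivalence partition of the `age` input:
# any value in a partition should behave like every other value in it.
PARTITIONS = {
    "invalid (negative)": -1,
    "child (0-12)": 7,
    "adult (13-64)": 30,
    "senior (65+)": 80,
}

# Boundary values, where partitions meet, are where defects cluster.
BOUNDARIES = [0, 12, 13, 64, 65]
```

Instead of testing every possible age, a tester runs one case per partition plus the boundary values, which keeps the test set small while still covering each distinct behavior.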
Test Strategy Selection
The test strategy chosen is determined by the nature and size of the organization.
One can choose a test strategy based on the project needs; for example, safety and security
applications necessitate a more rigorous technique.
The test strategy can be chosen based on the product development model.
Is this a short-term or long-term strategy?
Organization type and size.
Project requirements — Safety and security applications necessitate a well-thought-out
strategy.
Product development model.
Details Included in Test Strategy Document
Resource Requirements
Resources include human effort, equipment, and all infrastructure needed for accurate and
comprehensive testing. This part of test planning decides the measure of resources the
project requires (number of testers and equipment).
The resource requirement is a detailed summary of all types of resources required to complete
the project tasks, including people, equipment, and materials.
Resource requirement planning is an important factor in test planning because it helps in
determining the number of resources (employees, equipment, …) to be used for the project, so
that the Test Manager can make the correct schedule and estimation for the project.
Some of the following factors need to be considered:
Machine configuration (RAM, processor, disk) needed to run the product under test.
Overheads required by test automation tools, if any.
Supporting tools such as compilers, test data generators, and configuration management tools.
The different configurations of the supporting software (e.g., OS) that must be present.
Special requirements for running machine-intensive tests such as load tests and
performance tests.
An appropriate number of licenses for all the software.
Human Resource: the following members make up a typical project team:
1. Test Manager – Manages the whole project; defines project directions; acquires
appropriate resources.
3. Developer in Test – Implements the test cases, test program, test suite, etc.
4. Test Administrator – Builds and maintains the test environment and assets; supports
testers in using the test environment for test execution.
5. SQA members – Take charge of quality assurance; check whether the testing
process meets the specified requirements.
System Resource: for testing a web application, you should plan the following resources:
1. Server – Hosts the web application under test; this includes a separate web server,
database server, and application server if applicable.
2. Test tool – Automates the testing, simulates user operations, and generates test
results; there are many test tools you can use for this, such as Selenium, QTP, etc.
3. Network – A network, including LAN and Internet, to simulate the real business and
user environment.
4. Computer – The PC that users typically use to connect to the web server.
Test schedule
A test schedule includes the testing steps or tasks, the target start and end dates, and
responsibilities. It should also describe how the test will be reviewed, tracked, and approved.
Test cases
“A test case is a set of input values, execution preconditions, expected results, and execution
postconditions, developed for a particular objective or test condition, such as to exercise a
particular program path or to verify compliance with a specific requirement.” It’s one of the key
instruments used by testers. The standard test case includes the following information:
Identify testable requirements. Identify the scope and purpose of testing before starting the test
process.
Customer requirement. The specialist who writes the test case must have a good understanding
of the features and user requirements. Each test case should be written keeping the client’s
requirements in mind.
Write on time. The best time to write test cases is the early requirement analysis and design
phases. That way QA specialists can understand whether all requirements are testable or not.
Simple and clear. Test cases should be simple and easy to understand. Every test case should
include only the necessary and relevant steps. No matter how many times and by whom it will be
used, a test case must have a single expected result rather than multiple expected results.
Unique test cases. Each test case must have a unique name. This will help classify, track, and
review test cases at later stages.
Test cases should be maintainable. If requirements change, a tester must be able to adjust a test
case.
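The qualities above (a unique name, clear steps, a single expected result) can be captured in a simple structured record. This is only an illustrative sketch; the field names and the sample login case are assumptions, not a mandated template.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Illustrative test-case record mirroring the qualities listed above."""
    case_id: str                      # unique name, e.g. "TC-LOGIN-001"
    requirement: str                  # the customer requirement being verified
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)   # only necessary, relevant steps
    expected_result: str = ""         # a single expected result

# A hypothetical example instance:
login_ok = TestCase(
    case_id="TC-LOGIN-001",
    requirement="REQ-AUTH-01: registered users can sign in",
    preconditions=["User account exists and is active"],
    steps=["Open the login page",
           "Enter a valid username and password",
           "Press the Sign in button"],
    expected_result="The user lands on the dashboard page",
)
```

Keeping test cases in a structured form like this makes the unique-name and single-expected-result rules easy to enforce automatically when requirements change.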
Bug Reporting
Bug reporting is an integral part of software testing as it helps to identify and document any
issues that arise during the process. By using a Bug report, testers can track the progress of their
work and compare results over time. This allows them to change their test plans and strategies if
needed.
1. It can help you figure out precisely what’s wrong with a bug, so you can find the best
way to fix it.
2. Saves you time and money by helping you catch the bug before it worsens.
3. Stops bugs from making it into the final product and ruining someone’s experience.
4. Plus, it helps ensure the same bug doesn’t appear again in future versions.
5. Finally, everyone involved will know what’s happening with the bug so they can do
something about it.
Simple sentences should be used to describe the bug. Expert testers consider bug reporting
nothing less than a skill. We have compiled some tips that will help testers master it:
2. Report reproducible bugs:
While reporting a bug, the tester must ensure that the bug is reproducible. The steps to reproduce
the bug must be mentioned. All the prerequisites for the execution of steps and any test data
details should be added to the bug.
3. Be concise and clear:
Try to summarize the issue in a few words, brief but comprehensive. Avoid writing lengthy
descriptions of the problem.
Describe the issue in pointers and avoid paragraphs. It’s essential to provide all the relevant
information, as it helps the developers understand the issue without additional back-and-forth
on the bug. The developer must be able to clearly understand the underlying problem from the
bug report.
4. Report bugs early:
It is important to report bugs as soon as you find them. Reporting the bug early will help the
team to fix the bug early and will help to deliver the product early.
5. Avoid Spelling mistakes and language errors:
Proofread all the sentences and check the issue description for spelling and grammatical errors.
If required, one can use third-party tools, for eg. Grammarly. It will help the developer
understand the bug without ambiguity and misrepresentation.
6. Documenting intermittent issues:
Sometimes all bugs are not reproducible. You must have observed that sometimes a mobile app
crashes, and you must restart the app to continue. These types of bugs are not reproducible every
time.
Testers must try to make a video of the bug in such scenarios and attach it to the bug report. A
video is often more helpful than a screenshot because it will include details of steps that are
difficult to document.
For example, a mobile app crashes while switching between applications or sending an app to
the background and bringing it to the front.
7. Avoid duplication of bugs:
While raising a bug, one must ensure that the bug is not duplicating an already-reported bug.
Also, check the list of known and open issues before you start raising bugs. Reporting duplicate
bugs could cost duplicate efforts for developers, thus impacting the testing life cycle.
8. Create separate bugs for unrelated issues:
If multiple issues are reported in the same bug, it can’t be closed unless all the issues are
resolved. So, separate bugs should be created if issues are not related to each other.
For example, Let’s say a tester comes across two issues in an application in different modules.
One issue is in compose email functionality, where the user cannot compose an email, and
another issue is that the user cannot print an email. These issues must be raised separately as they
are independent of each other.
9. Don’t use an authoritative tone:
While documenting the bug, avoid using a commanding tone, harsh words, or making fun of the
developer.
The objective of a good bug report is to help the developer and the management to understand
the bug and its impact on the system. The more accurate and detailed the bug report is, the more
quickly and effectively the bug can be resolved.
A software bug follows a life cycle, and a status is assigned according to where it is in that cycle.
For example, when a new bug is created, its status is Open. Later, it goes through various
stages such as In Progress, Fixed, Won’t Fix, Accepted, Reopen, and Verified. These stages vary
across different bug reporting tools.
Testers must create comprehensive bug reports for practical bug analysis and resolution. Testers should
incorporate all pertinent information to ensure the highest quality of reports and communicate clearly
with developers and managers. Best practices for bug reporting should be shared to optimize report
accuracy. Ultimately, well-crafted bug reports foster positive collaboration between teams and reduce
costs related to fixing bugs.
Metrics and Statistics.
Software testing metrics are quantifiable indicators of the software testing process
progress, quality, productivity, and overall health. The purpose of software testing metrics is to
increase the efficiency and effectiveness of the software testing process while also assisting in
making better decisions for future testing by providing accurate data about the testing process.
Using statistics can help us map out those outliers, identify the levels of uncertainty in
our results, and help us deal fairly with those errors. No statistical test is perfect and neither is
any dataset. Statistics allows us to draw conclusions openly by realizing these limitations from
the start.
A metric expresses the degree to which a system, system component, or process
possesses a certain attribute in numerical terms. A weekly mileage of an automobile compared
to its ideal mileage specified by the manufacturer is an excellent illustration of metrics. Here,
we discuss the following points:
1. Importance of Metrics in Software Testing.
2. Types of Software Testing Metrics.
3. Manual Test Metrics: What Are They and How Do They Work?
4. Other Important Metrics.
5. Test Metrics Life Cycle.
6. Formula for Test Metrics.
Test metrics are essential in determining the software’s quality and performance. Developers
may use the right software testing metrics to improve their productivity. Test metrics help to:
Determine what types of enhancements are required in order to create a defect-free,
high-quality software product.
Make informed judgments about the testing phases that follow, such as project schedule
and cost estimates.
Examine the current technology or procedure to see if it needs any more changes.
Manual testing is carried out in a step-by-step manner by quality assurance experts. Test
automation frameworks, tools, and software are used to execute tests in automated testing.
There are advantages and disadvantages to both human and automated testing. Manual testing
is a time-consuming technique, but it allows testers to deal with more complicated
circumstances. There are two sorts of manual test metrics:
1. Base Metrics: Analysts collect data throughout the development and execution of test cases
to provide base metrics. These metrics are sent to test leads and project managers through a
project status report, and they serve as the input for the calculated metrics. Examples include:
The total number of test cases.
The total number of test cases completed.
2. Calculated Metrics: Data from base metrics are used to create calculated metrics. The test
lead collects this information and transforms it into more useful information for tracking
project progress at the module, tester, and other levels. It’s an important aspect of the SDLC
since it allows developers to make critical software changes.
The below diagram illustrates the different stages in the test metrics life cycle.
The various stages of the test metrics lifecycle are:
1. Analysis:
The metrics must be recognized.
Define the QA metrics that have been identified.
2. Communicate:
Stakeholders and the testing team should be informed about the requirement for
metrics.
Educate the testing team on the data points that must be collected in order to process
the metrics.
3. Evaluation:
Data should be captured and verified.
Using the data collected to calculate the value of the metrics
4. Report:
Create a strong conclusion for the report.
Distribute the report to the appropriate stakeholder and representatives.
Gather input from stakeholder representatives.
To get the percentage execution status of the test cases, the following formula can be used:
Percentage of test cases executed = (Number of test cases executed / Total number of test cases
written) × 100
Similarly, it is possible to calculate for other parameters also such as test cases that were not
executed, test cases that were passed, test cases that were failed, test cases that were blocked,
and so on. Below are some of the formulas:
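The formulas above can be sketched in Python. The counts below are illustrative, not from a real test run; note also that conventions differ on whether pass/fail/blocked percentages are taken against executed or written test cases (here they are taken against executed cases).

```python
# A minimal sketch of the test execution metrics described above.

def percentage(part, total):
    """Return part as a percentage of total, rounded to 2 decimals."""
    return round(part / total * 100, 2)

total_written = 200   # total number of test cases written (illustrative)
executed = 180
passed = 150
failed = 20
blocked = 10

print("Executed:", percentage(executed, total_written), "%")  # 90.0 %
print("Passed:  ", percentage(passed, executed), "%")         # 83.33 %
print("Failed:  ", percentage(failed, executed), "%")         # 11.11 %
print("Blocked: ", percentage(blocked, executed), "%")        # 5.56 %
```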
The objectives of testing are to find any defects or bugs that may have been created when the
software was being developed, and to increase confidence in the quality of the software.
There are seven testing principles which are common in the software industry.
1. Optimal testing – it's not possible to test everything so it's important to determine the optimal
amount. The decision is made using a risk assessment. This assessment will uncover the area
that is most likely to fail and this is where testing should take place.
2. Pareto Principle – this principle states that approximately 80% of problems will be found in
20% of tests. However, there is a flaw in this principle in that repeating the same tests over and
over again will mean no new bugs will be found.
3. Review and Revise – repeating the same tests means that the methods will eventually
become useless for uncovering new defects. To prevent this, the tests only need to be reviewed
and revised on a regular basis. Adding new tests will help to find more defects.
4. Defects that are present – testing reduces the probability of there being a defect in the final
product but does not guarantee that a defect won't be there. And even if you manage to make a
product that's 99% bug-free, the testing won't have shown whether the software meets the needs
of clients.
5. Meeting customer needs – testing a product for the wrong requirements is foolhardy. Even if it
is bug free it may still fail to meet customer requirements.
6. Test early – it's imperative that testing starts as soon as possible in the development of a
product.
7. Test in context – test a product in accordance with how it will be used. Software is not identical
and will be developed to meet a certain need rather than a general one. Different techniques,
methodologies, approaches and types of testing can be used depending on the application's
planned use.
Quality Product
Justification with Requirement
Offers Confidence
Enhances Growth
There are three major categories of testing:
Functional Testing:
The purpose of this testing method is to verify each function of an application. During functional
testing, the QA team verifies each module’s output by inserting various inputs.
Technically, Functional Testing is a kind of testing through which the testing team verifies the software
system against the specification document.
However, the testing method does not do anything with the source code as it only validates functioning.
Furthermore, functional testing is the backbone of the entire testing process. Also, if your software
generates an accurate output only then, users will like it.
You can perform functional testing either by following the manual or automation testing approaches.
Example: If you test whether a user is able to log into a system after registration, you are doing
functional testing.
Non-Functional Testing:
As its name says, the testing method verifies the non-functional part of an application such as reliability,
response, speed, etc.
It is entirely the opposite of functional testing, which we have explained above. Issues that testers
do not address during functional testing are tested here.
The QA team examines the overall functioning of the software and highlights the concerns that
affect the performance and usability of the application.
Example: If you test an application by checking how many users can log in simultaneously, you are
doing non-functional testing.
One should always verify software from the perspective of functional and non-functional testing.
Efficiency
Portability
Optimization
Performance
Hence, rather than testing the entire system again and again, we use regression testing. Through
this testing, testers validate whether the newly written code will affect the existing features or not.
Now you must be thinking, what is regression testing? It is the collection of already executed test
cases, re-run to gauge the effect of any code change on the existing features.
Example: Suppose there is an application with the features “ADD DATA” and “EDIT DATA”. Now
the developer has introduced one more feature, “DELETE DATA”. Under regression testing, the
tester will ensure that the new feature does not affect the existing features.
Testing is not merely a word; it is the backbone of an online product, and both automation testing
and manual testing have their place in delivering it.
Software testability is measured with respect to the efficiency and effectiveness of testing. Efficient
software architecture is very important for software testability. Software testing is a time-consuming,
necessary activity in the software development lifecycle, and making this activity easier is one of the
important tasks for software companies as it helps to reduce costs and increase the probability of
finding bugs. There are certain metrics that could be used to measure testability in most of its aspects.
Sometimes, testability is used to mean how adequately a particular set of tests will cover the product.
Testability helps to determine the efforts required to execute test activities.
The lower the testability, the larger the effort required for testing, and vice versa.
Software testability evaluates how easy it is to test the software and how likely software testing will
find the defects in the application. Software testability assessment can be accomplished through
software metrics assessment:
Depth of Inheritance Tree.
Fan Out (FOUT).
Lack Of Cohesion Of Methods (LCOM).
Lines Of Code per Class (LOCC).
Response For Class (RFC).
Weighted Methods per Class (WMC).
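As a rough illustration, one of the metrics above, Lines Of Code per Class (LOCC), can be approximated for Python source with the standard `ast` module. The `Account` class here is a made-up example, and the line count is the span from the class header to its last body line:

```python
# A sketch of the LOCC testability metric, computed from source with ast.
import ast

SOURCE = '''
class Account:
    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        self.balance -= amount
'''

def locc(source):
    """Map each class name to its approximate line count."""
    tree = ast.parse(source)
    return {
        node.name: node.end_lineno - node.lineno + 1
        for node in ast.walk(tree)
        if isinstance(node, ast.ClassDef)
    }

print(locc(SOURCE))  # {'Account': 6}
```

A class with a very high LOCC is a candidate for being harder to test, which is why it appears in the testability metric list above.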
During software launch, it is crucial to determine which components may be more challenging to test.
Software testability assessment is crucial during the start of the testing phase as it affects the
efficiency of the planning process.
The attributes suggested by Bach can be used by a software engineer to develop a software
configuration (i.e., programs, data, and documents) that is amenable to testing. Below are some of the
capabilities that are associated with software testability requirements:
Module capabilities: Software is developed in modules and each module will be tested
separately. Test cases will be designed for each module and then the interaction between the
modules will be tested.
Testing support capabilities: Test drivers and stubs must be maintained for each test
interface, since during incremental testing the accuracy of the test drivers and stubs
should be given high priority and importance.
Defects disclosure capabilities: The system errors should be less so that they do not block the
software testing. The requirement document should also pass the following parameters to be
testable:
The requirement must be accurate, correct, concise, and complete.
The requirement should be unambiguous, i.e., it should have one meaning for all staff
members.
A requirement should not contradict other requirements.
Priority-based ranking of requirements should be implemented.
A requirement must be domain-based so that the changing requirements won’t be a
challenge to implement.
Observation capabilities: Observing the software to monitor the inputs, their outcomes, and the
factors influencing them.
Below are some of the parameters that can be implemented in practice to improve software testability:
Appropriate test environment: If the test environment corresponds to the production
environment then testing will be more accurate and easier.
Adding tools for testers: Building special instruments for manual testing helps to make the
process easier and simpler.
Consistent element naming: If the developers can ensure that they are naming the elements
correctly, consistently, logically, and uniquely then it makes testing more convenient. Although
this approach is difficult in larger projects with multiple developers and engineers.
Improve observability: Improving observability provides unique outputs for unique inputs for
the Software Under Test.
Adding assertions: Adding assertions to the units in the software code helps to make the software
more testable and find more defects.
Manipulating coupling: Reducing coupling between modules increases the testability of the
code.
Internal logging: If software accurately logs its internal state, then manual testing can be
streamlined, and it becomes possible to check what is happening during any test.
Consistent UI design: Consistent UI design also helps to improve software testability as the
testers can easily comprehend how the user interface principles work.
Minimizes testers’ efforts: Testability calculates and minimizes the testers’ efforts to perform
testing as improved software testability facilitates estimating the difficulty in finding the software
flaws.
Determines the volume of automated testing: Software testability determines the volume of
automated testing based on the software product’s controllability.
Early detection of bugs: Software testability helps in the early and effortless detection of bugs
and thus saves time, cost, and effort required in the software development process.
Test design is a process that describes “how” testing should be done. It includes the process of
identifying test cases by enumerating the steps of the defined test conditions. The testing
techniques defined in the test strategy or plan are used for enumerating the steps.
The test cases may be linked to the test conditions and project objectives directly or indirectly
depending upon the methods used for test monitoring, control and traceability.
The objectives consist of test objectives, strategic objectives and stakeholder definition of
success.
When to create test design?
After the test conditions are defined and sufficient information is available to create the test cases of
high or low level, test design for a specified level can be created.
For lower-level testing, test analysis and design are a combined activity. For higher-level testing,
test analysis is performed first, followed by test design.
There are some activities that routinely take place when the test is implemented. These activities may
also be incorporated into the design process when the tests are created in an iterative manner.
Test data will definitely be created during the test implementation. So it is better to incorporate it in the
test design itself.
This approach enables optimization of test condition scope by creating low or high level test cases
automatically.
Boundary Value Testing, Equivalence Class Testing, Path Testing, Data Flow Testing
Test design is done using several test design techniques. The following is a list of some of the top
design techniques,
1. Equivalence Class Partitioning
Equivalence class testing, also known as Equivalence Class Partitioning, is a test design technique that
lets you partition your test data into equivalent classes or sections.
It aims to reduce the number of test cases required to test a product by dividing the input domain into a
set of equivalence classes. You can use this whenever an input field has a range like age.
Example:
Consider a gaming website has a form that requires users to enter their age. And the form specifies that
the age has to be between 18 and 60. Now, using the Equivalence Class Partitioning technique, you can
divide the input range into three partitions, as follows,
1 to 17 (invalid)
18 to 60 (valid)
>60 (invalid)
By partitioning the input range into equivalence classes, you can create test cases that cover all
possible scenarios. This way, you can easily make sure that your form is working correctly without
testing every possible number between 18 and 60.
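The three partitions above can be exercised with one representative value each. The sketch below assumes a hypothetical `is_valid_age` check implementing the form's rule; the function name is illustrative, not from any real framework:

```python
# A minimal sketch of equivalence class partitioning for the age form above.

def is_valid_age(age):
    """Valid if 18 <= age <= 60, per the form's rule (illustrative)."""
    return 18 <= age <= 60

# One representative value per equivalence class is enough:
assert is_valid_age(10) is False   # class 1: 1 to 17 (invalid)
assert is_valid_age(35) is True    # class 2: 18 to 60 (valid)
assert is_valid_age(75) is False   # class 3: >60 (invalid)
```

Three test values stand in for the whole input domain, which is exactly the reduction in test cases the technique promises.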
2. State Transition Testing
It is a type of black box testing that is performed to check the change in the application’s state under
various inputs. This testing is used where different system transitions have to be tested.
State Transition Diagram
Above is a state transition diagram that needs to be tested; it depicts how a system’s state changes
on specific inputs. The four main components of a state transition diagram are as follows:
1. States
2. Transition
3. Events
4. Actions
State transition testing helps understand the system’s behavior and covers all the conditions. Let’s try to
understand this with an example.
Example:
Consider a bank application that allows users to log in with valid credentials. But, if the user doesn’t
remember the credentials, the application allows them to retry with up to three attempts. If they provide
valid credentials within those three attempts, it will lead to a successful login. In case of three
unsuccessful attempts, the application will have to block the account.
The below image will explain this scenario in a clear way.
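The login scenario above can be sketched as a small state machine in Python. The `Login` class, the state names, and the three-attempt limit follow the example in the text but are otherwise illustrative:

```python
# A sketch of the bank login scenario as a state machine.

class Login:
    MAX_ATTEMPTS = 3

    def __init__(self, password):
        self._password = password
        self.failures = 0
        self.state = "ENTER_CREDENTIALS"

    def attempt(self, password):
        """Process one login attempt and return the resulting state."""
        if self.state == "ACCOUNT_BLOCKED":
            return self.state                     # blocked is a terminal state
        if password == self._password:
            self.state = "LOGGED_IN"
        else:
            self.failures += 1
            if self.failures >= self.MAX_ATTEMPTS:
                self.state = "ACCOUNT_BLOCKED"    # third failure blocks the account
        return self.state

login = Login("secret")
print(login.attempt("guess1"))  # ENTER_CREDENTIALS (1st failure)
print(login.attempt("guess2"))  # ENTER_CREDENTIALS (2nd failure)
print(login.attempt("guess3"))  # ACCOUNT_BLOCKED   (3rd failure)
```

State transition test cases then simply walk each path through this machine: valid credentials on the first, second, or third attempt, and three consecutive failures.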
3. Exploratory Testing
Exploratory testing is primarily used in Agile methods and involves outside-the-box thinking.
This process does not follow pre-defined test cases but instead involves testers using their knowledge,
skills, and creativity to identify issues that may not have been anticipated by developers.
During exploratory testing, the tester will explore the software application in an unstructured way,
creating and executing tests on the fly. The goal of this testing is to uncover bugs that may have been
missed by other traditional testing methods.
Test Design Concepts You Must Be Familiar With
As a software tester, there are several test design concepts that you must be familiar with to create
effective test cases. Here are some of the most important test design concepts,
Test Automation Pyramid
Test Automation Pyramid is a testing framework that helps developers and testers to build high-quality
products. It emphasizes the importance of automation at each level. The pyramid consists of three
levels, each representing a different type of testing as follows,
Unit Tests
Integration Tests
End-to-End Tests
Unit testing: Here, testing is done on individual units or software modules. Each unit is tested
separately to ensure that it behaves as intended and meets its requirements.
Integration testing: It helps verify that different modules of a software application are working as
intended when they are integrated together.
End-to-End testing: It involves evaluating the entire software application, from start to finish, to
ensure that all components work together as expected and meet the requirements and business
objectives.
The pyramid enables quick feedback cycles and helps developers fix bugs in a short time. It helps
save time, reduce costs, and improve the overall quality of an application.
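The pyramid's two lower levels can be illustrated with a toy cart module, invented for this example and not from any real codebase. The unit test checks one function in isolation, while the integration test checks the units working together:

```python
# A toy cart module with a test at each of the pyramid's lower levels.

def item_total(price, qty):
    return price * qty

def cart_total(items):
    # integrates item_total across the whole cart
    return sum(item_total(p, q) for p, q in items)

# Unit level: one function tested in isolation.
def test_item_total():
    assert item_total(5, 3) == 15

# Integration level: the units tested working together.
def test_cart_total():
    assert cart_total([(5, 3), (2, 2)]) == 19

test_item_total()
test_cart_total()
```

An end-to-end test would sit above both, driving the whole application (UI, cart, and checkout) rather than individual functions.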
Test Coverage and Code Coverage
Both test coverage and code coverage are related but distinct concepts in software testing. Test
coverage refers to the extent to which an application’s functionality has been tested.
Test coverage is calculated as follows:
Test Coverage Percentage = (Number of lines of code tested / Total number of lines of code) × 100
For example, if you have 2000 lines of code, your test cases should be able to test the entire
codebase. If only 1000 lines of code are tested out of 2000, the test coverage is 50%. Aim for 100%
test coverage, which means that your entire application functionality is tested, to ensure a
high-quality product.
Code coverage specifically refers to the percentage of the code that has been covered by tests.
Simply put, code coverage tells how much code is tested, and test coverage tells whether your tests
cover the application’s functionality or not.
Code coverage is calculated as follows,
Code Coverage Percentage = (Number of lines of code executed / Total number of lines of code in
an application) × 100
If the entire piece of code is tested, then you may consider that the code coverage is 100%. Good code
coverage is considered a good metric for testing.
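Plugging the document's own numbers (1000 of 2000 lines executed) into the formula above:

```python
# The code coverage formula from above, as a one-line helper.

def code_coverage(lines_executed, total_lines):
    return lines_executed / total_lines * 100

print(code_coverage(1000, 2000))  # 50.0
```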
Test Suites and Test Cases
Test Suites and Test cases are interrelated terms. A test suite can be defined as a collection of test cases
designed to test a specific functionality of the software. They are typically created and managed by
testers.A test case is a set of instructions that defines the steps to be taken and the expected results for
testing specific software functionality. Simply put, test cases are individual tests.
When you automate your test cases, Testsigma – a no-code test automation tool – also supports the
addition, updating, and deletion of test cases and test suites. It is very easy to create test cases using
its NLP approach, and it also lets you easily manage and run automated test cases on the cloud.
Test Design Preparedness Metrics, Test Case Design Effectiveness, Model-Driven
Test Design
Basically, test design is the act of creating and writing test suites for testing a software product.
Test analysis and identifying test conditions give us a generic idea for testing which covers quite a
large range of possibilities. But when we come to make a test case, we need to be very specific; in
fact, we now need exact and detailed specific inputs. However, just having some values to input to
the system is not a test: if you don’t know what the system is supposed to do with those inputs, you
will not be able to tell whether your test has passed or failed.
Automated Testing is a technique where the Tester writes scripts on their own and uses suitable
Software or Automation Tool to test the software. It is an Automation Process of a Manual Process. It
allows for executing repetitive tasks without the intervention of a Manual Tester.
It is used to automate the testing tasks that are difficult to perform manually.
Automation tests can be run at any time of the day as they use scripted sequences to examine the
software.
Automation tests can also enter test data, compare the expected result with the actual result, and
generate detailed test reports.
The goal of automation tests is to reduce the number of test cases to be executed manually but not to
eliminate manual testing.
It is possible to record the test suit and replay it when required.
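The points above can be sketched as a minimal automation script: feed in test data, compare expected against actual results, and produce a report. The `add` function stands in for real application code, and the report format is illustrative:

```python
# A minimal automation-script sketch: scripted data in, report out.

def add(a, b):          # stand-in for the software under test
    return a + b

test_data = [
    ((2, 3), 5),
    ((0, 0), 0),
    ((-1, 1), 0),
]

report = []
for args, expected in test_data:
    actual = add(*args)
    verdict = "PASS" if actual == expected else "FAIL"
    report.append((args, expected, actual, verdict))

for row in report:
    print(row)
```

Because the data and comparisons are scripted, this run can be repeated unattended at any time of day, which is the core advantage the text describes.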
Why Transform From Manual to Automated Testing?
In 1994, an aircraft crashed just before landing while completing a routine flight, due to a software
defect that was not caught in final testing. Incidents like this are why some of the mandatory manual
tests need to be supplemented or replaced by automation testing. Below are some of the reasons for
using automation testing:
Quality Assurance: Manual testing is a tedious task that can be boring and at the same time error-
prone. Thus, using automation testing improves the quality of the software under test as more test
coverage can be achieved.
Error or Bug-free Software: Automation testing is more efficient for detecting bugs in comparison
to manual testing.
No Human Intervention: Manual testing requires huge manpower in comparison to automation
testing which requires no human intervention and the test cases can be executed unattended.
Increased test coverage: Automation testing ensures more test coverage in comparison to manual
testing where it is not possible to achieve 100% test coverage.
Testing can be done frequently: Automation testing means that the testing can be done frequently
thus improving the overall quality of the software under test.
Manual Testing vs Automated Testing
Below are some of the differences between manual testing and automated testing: manual testing is
slower and more error-prone, while automated testing is faster and more reliable.
Test Procedures, Test Case Organization and Tracking, Bug Reporting, Bug Life
Cycle
Black box testing is also known as closed box/data-driven testing, while white box testing is also
known as clear box/structural testing.
Black box testing can be performed by end users, testers, and developers; white box testing is
normally done by testers and developers.
In black box testing, testing can only be done by a trial-and-error method; in white box testing, data
domains and internal boundaries can be better tested.
What are different levels of software testing?
Software level testing can be majorly classified into 4 levels:
1. Unit Testing: A level of the software testing process where individual units/components of a
software/system are tested. The purpose is to validate that each unit of the software performs as
designed.
2. Integration Testing: A level of the software testing process where individual units are combined
and tested as a group. The purpose of this level of testing is to expose faults in the interaction between
integrated units.
3. System Testing: A level of the software testing process where a complete, integrated
system/software is tested. The purpose of this test is to evaluate the system’s compliance with the
specified requirements.
4. Acceptance Testing: A level of the software testing process where a system is tested for
acceptability. The purpose of this test is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery.
UNIT IV ADVANCED TESTING CONCEPTS
Performance Testing
Performance Testing is a type of software testing that ensures that software applications
perform properly under their expected workload. It is a testing technique carried out to
determine system performance in terms of sensitivity, reactivity and stability under a
particular workload.
Performance testing is a type of software testing that focuses on evaluating the performance
and scalability of a system or application. The goal of performance testing is to identify
bottlenecks, measure system performance under various loads and conditions, and ensure
that the system can handle the expected number of users or transactions.
Load testing: Load testing simulates a real-world load on the system to see how it
performs under stress. It helps identify bottlenecks and determine the maximum number
of users or transactions the system can handle.
Stress testing: Stress testing is a type of load testing that tests the system’s ability to
handle a high load above normal usage levels. It helps identify the breaking point of the
system and any potential issues that may occur under heavy load conditions.
Spike testing: Spike testing is a type of load testing that tests the system’s ability to
handle sudden spikes in traffic. It helps identify any issues that may occur when the
system is suddenly hit with a high number of requests.
Soak testing: Soak testing is a type of load testing that tests the system’s ability to
handle a sustained load over a prolonged period of time. It helps identify any issues that
may occur after prolonged usage of the system.
Endurance testing: This type of testing is similar to soak testing, but it focuses on the
long-term behavior of the system under a constant load.
Performance Testing is the process of analyzing the quality and capability of a product.
It is a testing method performed to determine the system performance in terms of speed,
reliability and stability under varying workload. Performance testing is also known
as Perf Testing
Performance Testing Attributes:
Speed:
It determines whether the software product responds rapidly.
Scalability:
It determines the amount of load the software product can handle at a time.
Stability:
It determines whether the software product is stable in case of varying workloads.
Reliability:
It determines whether the software product performs consistently without failure over a
given period.
Objective of Performance Testing:
1. The objective of performance testing is to eliminate performance congestion.
2. It uncovers what is needed to be improved before the product is launched in market.
3. The objective of performance testing is to make software rapid.
4. The objective of performance testing is to make software stable and reliable.
5. The objective of performance testing is to evaluate the performance and scalability of a
system or application under various loads and conditions. It helps identify bottlenecks,
measure system performance, and ensure that the system can handle the expected
number of users or transactions. It also helps to ensure that the system is reliable, stable
and can handle the expected load in a production environment.
Types of Performance Testing:
1. Load testing:
It checks the product’s ability to perform under anticipated user loads. The objective is
to identify performance congestion before the software product is launched in market.
2. Stress testing:
It involves testing a product under extreme workloads to see whether it handles high
traffic or not. The objective is to identify the breaking point of a software product.
3. Endurance testing:
It is performed to ensure the software can handle the expected load over a long period of
time.
4. Spike testing:
It tests the product’s reaction to sudden large spikes in the load generated by users.
5. Volume testing:
In volume testing, a large volume of data is stored in a database and the overall software
system’s behavior is observed. The objective is to check the product’s performance under
varying database volumes.
6. Scalability testing:
In scalability testing, a software application’s effectiveness in scaling up to support an
increase in user load is determined. It helps in planning capacity additions to your software
system.
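As a toy illustration of load testing, the sketch below fires 100 concurrent simulated requests and reports latency. `handle_request` is a stand-in for a real endpoint; a real load test would use a dedicated tool such as those listed below:

```python
# A toy load-test sketch: N concurrent "requests" with latency measurement.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stand-in for a real request handler; sleeps to simulate server work."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

# 20 worker threads push 100 requests through the "system".
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(handle_request, range(100)))

print(f"requests:    {len(latencies)}")
print(f"avg latency: {sum(latencies) / len(latencies):.4f}s")
print(f"max latency: {max(latencies):.4f}s")
```

Stress, spike, and soak variants follow from the same skeleton by raising the load past normal levels, bursting it suddenly, or sustaining it for a long period.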
Performance Testing Tools:
1. Jmeter
2. Open STA
3. Load Runner
4. Web Load
Advantages of Performance Testing :
Performance testing ensures the speed, load capability, accuracy and other performances
of the system.
It identifies, monitors and resolves the issues if anything occurs.
It ensures great optimization of the software and also allows a large number of users to
use it at the same time.
It ensures client as well as end-customer satisfaction.
Performance testing has several advantages that make it an important aspect of software testing:
Identifying bottlenecks: Performance testing helps identify bottlenecks in the system
such as slow database queries, insufficient memory, or network congestion. This helps
developers optimize the system and ensure that it can handle the expected number of
users or transactions.
Improved scalability: By identifying the system’s maximum capacity, performance
testing helps ensure that the system can handle an increasing number of users or
transactions over time. This is particularly important for web-based systems and
applications that are expected to handle a high volume of traffic.
Improved reliability: Performance testing helps identify any potential issues that may
occur under heavy load conditions, such as increased error rates or slow response times.
This helps ensure that the system is reliable and stable when it is deployed to
production.
Reduced risk: By identifying potential issues before deployment, performance testing
helps reduce the risk of system failure or poor performance in production.
Cost-effective: Performance testing is more cost-effective than fixing problems that
occur in production. It is much cheaper to identify and fix issues during the testing phase
than after deployment.
Improved user experience: By identifying and addressing bottlenecks, performance
testing helps ensure that users have a positive experience when using the system. This
can help improve customer satisfaction and loyalty.
Better Preparation: Performance testing can also help organizations prepare for
unexpected traffic patterns or changes in usage that might occur in the future.
Compliance: Performance testing can help organizations meet regulatory and industry
standards.
Better understanding of the system: Performance testing provides a better understanding
of how the system behaves under different conditions, which can help in identifying
potential problem areas and improving the overall design of the system.
Disadvantages of Performance Testing :
Sometimes, users may still find performance issues in the real (production) environment.
Team members who write test scripts or test cases in the automation tool need a
high level of knowledge.
Team members need high proficiency to debug the test cases or test scripts.
Low performance in the real environment may lead to losing a large number of users.
Performance testing also has some disadvantages, which include:
Resource-intensive: Performance testing can be resource-intensive, requiring significant
hardware and software resources to simulate a large number of users or transactions.
This can make performance testing expensive and time-consuming.
Complexity: Performance testing can be complex, requiring specialized knowledge and
expertise to set up and execute effectively. This can make it difficult for teams with
limited resources or experience to perform performance testing.
Limited testing scope: Performance testing is focused on the performance of the system
under stress, and it may not be able to identify all types of issues or bugs. It’s important
to combine performance testing with other types of testing such as functional testing,
regression testing, and acceptance testing.
Inaccurate results: If the performance testing environment is not representative of the
production environment or the performance test scenarios do not accurately simulate
real-world usage, the results of the test may not be accurate.
Difficulty in simulating real-world usage: It’s difficult to simulate real-world usage, and
it’s hard to predict how users will interact with the system. This makes it difficult to
know if the system will handle the expected load.
Complexity in analyzing the results: Performance testing generates a large amount of
data, and it can be difficult to analyze the results and determine the root cause of
performance issues.
Load Testing
Load Testing is a type of Performance Testing that determines the performance of a system,
software product, or software application under real-life based load conditions. Basically, load
testing determines the behavior of the application when multiple users use it at the same time.
Load testing measures the response of the system under varying load conditions; it is
carried out for both normal and extreme load conditions.
Load testing is a type of performance testing that simulates a real-world load on a system or
application to see how it performs under stress. The goal of load testing is to identify
bottlenecks and determine the maximum number of users or transactions the system can
handle. It is an important aspect of software testing as it helps ensure that the system can
handle the expected usage levels and identify any potential issues before the system is
deployed to production.
During load testing, various scenarios are simulated to test the system’s behavior under
different load conditions. This can include simulating a high number of concurrent users,
simulating a large number of requests, and simulating heavy network traffic. The system’s
performance is then measured and analyzed to identify any bottlenecks or issues that may
occur.
Load testing is closely related to other performance test types, such as:
Stress testing: Testing the system's ability to handle a high load above normal usage
levels
Spike testing: Testing the system’s ability to handle sudden spikes in traffic
Soak testing: Testing the system’s ability to handle a sustained load over a prolonged
period of time
Tools such as Apache JMeter, LoadRunner, Gatling, and Grinder can be used to simulate
load and measure system performance. It’s important to ensure that the load testing is done
in an environment that closely mirrors the production environment to get accurate results.
Objectives of Load Testing: The objectives of load testing are:
To maximize the operating capacity of a software application.
To determine whether the current infrastructure is capable of running the software
application.
To determine the sustainability of the application under extreme user load.
To find out the total number of users that can access the application at the same time.
To determine the scalability of the application.
To allow more users to access the application.
Load Testing Process:
1. Test Environment Setup: First, create a dedicated test environment for
performing the load testing. This ensures that testing is done in a proper way.
2. Load Test Scenario: In the second step, load test scenarios are created. The load
testing transactions are then determined for the application, and data is prepared for
each transaction.
3. Test Scenario Execution: The load test scenarios created in the previous step are
now executed. Different measurements and metrics are gathered to collect the
information.
4. Test Result Analysis: The results of the testing are analyzed and various
recommendations are made.
5. Re-test: If the test fails, it is performed again in order to get correct results.
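The execution step (step 3) can be sketched as a tiny load-test harness: a pool of simulated concurrent users issues requests, and the per-request response times are collected for the analysis step. The request itself is a stub here (an assumption for illustration); in a real test it would be an HTTP call to the system under test.

```python
import time
import random
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real call to the system under test."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated server latency

def run_load_test(concurrent_users=20, requests_per_user=5):
    """Run the load scenario and collect per-request response times."""
    timings = []

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            fake_request()
            timings.append(time.perf_counter() - start)  # list.append is thread-safe in CPython

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users):
            pool.submit(user_session)
    return timings

timings = run_load_test()
print(f"{len(timings)} requests, avg {sum(timings) / len(timings):.4f}s")
```

Dedicated tools like JMeter or LoadRunner do the same thing at much larger scale, with realistic protocols and reporting.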
Metrics of Load Testing:
Metrics are used to know the performance of the system under different load
conditions. They tell how accurately the application works under different test cases.
Metric collection is usually carried out after the preparation of load test
scripts/cases. There are many metrics to evaluate load testing; some of them are
listed below.
1. Average Response Time: It tells the average time taken to respond to requests
generated by clients, customers, or users. It also shows the speed of the application,
based on the time taken to respond to all the requests generated.
2. Error Rate: The error rate, expressed as a percentage, denotes the number of errors
that occurred during requests relative to the total number of requests. These errors are
usually raised when the application can no longer handle the request at the given time,
or because of some other technical problem. The application becomes less efficient as
the error rate keeps increasing.
3. Throughput: This metric indicates the amount of bandwidth consumed during the
load scripts or tests, and the amount of data that flows between the user's client and
the application server while handling requests. It is measured in kilobytes per second.
4. Requests Per Second: It tells how many requests are being sent to the application
server per second. The requests could be for anything: images, documents, web pages,
articles, or any other resource.
5. Concurrent Users: This metric counts the users who are actively present at a
particular time, or at any time. It keeps track of those who are visiting the application
at any time, even without raising any request in the application. From this, we can
easily know at which times a high number of users visit the application or website.
6. Peak Response Time: Peak response time measures the longest time taken to handle
a request. It helps in finding the duration of the peak (longest) request-response cycle
and identifying which resource takes the longest time to respond.
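Several of the metrics above can be computed directly from raw request records. A sketch, using invented sample data of the form (timestamp in seconds, response time in seconds, success flag):

```python
# Each record: (timestamp_sec, response_time_sec, succeeded) — invented sample data
records = [
    (0.0, 0.120, True), (0.2, 0.340, True), (0.5, 0.095, False),
    (1.1, 0.210, True), (1.4, 0.180, True), (1.9, 0.600, False),
]

n = len(records)
avg_response = sum(r[1] for r in records) / n                 # 1. Average Response Time
error_rate = 100 * sum(1 for r in records if not r[2]) / n    # 2. Error Rate (%)
duration = records[-1][0] - records[0][0]                     # test window in seconds
requests_per_sec = n / duration                               # 4. Requests Per Second
peak_response = max(r[1] for r in records)                    # 6. Peak Response Time

print(f"avg={avg_response:.4f}s errors={error_rate:.1f}% "
      f"rps={requests_per_sec:.2f} peak={peak_response:.3f}s")
```

Throughput would additionally need per-request byte counts, and concurrent users would need session start/end times, which this minimal record shape omits.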
Load Testing Tools:
1. Apache JMeter
2. WebLOAD
3. NeoLoad
4. LoadNinja
5. HP Performance Tester
6. LoadUI Pro
7. LoadView
Volume Testing
Volume Testing is a type of software testing which is carried out to test a software
application with a certain amount of data. The amount used in volume testing could be a
database size or it could also be the size of an interface file that is the subject of volume
testing.
While testing the application with a specific database size, the database is extended to
that size and then the performance of the application is tested. When the application
needs to interact with an interface file, this could mean either reading from or writing
to the file. A sample file of the required size is created, and then the functionality of
the application is tested with that file in order to test the performance.
In volume testing a huge volume of data is acted upon the software. It is basically
performed to analyze the performance of the system by increasing the volume of data in the
database. Volume testing is performed to study the impact on response time and behavior of
the system when the volume of data is increased in the database.
Volume Testing is also known as Flood Testing.
Characteristics of Volume Testing:
Following are the characteristics of the Volume Testing:
Performance of the software declines over time as a huge amount of data
accumulates.
Basically, the test data is created by a test data generator.
Only a small amount of data is tested during the development phase.
The test data needs to be logically correct.
The test data is used to assess the performance of the system.
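The characteristics above (generated, logically correct test data; behavior observed as the database grows) can be sketched with an in-memory SQLite database; the table name, row shape, and row count are illustrative assumptions:

```python
import sqlite3
import random
import string
import time

def generate_rows(n):
    """Test-data generator: n synthetic but logically valid customer rows."""
    for i in range(n):
        name = "".join(random.choices(string.ascii_lowercase, k=8))
        yield (i, name, random.randint(18, 90))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", generate_rows(50_000))

# Observe behavior under volume: time a query against the grown table
start = time.perf_counter()
over_40 = conn.execute("SELECT COUNT(*) FROM customers WHERE age > 40").fetchone()[0]
print(f"{over_40} rows matched in {time.perf_counter() - start:.4f}s")
```

Repeating the same timed query at increasing row counts is the essence of the volume test: it shows how response time degrades as the database grows.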
Objectives of Volume Testing:
The objectives of volume testing are:
To recognize the problems that may arise with a large amount of data.
To check the system's performance when the volume of data in the database is
increased.
To find the point at which the stability of the system degrades.
To identify the capacity of the system or application.
Volume Testing Attributes:
Following are the important attributes that are checked during the volume testing:
System's Response Time:
During volume testing, the response time of the system or application is tested.
It is also checked whether the system responds within a finite time. If the
response time is too long, the system is redesigned.
Data Loss:
During volume testing, it is also verified that there is no data loss. If there is data
loss, some key information might be missing.
Data Storage:
During volume testing, it is also checked whether the data is stored correctly. If the
data is not stored correctly, it is restored accordingly in the proper place.
Data Overwriting:
In volume testing, it is checked whether data is overwritten without prior notice
to the developer. If so, the developer is notified.
Volume Testing is a type of Performance Testing.
Advantages of Volume Testing:
Volume testing helps in saving the maintenance cost that would otherwise be spent
on application maintenance.
Volume testing is also helpful as a rapid start for scalability plans.
Volume testing also helps in early identification of bottlenecks.
Volume testing ensures that the system is capable of real-world usage.
Disadvantages of Volume Testing:
A larger number of skilled resources is needed to carry out this testing.
It is sometimes difficult to prepare test cases with respect to the volume of data to
be tested.
It is a time-consuming technique, since it requires a lot of time to decide the
volume of data and the test scenarios.
It is a bit costly compared to other testing techniques.
It is not possible to have an exact breakdown of the memory used in the real-world
application.
Fail-Over Testing
Software products/services are tested multiple times before delivery to
ensure that they provide the required service. Testing before delivery doesn't guarantee
that no problem will occur in the future. Sometimes the software application fails due
to an unwanted event such as network issues or server-related problems. Failover
testing aims to address these types of failures.
Suppose the PC turns off due to some technical issue, and on restarting we open the
browser; a pop-up appears asking "Do you want to restore all pages?" On clicking
restore, all tabs are restored. The process of ensuring such restorations is known as
FAILOVER TESTING.
Failover Testing :
Failover testing is a technique that validates whether a system can allocate extra
resources and back up all its information and operations when it fails abruptly for
some reason. This test determines the ability of a system to handle critical failures and
manage extra servers, so the testing is independent of the physical hardware
components of a server.
It is preferred that testing be performed on pairs of servers. Active-active and active-
passive standby are the two most common configurations. The two techniques achieve
failover in very different ways, but both are used to improve the server's reliability.
For example, if we have three servers and one of them fails due to heavy load, then
two situations can occur: either the failed server restarts on its own, or, when it cannot
be restarted, the remaining servers handle the load. Such situations are tested during
this test.
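The three-server scenario above can be sketched as an active-passive failover: requests go to the primary, and when it cannot respond they are routed to a standby. The server names and the health flag below are illustrative, not from any real configuration.

```python
class Server:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def route(request, servers):
    """Try servers in priority order; on failure, fail over to the next one."""
    for server in servers:
        try:
            return server.handle(request)
        except ConnectionError:
            continue  # failover: this server is down, try the standby
    raise RuntimeError("all servers failed")

# The primary is down due to heavy load; a standby picks up the request
cluster = [Server("primary", healthy=False), Server("standby-1"), Server("standby-2")]
print(route("GET /home", cluster))  # prints "standby-1 served GET /home"
```

A failover test would exercise exactly these paths: kill the primary, confirm the standby answers, then restore the primary and confirm traffic returns to it.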
Considerable Factors Before Performing Failover Testing :
1. The budget has to be the first thing to be taken into consideration before thinking about
performing the Failover test.
2. The budget is connected to the frameworks that might crash or break down under
pressure/load.
3. Always keep in mind how much time it will take to fix all of the issues caused by the
failure of the system.
4. Note down the most likely failures and organize the outcomes according to how much
harm is caused by the failure.
Considerable Factors While Performing Failover Testing :
1. Keep a plan of measures to be taken after performing a test.
2. Focus on the execution of the test plan.
3. Set up a benchmark so that performance requirements can be achieved.
4. Prepare a report concerning issue requirements and/or requirements of the asset.
Working of Failover testing :
1. Consider the factors before performing failover testing like budget, time, team,
technology, etc.
2. Perform analysis on failover reasons and design solutions.
3. Develop test cases to test failover scenarios.
4. Based on the result execute the test plan.
5. Prepare a detailed report on failover.
6. Take necessary actions based on the report.
Benefits of Failover Testing :
1. Allows users to configure everything, like user access, network settings, and so on.
2. Ensures that the configuration made is working properly.
3. All the faults are easily resolved in the system’s server beforehand.
4. Provides better services so that users’ servers can run smoothly.
5. Ensures no loss during downtime.
Examples of Failover Testing :
1. Banking and Financial applications
2. Telecom applications
3. Visa applications
4. Trading applications
5. Emergency service business applications
6. Government applications
7. Defense service-related applications
Recovery Testing
Recovery testing is a type of system testing which aims at testing
whether a system can recover from failures or not. The technique involves failing the
system and then verifying that system recovery is performed properly.
To ensure that a system is fault-tolerant and can recover well from failures, recovery testing
is important to perform. A system is expected to recover from faults and resume its work
within a pre-specified time period. Recovery testing is essential for any mission-critical
system, for example, the defense systems, medical devices, etc. In such systems, there is a
strict protocol that is imposed on how and within what time period the system should
recover from failure and how the system should behave during the failure.
A system or software should be recovery tested for failures like :
Power supply failure
The external server is unreachable
Wireless network signal loss
Physical conditions
The external device not responding
The external device is not responding as expected, etc.
Steps to be performed before executing a Recovery Test :
A tester must ensure that the following steps are performed before carrying out the
Recovery testing procedure :
1. Recovery Analysis –
It is important to analyze the system’s ability to allocate extra resources like servers or
additional CPUs. This would help to better understand the recovery-related changes that
can impact the working of the system. Also, each of the possible failures, their possible
impact, their severity, and how to perform them should be studied.
2. Test Plan preparation –
Designing the test cases keeping in mind the environment and results obtained in
recovery analysis.
3. Test environment preparation –
Designing the test environment according to the recovery analysis results.
4. Maintaining Back-up –
Information related to the software, like various states of the software and database
should be backed up. Also, if the data is important, then the backing up of the data at
multiple locations is important.
5. Recovery personnel Allocation –
For the recovery testing process, it is important to allocate recovery personnel who
are sufficiently aware of and educated about the recovery testing being conducted.
6. Documentation –
This step emphasizes on documenting all the steps performed before and during the
recovery testing so that the system can be analyzed for its performance in case of a
failure.
Example of Recovery Testing :
When a system is receiving data over a network for processing, we can
simulate a failure by unplugging the system's power. After a while, we can plug
the system in again and test its ability to recover and continue receiving the data from
where it stopped.
Another example: when a browser is working with multiple sessions, we can
simulate a failure by restarting the system. After restarting, we can
check whether it recovers from the failure and reloads all the sessions it was
previously working on.
While downloading a movie over a Wi-Fi network, if we move to a place where there
is no network, the download is interrupted. To check whether the process recovers
from the interruption and continues working as before, we move back to a place with
Wi-Fi coverage. If the download resumes, the software has good recovery behavior.
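The download example above boils down to keeping a checkpoint of how far the transfer got and resuming from that offset after the failure, rather than restarting from zero. A sketch with simulated data (the chunk size and failure point are arbitrary assumptions):

```python
import io

FULL_FILE = bytes(range(256)) * 4  # the "movie" being downloaded (1024 bytes)

def download(source, dest, checkpoint, fail_after=None):
    """Copy from offset `checkpoint`; optionally fail part-way (the outage)."""
    pos = checkpoint
    while pos < len(source):
        if fail_after is not None and pos >= fail_after:
            return pos  # network lost: report how far we got
        chunk = source[pos:pos + 64]
        dest.write(chunk)
        pos += len(chunk)
    return pos

dest = io.BytesIO()
checkpoint = download(FULL_FILE, dest, 0, fail_after=300)   # interrupted transfer
checkpoint = download(FULL_FILE, dest, checkpoint)          # recovery: resume from checkpoint
print(checkpoint == len(FULL_FILE), dest.getvalue() == FULL_FILE)
```

A recovery test would verify exactly the two conditions printed at the end: the transfer completes, and the recovered file is byte-for-byte identical to the original.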
Advantages of Recovery Testing :
Improves the quality of the system by eliminating the potential flaws in the system so
that the system works as expected.
Recovery testing is also referred to as Disaster Recovery Testing. A lot of companies
have disaster recovery centers to make sure that if any of the systems is damaged or fails
due to some reason, then there is back up to recover from the failure.
Risk elimination is possible as the potential flaws are detected and removed from the
system.
Improved performance as faults are removed and the system becomes more reliable
and performs better in case a failure occurs.
Disadvantages of Recovery Testing:
Recovery testing is a time-consuming process, as it involves multiple steps and
preparations before and during the process.
The recovery personnel must be trained, as recovery testing takes place under their
supervision; the tester needs training to ensure that recovery testing is performed
properly, and must have enough data and backup files to perform it.
The potential flaws or issues are unpredictable in some cases, and it is difficult to
point out their exact cause. However, since the quality of the software must be
maintained, random test cases are created and executed to ensure such potential flaws
are removed.
Configuration Testing
Configuration Testing is the type of Software Testing which verifies the performance of
the system under development against various combinations of software and hardware to
find out the best configuration under which the system can work without any flaws or
issues while matching its functional requirements.
Configuration Testing is the process of testing the system under each configuration of
the supported software and hardware. Here, the different configurations of hardware
and software mean multiple operating system versions, various browsers, various
supported drivers, distinct memory sizes, different hard drive types, various types of
CPU, etc.
Various Configurations:
Operating System Configuration:
Win XP, Win 7 32/64 bit, Win 8 32/64 bit, Win 10 etc.
Database Configuration:
Oracle, DB2, MySql, MSSQL Server, Sybase etc.
Browser Configuration:
IE 8, IE 9, FF 16.0, Chrome, Microsoft Edge etc.
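Enumerating every combination of the configurations listed above gives the test matrix that configuration testing has to cover. A sketch (the specific OS/database/browser names are just examples drawn from the lists above):

```python
from itertools import product

operating_systems = ["Win 7 64-bit", "Win 8 64-bit", "Win 10"]
databases = ["Oracle", "MySQL", "MSSQL Server"]
browsers = ["Firefox", "Chrome", "Microsoft Edge"]

# Full cross-product: every OS x database x browser combination
configurations = list(product(operating_systems, databases, browsers))
print(len(configurations))  # 3 * 3 * 3 = 27 combinations to cover
for os_name, db, browser in configurations[:2]:
    print(f"run suite on {os_name} / {db} / {browser}")
```

The matrix grows multiplicatively with each new axis, which is why real projects often prune it (e.g. pairwise testing) rather than running every combination.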
Objectives of Configuration Testing:
The objectives of configuration testing are:
To determine whether the software application fulfills the configurability requirements.
To identify defects that were not efficiently found during other testing processes.
To determine an optimal configuration of the application under test.
To analyze the performance of the software application by changing the hardware and
software resources.
To analyze the system's efficiency based on the prioritization.
To verify how easily bugs are reproducible irrespective of configuration changes.
Types of Configuration Testing:
Configuration testing is of 2 types:
1. Software Configuration Testing:
Software configuration testing is done on the Application Under Test with various
operating system versions, various browser versions, etc. It is time-consuming, as
it takes a long time to install and uninstall the various software to be used for
testing. When a build is released, software configuration testing begins after it
passes the unit and integration tests.
2. Hardware Configuration Testing:
Hardware configuration testing is typically performed in labs where physical machines
are used with various hardware connected to them.
When a build is released, the software is installed on all the physical machines to
which the hardware is attached, and the test is carried out on each and every machine
to confirm that the application works fine. When doing a hardware configuration test,
the kind of hardware to be tested is spelled out, and there are so many kinds of
computer hardware and peripherals that it is next to impossible to execute all the tests.
Configuration Testing can also be classified into following 2 types:
1. Client level Testing:
Client level testing is associated with usability and functionality testing. This testing
is done from the point of view of the direct interest of the users.
2. Server level Testing:
Server level testing is carried out to determine the communication between the software
and the external environment when it is planned to be integrated after the release.
Compatibility Testing
Compatibility testing :
Compatibility testing is software testing which comes under the non-functional
testing category. It is performed on an application to check its compatibility (running
capability) on different platforms/environments. This testing is done only once the
application becomes stable. Simply put, this compatibility test aims to check the
developed software application's functionality on various software and hardware
platforms, networks, browsers, etc. Compatibility testing is very important from the
product production and implementation point of view, as it is performed to avoid
future compatibility issues.
Types of Compatibility Testing :
Several examples of compatibility testing are given below.
1. Software :
Testing the compatibility of an application with an Operating System like Linux, Mac,
Windows
Testing compatibility with databases like Oracle, SQL Server, MongoDB.
Testing compatibility on different devices like in mobile phones, computers.
Types based on Version Testing :
There are two types of compatibility testing based on version testing
1. Forward compatibility testing: When the behavior and compatibility of software or
hardware is checked against its newer version, it is called forward compatibility
testing.
2. Backward compatibility testing: When the behavior and compatibility of software
or hardware is checked against its older version, it is called backward compatibility
testing.
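Both directions can be sketched as a single version-range check. Here a hypothetical product spec (an assumption for illustration) states which data-format versions an application release can read; reading older data exercises backward compatibility, while being handed newer data exercises forward compatibility.

```python
def parse(version):
    """'2.10' -> (2, 10), so versions compare numerically, not as strings."""
    return tuple(int(part) for part in version.split("."))

def is_compatible(data_version, supported_range):
    """True if data written by `data_version` falls inside the supported range."""
    low, high = supported_range
    return parse(low) <= parse(data_version) <= parse(high)

spec = ("1.0", "2.5")  # hypothetical: app release 2.5 reads formats 1.0 through 2.5

print(is_compatible("1.2", spec))  # backward compatibility: old data, new app -> True
print(is_compatible("3.0", spec))  # forward compatibility: data newer than the app -> False
```

Note the tuple comparison: treating "2.10" as a string would sort it before "2.9", which is a classic compatibility-test bug.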
2. Hardware :
Checking compatibility with a particular size of
RAM
ROM
Hard Disk
Memory Cards
Processor
Graphics Card
3. Smartphones :
Checking compatibility with different mobile platforms like Android, iOS, etc.
4. Network:
Checking compatibility with different :
Bandwidth
Operating speed
Capacity
Along with these, other types of compatibility testing are also performed, such as
browser compatibility (checking software compatibility with different browsers like
Google Chrome, Internet Explorer, etc.), device compatibility, version compatibility,
and others.
So far we have seen the uses of compatibility testing in different fields. Now the
question arises: how to perform a compatibility test?
How to perform Compatibility testing?
Testing the same version of the application in different environments. For example, to
test the compatibility of the Facebook application on your Android mobile, first
check its compatibility with Android 9.0 and then with Android 10.0, using the same
version of the Facebook app.
Testing different versions of the application in the same environment. For example,
first check the compatibility of a lower version of the Facebook application with
Android 10.0 (or a version of your choice), and then a higher version of the Facebook
application with the same version of Android.
Why is compatibility testing important?
1. It ensures complete customer satisfaction.
2. It provides service across multiple platforms.
3. It identifies bugs during the development process.
Compatibility testing defects :
1. Variety of user interface.
2. Changes with respect to font size.
3. Alignment issues.
4. Issues related to existence of broken frames.
5. Issues related to overlapping of content.
Usability Testing
Several tests are performed on a product before deploying it. You need to collect
qualitative and quantitative data and satisfy customers' needs with the product. A
proper final report is made mentioning the changes required in the product (software).
Usability testing in software testing is a type of testing that is done from an end
user's perspective to determine if the system is easily usable. Usability testing is
generally the practice of testing how easy a design is to use with a group of
representative users. A very common mistake in usability testing is conducting a study
too late in the design process. If you wait until right before your product is released,
you won't have the time or money to fix any issues, and you'll have wasted a lot of
effort developing your product the wrong way.
1. Guerilla Testing
It is a type of testing where testers go to public places and ask random users about the
prototype. Also, a thank-you gift is offered to users as a token of gratitude. It is the
best way to perform usability testing during the early phases of the product
development process. Users generally spare 5-10 minutes and give instant feedback on
the product. Also, the cost is comparatively low, as you don't need to hire participants.
It is also known as corridor or hallway testing.
2. Usability Lab
Usability lab testing is conducted in a lab environment where moderators (who ask for
feedback on the product) hire participants and ask them to take a survey on the product.
This test is performed on a tablet/desktop. The participant count can be 8-10 which is a bit
costlier than Guerilla testing as you need to hire participants, arrange a place, and conduct
testing.
3. Screen Recording
In screen or video recording testing, the screen is recorded as the user acts (navigating
and using the product). This testing describes how the user's mind works while using a
product. It typically involves around 10 participants for 15 minutes each. It helps in
describing the issues users may face while interacting with the product.
Generally, there are two kinds of studies in usability testing –
1. Moderated – a moderator guides the participant regarding the changes required in
the product (software).
2. Unmoderated – there is no moderator (no human guidance); participants get a set
of questions on which they have to work.
While performing usability testing, all kinds of biases (be it friendly bias, social bias,
etc.) on the part of the participants are avoided, in order to obtain honest feedback on
the product and improve it.
Need for Usability Testing
Usability testing provides several benefits; its main purpose is to identify usability
problems with a design as early as possible, so they can be fixed before the design is
implemented or mass-produced. As such, usability testing is often conducted on
prototypes rather than finished products, with different levels of fidelity depending on
the development phase.
Why Usability Testing?
When software is ready, it is important to make sure that the user experience with the
product is seamless. It should be easy to navigate, and all the functions should work
properly; otherwise the competitor's website will win the race. Therefore, usability
testing is performed. The objective of usability testing is to understand customers'
needs and requirements, and also how users interact with the product (software). With
the test, all the features, functions, and purposes of the software are checked.
The primary goals of usability testing are discovering problems (hidden issues) and
opportunities, comparing against benchmarks, and comparing against other websites.
The parameters tested during usability testing are efficiency, effectiveness, and
satisfaction. It should be performed before any new design is finalized, and iterated
until all the necessary changes have been made. Improving the site consistently by
performing usability testing enhances its performance, which in turn makes it the best
website.
Pros and Cons of Usability Testing
As every coin has two sides, usability testing has pros and cons. Some of the pros it has are:
Gives excellent features and functionalities to the product
Improves user satisfaction and fulfills requirements based on user’s feedback
The product becomes more efficient and effective
The biggest cons of usability testing are the cost and time. The more usability testing
is performed, the more cost and time are consumed.
Projects which contain all documents have a high level of maturity. Careful
documentation can save an organization's time, effort, and money.
Once the test document is ready, the entire test execution process depends on it. The
primary objective of writing a test document is to reduce or eliminate doubts about
the testing activities.
In software testing, we have various types of test document, which are as follows:
o Test scenarios
o Test case
o Test plan
o Requirement traceability matrix(RTM)
o Test strategy
o Test data
o Bug report
o Test execution report
Test Scenarios
It is a document that defines the multiple ways or combinations of testing the
application. Generally, it is prepared to understand the flow of the application. It does
not consist of any inputs or navigation steps.
Test case
It is a detailed document that describes the step-by-step procedure for testing an application. It
consists of the complete navigation steps and inputs and all the scenarios that need to be tested for
the application. We write test cases to maintain consistency, so that every tester follows the same
approach for organizing the test document.
Test plan
It is a document that is prepared by the managers or test lead. It consists of all information about the
testing activities. The test plan consists of multiple components such as Objectives, Scope,
Approach, Test Environments, Test methodology, Template, Role & Responsibility, Effort
estimation, Entry and Exit criteria, Schedule, Tools, Defect tracking, Test Deliverable,
Assumption, Risk, and Mitigation Plan or Contingency Plan.
Requirement traceability matrix (RTM)
The Requirement traceability matrix [RTM] is a document which ensures that all the test cases have
been covered. This document is created before the test execution process to verify that we did not
miss writing any test case for a particular requirement.
Test strategy
The test strategy is a high-level document, which is used to define the test types (levels) to be
executed for the product and also describes what kind of technique has to be used and which
module is going to be tested. The Project Manager can approve it. It includes multiple components
such as documentation formats, objective, test processes, scope, customer communication
strategy, etc. We cannot modify the test strategy.
Test data
It is the data that is prepared before the test is executed. It is mainly used while implementing the
test case. Mostly, we keep the test data in an Excel sheet and enter it manually while
executing the test case.
The test data can be used to check the expected result, which means that when the test data is entered,
the expected outcome should meet the actual result; it can also be used to check the application's
behavior when incorrect input data is entered.
Bug report
The bug report is a document where we maintain a summary of all the bugs which occurred during the
testing process. This is a crucial document for both the developers and test engineers because, with
the help of bug reports, they can easily track the defects, report the bug, change the status of bugs
which are fixed successfully, and also avoid their repetition in further process.
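The fields a bug report typically carries can be sketched as a simple record; the field set below is an illustrative assumption, not a fixed standard:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    # Illustrative fields; real defect trackers add many more.
    bug_id: str
    summary: str
    steps_to_reproduce: list
    severity: str            # e.g. S1..S3 (does not change often)
    priority: str            # e.g. P1..P3 (can change dynamically)
    status: str = "open"     # open -> fixed -> verified/closed, or reopened

report = BugReport(
    bug_id="BUG-101",        # hypothetical identifier
    summary="Login fails with valid credentials",
    steps_to_reproduce=["Open login page", "Enter valid credentials", "Click Login"],
    severity="S1",
    priority="P1",
)
report.status = "fixed"      # developer fixes it; the tester then re-verifies
```

Tracking each defect as a structured record like this is what lets both developers and test engineers change its status and avoid filing duplicates.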
Test execution report
It is the document prepared by test leads after the entire test execution process is completed. The
test summary report describes the stability of the product, and it contains information like the
modules, the number of test cases written, executed, passed, and failed, and their percentages. Each
module has a separate spreadsheet for its respective data.
Security testing
Security Testing is a type of Software Testing that uncovers vulnerabilities of the system
and determines that the data and resources of the system are protected from possible
intruders. It ensures that the software system and application are free from any threats or
risks that can cause a loss. Security testing of any system is focused on finding all possible
loopholes and weaknesses of the system which might result in the loss of information or
repute of the organization. Security testing is a type of software testing that focuses on
evaluating the security of a system or application. The goal of security testing is to identify
vulnerabilities and potential threats, and to ensure that the system is protected against
unauthorized access, data breaches, and other security-related issues.
Goals of Security Testing: The goals of security testing are to:
Identify the threats in the system.
Measure the potential vulnerabilities of the system.
Help detect every possible security risk in the system.
Help developers fix the security problems through coding.
The main objectives of security testing are to:
Identify vulnerabilities: Security testing helps identify vulnerabilities in the system, such
as weak passwords, unpatched software, and misconfigured systems, that could be
exploited by attackers.
Evaluate the system’s ability to withstand an attack: Security testing evaluates the
system’s ability to withstand different types of attacks, such as network attacks, social
engineering attacks, and application-level attacks.
Ensure compliance: Security testing helps ensure that the system meets relevant security
standards and regulations, such as HIPAA, PCI DSS, and SOC2.
Provide a comprehensive security assessment: Security testing provides a
comprehensive assessment of the system’s security posture, including the identification
of vulnerabilities, the evaluation of the system’s ability to withstand an attack, and
compliance with relevant security standards.
Help organizations prepare for potential security incidents: Security testing helps
organizations understand the potential risks and vulnerabilities that they face, enabling
them to prepare for and respond to potential security incidents.
Identify and fix potential security issues before deployment to production: Security
testing helps identify and fix security issues before the system is deployed to production.
This helps reduce the risk of a security incident occurring in a production environment.
Principles of Security Testing: Below are the six basic principles of security testing:
Confidentiality
Integrity
Authentication
Authorization
Availability
Non-repudiation
Major Focus Areas in Security Testing:
Network Security
System Software Security
Client-side Application Security
Server-side Application Security
Authentication and Authorization: Testing the system’s ability to properly authenticate
and authorize users and devices. This includes testing the strength and effectiveness of
passwords, usernames, and other forms of authentication, as well as testing the system’s
access controls and permission mechanisms.
Network and Infrastructure Security: Testing the security of the system’s network and
infrastructure, including firewalls, routers, and other network devices. This includes
testing the system’s ability to defend against common network attacks such as denial of
service (DoS) and man-in-the-middle (MitM) attacks.
Database Security: Testing the security of the system’s databases, including testing for
SQL injection, cross-site scripting, and other types of attacks.
Application Security: Testing the security of the system’s applications, including testing
for cross-site scripting, injection attacks, and other types of vulnerabilities.
Data Security: Testing the security of the system’s data, including testing for data
encryption, data integrity, and data leakage.
Compliance: Testing the system’s compliance with relevant security standards and
regulations, such as HIPAA, PCI DSS, and SOC2.
Cloud Security: Testing the security of cloud-based services, data, and infrastructure.
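As an illustration of the database-security focus area above, here is a minimal sketch of how a test might probe for SQL injection, using Python's stdlib sqlite3 (the users table and its data are invented for illustration):

```python
import sqlite3

# Hypothetical users table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s3cret"), ("bob", "hunter2")])

malicious = "' OR '1'='1"

# Vulnerable: user input concatenated straight into the SQL string.
unsafe_sql = "SELECT name FROM users WHERE name = '%s'" % malicious
leaked = conn.execute(unsafe_sql).fetchall()   # the injection matches every row

# Safe: a parameterized query treats the input as data, not as SQL.
safe = conn.execute("SELECT name FROM users WHERE name = ?",
                    (malicious,)).fetchall()   # no rows match the literal string

print(len(leaked), len(safe))
```

A security test would flag the first pattern: the injected condition returns all rows, while the parameterized version correctly returns none.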
Types of Security Testing:
1. Vulnerability Scanning: Vulnerability scanning is performed with the help of
automated software to scan a system to detect the known vulnerability patterns.
2. Security Scanning: Security scanning is the identification of network and system
weaknesses. Later on it provides solutions for reducing these defects or risks. Security
scanning can be carried out in both manual and automated ways.
3. Penetration Testing: Penetration testing is the simulation of the attack from a malicious
hacker. It includes an analysis of a particular system to examine for potential
vulnerabilities from a malicious hacker that attempts to hack the system.
4. Risk Assessment: In risk assessment testing, security risks observed in the organization
are analyzed. Risks are classified into three categories, i.e., low, medium, and high. This
testing endorses controls and measures to minimize the risk.
5. Security Auditing: Security auditing is an internal inspection of applications and
operating systems for security defects. An audit can also be carried out via line-by-line
checking of code.
6. Ethical Hacking: Ethical hacking is different from malicious hacking. The purpose of
ethical hacking is to expose security flaws in the organization’s system.
7. Posture Assessment: It combines security scanning, ethical hacking, and risk
assessments to provide the overall security posture of an organization.
8. Application security testing: Application security testing is a type of testing that
focuses on identifying vulnerabilities in the application itself. It includes testing the
application’s code, configuration, and dependencies to identify any potential
vulnerabilities.
9. Network security testing: Network security testing is a type of testing that focuses on
identifying vulnerabilities in the network infrastructure. It includes testing firewalls,
routers, and other network devices to identify potential vulnerabilities.
10. Social engineering testing: Social engineering testing is a type of testing that simulates
phishing, baiting, and other types of social engineering attacks to identify vulnerabilities
in the system’s human element.
Tools such as Nessus, OpenVAS, and Metasploit can be used to automate and simplify
the process of security testing. It is important to ensure that security testing is done
regularly and that any vulnerabilities or threats identified during testing are fixed
immediately to protect the system from potential attacks.
Mobile App Testing vs Web App Testing:
1. Mobile apps are software programs that are used on mobile devices, whereas web apps are software programs that are used on computers.
4. It is not easy to create a responsive design for small-screen devices such as mobile phones and tablets, whereas it is easy to code a responsive design for large-screen devices such as desktops and laptops.
10. The mobile testing team has to focus on the interaction of mobile devices with the user's moves, voice, environment, eye movements, etc., as mobile offers a variety of options to perform operations; the web testing team does not need to consider the user's moves, direction of attention, eye movements, etc., as the web offers fewer such options.
11. Tools or frameworks for mobile app testing include Appium, Espresso, XCUITest, Xamarin, Robotium, and more; tools for web app testing include Selenium (the most popular among all the commercial tools), WebLOAD, Acunetix, Netsparker, and more.
Automation saves time, as software can execute test cases faster than humans do.
The time thus saved can be used effectively for test engineers to
1. develop additional test cases to achieve better coverage;
2. perform some esoteric or specialized tests like ad hoc testing; or
3. perform some extra manual testing.
The time saved in automation can also be utilized to develop additional test cases,
thereby improving the coverage of testing.
Test automation can free the test engineers from mundane tasks and let them
focus on more creative tasks. For example, ad hoc testing requires intuition and creativity to
test the product from those perspectives that may have been missed by planned test
cases. If there are too many planned test cases that need to be run manually and
adequate automation does not exist, then the test team may spend most of its time in
test execution.
Automating the more mundane tasks gives some time to the test engineers for
creativity and challenging tasks.
Automated tests can be more reliable - when an engineer executes a particular test
case many times manually, there is a chance of human error. As with all machine-oriented
activities, automation can be expected to produce more reliable results every
time, and it eliminates the factors of boredom and fatigue.
Automation helps in immediate testing -automation reduces the time gap between
development and testing as scripts can be executed as soon as the product build is
ready. Automated testing need not wait for the availability of test engineers.
Automation can protect an organization against attrition of test engineers
Automation can also be used as a knowledge transfer tool to train test engineers on
the product as it has a repository of different tests for the product.
Test automation opens up opportunities for better utilization of global resources
Manual testing requires the presence of test engineers, but automated tests can be
run round the clock, twenty-four hours a day and seven days a week.
This will also enable teams in different parts of the world, in different time zones, to
monitor and control the tests, thus providing round-the-clock coverage.
As we have seen earlier, testing involves several phases and several types of testing.
Some test cases are repeated several times during a product release, because the product is
built several times. The table describes some test cases for the login example, showing how
the login can be tested with different types of testing.
From the above table, it is observed that there are two important dimensions:
1) what operations have to be tested, and
2) how the operations have to be tested (scenarios).
When a set of test cases is combined and associated with a set of scenarios, it is
called a "test suite". A test suite is nothing but a set of automated test cases and the
scenarios associated with those test cases.
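The idea can be sketched with Python's stdlib unittest, grouping two invented login test cases into a suite (the login function is a stand-in for the application under test, and its credentials are made up):

```python
import unittest

def login(user, password):
    # Stand-in for the application under test; credentials are invented.
    return user == "admin" and password == "secret"

class LoginTests(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(login("admin", "secret"))

    def test_invalid_password(self):
        self.assertFalse(login("admin", "wrong"))

# The suite groups the automated test cases for the login scenarios.
suite = unittest.TestSuite()
suite.addTest(LoginTests("test_valid_credentials"))
suite.addTest(LoginTests("test_invalid_password"))

result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once grouped this way, the whole suite can be selected and executed as one unit by a framework or harness.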
[Figure: test automation components - user-defined scenarios and test cases feed into a framework/harness/test tool, which determines how to execute the tests and what a test should do]
c. Third generation-Action-driven
This technique enables a layman to create automated tests. There are no input
and expected output conditions required for running the tests.
All actions that appear on the application are automatically tested, based on a
generic set of controls defined for automation.
The set of actions are represented as objects and those objects are reused. The
user needs to specify only the operations and everything else that is needed for
those actions are automatically generated.
Hence, automation in the third generation involves two major aspects- “test case
automation” and “framework design”.
c. Functional tests: These kinds of tests may require a complex setup and thus require
specialized skill, which may not be available on an ongoing basis. Automating these
once, using the expert skill sets, enables less-skilled people to run these tests
on an ongoing basis.
2. Automating Areas Less Prone To Change
Automation should target those areas where requirements go through fewer or
no changes. Normally, changes in requirements impact scenarios and new features,
not the basic functionality of the product.
3. Automate Tests That Pertain to Standards
One of the tests that a product may have to undergo is compliance with standards. For
example, a product providing a JDBC interface should satisfy the standard JDBC tests.
Automating for standards provides a dual advantage. Test suites developed for
standards are not only used for product testing but can also be sold as test tools for
the market.
For Integration Testing, both internal interfaces and external interfaces have to be captured
by design and architecture. In this figure, the thin arrows represent the internal interfaces
and the direction of flow, and the thick arrows show the external interfaces. All the
modules, their purpose, and the interactions between them are described in the subsequent
sections.
The architecture for test automation involves two major heads: a test infrastructure that
covers a test case database (TCDB) and a defect database or defect repository. Using this
infrastructure, the test framework provides a backbone that ties together the selection and
execution of test cases.
1. External Modules
There are two modules that are external modules to automation-TCDB and defect
DB. All the test cases, the steps to execute them, and the history of their execution
are stored in the TCDB.
The test cases in TCDB can be manual or automated. The interface shown by thick
arrows represents the interaction between TCDB and the automation framework only
for automated test cases.
Defect DB or defect database or defect repository contains details of all the defects
that are found in various products that are tested in a particular organization. It
contains defects and all the related information. Test engineers submit the defects for
manual test cases.
For automated test cases, the framework can automatically submit the defects to the
defect DB during execution.
A setup for one test case may work negatively for another test case. Hence, it is
important not only to create the setup but also to undo it soon after the execution of the
test case.
Requirement 5: Independent test cases
Each test case should be executed alone; there should be no dependency between test
cases such as test case-2 to be executed after test case-1 and so on. This requirement enables
the test engineer to select and execute any test case at random without worrying about other
dependencies.
Requirement 6: Test case dependency
Making a test case dependent on another makes it necessary for a particular test case
to be executed before or after the test case it depends on, whenever the dependent test
case is selected for execution.
Step 1: The first step in a metrics program is to decide what measurements are important and collect
data accordingly. Examples of measurements: effort spent on testing, number of defects, number of test cases.
Step 2: This step deals with defining how to combine data points or measurements to provide
meaningful metrics. A particular metric can use one or more measurements.
Step 4: This step analyzes the metrics to identify both positive areas and improvement areas
in product quality.
Step 5: The next step is to take the necessary action and follow up on it.
Step 6: The final step is to continue with the next iteration of the metrics program, measuring
a different set of measurements, leading to more refined metrics that address different issues.
Knowing only how much testing got completed does not answer the questions of when the
testing will be completed and when the product will be ready for release. To answer these
questions, one needs to estimate the following:
Total days needed for defect fixes = (Outstanding defects yet to be fixed + Defects that can be
found in future test cycles) / Defect-fixing capability
Days needed for release = Max(Days needed for testing, Days needed for defect fixes)
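A minimal sketch of the release-readiness arithmetic above, assuming days needed for testing is estimated as remaining test cases divided by execution productivity (all the numbers are invented):

```python
# Release-readiness estimate; every number below is illustrative.
outstanding_defects = 40          # defects yet to be fixed
expected_future_defects = 20      # defects likely to surface in later test cycles
fix_capability_per_day = 6        # defects the team can fix per day

remaining_test_cases = 150
execution_productivity = 25       # test cases executed per day (assumed measure)

days_for_fixes = (outstanding_defects + expected_future_defects) / fix_capability_per_day
days_for_testing = remaining_test_cases / execution_productivity

# Release readiness is gated by whichever activity takes longer.
days_for_release = max(days_for_testing, days_for_fixes)
print(round(days_for_release, 1))  # fixes dominate here: 60 / 6 = 10 days
```

The max() captures the point made in the text: finishing test execution alone does not make the product releasable if defect fixing lags behind.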
When the baselined effort estimates, revised effort estimates, and actual effort are
plotted together for all the phases of SDLC, it provides many insights about the estimation
process. As different set of people may get involved in different phases, it is a good idea to
plot these effort numbers phase-wise. A sample data for each of the phase is plotted in the
chart.
If there is a substantial difference between the baselined and revised effort, it points to
incorrect initial estimation. Calculating effort variance for each of the phases provides a
quantitative measure of the relative difference between the revised and actual efforts.
All the baseline estimates, revised estimates, and actual effort are plotted together for
each of the phases. The variance can be consolidated into as shown in the above table.
A variance of more than 5% in any of the SDLC phases indicates scope for
improvement in the estimation. The variance is acceptable only for the coding and
testing phases.
The variance can also be negative. A negative variance is an indication of an
overestimate.
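The per-phase variance computation can be sketched as follows, assuming the common definition variance % = (actual effort - revised estimate) / revised estimate * 100 (the phase data below is invented):

```python
# Effort variance per SDLC phase; (revised estimate, actual effort) in
# person-days. All the figures are illustrative.
phases = {
    "requirements": (20, 22),
    "design":       (30, 28),
    "coding":       (50, 54),
    "testing":      (40, 41),
}

variance = {
    phase: round((actual - revised) / revised * 100, 1)
    for phase, (revised, actual) in phases.items()
}
print(variance)  # negative values indicate an over-estimate
```

Plotting these per-phase percentages side by side is what makes the under- and over-estimated phases visible at a glance.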
[Chart: schedule progress in number of days, comparing estimated days against days spent and remaining days in the schedule]
The chart plots the estimated days and the "days in schedule spent". The "remaining days
yet to be spent" can be calculated by adding up the days for all remaining activities. If the
remaining days yet to be spent on the project are not calculated and plotted, the chart gives
no value in the middle of the project, because the deviation cannot be inferred visually from
the chart. The remaining days in the schedule become zero when the release is met.
Effort and schedule variance have to be analyzed in totality, not in isolation, because
while effort is a major driver of the cost, schedule determines how best a product
can exploit market opportunities. Variance can be classified into negative variance, zero
variance, acceptable variance, and unacceptable variance.
Effort distribution :
Requirements > Testing > Design > Bug fixing > Coding > Documentation
Mature organizations spend at least 10-15% of the total effort in requirements and
approximately the same effort in the design phase. The effort percentage for testing depends
on the type of release and the amount of change to the existing code base and functionality.
Typically, organizations spend about 20-50% of their total effort in testing.
II ) PROGRESS METRICS
One of the main objectives of testing is to find as many defects as possible before any
customer finds them. The number of defects that are found in the product is one of the main
indicators of quality.
Defects get detected by the testing team and get fixed by the development team.
Defect metrics are further classified in to
1. test defect metrics
2. development defect metrics
[Chart: stacked test-progress bars by week, showing the percentage of test cases passed, failed, blocked, and not run]
A scenario represented by such a progress chart shows that not only is testing
progressing well, but also that the product quality is improving. Had the chart shown a trend
that, as the weeks progress, the "not run" cases are not reducing in number, or the "blocked"
cases are increasing in number, or the "pass" cases are not increasing, it would clearly point
to quality problems in the product that prevent it from being ready for release.
Testing: Understanding testng.xml, Adding Classes, Packages, Methods to Test, Test Reports.
1. Test Methods
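A testng.xml file declares which classes, packages, and test methods TestNG should run; a minimal sketch is shown below (the class and package names are placeholders, not real code):

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="RegressionSuite">
  <test name="LoginTests">
    <classes>
      <!-- Placeholder class; run only selected methods from it -->
      <class name="com.example.tests.LoginTest">
        <methods>
          <include name="testValidLogin"/>
          <exclude name="testSlowPath"/>
        </methods>
      </class>
    </classes>
  </test>
  <test name="ReportTests">
    <packages>
      <!-- Run every test class in this placeholder package -->
      <package name="com.example.reports"/>
    </packages>
  </test>
</suite>
```

Running TestNG against this file executes the listed classes, packages, and methods and produces its HTML/XML test reports for the suite.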
Some organizations classify defects by assigning a defect priority (for example, P1, P2,
P3, and so on). Some organizations use defect severity levels (for example, S1, S2, S3, and so
on). The priority of a defect can change dynamically once assigned. Severity is absolute and does
not change often, as it reflects the state and quality of the product.
Table -Defect priority and defect severity – sample interpretation.
Defect priority is based on defect fixing and defect severity is based on functionality level.
The purpose of testing is to find defects early in the test cycle. The idea of testing is to find
as many defects as possible early in the cycle. However, this may not be possible for two
reasons. First, not all features of a product may become available early; because of the
scheduling of resources, the features of a product arrive in a particular sequence. Second,
some of the test cases may be blocked because of show-stopper defects.
Once a majority of the modules become available and the defects that are blocking the
tests are fixed, the defect arrival rate increases. After a certain period of defect fixing and
testing, the arrival of defects tends to slow down and a continuation of that enables
product release. This results in a “bell curve” as shown in figure.
The purpose of development is to fix defects as soon as they arrive. If the goal of testing is
to find defects as early as possible, it is natural to expect that the goal of development
should be to fix defects as soon as they arrive. There is a reason why the defect-fixing rate
should be the same as the defect arrival rate: if more defects are fixed later in the cycle, they
may not get tested properly for all possible side effects.
Normally, only high-priority defects are tracked during the period closer to release. Some
high-priority defects may require a change in design or architecture and have to be fixed
immediately.
e) Defect trend
The effectiveness analysis increases when several perspectives of find rate, fix rate,
outstanding, and priority outstanding defects are combined.
g) Weighted defects trend
Weighted defects help in quick analysis of defects, without worrying about the
classification of defects.
Both “large defects” and “large number of small defects” affect product release.
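A sketch of the weighted-defect computation; the severity weights (5 down to 1) and the counts are illustrative assumptions, not values from the notes:

```python
# Weight each defect by severity so one critical defect counts more than
# several cosmetic ones. Weights and counts are invented for illustration.
weights = {"critical": 5, "high": 4, "medium": 3, "low": 2, "cosmetic": 1}
defects_this_week = {"critical": 2, "high": 5, "medium": 12, "low": 9, "cosmetic": 4}

weighted = sum(weights[sev] * count for sev, count in defects_this_week.items())
print(weighted)
```

A single weighted number per week lets both "a few large defects" and "many small defects" show up in the same trend line.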
Defect cause distribution chart
When module-wise defect distribution is done, modules like install, reports, client, and
database have more than 20 defects, indicating that more focus and resources are needed for
these components. Knowing the components producing more defects helps in planning
defect fixes and in deciding what to release.
Defects per KLOC = Total defects found in the product / Total executable lines of code in KLOC
A variant of this metric is to calculate defects per KLOC of added, modified, and deleted (AMD)
code, to find how a release affects product quality.
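The formula and its AMD variant can be sketched as follows (all the counts are invented):

```python
# Defects per KLOC, per the formula above; figures are illustrative.
total_defects = 120
executable_loc = 48_000            # total executable lines of code

defects_per_kloc = total_defects / (executable_loc / 1000)
print(defects_per_kloc)  # 2.5 defects per thousand lines

# AMD variant for a maintenance release: normalize by only the
# added/modified/deleted code, so churned code is what gets measured.
amd_loc = 8_000
defects_per_amd_kloc = total_defects / (amd_loc / 1000)
```

The AMD variant usually yields a much larger number for the same defect count, which is the point: it shows how the changed code, rather than the whole code base, affected release quality.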
c) Age Analysis of outstanding defect
The time needed to fix a defect may be proportional to its age. Age analysis helps in finding
out whether defects are fixed as soon as they arrive and ensures that long-pending defects
are given adequate priority.
Introduced defect (ID): when adding new code or modifying code to provide a defect fix, something
that was working earlier may stop working; this is called an introduced defect.
Reopened defects: a fix provided in the code may not have fixed the problem completely, or some
other modification may have reproduced a defect that was fixed earlier; these are called reopened defects.
Testing is not meant to find the same defects again; release readiness should consider the quality of defect fixes.
Test Reports
Productivity metrics combine several measurements and parameters with the effort spent on
the product. They help in finding out the capability of the team and also serve other purposes,
such as
d) Defects per 100 Failed Test Cases
Defects per 100 failed test cases = (Total defects found for a period/Total test cases
failed due to those defects) * 100
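A quick worked example of the formula above (the counts are invented):

```python
# Defects per 100 failed test cases, per the formula above;
# the counts below are illustrative.
defects_found = 45
failed_test_cases = 60             # test cases that failed due to those defects

defects_per_100_failed = defects_found / failed_test_cases * 100
print(defects_per_100_failed)  # 75.0
```

A value well below 100 suggests several test cases fail because of the same underlying defect.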
1. A good proportion of defects were found in the early phases of testing (UT and CT).
2. Product quality improved from phase to phase (shown by the smaller percentage of defects
found in the later test phases, IT and ST).
The closed defect distribution helps in this analysis as shown in the figure below. From the
chart, the following observations can be made.
1. Only 28% of the defects found by test team were fixed in the product. This suggests
that product quality needs improvement before release.
2. Of the defects filed, 19% were duplicates. This suggests that the test team needs to update
itself on existing defects before new defects are filed.
3. Non-reproducible defects amounted to 11%. This means that the product has some
random defects or the defects are not provided with reproducible test cases. This area
needs further analysis.
4. Close to 40% of defects were not fixed for reasons “as per design,” “will not fix,” and
“next release.” These defects may impact the customers. They need further discussion
and defect fixes need to be provided to improve release quality.