Software Testing Updated e Content
E-CONTENT
OF
SOFTWARE TESTING
SUBJECT CODE
22518
OBJECTIVES OF TESTING:
Software testing has different goals and objectives. The major objectives of software testing
are as follows:
To find defects that may have been introduced by the programmer while developing the software.
To gain confidence in, and provide information about, the level of quality.
To prevent defects.
To make sure that the end result meets the business and user requirements.
To ensure that the product satisfies the BRS (Business Requirement Specification) and SRS
(System Requirement Specification).
To gain the confidence of customers by providing them a quality product.
Defect
The variation between the actual result and the expected result is known as a defect. If a
developer finds an issue and corrects it himself during the development phase, it is called a defect.
Bug
If testers find a mismatch in the application/system during the testing phase, they call it a bug.
As mentioned earlier, there is some contradiction in the usage of "bug" and "defect"; it is
widely said that a bug is an informal name for a defect.
Failure
Once the product is deployed, if customers find issues, they call the product a failure. In other
words, if an end user finds an issue after release, that particular issue is called a failure.
A TEST CASE is a set of conditions or variables under which a tester will determine whether a
system under test satisfies requirements or works correctly. The process of developing test cases
can also help find problems in the requirements or design of an application.
A test case can have the following elements. Note, however, that a test management tool is
normally used by companies and the format is determined by the tool used.
Test Case ID: A unique ID is required for each test case. Follow some convention to indicate the
type of test. For example, 'TC_UI_1' indicates user interface test case #1.
Test Priority (Low/Medium/High): This is very useful during test execution. Test priority for
business rules and functional test cases can be medium or high, whereas minor user interface
cases can be of low priority. Test priority should always be set by the reviewer.
Module Name: Mention the name of the main module or the sub-module.
Test Executed By: Name of the tester who executed this test. To be filled only after test
execution.
Test Title/Name: Test case title. For Example, verify the login page with a valid username and
password.
Pre-conditions: Any prerequisite that must be fulfilled before the execution of this test case. List
all the pre-conditions in order to execute this test case successfully.
Dependencies: Mention any dependencies on the other test cases or test requirements.
Test Steps: List all the test execution steps in detail. Write test steps in the order in which they
should be executed. Make sure to provide as many details as you can.
Test Data: Use of test data as an input for this test case. You can provide different data sets with
exact values to be used as an input.
Expected Result: What should be the system output after test execution? Describe the expected
result in detail including message/error that should be displayed on the screen.
Post-condition: What should be the state of the system after executing this test case?
Status (Pass/Fail): If an actual result is not as per the expected result, then mark this test
as failed. Otherwise, update it as passed.
Notes/Comments/Questions: If there are special conditions to support the above fields
which can't be described above, or if there are any questions related to expected or actual
results, mention them here.
o As far as possible, write test cases in such a way that you test only one thing at a time. Do
not overlap or complicate test cases. Attempt to make your test cases 'atomic'.
o Ensure that both positive AND negative scenarios are covered.
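The elements listed above can be captured as a simple structured record; a minimal sketch in Python (field names and example values are illustrative, not taken from any specific test management tool):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Fields mirror the test case elements described above.
    test_case_id: str
    title: str
    module: str
    priority: str                       # Low / Medium / High
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    test_data: dict = field(default_factory=dict)
    expected_result: str = ""
    status: str = "Not Run"             # Pass / Fail / Not Run

tc = TestCase(
    test_case_id="TC_UI_1",
    title="Verify the login page with a valid username and password",
    module="Login",
    priority="High",
    preconditions=["Application is deployed", "A valid user account exists"],
    steps=["Open the login page", "Enter username and password", "Click Login"],
    test_data={"username": "testuser", "password": "secret123"},
    expected_result="User is redirected to the home page",
)
print(tc.test_case_id, tc.priority)
```

In practice a test management tool stores these fields, but the structure is the same.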
ENTRY CRITERIA
Entry criteria for STLC phases can be defined as the specific conditions, or all those documents,
which must be present before a particular phase of the STLC can be entered.
Entry criteria are a set of conditions that permit a task to be performed; in the absence of any
of these conditions, the task cannot be performed.
While setting the entry criteria, it is also important to define the time frame by which each
entry criterion will be available, so the phase can start on schedule.
For instance, to start the test case development phase, conditions such as the following should
be met: the requirements documents are available and baselined, and the test plan is approved.
EXIT CRITERIA
Exit Criteria for STLC phases can be defined as items/documents/actions/tasks that must be
completed before concluding the current phase and moving on to the next phase.
Exit criteria are a set of expectations that should be met before concluding the STLC phase.
For instance, to conclude the test case development phase, expectations such as the following
should be met: all test cases are written and reviewed, and the test data is prepared.
The V-model is a type of SDLC model where the process executes in a sequential manner in a
V-shape. It is also known as the Verification and Validation model. It is based on the
association of a testing phase with each corresponding development stage: each development step
is directly associated with a testing phase, and the next phase starts only after completion of
the previous phase, i.e. for each development activity there is a corresponding testing activity.
So the V-model contains Verification phases on one side and Validation phases on the other side.
The Verification and Validation phases are joined by the coding phase at the bottom of the V;
hence it is called the V-model.
Design Phase:
Requirement Analysis: This phase contains detailed communication with the customer
to understand their requirements and expectations. This stage is known as
Requirement Gathering.
Testing Phases:
Unit Testing: Unit Test Plans are developed during module design phase. These Unit
Test Plans are executed to eliminate bugs at code or unit level.
Integration testing: After completion of unit testing Integration testing is performed. In
integration testing, the modules are integrated and the system is tested. Integration
testing is performed on the Architecture design phase. This test verifies the
communication of modules among themselves.
System Testing: System testing tests the complete application with its functionality,
interdependency, and communication. It tests the functional and non-functional
requirements of the developed application.
User Acceptance Testing (UAT): UAT is performed in a user environment that
resembles the production environment. UAT verifies that the delivered system meets the
user's requirements and is ready for use in the real world.
Quality assurance (QA) and quality control (QC) are two terms that are often used
interchangeably. Although similar, there are distinct differences between the two concepts.
Quality assurance and quality control are two aspects of quality management. While some
quality assurance and quality control activities are interrelated, the two are defined differently.
Typically, QA activities and responsibilities cover virtually all of the quality system in one
fashion or another, while QC is a subset of the QA activities.
QUALITY ASSURANCE
Quality assurance can be defined as "part of quality management focused on providing confidence
that quality requirements will be fulfilled." It is process-oriented and aims to prevent defects
before they occur.
QUALITY CONTROL
Quality control can be defined as "part of quality management focused on fulfilling quality
requirements." While quality assurance relates to how a process is performed or how a product
is made, quality control is more the inspection aspect of quality management. An alternate
definition is "the operational techniques and activities used to fulfill requirements for quality."
METHODS OF TESTING
STATIC TESTING
Under static testing, code is not executed. Rather, the code, requirement documents, and design
documents are checked manually to find errors. Hence the name "static".
The main objective of this testing is to improve the quality of software products by finding errors
in the early stages of the development cycle. This testing is also called a Non-execution
technique or verification testing.
DYNAMIC TESTING
Under dynamic testing, the code is executed. It checks the functional behavior of the software
system, memory/CPU usage, and the overall performance of the system. Hence the name "dynamic".
The main objective of this testing is to confirm that the software product works in conformance
with the business requirements. This testing is also called an Execution technique or validation
testing.
Dynamic testing executes the software and validates the output with the expected outcome.
Static testing is performed in the early stage of software development; dynamic testing is
performed at a later stage of software development.
Static testing prevents defects; dynamic testing finds and fixes defects.
Static testing is performed before code deployment; dynamic testing is performed after code
deployment.
Static testing involves a checklist for the testing process; dynamic testing involves test cases
for the testing process.
Example of static testing: verification. Example of dynamic testing: validation.
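As a minimal illustration of the dynamic side, the code below executes a function and validates its actual output against the expected outcome (the business rule here is hypothetical, not from the text):

```python
def discount(amount):
    """Hypothetical business rule: 10% off on orders of 1000 or more."""
    return amount * 9 // 10 if amount >= 1000 else amount

# Dynamic testing: run the code and compare the actual output with the
# expected outcome.
actual = discount(1000)
expected = 900
assert actual == expected, f"expected {expected}, got {actual}"
print("dynamic check passed:", actual)
```

A static check of the same function would instead review the source text (e.g. against a coding checklist) without ever calling it.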
BLACK BOX TESTING
BLACK BOX TESTING also known as Behavioral Testing is a software testing method in
which the internal structure/design/implementation of the item being tested is not known to the
tester.
Advantages
o Tests are done from a user's point of view and will help in exposing discrepancies in the
specifications.
o Testers need not know programming languages or how the software has been implemented.
o Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias.
Disadvantages
o Only a small number of possible inputs can be tested and many program paths will be left
untested.
o Without clear specifications, which is the situation in many projects, test cases will be
difficult to design.
o Tests can be redundant if the software designer/developer has already run a test case.
Testing the specification is static black-box testing. The specification is a document, not an
executing program, so it's considered static. It's also something that was created using data
from many sources: usability studies, focus groups, marketing input, and so on.
The first step is to stand back and view it from a high level. Examine the spec for large
fundamental problems, oversights, and omissions. You might consider this more research than
testing, but ultimately the research is a means to better understand what the software should do.
After you complete the high-level review of the product specification, you'll have a better
understanding of what your product is and what external influences affect its design. Armed with
this information, you can move on to testing the specification at a lower level.
Testing software without having an insight into the details of the underlying code is dynamic
black-box testing. It's dynamic because the program is running and you're using it as a customer
would. And it's black-box because you're testing it without knowing exactly how it works, i.e.
with blinders on.
1 Equivalence partitioning
Equivalence partitioning or equivalence class partitioning (ECP) is a software
testing technique that divides the input data of a software unit into partitions of equivalent data
from which test cases can be derived. In principle, test cases are designed to cover each partition
at least once. This technique tries to define test cases that uncover classes of errors, thereby
reducing the total number of test cases that must be developed. An advantage of this approach is
a reduction in the time required for testing software due to the smaller number of test cases.
For example, if you are testing an input box accepting numbers from 1 to 1000, there is no use
in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for
invalid data. Using the equivalence partitioning method, the above test cases can be divided into
three sets of input data, called classes. Each test case is representative of its respective class.
So in the above example, we can divide our test cases into three equivalence classes of some
valid and invalid inputs.
Test cases for input box accepting numbers between 1 and 1000 using Equivalence
Partitioning:
#1) One input data class with all valid inputs. Pick a single value from range 1 to 1000 as a valid
test case. If you select other values between 1 and 1000 the result is going to be the same.
So one test case for valid input data should be sufficient.
#2) Input data class with all values below the lower limit. I.e. any value below 1, as an invalid
input data test case.
#3) Input data with any value greater than 1000 to represent the third invalid input class.
So using Equivalence Partitioning you have categorized all possible test cases into three classes.
Test cases with other values from any class should give you the same result.
We have selected one representative from every input class to design our test cases. Test case
values are selected in such a way that largest number of attributes of equivalence class can be
exercised.
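The 1-to-1000 example can be sketched in code: a hypothetical input check standing in for the system under test, and one representative test value per equivalence class:

```python
def accepts(n):
    """System under test (hypothetical): an input box accepting 1 to 1000."""
    return 1 <= n <= 1000

# Three equivalence classes; one representative value per class is enough,
# because every value in a class is expected to behave the same way.
partitions = {
    "below lower limit (invalid)": (0, False),
    "within 1..1000 (valid)":      (500, True),
    "above upper limit (invalid)": (1001, False),
}

for name, (value, expected) in partitions.items():
    assert accepts(value) == expected, name
    print(f"{name}: input {value} -> {accepts(value)}")
```

Three test cases cover the whole input domain at the class level, instead of one test per value.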
It's widely recognized that input values at the extreme ends of the input domain cause more
errors in the system; more application errors occur at the boundaries of the input domain.
The 'Boundary Value Analysis' testing technique is used to identify errors at these boundaries
rather than in the center of the input domain. Boundary value analysis is the next part of
equivalence partitioning for designing test cases, where test cases are selected at the edges
of the equivalence classes.
Test cases for input box accepting numbers between 1 and 1000 using Boundary value
analysis:
#1) Test cases with test data exactly as the input boundaries of input domain i.e. values 1 and
1000 in our case.
#2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.
#3) Test data with values just above the extreme edges of the input domain i.e. values 2 and
1001.
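The same hypothetical 1-to-1000 input check, exercised with the boundary values listed above:

```python
def accepts(n):
    """System under test (hypothetical): an input box accepting 1 to 1000."""
    return 1 <= n <= 1000

# Boundary Value Analysis for the range 1..1000: the exact boundaries,
# values just below the edges, and values just above the edges.
boundary_cases = [
    (1, True), (1000, True),     # exactly on the input boundaries
    (0, False), (999, True),     # just below the extreme edges
    (2, True), (1001, False),    # just above the extreme edges
]

for value, expected in boundary_cases:
    assert accepts(value) == expected, f"failed at boundary value {value}"
print("all boundary checks passed")
```

Note how an off-by-one error in `accepts` (e.g. writing `1 < n` instead of `1 <= n`) would be caught by the value 1, which a mid-range value like 500 would miss.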
Boundary value analysis is often considered a part of stress and negative testing.
WHITE BOX TESTING
White box testing techniques analyze the internal structures: the data structures used, internal
design, code structure, and the working of the software, rather than just the functionality as in
black box testing. It is also called glass box testing, clear box testing, or structural testing.
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code removing error and helps in removing extra lines of
code.
3. It can start at an earlier stage as it doesn't require any interface, as in the case of black
box testing.
4. Easy to automate.
Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language
as opposed to black box testing.
4. Missing functionalities cannot be detected as the code that exists is tested.
5. Very complex and at times not realistic.
Static white-box testing is the process of carefully and methodically reviewing the software
design, architecture, or code for bugs without executing it. It's sometimes referred to as structural
analysis.
The obvious reason to perform static white-box testing is to find bugs early and to find bugs that
would be difficult to uncover or isolate with dynamic black-box testing. Having a team of testers
concentrate their efforts on the design of the software at this early stage of development is highly
cost effective.
Static white-box testing is performed using formal technical reviews.
Formal Technical Review
A formal technical review (FTR) is a form of review in which "a team of qualified personnel
examines the suitability of the software product for its intended use and identifies discrepancies
from specifications and standards. Technical reviews may also provide recommendations of
alternatives and examination of various alternatives"
1 Peer Review
This is an informal review technique where two team members work on the same build (software
application) on the same machine. One of the team members works with the system (with
keyboard and mouse) and the other makes notes and scenarios.
When a tester and a developer work together to ensure the quality of a product, the
efficiency rises, even if the available time is less. No documentation such as test
cases, a test plan, or test scenarios is needed here.
2 Walkthrough
A walkthrough is a review in which the author of the 'document under review' takes the
participants through the document and his or her thought processes, to achieve a common
understanding and to gather feedback. This is especially useful if people from outside the
software discipline are present who are not used to, or cannot easily understand, software
development documents. The content of the document is explained step by step by the author,
to reach consensus on changes or to gather information. The participants are selected from
different departments and backgrounds. If the audience represents a broad section of skills and
disciplines, it gives assurance that no major defects are 'missed' in the walkthrough. A
walkthrough is especially useful for higher-level documents, such as requirement specifications
and architectural documents.
3 Inspection
Inspection is the most formal review type. It is usually led by a trained moderator (certainly
not by the author). The document under inspection is prepared and checked thoroughly by the
reviewers before the meeting, comparing the work product with its sources and other referenced
documents, and using rules and checklists. In the inspection meeting the defects found are
logged. Depending on the organization and the objectives of a project, inspections can be
balanced to serve a number of goals.
1. The moderator: - The moderator (or review leader) leads the review process. His role is to
determine the type of review, approach and the composition of the review team. The moderator
also schedules the meeting, disseminates documents before the meeting, coaches other team
members, paces the meeting, leads possible discussions and stores the data that is collected.
2. The author: - As the writer of the 'document under review', the author's basic goal should be
to learn as much as possible with regard to improving the quality of the document. The author's
task is to illuminate unclear areas and to understand the defects found.
3. The scribe/ recorder: – The scribe (or recorder) has to record each defect found and any
suggestions or feedback given in the meeting for process improvement.
4. The reviewer: - The role of the reviewers is to check defects and further improvements in
accordance to the business specifications, standards and domain knowledge.
5. The manager:- Manager is involved in the reviews as he or she decides on the execution of
reviews, allocates time in project schedules and determines whether review process objectives
have been met or not.
Dynamic white-box testing, in a nutshell, is using information you gain from seeing what the
code does and how it works to determine what to test, what not to test, and how to approach the
testing. Another name commonly used for dynamic white-box testing is structural testing
because you can see and use the underlying structure of the code to design and run your tests.
"Statement coverage", as the name itself suggests, is the method of validating whether
each and every line of the code is executed at least once.
"Branch coverage" is a testing method which, when executed, ensures that each
and every branch from each decision point is executed at least once.
Example (paths through a flow graph with nodes numbered 1 to 7):
Path 1: 1, 2, 3, 5, 6, 7
Path 2: 1, 2, 4, 5, 6, 7
Path 3: 1, 6, 7
3) Condition Coverage
Conditional coverage, or expression coverage, reveals how the variables or sub-expressions in a
conditional statement are evaluated. In this coverage, only expressions with logical operands
are considered, for example expressions with Boolean operations like AND, OR, and XOR, which
determine the total number of possibilities. Conditional coverage offers better sensitivity to
the control flow than decision coverage, but condition coverage does not guarantee full decision
coverage.
Example: for a decision with two conditions, such as IF (X == 4 AND Y == 4), condition coverage
requires each individual condition to evaluate to both True and False; the possible outcome
combinations of the two conditions are TT, FF, TF, and FT.
Example :
IF A = 10 THEN
IF B > C THEN
A=B
ELSE
A=C
ENDIF
ENDIF
Print A
Print B
Print C
Flow graph: (figure not reproduced)
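Translating the pseudocode above into Python shows what branch coverage requires: one test where the outer and inner decisions are both true, one where the inner decision is false, and one where the outer decision is false:

```python
def assign(a, b, c):
    # Direct translation of the pseudocode above:
    # IF A = 10 THEN IF B > C THEN A = B ELSE A = C ENDIF ENDIF
    if a == 10:
        if b > c:
            a = b
        else:
            a = c
    return a

# Branch coverage: every outcome of every decision point is exercised.
assert assign(10, 7, 3) == 7   # outer True,  inner True  -> A = B
assert assign(10, 3, 7) == 7   # outer True,  inner False -> A = C
assert assign(5, 7, 3) == 5    # outer False             -> A unchanged
print("all branches covered")
```

A single test such as `assign(10, 7, 3)` would achieve statement coverage of the `A = B` line but leave the `else` branch and the outer-false branch unexecuted, which is exactly the gap branch coverage closes.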
Driver
A driver is basically a piece of code through which other programs or pieces of code or modules
can be called. Drivers are the main programs through which other modules are called. If we want
to test a module, we need a main program that will call the module under test. Without this
dummy program, or driver, complete testing of the module is not possible.
Drivers are basically used in the bottom-up testing approach. In the bottom-up approach, the
bottom-level modules are prepared, but the top-level modules are not. Testing of the
bottom-level modules is therefore not possible with the help of the real main program, so we
prepare a dummy program, or driver, to call the bottom-level modules and perform their testing.
The main purpose of drivers is to allow testing of the lower levels of the code when the upper
levels of the code are not yet developed.
The main purpose of a stub is to allow testing of the upper levels of the code when the
lower levels of the code are not yet developed.
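A minimal sketch of both ideas in Python (module and function names are illustrative): a driver acting as a dummy main program for a finished lower-level module, and a stub standing in for an unfinished lower-level module:

```python
# Lower-level module that already exists and needs testing.
def calculate_tax(amount):
    return round(amount * 0.18, 2)

# DRIVER: a dummy "main program" that calls the lower-level module,
# used in bottom-up integration when the real caller is not yet written.
def tax_driver():
    result = calculate_tax(100)
    assert result == 18.0
    return result

# STUB: a dummy stand-in for a lower-level module that is not yet written,
# used in top-down integration so the upper-level module can be tested.
def fetch_price_stub(item_id):
    return 100  # hard-coded canned response instead of a real lookup

# Upper-level module under test; it calls the stub in place of the
# missing real module.
def total_cost(item_id):
    price = fetch_price_stub(item_id)
    return price + calculate_tax(price)

print(tax_driver(), total_cost("A1"))
```

Once the real main program and the real price-lookup module exist, the driver and the stub are discarded; they are scaffolding, not product code.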
Integration Testing is defined as a type of testing where software modules are integrated
logically and tested as a group. A typical software project consists of multiple software modules,
coded by different programmers. The purpose of this level of testing is to expose defects in the
interaction between these software modules when they are integrated. Integration Testing focuses
on checking data communication amongst these modules.
1 Bottom-up Integration
In the bottom-up strategy, each module at the lower levels is tested with higher modules until
all modules are tested. This approach takes the help of drivers for testing.
Diagrammatic representation: (figure not reproduced)
Disadvantages:
Critical modules (at the top level of software architecture) which control the flow of
application are tested last and may be prone to defects.
An early prototype is not possible
2 Top-down Integration:
In Top to down approach, testing takes place from top to down following the control flow of the
software system.
Diagrammatic representation: (figure not reproduced)
Advantages:
Critical modules at the top level of the software architecture are tested first.
An early prototype of the application is possible.
3 Sandwich/Hybrid Integration:
The sandwich/hybrid strategy is a combination of the top-down and bottom-up approaches. Here,
top modules are tested with lower modules while, at the same time, lower modules are integrated
with top modules and tested. This strategy makes use of stubs as well as drivers.
1. Load testing:
It checks the product's ability to perform under anticipated user loads. The objective is to
identify performance congestion before the software product is launched in the market.
2. Stress testing:
It involves testing a product under extreme workloads to see whether it handles high traffic
or not. The objective is to identify the breaking point of a software product.
3. Spike testing:
It tests the product's reaction to sudden large spikes in the load generated by users.
4. Volume testing:
In volume testing, a large amount of data is saved in a database and the overall software
system's behavior is observed. The objective is to check the product's performance under
varying database volumes.
5. Scalability testing:
In scalability testing, a software application's effectiveness in scaling up to support an
increase in user load is determined. It helps in planning capacity addition to your software
system.
6. Soak testing:
Soak testing is a type of performance evaluation that gauges how an application handles a
growing number of users or increasingly taxing tasks over an extended period of time.
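As a rough, tool-free illustration of the load-testing idea, the sketch below fires a batch of requests at a hypothetical workload function from a pool of simulated users and records response times (real load testing would use a dedicated tool driving the actual system):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Hypothetical unit of work standing in for one user request."""
    time.sleep(0.01)  # simulate 10 ms of server-side processing
    return "ok"

def timed_request():
    # Measure the response time of a single request.
    start = time.perf_counter()
    result = handle_request()
    return result, time.perf_counter() - start

# Simulate 20 concurrent users issuing 100 requests in total and
# collect the response times.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(lambda _: timed_request(), range(100)))

latencies = [elapsed for _, elapsed in results]
print(f"requests: {len(latencies)}, max latency: {max(latencies):.3f}s")
```

Raising `max_workers` or the request count while watching the latency distribution is the essence of load, stress, and spike testing; running the same loop for hours is the essence of soak testing.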
Web application testing is a software testing technique exclusively adopted to test
applications that are hosted on the web, in which the application's interfaces and other
functionalities are tested.
1. Functionality Testing - Some of the checks that are performed include, but are not limited
to, the following:
Verify there is no dead page or invalid redirects.
First check all the validations on each field.
Wrong inputs to perform negative testing.
Verify the workflow of the system.
Verify the data integrity.
In Client-server testing there are several clients communicating with the server.
Multiple users can access the system at a time and they can communicate with the server.
Configuration of client is known to the server with certainty.
The client and server are connected by a real connection.
Testing approaches of client server system:
1. Component Testing: One needs to define the approach and test plan for testing the client and
server individually. When the server is tested, a client simulator is needed; when testing the
client, a server simulator is needed; and to test the network, both simulators are used at a time.
2. Integration Testing: After successful testing of the server, client, and network, they are
brought together and tested as an integrated system.
3. Performance Testing: System performance is tested when a number of clients are
communicating with the server at a time. Volume testing and stress testing may be used
to test under maximum load as well as the expected normal load. Various interactions
may be used for stress testing.
4. Concurrency Testing: It is very important testing for client-server architecture. It may
be possible that multiple users may be accessing same record at a time, and concurrency
testing is required to understand the behavior of a system in this situation.
5. Compatibility Testing: The client and server may be put in different environments when the
users are using them in production. Servers may be in a different hardware, software, or
operating system environment than recommended. Other testing, such as security testing, may
also be needed in these environments.
2 Beta Testing
Beta testing takes place at customers' sites, and involves testing by a group of customers who use
the system at their own locations and provide feedback, before the system is released to other
customers. The latter is often called "field testing".
1 Regression Testing
Regression Testing is defined as a type of software testing to confirm that a recent program or
code change has not adversely affected existing features. Regression Testing is nothing but a full
or partial selection of already executed test cases which are re-executed to ensure existing
functionalities work fine.
Retest All
This is one of the methods for Regression Testing in which all the tests in the existing test
bucket or suite should be re-executed. This is very expensive as it requires huge time and
resources.
Instead of re-executing the entire test suite, it is better to select part of the test suite to be
run
Test cases selected can be categorized as 1) Reusable Test Cases 2) Obsolete Test Cases.
Re-usable Test cases can be used in succeeding regression cycles.
Obsolete Test Cases can't be used in succeeding cycles.
Prioritize the test cases depending on business impact, critical & frequently used
functionalities. Selection of test cases based on priority will greatly reduce the regression
test suite.
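The prioritized-selection idea above can be sketched as a simple filter over a test suite (the fields and the selection rule are illustrative):

```python
# Each entry: (test case id, business-impact priority, covers changed area?)
suite = [
    ("TC_LOGIN_1",  "High",   True),
    ("TC_REPORT_4", "Low",    False),
    ("TC_PAY_2",    "High",   True),
    ("TC_UI_9",     "Medium", False),
]

def select_regression(suite):
    """Keep high-priority tests plus any test touching the changed area,
    instead of re-executing the entire suite (Retest All)."""
    return [tc for tc, priority, touches_change in suite
            if priority == "High" or touches_change]

selected = select_regression(suite)
print(selected)  # the reduced regression suite
```

Here two of four tests are selected, illustrating how prioritization shrinks the regression suite relative to Retest All while keeping the business-critical checks.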
GUI Testing is a software testing type that checks the Graphical User Interface of the
Application under Test. GUI testing involves checking the screens with the controls like menus,
buttons, icons, and all types of bars - toolbar, menu bar, dialog boxes, and windows, etc. The
purpose of Graphical User Interface (GUI) Testing is to ensure UI functionality works as per the
specification.
The GUI is what the user sees. If you visit any site, what you see, say the homepage, is the GUI
(graphical user interface) of the site. A user does not see the source code; only the interface
is visible to the user. The focus is especially on the design structure and on whether the
images are working properly or not.
The following checklist will ensure detailed GUI Testing in Software Testing.
Check all the GUI elements for size, position, width, length, and acceptance of characters
or numbers. For instance, you must be able to provide inputs to the input fields.
Check you can execute the intended functionality of the application using the GUI
Check Error Messages are displayed correctly
Check Font used in an application is readable
Check the alignment of the text is proper
Check the Color of the font and warning messages is aesthetically pleasing
Check that the images have good clarity
Check that the images are properly aligned
Check the positioning of GUI elements for different screen resolution.
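A few of the size and position checks above can be expressed as automated assertions over element properties; a toy sketch with plain dictionaries (a real GUI test would drive an actual UI through a tool such as a browser automation framework):

```python
# Hypothetical GUI elements with their rendered properties, as a test
# tool might report them for one screen resolution.
screen = {"width": 1280, "height": 720}
elements = [
    {"name": "login_button", "x": 600, "y": 400, "width": 120, "height": 40},
    {"name": "username_box", "x": 500, "y": 300, "width": 200, "height": 30},
]

for el in elements:
    # Size check: every element must have a visible size.
    assert el["width"] > 0 and el["height"] > 0, f"{el['name']} has no size"
    # Position check: the element must fit within the screen resolution.
    assert el["x"] + el["width"] <= screen["width"], f"{el['name']} off-screen"
    assert el["y"] + el["height"] <= screen["height"], f"{el['name']} off-screen"
print("all GUI layout checks passed")
```

Re-running the same assertions with a different `screen` dictionary corresponds to the last checklist item: verifying GUI element positioning at different screen resolutions.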
The test plan is an important document for execution, tracking, and reporting of the entire
testing effort.
Following things are required to prepare test plan:
1. Scope: - Write the scope of testing, including a clear indication of what will be
tested and what will not be tested.
2. Break it down into small and manageable tasks: - Testing is performed by breaking
it down into small and manageable tasks and identifying the strategies to be used for
carrying out each task.
3. Resources: - Find out the resources needed for testing.
4. Timeline: - Design timeline by which testing activities can be performed.
It is a part of the test plan; it identifies the right types of testing required for the system.
The test approach is the approach in which we decide what needs to be tested in
order to estimate size, effort, and schedule.
Setting up criteria for testing:-
Set entry and exit criteria for testing. Entry criteria specify the criteria for each phase;
there must be entry criteria for the entire testing activity to start.
3 Exit Criteria: - It specifies when a test cycle can be completed.
The testing process requires different people to play different roles; the most common roles
are test engineer, test lead, and test manager.
How to identify responsibilities:-
Each person should know clearly what he or she has to do.
List out the responsibilities of each function in testing process.
Everyone should know the importance of work, while performing testing activity.
Complement and cooperate with each other.
No task should be left unassigned.
Staffing:-
It is done based on the estimation of effort and the time available for completion of the
project.
The tasks in testing are prioritized on the basis of effort, time, and importance.
People are assigned to tasks based on the requirements of the jobs and the skills and
experience of the people.
Need of training:-
If people do not have the skills for the specific requirements of the job, then an appropriate
training program is needed.
Test deliverables are artefacts, meaning things that are produced by the people involved in
the process and given to the stakeholders.
A milestone is an indication of the completion of a key project task.
E.g.: Some common test deliverables are:-
1. Test plan
2. Test case design specification
3. Test cases
4. Test logs
5. Test summary report
7 Testing tasks:-
There are various tasks of a tester:-
TEST INFRASTRUCTURE
The prepared test cases have to be executed at the appropriate time during the project.
For example, system test cases must be executed during the system test phase.
During test case execution, the defect repository must be updated with the following points:-
1. Defects that are fixed currently.
2. New defects that are found.
The test team communicates with the development team with the help of the defect repository.
A test may have to be suspended during its run, and it should be resumed only after the
resumption criteria are satisfied.
Test Reporting:-
During testing, constant communication between the test team and the development team takes
place with the help of documents called test reports.
Various types of test reports are:-
1. Test incident report
2. Test cycle report
3. Test summary report
Defect:
A defect is something by which the customer's requirements are not satisfied. Basically, the
difference between the expected result and the actual result is called a defect. A defect is a
specific concern about the quality of an application under test. Defects are expensive because
finding and correcting a defect is a complex activity in software development.
Causes of defects:
The requirements defined by the customer are not clear, and because of that the development
team makes some assumptions. The software design is incomplete and doesn't accommodate the
needs of the customer. The people working on product design, development, and testing are not
skilled. The processes intended for product design, development, and testing are not capable
of producing the desired result.
Effect of defects:
Due to the defects present in the system, the customer is totally dissatisfied with the system.
Some serious effects of defects are given below:-
The performance of the system will not be at an acceptable level. The security of the system
can be problematic & there are chances of external attacks on the system. Required functionality
might be absent from the system, which may result in rejection of the system by the
customer.
Requirement defect: when a developer can't understand the customer requirement properly, a
requirement defect occurs. Requirement defects can be further classified as:-
Functional defect: if the functionality expected by the customer is not present in the system,
it is called a functional defect.
Interface defect: if a defect appears when one module is interfaced with another
module, it is called an interface defect.
Design defect:
If the software design is not correct, or is created without understanding the requirements, a
design defect occurs. Design defects can be further classified as:-
Algorithm defect: if the design of the algorithm is unable to translate the requirements
correctly, an algorithm defect occurs.
Interface defect: an interface defect occurs due to lack of communication between modules. When
a parameter from one module doesn't get passed to another module correctly, an interface defect occurs.
Coding defect:
If the coding standards & design standards are not followed properly according to the
organisation's guidelines, a coding defect occurs.
Test design defect: if the test plan, test cases, test scenarios & test data are not properly
defined, a test design defect occurs.
Test tool defect: if there is a defect in the test tool itself, then it is difficult to
identify & resolve defects.
The defect management process is carried out in various phases: defect prevention, baseline
delivery, defect discovery, defect resolution & process improvement.
Defect prevention:
Defect prevention is the highest-priority activity in the defect management process. The
following steps are taken for defect prevention:-
Identify the cause of a defect & try to reduce the occurrence of defects. Focus on common
causes of defects which occur in coding or interface generation. Identify critical risks,
assess them & try to minimize the expected impact of the risks.
Baseline Delivery:
A baseline is a work product which is in a deliverable stage. A deliverable is baselined
when it reaches a predefined milestone in its development.
Defect Resolution:
Once the developers have acknowledged a valid defect, the resolution process begins. The
resolution process is done in the following steps:-
Determine the importance of the defect. Schedule & fix the defect as per the order of its
importance. Notify all concerned parties.
Process Improvement:
This step suggests that participants should go back to the process that originated the defect to
understand what caused it. Then carry out the validation process & check the improvement of the
software.
Management Reporting:
It is important that defect information is analysed & communicated to both the project
management & senior management. The purpose of collecting such information is:-
To know the status of each defect. To provide insight into processes that need improvement.
To provide strategic information for making important decisions.
The defect life cycle is a cycle which a defect goes through during its lifetime. It starts when
a defect is found & ends when the defect is closed, after ensuring it is not reproduced. The
defect life cycle is related to a bug found during testing. A bug passes through different
stages in its life cycle, which include the following:-
New: - when a defect is logged & posted for the first time, its state is given as New.
Assigned: - after the tester has posted the bug, the test lead approves that the bug is genuine
and assigns the bug to the corresponding developer & the developer team. Its state is given as
Assigned.
Open: - at this state the developer starts analysing & working on the defect fix.
Fixed: - when the developer has made the necessary code changes & verified the changes, he/she
marks the bug state as Fixed.
Verified: - the tester tests the bug again after it has been fixed by the developer. If the bug
is no longer present in the software, he/she approves that the bug is fixed & changes the status
to Verified. If the bug is not fixed, retesting of the bug is necessary.
Duplicate: - if the bug is repeated twice, or a previously identified bug covers the same
concept as the current bug, then the bug status is changed to Duplicate.
Rejected: - if the developer feels that the bug is not genuine, then he/she rejects the bug &
changes the state to Rejected.
Deferred: - when a bug comes into the Deferred state, the bug fix is expected to be released in
the next release.
Not a bug: - the state is given as Not a bug if there is no change in the functionality of the
application.
Closed: - once the bug is fixed, it is tested by the tester. If the tester feels that the bug no
longer exists in the software, he/she changes the status of the bug to Closed.
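The stages above form a small state machine, which can be sketched as a transition map. The exact set of allowed transitions varies between defect-tracking tools; the map below is a simplified assumption for illustration only.

```python
# Simplified sketch of the defect life cycle described above.
# The allowed transitions differ between real tools; this map is an
# illustrative assumption, not a standard.

TRANSITIONS = {
    "New":       ["Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"],
    "Assigned":  ["Open"],
    "Open":      ["Fixed"],
    "Fixed":     ["Verified", "Open"],   # reopened if retesting fails
    "Verified":  ["Closed"],
    "Deferred":  ["Assigned"],           # picked up again in a later release
    "Rejected":  [],
    "Duplicate": [],
    "Not a bug": [],
    "Closed":    [],
}

def move(current, target):
    """Return the new state, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move defect from {current} to {target}")
    return target

# Walk one genuine bug through the happy path of its life cycle.
state = "New"
for nxt in ["Assigned", "Open", "Fixed", "Verified", "Closed"]:
    state = move(state, nxt)
```

Encoding the transitions this way makes it impossible to, for example, close a bug that was never verified.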
The actual impact of a defect can be learned only when the risk becomes a reality, but it is
possible to estimate its probable impact. Some organisations classify risks as high, medium &
low based on some model.
Contingency planning:-
Contingency planning means the actions initiated by the organisation when risks become reality.
These are already-planned actions, made by keeping in mind that the preventive & corrective
actions might fail.
Reporting a defect:-
Finding & reporting defects is an important step in the software development life cycle. It is
necessary to find the root cause of a defect & prepare a document about it.
The following points are to be noted when reporting defects:-
Give a complete record of the inconsistency:-
A complete description of the defect helps the tester & user to take preventive & corrective
actions about the defect.
A complete recorded description also helps in process improvement.
The defect report forms a base for quality measurement:-
The number of defects serves as a measure of software quality. It is most useful to define the
severity, priority & category of defects.
More defects mean the quality of the software is poor; thus, the defect report is helpful in
deciding the quality of the software.
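Using defect counts as a quality measure can be sketched as a simple summary over defect records. The severity labels and record fields here are illustrative assumptions.

```python
# Rough sketch: summarising a defect report by severity, so the number
# of defects can serve as a quality measure. The severity labels and
# record fields are illustrative assumptions.
from collections import Counter

defects = [
    {"id": "D-201", "severity": "high"},
    {"id": "D-202", "severity": "low"},
    {"id": "D-203", "severity": "high"},
    {"id": "D-204", "severity": "medium"},
]

# Count defects per severity; a high count of high-severity defects
# suggests poor software quality.
severity_counts = Counter(d["severity"] for d in defects)
total_defects = len(defects)
```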
Advantages of manual testing:
1. It is preferred for products with a short life cycle.
2. It is preferred for software that has a GUI that constantly changes.
3. It requires less time and expense to set up than automation.
4. Automation cannot replace human intuition and inductive reasoning.
5. Automation cannot stop in the middle of a test run to examine something that has not been
considered.
6. In automation testing, batch testing is possible without human interaction; in manual
testing, human interaction is necessary for each and every test.
Disadvantages of manual testing:
1. The scope of manual testing is very limited.
2. Comparing large data sets is impractical.
3. Processing change requests during software maintenance takes more time.
4. Manual testing is slow and costly.
5. It is labour intensive and takes time to complete.
6. Lack of training is a common problem.
7. It is not suitable for large projects that are time bound.
8. As the complexity increases, the testing grows more complex, which causes increased
time and cost for development.
9. It is not consistent or repeatable.
An automated tool is able to play back predefined actions, compare the results to the
expected behaviour & report the success or failure of these tests to the test
engineer.
Once automated tests are created, they can be easily repeated & they can be extended to
perform tasks impossible with manual testing.
Automation testing is essential for successful development projects.
The needs for automated testing tools can be listed as follows:
1. Speed
Automation speeds up pre-recorded tasks very much, as they are simply repeated, and can
be up to 1000x faster than manual testing.
2. Efficiency
Automation doesn't require a human touch, meaning that while it runs tests you can continue
with other tasks such as planning and analysis; this increases efficiency.
3. Accuracy and precision
After about 100 manual tests, humans tend to lose focus and make more mistakes. This
can be solved using automation, since it can do the task at any scale with the same
amount of accuracy and precision.
4. Resource reduction
Sometimes the effort and manpower needed for testing can be unrealistic; in these cases
automation can really help reduce resources, save human effort and simulate the real
world.
5. Simulation and emulation
Test tools are used to replace hardware and software that would normally interface to
your product. This fake device or application can then be used to drive or respond to your
software in any way that you choose during execution.
6. Relentlessness
Test automation doesn't tire or give up; it will continuously test the software.
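A minimal automated test, of the kind described above, runs unattended, compares actual results against expected behaviour, and reports pass/fail. The function under test (apply_discount) is a made-up example, not from any real product; the test framework is Python's standard unittest.

```python
# Minimal automated test using Python's standard unittest framework.
# It replays the same checks on every run and reports pass/fail without
# human interaction. apply_discount is a made-up example function.
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        # Actual result is compared to expected behaviour automatically.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    # exit=False so the test run reports results without terminating
    # the interpreter, which is convenient when scripting.
    unittest.main(exit=False)
```

Once written, this script can be replayed on every build, which is exactly the repeatability that manual testing lacks.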
Advantages
1. Automation test tools save time.
2. They improve the quality of manual test scripts.
3. Using a test tool, early bug detection is possible.
4. Machines and tools work 24x7 and never get tired.
5. Reusability: when a test script is generated by an automation tool, it must be saved for
further requirements so it can be utilised as many times as the software tester wants,
especially for automation testing.
Disadvantages
1. Proficiency is required to write automation test scripts.
2. Debugging test scripts is a major issue.
3. Maintenance is costly in terms of the playback method.
4. Programming knowledge is required.
5. Test tools have environment limitations.
To select the most suitable testing tool for the project, the Test Manager should follow the
below tool selection process.
How can you select a testing tool if you do not know what you are looking for?
You need to precisely identify your test tool requirements. The entire requirement must
be documented and reviewed by the project teams and the management board.
After baselining the requirements of the tool, the Test Manager should
analyze the commercial and open source tools that are available in the market, based on
the project requirements.
Create a tool shortlist which best meets your criteria.
One factor you should consider is vendors. You should consider the vendor's reputation,
after-sale support, tool update frequency, etc. while taking your decision.
Evaluate the quality of the tool through trial usage & by launching a pilot. Many
vendors make trial versions of their software available for download.
To ensure the test tool is beneficial for business, the Test Manager has to balance the
following factors:
Example: After spending considerable time investigating testing tools, the project team found
the perfect testing tool for the xyz project. The evaluation results concluded that this tool
could meet the team's requirements.
However, after discussing with the software vendor, you found that the cost of this tool is too
high compared to the value and benefit that it can bring to the team.
In such a case, the balance between the cost & benefit of the tool may affect the final decision.
Have a strong awareness of the tool. This means you must understand what
the strong points and weak points of the tool are.
Balance cost and benefit.
Even with hours spent reading the software manual and vendor information, you may still need to
try the tool in your actual working environment before buying the license.
Your decision may adversely impact the project, the testing process, and the business goals; you
should spend a good amount of time thinking hard about it.
Consider a scenario where a defect is fixed in the build and a similar feature is used in
different working modules, so it is hard to check whether a new bug has been introduced in
previously working functionality. While doing a test pass, you need to run regression testing
around the defect fixes. This testing exercise needs to be executed each and every time, and you
would have to manually test the functionality around the impacted area. So, considering
resources, time and money, you need to work effectively and smartly. In such scenarios you need
to think of automation testing.
Test automation is a process to check the software application after development, on getting a
new build or release. The investment for test automation is time, money and resources. It
requires initial effort, which will help you whenever you want to execute the regression cases.
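A regression pass around a defect fix can be sketched as a small table of cases that is replayed on every new build. The function under test (format_username) and the cases are made-up examples; the point is that the same checks around the impacted area run automatically each time.

```python
# Sketch of an automated regression pass around a defect fix. Each case
# exercises functionality near the impacted area and is replayed on every
# new build. format_username and the cases are made-up examples.

def format_username(name):
    """Trim whitespace and lower-case a username (the 'fixed' behaviour)."""
    return name.strip().lower()

REGRESSION_CASES = [
    ("  Alice ", "alice"),   # the original defect: spaces were not trimmed
    ("BOB", "bob"),          # nearby behaviour that must still work
    ("carol", "carol"),      # unchanged happy path
]

def run_regression():
    """Return a list of (input, passed) results for the whole suite."""
    results = []
    for raw, expected in REGRESSION_CASES:
        results.append((raw, format_username(raw) == expected))
    return results

all_passed = all(ok for _, ok in run_regression())
```

Because the cases live in one table, adding coverage around the next defect fix is a one-line change, and the whole pass costs nothing to repeat.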
1. Planning
2. Organizing
3. Controlling
4. Improving