Software Testing Updated e Content

The document outlines the basics of software testing, including definitions, objectives, and key terminologies such as defects, bugs, errors, and failures. It also describes the structure of test cases, entry and exit criteria, the V-model for software development, and the differences between quality assurance and quality control. Additionally, it covers static and dynamic testing methods, as well as black box testing approaches and their advantages and disadvantages.


COURSE: COMPUTER ENGINEERING

E-CONTENT OF SUBJECT: SOFTWARE TESTING

SUBJECT CODE: 22518
PREPARED BY

MR. NILESH JAGDISH VISPUTE


SENIOR LECTURER
COMPUTER ENGINEERING DEPARTMENT SHIFT A
PRAVIN PATIL COLLEGE OF DIPLOMA ENGINEERING AND TECHNOLOGY
UNIT 1: BASICS OF SOFTWARE TESTING AND TESTING METHODS

1.1 SOFTWARE TESTING


DEFINITION:

Software testing is the process of evaluating the functionality of a software application to check whether the developed software meets the specified requirements, and of identifying defects so that the product is defect-free and of high quality.

OBJECTIVES OF TESTING:

Software testing has several goals and objectives. The major objectives of software testing
are as follows:

 Finding defects introduced by the programmer while developing the software.
 Gaining confidence in, and providing information about, the level of quality.
 Preventing defects.
 Making sure that the end result meets the business and user requirements.
 Ensuring that the software satisfies the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).
 Gaining the confidence of customers by providing them a quality product.

1.2 BUG TERMINOLOGIES


Let's see the difference between a defect, a bug, an error, and a failure. In general, we use these terms whenever the system/application behaves abnormally; sometimes we call it an error, sometimes a bug, and so on.

Defect
The variation between the actual results and the expected results is known as a defect. If a developer finds an issue and corrects it himself during the development phase, it is called a defect.

Bug
If testers find a mismatch in the application/system during the testing phase, they call it a bug. As mentioned earlier, there is some contradiction in the usage of "bug" and "defect": bug is widely used as an informal name for a defect.

SOFTWARE TESTING -22518 PAGE 1


Error
A coding mistake that prevents a program from compiling or running is called an error. If a developer is unable to successfully compile or run a program, they call it an error.

Failure
Once the product is deployed and customers find issues with it, the product is called a failure. After release, if an end user finds an issue, that particular issue is called a failure.

1.3 TEST CASE AND ENTRY EXIT CRITERIA


TEST CASE

A TEST CASE is a set of conditions or variables under which a tester will determine whether a
system under test satisfies requirements or works correctly. The process of developing test cases
can also help find problems in the requirements or design of an application.

TEST CASE TEMPLATE

A test case can have the following elements. Note, however, that companies normally use a test management tool, and the format is determined by the tool used.



SEVERAL STANDARD FIELDS OF A SAMPLE TEST CASE TEMPLATE ARE
LISTED BELOW.

Test case ID: A unique ID is required for each test case. Follow some convention to indicate the type of test. For example, 'TC_UI_1' indicates user interface test case #1.

Test priority (Low/Medium/High): This is very useful while test execution. Test priority for
business rules and functional test cases can be medium or higher whereas minor user interface
cases can be of a low priority. Test priority should always be set by the reviewer.

Module Name: Mention the name of the main module or the sub-module.

Test Designed By: Name of the tester.

Test Designed Date: Date when it was written.

Test Executed By: Name of the tester who executed this test. To be filled only after test
execution.

Test Execution Date: Date when the test was executed.

Test Title/Name: Test case title. For Example, verify the login page with a valid username and
password.

Test Summary/Description: Describe the test objective in brief.

Pre-conditions: Any prerequisite that must be fulfilled before the execution of this test case. List
all the pre-conditions in order to execute this test case successfully.

Dependencies: Mention any dependencies on the other test cases or test requirements.

Test Steps: List all the test execution steps in detail. Write test steps in the order in which they
should be executed. Make sure to provide as many details as you can.

Test Data: Use of test data as an input for this test case. You can provide different data sets with
exact values to be used as an input.

Expected Result: What should be the system output after test execution? Describe the expected
result in detail including message/error that should be displayed on the screen.

Post-condition: What should be the state of the system after executing this test case?



Actual result: The actual test result should be filled after test execution. Describe the system
behavior after test execution.

Status (Pass/Fail): If an actual result is not as per the expected result, then mark this test
as failed. Otherwise, update it as passed.

Notes/Comments/Questions: If there are special conditions supporting the above fields that cannot be described above, or if there are any questions related to the expected or actual results, mention them here.
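As a minimal sketch, the fields above can be captured as a simple record. All the field names and values below are hypothetical examples, not a prescribed format; real teams usually capture this in a test management tool.

```python
# Illustrative sketch of a test case record using the standard fields above.
# All names and values here are hypothetical examples, not a required format.
test_case = {
    "test_case_id": "TC_UI_1",
    "priority": "High",
    "module": "Login",
    "title": "Verify the login page with a valid username and password",
    "preconditions": ["User account exists", "Login page is reachable"],
    "steps": ["Open the login page", "Enter valid credentials", "Click Login"],
    "test_data": {"username": "demo_user", "password": "demo_pass"},
    "expected_result": "User is taken to the dashboard",
    "actual_result": None,  # filled in after test execution
    "status": None,         # "Pass" or "Fail", filled in after test execution
}
```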

WRITING GOOD TEST CASES

o As far as possible, write test cases in such a way that you test only one thing at a time. Do
not overlap or complicate test cases. Attempt to make your test cases 'atomic'.

o Ensure that all positive scenarios AND negative scenarios are covered.

o Language: Write in simple and easy-to-understand language.

o Use exact and consistent names (of forms, fields, etc).

CHARACTERISTICS OF A GOOD TEST CASE:

o Accurate: States exactly what it is meant to test.

o Economical: No unnecessary steps or words.


o Traceable: Capable of being traced to requirements.
o Repeatable: Can be used to perform the test over and over.
o Reusable: Can be reused if necessary.



ENTRY AND EXIT CRITERIA

ENTRY CRITERIA

Entry criteria for STLC phases can be defined as the specific conditions, or all the documents, that must be present before a particular STLC phase can start.
Entry criteria are a set of conditions that permit a task to be performed; in the absence of any of these conditions, the task cannot be performed.
While setting the entry criteria, it is also important to define the time frame in which each entry criteria item will be available, so the phase can start on time.
For instance, to start the test case development phase, the following conditions should be met:

 The requirement document should be available.


 Complete understanding of the application flow is required.
 The Test Plan Document should be ready.

EXIT CRITERIA

Exit criteria for STLC phases can be defined as items/documents/actions/tasks that must be completed before concluding the current phase and moving on to the next phase.
Exit criteria are a set of expectations that should be met before concluding the STLC phase.
For instance, to conclude the test case development phase, the following expectations should be met:

 Test Cases should be written and reviewed.


 Test Data should be identified and ready.
 Test automation script should be ready if applicable.



1.4 V MODEL, QUALITY ASSURANCE AND QUALITY CONTROL

V MODEL (VERIFICATION AND VALIDATION MODEL)

The V-model is a type of SDLC model in which the process executes sequentially in a V-shape. It is also known as the Verification and Validation model. It is based on associating a testing phase with each corresponding development stage: each development step is directly associated with a testing phase, and the next phase starts only after completion of the previous one. In other words, for each development activity there is a corresponding testing activity.

The V-model thus contains verification phases on one side and validation phases on the other, joined by the coding phase at the point of the V. Hence the name V-model.

Design Phase:
 Requirement Analysis: This phase contains detailed communication with the customer
to understand their requirements and expectations. This stage is known as
Requirement Gathering.



 System Design: This phase covers the system design and the complete hardware and
communication setup for developing the product.
 Architectural Design: The system design is broken down further into modules taking up
different functionalities. The data transfer and communication between the internal
modules, and with the outside world (other systems), are clearly specified.
 Module Design: In this phase the system is broken down into small modules. The detailed
design of the modules is specified; this is also known as Low-Level Design (LLD).

Testing Phases:
 Unit Testing: Unit test plans are developed during the module design phase. These unit
test plans are executed to eliminate bugs at the code or unit level.
 Integration Testing: After completion of unit testing, integration testing is performed: the
modules are integrated and the system is tested. Integration testing corresponds to the
architectural design phase. This test verifies the communication of the modules among
themselves.
 System Testing: System testing tests the complete application, with its functionality,
interdependency, and communication. It tests the functional and non-functional
requirements of the developed application.
 User Acceptance Testing (UAT): UAT is performed in a user environment that
resembles the production environment. UAT verifies that the delivered system meets the
user's requirements and that the system is ready for use in the real world.



QUALITY ASSURANCE AND QUALITY CONTROL

Quality assurance (QA) and quality control (QC) are two terms that are often used
interchangeably. Although similar, there are distinct differences between the two concepts.

Quality assurance and quality control are two aspects of quality management. While some
quality assurance and quality control activities are interrelated, the two are defined differently.
Typically, QA activities and responsibilities cover virtually all of the quality system in one
fashion or another, while QC is a subset of the QA activities.

Fig: Quality System, Quality Assurance, and Quality Control Relationships

QUALITY ASSURANCE

Quality assurance can be defined as "part of quality management focused on providing


confidence that quality requirements will be fulfilled." The confidence provided by quality
assurance is twofold: internally to management and externally to customers, government
agencies, regulators, certifiers, and third parties. An alternate definition is "all the planned and
systematic activities implemented within the quality system that can be demonstrated to provide
confidence that a product or service will fulfill requirements for quality."

QUALITY CONTROL

Quality control can be defined as "part of quality management focused on fulfilling quality
requirements." While quality assurance relates to how a process is performed or how a product
is made, quality control is more the inspection aspect of quality management. An alternate
definition is "the operational techniques and activities used to fulfill requirements for quality."



DIFFERENCE BETWEEN QUALITY ASSURANCE AND QUALITY CONTROL



1.5 METHODS OF TESTING

Testing methods are broadly divided into two categories: static testing and dynamic testing.

STATIC TESTING

In static testing, the code is not executed. Instead, the code, requirement documents, and design documents are checked manually to find errors; hence the name "static".

The main objective of this testing is to improve the quality of software products by finding errors
in the early stages of the development cycle. This testing is also called a Non-execution
technique or verification testing.

DYNAMIC TESTING

In dynamic testing, the code is executed. It checks the functional behavior of the software system, memory/CPU usage, and the overall performance of the system; hence the name "dynamic".

The main objective of this testing is to confirm that the software product works in conformance
with the business requirements. This testing is also called an Execution technique or validation
testing.

Dynamic testing executes the software and validates the output with the expected outcome.



DIFFERENCE BETWEEN STATIC AND DYNAMIC TESTING

1. Static testing is performed in the early stages of software development; dynamic testing is performed at a later stage.
2. In static testing the code is not executed; in dynamic testing the whole code is executed.
3. Static testing prevents defects; dynamic testing finds and fixes defects.
4. Static testing is performed before code deployment; dynamic testing is performed after code deployment.
5. Static testing is less costly; dynamic testing is more costly.
6. Static testing uses checklists for the testing process; dynamic testing uses test cases.
7. Static testing includes walkthroughs, code reviews, inspections, etc.; dynamic testing involves functional and non-functional testing.
8. Static testing generally takes less time; dynamic testing usually takes longer, as it involves running several test cases.
9. Static testing can discover a variety of bugs; dynamic testing exposes only the bugs that are explorable through execution, and hence discovers a limited variety of bugs.
10. Static testing may achieve 100% statement coverage in comparably less time; dynamic testing often achieves less than 50% statement coverage.
11. Example of static testing: verification. Example of dynamic testing: validation.



1.6 THE BOX APPROACH

BLACK BOX TESTING APPROACH

Black box testing is divided into static black box testing and dynamic black box testing.

 Static black box testing: high-level review of the requirement specification; low-level review of the requirement specification.
 Dynamic black box testing: equivalence partitioning; boundary value analysis.



DEFINITION

BLACK BOX TESTING also known as Behavioral Testing is a software testing method in
which the internal structure/design/implementation of the item being tested is not known to the
tester.

Advantages

o Tests are done from a user's point of view and will help in exposing discrepancies in the
specifications.

o Tester need not know programming languages or how the software has been implemented.

o Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias.

o Test cases can be designed as soon as the specifications are complete.

Disadvantages

o Only a small number of possible inputs can be tested and many program paths will be left
untested.

o Without clear specifications, which is the situation in many projects, test cases will be
difficult to design.

o Tests can be redundant if the software designer/developer has already run a test case.

STATIC BLACK BOX TESTING

Testing the specification is static black box testing. The specification is a document, not an executing program, so it is considered static. It is also something that was created using data from many sources: usability studies, focus groups, marketing input, and so on.

Static black box testing methods

1 High level Review of Specification

The first step is to stand back and view it from a high level. Examine the spec for large
fundamental problems, oversights, and omissions. You might consider this more research than
testing, but ultimately the research is a means to better understand what the software should do.



If you have a better understanding of the whys and hows behind the spec, you'll be much better
at examining it in detail.

Things to consider during a high-level review of the specification:

 Pretend to be the customer


 Research existing standards and guidelines
 Corporate terminology and conventions
 Industry requirements
 Government standards
 Graphical user interface
 Security standards
 Review and test similar software

2 Low level Review of Specification

After you complete the high-level review of the product specification, you'll have a better
understanding of what your product is and what external influences affect its design. Armed with
this information, you can move on to testing the specification at a lower level.

Specification Attributes Checklist


A good, well-thought-out product specification has eight important attributes:

o Complete. Is anything missing or forgotten? Is it thorough? Does it include everything


necessary to make it stand alone?
o Accurate. Is the proposed solution correct? Does it properly define the goal? Are there
any errors?
o Precise, Unambiguous, and Clear. Is the description exact and not vague? Is there a
single interpretation? Is it easy to read and understand?
o Consistent. Is the description of the feature written so that it doesn't conflict with itself or
other items in the specification?
o Relevant. Is the statement necessary to specify the feature? Is it extra information that
should be left out? Is the feature traceable to an original customer need?
o Feasible. Can the feature be implemented with the available personnel, tools, and
resources within the specified budget and schedule?
o Code-free. Does the specification stick with defining the product and not the underlying
software design, architecture, and code?
o Testable. Can the feature be tested? Is enough information provided that a tester could
create tests to verify its operation?



DYNAMIC BLACK BOX TESTING

Testing software without having an insight into the details of the underlying code is dynamic black box testing. It is dynamic because the program is running and you are using it as a customer would, and it is black box because you are testing it without knowing exactly how it works internally, as if with blinders on.

Dynamic black box testing methods

1 Equivalence partitioning
Equivalence partitioning, or equivalence class partitioning (ECP), is a software
testing technique that divides the input data of a software unit into partitions of equivalent data
from which test cases can be derived. In principle, test cases are designed to cover each partition
at least once. This technique tries to define test cases that uncover classes of errors, thereby
reducing the total number of test cases that must be developed. An advantage of this approach is
a reduction in the time required for testing, due to the smaller number of test cases.

For example, if you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for invalid data. Using the equivalence partitioning method, the test cases can be divided into three sets of input data, called classes. Each test case is representative of its class.

So in the above example, we can divide our test cases into three equivalence classes of some
valid and invalid inputs.

Test cases for input box accepting numbers between 1 and 1000 using Equivalence
Partitioning:
#1) One input data class with all valid inputs. Pick a single value from range 1 to 1000 as a valid
test case. If you select other values between 1 and 1000 the result is going to be the same.
So one test case for valid input data should be sufficient.
#2) Input data class with all values below the lower limit. I.e. any value below 1, as an invalid
input data test case.
#3) Input data with any value greater than 1000 to represent the third invalid input class.

So using Equivalence Partitioning you have categorized all possible test cases into three classes.
Test cases with other values from any class should give you the same result.

We have selected one representative from every input class to design our test cases. Test case
values are selected in such a way that largest number of attributes of equivalence class can be
exercised.



Equivalence Partitioning uses fewest test cases to cover maximum requirements.
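The three classes above can be sketched in code; `accept_number` below is a hypothetical validator standing in for the input box under test.

```python
# Hypothetical validator for an input box that accepts numbers from 1 to 1000.
def accept_number(value: int) -> bool:
    return 1 <= value <= 1000

# One representative test case per equivalence class:
cases = [
    (500, True),    # class 1: any valid value in 1..1000
    (-5, False),    # class 2: any value below the lower limit
    (1500, False),  # class 3: any value above the upper limit
]
for value, expected in cases:
    assert accept_number(value) == expected
```

Any other value drawn from the same class should produce the same result, which is why one representative per class suffices.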

2 Boundary Value Analysis

It is widely recognized that input values at the extreme ends of the input domain cause more errors in the system; more application errors occur at the boundaries of the input domain. The boundary value analysis testing technique is used to identify errors at these boundaries rather than in the center of the input domain. Boundary value analysis is the natural next step after equivalence partitioning when designing test cases: test cases are selected at the edges of the equivalence classes.

Test cases for input box accepting numbers between 1 and 1000 using Boundary value
analysis:
#1) Test cases with test data exactly as the input boundaries of input domain i.e. values 1 and
1000 in our case.
#2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.
#3) Test data with values just above the extreme edges of the input domain i.e. values 2 and
1001.

Boundary value analysis is often considered a part of stress and negative testing.
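The boundary cases above can be sketched similarly, reusing the same hypothetical `accept_number` validator for the 1 to 1000 input box.

```python
# Hypothetical validator for an input box that accepts numbers from 1 to 1000.
def accept_number(value: int) -> bool:
    return 1 <= value <= 1000

# Boundary value analysis picks values at and around the edges of the range:
boundary_cases = [
    (1, True), (1000, True),    # exactly on the boundaries
    (0, False), (1001, False),  # just outside the boundaries
    (2, True), (999, True),     # just inside the boundaries
]
for value, expected in boundary_cases:
    assert accept_number(value) == expected
```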



WHITE BOX TESTING APPROACH

White box testing is divided into static white box testing and dynamic white box testing.

 Static white box testing (formal technical review): peer review/buddy review; walkthrough; inspection.
 Dynamic white box testing: line coverage/statement coverage; path coverage/branch coverage; condition coverage; code complexity.



DEFINITION

White box testing techniques analyze the internal structure of the software: the data structures used, the internal design, the code structure, and the working of the software, rather than just the functionality as in black box testing. It is also called glass box testing, clear box testing, or structural testing.

Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in optimization of the code, removing errors and extra lines of code.
3. It can start at an earlier stage, as it does not require an interface, unlike black box
testing.
4. Easy to automate.

Disadvantages:
1. The main disadvantage is that it is very expensive.
2. Redesigning or rewriting the code requires the test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language,
as opposed to black box testing.
4. Missing functionality cannot be detected, as only the code that exists is tested.
5. It is very complex and at times not realistic.

STATIC WHITEBOX TESTING

Static white-box testing is the process of carefully and methodically reviewing the software
design, architecture, or code for bugs without executing it. It's sometimes referred to as structural
analysis.

The obvious reason to perform static white-box testing is to find bugs early and to find bugs that
would be difficult to uncover or isolate with dynamic black-box testing. Having a team of testers
concentrate their efforts on the design of the software at this early stage of development is highly
cost effective.
Static white box testing is performed using formal technical reviews.
Formal Technical Review
A formal technical review (FTR) is a form of review in which "a team of qualified personnel
examines the suitability of the software product for its intended use and identifies discrepancies
from specifications and standards. Technical reviews may also provide recommendations of
alternatives and examination of various alternatives."



Phases of Formal Review Technique
 Prepare
 Review
 Record
 Report

Various Formal Review Techniques


1 Peer Review/ Buddy Review

This is an informal review technique in which two team members work on the same build (software application) on the same machine. One team member works with the system (with keyboard and mouse) while the other makes notes and scenarios.

When a tester and a developer work together to ensure the quality of a product, efficiency rises, even when time is short. No documentation such as test cases, test plans, or test scenarios is needed here.

2 Walkthrough

In the formal technical review process, a walkthrough is conducted by the author of the 'document under review', who takes the participants through the document and his or her thought processes, to achieve a common understanding and to gather feedback. This is especially useful if people from outside the software discipline are present who are not used to, or cannot easily understand, software development documents. The content of the document is explained step by step by the author, to reach consensus on changes or to gather information. The participants are selected from different departments and backgrounds; if the audience represents a broad section of skills and disciplines, it gives assurance that no major defects are 'missed' in the walkthrough. A walkthrough is especially useful for higher-level documents, such as requirement specifications and architectural documents.

3 Inspection

Inspection is the most formal review type. It is usually led by a trained moderator (certainly not by the author). The document under inspection is prepared and checked thoroughly by the reviewers before the meeting, comparing the work product with its sources and other referenced documents, and using rules and checklists. In the inspection meeting the defects found are logged. Depending on the organization and the objectives of a project, inspections can be balanced to serve a number of goals.



Roles and Responsibilities in Technical Review

1. The moderator: The moderator (or review leader) leads the review process. His or her role is to determine the type of review, the approach, and the composition of the review team. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions, and stores the data that is collected.

2. The author: As the writer of the 'document under review', the author's basic goal is to learn as much as possible with regard to improving the quality of the document. The author's task is to illuminate unclear areas and to understand the defects found.

3. The scribe/recorder: The scribe (or recorder) records each defect found and any suggestions or feedback given in the meeting for process improvement.

4. The reviewer: The role of the reviewers is to check for defects and further improvements in accordance with the business specifications, standards, and domain knowledge.

5. The manager: The manager is involved in the reviews as he or she decides on the execution of reviews, allocates time in project schedules, and determines whether the review process objectives have been met.

DYNAMIC WHITEBOX TESTING

Dynamic white-box testing, in a nutshell, is using information you gain from seeing what the
code does and how it works to determine what to test, what not to test, and how to approach the
testing. Another name commonly used for dynamic white-box testing is structural testing
because you can see and use the underlying structure of the code to design and run your tests.

Dynamic Whitebox Testing Types or Coverage

1) Statement Coverage / Line Coverage:


In a programming language, a statement is a line of code or an instruction for the computer to understand and act on. A statement becomes an executable statement when it is compiled into object code; it performs its action when the program runs.

Hence "statement coverage", as the name suggests, is the method of validating whether each and every line of code is executed at least once.
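A minimal sketch (the function below is hypothetical) shows why a single test may or may not execute every line:

```python
def classify(x):
    result = "non-positive"  # statement 1
    if x > 0:                # statement 2
        result = "positive"  # statement 3: runs only when x > 0
    return result            # statement 4

# classify(5) executes all four statements: 100% statement coverage.
# classify(-1) alone would skip statement 3: incomplete statement coverage.
assert classify(5) == "positive"
assert classify(-1) == "non-positive"
```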



2) Branch Coverage/ Path Coverage:
A "branch" in a programming language is like an "IF statement". An IF statement has two
branches: true and false.
So in branch coverage (also called decision coverage), we validate whether each branch is
executed at least once.

In case of an “IF statement”, there will be two test conditions:


 One to validate the true branch and,
 Other to validate the false branch.

Hence, in theory, branch coverage is a testing method which, when executed, ensures that each
and every branch from each decision point is executed.

Example



In the above example, there are a few conditional statements that execute depending on which condition is satisfied. There are three paths or conditions that need to be tested to get the output:

 Path 1: 1,2,3,5,6,7
 Path 2: 1,2,4,5,6,7
 Path 3: 1, 6, 7
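Since the figure's code is not reproduced here, a separate minimal sketch (with a hypothetical discount function) illustrates the idea: one test case per branch of the IF statement.

```python
def apply_discount(price, is_member):
    if is_member:               # decision point with two branches
        price = price * 0.9     # true branch
    return price                # reached by both branches

# Full branch coverage needs both outcomes of the decision:
assert apply_discount(100, True) == 90.0   # exercises the true branch
assert apply_discount(100, False) == 100   # exercises the false branch
```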

3) Condition Coverage

Condition coverage (or expression coverage) reveals how the variables or sub-expressions in a conditional statement are evaluated. Only expressions with logical operands are considered in this coverage. For example, if an expression has Boolean operations like AND, OR, or XOR, these determine the total number of possible combinations. Condition coverage offers better sensitivity to the control flow than decision coverage, but it does not guarantee full decision coverage.

Example

Consider the compound condition (x < y) AND (a > b). For this expression, we have 4 possible combinations:

 TT

 FF

 TF

 FT



Consider the following input:

X = 3, Y = 4: (x < y) evaluates to TRUE
A = 3, B = 4: (a > b) evaluates to FALSE

Only the TF combination is exercised, so condition coverage is 1/4 = 25%.
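The same evaluation can be sketched in code, with the compound condition written out explicitly:

```python
# Compound condition with two atomic conditions: (x < y) AND (a > b).
def check(x, y, a, b):
    return (x < y) and (a > b)

# With x=3, y=4, a=3, b=4: (x < y) is True and (a > b) is False,
# so only the TF combination out of {TT, TF, FT, FF} is exercised: 1/4 = 25%.
assert check(3, 4, 3, 4) is False
```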

4 Code Complexity / Cyclomatic Complexity


Cyclomatic complexity is a source code complexity measurement that correlates with the number of coding errors. It is calculated by developing a control flow graph of the code, and it measures the number of linearly independent paths through a program module.
The lower a program's cyclomatic complexity, the lower the risk in modifying it and the easier it is to understand.
It can be represented using the formula:

Cyclomatic complexity = E - N + 2*P

where,
E = number of edges in the flow graph,
N = number of nodes in the flow graph,
P = number of connected components (P = 1 for a single program).

Example :

IF A = 10 THEN
    IF B > C THEN
        A = B
    ELSE
        A = C
    ENDIF
ENDIF
Print A
Print B
Print C

FlowGraph:



The cyclomatic complexity is calculated using the above control flow diagram, which shows seven nodes (shapes) and eight edges (lines); hence the cyclomatic complexity is 8 - 7 + 2(1) = 3.
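The calculation can be sketched as a one-line helper, with the edge and node counts taken from the example above:

```python
# Cyclomatic complexity M = E - N + 2*P, where P is the number of connected components.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# The example flow graph has 8 edges and 7 nodes in a single component:
assert cyclomatic_complexity(edges=8, nodes=7) == 3
```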



UNIT 2: TYPES AND LEVELS OF TESTING
Levels of testing:

2.1 UNIT TESTING


Unit Testing is a level of software testing where individual units/ components of software are
tested. The purpose is to validate that each unit of the software performs as designed. A unit is
the smallest testable part of any software. It usually has one or a few inputs and usually a single
output.

The objective of Unit Testing is:

1. To isolate a section of code.


2. To verify the correctness of code.
3. To test every function and procedure.
4. To fix bugs early in the development cycle and to save costs.
5. To help the developers understand the code base and enable them to make changes
quickly.
6. To promote code reuse.
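A minimal sketch of a unit test using Python's `unittest` module; the `add` function is a hypothetical unit under test:

```python
import unittest

# Hypothetical unit under test: the smallest testable piece of code.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the tests programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Each test method isolates one behaviour of the unit and checks it against an expected result.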



Workflow of unit testing



Unit Testing Tools

Driver
A driver is basically a piece of code through which other programs or pieces of code or modules
can be called. Drivers are the main program through which other modules are called. If we want
to test any module it is required that we should have a main program which will call the testing
module. Without the dummy program or driver, the complete testing of the module is not
possible.

Drivers are mainly used in the Bottom-Up testing approach. In the bottom-up approach the
bottom-level modules are ready but the top-level modules are not. Testing of the
bottom-level modules is therefore not possible through the real main program, so we prepare a
dummy program, or driver, to call the bottom-level modules and perform their testing. The main
purpose of drivers is to allow testing of the lower levels of the code when the upper levels of the
code are not yet developed.
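A minimal sketch of a driver; `calculate_discount` is a hypothetical lower-level module used only for illustration:

```python
# Hypothetical bottom-level module that is ready for testing.
def calculate_discount(price, percent):
    return price - price * percent / 100

# Driver: a dummy "main program" that calls the module under test,
# because the real top-level module is not yet developed.
def driver():
    test_inputs = [(100, 10), (200, 50), (80, 0)]
    return [calculate_discount(price, percent) for price, percent in test_inputs]

print(driver())  # [90.0, 100.0, 80.0]
```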



Stub
Stubs are basically used in Top-Down approach of integration testing. In this approach, the
upper modules are prepared first and are ready for testing. While the bottom modules are not yet
prepared by the developers. So in order to form the complete application we create dummy
programs for the lower modules in the application. Hence all the functionalities can be tested.

The main purpose of a stub is to allow testing of the upper levels of the code when the
lower levels of the code are not yet developed.
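A minimal sketch of a stub; the module names are hypothetical, and the stub simply returns a fixed value in place of a lower-level module that does not yet exist:

```python
# Stub: a dummy replacement for a lower-level module that is not yet
# developed; it returns a fixed, predictable value.
def get_exchange_rate_stub(currency):
    return 80.0  # hard-coded; the real rate-fetching module does not exist yet

# Upper-level module under test; it depends on the lower-level module.
def convert_to_inr(amount_usd, rate_provider=get_exchange_rate_stub):
    return amount_usd * rate_provider("USD")

print(convert_to_inr(10))  # 800.0
```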

2.2 INTEGRATION TESTING

Integration Testing is defined as a type of testing where software modules are integrated
logically and tested as a group. A typical software project consists of multiple software modules,
coded by different programmers. The purpose of this level of testing is to expose defects in the
interaction between these software modules when they are integrated. Integration Testing focuses
on checking data communication amongst these modules.

Integration Testing Approaches

1 Bottom-up Integration

In the bottom-up strategy, the lower-level modules are tested first and are then integrated with
higher-level modules until all modules are tested. It takes the help of Drivers for testing.

Diagrammatic Representation:



Advantages:

 Fault localization is easier.


 No time is wasted waiting for all modules to be developed unlike Big-bang approach

Disadvantages:

 Critical modules (at the top level of software architecture) which control the flow of
application are tested last and may be prone to defects.
 An early prototype is not possible

2 Top-down Integration:

In the top-down approach, testing takes place from top to bottom, following the control flow of the
software system.

It takes the help of stubs for testing.

Diagrammatic Representation:

Advantages:

 Fault Localization is easier.


 Possibility to obtain an early prototype.
 Critical Modules are tested on priority; major design flaws could be found and fixed first.



Disadvantages:

 Needs many Stubs.


 Modules at a lower level are tested inadequately.

3 Hybrid/ Bi-Directional Integration

The sandwich/hybrid strategy is a combination of the Top-Down and Bottom-Up approaches. Here,
top modules are tested with lower modules and, at the same time, lower modules are integrated with
top modules and tested. This strategy makes use of stubs as well as drivers.

2.3 PERFORMANCE TESTING


Performance Testing is a type of software testing that ensures that software applications
perform properly under their expected workload. It is a testing technique carried out to
determine system performance in terms of sensitivity, reactivity and stability under a particular
workload.
Performance Testing is the process of analyzing the quality and capability of a product. It is a
testing method performed to determine the system performance in terms of speed, reliability
and stability under varying workload. Performance testing is also known as Perf Testing.
Performance Testing Attributes:
 Speed:
It determines whether the software product responds rapidly.
 Scalability:
It determines the maximum user load the software product can handle at a time.



 Stability:
It determines whether the software product is stable in case of varying workloads.
 Reliability:
It determines whether the software product performs consistently, without failure, over a given period.

Objective of Performance Testing:


1. The objective of performance testing is to eliminate performance bottlenecks.
2. It uncovers what needs to be improved before the product is launched in the market.
3. The objective of performance testing is to make software fast.
4. The objective of performance testing is to make software stable and reliable.

Types of Performance Testing:

1. Load testing:
It checks the product's ability to perform under anticipated user loads. The objective is to
identify performance bottlenecks before the software product is launched in the market.

2. Stress testing:
It involves testing a product under extreme workloads to see whether it handles high traffic
or not. The objective is to identify the breaking point of a software product.

3. Spike testing:
It tests the product‘s reaction to sudden large spikes in the load generated by users.

4. Volume testing:
In volume testing large number of data is saved in a database and the overall software
system‘s behavior is observed. The objective is to check product‘s performance under
varying database volumes

5. Scalability testing:
In scalability testing, software application‘s effectiveness is determined in scaling up to
support an increase in user load. It helps in planning capacity addition to your software
system.

6. Soak testing:
Soak testing (also called endurance testing) evaluates how an application behaves under a
sustained expected load over an extended period of time, e.g. to detect memory leaks.
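The basic idea behind these tests, measuring response times while the system handles simulated requests, can be sketched as below. `process` is a stand-in for the system under test; a real performance test would use a dedicated tool and many concurrent users:

```python
import time

def measure_response_times(operation, requests):
    """Call `operation` once per simulated request and record the elapsed time."""
    times = []
    for payload in requests:
        start = time.perf_counter()
        operation(payload)
        times.append(time.perf_counter() - start)
    return times

# Stand-in for the system under test.
def process(payload):
    return sum(range(payload))

times = measure_response_times(process, [1000] * 50)  # 50 simulated requests
avg_ms = 1000 * sum(times) / len(times)
print(f"{len(times)} requests, average response {avg_ms:.4f} ms")
```

Varying the number and rate of requests gives load, stress, and spike variants of the same measurement.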



Performance Testing Process:

Advantages of Performance Testing:


 Performance testing ensures the speed, load capability, accuracy and other performance
characteristics of the system.
 It identifies, monitors and resolves issues if any occur.
 It ensures good optimization of the software and also allows a large number of users to
use it at the same time.
 It ensures client as well as end-customer satisfaction.

Disadvantages of Performance Testing:


 Sometimes, users may find performance issues in the real-time environment.
 Team members writing test scripts or test cases in the automation tool should have a
high level of knowledge.
 Team members should have high proficiency in debugging the test cases or test scripts.
 Low performance in the real environment may lead to the loss of a large number of users.



2.3.1 TESTING ON WEB APPLICATION

Web application testing, a software testing technique exclusively adopted to test the
applications that are hosted on web in which the application interfaces and other functionalities
are tested.

Web Application Testing - Techniques:

1. Functionality Testing - Below are some of the checks that are performed, but the list is not
limited to these:
 Verify there is no dead page or invalid redirect.
 First check all the validations on each field.
 Provide wrong inputs to perform negative testing.
 Verify the workflow of the system.
 Verify the data integrity.



2. Usability testing - To verify how easy the application is to use.
 Test the navigation and controls.
 Content checking.
 Check for user intuition.
3. Interface testing - Performed to verify the interface and the dataflow from one system to
other.
4. Compatibility testing- Compatibility testing is performed based on the context of the
application.
 Browser compatibility
 Operating system compatibility
 Compatible to various devices like notebook, mobile, etc.
5. Performance testing - Performed to verify the server response time and throughput under
various load conditions.
 Load testing - It is the simplest form of testing conducted to understand the behavior of
the system under a specific load. Load testing will result in measuring important
business critical transactions and load on the database, application server, etc. are also
monitored.
 Stress testing - It is performed to find the upper limit capacity of the system and also to
determine how the system performs if the current load goes well above the expected
maximum.
 Soak testing - Soak Testing also known as endurance testing, is performed to determine
the system parameters under continuous expected load. During soak tests the parameters
such as memory utilization is monitored to detect memory leaks or other performance
issues. The main aim is to discover the system's performance under sustained use.
 Spike testing - Spike testing is performed by increasing the number of users suddenly by
a very large amount and measuring the performance of the system. The main aim is to
determine whether the system will be able to sustain the work load.
6. Security testing - Performed to verify that the application is secure on the web, as data theft
and unauthorized access are common issues; below are some of the techniques
to verify the security level of the system. Normally, a series of fabricated malicious
attacks is used to test how the app responds and performs under these circumstances. If
security shortfalls are detected, it is important to find the best way possible to overcome
them.



2.3.2 CLIENT SERVER TESTING

In Client-server testing there are several clients communicating with the server.

 Multiple users can access the system at a time and they can communicate with the server.
 Configuration of client is known to the server with certainty.
 Client and server are connected by real connection.
 Testing approaches of client server system:

1. Component Testing: One needs to define the approach and test plan for testing the client and
server individually. When the server is tested a client simulator is needed, whereas testing
the client requires a server simulator; to test the network, both simulators are used at a time.
2. Integration testing: After successful testing of server, client and network, they are
brought together to form system testing.
3. Performance testing: System performance is tested when a number of clients are
communicating with the server at a time. Volume testing and stress testing may be used
to test under maximum load as well as the normal load expected. Various interactions
may be used for stress testing.
4. Concurrency Testing: It is very important testing for client-server architecture. It may
be possible that multiple users may be accessing same record at a time, and concurrency
testing is required to understand the behavior of a system in this situation.
5. Compatibility Testing: Client server may be put in different environments when the
users are using them in production. Servers may be in different hardware, software, or
operating system environment than the recommended. Other testing such as security



testing and compliance testing may be involved if needed, as per the type of
system.

2.4.1 ACCEPTANCE TESTING


It is a formal testing according to user needs, requirements and business processes conducted to
determine whether a system satisfies the acceptance criteria or not and to enable the users,
customers or other authorized entities to determine whether to accept the system or not.

Types of Acceptance Testing


1 Alpha Testing
Alpha testing takes place at developers' sites, and involves testing of the operational system by
internal staff, before it is released to external customers.

2 Beta Testing
Beta testing takes place at customers' sites, and involves testing by a group of customers who use
the system at their own locations and provide feedback, before the system is released to other
customers. The latter is often called "field testing".

Use of Acceptance Testing:


 To find the defects missed during the functional testing phase.
 To check how well the product is developed.
 To ensure the product is what the customers actually need.
 Feedback helps in improving the product performance and user experience.
 To minimize or eliminate the issues arising in production.



2.4.2 SPECIAL TESTS

1 Regression Testing

Regression Testing is defined as a type of software testing to confirm that a recent program or
code change has not adversely affected existing features. Regression Testing is nothing but a full
or partial selection of already executed test cases which are re-executed to ensure existing
functionalities work fine.

How to perform Regression Testing?

Retest All

 This is one of the methods for Regression Testing in which all the tests in the existing test
bucket or suite should be re-executed. This is very expensive as it requires huge time and
resources.

Regression Test Selection

 Instead of re-executing the entire test suite, it is better to select part of the test suite to be
run
 Test cases selected can be categorized as 1) Reusable Test Cases 2) Obsolete Test Cases.
 Re-usable Test cases can be used in succeeding regression cycles.
 Obsolete Test Cases can't be used in succeeding cycles.

Prioritization of Test Cases

 Prioritize the test cases depending on business impact, critical & frequently used
functionalities. Selection of test cases based on priority will greatly reduce the regression
test suite.
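The selection-by-priority idea can be sketched as below; the test-case records and field names are illustrative, not from any specific tool:

```python
# Each test case carries a priority reflecting business impact (1 = highest).
test_suite = [
    {"id": "TC01", "priority": 1, "area": "login"},
    {"id": "TC02", "priority": 3, "area": "report styling"},
    {"id": "TC03", "priority": 1, "area": "payment"},
    {"id": "TC04", "priority": 2, "area": "search"},
]

def select_regression_tests(suite, max_priority):
    """Keep only test cases at or above the cut-off priority."""
    return [tc["id"] for tc in suite if tc["priority"] <= max_priority]

print(select_regression_tests(test_suite, max_priority=2))
# ['TC01', 'TC03', 'TC04']
```

Raising or lowering the cut-off trades regression coverage against execution time.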



2 GUI Testing

GUI Testing is a software testing type that checks the Graphical User Interface of the
Application under Test. GUI testing involves checking the screens with the controls like menus,
buttons, icons, and all types of bars - toolbar, menu bar, dialog boxes, and windows, etc. The
purpose of Graphical User Interface (GUI) Testing is to ensure UI functionality works as per the
specification.

GUI is what the user sees. If you visit any site, what you see, say the homepage, is the GUI
(graphical user interface) of the site. A user does not see the source code; only the interface is
visible to the user. The focus is especially on the design structure and on whether the images
are working properly or not.

The following checklist will ensure detailed GUI Testing in Software Testing.

 Check all the GUI elements for size, position, width, length, and acceptance of characters
or numbers. For instance, you must be able to provide inputs to the input fields.
 Check you can execute the intended functionality of the application using the GUI
 Check Error Messages are displayed correctly
 Check Font used in an application is readable
 Check the alignment of the text is proper
 Check the Color of the font and warning messages is aesthetically pleasing
 Check that the images have good clarity
 Check that the images are properly aligned
 Check the positioning of GUI elements for different screen resolution.
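A few of the checks above can be sketched against a hypothetical description of a GUI element; a real GUI test would drive an actual browser or window through a GUI automation tool:

```python
# Hypothetical description of one GUI element, for illustration only.
button = {"text": "Submit", "width": 120, "height": 40, "visible": True}

def check_element(el, min_width, min_height):
    """Return a list of GUI problems found for one element."""
    issues = []
    if not el["visible"]:
        issues.append("element not visible")
    if el["width"] < min_width or el["height"] < min_height:
        issues.append("element size below minimum")
    if not el["text"].strip():
        issues.append("missing label text")
    return issues

print(check_element(button, min_width=80, min_height=30))  # [] -> no issues
```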



UNIT 3: TEST MANAGEMENT
3.1 TEST PLANNING
 Testing is an important phase, which can be planned, executed, tracked and periodically
reported on.
 A test plan is nothing but a detailed document that outlines the test strategy, testing
objectives, resources (hardware, software, manpower etc.), test schedule, test estimation
and test deliverables.
1 Preparing a test plan:-

 It is an important document for execution, tracking and reporting the entire testing.
 Following things are required to prepare test plan:
1. Scope: - Write the scope of testing, including a clear indication of what will be
tested and what will not be tested.
2. Break it down into small and manageable tasks: - Testing is performed by breaking
it down into small and manageable tasks and identifying the strategies to be used for
carrying out each task.
3. Resources: - Find out the resources needed for testing.
4. Timeline: - Design timeline by which testing activities can be performed.

Standard template for Test Plan:-


1. Introduction
1.1 scope
2. References
3. Test methodology and strategy / approach
4. Test criteria
4.1 entry criteria
4.2 exit criteria
4.3 suspension criteria
4.4 resumption criteria
5. Assumption dependencies and risk
5.1 Assumption
5.2 Dependencies
5.3 Risk and risk management plans
6. Estimations
6.1 Size estimate
6.2 Effort estimate
6.3 Schedule estimate
7. Test deliverable and milestones



8. Responsibilities
9. Resources requirement
9.1 Hardware resources
9.2 Software resources
9.3 People resources
9.4 Other resources
10. Training requirements
11. Defect logging and tracking process
12. Metrics plan
13. Product release criteria
2 Deciding Test Approaches:-

 It is a part of the test plan; it identifies the right type of testing required for the system.
 A test approach is an approach in which we decide what needs to be tested in
order to estimate size, effort, and schedule.
 Setting up criteria for testing:-
 Set entry and exit criteria for testing. Entry criteria specify the conditions for each phase;
there must be entry criteria satisfied before the entire testing activity can start.
3 Exit Criteria: - It specifies when a test cycle can be completed.

4 Identifying Responsibilities Staffing, Need of Tracking:-

 The testing process requires different people to play different roles; the most common roles
are test engineer, test lead, and test manager.
How to identify responsibilities:-
 Each person should know clearly what he or she has to do.
 List out the responsibilities of each function in the testing process.
 Everyone should know the importance of the work while performing the testing activity.
 Complement and cooperate with each other.
 No task should be left unassigned.
Staffing:-
 It is done based on estimation of effort and time available for project for completion.
 The task in testing is prioritized on basis of efforts, time and importance.
 The people are assigned to task based on the requirement of jobs, skills and
experience of people.
Need of training:-
 If people do not have the skills for the specific requirements of their jobs, then an
appropriate training program is needed.



5 Identifying the Resources Requirements:-
While planning for testing of a project, the project manager should get an estimate of hardware and
software requirements.
Following factors shall be considered while selecting resource requirements
a. RAM, processor, hard disk required for system.
b. Test automation tool.
c. Supporting tools such as complier, test configuration manager and so on.
d. Appropriate version of software with license.
e. OS required for execution.
6 Identifying Test Deliverables:-

 Test deliverables are artefacts, i.e. things that are produced by people involved in
the process and given to the stakeholders.
 A milestone is an indication of completion of a key project task.
E.g.: Some common test deliverables are:-
1. Test plan
2. Test case design specification
3. Test cases
4. Test logs
5. Test summary report
7 Testing tasks:-
There are various tasks of tester:-

1) Identify bug and error, describe and send for revision.


2) Check the implemented improvement
3) Test the product until the desired result is achieved.
4) Test the software solution on all necessary devices.
5) Test the product complies with requirements.
8 Estimation:-
Test estimation is a management activity which approximates how long a task would take to
complete. Estimating cost & effort for the test is one of the major and important tasks in test
management.
There are 3 phases of estimation:-
1) Size estimation
2) Effort estimation
3) Schedule estimation



3.2 TEST MANAGEMENT
 Test planning is a process of identifying the test activities and resource requirements for
implementation of the testing strategy.
 Test management is a process of managing the test process by using a series of activities
such as planning, execution, monitoring and controlling.

3.2.1 TEST INFRASTRUSTURE MANAGEMENT


There are 3 elements of test infrastructure

TEST INFRASTRUCTURE

1. Test Case Database [TCDB]
2. Defect Repository
3. Configuration Management Repository & Tool

Test Case Database (TCDB)


 It is a collection of information about test cases in an organization.
 The content of test case database are given below:-
Entity: Test case
Purpose: Records all the information about the tests.
Attributes: 1. Test case ID  2. Name of test case  3. Owner of test case  4. Status

Entity: Test case product cross-reference
Purpose: Provides mapping between a test case and the corresponding feature of the product.
Attributes: 1. Test case ID  2. Module ID

Entity: Test case run history
Purpose: Provides the history of when the test was run and what the result was.
Attributes: 1. Test case ID  2. Run date  3. Time  4. Status of run (pass or fail)

Entity: Test case defect cross-reference
Purpose: Details of test cases introduced to test certain defects detected in the product.
Attributes: 1. Test case ID  2. Defect reference



Defect repository
 It captures all the relevant details of defects reported for a product.
 The following information stored in defect repository:-
Entity: Details of defect
Purpose: Records all static information about the defect.
Attributes: 1. Defect ID  2. Defect priority  3. Defect description  4. Affected product
5. Environment information  6. Customer who encountered the defect  7. Date and time of
defect occurrence

Entity: Details of defect tests
Purpose: Details of test cases for a given defect. Cross-referenced with the TCDB.
Attributes: 1. Test case ID  2. Defect ID

Entity: Fixing details
Purpose: Details of the fixes for a given defect.
Attributes: 1. Defect ID  2. Fix details

Entity: Communication
Purpose: Details of the communication that occurs among various stakeholders during defect
management.
Attributes: 1. Test case ID  2. Defect reference  3. Details of communication

Configuration Management and Repository Tools


 Software Configuration management (SCM) repository is also known as Configuration
management repository.
 It keeps the track of change control and version control of all the files present in Software
product.
 Change control ensures that:-
 Changes made during testing of a file are done in a controlled manner.
 Changes made by one tester are not lost or overwritten by another tester.
 For each change made in a file, an appropriate version of the file is created.
 Everyone gets the most appropriate (latest) version of the file.
 Version control ensures that the test scripts associated with a given release of a product are
baselined along with the product files.



Working of Test Infrastructure Components

 The TCDB, SCM, DR, work together by cooperating each other.


 For e.g.:- The defect repository links defects, fixes and tests. The files are present in SCM.
The metadata about the modified files is present in TCDB, and the corresponding
test case files and source files are found from SCM.

3.2.2 TEST PEOPLE MANAGEMENT

 People management is an important aspect of any project.


 You need good people for:-
1. Achieving project targets
2. Communicating more effectively
3. Cooperating effectively
 For people management following are general steps to be taken:-
1. Effective communication with people
2. Build healthy relationship
3. Influence people
4. Motivate people
5. Handle ethical issues properly



3.3 TEST PROCESS

Base lining the Test Plan.


A test plan is an important document which is very much needed for testing the entire project.

A checklist for the test plan must be prepared.


 Each testing project puts together a test plan based on a template.
 The test plan is reviewed by expert people in the organization.
 It is then approved by a competent authority.
 After this, the test plan is baselined into the configuration management repository; from
this point the test plan becomes the basis for the testing process of the project.
 If any change is made in testing, then that should be reflected in the test plan and also
changed again in the configuration management repository.

Test case specification


 On the basis of the test plan, the testing team designs the test case specification; it is the
basis for preparing individual test cases.
 A test case can be defined as a series of steps executed on the product using a pre-defined
set of input data, expected to produce a pre-defined set of outputs in a given environment.
 Following things need to be indentified :-
i. Purpose of test.
ii. Item to be tested. (object)
iii. Software and hardware requirement
iv. Input data
v. Steps to be followed
vi. Expected result
vii. Actual result
viii. Status
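The fields above can be captured in a simple record; the values shown are illustrative only:

```python
# Illustrative only: a test case record capturing the fields listed above.
test_case = {
    "id": "TC_LOGIN_01",
    "purpose": "Verify login with valid credentials",
    "item_under_test": "Login module",
    "environment": "Windows 10, Chrome, test server",
    "input_data": {"username": "demo_user", "password": "demo_pass"},
    "steps": [
        "Open the login page",
        "Enter username and password",
        "Click the Login button",
    ],
    "expected_result": "User is redirected to the home page",
    "actual_result": None,   # filled in during execution
    "status": "Not Run",     # becomes Pass / Fail after execution
}
print(test_case["id"], "-", test_case["status"])  # TC_LOGIN_01 - Not Run
```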

3.4 TEST REPORTING


Execution Test Cases:-

 The prepared test cases have to be executed at the appropriate time during the project.
 For e.g., system test cases must be executed during the system test phase.
 During test case execution the defect repository must be updated with the following points:-
1. Defects that are fixed currently.
2. New defects that are found.
 The test team communicates with the development team with the help of the defect repository.
 A test may have to be suspended during its run, and it should be resumed only after
satisfying the resumption criteria.



 A test should be run when the entry criteria for the test are satisfied and should be
concluded only when the exit criteria are satisfied.
 On successful execution of test cases, the traceability matrix must be updated.

Test Reporting:-

 During testing, constant communication between the test team and the development team
takes place with the help of documents called test reports.
 Various types of test repots are:-
1. Test incident report
2. Test cycle report
3. Test summary report

1. Test Incident Report


 A test incident report is nothing but an entry made in the defect repository (DR).
 Each defect has a unique ID, which is used to identify the incident.
 A test incident report is used to refer to the testing cycle when defects are
encountered.
 The high impact (severity) test incidents are highlighted in the test summary report.

2. Test Cycle Report


Testing takes place in units of test cycles. A test cycle report at the end of each cycle gives
the following things:-
1. Description of activities carried out during test cycle.
2. Defects that are detected during the test cycle along with their severity and impact.
3. Progress in fixing the defect from one cycle to next cycle.
4. Outstanding defects that are yet to be fixed.
5. Any variation in schedule.

3. Preparing Test Summary Report:-


 A report that summarizes the results of a test cycle is called a test summary report.
 There are two types of test summary reports:-
1. Phase-wise test summary report, which is produced at the end of every phase.
2. Final test summary report.
 It includes following things:-
1. Summary of the activities carried out during the test cycle.
2. Variation in actual activities and planned activities.
 Tests that were planned to run but could not be run.
 Modified tests.
 Additional tests that were required.



 Difference in effort and time between planning and actual.
 Any other deviation from plan.

Summary of results should include:-

 Tests that are failed with root cause.


 Severity of impact of uncovered defects.

Comprehensive assessment and recommendation for release



UNIT 4: DEFECT MANAGEMENT

Defect:
A defect is something by which the customer requirements do not get satisfied. Basically, the
difference between the expected result and the actual result is called a defect. A defect is a
specific concern about the quality of an application under test. Defects are expensive because
finding & correcting a defect is a complex activity in software development.

Causes of defects:
The requirements defined by the customer are not clear & because of that sometimes the
development team makes assumptions. The software design is incomplete & doesn't
accommodate the needs of the customer. People working on product design, development &
testing are not skilled. The processes that are intended for product design, development & testing
are not capable of producing the desired result.

Effect of defects:
Due to the defects present in a system, the customer has total dissatisfaction with the system.
There are some serious effects of defects as given below:-

The performance of the system will not be at an acceptable level. Security of the system can be
problematic & there are chances of an external attack on the system. Required functionality
might be absent from the system, which may result in rejection of the system by the
customer.



4.1.1 DEFFECT CLASSIFICATION

There are various types of defects:

Requirement defect: When a developer cannot understand the customer requirement properly, a
requirement defect occurs. The requirement defect can be further classified as:-

Functional defect: If the functionality expected by the customer is not present in the system, it is
called a functional defect.

Interface defect: If a defect remains in the modules when one module is interfaced with another
module, it is called an interface defect.

Design defect:
If the software design is not correct or is created without understanding the requirement, then a
design defect occurs. The design defect can be further classified as:-

Algorithm defect: If the design of an algorithm is unable to translate the requirement correctly,
then an algorithm defect occurs.
Interface defect: When a parameter from one module doesn't get passed to another module
correctly, an interface defect occurs; this is usually due to lack of communication.

Coding defect:
If the coding standards & design standards are not followed properly according to organisation
guidelines, then a coding defect occurs.



Testing defect:
If the testing is not conducted properly, then a testing defect occurs. Various types of testing
defects are:-

Test design defect: If the test plan, test cases, test scenarios & test data are not properly defined,
then a test design defect occurs.

Test tool defect: If there is a defect in the test tool, then it is difficult to identify & resolve the
defect.

4.1.2 DEFECT MANAGEMENT PROCESS

The defect management process can be carried out in various phases: defect prevention, baseline
delivery, defect discovery, defect resolution & process improvement.

Defect prevention:
Defect prevention is the highest priority activity in the defect management process. Following are
the steps taken for defect prevention:-

Identify the cause of defects & try to reduce the occurrence of defects. Focus on common causes
of defects which occur in coding or interface generation. Identify critical risks, assess them & try
to minimize their expected impact.

Baseline Delivery:
The baseline means the work product which is in a deliverable stage. The deliverable is baselined
when it reaches a predefined milestone in its development.



Defect Discovery:
It means defects are identified & brought to the attention of the developers. As soon as a defect
is identified it must be reported to the concerned team in order to resolve it.

Defect Resolution:
Once the developers have acknowledged a valid defect, the resolution process begins. The
resolution process is done in the following steps:-

Determine the importance of the defect. Schedule & fix the defect as per the order of its
importance. Notify all concerned parties.

Process Improvement:
This step suggests that participants should go back to the process that originated the defect to
understand what caused it. Then carry out the validation process and check the improvement of
the software.

Management Reporting:
It is important that defect information is analysed and communicated to both project
management and senior management. The purposes of collecting such information are:

To know the status of each defect. To provide insight into processes that need improvement.
To provide strategic information for making important decisions.
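As a sketch of such management reporting, defect statuses can be tallied into a quick summary for management. The record format here is hypothetical, for illustration only:

```python
from collections import Counter

def summarize_defects(defects):
    """Tally defects by status to give management a quick status overview.

    `defects` is a list of dicts with at least a 'status' key
    (a made-up record format, not any particular tracker's schema).
    """
    return Counter(d["status"] for d in defects)

# Example defect log (made-up data)
log = [
    {"id": 1, "status": "open"},
    {"id": 2, "status": "fixed"},
    {"id": 3, "status": "open"},
    {"id": 4, "status": "closed"},
]
print(summarize_defects(log))  # Counter({'open': 2, 'fixed': 1, 'closed': 1})
```

A summary like this makes it easy to see which process stage is accumulating defects.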


4.2 DEFECT LIFE CYCLE

The defect life cycle is the cycle a defect goes through during its lifetime: it starts when the
defect is found and ends when the defect is closed, after ensuring it cannot be reproduced. The
defect life cycle applies to a bug found during testing; the bug passes through different stages,
which are as follows:
New: when a defect is logged and posted for the first time, its state is given as New.

Assigned: after the tester has posted the bug, the test lead approves that the bug is genuine and
assigns it to the corresponding developer and developer team. Its state is then given as
Assigned.
Open: in this state the developer has started analysing and working on the defect fix.

Fixed: when the developer has made the necessary code changes and verified the change, the
bug state is set to Fixed.

Verified: the tester tests the bug again after it has been fixed by the developer. If the bug is no
longer present in the software, the tester approves that the bug is fixed and changes the status to
Verified. If the bug is not fixed, retesting of the bug is necessary.


Reopen: if the bug still exists even after being fixed by the developer, the tester changes the
status to Reopen, and the bug goes through the life cycle once again.

Duplicate: if the bug is reported twice, or a previously identified bug covers the same issue as
the current bug, the status is changed to Duplicate.

Rejected: if the developer feels that the bug is not genuine, he/she rejects the bug and changes
its state to Rejected.

Deferred: when a bug moves into the Deferred state, its fix is expected in the next release.

Not a bug: the state is given as Not a Bug if the report involves no change in the functionality of
the application.

Closed: once the bug is fixed, it is tested by the tester; if the tester feels that the bug no longer
exists in the software, he/she changes the status of the bug to Closed.
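The stages above can be sketched as a small state machine. The transition table below is a simplified reading of the life cycle described; real bug trackers differ in detail:

```python
# Allowed transitions in the defect life cycle described above
# (a simplified sketch; real trackers add or rename states).
TRANSITIONS = {
    "new":      {"assigned", "rejected", "duplicate", "deferred", "not a bug"},
    "assigned": {"open"},
    "open":     {"fixed"},
    "fixed":    {"verified", "reopen"},
    "verified": {"closed"},
    "reopen":   {"assigned"},
}

def can_move(current, target):
    """Return True if a defect may move from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())

print(can_move("fixed", "verified"))  # True
print(can_move("new", "closed"))      # False: a new bug cannot jump straight to closed
```

Encoding the life cycle this way makes illegal status jumps (e.g. New directly to Closed) easy to reject.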

4.3 ESTIMATED IMPACT OF DEFECT

The actual impact of a defect is only learned when the risk becomes a reality, but it is possible to
estimate its probable impact. Some organisations classify risks as high, medium and low based
on such a model.

Following are some ways to handle risks:

Accept the risk as it is:


If some natural disaster occurs, there may be no solution for it; the actions or solutions to
handle such risks are very costly.

Bypassing the risks:


A risk can be bypassed when the user cannot accept the risk or when no action can be taken to
reduce its probability.

How to minimise risk impact or probability:


Eliminate risk:
Risk probability can be reduced by reducing the cause of the risk. Although it is not possible
to eliminate risk completely, its probability can be reduced to a certain level. There are
some preventive controls that help in reducing the probability of risks.
Mitigation of risks:
Actions taken by an organisation to minimise possible damage due to the realisation of a risk
are called mitigation actions. Mitigation actions are planned by the organisation proactively.
Risk detection can be performed by detective controls, which are used to evaluate a risk, judge
its severity and decide its priority accordingly.

Contingency planning:
This means the actions initiated by the organisation when risks become reality. These are
actions planned in advance, keeping in mind that the preventive and corrective actions
might fail.
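One common (though not universal) way to implement the high/medium/low classification mentioned above is to score risk exposure as probability times impact; the thresholds in this sketch are purely illustrative:

```python
def risk_level(probability, impact):
    """Classify a risk as high/medium/low from its exposure.

    Exposure = probability (0..1) * impact (normalised 0..1 here).
    The thresholds are illustrative, not any organisation's standard.
    """
    exposure = probability * impact
    if exposure >= 0.6:
        return "high"
    if exposure >= 0.3:
        return "medium"
    return "low"

print(risk_level(0.9, 0.8))  # high   (exposure 0.72)
print(risk_level(0.5, 0.7))  # medium (exposure 0.35)
print(risk_level(0.1, 0.5))  # low    (exposure 0.05)
```

Risks classified as high would then be candidates for elimination or mitigation, while low-exposure risks might simply be accepted.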

Techniques for finding defects:


There are three techniques for finding defects.
Static testing:
A technique in which testing is done without executing the program. Code review is an
example of a static technique.
Dynamic testing:
A technique in which the system components are executed in order to identify defects.
Execution of test cases is an example of dynamic testing.
Operational technique:
When a system containing defects is delivered to the users and the defects are then identified
by the user or customer, this is called the operational technique.
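The contrast between static and dynamic testing can be shown in a few lines: a static check inspects the source without running it, while a dynamic check executes the code with test input. The `divide` function is a made-up example:

```python
import ast

# Code under test, held as text so it can be examined before execution.
SOURCE = "def divide(a, b):\n    return a / b\n"

# Static testing: analyse the source without executing it,
# e.g. list the functions it defines (as a code reviewer or linter would).
tree = ast.parse(SOURCE)
func_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
print(func_names)  # ['divide']

# Dynamic testing: execute the code and exercise it with test input.
namespace = {}
exec(SOURCE, namespace)
result = namespace["divide"](10, 2)
print(result)  # 5.0
```

Note that only the dynamic step would reveal a runtime defect such as division by zero; the static step can only catch what is visible in the source structure.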

Reporting a defect:
Finding and reporting defects is an important step in the software development life cycle. It is
necessary to find the root cause of a defect and prepare a document about it.
Following are some points to be noted when reporting defects:
 Give a complete record of the inconsistency:
 A complete description of the defect helps the tester and user take preventive and corrective
actions about the defect.
 A complete description also helps in process improvement.
 The defect report forms a base for quality measurement:
 The number of defects serves as a measure of software quality; the report must define the
severity, priority and category of defects.
 More defects mean the quality of the software is poor; thus, the defect report is helpful in
deciding the quality of the software.
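A defect report carrying the fields discussed above (severity, priority, category, a reproducible description) might be modelled as follows. The field names are illustrative; real trackers use their own schemas:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    """A minimal defect report with the attributes a tester should record.

    The schema is a sketch for illustration, not any tracker's format.
    """
    summary: str
    severity: str                 # e.g. critical / major / minor
    priority: str                 # e.g. high / medium / low
    category: str                 # e.g. coding / interface / design
    steps_to_reproduce: list = field(default_factory=list)
    reported_on: date = field(default_factory=date.today)

bug = DefectReport(
    summary="Login fails for passwords longer than 32 characters",
    severity="major",
    priority="high",
    category="coding",
    steps_to_reproduce=["Open login page", "Enter a 40-char password", "Submit"],
)
print(bug.summary)
```

Capturing the steps to reproduce as explicit data is what lets the developer (and later the regression tester) recreate the inconsistency exactly.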


CHAPTER 5: TESTING TOOLS AND MEASUREMENT
5.1.1 MANUAL TESTING
 It is the process of using the functions and features of an application from an end-user
perspective in order to verify that the software is working as required.
 With manual testing, the tester performs tests on the software by following a set of
predefined test cases.
 In manual testing, testers manually execute test cases without any automation tools.
 It requires the tester to play the role of an end user.
 Any new application must be manually tested before automation testing starts.
 Manual testing requires more effort, but is necessary to check automation feasibility.
 Manual testing doesn't require knowledge of automation tools.
 100% automation of any software is not possible, so manual testing plays an important
role too.

Advantages:
1. It is preferred for products with a short life cycle.
2. It is preferred for software with GUIs that constantly change.
3. It requires less time and expense to test manually.
4. Automation cannot replace human intuition and inductive reasoning.
5. Automation cannot stop in the middle of a test run to examine something that has not been
considered.
6. Automation cannot batch-test everything; for some tests human interaction is necessary.

Disadvantages:
1. Manual test scope is very limited.
2. Comparing large volumes of data is impractical.
3. Processing change requests during software maintenance takes more time.
4. Manual testing is slow and costly.
5. It is labour-intensive and takes time to complete.
6. Lack of training is a common problem.
7. It is not suitable for large projects that are time-bound.
8. As complexity increases, the testing grows more complex, which increases the time and
cost of development.
9. It is not consistent or repeatable.


5.1.2 NEED OF AUTOMATION TESTING

 An automated tool is able to play back predefined actions, compare the results to the
expected behaviour and report the success or failure of these manual tests to the test
engineer.
 Once automated tests are created they can easily be repeated, and they can be extended to
perform tasks that are impossible with manual testing.
 Automation testing is essential for successful development projects.
 The needs for automated testing tools can be listed as follows:
1. Speed
Automation speeds up a pre-recorded task enormously, as it simply has to be repeated,
and can run many times faster than manual testing.
2. Efficiency
Automation doesn't require a human presence, meaning that while it runs tests you can
continue with other work such as planning and analysis; this increases efficiency.
3. Accuracy and precision
After about 100 manual tests, humans tend to lose focus and make more mistakes. This
can be solved using automation, since it can run tests at any scale with the same
accuracy and precision.
4. Resource reduction
Sometimes the effort and manpower required for testing can be unrealistic. In these cases
automation can help reduce resources, save human effort and simulate real-world
conditions.
5. Simulation and emulation
Test tools are used to replace hardware and software that would normally interface with
your product. This fake device or application can then be used to drive, or respond to,
your software in the ways that you choose to execute.
6. Relentlessness
Test automation doesn't tire or give up; it will continuously test the software.
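As a minimal illustration of this repeatability, the same suite of checks can be replayed identically on every build with Python's built-in `unittest` framework. The `apply_discount` function is a made-up example of code under test:

```python
import unittest

def apply_discount(price, percent):
    """Function under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    """These checks run identically on every build, at any hour, any number of times."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

# Run the suite programmatically, as a CI job or nightly build would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Once written, this suite costs nothing to re-run after each defect fix, which is exactly the regression scenario where automation pays off.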

5.2 ADVANTAGES AND DISADVANTAGES OF USING TOOLS

Advantages
1. Automation test tools save time.
2. It improves the quality of manual test scripts
3. Using test tool early bug detection is possible
4. Machine and tools work 24x7 and never get tired
5. Reusability when a test script generated by an automation tool it must be saved for
further requirement so it can be utilised as many times as software tewst wants
especially for automation testing.

SOFTWARE TESTING -22518 PAGE 56


6. Automated testing comes with distributed testing feature:
7. Automation test script have huge benefits for tracking each and every script
8. Compatibility for automated test tool is more than manual testing
9. Automation test tool improves the overall test coverage
10. Manpower utilization improves as the automation tool decreases the man power
which then can be reallocated for better different tasks
11. Automation test tool is more accurate than manual test
12. It allows to repeat execution of same set of automation test cases without any
human intervention and that too faster
13. It is reliable and works with same accuracy for long
14. It can replace the manually repeated tasks until it reduces the cost of the project

Disadvantages
1. Proficiency is required to write automation test scripts.
2. Debugging test scripts is a major issue.
3. Maintenance is costly with the record-and-playback method.
4. Programming knowledge is required.
5. Test tools have environment limitations.

5.3 SELECTING A TESTING TOOL


Success in test automation depends on identifying the right tool for the job.
Selecting the correct testing tool for your project is one of the best ways to achieve the
project target.

Testing Tool selection process

To select the most suitable testing tool for the project, the Test Manager should follow the tool
selection process below:


Step 1) Identify the requirement for tools

How can you select a testing tool if you do not know what you are looking for?

You need to precisely identify your test tool requirements. The entire requirement must
be documented and reviewed by the project teams and the management board.

Step 2) Evaluate the tools and vendors

After baselining the requirements for the tool, the Test Manager should:

 Analyse the commercial and open-source tools that are available in the market, based on
the project requirements.
 Create a shortlist of tools which best meet your criteria.
 One factor you should consider is the vendor: look at the vendor's reputation, after-sales
support, tool update frequency, etc. while taking your decision.
 Evaluate the quality of the tool through trial usage and a pilot. Many
vendors make trial versions of their software available for download.

Step 3) Estimate cost and benefit

To ensure the test tool is beneficial for the business, the Test Manager has to balance the
following factors:


A cost-benefit analysis should be performed before acquiring or building a tool.

Example: After spending considerable time investigating testing tools, the project team found
what looked like the perfect testing tool for the xyz project. The evaluation concluded that this
tool could

 Double the current productivity of test execution


 Reduce the management effort by 30%

However, after discussing with the software vendor, you found that the cost of this tool is too
high compared to the value and benefit that it can bring to the team.

In such a case, the balance between cost and benefit of the tool may affect the final decision.
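A cost-benefit check like the one above can be reduced to a rough return-on-investment calculation. The model and all figures below are made up purely for illustration:

```python
def tool_roi(license_cost, annual_manual_cost, saving_fraction, years):
    """Rough return-on-investment for a test tool (illustrative model only).

    saving_fraction: share of the annual manual-testing cost the tool eliminates.
    Returns total savings minus the licence cost over `years`.
    """
    savings = annual_manual_cost * saving_fraction * years
    return savings - license_cost

# Made-up figures: a 50,000 licence against 60,000/year of manual effort,
# with the tool expected to cut that effort by 40% over 3 years.
print(tool_roi(50_000, 60_000, 0.4, 3))  # 22000 -> positive, the tool pays off
```

If the licence cost in this sketch were raised to 100,000, the result would go negative, which is exactly the "cost too high compared to the benefit" situation described in the example above.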

Step 4) Make the final decision

To make the final decision, the Test Manager must:

 Have a strong awareness of the tool, i.e. understand its strong points and its weak
points.
 Balance cost and benefit.

Even with hours spent reading the software manual and vendor information, you may still need to
try the tool in your actual working environment before buying the licence.

You should hold meetings with the project team and consultants to gain deeper knowledge of
the tool.

Your decision may adversely impact the project, the testing process and the business goals; you
should spend a good amount of time thinking hard about it.

5.4 WHEN TO USE AUTOMATED TEST TOOLS

Consider a scenario where a defect is fixed in the build and a similar feature is used in
different working modules, so it is hard to check whether a new bug has been introduced into
previously working functionality. In each test pass you need to run regression testing around the
defect fixes, and this exercise has to be repeated every time you manually test the functionality
around the impacted area. So, considering resources, time and money, you need to work
effectively and smartly. In such scenarios you should think of automation testing.

Test automation is a process of checking the software application after development, on getting a
new build or release. The investment in test automation is time, money and resources. It requires
initial effort, which pays off whenever you want to execute the regression cases.

5.5 METRICS AND MEASUREMENT

Software Measurement: A measurement is a manifestation of the size, quantity, amount or


dimension of a particular attribute of a product or process.

Need of Software Measurement:


Software is measured to:
1. Assess the quality of the current product or process.
2. Anticipate future qualities of the product or process.
3. Enhance the quality of the product or process.
4. Regulate the state of the project in relation to budget and schedule.

Classification of Software Measurement:


There are 2 types of software measurement:
1. Direct Measurement:
In direct measurement, the product, process or thing is measured directly using a standard
scale.
2. Indirect Measurement:
In indirect measurement, the quantity or quality to be measured is measured via a related
parameter, i.e. by use of a reference.
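The distinction can be illustrated with lines of code (a direct measurement, read straight off the artefact) and defect density (an indirect measurement, derived from two related parameters):

```python
def lines_of_code(source):
    """Direct measurement: count non-blank lines straight off the artefact."""
    return sum(1 for line in source.splitlines() if line.strip())

def defect_density(defect_count, loc):
    """Indirect measurement: quality inferred from two related parameters,
    expressed as defects per thousand lines of code (KLOC)."""
    return defect_count / (loc / 1000)

# A tiny made-up source artefact to measure.
sample = "def add(a, b):\n    return a + b\n\nprint(add(2, 3))\n"
loc = lines_of_code(sample)
print(loc)                      # 3 non-blank lines
print(defect_density(6, 3000))  # 2.0 defects per KLOC
```

Here the line count is observed directly, while defect density is only meaningful by reference to both the defect log and the measured size.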


Metrics:
A metric is a measurement of the degree to which any attribute belongs to a system, product or
process. There are four functions related to software metrics:

1. Planning
2. Organizing
3. Controlling
4. Improving

Characteristics of software Metrics:


1. Quantitative:
Metrics must possess quantitative nature. It means metrics can be expressed in values.
2. Understandable:
Metric computation should be easily understood; the method of computing the metric
should be clearly defined.
3. Applicability:
Metrics should be applicable in the initial phases of development of the software.
4. Repeatable:
The metric values should be same when measured repeatedly and consistent in nature.
5. Economical:
Computation of metric should be economical.
6. Language Independent:
Metrics should not depend on any programming language.

Classification of Software Metrics:


There are 3 types of software metrics:
1. Product Metrics:
Product metrics are used to evaluate the state of the product, tracing risks and
uncovering prospective problem areas. The ability of the team to control quality is assessed.
2. Process Metrics:
Process metrics pay particular attention to enhancing the long-term process of the team
or organisation.
3. Project Metrics:
Project metrics describe the project characteristics and execution process, e.g.:
 Number of software developers
 Staffing pattern over the life cycle of the software
 Cost and schedule
 Productivity


REFERENCES:

BOOKS:

1. Software Testing, Second Edition – Ron Patton

2. Software Testing: Principles and Practices – Srinivasan Desikan and Gopalaswamy Ramesh

WEBSITES:

1. www.guru99.com

2. www.geeksforgeeks.org

3. www.wikipedia.com

4. www.tutorialspoint.com