SOFTWARE TESTING

By
Mr. Hemanth Kumar
Asst. Professor, Dept. of MCA, RNSIT

Basics of Software Testing
 Basic Definitions
Error:
 A good synonym is mistake.
 When people make mistakes while coding, we call these mistakes bugs.
 Errors tend to propagate: a requirements error may be magnified during design and amplified still further during coding.

Basics of Software Testing (contd.)
Fault:
 A fault is the result of an error.
 It is the representation of an error, where representation is the mode of expression, such as narrative text, data flow diagrams, hierarchy charts, source code, and so on.
 Defect is a good synonym for fault.

Basics of Software Testing (contd.)
 Types of Fault:
 A fault of omission occurs when we fail to enter correct information (e.g., a detail present in the specification is left out of the implementation).
 A fault of commission occurs when we enter something into a representation that is incorrect (e.g., an incorrect statement is implemented).
 Omission/commission, in short:
 Omission - neglecting to include some entity in a module.
 Commission - an incorrect executable statement.
 A short code sketch below illustrates both kinds of fault.

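To make the distinction concrete, here is a minimal illustrative sketch (not from the slides; the averaging spec and function names are hypothetical):

# Hypothetical spec: return the average of a list of numbers,
# raising ValueError when the list is empty.

def average_commission(xs):
    # Fault of commission: an incorrect executable statement is present.
    # The divisor is off by one, so every computed average is wrong.
    return sum(xs) / (len(xs) + 1)   # should be len(xs)

def average_omission(xs):
    # Fault of omission: the specified empty-list check was never written,
    # so an empty input raises ZeroDivisionError instead of ValueError.
    return sum(xs) / len(xs)
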
Basics of Software Testing (contd.)
Failure:
 A failure occurs when a fault executes.
 Two subtleties arise here. The first is that failures only occur in an executable representation, which is usually taken to be source code or loaded object code.
 The second is that this definition relates failures only to faults of commission.
 The sketch below shows a fault that executes on every input but produces a visible failure on only one of them.

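A minimal sketch (hypothetical function, not from the slides) of how a fault relates to a failure:

def classify(n):
    # Hypothetical spec: return "negative", "zero", or "positive".
    if n <= 0:            # fault of commission: should be n < 0
        return "negative"
    if n == 0:
        return "zero"     # unreachable because of the fault above
    return "positive"

print(classify(5))    # "positive": the faulty condition executes, no failure is visible
print(classify(-3))   # "negative": correct despite the fault
print(classify(0))    # "negative": the fault now yields a failure (expected "zero")
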
Basics of Software Testing (contd.)
 Incident:
 When a failure occurs, it may or may not be readily apparent to the user (or customer or tester).
 An incident is the symptom associated with a failure that alerts the user to the occurrence of a failure.
 Test:
 Testing is concerned with errors, faults, failures, and incidents.
 A test is the act of exercising software with test cases. A test has two distinct goals: to find failures or to demonstrate correct execution.
 Test Case:
 A test case has an identity and is associated with a program behavior. A test case also has a set of inputs and a list of expected outputs.

Basics of Software Testing (contd.)
What is testing?
 Software testing is a process used to identify the correctness, completeness and quality of developed computer software.
 It is the process of devising a set of inputs to a given piece of software that will cause the software to exercise some portion of its code.
 The developer of the software can then check that the results produced by the software are in accord with his or her expectations.

A Testing Life Cycle

[Figure: the testing life cycle]

Test cases

• The aim of testing is to determine a set of test cases.
• A test case should contain the following information:
• Inputs: pre-conditions (circumstances that hold prior to test case execution) and actual inputs identified by some testing method.
• Expected outputs: post-conditions and actual outputs.
• Typical test case information (contents of a test case), as captured in the sketch below:
 Title, author, date
 Test case ID
 Purpose
 Pre-conditions
 Inputs
 Expected outputs
 Observed outputs
 Pass/Fail
 Comments

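One way to record these fields is sketched below (a minimal, hypothetical structure; the slides do not prescribe any particular format):

from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str
    title: str
    author: str
    date: str
    purpose: str
    preconditions: list
    inputs: dict
    expected_outputs: dict
    observed_outputs: dict = field(default_factory=dict)
    result: str = ""       # "Pass" or "Fail", filled in after execution
    comments: str = ""

# Example: a test case for the max-of-two-integers requirement seen later.
tc = TestCase(
    test_id="TC-001",
    title="Maximum of two integers",
    author="tester",
    date="2021-09-01",
    purpose="Verify that max returns the larger of two integers",
    preconditions=["program accepts two integers as input"],
    inputs={"x": 13, "y": 19},
    expected_outputs={"result": 19},
)
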
Software quality

 Software quality is a multidimensional quantity and is measurable.

 Quality attributes
 There exist several measures of software quality. These can be divided into static and dynamic quality attributes.
 Static quality attributes include structured, maintainable and testable code, as well as the availability of correct and complete documentation.
 Dynamic quality attributes include software reliability, correctness, completeness, consistency, usability and performance.
– Reliability refers to the probability of failure-free operation.
– Correctness refers to the correct operation of an application and is always with reference to some artifact.
– Completeness refers to the availability of all features listed in the requirements, or in the user manual. Incomplete software is one that does not fully implement all required features.
– Consistency refers to adherence to a common set of conventions and assumptions. For example, all buttons in the user interface might follow a common color-coding convention.
– Usability refers to the ease with which an application can be used.
– Performance refers to the time the application takes to perform a requested task.

Requirements, Behavior and Correctness

 Any software is designed in response to requirements of the environment.

 Example: two requirements are given below, and each can lead to two different programs.
 Requirement 1: It is required to write a program that inputs two integers and outputs the maximum of these.
 Requirement 2: It is required to write a program that inputs a sequence of integers and outputs the sorted version of this sequence.
 Consider Requirement 1: the expected output of max when the input integers are 13 and 19 is easily determined to be 19. Suppose now that the tester wants to know whether the two integers are to be entered on the same line followed by a carriage return, or on two separate lines with a carriage return typed after each number. The requirement as stated fails to answer this question. This requirement illustrates incompleteness; the sketch below shows two programs that both satisfy it.
 Consider Requirement 2: it is not clear whether the input sequence is to be sorted in ascending or in descending order. The behavior of a sort program written to satisfy this requirement will depend on the decision taken by the programmer. This is called ambiguity.

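A minimal sketch of the incompleteness in Requirement 1 (both readings below are hypothetical, yet each satisfies the requirement as stated):

# Interpretation A: both integers on one line, separated by whitespace.
def max_one_line():
    x, y = map(int, input().split())
    print(max(x, y))

# Interpretation B: one integer per line, each followed by a carriage return.
def max_two_lines():
    x = int(input())
    y = int(input())
    print(max(x, y))

# A tester who types "13 19" on a single line will see Interpretation A
# succeed and Interpretation B fail, even though both "meet" Requirement 1.
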
Correctness versus Reliability

 Correctness
 Establishing correctness would require testing a program on all elements in the input domain. In most cases this is impossible to accomplish. Thus, correctness is established via mathematical proofs of programs.
 While correctness attempts to establish that the program is error free, testing attempts to find if there are any errors in it.
 Testing, debugging and the error-removal processes together increase our confidence in the correct functioning of the program under test.

Correctness versus Reliability

 Reliability
 The reliability of a program P is the probability of its successful execution on a randomly selected element from its input domain.
 Example: consider a program P whose input domain is {(0, 0), (-1, 1), (1, -1)}. If it is known that P fails on exactly one of the three possible input pairs, then the frequency with which P will function correctly is 2/3, as the sketch below confirms.

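A small sketch of this computation (P and the oracle are hypothetical; they are chosen so that P fails on exactly one input pair, as on the slide):

def P(a, b):
    # Hypothetical program under test, with a fault triggered only when a < 0.
    if a < 0:
        return a - b      # faulty branch
    return a + b

def oracle(a, b):
    return a + b          # assumed correct behavior

domain = [(0, 0), (-1, 1), (1, -1)]
passed = sum(P(a, b) == oracle(a, b) for (a, b) in domain)
print(passed / len(domain))   # 0.666...: reliability is 2/3
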
Testing and debugging

 Testing is the process of determining if a program behaves as expected.
 When testing reveals an error, the process used to determine the cause of this error and to remove it is known as debugging.

Test-generation Strategies

 One of the key tasks in software testing is the generation of test cases.
 Any form of test-case generation uses a source document or requirements document.
 In most test methods the source document resides in the mind of the tester, who generates tests based on knowledge of the requirements.

Test-generation Strategies

[Figure: test-generation strategies; the top row shows techniques applied directly to the requirements, followed by model-based and code-based strategies]

Test-generation Strategies
 The top row captures techniques that are applied directly to the requirements.
 Another set of strategies falls under the category of model-based test generation.
 These strategies require that a subset of the requirements be modeled using a formal notation. Such a model is also known as a specification of the subset of requirements.
 Languages based on predicate logic, as well as algebraic languages, are also used to express subsets of requirements.
 Finite-state machines, statecharts, timed I/O automata and Petri nets are some of the formal notations for modeling various subsets of the requirements; a small FSM-based sketch follows below.
 Code-based test-generation strategies generate tests directly from the code.

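As one concrete instance of model-based generation, here is a minimal sketch (the FSM, its inputs, and the transition-coverage criterion are all hypothetical) that derives one test per transition of a finite-state machine:

from collections import deque

# Hypothetical model: a two-state switch, written as (state, input) -> next state.
FSM = {
    ("OFF", "press"): "ON",
    ("ON", "press"): "OFF",
    ("ON", "timeout"): "OFF",
}

def transition_tests(fsm, start):
    # One test (an input sequence) per transition: drive the machine from
    # `start` to the transition's source state, then fire the transition's
    # input. Assumes every state is reachable from `start`.
    def path_to(target):
        # Breadth-first search for a shortest input sequence start -> target.
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, inputs = queue.popleft()
            if state == target:
                return inputs
            for (src, inp), nxt in fsm.items():
                if src == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, inputs + [inp]))
        return None

    return [path_to(src) + [inp] for (src, inp) in fsm]

print(transition_tests(FSM, "OFF"))
# [['press'], ['press', 'press'], ['press', 'timeout']]
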
Test Metrics
• Test metrics are indicators of the efficiency, effectiveness, quality and performance of software testing techniques. These metrics allow professionals to collect data about various testing procedures and devise ways to make them more efficient.

Benefits of Software Testing Metrics:

i) Save time
ii) Improve quality
iii) Measure progress

 The term metric refers to a standard of measurement. In software testing, a variety of metrics exist.

Test Metrics

 There are four general core areas that assist in the design of metrics: schedule, quality, resources and size.
 Schedule-related metrics: measure actual completion times of various activities and compare these with estimated times to completion.
 Quality-related metrics: measure the quality of a product or a process.
 Resource-related metrics: measure items such as cost, manpower and test execution.
 Size-related metrics: measure the size of various objects such as the source code and the number of tests in a test suite.

Static and dynamic metrics

 Static metrics are those computed without having to execute the product.
 Example: the number of testable entities in an application.
 Dynamic metrics require code execution. Example: the number of testable entities actually covered by a test suite; the sketch below contrasts the two.

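A tiny sketch of the two examples above (the functions and test suite are hypothetical, and counting functions as the "testable entities" is just one possible choice):

import sys

# Hypothetical application code: three testable entities (functions).
def calc_add(a, b): return a + b
def calc_sub(a, b): return a - b
def calc_mul(a, b): return a * b

testable = [calc_add, calc_sub, calc_mul]   # static metric: computed without running anything

covered = set()
def tracer(frame, event, arg):
    # Record the name of every function that actually executes.
    if event == "call":
        covered.add(frame.f_code.co_name)
    return tracer

def test_suite():
    assert calc_add(2, 3) == 5
    assert calc_sub(5, 3) == 2
    # calc_mul is never exercised by this suite

sys.settrace(tracer)
test_suite()
sys.settrace(None)

print("static:", len(testable))                                  # 3 testable entities
print("dynamic:", sum(f.__name__ in covered for f in testable))  # 2 covered by the suite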