
SOFTWARE TESTING

BABU
• Testing: "Executing the software with the intent of finding errors." (Glenford Myers)

• Testing: Running or operating the software under controlled conditions and evaluating its results. (IEEE)

• Test: A group of related test cases and test procedures.

• Test case: A test-related item that contains a set of inputs, execution steps, and expected outputs. It is used to check whether a particular feature of the software is working correctly or not.
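The definition above can be sketched in code. A minimal, illustrative representation of a test case as inputs plus an expected output (the `add` function and all names are hypothetical examples, not from the source):

```python
def add(a, b):
    # Feature under test (hypothetical example function).
    return a + b

# A test case: a set of inputs and the expected output.
test_case = {
    "inputs": (2, 3),
    "expected": 5,
}

def run_test_case(func, case):
    # Execute the feature with the case's inputs and compare
    # the actual result against the expected output.
    actual = func(*case["inputs"])
    return actual == case["expected"]
```

Running `run_test_case(add, test_case)` reports whether the feature behaved as expected for that input set.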
USE CASE
Use case: A use case is a sequence of actions performed by a system which together produce results required by users of the system.
Tests derived from use cases help uncover defects in process flows during actual use of the system.
Each use case has:
1) Preconditions: conditions that must be met for the use case to run successfully.
2) Postconditions: conditions that define the state in which the use case terminates.
3) Flow of events: the user actions and the system's responses to those actions.
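A use-case-derived test exercises exactly these three parts. A minimal sketch, assuming a hypothetical bank-account withdrawal use case (the `Account` class and amounts are illustrative, not from the source):

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Flow of events: user requests a withdrawal; the system
        # checks funds and updates the balance.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdraw_use_case():
    account = Account(balance=100)
    assert account.balance >= 40      # precondition: enough funds
    account.withdraw(40)              # flow of events
    assert account.balance == 60      # postcondition: balance reduced
    return True
```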
Test plan: A document that contains the scope, approach, resources, and schedule of the testing activity (a road map for the testing activity).

Test harness: The auxiliary code developed to support testing of units and components. The harness consists of drivers that call the target code and stubs that represent the modules it calls.
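The driver/stub relationship can be sketched in a few lines. Assuming a hypothetical `total_cost` unit that depends on an external price-lookup module (all names are illustrative):

```python
def pricing_stub(item_id):
    # Stub: stands in for the real pricing module the unit calls,
    # returning a canned price so the unit can be tested in isolation.
    return 10.0

def total_cost(item_ids, price_lookup):
    # Unit under test: depends on an external price lookup.
    return sum(price_lookup(i) for i in item_ids)

def driver():
    # Driver: calls the target code with test inputs and checks
    # its output against the expectation.
    result = total_cost(["a", "b", "c"], pricing_stub)
    assert result == 30.0
    return result
```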

Test bed: An environment that contains all the software and hardware needed for the testing activity.
• Error: A misunderstanding, misconception, or mistake on the part of the developer; a mistake made during development.

• Fault: An anomaly in the software produced by an error; the manifestation of an error in the software.

• Failure: The inability of the software to provide the expected service to its user; a condition that results in unwanted (unexpected) results.
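The error/fault/failure chain can be made concrete. A small illustrative sketch: a developer's mistake (error) leaves a wrong divisor in the code (fault), and the fault only surfaces as a failure for inputs that expose it (the `average` function is a hypothetical example):

```python
def average(values):
    # Fault: divides by a hard-coded 2 instead of len(values),
    # the result of a (hypothetical) developer error.
    return sum(values) / 2

def correct_average(values):
    # The intended behaviour, for comparison.
    return sum(values) / len(values)
```

For a two-element list the fault stays hidden (the result is coincidentally correct), but a three-element list triggers a failure: the output no longer matches the expected service.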
DEBUGGING
• Debugging is the process of analysing and locating bugs in the software.
• Debugging is done only when the software does not behave as expected.
QUALITY ASSURANCE
• Quality assurance: An activity that spans the entire development process, continuously monitoring and improving the process and ensuring that standards are maintained. It is an activity of prevention.

• A good testing activity should include both verification and validation testing techniques.
QUALITY SOFTWARE
• Reasonably bug-free
• Delivered on time and within budget
• Meets the requirements
• Maintainable
TESTING PRINCIPLES
(Glenford Myers)

• Testing is the process of exercising software components using a selected set of test cases with the intent of revealing defects and evaluating quality.
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.
• Test results should be inspected meticulously.
• A test case must contain the expected outputs or results.
• Test cases should be developed for both valid and invalid input conditions.
• The probability of additional defects existing in a software component is proportional to the number of defects already detected in that component.
• A group that is independent of the development group should carry out testing.
• Tests must be repeatable and reusable.
• Testing activities should be integrated into the software lifecycle.
• Testing should be planned.
• Testing is a creative and challenging task.
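The principle that test cases should cover both valid and invalid input conditions can be sketched as follows (the `parse_age` function and its range limits are hypothetical examples):

```python
def parse_age(text):
    # Function under test: converts text to an age, rejecting
    # non-numeric input and out-of-range values.
    age = int(text)                  # raises ValueError for non-numeric input
    if age < 0 or age > 150:
        raise ValueError("age out of range")
    return age

def run_case(text, expect_error):
    # A case passes when the observed behaviour matches the
    # expectation, including an expected error for invalid input.
    try:
        parse_age(text)
        return not expect_error
    except ValueError:
        return expect_error

# Valid and invalid input conditions, each with its expected behaviour.
cases = [("42", False), ("-1", True), ("abc", True)]
```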
TESTABILITY

• Testability: According to James Bach, testability means "how easily software can be tested."

The following characteristics determine the testability of software:
• Operability: The better the software operates, the better it can be tested.
• Controllability: The more we can control the software, the more it can be tested.
• Decomposability: The more the software can be divided into parts, the more comprehensively we can test it.
• Stability: The fewer the changes during development, the fewer the obstacles during testing.
• Simplicity: The less there is to test, the quicker we can test it.
• Understandability: The more knowledge we have about the software, the more easily it can be tested.
SOFTWARE QUALITY ATTRIBUTES
(McCALL & RICHARDS)

PRODUCT OPERATION:
• Correctness: Assurance that the data entered, processed, and delivered by the software is correct and complete.
• Reliability: Assurance that the software works correctly for a given period of time under given conditions.
• Integrity: The ability to withstand intentional and accidental attacks and provide enough security for the data.
• Usability: The effort required to learn and use the software.
• Efficiency: The amount of computation and resources required to complete a given task.
PRODUCT REVISION:
• Maintainability: The effort required to analyse, locate, and fix errors in the software.
• Flexibility: The effort required to make minor changes according to the convenience of the user.
• Testability: How well the software can be tested.
PRODUCT TRANSITION:
• Interoperability: The effort required to link or interconnect the software to another system.
• Reusability: The effort required to make minor changes and use the software for another purpose.
• Portability: The effort required to move the software from one hardware configuration to another.
BUILDING TEST STRATEGY

• The main objective of testing is to reduce risk in the software.

Risk:
• Risk is a condition that can result in a loss. Risk situations always exist; we cannot completely eliminate risks, but we can reduce their occurrence and the impact of the loss.
• A good test strategy must address these risks and present a process that can reduce them.
• The two components of a test strategy are test factors and test phases.

Test factor: The risks associated with testing are called test factors. These risks need to be addressed as part of the test strategy; the risk factors become the objectives of testing.

Test phase: The phase of the software development life cycle in which testing will occur.
TEST FACTORS (William E. Perry)

1) Correctness: Assurance that the data entered, processed, and output by the software is accurate and correct.
2) File integrity: Assurance that the data entered into the system will be returned unaltered.
3) Authorization: Assurance that the data is processed in accordance with the intents of management.
4) Audit trail: Retention of sufficient evidence to substantiate the accuracy, completeness, timeliness, and authorization of data.
5) Continuity of processing: The ability to sustain processing in the event a problem occurs; it confirms that the necessary procedures and backup information are maintained to recover operations.
6) Service levels: Assurance that the desired results will be available within a time frame acceptable to the user.
7) Access control: Assurance that unauthorized access is prevented.
8) Compliance: Assurance that the software is designed in accordance with company policies, procedures, and standards.
9) Reliability: Assurance that the application will perform its intended functions with required precision over an extended period of time.
10) Ease of use: The effort required to learn and use the software.
11) Maintainability: The effort required to locate and fix errors.
12) Portability: The effort required to transfer the software from one hardware configuration to another.
13) Coupling: The effort required to interconnect components or link to other software.
14) Performance (efficiency): The amount of computing resources and code required by the system to perform its standard functions.
15) Ease of operation: The effort required to operate the software in a real-time production mode.
DEVELOPING TEST STRATEGY
Four steps are involved in developing a test strategy:
1) Select and rank test factors:
The test team, along with the user, should select the test factors and rank them in order.
2) Identify the system development phases:
The development team should identify the phases of its development process. This is normally obtained from the system development methodology; these phases should be recorded in the test-phase component of the matrix.
3) Identify the business risks associated with the system under development:
The risks should be identified and agreed upon by the group, and then ranked as high, medium, or low.
4) Place the risks in the matrix:
The test team should determine the test phase in which each risk needs to be addressed and the test factor with which the risk is associated.
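The factor-by-phase matrix described in these steps can be sketched as a simple data structure. All factor names, phase names, and rankings below are illustrative examples, not taken from the source:

```python
# Test-strategy matrix: rows are ranked test factors, columns are
# development phases; each cell records the rank (high/medium/low)
# of a business risk to be addressed there, or None if no risk applies.
factors = ["correctness", "access control", "service levels"]
phases = ["requirements", "design", "code", "test"]

matrix = {factor: {phase: None for phase in phases} for factor in factors}

# Step 4: place the ranked risks in the matrix.
matrix["correctness"]["requirements"] = "high"
matrix["access control"]["design"] = "medium"
matrix["service levels"]["test"] = "low"
```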
ESTABLISHING S/W TESTING METHODOLOGY

The testing methodology incorporates both testing strategy and testing tactics. The tactics add the test plans, test criteria, testing techniques, and testing tools used in validating and verifying the software under development.

The eight considerations in establishing a testing methodology are:
• Acquire and study the test strategy.
• Determine the type of development project.
• Determine the type of software system.
• Determine the project scope.
• Identify the tactical risks.
• Determine when testing should occur.
• Build the system test plan.
• Build the unit test plan.
DETERMINING S/W TESTING TECHNIQUES
• The tactics outline the criteria that should be used to test each of the identified risks. The three testing concepts are:

Structural vs. functional testing techniques
Dynamic vs. static testing techniques
Manual vs. automated testing techniques

• Structural vs. functional: Structural testing uncovers errors that occur during coding of the program, while functional testing uncovers errors that occur in implementing the requirements or design specification.
• Dynamic vs. static: In dynamic testing, the program is executed on some test cases and the results of the program's performance are examined to check whether the program works as expected. Static testing does not involve program execution; static analysis techniques include syntax checking.
• Manual vs. automated: Manual testing involves no software tools during testing, whereas automated testing uses testing tools.
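The dynamic/static distinction can be shown side by side: a dynamic check runs the code on a test case, while a static check only parses the source. The `divide` function and the parameter-count check are illustrative stand-ins for real static analysis:

```python
import ast

SOURCE = """
def divide(a, b):
    return a / b
"""

def divide(a, b):
    return a / b

def dynamic_check():
    # Dynamic testing: execute the program on a test case and
    # examine the result.
    return divide(10, 2) == 5

def static_param_count(source):
    # Static testing: analyse the source without running it; here we
    # parse it and count the function's parameters, a toy stand-in
    # for real static checks such as syntax checking.
    tree = ast.parse(source)
    func = tree.body[0]
    return len(func.args.args)
```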
STRUCTURAL TESTING TECHNIQUES

• The objective is to ensure that the product as designed is structurally sound and will function correctly.
E.g.: STRESS TESTING,
EXECUTION TESTING,
RECOVERY TESTING,
OPERATIONAL TESTING,
COMPLIANCE TESTING,
SECURITY TESTING
FUNCTIONAL TESTING TECHNIQUES

• Ensure that system requirements and specifications are achieved.
E.g.: Requirements testing,
Regression testing,
Error-handling testing,
Intersystem testing,
Parallel testing