Software Testing
Jai Raj Yadav
Course Coverage
How Much to Test
Testing Techniques
Testing for Specialized Environments & Applications
Regression Testing
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Causal Analysis
Software Testing
Definitions
Defect
A deviation from specification or standard
Anything that causes customer dissatisfaction
Verification
All Quality Control activities throughout the life cycle that ensure that interim deliverables meet their input specification
Validation
The test phase of the life cycle which assures that the end product meets the user needs.
Testing
Act of showing that the program has bugs / does not have bugs
Debugging
Debugging is the act of attempting to determine the cause of symptoms of malfunctions detected by testing or by frenzied user complaints.
Software Testing
Definitions
Static Testing
Verification performed without executing the system's code
Code inspection
Reverse engineering
Dynamic Testing
Verification or validation performed by executing the system's code
Software Testing
Definitions
Functional Test
Tests that validate business functional requirements (what the system is supposed to do)
Structural Test
Tests that validate the system architecture (how the system was designed / implemented)
Software Testing
Testing & Debugging
Testing
Starts with known conditions, uses predefined procedures, has predictable outcomes
Should be planned, designed, scheduled
Is a demonstration of error / apparent correctness
Proves a programmer's / designer's failure
Should strive to be predictable, dull, rigid, inhuman
Much can be done without design knowledge
Can be done by an outsider
A theory of testing is available
Much of test design and execution can be automated
Debugging
Unknown initial conditions; the end cannot be predicted
Cannot be constrained
Is a deductive process
Vindicates a programmer / designer
Demands intuitive leaps, conjectures, experimentation, freedom
Impossible without design knowledge
Must be done by an insider
No theory of debugging is available
Cannot be automated
Software Testing
Statistics
Introduction of Defects
Requirements - 56%
Design - 27%
Coding - 7%
Others - 10%
The 80-20 rule of defects - 80% of all defects occur in 20% of the work
The Pesticide Paradox
First Law - Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective
Second Law - Software complexity grows to the limits of our ability to manage that complexity
Software Testing
Testing
Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements (IEEE 83a).
Testing is a process of executing a program with the intent of finding an error.
Software testing is a critical element of software quality assurance and represents the ultimate review of system specification, design and coding.
Testing is the last chance to uncover errors / defects in the software and facilitates delivery of a quality system.
Software Testing
Role of Testing
Primary Role of Testing
Determine whether the system meets its specifications
Determine whether the system meets user needs
Why Test
Developers are not infallible
Bugs in compilers, languages, databases, operating systems
Certain bugs are easier to find in testing
Don't want customers to find bugs
Post-release debugging is expensive
Good test designing is challenging & rewarding
Software Testing
What Hinders Effective Testing
What Hinders Effective Testing
Optimism - belief that the system works
Negative attitude towards effective testing
Ego - don't want to fail
Conflict between testers and developers
Testing is the least structured activity
Testing is expensive
Delivery commitments
Software Testing
Tester's Maturity
Tester's Maturity (as per Beizer)
Phase 0 - No difference between testing and debugging; test only to support debugging
Phase 1 - Purpose is to show that the software works
Phase 2 - Purpose is to show that the software does not work
Phase 3 - Purpose is not to prove anything, but to reduce the perceived risk of the software not working to an acceptable value
Phase 4 - Testing is a mental discipline resulting in low-risk software without much testing effort; testing as a state of mind
The goal is testability
Reduce the labour of testing
Testable code has fewer bugs
Software Testing
Testing Everything ? Can it be replaced ?
Is Testing Everything?
There are other approaches to creating good software. Effective methods are:
Inspection
Design style
Static analysis
Language checks
Development environment
Software Testing
Testing Principles
Testing Principles
A good test case is one that has a high probability of finding an as-yet undiscovered error
A successful test is one that uncovers an as-yet undiscovered error
All tests should be traceable to customer requirements
Tests should be planned long before testing begins
Testing should begin "in the small" and progress towards testing "in the large"
Exhaustive testing is not possible
Testing Involves
Generate test conditions and test cases
Create required test environments
Execute tests by initiating the application under test & applying the inputs specified in the test cases
Compare actual results with expected results
Result: test passed or failed
Software Testing
Software Testing Requirements
Software testing is carried out through all the phases of development. Effective testing begins with a proper plan from the user requirements stage itself. Software testability is the ease with which a computer program can be tested. The following metrics can be used to measure the testability of a product.
Operability
The better the software works, the more efficiently it can be tested.
Observability
What is seen is what is tested
Controllability
The better the software is controlled, the more the testing can be automated and optimised.
Software Testing
Software Testing Requirements
Decomposability
By controlling the scope of testing, problems can be isolated quickly, and smarter testing can be performed.
Simplicity
The less there is to test, the more quickly it can be tested
Stability
The fewer the changes, the fewer the disruptions to testing
Understandability
The more information we have, the smarter we will test.
Software Testing
Testing Strategies, Levels/Phases
Testing Strategies
Testing begins at the unit level and works outward toward the integration of the entire system Different testing techniques are appropriate at different points of S/W development cycle.
Testing Levels/Phases
Unit / Component Testing
Focuses on individual software units (programs) and groups of related units
Integration Testing
Focuses on combining units to evaluate the interaction among them
System Testing
Focuses on the complete integrated system to evaluate compliance with specified requirements (tests characteristics that are present only when the entire system is run)
Acceptance Testing
Done from the user's perspective to evaluate fitness for use
Software Testing
Software Testing Phases
[Diagram: unit and integration testing phase flow - the DDD feeds the Unit Test Plan (UTP) and the HLD feeds the Integration Test Plan (ITP); code review and unit test of the source produce the Unit Test Report (UTR); integration test cases run against the unit-tested code produce the Integration Test Report (ITR); defects are recorded in the Defect Log]
Software Testing
Software Testing Phases
[Diagram: system and acceptance testing phase flow - the SRS feeds the System Test Plan (STP) and the URD feeds the Acceptance Test Plan (ATP); system test of the integrated, code-reviewed system produces the System Test Report (STR); acceptance test cases run against the system-tested product produce the Acceptance Test Report (ATR) and the accepted product]
Software Testing
Relation of Development and Testing Phases
SSAD Projects:
S.No   Development Cycle Phase   Type of Testing to be planned
1      URD                       Acceptance Testing
2      SRS                       System Testing
3      HLD                       Integration Testing
4      DD                        Unit Testing
For object-oriented projects, the types of testing to be planned at the corresponding phases are: Acceptance Testing, System Testing, Class Integration Testing, and Class Testing.
Software Testing
Relation of Development and Testing Phases
V MODEL
[Diagram: the V model - development phases on the left (Contract, User's Software Requirements, Software Requirements Specifications, Software Development, Detailed Design, Coding) are paired with the corresponding verification phases on the right (Warranty, Acceptance Testing, System Testing, Integration Testing, Unit Testing); Quality Assurance and Project Management span the whole model]
Software Testing
Alpha and Beta Testing
Alpha Test
At the developer's site, by the customer
Developer looking over the shoulder, recording errors and usage problems
Controlled environment
Beta Test
At one / more customer sites, by the end user
Developer not present
Live situation, developer not in control
Customer records problems and reports them to the developer
Software Testing
Exception & Suspicion Testing
Exception Testing
Exception handling is often crucial for reliable operation and should be tested (a minimal test sketch follows), e.g.:
File errors (empty, missing, overflow)
I/O error handling
Arithmetic operations (overflow)
Resource allocation (memory)
Task communication, creation, purging
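A minimal sketch of such exception tests, assuming Python with pytest; read_rate_table() and the file layout are hypothetical, not part of the course material.

import pytest

def read_rate_table(path):
    # Hypothetical loader: returns the rows of a comma-separated rate table.
    with open(path) as f:
        rows = [line.split(",") for line in f if line.strip()]
    if not rows:
        raise ValueError("empty rate table")
    return rows

def test_missing_file_is_reported():
    # File error: a missing input file must surface as a clear exception.
    with pytest.raises(FileNotFoundError):
        read_rate_table("no_such_file.csv")

def test_empty_file_is_rejected(tmp_path):
    # File error: an empty input file must be detected, not silently accepted.
    empty = tmp_path / "rates.csv"
    empty.write_text("")
    with pytest.raises(ValueError):
        read_rate_table(str(empty))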
Suspicion Testing
Consider modules where:
The programmer is less experienced
The module has a high failure rate
The module failed inspection
There was a late change order
The designer / programmer feels uneasy
Apply more extensive tests, such as multi-condition / multi-branch coverage
Software Testing
Testing Exercise - 1
ABC Ltd is a software development company in the area of software consultancy & export. The company has been collecting and analysing project manpower costs based on time sheets, and the processing has been normal till now. Currently, one of the project groups is developing a new integrated computerised Insurance Membership system for an overseas client. The project team has now completed unit & integration testing and is about to start the system testing phase. The Project Manager has decided to use one month's live data as the only data in the system test. The system testers will not be making test plans and preparing test data based on a test plan, but will use the input data of the old computerised system & compare the outputs of the new system with the outputs of the old system.
Discuss the advantages & disadvantages of using live data for system testing as against using planned data.
Software Testing
Testing Methodologies
Testing methodologies are used for designing test cases and provide the developer with a systematic approach to testing. Any software product can be tested in one of two ways: Knowing the specific functions the product has been designed to perform, tests can be planned and conducted to demonstrate that each function is fully operational, and to find and correct the errors in it (Black-box Testing). Knowing the internal working of a product, tests can be conducted to ensure that the internal operation performs according to specification and all internal components are adequately exercised, and in the process, errors if any are eliminated (White-box Testing).
Software Testing
Testing Methodologies
The attributes of both black-box and white-box testing can be combined to provide an approach that validates the software interface and also selectively assures that internal structures of software are correct. The black-box and white-box testing methods are applicable across all environments, architectures and applications but unique guidelines and approaches to testing are warranted in some cases. The testing methodologies applicable to test case design in different testing phases are as given below:
White-box Testing: Unit Testing, Integration Testing
Black-box Testing: System Testing, Acceptance Testing
Software Testing
Testing Methodologies
Black-box Testing
Black-box tests are used to demonstrate that:
The software functions are operational
Input is properly accepted and output is correctly produced
The integrity of external information (e.g., data files) is maintained
Black-box testing enables the developer to derive sets of input conditions (test cases) that will fully exercise all functional requirements for a program. It attempts to find errors in the following categories:
Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Performance errors
Initialisation and termination errors
Black-box testing is applied during the later stages of the testing as it purposely disregards control structure and attention is focused on the problem domain.
Software Testing
Testing Methodologies
Black-box Testing (contd)
Test cases are to be designed to answer the following questions:
How is functional validity tested?
What categories of input will make good test cases?
Is the system particularly sensitive to certain input values?
How are the boundaries of data input isolated?
What data rates and data volumes can the system tolerate?
What effect will specific combinations of data have on system operation?
The following black-box testing methods are practically feasible and adopted depending on the applicability:
Graph-based testing methods Equivalence partitioning Boundary value analysis
Software Testing
Testing Methodologies
White-box Testing
White-box testing is designed for close examination of procedural detail: by providing test cases that exercise specific sets of conditions and/or loops, it tests logical paths through the software. White-box testing uses the control structure of the procedural design to derive test cases. The test cases derived from white-box testing methods will:
Guarantee that all independent paths within a module have been exercised at least once
Exercise all logical decisions on their true and false sides
Execute all loops at their boundaries and within their operational bounds
Exercise internal data structures to ensure their validity
White-box testing needs to be adopted as part of the unit-level testing strategy & can be adopted to a limited extent under integration testing if the situation warrants it. Basis path testing and control structure testing are some of the most widely used white-box testing techniques (a branch-coverage sketch follows).
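A minimal branch-coverage sketch, assuming Python; the discount() function is a made-up example, not the course's code. Each decision is exercised on both its true and false sides.

def discount(amount, is_member):
    if amount <= 0:
        raise ValueError("amount must be positive")   # error path
    if is_member and amount >= 1000:
        return amount * 0.90                          # decision true
    return amount                                     # decision false

def test_invalid_amount():
    try:
        discount(0, True)
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_member_large_order():
    assert discount(1000, True) == 900.0   # true side

def test_member_small_order():
    assert discount(500, True) == 500      # false side (amount)

def test_non_member():
    assert discount(2000, False) == 2000   # false side (membership)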
Software Testing
Test Strategy
Should be developed for each project
Defines the scope & general direction of testing in the project
High level, prepared together with the project plan
Should answer:
When will testing occur?
What kind of testing will occur?
What are the risks?
What are the critical success factors?
What are the testing objectives?
What are the trade-offs?
Who will conduct the testing?
How much testing will be done?
What tools, if any, will be used?
Software Testing
Test Strategy - Sample
1. Name of the Project :
Insurance Management System (IMS)
2. Brief Description :
Process new insurance memberships & revisions to existing memberships.
3. Type of Project :
New development (batch / online application)
4. Type of Software :
PL/I, COBOL, IMS DB/DC, DB2 on Mainframe
Software Testing
Test Strategy - Sample
6. Risk Factors :
Development team members are new to the environment
Limited automated testing tools
Thorough knowledge of the American insurance business is required
7. Test Objectives :
Ensure no more than 1 bug per 10 function points
Maximum 3 seconds response time
User friendliness of screens / menus / documentation
8. Trade-offs :
Objectives stated under 7 are to be achieved at any cost. Delivery on time takes precedence over other aspects.
Software Testing
Test Strategy - Sample
9.2 System Test (Plan, Test Preparation, Execution)
Responsibility - PM/PL
Resources budgeted - 6 person-months, 3 terminals
Planned start date - Dec 15, 2000
Planned end date - Jan 31, 2001
Stop criteria - Review, approval, all URD functionality covered

Responsibility - PL/ML
Resources budgeted - 3 person-months, 3 terminals
Planned start date - Nov 15, 2000
Planned end date - Dec 14, 2000
Stop criteria - Review, approval, 100 design features covered

Responsibility - ML/TM
Resources budgeted - 6 person-months, 3 terminals
Planned start date - Nov 15, 2000
Planned end date - Dec 14, 2000
Stop criteria - Review, approval, 100% code coverage proved
Software Testing
Exercise 2 (Test Strategy )
For the sample project, the estimated size is 100,000 LOC. The client has specified that the number of major (defined without ambiguity) bugs found during acceptance testing should not exceed 50. In case it exceeds 50, the system will not be accepted, with no payment & no future projects from the client. The acceptance test plan will be prepared by the client before the start of acceptance testing and will be made available to the team before acceptance starts. Requirement specifications are provided by the client, reviewed by the team and queries clarified by the client. The client is not willing to be associated with internal reviews of Unit, Integration and System testing before the acceptance phase.
Software Testing
How Much to Test
Is complete Testing Possible ?
To prove that a program is free of bugs is:
Practically impossible, and
Theoretically a mammoth exercise
Barriers
We can never be sure that the verification system itself has been implemented without any bugs
No verification system can confirm the absence of bugs
We aim at
Not absolute proof, but a suitable, convincing demonstration
Quantitative measures - statistical measures of software reliability
Judgement of "enough"
Software Testing
How Much to Test
Stop Criteria
Time runs out - a poor criterion
Requires specified test design methods - e.g. test cases must be derived using equivalence class partitioning and boundary value analysis, and testing stops when all tests execute without producing any error
A certain number of errors found - test until N errors have been found and corrected
Requires a certain coverage - e.g. testing stops when all statements and branches are executed and all test cases execute without failure
Stop when testing becomes unproductive - e.g. system testing stops when the number of errors detected per testing person-day drops under one (see the sketch below)
Use error prediction - e.g. stop testing when the number of bugs left over reduces to X per 1000 LOC
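A small illustrative calculation of the "unproductive testing" stop rule above, in Python; the daily defect counts and team size are assumptions for illustration only.

defects_found_per_day = [9, 7, 5, 3, 2, 1, 0]   # hypothetical daily counts
testers = 2                                     # person-days spent per day

for day, found in enumerate(defects_found_per_day, start=1):
    rate = found / testers                      # defects per testing person-day
    if rate < 1.0:
        print(f"Stop criterion met on day {day}: {rate:.1f} defects/person-day")
        break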
Software Testing
Exercise 3 ( How Much to Test)
For the sample project, the estimated size is 100,000 LOC. The client has specified that the number of major (defined without ambiguity) bugs found during acceptance testing should not exceed 50. In case it exceeds 50, the system will not be accepted, with no payment & no future projects from the client. The acceptance test plan will be prepared by the client before the start of acceptance testing and will be made available to the team before acceptance starts. Requirement specifications are provided by the client, reviewed by the team and queries clarified by the client. The client is not willing to be associated with internal reviews of Unit, Integration and System testing before the acceptance phase. To be on the safe side, ABC has decided to release the software for acceptance only when the number of unknown bugs remaining is estimated to be < 35.
Software Testing
Testing techniques
Testing Techniques are means by which test conditions / cases are identified. Three broad types of techniques :
1. Flow / coverage based
2. Domain based
3. Population analysis
Software Testing
Exercise-4 (Testing techniques)
Illustrate logic coverage in logic below :
IF CODE IS BLANK OR NOT IN DATABASE
    DISABLE ACCOUNT HISTORY
ELSE
    IF NO CREDIT AND AMOUNT < 100
        DISABLE ACCOUNT HISTORY
    ELSE
        ENABLE ACCOUNT HISTORY
    ENDIF
ENDIF
DISABLE SPECIAL REGION
IF REGION = NORTH EAST
    ENABLE SPECIAL REGION
ENDIF
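A hedged Python transcription of the exercise logic (the data types and the screen_state() name are assumptions); it helps reason about which inputs drive each branch for logic coverage.

def screen_state(code, in_database, has_credit, amount, region):
    if code == "" or not in_database:
        account_history = False
    elif not has_credit and amount < 100:
        account_history = False
    else:
        account_history = True
    special_region = (region == "NORTH EAST")
    return account_history, special_region

# Branch coverage needs at least: a blank/unknown code, a no-credit small
# amount, the enabling path, and both outcomes of the region check.
assert screen_state("", True, True, 500, "SOUTH") == (False, False)
assert screen_state("A1", True, False, 50, "NORTH EAST") == (False, True)
assert screen_state("A1", True, True, 500, "SOUTH") == (True, False)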
Software Testing
Testing techniques
Domain Based
Domain-based testing techniques look at inputs and outputs and derive test cases based on an analysis of the input & output domains. The steps are:
Identify all inputs
Identify all outputs
Identify the equivalence classes for each input
Identify the equivalence classes for each output
Equivalence partitioning: ensure that test cases exercise each input & output equivalence class at least once
Boundary value analysis: for each input equivalence class, ensure that test cases include one interior point, all extreme points, and all epsilon points (values just beyond each boundary)
Decision table: identify combinations of inputs that result in different output values
A sketch of equivalence partitioning and boundary value analysis follows.
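A minimal sketch, assuming Python and a single numeric input with valid range 0-100 (the accepts() validator is hypothetical), of how equivalence classes and boundary values translate into concrete test inputs.

LOW, HIGH = 0, 100

equivalence_classes = {
    "below range (invalid)": -5,
    "inside range (valid)": 50,
    "above range (invalid)": 150,
}

# Boundary value analysis: each extreme point plus values just inside/outside.
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def accepts(marks):
    # Hypothetical validator for the input domain.
    return LOW <= marks <= HIGH

for label, value in equivalence_classes.items():
    print(label, value, "->", accepts(value))
for value in boundary_values:
    print("boundary", value, "->", accepts(value))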
Software Testing
Exercise - 5 (Testing techniques)
A subroutine takes in subject marks, student type and returns the grades for the students in that subject.
Inputs : Marks (0-100), Student type (First Time / Repeat)
Outputs : Grades - F, D, C, B, A
Rules :
Student type First Time : 0-40 = F, 41-50 = D, 51-60 = C, 61-70 = B, 71-100 = A
Student type Repeat : 0-50 = F, 51-60 = D, 61-70 = C, 71-80 = B, 81-100 = A
1. Identify test cases that satisfy equivalence partitioning rules
2. Identify test cases based on boundary value analysis
3. Identify test cases that satisfy decision tables
Software Testing
Testing techniques Population analysis
Used to identify the kinds and frequency of data in the production environment
Uses existing production data, which could be in various forms: production files and tables, manual files, files from other similar systems
Can show the tester things that are not clear in, different from, or additional to the specifications: codes/values used in production, unusual data conditions, type and frequency of transactions, types of incorrect transactions
The production environment also gives a model for the flow of test scripts
A profiling sketch follows.
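A hedged sketch of population analysis over an existing production extract, assuming Python; the file name and column position are placeholders, not real project artefacts.

from collections import Counter
import csv

def profile_column(path, column):
    # Tally the kinds and frequency of values seen in production data.
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) > column:
                counts[row[column]] += 1
    return counts

# Example usage: which transaction codes actually occur, and how often?
# print(profile_column("production_extract.csv", 2).most_common(10))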
Software Testing
Unit Testing
Unit testing focuses verification effort on the smallest unit of software code. Five aspects are tested under unit testing:
Module interface
Local data structures
Boundary conditions
Independent paths
Error-handling paths
A unit can be compiled / assembled / linked / loaded and put under test. Unit testing is done to show that the unit does not satisfy its functional specification and / or that its implemented structure does not match the intended design structure. A minimal unit-test sketch follows.
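A minimal unit-test sketch using Python's unittest; the premium() function is hypothetical and stands in for the unit under test, touching its interface, boundary conditions and an error-handling path.

import unittest

def premium(age, smoker):
    if not 18 <= age <= 70:
        raise ValueError("age outside insurable range")   # error-handling path
    return 100.0 * (1.5 if smoker else 1.0)

class PremiumTest(unittest.TestCase):
    def test_interface_and_typical_value(self):
        self.assertEqual(premium(30, smoker=False), 100.0)

    def test_boundary_conditions(self):
        self.assertEqual(premium(18, False), 100.0)   # lower bound
        self.assertEqual(premium(70, True), 150.0)    # upper bound

    def test_error_handling_path(self):
        with self.assertRaises(ValueError):
            premium(17, False)

if __name__ == "__main__":
    unittest.main()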
Test Coverage
Links each unit being tested with the associated unit test report.
Unit ID
Submitted to Whom
The agencies to whom they should be submitted. For example, Project Team, Customer etc.
Metrics Collection
The Unit Test Report containing the details of the test runs and defect analysis shall be prepared by each tester. The data from all the Unit Test Reports shall be consolidated in the Defect Log. Goto : UTP Template
Software Testing
Integration Testing
Integration testing is a systematic technique for verifying the software structure while conducting tests to uncover errors associated with interfacing. Integration testing is done to show that even though the components were individually satisfactory, the combination is incorrect / inconsistent. Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths.
Software Testing
Integration Testing
Bottom-Up Integration Testing
Bottom-up integration testing begins construction and testing with atomic modules. Since modules are integrated from the bottom up, the processing required for modules subordinate to a given level is always available and the need for stubs is eliminated (a driver sketch follows).
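A hedged sketch of a bottom-up integration driver, assuming Python; the two low-level units are illustrative, and the driver stands in for the not-yet-written caller.

def parse_member_record(line):
    # Low-level unit, already unit tested.
    member_id, name, plan = line.strip().split("|")
    return {"id": member_id, "name": name, "plan": plan}

def compute_dues(plan):
    # Another low-level unit.
    return {"BASIC": 100, "GOLD": 250}[plan]

def driver():
    # Temporary driver exercising the two units in combination.
    record = parse_member_record("M001|A. Kumar|GOLD")
    assert compute_dues(record["plan"]) == 250
    print("bottom-up combination OK")

if __name__ == "__main__":
    driver()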
Regression Testing
Testing after changes have been made to ensure that no unwanted changes were introduced. Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
Software Testing
Integration Testing
Types of integration Problems
Configuration / version control
I/O format and protocol mismatches
Conflicting data views / usage
Data integrity violated
Wrong call order / parameters
Missing / overlapping functions
Resource problems (memory etc.)
ITP Template
Test Completion Criteria
Test Features (as per URD) and associated Test Data
Source of Test Data (supplied by Customer or generated by Satyam)
Deliverables
Test Reports
Defect Summary
Severity details
(Test deliverables: documents and reports that must be provided at the end of testing, e.g., Test Plan, Test Case(s), Input and Output Data (screen dumps, reports etc.) and any other items used)
Submitted to Whom
the agencies to whom they should be submitted. For example, Project Team, Quality group, Customer etc.
Boundary-value Analysis
Error Guessing
Interface Integrity
Functional Validity
Informal Content
Software Testing
System Testing
System testing verifies that all elements mesh properly and the overall system function/performance is achieved. The aim is to verify all system elements and validate conformance against the SRS. System testing is aimed at revealing bugs that cannot be attributed to a single component as such, but to inconsistencies between components or to the planned interactions between components. Concerns: issues and behaviours that can only be exposed by testing the entire integrated system (e.g. performance, security, recovery etc.). System testing is categorised into the following 15 types. The type(s) of testing to be performed is chosen depending on the customer / system requirements.
Software Testing
System Testing
Compatibility / Conversion Testing
Where the software developed is a plug-in into an existing system, the compatibility of the developed software with the existing system has to be tested.
Configuration Testing
Configuration testing includes either or both of the following:
Testing the software with the different possible hardware configurations
Testing each possible configuration of the software
Documentation Testing
Documentation testing is concerned with the accuracy of the user documentation
Facility Testing
Facility Testing is the determination of whether each facility / functionality mentioned in SRS is actually implemented.
Software Testing
System Testing
Installability Testing
Certain software systems have complicated procedures for installing the system, e.g. the system generation (sysgen) process on IBM mainframes. Installability testing verifies that these installation procedures work correctly.
Performance Testing
Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all phases of testing.
Software Testing
System Testing
Procedure Testing
When the software forms a part of a large and not completely automated system, the interfaces of the developed software with the other components in the larger system shall be tested.
Recovery Testing
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
Reliability Testing
Test any specific reliability factors that are stated explicitly in the SRS.
Security Testing
Verify that protection mechanisms built into a system will protect it from improper penetration. Design test cases that try to penetrate into the system using all possible mechanisms.
Software Testing
System Testing
Serviceability Testing
Serviceability testing covers the serviceability or maintainability characteristics of the software. The requirements stated in the SRS may include:
Service aids to be provided with the system, e.g., storage-dump programs, diagnostic programs
The mean time to debug an apparent problem
The maintenance procedures for the system
The quality of the internal-logic documentation
Storage Testing
Storage testing is to ensure that the primary & secondary storage requirements are within the specified bounds.
Stress Testing
Stress tests are designed to confront programs with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency or volume.
Software Testing
System Testing
Usability Testing
Attempts to uncover usability problems in the software involving human factors
Volume Testing
Volume testing is to ensure that the software can handle the volume of data specified in the SRS, and that it does not crash with heavy volumes of data but gives an appropriate message and/or makes a clean exit.
Scope of Testing
System Test Plan identifies which portions of the system are covered under the test and the features being tested.
Test Coverage
Test Features (as per SRS)
Links each requirement in the SRS (identified by means of a brief description) with specific test cases.
Software Testing
Acceptance Testing
Acceptance tests are conducted at the development site or at the customer site, depending upon the requirements and mutually agreed principles, to enable the customer to validate all the requirements as per the Requirements Document (URD).
Aims at uncovering implied requirements
Aims at evaluating fitness for use
Should not find bugs which should have been found in earlier phases
Acceptance Test Plan for the Project
Acceptance Test Report
One for each Feature to be Tested (as mentioned in URD)
Test Deliverables
Deliverables: Test Reports, Defect Summary, Severity details
Submitted to Whom
Goto : ATP Template
Software Testing
Regression Testing
Re-running of test cases:
After a fix / change / enhancement
Re-verify all functions of each build of the application
Ensure no new problem is introduced by the fix / change (ripple effect)
Gives more complete testing and confidence in the quality and stability of the build
Bugs remain undetected without a full regression test
Repeating the full test catches inadvertently introduced bugs
Regression Testing
Testing after changes have been made to ensure that no unwanted changes were introduced. Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
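A small sketch of an automated regression pack, assuming Python; grade() is a stand-in that re-implements the grading rules of Exercise 5, and the saved cases are illustrative (typically, tests that once found bugs).

REGRESSION_PACK = [
    # (marks, student_type, expected_grade)
    (40, "FIRST_TIME", "F"),
    (41, "FIRST_TIME", "D"),
    (80, "REPEAT", "B"),
]

def grade(marks, student_type):
    bands = {"FIRST_TIME": [(40, "F"), (50, "D"), (60, "C"), (70, "B"), (100, "A")],
             "REPEAT":     [(50, "F"), (60, "D"), (70, "C"), (80, "B"), (100, "A")]}
    for upper, letter in bands[student_type]:
        if marks <= upper:
            return letter

def run_regression():
    # Re-execute every saved case after a change and compare with expected results.
    failures = [(m, t, e, grade(m, t))
                for m, t, e in REGRESSION_PACK if grade(m, t) != e]
    print("regression failures:", failures or "none")

if __name__ == "__main__":
    run_regression()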
Software Testing
Regression Testing
Regression testing recommendations:
During Unit Testing - re-run unit tests after every change
During Integration Testing - re-run unit tests of all changed programs and re-run full integration tests
During System Testing - re-run unit tests of all changed programs, and re-run full integration tests & system tests
During Acceptance Testing - re-run unit tests of all changed programs, and re-run full integration tests, system tests and acceptance tests
Software Testing
Regression Testing
During Post Acceptance Testing
Create special regression test packs at the unit, integration and system test levels
Consider the time for re-running regression tests
To build regression packs (post acceptance), use: earlier (unit, integration) tests, tests that found bugs, and additional tests for bugs found in production
Software Testing
Testing for Specialized Environments & Applications (WEB) Usability Testing for Web Applications
The intended audience will determine the "usability" testing needs of the Web site. Take into account the current state of the Web and Web culture.
Software Testing
Testing for Specialized Environments & Applications (WEB) HTML validation for Web applications
Testing will be determined by the intended audience, the type of browser(s) expected to be used, and whether the site delivers pages based on browser type or targets a common denominator. A crude tag-balance check is sketched below.
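A crude tag-balance sketch using only Python's standard library; it is not a real HTML validator (which a project would normally use), and the sample page is made up.

from html.parser import HTMLParser

VOID_TAGS = {"br", "img", "hr", "meta", "link", "input"}

class TagBalanceChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.problems = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if not self.stack or self.stack.pop() != tag:
            self.problems.append(f"unexpected </{tag}>")

checker = TagBalanceChecker()
checker.feed("<html><body><p>Hello<br></body></html>")   # missing </p>
checker.problems += [f"unclosed <{t}>" for t in checker.stack]
print(checker.problems)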
Software Testing
Testing for Specialized Environments & Applications (WEB) Performance Testing for Web Applications
Performance testing has five manageable phases:
Architecture validation
Performance benchmarking
Performance regression
Performance tuning and acceptance
Continuous performance monitoring, necessary to control performance and manage growth
Software Testing
Testing for Specialized Environments & Applications (WEB) Performance Testing for Web Applications
Performance testing must be an integral part of designing, building, and maintaining Web applications. Automated testing tools play a critical role in measuring, predicting, and controlling application performance. The final goal for any Web application set for high-volume use is for users to consistently have
Continuous availability
Consistent response times, even during peak usage times
A minimal response-time probe is sketched below.
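An illustrative response-time probe, assuming Python; the URL and concurrency level are placeholders, and real load tests would use dedicated tools rather than this sketch.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"   # hypothetical endpoint
CONCURRENT_USERS = 20

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        times = sorted(pool.map(timed_request, range(CONCURRENT_USERS)))
    print(f"median {times[len(times) // 2]:.3f}s  worst {times[-1]:.3f}s")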
Software Testing
Testing for Specialized Environments & Applications (WEB) Performance Testing for Web Applications (contd)
1. Architecture Validation:
Use performance tests to validate that the software architecture will deliver the necessary performance & will scale linearly as hardware is added to accommodate future growth. Testing the scalability of the architecture must happen at the start of development, after a prototype of the application has been created and is able to generate transactions that touch all tiers of the application. A walkthrough of the logic of the front end of the application must be done to ensure that it is easy to navigate. There should be provision for enough memory and processors for these layers of the architecture to handle the expected user volumes. If the application requires custom components, the scalability of the core technologies involved should be tested.
Software Testing
Testing for Specialized Environments & Applications (WEB) Performance Testing for Web Applications (contd)
2. Performance Benchmarking:
Consider and define the types of performance tests needed. Create, run, and analyze the first round of performance tests against the initial version of the application to provide a set of performance metrics commonly referred to as performance benchmarks. While considering the types of performance tests to create, start by defining all of the possible types of transactions the user audience can initiate.
Software Testing
Testing for Specialized Environments & Applications (WEB) Performance Testing for Web Applications (contd)
4. Performance Tuning and Acceptance:
This is the final load-testing phase prior to the acceptance testing in which all of the different pieces of the Web application are integrated, and performance is validated. Different transaction scenarios of real-life usage are emulated, and the scalability of the final configuration is validated. Ensure that server-clustering techniques provide adequate failover and bandwidth to handle the amount of concurrent users planned.
Software Testing
Testing for Specialized Environments & Applications (GUI - Windows)
Testing GUIs
GUIs consist of a number of reusable components that simplify system development, but their inherent complexity makes carefully designed test cases a necessity.
For Windows
Will the window open properly based on related typed or menu-based commands?
Can it be resized, moved and scrolled?
Is all data content contained within the window properly addressable with mouse and keyboard?
Are all functions that relate to the window available when needed?
Are all functions that relate to the window operational?
Are all relevant pull-down menus, tool bars, dialogue boxes, buttons, and other controls available and properly displayed for the window?
When multiple windows are displayed, is the name of each window properly represented?
Software Testing
Testing for Specialized Environments & Applications (GUI - Windows)
For Windows (contd)
Is the active window properly highlighted?
If multitasking is used, are all windows updated at the appropriate time?
Do multiple or incorrect mouse picks within the window cause unexpected side effects?
Are audio and/or colour prompts within the window, or arising as a consequence of window operations, presented according to the specification?
Does the window properly close?
Software Testing
Testing for Specialized Environments & Applications (GUI - Pull-Down Menus & Mouse Operations)
For pull-down menus and mouse operations:
Is the appropriate menu bar displayed in the appropriate context?
Does the application menu bar display system-related features?
Do pull-down operations work properly?
Do breakaway menus, palettes, and tool bars work properly?
Are all menu functions and pull-down sub-functions properly listed?
Are all menu functions properly addressable by mouse?
Is the text typeface, size and format correct?
Is it possible to invoke each menu function using its alternative text-based command?
Are menu functions highlighted based on the context of current operations within a window?
Does each menu function perform as required?
Are the names of menu functions self-explanatory?
Software Testing
Testing for Specialized Environments & Applications (GUI - Pull-Down Menus & Mouse Operations)
For pull-down menus and mouse operations (contd)
Is context-sensitive help available for each menu item?
Are mouse operations properly recognised throughout the interactive context?
If multiple clicks are required, are they properly recognised in context?
Do the cursor, processing indicator, and mouse pointer properly change as different operations are invoked?
Software Testing
Testing for Specialized Environments & Applications (GUI - Data Entry)
For data entry:
Is alphanumeric data entry properly echoed and input to the system?
Do graphical modes of data entry work properly?
Is invalid data properly recognised?
Are data input messages intelligible?
Software Testing
Testing for Specialized Environments & Applications - Testing of Client/Server Architectures
Testing of client/server software occurs at three different levels:
Individual client applications are tested in "disconnected mode" - the operation of the server and the underlying network are not considered
The client software and associated server applications are tested in concert, but network operations are not explicitly exercised
The complete C/S architecture, including network operation and performance, is tested
Derive an operation profile from client/server usage scenarios to indicate the multi-user inter-operation with the C/S system and to provide the pattern of usage, so that the tests can be planned and executed. Initially a single client application is tested; integration of the clients, server, and the network is tested next. Finally, the entire system is tested.
Software Testing
Information to be Recorded in each Testing Phase
Each test run shall be recorded in the corresponding Test Report, covering:
Observed behaviour
Severity of each defect
Cause of each defect
The Defect Summary shall be recorded in the Defect Analysis Log
Software Testing
CAUSAL ANALYSIS AND DEFECT PREVENTION
Let us recollect
Software Testing
Defect Types
Syntax: General syntax problems, spelling, punctuation, general format problems, did not properly delimit the operation
Assignment: Value(s) assigned incorrectly or not assigned at all
Interface: Communication problems between modules, components, device drivers, objects, functions via macros, call statements, control blocks, parameter lists
Checking: Errors caused by missing or incorrect validation of parameters or data in conditional statements
Data: Errors caused in data definitions and handling; includes errors related to structure and content
Function: General logic, pointers, strings, off-by-one, incrementing, recursion, computation, algorithmic errors
System: Necessary serialization of a shared resource was missing, the wrong resource was serialized, or the wrong serialization technique was employed
Documentation: Errors associated with non-conformance to standards, comments, messages, manuals etc.
Software Testing
Cause Categories
COMM: Communications failure, e.g., incorrect information, lost information, failure to communicate a change in information
OVER: Oversight. For example, something is overlooked or forgotten, or all cases and conditions are not considered
EDUC: Education, i.e., lack of knowledge or understanding about something. For example, not understanding a new functionality, not understanding some aspect of the existing system, inadequate knowledge of the programming language, programming standards, compiler, tool, etc.
TRAN: Transcription error. Transcribed or copied something incorrectly; knew what to do but just made a mistake
PROC: Inadequacy of the process. The process definition did not cover something and that led to a defect in the work product
Software Testing
Causal Analysis
Assigning Defect Types to defects
Assigning Cause Categories to defects
Causal analysis
Causal analysis meetings
Preventive actions
Evaluation of the effectiveness of defect preventive actions in projects
A tallying sketch follows.
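A small sketch of the tallying step in causal analysis, assuming Python; the defect-log rows are made-up examples, not project data.

from collections import Counter

defect_log = [
    {"id": "D-101", "type": "Interface", "cause": "COMM"},
    {"id": "D-102", "type": "Function",  "cause": "EDUC"},
    {"id": "D-103", "type": "Checking",  "cause": "OVER"},
    {"id": "D-104", "type": "Data",      "cause": "OVER"},
]

# The most frequent cause categories are the first candidates for preventive action.
for cause, count in Counter(d["cause"] for d in defect_log).most_common():
    print(cause, count)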
Software Testing
Test Tools
Why Test Tools ?
To do tedious work
Free the tester for creative work
Do things manually impossible
Eliminate errors in manual testing
Ability to test more
Especially useful for regression testing
But tools cannot repair the effects of poor management
Typical toolkit - broad classification:
Defect management
Test coverage
Test execution & regression testing
Test planning
Test data generation
Software Testing
Test Tools
File Comparators
Compare the contents of two files and highlight differences
Used to compare output files of test runs before and after changes, and expected vs. actual results
Examples: IMSCOMPR (IMS files) from IBM, CA-Verify (IBM) from Computer Associates, Comparex from Sterling Software, EXPEDITOR (IBM) from Compuware, BTS (IBM), IDT (IBM)
Tools for keystroke capture in test scripts and script playback
A minimal file-comparison sketch follows.
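A minimal file-comparator sketch using Python's standard library (difflib); the file names are placeholders for expected vs. actual test output.

import difflib

def compare_files(expected_path, actual_path):
    with open(expected_path) as e, open(actual_path) as a:
        diff = list(difflib.unified_diff(
            e.readlines(), a.readlines(),
            fromfile=expected_path, tofile=actual_path))
    return diff   # an empty list means the outputs match

# Example: print any differences between the baseline and the current run.
# for line in compare_files("expected_report.txt", "actual_report.txt"):
#     print(line, end="")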
Software Testing
Test Tools
Defect Management Tools
Software Testing
Any Questions / clarifications ?
Thank You