
Software Testing - Nuts and Bolts



Prepared for: RTG
Revision: 1.1
Publication Date: 01-Jan-06
Prepared By: Satish Kumar Abbadi

Table of Contents

1. What is Software Testing?
2. Why Software Testing?
3. The 20 Common Software Problems
4. The Five Essentials for Software Testing
5. Principle of Testing
6. Testing Life Cycle
7. Objective of Software Tester
8. Software Testing 10 Rules
9. Difference between QA, QC and Testing
10. Types of Software Testing
    10.1 Acceptance Testing
    10.2 Ad Hoc Testing
    10.3 Alpha Testing
    10.4 Automated Testing
    10.5 Beta Testing
    10.6 Black Box Testing
    10.7 Compatibility Testing
    10.8 Configuration Testing
    10.9 Functional Testing
    10.10 Independent Verification and Validation (IV&V)
    10.11 Installation Testing
    10.12 Integration Testing
        10.12.1 Big Bang Testing
        10.12.2 Bottom-up Integration Testing
        10.12.3 Top-down Integration Testing
    10.13 Load Testing
    10.14 Performance Testing
    10.15 Pilot Testing
    10.16 Regression Testing
    10.17 Security Testing
    10.18 Software Testing
    10.19 Stress Testing
    10.20 System Integration Testing
    10.21 White Box Testing
    10.22 Documentation Testing
    10.23 Unit Testing (1)
    10.24 Unit Testing (2)
    10.25 Sanity (or) Smoke Testing
    10.26 Recovery Testing
    10.27 Fail-over Testing
    10.28 Exploratory Testing
    10.29 Context-driven Testing
    10.30 Comparison Testing
    10.31 Mutation Testing
    10.32 Thread Testing
    10.33 Exhaustive Testing
    10.34 Conformance Testing
11. What is a Defect and What isn't?
12. Defects Generally Fall into One of Three Categories
13. A Defect is Not
14. Types of Defects
15. Defects Life Cycle
16. Every Good Bug Report Needs Exactly Three Things
17. Defect Tracking
18. Top Ten Tips for Bug Tracking
19. Not All the Bugs You Find Will Be Fixed
20. Severity versus Priority
21. Levels of Severity
22. One-Line Instructions for Writing Effective Defect Reports
23. Defect Density
24. Testing Document Formats
25. One-Line Instructions for Testing
26. Testing Deliverables
    26.1 Change Request (Bug Report)
    26.2 Test Case
    26.3 Test Criteria
    26.4 Test Cycle
    26.5 Test Plan
    26.6 Test Status Report
    26.7 Test Tool
    26.8 Test Summary
27. Automation
    27.1 Benefits of Automation
    27.2 When Do We Need Automation for Testing
28. Application Test Tools
    28.1 Source Test Tools
    28.2 Functional Test Tools
    28.3 Performance Test Tools
29. Software Certifications
    29.1 What is Software Certification?
    29.2 Why Become Certified?
    29.3 Various Software Certifications
30. Useful Q & A
31. Conclusion
32. Useful Sites for Software Testing

1. What is Software Testing?

Software testing is a process used to help identify the correctness, completeness and quality of developed computer software. Even so, testing can never completely establish the correctness of computer software; only the process of formal verification can prove that there are no defects. And since the proofs and proof engines are themselves typically complex systems constructed by fallible humans, we aren't entitled to be entirely confident even in formal methods.

There are many approaches to software testing, but effective testing of complex
products is essentially a process of investigation, not merely a matter of creating and
following rote procedure. One definition of software testing is "the process of questioning
a product in order to evaluate it," where the "questions" are things the software tester tries
to do with the product, and the product answers with its behavior in reaction to the
probing of the software tester.

2. Why Software Testing??

Software testing is important because inadequately tested software can cause mission failure and degrade operational performance and reliability. Effective software testing helps deliver quality software products that satisfy users' requirements, needs and expectations. Testing done poorly leads to high maintenance costs and user dissatisfaction.

3. The 20 Common software problems:

1. Incorrect Calculations.
2. Incorrect data edits
3. Ineffective data edits
4. Incorrect coding / implementation of business rules.
5. Inadequate software performance.
6. Confusing or misleading data
7. Software that is difficult to use.
8. Obsolete software.
9. Inconsistent Processing
10. Difficult to maintain & understand.
11. Unreliable results or performance
12. Inadequate support of business needs or objectives.
13. No longer supported by the vendor.
14. Incorrect or inadequate interfaces with other systems.
15. Incorrect matching & merging of data.

16. Data searches that yield incorrect results.
17. Incorrect processing of data relationships.
18. Incorrect file and data handling.
19. Inadequate security controls.
20. Inability to handle production data capacities.

4. The Five Essentials for Software testing:

The following advice should help clarify your thinking about software testing and
help you improve the effectiveness and efficiency of your testing. It is helpful to think
about software testing in terms of five essential elements:

1. A test strategy that tells you what types of testing and the amount of testing you
think will work best at finding the defects that are lurking in the software

2. A testing plan of the actual testing tasks you will need to execute to carry out that
strategy

3. Test cases that have been prepared in advance in the form of detailed examples
you will use to check that the software will actually meet its requirements

4. Test data consisting of both input test data and database test data to use while you
are executing your test cases, and

5. A test environment which you will use to carry out your testing.

If any one of these five elements is missing or inadequate, your test effort will most likely
fall far short of what you could otherwise achieve.

5. Principle of Testing:
The probability of the existence of more errors in a section of a program is
proportional to the number of errors already found in that section.

6. Testing Life Cycle:


• Test Plan Preparation
• Test Case design
• Test Execution & Test Log
• Defect Tracking.
• Test Report Preparation

[Figure: V-Model]

[Figure: W-Model]

7. Objective of Software Tester:


• The goal of a software tester is to find bugs.
• The goal of a software tester is to find bugs and find them as early as possible.
• The goal of a software tester is to find bugs and find them as early as possible and
make sure they get fixed.

8. Software Testing 10 Rules:


1. Test early and test often.
2. Integrate the application development and testing life cycles. You'll get better
results and you won't have to mediate between two armed camps in your IT shop.
3. Formalize a testing methodology; you'll test everything the same way and you'll
get uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing methodology.
5. Use both static and dynamic testing.
6. Define your expected results.
7. Understand the business reason behind the application. You'll write a better
application and better testing scripts.
8. Use multiple levels and types of testing (regression, systems, integration, stress
and load).
9. Review and inspect the work; it will lower costs.
10. Don't let your programmers check their own work; they'll miss their own errors.

9. Difference between QA, QC and Testing:

Quality Assurance
• A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
• QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being defined at the proper level of detail?
• QA is process oriented.
• Quality Assurance makes sure you are doing the right things, the right way.

Quality Control
• A set of activities designed to evaluate a developed work product.
• QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements?
• QC is product oriented.
• Quality Control makes sure the results of what you've done are what you expected.

Testing
• The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)
• Testing is one example of a QC activity, but there are others, such as inspections.
• Testing therefore is product oriented and thus is in the QC domain.
• Testing for quality isn't assuring quality, it's controlling it.

10. Types of Software testing:


• Acceptance Testing
• Ad Hoc Testing
• Alpha Testing
• Automated Testing
• Beta Testing
• Black Box Testing
• Compatibility Testing
• Configuration Testing
• Functional Testing
• Independent Verification and Validation (IV&V)
• Installation Testing
• Integration Testing
o Big Bang Testing.
o Bottom up Integration Testing.
o Top down Integration Testing.
• Load Testing
• Performance Testing
• Pilot Testing
• Regression Testing
• Security Testing
• Software Testing
• Stress Testing
• System Integration Testing
• User Acceptance Testing
• White Box Testing
• Documentation Testing
• Unit Testing
• Sanity (or) Smoke Testing
• Recovery Testing
• Fail-over testing
• Exploratory testing
• Context-driven testing
• Comparison testing
• Mutation testing
• Thread Testing
• Exhaustive Testing
• Conformance Testing

10.1 Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer
acceptance.

10.2 Ad Hoc Testing


Testing without a formal test plan or outside of a test plan. With some projects this type
of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it
can often find problems that are not caught in regular testing. Sometimes, if testing
occurs very late in the development cycle, this will be the only kind of testing that can be
performed. Sometimes ad hoc testing is referred to as exploratory testing.

10.3 Alpha Testing
Testing after code is mostly complete or contains most of the functionality and prior to
users being involved. Sometimes a select group of users are involved. More often this
testing will be performed in-house or by an outside testing firm in close cooperation with
the software engineering department.

10.4 Automated Testing
Software testing that uses a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional, with knowledge of the automation tool and the software being tested, to set up the tests.

10.5 Beta Testing
Testing after the product is code complete. Betas are often widely distributed or even
distributed to the public at large in hopes that they will buy the final product when it is
released.

10.6 Black Box Testing
Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

10.7 Compatibility Testing
Testing used to determine whether other system software components such as browsers,
utilities, and competing software will conflict with the software being tested.

10.8 Configuration Testing
Testing to determine how well the product works with a broad range of
hardware/peripheral equipment configurations as well as on different operating systems
and software.

10.9 Functional Testing
Testing two or more modules together with the intent of finding defects, demonstrating
that defects are not present, verifying that the module performs its intended functions as
stated in the specification and establishing confidence that a program does what it is
supposed to do.

10.10 Independent Verification and Validation (IV&V)
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. The term is often applied to government work or to work where the government regulates the products, as in medical devices.

10.11 Installation Testing
Testing with the intent of determining if the product will install on a variety of platforms
and how easily it installs.

10.12 Integration Testing
Testing two or more modules or functions together with the intent of finding interface defects between them. This testing is sometimes completed as a part of unit or functional testing, and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (See system testing.)

10.12.1 Big Bang Testing
A type of integration testing in which the software components of an application are combined all at once into an overall system. Under this approach, every module is first unit tested in isolation; the modules are then combined all at once and tested together.

10.12.2 Bottom-up Integration Testing
Modules are added or combined from the lower levels of the hierarchy to the higher levels; i.e., the lowest-level modules are tested in isolation first, and then the next set of higher-level modules is tested with the previously tested lower modules.

10.12.3 Top-down Integration Testing
Modules are added or combined from the higher levels of the hierarchy to the lower levels; i.e., the highest-level module is tested in isolation first, and then the next set of lower-level modules is tested with the previously tested higher modules. (A code sketch of stubs and drivers follows the comparison table below.)

Bottom-up Integration
Major features:
• Allows early testing aimed at proving feasibility and practicality of particular modules.
• Modules can be integrated in various clusters as desired.
• Major emphasis is on module functionality and performance.
Advantages:
• No test stubs are needed.
• It is easier to adjust manpower needs.
• Errors in critical modules are found early.
Disadvantages:
• Test drivers are needed.
• Many modules must be integrated before a working program is available.
• Interface errors are discovered late.
Comments: At any given point, more code has been written and tested than with top-down testing. Some people feel that bottom-up is a more intuitive test philosophy.

Top-down Integration
Major features:
• The control program is tested first.
• Modules are integrated one at a time.
• Major emphasis is on interface testing.
Advantages:
• No test drivers are needed.
• The control program plus a few modules forms a basic early prototype.
• Interface errors are discovered early.
• Modular features aid debugging.
Disadvantages:
• Test stubs are needed.
• The extended early phases dictate a slow manpower buildup.
• Errors in critical modules at low levels are found late.
Comments: An early working program raises morale and helps convince management progress is being made. It is hard to maintain a pure top-down strategy in practice.
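
To make the stub/driver distinction concrete, here is a minimal, hypothetical Python sketch (the module and function names are invented for illustration). In top-down integration, an unfinished lower-level module is replaced by a stub; in bottom-up integration, a test driver exercises the real lower-level module directly.

    # Hypothetical system: report() (high level) calls fetch_total() (low level).

    # Top-down: the real fetch_total() is not ready yet, so a stub stands in.
    def fetch_total_stub():
        # Stand-in for the unfinished lower-level module; returns canned data.
        return 100

    def report(fetch_total=fetch_total_stub):
        # High-level module under test; the lower level is injected so it can be stubbed.
        return "Total: %d" % fetch_total()

    assert report() == "Total: 100"  # exercises the high-level logic in isolation

    # Bottom-up: the real lower-level module exists; a test driver exercises it.
    def fetch_total():
        return sum([60, 40])  # the real lower-level implementation

    def driver():
        # Test driver: calls the lower-level module and checks its result.
        assert fetch_total() == 100

    driver()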

10.13 Load Testing
Testing with the intent of determining how well the product handles competition for
system resources. The competition may come in the form of network traffic, CPU
utilization or memory allocation.

10.14 Performance Testing
Testing with the intent of determining how quickly a product handles a variety of events.
Automated test tools geared specifically to test and fine-tune performance are used most
often for this type of testing.

10.15 Pilot Testing
Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a Move-to-Production activity for ERP releases or a beta test for commercial products. Typically it involves many users, is conducted over a short period of time and is tightly controlled. (See beta testing.)

10.16 Regression Testing
Testing with the intent of determining if bug fixes have been successful and have not
created any new problems. Also, this type of testing is done to ensure that no degradation
of baseline functionality has occurred.

10.17 Security Testing
Testing of database and network software in order to keep company data and resources
secure from mistaken/accidental users, hackers, and other malevolent attackers.

10.18 Software Testing
The process of exercising software with the intent of ensuring that the software system
meets its requirements and user expectations and doesn't fail in an unacceptable manner.
The organization and management of individuals or groups doing this work is not
relevant. This term is often applied to commercial products such as internet applications.
(Contrast with independent verification and validation)

10.19 Stress Testing
Testing with the intent of determining how well a product performs when a load is placed
on the system resources that nears and then exceeds capacity.

10.20 System Integration Testing
Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off the shelf) system or any other system comprised of disparate parts where custom configurations and/or unique installations are the norm.

10.21 White Box Testing


Testing in which the software tester has knowledge of the inner workings, structure and
language of the software, or at least its purpose.

10.22 Documentation Testing
Testing the documents (functional spec, requirements doc) for typographical errors and incorrectly described functionality.

10.23 Unit Testing (1)


The most 'micro' scale of testing; to test particular functions or code modules. Typically
done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a
well-designed architecture with tight code; may require developing test driver modules or
test harnesses.

10.24 Unit Testing (2)
Another kind of unit testing, done by the testers, is testing the field-level validations (e.g., how a numeric field behaves when it is saved with alphabetic values). See the sketch below.
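
As an illustration of both kinds of unit testing, here is a minimal sketch using Python's standard unittest module; validate_quantity is a hypothetical field-level validation, not taken from any real application.

    import unittest

    def validate_quantity(value):
        # Hypothetical field-level rule: a quantity field must hold a numeric string.
        if not value.isdigit():
            raise ValueError("quantity must be numeric")
        return int(value)

    class QuantityFieldTests(unittest.TestCase):
        def test_numeric_value_is_accepted(self):
            self.assertEqual(validate_quantity("42"), 42)

        def test_alphabetic_value_is_rejected(self):
            # The example above: a numeric field saved with alphabetic values.
            with self.assertRaises(ValueError):
                validate_quantity("abc")

    if __name__ == "__main__":
        unittest.main()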

10.25 Sanity (or) Smoke Testing


Typically an initial testing effort to determine if a new software version is performing
well enough to accept it for a major testing effort. For example, if the new software is
crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting
databases, the software may not be in a 'sane' enough condition to warrant further testing
in its current state.
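
Because a smoke test only gates deeper testing, it should be shallow and fast. The sketch below is a hypothetical Python example (the myapp module and its start/ping functions are invented):

    import importlib

    def smoke_test():
        # 1. Does the application package even import?
        try:
            app = importlib.import_module("myapp")  # hypothetical application module
        except ImportError:
            return False  # not 'sane' enough; skip the major testing effort
        # 2. Does it start and answer a trivial request?
        return app.start() is not None and app.ping() == "pong"

    # Only if smoke_test() returns True is the build accepted for full testing.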

10.26 Recovery Testing
Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

10.27 Fail-over Testing
Typically used interchangeably with 'recovery testing'.

10.28 Exploratory Testing
Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it. Exploratory testing is simultaneous learning, test design and test execution.

10.29 Context-driven Testing
Testing driven by an understanding of the environment, culture, and intended use of the software. For example, the testing approach for life-critical medical equipment software would be completely different from that for a low-cost computer game.

10.30 Comparison Testing
Comparing software weaknesses and strengths to competing products.

10.31 Mutation Testing
A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
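
A toy Python illustration of the idea, with a single hand-made mutant (real mutation tools generate mutants automatically):

    def max_of(a, b):  # original code under test
        return a if a >= b else b

    def max_of_mutant(a, b):  # mutant: '>=' deliberately changed to '<='
        return a if a <= b else b

    tests = [((1, 2), 2), ((5, 3), 5)]

    for args, expected in tests:
        assert max_of(*args) == expected  # the test data passes on the original

    # A useful test set "kills" the mutant: at least one case now gives a wrong answer.
    killed = any(max_of_mutant(*args) != expected for args, expected in tests)
    print("mutant killed:", killed)  # True, so this test data can detect the bug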

10.32 Thread Testing
This test technique, which is often used during early integration testing, demonstrates key
functional capabilities by testing a string of units that accomplish a specific function in
the application.

10.33 Exhaustive Testing
Executing the program with all possible combinations of values for the program variables (see the sketch below).
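
Even tiny input spaces make exhaustive testing infeasible; a quick back-of-the-envelope check in Python (the numbers are illustrative):

    import itertools

    # A hypothetical program with 8 input variables, each taking one of 100 values,
    # already has 100**8 = 10,000,000,000,000,000 possible input combinations.
    print(100 ** 8)

    # Even a much smaller space grows fast: 4 variables with 10 values each.
    small_space = list(itertools.product(range(10), repeat=4))
    print(len(small_space))  # 10000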

10.34 Conformance Testing
Verifying implementation conformance to industry standards. Producing tests for the
behavior of an implementation to be sure it provides the portability, interoperability,
and/or compatibility a standard defines.

11. What is a Defect and what isn’t?

A defect is:

A deviation from product specifications, requirements or user documentation:

• Software does not conform to defined functional specifications documentation


• Error or inconsistency within product specifications, requirements or user
documentation

A deviation from customer/user expectation:

• Deviation from Windows standard operating procedure


• Causes the system to crash and/or function improperly (this includes data
loss/data discrepancies)

• Creates a cosmetic problem (spelling/grammar errors; incorrect sizing/placement of fields, etc.)

12. Defects generally fall into one of three categories:

Incorrect:

• The requirement or specification has been implemented incorrectly.

Missing:

• A specified or desired requirement is not in the built product. This can be a variance from a requirement, an indication that the requirement was not implemented, or a requirement that was defined during or after the product was built.

Extra:

• An attribute that was added to the product that was not documented in the
specifications. Even if the variance is a desirable one, it is still considered a
defect. Most often, these are linked to documentation discrepancies.
Documentation Errors are discussed later in this document.


13. A Defect is not:

• Something that, in a Tester's opinion, should be changed either for cosmetic or functional reasons.
• A question or issue that a Tester has about either documentation or functionality.

These two items fall into the Enhancement and Issue categories to be addressed later.

14. Types of Defects:

All software defects can be broadly categorized into the below mentioned types:
• Errors of commission: something wrong is done
• Errors of omission: something left out by accident
• Errors of clarity and ambiguity: different interpretations
• Errors of speed and capacity

However, the above is a broad categorization; below is a list of varied types of defects
that can be identified in different software applications:
- Conceptual bugs / Design bugs
- Coding bugs
- Integration bugs
- User Interface Errors
- Functionality
- Communication
- Command Structure
- Missing Commands
- Performance
- Output
- Error Handling Errors
- Boundary-Related Errors
- Calculation Errors
- Initial and Later States
- Control Flow Errors
- Errors in Handling Data
- Race Conditions Errors
- Load Conditions Errors
- Hardware Errors
- Source and Version Control Errors
- Documentation Errors
- Testing Errors

15. Defects Life Cycle:

The typical flow: Open → Dispatched → Fixed → retest (is the defect still reproducible?) → if yes, Reject / Reopen; if no, Closed.

16. Every good bug report needs exactly three things.


• Steps to reproduce,
• What you expected to see, and
• What you saw instead.
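
A hypothetical report in this format might read:

    Steps to reproduce: 1) Open the Save As dialog. 2) Enter a file name longer
    than 255 characters. 3) Click Save.
    Expected: an error message saying the file name is too long.
    Saw instead: the application crashed with an unhandled exception.
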
17. Defect Tracking:
Defects are recorded for four major purposes:
• To correct the defect.
• To report the status of the application.
• To gather statistics used to develop defect expectations in future applications.
• To improve the software development process.

18. Top Ten Tips for Bug Tracking


1. A good tester will always try to reduce the repro steps to the minimal steps to
reproduce; this is extremely helpful for the programmer who has to find the bug.
2. Remember that the only person who can close a bug is the person who opened it
in the first place. Anyone can resolve it, but only the person who saw the bug can
really be sure that what they saw is fixed.
3. There are many ways to resolve a bug. Resolve a bug as fixed, won't fix,
postponed, not repro, duplicate, or by design.
4. Not Repro means that nobody could ever reproduce the bug. Programmers often
use this when the bug report is missing the repro steps.
5. You'll want to keep careful track of versions. Every build of the software that you
give to testers should have a build ID number so that the poor tester doesn't have
to retest the bug on a version of the software where it wasn't even supposed to be
fixed.

6. If you're a programmer, and you're having trouble getting testers to use the bug
database, just don't accept bug reports by any other method. If your testers are
used to sending you email with bug reports, just bounce the emails back to them
with a brief message: "please put this in the bug database. I can't keep track of
emails."
7. If you're a tester, and you're having trouble getting programmers to use the bug
database, just don't tell them about bugs - put them in the database and let the
database email them.
8. If you're a programmer, and only some of your colleagues use the bug database,
just start assigning them bugs in the database. Eventually they'll get the hint.
9. If you're a manager, and nobody seems to be using the bug database that you
installed at great expense, start assigning new features to people using bugs. A
bug database is also a great "unimplemented feature" database.
10. Avoid the temptation to add new fields to the bug database. Every month or so,
somebody will come up with a great idea for a new field to put in the database.
You get all kinds of clever ideas, for example, keeping track of the file where the
bug was found; keeping track of what % of the time the bug is reproducible;
keeping track of how many times the bug occurred; keeping track of which exact
versions of which DLLs were installed on the machine where the bug happened.
It's very important not to give in to these ideas. If you do, your new bug entry
screen will end up with a thousand fields that you need to supply, and nobody will
want to input bug reports any more. For the bug database to work, everybody
needs to use it, and if entering bugs "formally" is too much work, people will go
around the bug database.
19. Not All the Bugs You Find will Be Fixed

There are several reasons why you might choose not to fix a bug:

• There’s not enough time. In every project there are always too many software
features, too few people to code and test them, and not enough room left in the
schedule to finish. If you’re working on a tax preparation program, April 15 isn’t
going to move—you must have your software ready in time.
• It’s really not a bug. Maybe you’ve heard the phrase, "It’s not a bug, it’s a
feature!" It’s not uncommon for misunderstandings, test errors, or spec changes
to result in would-be bugs being dismissed as features.
• It’s too risky to fix. Unfortunately, this is all too often true. Software can be
fragile, intertwined, and sometimes like spaghetti. You might make a bug fix that
causes other bugs to appear. Under the pressure to release a product under a tight
schedule, it might be too risky to change the software. It may be better to leave
in the known bug to avoid the risk of creating new, unknown ones.
• It’s just not worth it. This may sound harsh, but it’s reality. Bugs that would
occur infrequently or bugs that appear in little-used features may be dismissed.
Bugs that have work-arounds, ways that a user can prevent or avoid the bug, are
often not fixed. It all comes down to a business decision based on risk.


20. Severity versus Priority:


The severity of a defect should be assigned objectively by the test team based on pre-defined severity descriptions. For example, a "Severity One" defect may be defined as one that causes data corruption, a system crash, security violations, etc.
In large projects, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed. The priority assigned to a defect is usually more subjective, based upon input from users regarding which defects are most important to them and which therefore should be fixed first.

21. Levels of Severity


Blocker: This bug prevents developers from testing or developing the software.
Critical: The software crashes, hangs, or causes you to lose data.
Major: A major feature is broken.
Normal: It's a bug that should be fixed.
Minor: Minor loss of function, and there's an easy workaround.
Trivial: A cosmetic problem, such as a misspelled word or misaligned text.
Enhancement: Request for new feature or enhancement.

22. One Line Instructions for writing Effective Defect Reports


• Condense (say it clearly but briefly)
• Accurate (Is it really a defect? Could it be user error, setup problem etc.?)
• Neutralize (Just the facts, no zingers, no humor, no emotion)
• Precise (Explicitly what is the problem?)
• Isolate (What has been done to isolate the problem?)
• Generalize (What has been done to understand how general the problem is?)
• Re-create (What are the essentials in creating/triggering this problem?)
• Impact (What is the impact if the bug were to surface in customer env.?)
• Debug (What does the developer need to debug this?)
• Evidence (What will prove the existence of the error? documentation?)
23. Defect Density
Defect density is a measure of the total known defects divided by the size of the software entity being measured:

    Defect Density = Number of Known Defects / Size
The Number of Known Defects is the count of total defects identified against a particular software entity during a particular time period. Examples include:
• defects to date since the creation of the module
• defects found in a program during an inspection
• defects to date since the shipment of a release to the customer
Size is a normalizer that allows comparisons between different software entities (i.e., modules, releases, products). Size is typically counted either in Lines of Code or Function Points.
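
For example (with made-up numbers), a module of 12 KLOC against which 96 defects have been recorded since its creation has a defect density of 96 / 12 = 8 defects per KLOC.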

Uses: Defect Density is used to compare the relative number of defects in various
software components. This helps identify candidates for additional inspection or testing

or for possible re-engineering or replacement. Identifying defect-prone components allows the concentration of limited resources into areas with the highest potential return on the investment. Figure 1 illustrates a typical reporting format for Defect Density when it is being utilized in this manner.


Another use for Defect Density is to compare subsequent releases of a product to track
the impact of defect reduction and quality improvement activities. Normalizing by size
allows releases of varying size to be compared. Differences between products or product
lines can also be compared in this manner. Figure 2 illustrates a typical reporting format
for Defect Density when it is being utilized in this manner.

24. Testing Document formats

Test Script:

Req ID | Description / Action | Expected Result | Actual Result | P/F | Initials, Date & Build #

Traceability Matrix:

Req ID | Description | Test Script Name | Test Case ID

Test Coverage Matrix:

Build # | Test Script Name | Total No. of Test Cases | # Pass | # Fail | # Not Executed | Bug ID

Test Plan:

• Testing Scope (Modules included for testing)


• Testing Activities (Activities, Milestones and Deadlines)
• Test Approach (Type of testing, No. of builds to be tested, configuration used etc).
• Risks, Assumptions and Constraints
• Environment
• Deliverables
• Defect Reporting Process
• Roles and Responsibilities (Roles and Responsibilities of Lead and Testers)

Test Summary Report:

Page 21 of 40
Software Testing - Nuts and Bolts

• Testing Summary (Summary of Test approach, environment and test execution)


• Test Results (Open and Closed defects in this release)
• Test Coverage Matrix (for all the builds)
• Deviations, Concerns and Observations

Change Control Form

• Req. File Name


• Req. #
• Change Requested
• Requested By

Scenario Document

• Scenario ID
• Objective
• Description

25. One Line instructions for Testing:


1. Prepare an effective Test Plan.
2. Test the given documents (Installation Guide, Requirements or Functional Spec) for documentation errors and incorrect functionality. (This can be done by checking the functionality of the application and comparing it with the document. If they don't match, consult the developers to determine whether it is a documentation defect or a functionality defect.)
3. Convert the requirements given in the documents into use cases.
4. Convert all the functionalities into use cases, even those not given in the Functional Specification documents (this will fetch you a lot of bugs).
5. Now, convert all the use cases into test cases.
6. Execute the test cases.
7. After completing the execution of all test cases, do ad hoc testing. At this point, treat the product as if you had developed it yourself, so that you leave no chance for others to find bugs you missed.
8. Post the defects and keep an eye on the defects posted, since there is a chance of the developers rejecting your defects.
9. Get ready to defend your defects to the developers, with proper justification.
10. If it's not a defect, close it gently; otherwise re-open it.
11. If you find any defects while doing ad hoc testing, convert all of them into test cases, so that you can ensure the same defects don't appear in future builds.

12. Prepare the Test Coverage Matrix (for every build, to keep track of what you have done in that build), the Traceability Matrix and the Test Summary Report.
13. Review your work once or twice (even thrice, if not satisfied) after completing it. Don't give others an opportunity to provide review comments on your work.
14. Get ready for appreciation from your boss.
26. Testing Deliverables
At the end of the testing process, you need to deliver your outputs to your client. These are some of the deliverables:

26.1 Change Request (Bug Report)


A formal request that describes specific details about a desired change and a justification
for the change. A bug report is a report of the incident or defect.

26.2 Test Case
A software test case is a set of test inputs, execution conditions, and defined expected results developed for a particular objective (such as to exercise a particular program path or to verify compliance with a specific requirement).

26.3 Test Criteria
Software testing entrance and exit criteria identify critical conditions and measures
necessary to start and stop testing of the event, phase or cycle.

26.4 Test Cycle
A cycle of testing in which design and development stop, and the system or component is tested against the design to ensure the system or component works as intended. Upon the completion of a test cycle, incidents are reported to the development team for fixes or enhancements.

26.5 Test Plan
A software test plan is a document covering goals, methods, and strategy for testing a
specific project. A plan includes coverage and method for testing, what items will be
tested as well as what is not covered.

26.6 Test Status Report


A status report identifies testing progress or the lack thereof on a regular basis.

26.7 Test Tool
Software testing automated tools are used to create test cases, test scripts and test
procedures that run automatically. There are other test tools that help with quality
assurance test management and coordination.

26.8 Test Summary
A software test summary is an analysis of the results and processes of the completed test
phase or cycle events. This document summarizes a test event including
recommendations and testing assessment.


27. Automation:

27.1 Benefits of Automation:
Automating your test procedures will result in four primary benefits:

• The total elapsed time to complete your test suite will be reduced significantly.
• Your test data will be input to the test application consistently and accurately
every time the test is run.
• The validation of the test results will be 100% accurate each time the tests are run.
• Because test execution and validation are fully automatic, your testing labor costs will be reduced to a small fraction of the cost of running the tests manually.

The total elapsed testing time is reduced in two ways. Input of test data is much faster,
and testing can continue twenty-four hours per day.

The net result of test automation will be more effective testing in less time and for a much
lower cost.

27.2 When Do We Need Automation for Testing:

Three things to think about when making the decision to automate.

1. Automate only that which needs automating. Once you have decided as a QA
group that you need to implement test automation, it can be very tempting to want
to automate as much as possible. This inclination can be especially acute when
you have spent tens of thousands of dollars on test automation software. Perhaps
you have even hired a person who will do test automation full time.

Unfortunately, test automation is not a magic bullet for achieving great test results. Software vendors will try to convince you that you can automate any and all testing your group does; this is not true. Remember, automation does not actually do software testing; it is a tool to help your test engineers test better. The time saved through test automation can easily be reinvested in test maintenance, adding new test cases, removing obsolete test cases, and improving the test architecture.

What does this mean for your automation effort? It means you should automate
only the things that need to be automated. You can probably think of numerous
applications for test automation, but of those, select the best fit and start on that
first. Especially if this is your first effort in test automation, if you shoot for the
moon, it can very easily backfire on you in terms of projected effort and cost.
Going for the low-hanging fruit first allows you to maximize the return in the
short term, realizing important gains from just having spent resources and money
on tools and people.


Consider all the output of the development group in our company, including 1)
core API driven peer-to-peer technology, 2) a showcase Web site to demonstrate
the technology, 3) a corporate site, and 4) some internal tool development. We
decided to automate the core technology first. Not only does it lend itself to
reliable test automation (an API is almost always easier and more reliably
automated than a GUI), it achieves the best bang for the buck for the department.
Sure, it would be nice to automate the corporate Web site, so that on those small,
weekly pushes of new content we could regression test the site—but why spend
time and money automating something that takes two testers about an hour to
test? It’s not worth it.

2. Design and build for maintainability. There are occasions for writing small,
one-off scripts that either utilize record-and-playback methods or are hacked
together with time as the primary consideration rather than quality. Designing and
building your automated test suite is not one of these occasions. You are entering
into a maintainability nightmare if you approach your automation project like that. The only way to make automation work successfully is by doing up-front planning and building for the long haul.

Test automation is software development, nothing less. We wouldn't expect code written by the development team to be sloppy or undocumented. Ideally, we would like to see code reviews, participation, and reliability, even if it takes a little longer.
Our test automation projects should be held to the same high standards. Taking
even a day or two at the outset of an automation project to plan and scope the
effort will pay off in the building phase. Not only will it keep you focused as you
proceed through the effort, but it will result in a well-organized end result that is
easily accessible to other engineers or testers, and is maintainable as well.

If you do not design for maintainability, you will spend so much time trying to fix
your scripts that they will either be abandoned or rewritten completely. Either
way, the goals of your test automation have failed. Architect the suite in a logical
manner, comment and document the code that is written, and have peer reviews if
possible so that idiosyncrasies of your programming style are understood by
others (especially if they will be helping you with maintenance or writing test
cases themselves to add to the suite). Additionally, write flexible code that doesn’t
break with the slightest change to the product. Rather than constricting your code
(and validation points) down to a single point of failure, write test cases so that
instead of exiting or breaking on failure, failures are simply reported and the test
moves on. Good logging techniques will also help save time when diagnosing
problems or bugs.

3. Whether or not to automate: rules of thumb.


o GUIs are difficult to automate. Despite software vendors telling you how
easy it is to use record-and-playback functionality, graphical interfaces are
notoriously difficult to automate with any sort of maintainability.
Application interfaces tend to become static quicker than Web pages and are thus a bit easier to automate. Using Windows hooks is more reliable than the DOM interface. Key things to look for when deciding whether to automate a GUI are how static it is (the less it changes, the easier it will be to automate) and how closely the application is tied to the Windows standard libraries (custom objects can be difficult to automate).
o If possible, automate at the command-line or API level. Removing the GUI interface dramatically helps reliability in test scripts. If the application has a command-line interface, not only does it lend itself to reliable automation but it is also somewhat data driven, another green light to go forward with automation (see the sketch after this list).
o Automate those things that a human cannot do. If you suspect a memory
leak by performing a certain function but can’t seem to reproduce it in a
reasonable amount of time, automate it. Also particularly interesting to
automate are time-sensitive actions (requiring precise timing to capture a
state change, for example) and very rapid actions (e.g., loading a
component with a hundred operations a second).
o Stick with the golden rule in automation: do one thing, and do it well. A
robust test case that does a single operation will pay off more than a test
case that covers more but requires heavy maintenance. If you design your
test cases (or library functions, preferably) to do single actions, and you
write them robustly, pretty soon you can execute a series of them together
to perform those broad test cases that you would have written into a single
test case.
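
As an illustration of the command-line rule of thumb above, here is a minimal data-driven sketch in Python; the calc-cli command and its test cases are hypothetical:

    import subprocess

    # Data-driven cases: (arguments, expected output) pairs.
    cases = [
        (["2", "+", "3"], "5"),
        (["10", "-", "4"], "6"),
    ]

    failures = 0
    for args, expected in cases:
        # Drive the application's command-line interface directly; no GUI involved.
        result = subprocess.run(["calc-cli"] + args, capture_output=True, text=True)
        if result.stdout.strip() != expected:
            failures += 1
            print("FAIL:", args, "->", result.stdout.strip())  # report and move on
    print("failures:", failures)
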
28. Application Test Tools:

28.1 Source Test Tools:

• AdaTEST (IPL): Coverage, static, and dynamic software testing for Ada apps.
• AQtime (AutomatedQA): Profiling toolkit for thorough analysis of Delphi, Visual Basic, Visual C++, C++ Builder, Intel C++, GCC and Visual Fortran applications. It offers over two dozen performance and memory usage profilers and productivity tools that work in unison to give you an unrivaled level of information on the state of your software projects, from inception to delivery.
• BoundsChecker (Compuware): Run-time error detection and debugging tool for C++ developers. Supports C/C++, .NET, ASP, ASP.NET.
• Bullseye Coverage (Bullseye Testing Technology): C/C++ code coverage.
• Cantata (IPL): Coverage, static, and dynamic software testing for C/C++ apps.
• Code Coverage (DMS): DCC reinvents code coverage. Without recompiling or relinking, function, line, decision and branch coverage information is gathered. Full source code annotation is given.

28.2 Functional Test Tools:

• Astra QuickTest (Mercury Interactive): Web site functional testing.
• Rational Robot (Rational Software): Automated functional, regression, and smoke tests for e-applications.
• Silktest (Segue Software, Inc.): Object-oriented software testing for Windows applications.
• TestComplete (AutomatedQA): Automated test manager, with unmatched support for unit, functional, regression, distributed and HTTP performance testing at the project level. Designed for application developers and testers alike, TestComplete will help you to achieve thorough Quality Assurance in development from the first line of code right through delivery and maintenance, with no surprises along the way.
• WinRunner (Mercury Interactive): GUI capture/playback testing for Windows applications.
• XRunner (Mercury Interactive): GUI capture/playback testing for X applications.

28.3 Performance Test Tools:

• DB Stress (DTM): Utility for stress testing the server parts of information systems and applications, as well as DBMSs and servers themselves.
• LoadRunner (Mercury Interactive): Integrated client, server and Web load testing tool.
• SilkPerformer (Segue): Web server load testing.
• preVue-X, preVue-ASCII (Rational Software): Load and performance testing in X Window and ASCII terminal environments.
• QACenter Performance Edition (Compuware): Load testing.
• Vantage (Compuware): Integrated suite of application performance management products. Each product deals with a different aspect of the application's performance across the network infrastructure.
• WinFeedback (Beson Data): Windows scripting extension for testing, monitoring and automation purposes, like response timing, up-timing, functional testing, stress testing, health monitoring, task automation.
• XtremeLoad (US Computer Software): Software framework for load testing of distributed software systems. It provides an extensible, scalable, and easy-to-use foundation on which you can build a comprehensive and cost-effective load testing solution tailored to your product.

29. Software Certifications:


Software Certifications has become recognized worldwide as the standard for information technology quality assurance professionals. This section is designed to answer some of the most common questions concerning certification. Its intent is to assist in understanding the program, its certification designations, and the procedures involved.

29.1 What is software certification?


Certification is formal recognition of a level of proficiency in the information technology
(IT) quality assurance industry. The recipient is acknowledged as having an overall
comprehension of the disciplines and skills represented in a comprehensive body of
knowledge for a respective software discipline.

29.2Why become certified?


As the IT industry becomes more competitive, the ability for management to distinguish professional and skilled individuals in the field becomes mandatory. Certification demonstrates a level of understanding in carrying out quality assurance principles and practices. Acquiring the designation of Certified Software Quality Analyst (CSQA), Certified Software Tester (CSTE) or Certified Software Project Manager (CSPM) indicates a professional level of competence in the principles and practices of quality assurance in the IT profession. CSQAs, CSTEs and CSPMs gain recognition as software quality professionals, achieve potentially more rapid career advancement, and gain greater acceptance in the role of advisor to management.

Certification is a big step; a big decision. Certification identifies an individual as a quality assurance leader and earns the candidate the respect of colleagues and managers. One or more of these certifications is frequently a prerequisite for promotion or acquiring a new position. We hope this section helps anyone considering this step toward achieving higher career goals.

29.3Various Software Certifications:


• Certified Software Project Manager (CSPM)
• Certified Software Quality Analyst (CSQA)
• Certified Software Tester (CSTE)
• Certified Software Quality Engineer (CSQE)
30. Useful Q & A:

Q: What is 'Software Quality Assurance'?


A: Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'.

Q: What is verification? validation?


A: Verification typically involves reviews and meetings to evaluate documents, plans,
code, requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing and
takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

Q: What makes a good Software Test engineer?


A: A good test engineer has a 'test to break' attitude, an ability to take the point of view of
the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy
are useful in maintaining a cooperative relationship with developers, and an ability to
communicate with both technical (developers) and non-technical (customers,
management) people is useful. Previous software development experience can be helpful
as it provides a deeper understanding of the software development process, gives the
tester an appreciation for the developers' point of view, and reduces the learning curve in
automated test tool programming. Judgment skills are needed to assess high-risk areas of
an application on which to focus testing efforts when time is limited.

Q: What makes a good Software QA engineer?


A: The same qualities a good tester has are useful for a QA engineer. Additionally, they
must be able to understand the entire software development process and how it can fit
into the business approach and goals of the organization. Communication skills and the
ability to understand various sides of issues are important. In organizations in the early
stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

Q: What makes a good QA or Test manager?


A: A good QA, test, or QA/Test (combined) manager should:

• be familiar with the software development process


• be able to maintain enthusiasm of their team and promote a positive atmosphere,
despite what is a somewhat 'negative' process (e.g., looking for or preventing
problems)
• be able to promote teamwork to increase productivity
• be able to promote cooperation between software, test, and QA engineers
• have the diplomatic skills needed to promote improvements in QA processes
• have the ability to withstand pressures and say 'no' to other managers when
quality is insufficient or QA processes are not being adhered to
• have people judgment skills for hiring and keeping skilled personnel
• be able to communicate with technical and non-technical people, engineers,
managers, and customers.
• be able to run meetings and keep them focused

Q: What's the role of documentation in QA?


A: Critical. (Note that documentation can be electronic, not necessarily paper, may be
embedded in code comments, etc.) QA practices should be documented such that they are
repeatable. Specifications, designs, business rules, inspection reports, configurations,
code changes, test plans, test cases, bug reports, user manuals, etc. should all be
documented in some form. There should ideally be a system for easily finding and
obtaining information and determining which document contains a particular piece of information. Change management for documentation should be used if possible.

Q: What steps are needed to develop and run software tests?


A: The following are some of the steps to consider:

• Obtain requirements, functional design, and internal design specifications and other necessary documents
• Obtain budget and schedule requirements
• Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
• Determine project context, relative to the existing quality culture of the organization and business, and how it might impact testing scope, approaches, and methods
• Identify the application's higher-risk aspects, set priorities, and determine scope and limitations of tests
• Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
• Determine test environment requirements (hardware, software, communications, etc.)
• Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks, and labor requirements
• Set schedule estimates, timelines, milestones
• Determine input equivalence classes, boundary value analyses, and error classes (a minimal sketch follows this list)
• Prepare test plan document and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and testware through the life cycle
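
As a minimal illustration of equivalence classes and boundary value analysis, consider a hypothetical input field that accepts ages 18 through 65; the function name and limits below are invented for the example:

```python
# Hypothetical example: a field that accepts ages 18-65 inclusive.
# Equivalence classes: below range, in range, above range.
# Boundary values: 17/18 and 65/66, where off-by-one defects cluster.

def is_valid_age(age):
    return 18 <= age <= 65

def test_boundaries_and_classes():
    cases = [
        (17, False),  # boundary: just below the valid range
        (18, True),   # boundary: lowest valid value
        (40, True),   # representative of the valid class
        (65, True),   # boundary: highest valid value
        (66, False),  # boundary: just above the valid range
    ]
    for age, expected in cases:
        assert is_valid_age(age) == expected, f"age={age}"

if __name__ == "__main__":
    test_boundaries_and_classes()
    print("boundary tests passed")
```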

Q: What's a 'test plan'?


A: A software project test plan is a document that describes the objectives, scope,
approach, and focus of a software testing effort. The process of preparing a test plan is a
useful way to think through the efforts needed to validate the acceptability of a software
product. The completed document will help people outside the test group understand the
'why' and 'how' of product validation. It should be thorough enough to be useful but not
so thorough that no one outside the test group will read it. The following are some of the
items that might be included in a test plan, depending on the particular project:

• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test
plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and production
systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.

Q: What's a 'test case'?

A: Test Case:
• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results (a minimal executable sketch follows this answer).
• Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely thinking
through the operation of the application. For this reason, it's useful to prepare test
cases early in the development cycle if possible.
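
As a minimal sketch of such a test case rendered as executable code - the login function, credentials, and test case IDs below are hypothetical, invented purely for illustration:

```python
# Hypothetical test case TC-LOGIN-001 expressed as executable code.
# The application under test (check_login) and its credentials are invented.

def check_login(username, password):
    # Stand-in for the real application code under test.
    return username == "demo_user" and password == "s3cret"

def test_tc_login_001():
    """
    Test case ID:    TC-LOGIN-001 (hypothetical)
    Objective:       A registered user with valid credentials can log in.
    Setup:           User 'demo_user' exists with password 's3cret'.
    Input data:      username='demo_user', password='s3cret'
    Steps:           Call the login check with the input data.
    Expected result: Login succeeds (returns True).
    """
    assert check_login("demo_user", "s3cret") is True

def test_tc_login_002():
    """TC-LOGIN-002 (hypothetical): a wrong password must be rejected."""
    assert check_login("demo_user", "wrong") is False

if __name__ == "__main__":
    test_tc_login_001()
    test_tc_login_002()
    print("test cases passed")
```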

Q: What should be done after a bug is found?


A: The bug needs to be communicated and assigned to developers that can fix it. After the
problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere. If
a problem-tracking system is in place, it should encapsulate these processes. A variety of
commercial problem-tracking/management software tools are available. The following
are items to consider in the tracking process:

• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if
the developer doesn't have easy access to the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that
would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
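
A minimal sketch of how such a bug record might be represented in code; the field names are distilled from the list above as an assumption, not taken from any particular tool's schema:

```python
# Minimal sketch of a bug record; fields are distilled from the list above
# and the example values are invented, not any specific tool's schema.
from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: str                 # e.g. "BUG-1042" (hypothetical format)
    status: str = "New"         # e.g. New, Assigned, Released for Retest
    application: str = ""
    version: str = ""
    module: str = ""            # function/feature/screen where it occurred
    environment: str = ""       # OS, platform, relevant hardware
    summary: str = ""           # one-line description
    description: str = ""       # full description and reproduction steps
    severity: int = 3           # 1 (critical) .. 5 (low)
    reproducible: bool = True
    reported_by: str = ""
    assigned_to: str = ""
    fix_description: str = ""
    retest_result: str = ""
    regression_notes: str = ""

bug = BugReport(
    bug_id="BUG-1042",
    application="OrderEntry",
    version="2.3.1",
    summary="Total is wrong when discount applied twice",
    severity=2,
    reported_by="tester1",
)
print(bug.status)  # "New" until triaged and assigned
```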

Q: What is 'configuration management'?


A: Configuration management covers the processes used to control, coordinate, and
track: code, requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.

Q: What if the software is so buggy it can't really be tested at all?


A: The best bet in this situation is for the testers to go through the process of reporting
whatever bugs or blocking-type problems initially show up, with the focus being on
critical bugs. Since this type of problem can severely affect schedules, and indicates
deeper problems in the software development process (such as insufficient unit testing or
insufficient integration testing, poor design, improper build or release procedures, etc.)
managers should be notified, and provided with some documentation as evidence of the
problem.

Q: How can it be known when to stop testing?


A: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are listed below (a simple sketch of combining them follows the list):

• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with a certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends
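
A simple, hypothetical sketch of combining such factors into an exit check; the thresholds are invented examples, not recommendations:

```python
# Hypothetical exit-criteria check; the thresholds are invented examples.
def ready_to_stop(pass_rate, coverage, open_critical_bugs, bugs_per_week,
                  budget_left):
    return (pass_rate >= 0.95          # e.g. 95% of test cases passing
            and coverage >= 0.80       # e.g. 80% code/requirement coverage
            and open_critical_bugs == 0
            and bugs_per_week <= 2     # the bug find rate has flattened
            and budget_left >= 0)      # not already over budget

print(ready_to_stop(pass_rate=0.97, coverage=0.85,
                    open_critical_bugs=0, bugs_per_week=1, budget_left=500))
# True - under these example thresholds, testing could stop.
```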

Q: What if there isn't enough time for thorough testing?


A: Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk
analysis is appropriate to most software development projects. This requires judgment
skills, common sense, and experience. (If warranted, formal methods are also available.)
Considerations can include:

• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?

Q: What if the project isn't big enough to justify extensive testing?


A: Consider the impact of project errors, not the size of the project. However, if extensive
testing is still not justified, risk analysis is again needed and the same considerations as
described previously in 'What if there isn't enough time for thorough testing?' apply. The
tester might then do ad hoc testing, or write up a limited test plan based on the risk
analysis.

Q: What are 5 common problems in the software development process?

• poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
• unrealistic schedule - if too much work is crammed into too little time, problems are inevitable.
• inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
• featuritis - requests to pile on new features after development is underway; extremely common.
• miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

Q: What are 5 common solutions to software development problems?

• solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-type environments, continuous close coordination with customers/end-users is necessary.
• realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
• adequate testing - start testing early on, re-test after fixes or changes, and plan adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and built-in testing and diagnostic capabilities.
• stick to initial requirements as much as possible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. This will provide them a higher comfort level with their requirements decisions and minimize excessive changes later on.
• communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - groupware, wikis, bug-tracking and change management tools, intranet capabilities, etc.; ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users if possible to clarify expectations.

Q: What is the 'software life cycle'?


A: The life cycle begins when an application is first conceived and ends when it is no
longer in use. It includes aspects such as initial concept, requirements analysis, functional
design, internal design, documentation planning, test planning, coding, document
preparation, integration, testing, maintenance, updates, retesting, phase-out, and other
aspects.

Q: What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

• SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
• CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity
Model Integration'), developed by the SEI. It's a model of 5 levels of process
'maturity' that determine effectiveness in delivering quality software. It is geared
to large organizations such as large U.S. Defense Department contractors.
However, many of the QA processes involved are appropriate to any organization,
and if reasonably applied can be helpful. Organizations can receive CMMI ratings
by undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was Software Quality Assurance.

• ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/
• IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
• ANSI = 'American National Standards Institute', the primary industrial standards
body in the U.S.; publishes some software-related standards in conjunction with
the IEEE and ASQ (American Society for Quality).
• Other software development/IT management process assessment methods besides
CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and
CobiT.

Q: What is Test Data?

A: In addition to the steps to perform to execute your test cases, you also need to systematically come up with test data to use. This often means sets of names, addresses, product orders, or whatever other information the system uses. Since you are probably going to test query, change, and delete functions, you will most likely need a starting database of data in addition to the examples to input. Consider how many times you might need to go back to the starting point of the database to restart the testing, and how many new customer names you will need for all the testing in your plan. Test data development is usually done simultaneously with test case development (a small sketch of generating such data follows).
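
A minimal, hypothetical sketch of generating repeatable test data with the Python standard library; the field names and value pools are invented examples:

```python
# Hypothetical test-data generator; field names and value pools are invented.
import random

FIRST = ["Asha", "Ravi", "Meena", "Kiran", "Satish"]
LAST = ["Kumar", "Rao", "Patel", "Sharma", "Abbadi"]
CITY = ["Hyderabad", "Chennai", "Pune", "Delhi"]

def make_customers(n, seed=42):
    # Fixing the seed makes the data set reproducible, so testing can be
    # restarted from the same known database starting point.
    rng = random.Random(seed)
    return [
        {
            "id": i,
            "name": f"{rng.choice(FIRST)} {rng.choice(LAST)}",
            "city": rng.choice(CITY),
            "order_total": round(rng.uniform(10, 500), 2),
        }
        for i in range(1, n + 1)
    ]

if __name__ == "__main__":
    for row in make_customers(3):
        print(row)
```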

Q: What is Test Environment?

A: You will need a place to do the testing and the right equipment to use. Unless the
software is very simple, one PC will not suffice. You will need all of the components of
the system as close as possible to what it will eventually be. Test environments may be
scaled-down versions of the real thing, but all the parts need to be there for the system to
actually run. Building a test environment usually involves setting aside separate regions
on mainframe computers and/or servers, networks and PCs that can be dedicated to the
test effort and that can be reset to restart testing as often as needed. Sometimes lab rooms
of equipment are set aside, especially for performance or usability testing. A wish list of
components that will be needed is part of the test strategy, which then needs to be reality
checked as part of the test planning process. Steps to set up the environment are part of
the testing plan and need to be completed before testing begins.

Q: What is the Zero Defect Build Phase?

A: This is a period of stability where no new serious defects are discovered. The product
is very stable now and nearly ready to ship.

Q: Key Differences between Client-Server and Web applications

Fundamental differences between web applications and client-server applications open enterprises to significant risks when they move to the web. These risks become apparent when one looks at the specific ways in which the two types of applications differ.
These key differences between client-server applications and web applications are:
1. Heavy Client vs. Browser
The heavy, purpose-built clients used in client-server applications are difficult to reverse-engineer, so it is difficult for a hacker to modify input to the server. Browsers, however, are very easy to manipulate. The source of the client-side application is available to anyone accessing the web page, and easy to change. (Users can simply go to "View Source" under the View menu of IE, change the code, and then reload the page.) To improve server performance, reduce traffic on the network, and enhance the user experience, client-server applications perform a lot of data validation on the client side. Web servers try to utilize the browser's capabilities (HTML and JavaScript) to perform data validation on the client, but HTML can be changed and JavaScript can be disabled.
These differences expose web applications to attacks such as Buffer Overflow and Cross Site Scripting; a sketch of the server-side countermeasure follows.
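
Because client-side checks can always be bypassed, the standard countermeasure is to repeat every validation on the server. A minimal, hypothetical sketch in Python; the field names and limits are invented for illustration:

```python
# Hypothetical sketch: never trust client-side validation; re-validate every
# field on the server. Field names and limits here are invented examples.
def validate_order(form):
    errors = []
    qty = form.get("quantity", "")
    if not qty.isdigit() or not (1 <= int(qty) <= 100):
        errors.append("quantity must be an integer between 1 and 100")
    name = form.get("name", "")
    if not (0 < len(name) <= 80):
        errors.append("name is required and limited to 80 characters")
    # Length limits also blunt buffer-overflow-style oversized input;
    # escaping on output (not shown) is the defense against cross-site
    # scripting, and parameterized SQL queries defend against SQL injection.
    return errors

# Even if the JavaScript checks were bypassed, a tampered request is caught:
print(validate_order({"quantity": "9999", "name": "x" * 500}))
```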

2. One Program vs. Many Scripts


In client-server there is usually one program that is communicating with a client. In the
web environment there are multiple scripts, running on many web servers, with many
different entry points. In client-server, the flow of the user interaction within the
application is usually controlled by the client (certain buttons could be disabled, some
screens made unavailable). Furthermore, in client-server, users always have to log in
before gaining access to the application. The web client, however, is not designed to
maintain flow so it is very difficult to enforce behavior from the web server. This
difference leaves applications vulnerable to attacks such as Forceful Browsing.

3. State vs. No State


In the client-server environment, a 'session' is maintained between each user and a server;
once a user logs into the application, an unbroken connection feeds appropriate
information
to the user. In web environments there is no session; users request a page and then
disappear from the server's view until another page is requested. In order to keep track of
users on the web, developers must leave something in the browser that identifies the user,
and the types of information they can request. Hackers can change this information, and
therefore their identity and privileges. This difference leaves applications vulnerable to attacks such as Cookie Poisoning, Hidden Field Manipulation, parameter tampering, and SQL injection; a sketch of one mitigation follows.
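
A minimal sketch of one mitigation for cookie poisoning: signing the values stored in the browser so tampering is detectable. The secret key and cookie format are hypothetical; real web frameworks provide this built in:

```python
# Minimal sketch of tamper-evident cookies via HMAC signing; the secret key
# and cookie format are hypothetical, not any framework's actual scheme.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-secret"  # assumption: kept server-side

def sign(value: str) -> str:
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify(cookie: str):
    # Returns the value if the signature checks out, otherwise None.
    value, _, mac = cookie.rpartition("|")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

cookie = sign("user=alice;role=member")
print(verify(cookie))                                  # accepted
tampered = cookie.replace("role=member", "role=admin")
print(verify(tampered))                                # None - rejected
```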

4. Hundreds of Users vs. Millions of Users


Application servers built for the client-server environment were designed to handle hundreds of users. Web servers frequently handle millions of users. Hackers can exploit this difference to overload the servers, often exposing the raw data behind them. This leaves web applications vulnerable to enumeration attacks.

31. Conclusion
• Software testing is an art. Most of the testing methods and practices are not very different from those of 20 years ago. Testing is nowhere near maturity, although there are many tools and techniques available to use. Good testing also requires a tester's creativity, experience, and intuition, together with proper techniques.
• Testing is more than just debugging. Testing is not only used to locate defects and correct them. It is also used in validation, in the verification process, and in reliability measurement.
• Testing is expensive. Automation is a good way to cut down cost and time. Testing efficiency and effectiveness are the criteria for coverage-based testing techniques.
• Complete testing is infeasible. Complexity is the root of the problem. At some point, software testing has to be stopped and the product has to be shipped. The stopping time can be decided by the trade-off of time and budget, or by whether the reliability estimate of the software product meets the requirement.
• Testing may not be the most effective method to improve software quality. Alternative methods, such as inspection and clean-room engineering, may be even better.
32. Useful sites for Software Testing:

www.qaiindia.com

www.sqatester.com

www.softwareqatest.com

www.sqa-test.com
