
VOL 6 NO 1

2008
INSIGHTS INTO
SOFTWARE VALIDATION
SETLabs Briefings
Advisory Board
Aveejeet Palit
Principal Solutions Manager,
System Integration Practice
Gaurav Rastogi
Associate Vice President,
Global Sales Effectiveness
George Eby Mathew
Senior Principal
Infosys Australia
Kochikar V P PhD
Associate Vice President,
Education & Research Unit
Raj Joshi
Managing Director,
Infosys Consulting Inc.
Rajiv Narvekar PhD
Manager,
R&D Strategy
Software Engineering &
Technology Labs
Ranganath M
Vice President & Head,
Domain Competency Group
Srinivas Uppaluri
Vice President & Global Head,
Marketing
Subu Goparaju
Vice President & Head,
Software Engineering &
Technology Labs
Testing Times: Coping
with Test Challenges
A recent Gartner report mentions that a major overhaul in software
architecture is in the offing, what with major upgrades being made to key
business packages. This is likely to aggravate already existing concerns
about software quality. Defective software is always a huge drain on an
enterprise's resources. Testing for software quality has thus grown from a
software lifecycle stage to a full-blown industry.
In this issue we touch upon different facets of software validation
and the challenges that they bring to an enterprise.
Business intelligence decisions are largely dependent on data
contained in data warehouse systems. But how does one appropriate
flawless gains from such systems that manage a gargantuan amount of data?
We have a paper that dwells on this idea and proposes an approach to
testing data warehouse systems.
While validation supposedly removes all glitches inherent in faulty
software, the big question is whether an enterprise should engage its own
experts or some third party in the validation process. One of our papers
sheds light on this question by proposing a decision matrix that can help
enterprises assess the need for independent validation.
Nip the problem in the bud, assert most validation experts. One of our
researchers takes a step back and advocates preventing the problem
from arising in the first place. In a simple and straightforward way, he
accentuates the need for testing the unwritten code through a proper
deployment of design metrics.
Manual interventions in the testing process are still prevalent.
Automated testing has, of late, taken centre stage though. We have three
papers around this concept. While two of them draw from live cases,
the third one proposes an automation strategy, especially for package
implementation scenarios.
SOA is becoming a ubiquitous architectural norm. While its
advantages are many, it is not devoid of challenges, especially in a
testing scenario. Testing SOA applications can be a technology nightmare.
In yet another interesting paper, we take you through the testing challenges
that an SOA environment poses to test planners.
In the spotlight is a paper on accessibility testing. Authored by a
differently-abled researcher, the paper sets about explaining the need for
accessibility in a world where differently-abled customers have become
a major force to reckon with.
I am sure you will enjoy the collection of papers that we have
strung together for you.
Wishing you a Merry Christmas and a very Happy New Year ahead!
Praveen B. Malla PhD
praveen_malla@infosys.com
Editor
SETLabs Briefings
VOL 6 NO 1
2008
Tutorial: The STV Approach to Redefining Data Warehouse System Testing 3
By Girish Vishwanathan
Do you want to maximize your gains from your existing data warehouse? In this paper,
the author takes you through the STV methodology to test your data warehouse system
and appropriate DW benefits.
Insight: Self-Assessment of Need for Independent Validation 9
By Manikandan M and Anuradha Goyal
Enterprise package implementations are cost intensive. Independent validations, if under-
taken, add to these costs. The authors probe into the need for independent validation and
propose a decision matrix to self-assess such need.
Viewpoint: Earlier = Cheaper: Test Code Before it is Written 17
By Vinoth Michael Pro
Prevention is better than cure. The author draws from his consulting experience and pro-
pounds the need for pre-testing the unwritten code. This, he contends, can be done through
proper modeling of design metrics.
Framework: Test Automation Strategy for ERP/CRM Business Scenarios 23
By Ashwin Anjankar and Sriram Sridharan
Multiple business processes need to be tested during package implementations. Test
Automation not only shortens delivery cycles but also considerably reduces the overall cost
of quality.
Methodology: Model-Based Automated Test Case Generation 33
By Ravi Gorthi PhD and Kailash K P Chanduka PhD
Test planning is still largely subject to manual interventions. This is both effort intensive
and error prone. The authors propose a methodology that automates test case genera-
tion, which they contend overcomes the shortcomings of the widely employed path
analysis techniques.
Perspective: Testing Challenges in SOA Application 41
By Sandeep K Singhdeo, Lokesh Chawla and Satish K Balasubramanian
SOA is characterized by flexibility and interoperability. These features, the authors opine, can
make effective testing a technology nightmare. The paper explains the different testing
challenges that one has to confront in an SOA environment.
Case Study: How to Select Regression Tests to Validate Applications upon Deployment of Upgrades? 47
By Anjaneyulu Pasala PhD, Yannick Lew and Ravi Gorthi PhD
The conventional model of executing the entire test suite to validate applications upon
deployment of upgrades is expensive and time consuming. The authors propose a regres-
sion test strategy that captures and analyzes the runtime behavior of the application to
address validation related issues.
Spotlight: Reach Out to the Differently-Abled Users: Be Accessible 55
By Shrirang Sahasrabudhe
Differently-abled customers have, of late, become a potent force and are now more in-
volved in making buy-decisions than ever. The author draws from his personal experience
and expounds the need for being accessible to this new class of customers.
Index 69
Behavioral slicing is a very effective method to discover
ambiguities and incompleteness of Software Requirements
Specifications early in the lifecycle of software development.
Ravi Prakash Gorthi
Principal Researcher
Artificial Intelligence and Test Automation Labs
SETLabs, Infosys Technologies Limited
Ensuring product quality does not begin with testing
and tracking defects. Such a narrow view can eliminate
opportunities for early problem detection during the software
development lifecycle.
Vinoth Michael Pro
Program Manager, Quality Group
Infosys Technologies Limited
SETLabs Briefings
VOL 6 NO 1
2008
The STV Approach to Redefining
Data Warehouse System Testing
By Girish Viswanathan
Strategize, Test and Validate your
DW System to maximize gains from your existing
Data Warehouse
The data warehouse (DW) is a subject-
oriented, integrated, time variant
(temporal) and non-volatile collection of data,
used to support the strategic decision making
process for the enterprise or business intelligence,
as defined by Claudia Imhoff [1].
Data warehouse system testing gains
its importance due to the fact that data from the
DW is extensively used for business planning and
reporting. The data is analyzed from different
dimensions and is interpreted differently across
business groups. This paper presents a holistic
approach to testing a data warehouse system
which comprises of core components like Source
Systems, ETL, Data Warehouse, Data Marts and
OLAP. A methodology on effectively testing
a data warehouse system is discussed taking
both functional and non-functional testing into
consideration. This is a three-phase approach
where a test strategy is formulated after
analyzing the system, testing is carried out in a staged
manner and, in the post-test phase, test results are
analyzed.
WHY DATA WAREHOUSE SYSTEM TESTING
IS DIFFERENT
As data warehousing relies fully on the accuracy
of data, end-to-end testing is necessary to verify
and validate the correctness of data across various
systems. At each step of the way (extracting from
each source, merging with data from other sources,
translating and processing, loading into the
warehouse, and querying/retrieving/reporting
from the warehouse), expected inputs and
outputs are to be verified. A data warehouse has
billions of records which are transformed as per
business requirements that are highly complex.
There is a high possibility that a minute change
in the value of one field can affect thousands of
records. The factors that make data warehouse
testing stand apart from usual application
testing are discussed below.
Mode of Testing: Unlike normal testing, data
warehouse testing is divided into two parts viz.,
back-end testing, where the source system's data
is compared to the end-result data in the loaded
area (system-triggered through scripts), and
front-end testing, where the user checks the
data by comparing his/her information system
with the data displayed by end-user tools
like OLAP.
Batch v/s Online Gratification: While a transaction
system provides instant, or at least overnight,
gratification to users when they enter a
transaction, which is processed either online or as an
overnight batch, in the case of data warehouses, since
most of the action happens at the back-end, users have
to trace individual transactions.
Volume of Test Data: The test data in a transaction
system, even in the extreme scenario, is a very small
sample of the overall production data. A data
warehouse typically has a large volume of test data, as one
tries to fill up the maximum possible permutations
and combinations of dimensions and facts. The DW
can be tested with a gradual scaling of volume
of data, from limited test data, expanding to
limited production data and finally ending with
full production data.
Possible Scenarios/Test Cases: While the number
of test scenarios in transaction systems is limited,
for a DW system they could be unlimited, given
that the core objective of the DW is to allow all
possible views of data. Therefore, one has to be
creative in designing the test scenarios to gain a
high level of confidence.
Programming for Testing Challenge: DW data
quality testing and extraction, transformation
and loading (ETL) testing are done by running separate
stand-alone scripts, say, a comparison of aggregates
between the pre-transformation and post-
transformation data sets; the pilferage such comparisons
surface can offer a big challenge to the tester, as sketched below.
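A minimal, hypothetical sketch of such a pre/post-transformation aggregate comparison follows. The SQLite files, table names and column names are illustrative assumptions and not part of any specific ETL tool or of the STV methodology itself.

```python
# Hypothetical sketch: compare pre- and post-transformation aggregates to detect pilferage.
# The database files, table names and columns are illustrative assumptions.
import sqlite3

def aggregate(db_path, table, amount_col):
    """Return (row_count, total_amount) for a table."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {table}"
        ).fetchone()
    return row

src_count, src_total = aggregate("staging.db", "src_sales", "amount")
tgt_count, tgt_total = aggregate("warehouse.db", "fact_sales", "sale_amount")

# Any mismatch indicates records or value lost ("pilferage") during transformation/loading.
if src_count != tgt_count or abs(src_total - tgt_total) > 0.01:
    print(f"Pilferage detected: rows {src_count}->{tgt_count}, "
          f"amount {src_total:.2f}->{tgt_total:.2f}")
else:
    print("Aggregates match: no pilferage detected.")
```

In practice the same comparison would be repeated for every key measure and dimension count, with the results written to the test results repository described later in the paper.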
Validations: In data warehouse testing there are
two types of validations that need to be performed,
viz., functional validation, where one checks the
functionality of the application through standard
(date, number) validation and business (lookup,
data integrity, data transformation) validation,
and non-functional validation, where checks are
made for load testing, performance testing,
volume testing and end-to-end testing. A small
sketch of such functional checks appears below.
End-to-end testing is a must to verify the authenticity of data
residing in the data warehouse system
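As a small illustration of the standard and business validations just described, the sketch below checks a single record; the field names, date format and region lookup set are assumptions made purely for the example.

```python
# A minimal, hypothetical sketch of standard (date, number) and business (lookup) validations.
from datetime import datetime

VALID_REGIONS = {"NA", "EMEA", "APAC"}          # business lookup (assumed reference data)

def validate_row(row):
    """Return a list of validation failures for one warehouse record."""
    errors = []
    # Standard date validation: load_date must be a valid ISO date
    try:
        datetime.strptime(row["load_date"], "%Y-%m-%d")
    except (KeyError, ValueError):
        errors.append("invalid load_date")
    # Standard number validation: amount must be a non-negative number
    try:
        if float(row["amount"]) < 0:
            errors.append("negative amount")
    except (KeyError, ValueError):
        errors.append("non-numeric amount")
    # Business lookup validation: region code must exist in reference data
    if row.get("region") not in VALID_REGIONS:
        errors.append("unknown region code")
    return errors

print(validate_row({"load_date": "2007-03-31", "amount": "125.40", "region": "EMEA"}))  # []
print(validate_row({"load_date": "31/03/2007", "amount": "-5", "region": "XX"}))
```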
Apart from the factors mentioned above,
some other challenges that drive the importance
of DW testing are:
Environmental set-up, like setting up
of source systems, staging area, DWH and ETL
mappings
System constraints, like test DB size,
performance degradation and automation
issues.
THE STV (STRATEGIZE, TEST, VALIDATE)
APPROACH TO DW SYSTEM TESTING
Testing of the data warehouse and the evolving
needs of the business will both drive continuous
change to the data warehouse schema and the
data being loaded into it, especially after each area
of the data warehouse moves into production.
Therefore, the testing processes need to be defined
within a bigger framework with clearly defined
testing and production steps. The framework
should have a well-defined business process
enabled by pervasive metadata and impact
analysis and also have strong alignment between
development, operations and business.
The STV testing framework comprises
three different phases: Strategize, Testing the data
warehouse application, and Validate.
Phase 1 - Strategize: In this phase, a primary
assessment of the application is done, a test strategy
is formulated, a test results repository is designed and
a test plan is arrived at. Creative design of the
test scenarios is required to gain a high level of
confidence among end users. The test strategy
mainly focuses on functional automation and
performance automation. Functional automation
checks for data integrity at various stages of DW
testing, while performance automation focuses on
sub-system response under load.
Phase 2 - Testing the DW application: This is a very
crucial phase that involves testing extraction and
transformation logic, data loading and analytical
processing. The test results arrived at during all
the stages are stored in a dedicated test results
repository that acts as a single source of truth for
all the functional testing that is carried out.
Extraction Testing: Verifies whether correct data
is being extracted from the source, as efficiently as
possible. Data extraction from multiple sources is
verified in the extraction phase. This is achieved
by checking whether
The required fields are being extracted
Extraction logic for each source system is
working
Extraction scripts are granted security
access to the source systems
The extraction audit log is updated and time
stamping materialized
Source-to-extraction-destination movement is
working in terms of completeness and
accuracy
Extraction is getting completed within the
expected window.
The STV testing framework is characterized by a robust
alignment between development, operations and business
Data validation is done for each of
the test cases formulated and the results, along
with relevant metadata, are stored in the test results
repository.
Transformation Testing: This testing is carried out
in stages to validate the transformation logic of
each source separately. This is often facilitated by
a debugger in the ETL tool which enables profiling
and what-if testing as the job executes.
Transformation testing checks whether
Transformation scripts are transforming the
data as per the expected logic
The one-time transformation for historical
snapshots is working
Detailed and aggregated data sets are
created and are matching
The transformation audit log is updated and time
stamping materialized
There is no pilferage of data during
transformation process
Transformation is getting completed
within the expected window.
The test results along with relevant
metadata are stored in the test results repository.
Loading Testing: Involves testing the data loaded
into each target field to ensure correct execution
of business rules such as valid-value constraints,
cleansing and calculations. Valid-value and
cleansing operations can be tested with a data
profiling tool and can be enforced via domain
constraints in the target schema or in the ETL
processes. For business transformations, a controlled
source test data set is recommended to facilitate
checking expected against actual results.
Loading testing verifies whether
There is no pilferage during the loading process
Transformations during loading process
are working
Data sets in staging to loading destination
are working
One time historical snap-shots are working
Both incremental and total refresh are
working
Data is extracted from the data warehouse
and updated in the down-stream systems/
data marts
Loading is happening within the expected
window.
Independent data marts, i.e., data
collected directly from sources and not derived
from the data warehouse, if any, are also tested in this
phase. Data quality reports that are generated are
stored in the test results repository.
End User Browsing and OLAP Testing: Involves
testing the accuracy of data from the business
point of view and evaluating various business
scenarios with the end users; generating test cases
as per the business scenarios; scripting; execution
and test configuration. This is done in an iterative
manner to cover all the major test scenarios.
Storing multiple functional test results in a single repository
can facilitate easy retrieval during the validation phase
OLAP testing is carried out by checking whether:
Ad hoc query creation is as per the
expected functionalities
Ad hoc query output response time is as
expected
Business views and dashboards are
displaying data as expected
The scheduled reports are accurate and
complete
The scheduled reports and other batch
operations like view refresh are happening
within the expected window
Analysis Functions and Data Analysis
are working
There is no pilferage of data between the
source systems and the views.
The test results of all business scenarios
are stored in the test results repository.
Error Handling: Here, the logic involves testing
of alerts and rollback activity that take place on
certain error states, such as referential integrity
violations or lost connectivity, at all phases. Error
handling logic can be defined once and reused for
the entire DW process to simplify testing.
Phase 3 - Validate: In this phase the focus is on
data integrity and application monitoring. Test
results repository is browsed for testing data
integrity. Application monitoring is done at the
client, server and network levels.
Data Integrity Check: Test results from the
various activities carried out during the test execution
phase are retrieved from the test results repository
and are evaluated by the end users for data
integrity and accuracy. In addition to this, end-
to-end integrated testing may be carried out to
check the accuracy of data from the source system to
the downstream system.
Application Monitoring
Client monitoring provides a picture of
the client experience from the perspective
of application performance
Server monitoring gives a check on
the internal temperature of data
warehouse servers, including CPU/
memory usage, log file analysis, database/
SQL statistics, service/process availability,
and connection up/down status
Network analysis provides a diagnostic
window on how network communications
contribute to application performance.
ROI
High user participation and carefully developed test
cases and test data during the strategy phase of the
STV methodology provide flexibility in the time
and effort spent during testing, leading to less
rework due to missing or rejected data and hence
increased confidence amongst users. Focus on
testing with highly developed processes in the
test execution phase helps identify defects early
on, providing complete coverage to testing,
resulting in significant cost reduction and higher
DW quality. A robust and well-defined testing
framework improves test efficiency and provides
strong assurance of correct data for decision making
when the application moves into production.
Validating the test results is crucial to ensure an error-free
data warehousing application
CONCLUSION
Data warehousing applications are subject
to changing requirements. A requirement tested
at the start of the test cycle may have changed by
the time the testing cycle ends. Changing source
systems, ad hoc reporting and the volume of test
data all make DW testing stand apart from
testing other online applications.
Performing DW testing correctly and
thoroughly takes time and effort. The
best test cases come from detailed requirement
documents created during the planning and
analysis phases. Each and every requirement that
is documented must be measurable, achievable
and testable. Each requirement must be assigned
to someone who wants it, who will implement it
and who will test and measure it.
The STV methodology blends the
conventional approach to testing with the flavor
of DW auditing integrated from the initial stages,
thereby reducing costs to the organization and
also providing a high quality data warehousing
application.
REFERENCES
1. Claudia Imhoff, The Corporate Information
Factory, DM Review, December 1999
2. W H Inmon, Building the Data Warehouse:
Getting Started, 2000. Available at http://
www.inmoncif.com/registration/
whitepapers/ttbuild-1.pdf. Accessed
March 2007
3. Nate Skinner and Paul Down, Avoiding
Performance Downtime and System Outages
Through Goals-based Testing, August 2003.
Available on www.embarcadero.com/
resources/tech_papers/Goals_Based_
Testing.pdf. Accessed during March 2007
4. Rob Levy, What is Data Warehouse
Testing? D M Review, December 2003.
Available at http://www.datawarehouse.
com/article/?articleid=3275. Accessed
during March 2007.
SETLabs Briefings
VOL 6 NO 1
2008
Self-Assessment of Need for
Independent Validation
By Manikandan M and Anuradha Goyal
A litmus test to understand when independent
validation adds most value during enterprise
package implementations
Over the years, ERPs and other enterprise-
wide applications like CRM, BI, SCM and
the like have evolved as the backbone of the
organizations they support. While consolidating
all the functional requirements of the enterprise in
a few packaged applications has its advantages,
it also shifts most business dependencies onto
these applications. There is enough evidence
that with the increase in the number of users, the
complexity of the solution increases, which
directly impacts the investments that go into
the solution. This necessitates that enterprises
need to ensure the robustness of the solution
being implemented. The solution should take
care of all the business requirements of the
organization, while ensuring flexibility for
anticipated enhancements and scalability for
future growth.
The dilemma that most IT decision
makers face is how to figure out if independent
validation (IV) is required in their enterprise
application scenario. Investing additional funds
in independent validation to secure already
invested funds in enterprise applications can be
a tough decision to make. Ascertaining the need
for independent validation of a solution being
implemented can be tricky. There are multiple
factors that need to be considered to arrive at
this decision. In this paper, we identify some of
them which help in deciding IV strategy.
WHY INDEPENDENT VALIDATION?
Independent Validation is the process of using
an independent team to concurrently test the
solution being delivered. These teams work
independently of the implementation team and can
include representatives from the organization's business
users as well.
Complex as they are, large package
implementations come with inherent risks and
complexities. Organizations that have realized
this believe that independent validation is a
necessary step in safeguarding their investments
in these implementations.
The key reasons for this trend are:
Package Implementations are Getting Riskier:
A Gartner study states that the industry average
success rate for ERP implementations is only
30% when measured by on-budget and on-schedule
parameters [1]. However, if an implementation's
failure to meet stated functionality is also measured,
the success rate will most likely be lower than this.
Packaged applications today pack in
more functionality than they did a few years
ago. These applications, being enterprise-wide in
nature, have to co-exist with multiple applications
and have to interact with them. This means that
the channels through which existing applications and
the new packaged application(s) interact need to
be tested extensively and thoroughly. Apart from
this, organizations wanting to leverage features
such as localization and language support will
have to face increased complexity in working with
configurations and customizations. This drives up
the complexity of the implementation, resulting
in dramatically higher risk.
There are other sources for complexity
as well. Most large organizations today have
more than one ERP installed. According to an
Aberdeen benchmarking report of 2006, 71%
of large organizations have two or more ERPs
and 25% of them have at least four installed
across the enterprise [2]. Unlike the latest
ERPs, their predecessors were not designed
with interoperability in mind. This limitation
of yesteryear's ERPs can prove a nightmare
for companies planning to integrate internal
functions and consolidate reporting.
Apart from technical challenges,
people and change related challenges are
substantial. A large package implementation can
mean an extensive change in the fundamental
process structure of the enterprise. It is usually
recommended that the processes of the
organization be modified to reflect the processes
supported by the package. Such changes are
usually large scale and involve high risks that
organizations know only too well.
Package Implementations Involve High
Investments: Large implementations demand
more investments. An ERP TCO survey conducted
by Aberdeen has shown that the average cost of
software and services spent on SAP was as high
as $5995 per business user [2]. The costs for other
ERPs surveyed were also in this range. This
means that for a 5000-member organization the cost
of implementing SAP ERP can be as high as
$29.98 million [2].
Independent validation helps firms take a call on
whether huge investments in risk intensive large package
implementations can be committed
Some of the factors that influence the cost
of implementation include hardware, software
licenses, professional services for implementation,
customization, integration, data conversion,
testing and training.
The huge benefits of large packaged
applications on the one hand and the high risks
of their implementation on the other have led
organizations to look for ways of mitigating
risks, and independent validation of packages is fast
emerging as a popular alternative.
IS INDEPENDENT VALIDATION ALWAYS
REQUIRED?
While Independent Validation adds to the
predictability of the implementation, it also
increases the costs and can potentially increase
the implementation time. Independent validation
teams consist of business users and testing
experts who specialize in certain areas of testing
and have good understanding of testing tools
and processes. Employing/ hiring people with
specialized skill sets adds to the cost. Furthermore,
co-ordination between the independent testing
team and the implementation team can run into
rough weather if adequate and apt processes
are not set-up.
Given the fact that cost saving is a key
business metric, it is important to ensure that the
return on independent validation is justified for
a given implementation. And more importantly,
organizations must be able to justify this return
before spending on the project. A framework
which can quickly determine if an independent
validation can add value to the implementation,
can be a good tool in the kit for key decision
makers.
Later in this paper, we propose a
framework which is aimed at helping IT decision
makers understand when an independent
validation adds maximum value and also to assess
whether it is needed for their implementation.
Risks and investments are to be identified,
assessed and categorized on a Low-Medium-
High scale and further plotted onto the proposed
framework to understand the value added by
independent validation.
FACTORS AFFECTING INDEPENDENT
VALIDATION DECISION
It is important for us to explore all the factors that
affect Independent Validation while assessing
the value-add it brings to the table. All factors
have been clubbed under two broad headings,
viz., Risk of Implementation and Investments in
Independent Validation, for discussion.
The factors listed under these headings
below are intended to serve as an initial checklist,
and organizations must add and delete factors that
are specific to their organization/industry.
Justifying spends made on independent validation is as
much a concern as investing huge amounts in package
implementations
I. Investments in Independent Validation
Independent Validation costs vary with vendor
selection. While most IT services organizations today offer
independent validation services, some have
more mature processes than others. So careful
consideration must be given to selecting
vendors for this task.
The cost of Independent Validation
services for packaged applications like ERP and
CRM can range roughly from 8% to 12% of the
total implementation size. Organizations are
required to assess and rate their investments in
independent validation on a High-Med-Low scale
to be plotted on the graph.
II. Risk Assessment
Risks of implementation vary widely between
organizations and depend on a large number
of factors. The list of factors mentioned here acts
as a checklist and, as with Investments in IV,
organizations are required to use these factors to
arrive at a High-Med-Low score for Risk.
Risk during ERP implementations, as
discussed here, is a function of two major elements:
Investments in implementation and Complexity
of implementation. Because high investments
and complexity drive up risks, it is important to
ascertain the impact of each of these on the overall
risk of implementation.
A. Investments in Implementation: Investments
can broadly be of two types: Direct and Indirect
Investments
(i) Direct Investments: Most organizations have
an established way of tracking these investments.
In a typical package implementation, the major
cost elements are:
(a) Costs in Hardware and Software: Large
package implementations generally require
more hardware capacity than what is
available in the organization. A detailed sizing
exercise needs to be undertaken to estimate
the hardware requirements before a purchase/
upgrade decision.
License costs comprise a significant
part of package implementations, vary widely
between vendors and depend on the package
being chosen. Supporting software such as
operating systems will also need to be upgraded
to support the installations and will add to the
total cost.
(b) Costs in Software Services: A typical
business organization will not have the required
skill sets to perform an ERP implementation
and hence a professional services organization
is usually engaged to do the implementation.
These organizations specialize in assessing,
implementing, upgrading, integrating and
maintaining large package applications. This is
a crucial investment because it influences the
quality of the solution delivered. Investment here
is dictated by the price quoted by the vendors and
forms a significant part of the total investment. It is
not unusual for these costs to be as high as 100%
of license costs.
(c) Other Internal Costs: People at all levels
in the organization are involved in activities
like collecting and providing requirements,
re-engineering processes, project management,
change management and training during
implementation. In addition to this, large
package implementations demand constant senior
management attention in activities like planning,
budgeting, reviews and ROI assessments. These
are time investments made by employees during
the implementation.
A large package implementation exercise
can bring changes in systems and processes in the
day-to-day operations of people. This requires that
people be trained on new systems and processes
to operate smoothly in the new environment.
Training expenses can form a significant part of
the entire expenses. 33% of respondents in a report
by the Aberdeen Group said that training during
ERP implementations was a challenge for them
[3].
According to another survey by
TechRepublic Inc., end-user adoption of the ERP
package being implemented was the greatest
concern among IT professionals [4]. Managing
people and their expectations becomes important
in programs as large as package implementations
and many failed ERP implementations in the past
owe their failure to poor change management.
Workshops and meetings are conducted
regularly to set the right expectations and
identify any people related problems that might
surface.
Overhead costs like travel, welfare and
hiring expenses are also covered under this
head. Most organizations have established ways
of tracking these investments. Usually a separate
investment project is created and people are
assigned to it to track the investments. Though
no standard benchmark of this cost is available,
we believe this cost to be a significant portion of
the total costs.
(ii) Indirect Investments: There are other
investments apart from the ones mentioned
above, which may not be quantifiable or tracked
directly. For instance, people not directly assigned
to the project also spend time and effort in making
a given project successful. Key employees have to
be assigned to the implementation task because
of its criticality, which means they essentially
have to be taken off their current tasks and this
might cause some business disruption. Such costs
are tough to quantify.
It is important to realize that efforts
that may not be quantifiable will be incurred
during implementations and that not including
them in calculating total investment costs can be
misleading.
Organizations need to consider both
financial investments and investments in time and
arrive at a Low/Medium/High score for the total
investments. This score must then be combined
with the Complexity Score, discussed in the following
section, to arrive at a final Risk Score.
B. Complexity Factors
Large package implementations come with
inherent complexities. Further modifications and
enhancements increase complexity and, as the
complexity increases, risk follows. While a lot of
factors can contribute to complexity and risk, the
factors discussed below must always be reckoned
with while evaluating complexity. These factors are
generic and can be applied to any large package
implementation. Individual organizations using
this model must evaluate if these factors suffice
to arrive at a risk score for their implementations.
The model is flexible enough to accommodate more
factors.
The more complex the package is, the more are the risks and
the higher their accompanying costs
(a) Lifecycle Stage of the Package: Package lifecycle
has three distinct phases, viz., implementation,
upgrade and maintenance, and the risks associated
with these phases vary. Implementations are
the most complex of the three. Package
implementation involves feasibility study, process
documentation and re-modelling, package
evaluation, vendor selection, and other intensive
activities and needs extensive time and support
from the organization.
Post implementation, packaged
applications need regular upgrades. Reasons
for upgrade can include statutory updates,
older releases no longer supported by vendors,
technology stack changes, etc. A report by AMR
states that 55% of the upgrades are business-driven
expansions, consolidations or are triggered by
implementation of new functionality [5]. While
upgrades are not as complex as implementations,
they also involve activities like infrastructure
upgrade and training that need careful planning.
Once the system is stable after an upgrade
or an implementation, minor updates in the form
of bug fixes, minor customizations and patches are
applied to the package to keep the system up and
running. This is referred to as maintenance of
the system and is usually anchored by a professional
services organization. Gartner, in its 2006 report on
testing, recommended that even seemingly small
changes must be tested in a controlled environment
that imitates production operations to ensure that
critical business operations are not interrupted
when a change is implemented [1]. Having a strong
testing team to back up post-production support
is important because individual business units
cannot make changes without affecting other lines
of business involved.
As we proceed from implementation
to upgrade to maintenance, there is a significant
decrease in the overall activity level and the
corresponding complexity levels.
(b) Level of Customization: Though most packages
are comprehensive in functionality, they do not
meet the requirements of all organizations in the
world and hence packages are customized to take
care of organization-specific requirements.
According to one estimate, 20% of the processes
in an organization cannot be modelled in an ERP
system without customization [6].
Extensive customization, however, adds
to complexity. In a report by Aberdeen on ERP
implementations, customization related challenges
topped the list of challenges [3]. Customizations
introduce more challenges (technical, project
management and organization related) which drive
up the complexity levels. The higher the customization,
the higher is the complexity.
(c) Level of Integration: Enterprise applications
have to co-exist with other applications in the
organization, and some of these applications could
be mission critical. Interfaces are to be developed
between the enterprise applications and the resident
applications to ensure smooth exchange of data.
While interfaces are essential, organizations
must ensure that the number of applications
that need to be integrated is kept to a
minimum and that the package application being
implemented is leveraged fully to replace any
smaller applications. In an Aberdeen survey, 37% of
respondents quoted integration issues as the
motivation behind replacement of their ERPs and
26% of them said that high integration costs were
one of their implementation challenges [3]. A large
number of integration points will drive both the cost
and complexity of implementation northwards.
(d) Size in Terms of Number of Users: The number
of users impacted depends on the number of
business units that are impacted. Large package
implementations can affect a majority of people
spanning various geographies. The greater the
number of users, the higher is the complexity of
implementation.
As discussed, the impact of the complexity
factors above varies widely between
organizations and can be quantified by individual
organizations using Table 1.
Steps in Computing Complexity Scores
Ensure completeness of complexity
factors. Fill in any organization/industry
specific complexity factors.
Assign weights to each of these complexity
factors. The combined score of the weights
should be equal to 1.
Calculate the Group Complexity Score by
multiplying the individual score of each
complexity factor with its weight.
Add all Group Complexity Scores to arrive
at a Total Complexity Score. A minimal
illustration of this computation is sketched below.
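The sketch below illustrates the weighted computation described in the steps above; the factor names, individual scores and weights are purely illustrative assumptions, and the Low/Medium/High banding of the total is left to the organization.

```python
# Hypothetical sketch of the weighted complexity-score computation described above.
# Factor names, individual scores (1=Low, 2=Medium, 3=High) and weights are illustrative.
factors = {
    # factor: (individual score, weight) -- weights must sum to 1.0
    "lifecycle_stage":      (3, 0.30),   # implementation (most complex of the three stages)
    "customization_level":  (2, 0.30),
    "integration_level":    (3, 0.25),
    "number_of_users":      (2, 0.15),
}

assert abs(sum(w for _, w in factors.values()) - 1.0) < 1e-9, "weights must sum to 1"

# Group Complexity Score = individual score x weight; Total = sum of group scores
group_scores = {name: score * weight for name, (score, weight) in factors.items()}
total_complexity = sum(group_scores.values())

print(group_scores)
print(f"Total Complexity Score: {total_complexity:.2f}")  # banded into Low/Med/High by the organization
```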
C. Other Risk Factors
In addition to the complexity factors discussed
above and any other organization-specific factors,
the following risks, along with any organization/
industry-specific risks, must also be considered
to arrive at a total risk score on a High-Med-Low
scale.
(a) Big Bang vs. Phased Approach: The approach
to implementing packages can also introduce
risks during implementation. In a big bang
implementation, all the modules being
implemented go live at one time. Heavy co-
ordination and work on both the organization's
part and the implementation team's part increase
the risk in a big bang implementation.
A phased approach, on the other hand,
is far less complex and less risky to implement.
As one proceeds from a big bang to a phased
approach, the risk of implementation goes down.
(b) Duration of the Project: The longer the duration
of the project, the higher is the risk involved in
its completion. Unforeseen circumstances like
economic recession and organizational change can
affect the project progress adversely and hence it
is recommended that the project duration be as
short as possible.
Table 1: Complexity Score Source: Infosys Research
INTERPRETING THE FRAMEWORK
Organizations are required to arrive at a High-
Med-Low score for both the decision factors,
incremental investment in Independent
Validation and Risk, as mentioned above. Plotting
these scores will place organizations in one of the
four quadrants depicted in Figure 1 overleaf.
The model makes use of a simple 2X2
graph plotted with investments in independent
validation on the X-axis and risk of implementation
on the Y-axis. Organizations would map to one
of the quadrants in this model, based on their
investments and risk scores. Interpretation of
the value added by independent validation for
organizations in each of these quadrants (Q) is
listed below; a minimal code sketch of the mapping
follows the list:
Independent Validation is most effective for
organizations falling in Q2 since the cost of
IV is low while the risk is high.
While organizations in Q1 will benefit from
Independent Validation, since the cost of
IV is high, individual organizations need
to take a call on whether they need it.
Organizations in Q3 also need to exercise in-
dividual judgment on whether they need
Independent Validation because of the rela-
tively low risk and investments involved.
Independent Validation for organizations
in Q4 is not justified since the cost of
investment is high and the risk levels are
relatively low. However, organizations
with less appetite for risk may still opt for
an independent validation here.
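A minimal sketch of this quadrant mapping is shown below. It assumes, as a simplification not made in the paper, that only a High rating counts as "high" and that Medium and Low are treated alike; the quadrant labels follow Figure 1 (investment on the X-axis, risk on the Y-axis).

```python
# Hypothetical sketch of the 2x2 decision matrix described above.
# Inputs are the High/Med/Low ratings the paper asks organizations to derive.
def iv_recommendation(risk: str, investment: str) -> str:
    high_risk = risk.lower() == "high"
    high_inv = investment.lower() == "high"
    if high_risk and not high_inv:
        return "Q2: Independent Validation is most effective -- low cost, high risk."
    if high_risk and high_inv:
        return "Q1: IV adds value, but cost is high -- the organization must take a call."
    if not high_risk and not high_inv:
        return "Q3: Individual judgment -- both risk and investment are relatively low."
    return "Q4: IV not justified -- high cost, relatively low risk (unless risk appetite is low)."

print(iv_recommendation("High", "Low"))   # Q2
print(iv_recommendation("Low", "High"))   # Q4
```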
Figure 1: Decision Matrix
Source: Infosys Research
CONCLUSION
Organizations need to take a balanced and objective
view for de-risking the huge investments made
in implementing large enterprise applications. By
considering the various factors mentioned in this
paper, they can take an informed decision on the
best way to mitigate risks associated with running
the business on enterprise applications. The
weights of each of the factors mentioned can be
decided depending on the various organizational,
industry and environmental factors. Investment in
independent validation can help organizations
deal with potential risks but, at the same time,
they need to evaluate if this investment is really
required.
REFERENCES
1. Pat Phelan, The Role of Testing in Business
Application Support, Gartner Research Id
No. G00142376, September 2006
2. Cindy Jutras, The ERP in Manufacturing
Benchmark Report, Aberdeen Group
Benchmark Report, August 2006
3. The Total Cost of ERP Ownership,
Aberdeen Group Report, October 2006
4. Donald Burleson, Four Factors that Shape
the Cost of ERP, Tech Republic, August
2001. Also available at http://www.dba-
oracle.com/art_erp_factors.htm
5. Bill Swanton, Dineli Samaraweera and Eric
Klein, Minimizing ERP Upgrade Costs
Requires Synchronizing With Business
Improvement Projects, AMR Research,
July 2004
6. J E Scott and L Kaindl, Enhancing
Functionality in an Enterprise Package,
Information and Management, Vol. 37,
No.3, April 2000, pp. 111-122
7. www.dgs.virginia.gov/PPEA_EAP/IBM_
EADP/03-Vol_I_Executive_Summary.pdf.
SETLabs Briefings
VOL 6 NO 1
2008
Earlier = Cheaper:
Test Code Before it is Written
By Vinoth Michael Pro
Employ design metrics to assess systems
prior to implementation and mitigate losses
due to software defects
Typically, quality checks in software do not
begin until the testing phase, by which time
many months and lines of code would have been
written. Today we test compiled code as we get it.
Testing before code is even written may seem
illogical, but some of the defects in an application
can be traced back to poorly defined requirements
and design flaws in the software.
One of the revolutionary improvements
in manufacturing has been the advent of three-
dimensional modeling that detects conflicting
parts so that part designs can be changed well
before they are manufactured. In systems and
software engineering, such an attempt would take
the form of design specifications with capabilities
to detect flaws and mismatched interfaces.
Visual UML design models can provide
an automated way for software practitioners
to assess the quality of their software prior to
coding. This approach involves processing
design specifications/UML diagrams to calculate
design metrics on classes before they have been
implemented. Design metrics focus more on
design-level parameters and reveal characteristics
such as coupling, cohesion, and complexity. These
design metrics have been shown to correlate
with aspects of software quality such as fault-
proneness and changeability.
Since design metrics do not rely on the
syntax of code, they can be computed from design
specifications before the code has been written.
This allows project managers and developers an
early insight into the quality of the design and
the potential quality of the eventual software.
For instance, if the values of design metrics show
that a module is very complex or lacks cohesion,
the project manager could order redesign of the
module. This paper focuses on employing design
metrics to assess systems that have not yet been
implemented.
COST OF FIXING DEFECTS
Software quality can be described as the
convergence of complete requirements, flawless
design and correct code that align to meet
business goals.
When an organization emphasizes
only testing, it typically focuses on product
quality in terms of defects that are to be
tracked and reworked in time to meet schedule
commitments. This narrow view of quality
eliminates opportunities for early problem
detection throughout the software development
lifecycle. Between 40% and 50% of defects in a software
application can be traced back to poorly defined
requirements and design flaws in the software
[Table 1] [1].
Table 1: Defect Potential in Software
Source: Economics of Software Process Improvement,
Capers Jones, 2005
When organizations consider quality as
an afterthought, they are in effect increasing their
costs and decreasing their efficiencies. Figure 1
illustrates how significantly the estimated cost
per defect can increase in the latter stages of the
software development lifecycle. By taking a more
proactive approach to quality management,
organizations can detect and remove defects
earlier during design and ultimately spend less
time on rework and testing.
WHAT ARE DESIGN METRICS?
Obviously, design review is a good practice in
systems and software engineering for detecting
and removing defects before progressing to
the next phase of development. But review
still remains an inspection-based approach for
quality control. Rather, measuring error-
proneness factors in the design and eliminating
those factors would be much more cost effective.
This realization has led to the concept of design
metrics.
Metrics to predict software quality do
exist and can be used starting in the design
phase. Fundamentally, design metrics involve
counting or measuring some of the design
parameters that correlate with error proneness.
To find out how much quality you have in your
system, you can find how many of these metrics
provide abnormal measures at the design level.
Thus, design metrics play an important role in
helping developers understand design aspects
of software and, hence, improve software
quality.
Object-oriented design measures are
believed to be indicators of complexity, and one
of the main reasons for defects is complexity.
This means that object-oriented classes are easy
to understand as long as their complexity is
below a threshold. Above that threshold their
understandability decreases rapidly, leading
to an increased probability of a fault. There
are, however, a number of object-oriented
design principles and practices to help contain
complexity, or at least identify areas that may
become overly complex so that actions can be
taken judiciously. For example:
Interfacing - The level of interfacing quantifies
the extent to which the different parts of
the software are connected with each
other. The higher this connection, the higher
is the possibility of an error propagating from
one part of the software to another.
Cohesiveness - Refers to the inherent
similarity of the activities carried out by
a component/module/class. If a class/
module is not cohesive it will possibly
carry out a lot of unrelated activities and
understanding the module becomes fairly
difficult.
There are measures for each of the above
design principles; for example, coupling between
objects (CBO) is a measurement of interfacing. The
list of design parameters chosen
for early defect detection includes NOAM, NOO
and DOIH apart from CBO; a toy illustration of how
such counts can be derived follows the definitions below.
Figure 1: Cost of Fixing Software Defect
Source: Quality Improvement Consultants (QIC) and
World-Class Quality (WCQ)
NOAM - Counts the number of operations
added by a class. Inherited and overridden
operations are not counted. A large
value for this measure indicates that the
functionality of the given class becomes
increasingly distinct from that of the
parent classes.
NOO - Counts the number of operations.
If a class has a high number of operations,
it might be wise to consider whether it
would be appropriate to divide it into
subclasses.
CBO - Represents the number of other
classes to which a class is coupled. Excessive
coupling between objects is detrimental to
modular design and prevents reuse.
DOIH - Counts how far down the
inheritance hierarchy a class or interface
is declared. High values imply that a class
is quite specialized.
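As a toy illustration of the counts these metrics rely on, the sketch below uses ordinary Python classes to stand in for the UML design model; real tooling, as described in the next section, would derive the same counts from design specifications before any code exists. The class names and method sets are invented for the example.

```python
# A toy, language-level illustration of NOO, NOAM and DOIH, assuming Python classes
# stand in for the UML design model; real tooling would read UML diagrams instead.
import inspect

class Account:
    def open(self): ...
    def close(self): ...

class SavingsAccount(Account):
    def accrue_interest(self): ...          # operation added by this class
    def close(self): ...                     # overridden, so not counted by NOAM

def noo(cls):
    """NOO: number of operations (methods) visible on the class."""
    return len([m for m, _ in inspect.getmembers(cls, inspect.isfunction)])

def noam(cls):
    """NOAM: operations added by the class itself (not inherited, not overridden)."""
    own = vars(cls)
    inherited = set().union(*(vars(b) for b in cls.__mro__[1:]))
    return len([m for m in own if callable(own[m]) and m not in inherited])

def doih(cls):
    """DOIH: depth of the class in the inheritance hierarchy (object = 0)."""
    return len(cls.__mro__) - 1

print(noo(SavingsAccount), noam(SavingsAccount), doih(SavingsAccount))  # 3 1 2
```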
COMPUTING METRICS ON DESIGN MODEL
The approach involves processing design
specifications/ UML diagrams to calculate
design metrics on classes before they have been
implemented [Fig.2].
Figure 2: Flowchart for Computation of Design Metrics
Source: Infosys Research
For instance, the Rational Rose tool's
extensibility interface and Rose scripting language
can be used to get class information from UML
diagrams.
Below are listed, in chronological order,
the steps for computing metrics on a design
model.
Step 1: Design using Object modeling tool
(Rational Rose)
Step 2: Forward engineer the code
Step 3: Obtain the OO design metrics
Step 4: Identify the violations
Step 5: Correct the OO metrics violations
prior to coding.
In turn the tool does the following:
Generates abstract syntax tree information
from the UML design
Processes the abstract syntax tree to retrieve
the inheritance hierarchy and each class's
attributes and behaviors.
Tool-derived class- and method-level
information can later be used to calculate design
metrics and to identify violations exceeding
defined threshold values. This would help in the
identification of defect-prone classes at the design
stage itself, using the design properties of the
classes, thereby allowing errors to be corrected
earlier in development.
There exists yet another approach to
analyzing design specifications [2]. A study by
Lakshminarayana et al. provides direction to
such an approach. In their study, the authors
aimed at generating visual representations of
the metric values for each class in a system,
to aid developers in quickly pinpointing areas
for improvement. They used Rational Rose's
extensibility interface and Rose scripting
language to get class information from UML
diagrams, following which they computed
design metrics using the obtained information.
Later, they developed a visual representation
for each class based on its value for each metric.
Each class was represented as a cube and each
metric as a feature of that cube. In the resulting
model, the visual representation made evident
which classes had complicated interactions with
other classes. This kind of approach allows
developers to analyze a large system much more
quickly than they ever could with a standard
printout of metric values.
EFFICACY OF DESIGN METRICS IN
PREDICTING SOFTWARE QUALITY
A considerable amount of research has been done
around the use of metrics to gauge and predict
software quality.
Extensive research studies were conducted
by Basili et al. and Briand et al. [3, 4]. These
studies analyzed existing metrics for their use as
predictors of probability of fault. Probability of
fault is the likelihood that a fault will be detected
in a module.
Metric values, when visually represented, can help developers
understand lacunae existing in the current design
Basili et al. found that fault probability
had a significant positive correlation to DOIH
(depth of inheritance tree), NOO (number of
operations) and CBO (coupling between objects).
Overall, they found that the best model for
predicting fault probability contained design
parameters. This model found 95% of the faults
in the system, and 85% of the classes flagged as
probably having faults actually had
faults.
Also recently, a cognitive theory has been
proposed suggesting that there are threshold
effects for many object-oriented measures [6].
This means that object-oriented classes are easy
to understand as long as their complexity is
below a threshold. Above that threshold their
understandability decreases rapidly, leading to
an increased probability of a fault. This occurs,
according to the theory, due to an overflow of
short-term human memory.
Object-oriented programming has matured
to the point where it is commonly accepted as
the most employed paradigm for developing
software. With the shift in the way software is
developed comes a new suite of design metrics
aimed at validating the quality of object-oriented
designs. The goal is to gather data that may later
be analyzed to identify flaws. The reverse may also be
stated, in which case metrics may provide some
assurance that the code that results from the
design is without serious flaw.
Once design level metrics have been
collected for several systems, they can be compared
with defect data for the implemented software to
develop models for predicting aspects of software
quality such as changeability, or fault-proneness.
An increasing rigor is being applied to this area
of research, with the hope being that someday
software quality will be mathematically proven.
Until that day, it is important to understand a bit
of the theory behind these metrics and how to best
put to use what they show.
CONCLUSION
Design metrics computed for design models
can indicate the relative cohesion, complexity and
coupling of the system's modules. Computing
design metrics in the design phase, before the
code is ever written, can give the development
team very valuable insight. Managers can use
this information in the area of project planning.
They can also take preventive measures in
modules that are complex or lacking in cohesion,
by redesigning or including activities such as
code inspections. Cost savings, better delivery
date estimation and code with better design and
fewer errors are the potential results of such an
initiative.
Quantitatively proving quality of software can be an
insurmountable task. Nevertheless, researchers are
working with renewed vigor to prove it mathematically
REFERENCES
1. Capers Jones, Economics of Software Process
Improvement, 2005
2. A Lakshminarayana et al., Automatic
Extraction and Visualization of Object-
Oriented Software Design Metrics,
Proceedings of SPIE The International
Society for Optical Engineering, Vol. 3960,
2000, pp. 218-225
3. L Briand et al., Predicting Fault-Prone
Classes with Design Measures in Object-
Oriented Systems, Proceedings of the 9th
International Symposium on Software
Reliability Engineering, 1998, pp. 334-343
4. V Basili, L Briand and W Melo, A
Validation of Object- Oriented Design
Metrics as Quality Indicators. IEEE
Transactions on Software Engineering,
Vol. 22, No. 10, 1996, pp. 751-761
5. J Bansiya and C Davis, A Hierarchical
Model for Obj ect-Oriented Design
Quality Assessment, IEEE Transactions
on Software Engineering, Vol. 28, No. 1,
2002, pp. 4-17
6. R Subramanyam and M S Krishnan,
Empirical Analysis of CK Metrics for
Object-Oriented Design Complexity:
Implications for Software Defects, IEEE
Transactions on Software Engineering,
Vol. 29, No. 4, 2003, pp. 297-310.
SETLabs Briefings
VOL 6 NO 1
2008
Test Automation Strategy
for ERP/CRM Business Scenarios
By Ashwin Anjankar and Sriram Sridharan
Efficient test management compresses delivery
cycles and reduces overall cost of quality
Any package implementation involves a
number of business processes that need to
be tested. These business processes can be seeded
or customized. The extent of customization will
depend on the gaps with respect to customer
requirements. These business processes can
correspond to specific modules or can be
cross-functional, spanning multiple modules.
The number of business scenarios can vary
depending on the scale of the ERP/CRM package
implementation.
Though all the functionalities that need
to be tested can be independent of each other,
they are related to the business in some way or the
other. Thus each of these functionalities can be
seen in the context of a business process or a
business scenario.
Test automation that considers only
specific user interfaces and limited functionality
may not see much reuse. Hence the objective is
to create generic test scripts having wider usage
and easier maintainability. There can be different
ways to automate the testing of these business
processes.
In the simplest form, a main test script
can be created which will invoke different sub-
scripts. Each sub-script can cater to some of the
functionalities under test [Fig. 1]. Different sub-
scripts can be executed sequentially by taking
the user inputs from a user-input Excel sheet.
Such functionality can be extended
further to create scripts that are independent
of each other and that can be executed
independently after passing the required inputs.
In this case each script exists as a stand-alone
component. These components can be either
executed independently or together with other
components. Each component can be configured
to accept inputs from other components
[Fig. 2]. This is component-based design and
development.
Figures 1 and 2 overleaf depict test
automation with and without component based
development approach.
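As a rough illustration, the sketch below shows such a component-based driver in Python. The component functions, the CSV file name and its columns are hypothetical stand-ins for automated steps against the package under test; in practice the user inputs would come from an Excel sheet rather than a CSV file.

# Minimal sketch of a component-based driver script (hypothetical names).
# Each component is an independent callable that accepts the outputs of the
# previous component plus user inputs, so it can run alone or in a sequence.
import csv

def create_order(inputs):
    # Create an order in the application under test; return its identifiers.
    return {"order_id": f"ORD-{inputs['customer_id']}"}

def check_credit(inputs):
    # Run a credit check for the order created earlier.
    return {"credit_ok": True, "order_id": inputs["order_id"]}

def book_order(inputs):
    # Book the order only if the credit check passed.
    return {"booked": inputs["credit_ok"], "order_id": inputs["order_id"]}

def run_scenario(components, user_inputs):
    """Execute components in sequence, feeding each one's outputs forward."""
    context = dict(user_inputs)
    for component in components:
        context.update(component(context))
    return context

if __name__ == "__main__":
    # User inputs would normally come from an Excel sheet; a CSV is used here.
    with open("user_inputs.csv", newline="") as f:
        for row in csv.DictReader(f):
            result = run_scenario([create_order, check_credit, book_order], row)
            print(result)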
This paper looks at test automation from
the perspective of business processes by using
component-based design and development. It
also discusses strategies to address some of the
issues with test automation and to make test
automation more effective.
Figure 1: Test Automation Without Using Components
Source: Infosys Research
Figure 2: Test Automation Using Components
Source: Infosys Research
In the subsequent sections we detail the
various constituents that go into making a robust
test automation strategy.
GATHERING TEST REQUIREMENTS
Requirement gathering is one of the most
important areas in any project. Test requirements
should be collected as early as possible in the
software development life cycle. In order to make
test automation more effective, test requirements
should encompass the following:
Traceability maintained between business requirements and test requirements
Identification of stakeholders and their reporting/communication needs about test progress
Well laid out business, functional and unit test scenarios.
Use-case based requirements collection
is ideal to maintain test requirement traceability.
Some of the modern test management tools have
built-in support for achieving the same.
The traceability between business
requirements and test requirements (established
at the time of requirements gathering) can be
further extended to the design/development phase
in the form of a test coverage matrix.
TEST COVERAGE MATRIX
Knowledge of the automated test components repository among business/functional testers is essential to achieve reuse and maximum utilization of test assets. A test coverage matrix
showing details of the business scenario, related
functional scenario, corresponding automated
script components and the coverage of the test
cases within that script component will help to
assess the test automation coverage very early in
the process [Table 1]. A test coverage matrix can pose the following challenges:
Maintaining the test coverage matrix with the latest details
Building awareness of the matrix itself
Establishing a process to assess test automation coverage using the matrix.
Table 1: Example of Test Automation Coverage Matrix
Source: Infosys Research
DESIGNING SCRIPT REPOSITORY
Usually, all the test scripts are stored in a
central server for easy access. During the script
development/modification, these test scripts are
downloaded to the client machine. Over a period
of time the number of scripts will increase in the
repository. The size and the number of the test
scripts can affect the download time and thus
prolong development time.
The solution is to restrict the number
of scripts that get downloaded to the client.
Ideally, only the required scripts should get
downloaded to the client. This can be achieved
by effectively designing the way these scripts
are stored in the server [Fig. 3]. Scripts can be
stored separately according to the modules
or functionality. This will reduce download
time, achieve modularity and ensure ease of
maintenance.
Figure 3: Script Repository Design
Source: Infosys Research
TEST SCRIPTS DESIGN AND
DEVELOPMENT
Thoughtful design of test scripts is essential to
achieve maximum reuse and easy portability of
scripts for future application upgrades. Newer
releases of test management products support
business process testing. Using this feature each
business process can be further subdivided
into smaller parts that can be developed as test
components. These test components can be
sequenced together to cover the entire business
process. Component-based design gives a lot of flexibility in terms of development and reuse.
Component Definition: A component is the fundamental building block of a business process that can be reused across business processes. The design of a component can affect the reuse of that component. For example, a component encompassing the entire order-to-shipping flow is less likely to be reused because of its rigidity and tight coupling. On the other hand, separate generic components for all the order creation steps are more likely to be reused.
Business Process Design: Each business
process owner will own his business process
components and will be responsible for testing
his business processes [Table 2]. End-to-end
testing of applications like ERP/CRM would
involve testing several business processes that
are cross functional in nature. Different business
owners coordinate with each other in doing such
end-to-end testing.
Table 2: Component Definition
Source: Infosys Research
Component-based design should allow the business owners to selectively test the business process, either fully or in part.
One of the fundamental aspects of
componentized test development is to achieve
loose coupling between the components. Each
component should take multiple input values
and provide multiple output values which the
subsequent components accept as inputs. These
inputs will also be supplemented by user inputs.
Such components will be part of the component
repository and can be easily reused during the
creation of new test scenarios.
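A minimal sketch of such a loosely coupled component contract is given below. The class, field names and the example component are our own illustration and are not taken from any specific test management product.

# Illustrative sketch of a loosely coupled test component: each component
# declares the inputs it needs and the outputs it produces, so a scenario can
# be assembled from any components whose contracts line up.
class TestComponent:
    def __init__(self, name, needs, produces, action):
        self.name, self.needs, self.produces, self.action = name, needs, produces, action

    def run(self, context):
        missing = [k for k in self.needs if k not in context]
        if missing:
            raise ValueError(f"{self.name} is missing inputs: {missing}")
        outputs = self.action(context)
        assert set(self.produces) <= set(outputs), f"{self.name} broke its contract"
        return outputs

# Example component: order creation needs user inputs, produces an order id.
create_order = TestComponent(
    name="create_order",
    needs=["item", "quantity"],
    produces=["order_id"],
    action=lambda ctx: {"order_id": "ORD-001"},
)

context = {"item": "laptop", "quantity": 2}   # user-supplied inputs
context.update(create_order.run(context))     # outputs feed the next component
print(context)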
Business Scenario Design: A business scenario consists of one or more business processes. For example, order-to-cash is a single business scenario [Fig. 4] that contains multiple business processes like order creation, order shipment, invoicing, etc.
Figure 4: Order-to-Cash End-To-End Business Scenario
Source: Infosys Research
A test script can be designed to focus on a single business process or on multiple business processes. Test script design should give the business owner the flexibility to test her business process either as a single scenario or as a part of different scenarios.
Design Considering the Big Picture: While
designing the test scripts for business
components, consider the other related business
scenarios. This will facilitate more reuse, quicker
development and reduced maintenance effort.
For example, Figure 5 shows three different scenarios, each using three components: one for create/query order, one for credit check and one for book order. The create order component can be designed to be generic so that it can be used for multiple scenarios, thus facilitating reuse.
Figure 5: Design of Related Scenarios
Source: Infosys Research
TESTING OF TEST SCRIPTS
Once the test scripts are developed, they need to
be tested. Reduced testing time can contribute
towards compressing the overall delivery time.
Testing time of the test scripts can be reduced by
considering the following options:
Provide Restart Ability to the Scripts: Provide the ability to restart from the same place where the execution failed earlier (a minimal sketch of such restart support follows the list below). This will save time on account of
(a) repetitive testing of components that have already been tested successfully,
Figure 6: Testing of Related Scenarios
Source: Infosys Research
(b) consumption of data by components
already tested successfully,
(c) execution time of the test script.
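The sketch below shows one way such restart support could look, assuming a simple JSON checkpoint file; the file name and the list of (name, component) pairs are illustrative only.

# Minimal sketch of restart support: the driver records the last component that
# completed successfully, so a re-run resumes from the point of failure.
import json, os

CHECKPOINT = "scenario_checkpoint.json"

def run_with_restart(components, context):
    """components: list of (name, callable) pairs executed in order."""
    done = []
    if os.path.exists(CHECKPOINT):
        state = json.load(open(CHECKPOINT))
        done, context = state["done"], state["context"]
    for name, component in components:
        if name in done:
            continue  # skip components already tested successfully
        context.update(component(context))
        done.append(name)
        json.dump({"done": done, "context": context}, open(CHECKPOINT, "w"))
    os.remove(CHECKPOINT)  # scenario finished; clear the checkpoint
    return context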
Test the Related Scenarios Together: Business
scenarios having common business processes
can be tested together. For example, Figure 6 shows three different scenarios involving the order management and shipping processes. Here, the order management business process needs to be tested only once. Instead of retesting the order management process, data generation scripts can be used to provide the required data to the subsequent business processes.
Selection Criteria for Automation (What to Automate First?): The sequence in which the business scenarios are automated decides the extent to which previously developed components are reused. A practical approach for selecting the automation sequence for business processes can be set in terms of a minimum reuse percentage of components [Fig. 7]. For example,
Figure 7: Scenario Selection Based on Reuse Criteria
Source: Infosys Research
The scenario selected for automation
should produce at least 10% components
that can be reused during later
automation
Alternatively, the scenario selected for
automation should have at least 10%
components that have been previously
developed.
(The number 10% is an indicative figure.)
Table 3: Reuse Decision Matrix
Source: Infosys Research
There can be different permutations and combinations with respect to the sequence in which the components can be automated. For each sequence, calculate the total reuse percentage and select the sequence that provides the maximum reuse.
This criterion may not always be satisfied but will hold good in most cases, as there will almost always be a few components that can be reused. We believe such goals will increase reuse of components, thereby reducing the delivery time and accelerating the ROI on the automation.
Table 3 provides different permutations
for automating three business scenarios
having equal customer preference. Here, the
automation sequence should be B-A-C as it
provides maximum reuse. Data in the table is
indicative and is provided to clarify the strategy
proposed.
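The sketch below illustrates this reuse calculation in Python; the scenario-to-component mapping is hypothetical and only mirrors the spirit of Table 3, with all scenarios given equal preference.

# Sketch of the reuse calculation behind Table 3 (scenario/component data are
# illustrative). For every automation sequence, components already developed in
# earlier scenarios count as reused; the sequence with maximum reuse wins.
from itertools import permutations

scenarios = {              # hypothetical mapping of scenario -> components used
    "A": {"create_order", "credit_check", "book_order"},
    "B": {"create_order", "credit_check", "ship_order"},
    "C": {"query_order", "ship_order", "invoice"},
}

def total_reuse(sequence):
    developed, reused = set(), 0
    for name in sequence:
        components = scenarios[name]
        reused += len(components & developed)   # components reused from earlier scenarios
        developed |= components
    return reused

best = max(permutations(scenarios), key=total_reuse)
print(best, total_reuse(best))   # automation sequence with maximum reuse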
TEST SCRIPTS MAINTENANCE AND
PORTING METHODOLOGY
One of the challenges for test management is
to identify the impact on test scripts because of
changes to the underlying forms and to keep the
test scripts updated.
Porting methodology provides the
ability to identify the changes to the underlying
GUI [Fig. 8]. It also facilitates the measurement
of the impact of these changes on the test scripts/
components.
Why is Porting Methodology Required?
To understand the need for porting
methodology, let us consider a large scale ERP/
CRM package implementation as an example.
Test automation can be applicable to seeded
as well as custom user-interfaces. As these
ERP/CRM packages are upgraded from time
to time, there is a possibility that the seeded
user-interface might get changed. Similarly the
custom forms can also undergo some changes
regularly due to process enhancements. Any
change to the underlying seeded/custom
user-interface (either because of patch/code-
change) can make the test scripts unusable.
The challenge here is to identify the change in
the underlying form and then to measure the
Porting methodology helps detect changes that are made to
user interfaces due to package upgradation
impact on the test scripts.
An effective porting methodology can help in identifying the impact on the test scripts, thus facilitating script updates and thereby resulting in ready availability of the scripts as and when required.
TESTING STRATEGIES AND REALIZATION
OF BENEFITS
Some of the challenges for test automation are availability of scripts, achieving more coverage, faster delivery, reduced maintenance effort, and identification of impact after new patch introduction as well as after enhancements to underlying user interfaces.
The strategies discussed in this paper help in addressing these issues and can make test automation more effective. The correlation between these strategies and benefits can be seen in Figure 9.
Figure 8: Porting Process
Source: Infosys Research
DIRECTIONS IN THE TEST AUTOMATION
SPACE
Test automation is gaining momentum for
package implementations. Considering the
frequent product upgrades that are rolled
out by package vendors, automation of
regression testing is becoming more and more
important.
Test management products are also becoming more mature over the years. They are no longer limited to test automation but cover the complete system development life cycle, right from maintaining traceability (with respect to requirements) to management reporting. Vendors that support business process testing can leverage these features to provide maximum benefits to their customers.
CONCLUSION
Testing should be viewed as a value-added service that adds to the delivery capabilities of the organization. Test automation is picking up fast and testing products are maturing faster than ever. Most organizations approach testing in a piecemeal manner rather than through a comprehensive, strategy-based framework. As is evident, present-day enterprises need a strategic viewpoint that looks at the testing function holistically within the context of the system development life cycle. Return on investment (ROI) for the test management function can be
measured in terms of the usage and coverage
of the test scripts. It can also be measured
indirectly by reduction in cost of quality by
ensuring that test procedures and activities are
followed at each phase of the system lifecycle
Figure 9: Realization of Benefits on Adoption of Testing Strategies
Source: Infosys Research
and most of the defects are captured during
that phase itself. Effective testing strategies can
go hand in hand with the advanced product
features in providing maximum benefits to the
customers.
REFERENCES
1. Kiran Karanki and Sukandha Ram, Automated Testing: Achieving Complete Coverage, OAUG Fall Conference, 2003
2. Anand Kulkarni, Flexing the IT Muscle for Business Flexibility and Innovation, SETLabs Briefings, Vol. 2, No. 1, 2004, pp. 3-10
3. Manoj Narayan and Somil Katiyar, Realizing Business Value with a Testing Centre of Excellence. Available at http://www.infosys.com/services/enterprise-quality-services/white-papers/business-value-of-testing-CoE.pdf. Accessed during June 2007
4. Brian Marick, When Should a Test be Automated? Available at http://www.testing.com/writings/automate.pdf. Accessed during June 2007
5. Carl Nagle, Test Automation Frameworks. Available at http://safsdev.sourceforge.net/FRAMESDataDrivenTestAutomationFrameworks.htm. Accessed during June 2007
6. Surendra Dosad, Nishanth Rao and Ravi Rengasamy, Building a Robust Suite for Automated Regression Testing of SAP Releases. Available at http://www.infosys.com/services/packaged-applications/white-papers/automated-regression-testing-of-sap-releases.pdf?ws=qwp. Accessed during June 2007.
Model-Based Automated
Test Case Generation
By Ravi Gorthi PhD and Kailash K. P. Chanduka PhD
Automating the test planning activity is
the key to enhancing productivity and
quality of testing services
Test planning and execution are two critical phases of software testing services. In the last decade, the test execution phase has witnessed an increased degree of automation, resulting in enhanced productivity and quality gains. However, the test planning activities are still largely manual. One of the main constituents of the test planning phase is the creation and maintenance of test-cases, data and scripts. Typically, even today, in many IT organizations, test cases, data and scripts are manually created and maintained from Software Requirements Specifications (SRS) and UML analysis and design models. For medium to large software applications, this manual process is effort intensive and prone to human errors. An NIST study estimates that ineffective software testing costs the U.S. economy as much as $56 billion per year [1]. These indications call for effective and efficient methodologies for creation and maintenance of test-cases, data and scripts.
We present here a methodology to automatically generate test-cases from text-based SRS and UML Use-Case Activity Diagrams (UCAD) / InFlux task-flow diagrams (InFlux is a tool, designed and developed by Infosys Technologies Limited, that facilitates the development of UML Analysis Models such as use-case activity diagrams, named task-flow diagrams). Automation of test-case generation requires a machine readable format of the SRS. Hence, the methodology uses a novel concept called Behavioral Slicing to structure a given SRS into a machine readable format. This structured format is used to generate the UCAD and test-cases automatically. A prototype tool, based on the proposed methodology, is designed and developed that offers the following functionality:
A facility for a business analyst to convert a given SRS into a structured one, using Behavioral Slicing, and save it as an XML file
A facility to automatically generate UCAD from the structured SRS
A facility to automatically generate test cases from the structured SRS or UCAD.
Proof-of-concept (PoC) experiments
were carried out on three live projects using the
tool. The analysis of results indicated benets as
listed below:
The process of structuring SRS using
the concept of behavioral slicing enables a
business analyst to discover ambiguities
and incompleteness of SRS early in the
life cycle of software development. These
defects, if undetected at an early stage, will
be more expensive to x at later stages.
The facility to automatically generate
UCAD from the structured SRS,
considerably enhances the productivity
of the Software Analysis phase.
The facility to automatically generate test
cases from structured SRS or UCAD not
only improves the productivity of test-
case planning phase but also enhances
the quality of coverage of the test cases
by systematically traversing through all
the paths in the UCAD.
In the subsequent sections, the state-of-
the-art and the details of the methodology are
discussed and illustrated using a case study.
STATE OF THE ART
There are a few approaches in the literature on generation of test cases from requirements specifications expressed in a natural language. These can mainly be categorized into two groups. One set of approaches uses formal specification languages such as SCR [2] and Z [3] to manually transform the requirements expressed in a natural language into a more formal model and generates the test-cases from the formal model. The other set of approaches is based on UML analysis models (e.g., UCAD and state diagrams) and discusses methods to derive test cases from these models [4, 5]. Most of these latter approaches use path analysis techniques to generate the test cases. These approaches have the following main shortcomings in the IT industry context: (a) use of formal specification languages is found to be unsuitable for expressing requirements of medium to large complex software systems; it is found that even for small systems, the use of formal languages is cumbersome and effort intensive; (b) use of manual processes in generating test-cases from use-cases largely depends upon the experience and domain skills of the tester and is effort intensive and error prone; and (c) there is a lack of a well-defined structure for expressing requirements from which test-cases can be generated automatically.
A methodology is presented here to address the above shortcomings.
OVERVIEW OF THE METHODOLOGY
The methodology serves the following two
objectives based on whether a given IT project has
readily available UCAD or not. If the project has
no UCAD models, then it takes an unstructured
SRS as an input and facilitates a business analyst
to structure the SRS and then automatically
generates UCAD and test-cases. If the project
has UCAD models (developed using UCAD support tools like InFlux™ or IBM Rational Rose), it automatically processes the UCAD models and generates test-cases. Accordingly, the
methodology consists of two major phases:
Phase-1 - Structuring an SRS: A given SRS is
decomposed into a set of use-cases and each use
case is structured using the concept and process
of behavioral slicing. The outcome of this phase is
the structured SRS.
Phase-2 - Automatic Generation of UCAD
and Test Cases: This phase takes the structured
SRS as an input and generates UCADs. It then
processes UCADs, enumerates all the paths in
each UCAD and generates test cases from each
path.
STRUCTURING AN SRS
SRS is the document that describes the expected
behavior of a software system to be developed.
In the IT industry, many projects use either
unstructured SRS or at most partially structured
SRS, expressed in proprietary formats, from
which the test cases are manually generated.
For medium to large software applications, the
SRS runs into hundreds of pages. Often, it is
difficult to generate the test cases from such SRS,
as the lack of structure gives rise to ambiguities.
To address these problems, the methodology
proposes a process to structure SRS using
behavioral slicing.
Behavioral Slicing: A given SRS is first decomposed into a set of use-cases using OOAD best practices. Then each use-case is sliced into units of behavior. The behavior model of Gerrard [6] is used to define a unit of behavior as a tuple <UI, SP, C, SO>, where,
UI: user inputs
SP: system process
C: a set of conditions on the state of the
system
SO: system output
To illustrate this idea, let us consider
the use-case, Withdraw Cash, from the SRS of an
Automatic Teller Machine (ATM) system. One
of the units of behavior in this case is:
UI: user inputs the details of his/ her
credit or debit card by swiping it
SP: the system processes the card number
and checks whether the given card is
valid or not
C: if the card is not valid then
SO: the system displays an error
message.
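As an illustration, a unit of behavior can be held in a small data structure; the sketch below encodes the ATM example above, with Python field names chosen only for readability and not prescribed by the methodology.

# Sketch of the <UI, SP, C, SO> unit of behavior as a simple data structure,
# populated with the ATM "Withdraw Cash" example described above.
from dataclasses import dataclass

@dataclass
class UnitOfBehavior:
    user_input: str      # UI
    system_process: str  # SP
    condition: str       # C
    system_output: str   # SO

swipe_card = UnitOfBehavior(
    user_input="User inputs the details of his/her credit or debit card by swiping it",
    system_process="The system processes the card number and checks whether the card is valid",
    condition="If the card is not valid",
    system_output="The system displays an error message",
)
print(swipe_card)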
Given that the purpose of SRS is to
express the composite behavior of a complex,
to-be-developed system, our methodology
facilitates a business user to (a) decompose the
complex behavior into a set of use-cases, and
(b) slice the behavior contained in each use-case
into a sequence of units of behavior. Cockburn's use-case template [7] is adapted and modified to slice a use-case behavior into units of behavior, as shown in Table 1.
Table 1: Use-case Template for Structuring the SRS
Source: Infosys Research
Most of the fields described in this use-
case template are common to other use-case
templates followed in practice, except for the
steps in User Interactions. The User Interactions
steps describe the interactions between the actor
and the system to meet the goal of the use-case.
The interactions between the actor and the system
are structured into one or more steps which are
expressed in a natural language. A step has the
form
< Type><sequence number><interaction>
One can note from Table 1 that the types
and sequence of interactions exactly correspond
to the unit of behavior defined and discussed
above. In other words, the User Interaction steps
of the above use-case template facilitate the
slicing of a composite behavior of each use-
case into units of behavior.
A prototype tool is developed to assist
in structuring a given SRS. The tool offers a GUI
with two windows that allows one to import
the given SRS into its left window. In the right
window, it displays the use-case template of
Table 1. The tool facilitates drag-and-drop and
editing of slices of units of behavior of each use-
case from the left window into the structured
template contained in the right window.
The structured SRS thus obtained is stored in an
XML format using the following key tags:
Start of a Use-case
User Inputs
System Process
Conditions
System Output
Goto
End of a Use-case
It can be observed that the resulting
structure is very uniform and offers great
benefits like automatic generation of use-case
activity diagrams and test cases.
AUTOMATIC GENERATION OF UCAD AND
TEST CASES
In this phase, the methodology carries out the
following two main steps:
i) Process the Structured SRS and generate
UCAD models:
a. For each use-case, construct a UCAD
as a cyclic di-graph, where the nodes
correspond to the tags, User Inputs, System
Process, Conditions and System Output
b. Save the resulting UCAD model.
ii) Process UCAD models and generate test-
cases:
a. Traverse the UCAD using a depth-first
strategy (DFS) and enumerate all the
paths (note: cyclic sub-paths are traversed
only once)
b. Each path corresponds to a test scenario;
slice each path into units of behavior and
print each unit as a test case.
A tool is designed and developed to
automatically generate test cases using the above
discussed methodology.
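A minimal sketch of the path enumeration step is given below. The graph literal is an illustrative approximation of the Withdraw Cash activity flow (not output of the tool), with nodes corresponding to the User Inputs, System Process, Conditions and System Output tags of the structured SRS.

# Sketch of depth-first path enumeration over a use-case activity graph; a node
# is revisited at most once per path, so cyclic sub-paths are traversed only once.
def enumerate_paths(graph, start, end):
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for succ in graph.get(node, []):
            if succ not in path:              # avoid looping through a cycle twice
                stack.append((succ, path + [succ]))
    return paths

graph = {"UI1": ["SP2"], "SP2": ["C3"], "C3": ["SO4", "C5"],
         "C5": ["SO6", "SO7"], "SO4": ["end"], "SO6": ["end"], "SO7": ["end"]}
for path in enumerate_paths(graph, "UI1", "end"):
    print(path)   # each path becomes one test scenario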
CASE STUDY
To demonstrate the efficacy of the proposed
methodology in generating test cases, a case
study using the following SRS of an ATM system
is carried out.
A customer uses the ATM to withdraw cash
from her account, query the balance of her account,
or transfer funds from one account into another.
Assuming the card is recognized, the system validates
the ATM card to determine that the card has not
expired, that the user has entered the correct PIN
(personal identification number), and that the card was not reported as lost or stolen. The customer is allowed three attempts to enter the correct PIN; the card being confiscated if the third attempt fails. Cards that have been reported lost or stolen are also confiscated. If the PIN is validated satisfactorily, the customer is prompted for a withdrawal, query, or transfer transaction. For a withdrawal transaction, the system validates that the user has entered the amount figure correctly. Before the withdrawal transaction is approved, the system determines that sufficient funds exist in the requested account, the maximum daily limit is not exceeded and the ATM has sufficient funds in its cash dispenser. If the ATM does not have sufficient funds, then an apology message is displayed and the card is returned. If the transaction is approved, the requested amount of cash is dispensed, and the card is returned.
The above requirements of the ATM
system are structured using the proposed
methodology and the structured requirements
are used to generate the UCAD and test cases
automatically using the tool.
STRUCTURING ATM REQUIREMENTS
Once the SRS is available for the ATM system,
it is analyzed and the following use-cases are
identified. The corresponding use-case diagram
is shown in Figure 1.
Figure 1: ATM Use-case Diagram
Source: Infosys Research
Validate Card and PIN
Withdraw Cash
Query Account
Transfer Funds
The Withdraw Cash use-case is
considered for illustrating the Behavioral
Slicing process. The SRS of the ATM system is
uploaded into the left pane of the window of the
tool as shown in Figure 2 and the user uses the
right pane of the window to edit and structure
the use-case requirements making use of the
key words described earlier. The structured requirements are saved and stored internally in the XML format shown in the next section. This XML format is analogous to a UCAD which describes the Withdraw Cash use-case.
System Name: ATM System
User: Customer
Usecase Name: Withdraw Cash
Description: Withdrawing cash from ATM
Requirement:
Input id=1: User selects withdrawal,
enters the amount and selects the account
number
Process id=2: Perform all system checks to
carry out the transaction
Decision id=3:
If: ATM is out of funds
Goto: 4
Else:
Goto: 5
Output id=4: System displays an apology
message
Goto: stop
Decision id=5:
If: Customer has enough balance
Goto: 6
Else:
Goto: 7
Output id=6: System dispenses the
amount
Goto: stop
Output id=7: System displays an apology
message
Goto: stop
GENERATION OF TEST CASES
The tool generates the XML le in the following
format containing the requirements information
for the use-case.
Figure 2: Tool Assisted Structuring of SRS Source: Infosys Research
<?xml version="1.0"?>
<taskflow name="Withdraw Cash">
<description> </description>
<initialnode id="node9">
... ... ...
</initialnode>
... ... ...
... ... ...
<finalnode id="node10">
... ... ...
</finalnode>
<useraction id="node7">
... ... ...
</useraction>
<systemprocess id="node2">
<transition>
<target>node3</target>
... ... ...
</systemprocess>
<systemoutput id="node5">
... ... ...
</systemoutput>
... ... ...
... ... ...
<decision id="node3">
<name>decision</name>
<description>ATM is out of funds</description>
... ... ...
</decision>
</taskflow>
The requirements stored in the XML
format are used by the tool to automatically
generate the UCAD [Fig. 3]. Then the tool
processes the UCAD and enumerates all the
paths. Further, the tool generates one test
scenario corresponding to each path of the
UCAD. The tool decomposes each test scenario
into units of behavior and displays each
Figure 3: Activity Diagram Generated from Structured Requirements
Source: Infosys Research
scenario in a tabular format as shown in Figure
4 where each row of the table corresponds to
(a) user-inputs, (b) condition, (c) system output
(expected output). All the conditions appearing
in the table are solved by a constraint solver [8] to find whether the path is feasible or not and thus discover inconsistencies, if any, at an early stage of the development. If the path is feasible, then the input data for the test cases are generated from the user inputs column of the test case table using the Category-Partition method
[9].
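As a rough illustration of the Category-Partition idea, the sketch below enumerates input combinations from categories and choices; the categories and values shown are hypothetical and are not taken from the case study or the tool.

# Sketch of Category-Partition style input generation for a feasible path such
# as "customer has enough balance"; categories and choices are illustrative.
from itertools import product

categories = {
    "account_type":      ["savings", "current"],
    "withdrawal_amount": [100, 5000],          # below and near the daily limit
    "atm_cash_level":    ["sufficient"],       # fixed by the path's conditions
}

for combination in product(*categories.values()):
    test_input = dict(zip(categories.keys(), combination))
    print(test_input)   # one concrete input set per combination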
CONCLUSION
The manual approaches used in software test
planning activities are effort-intensive and
prone to human errors. The approach proposed
here helps in tackling these issues. Our results
indicate an average productivity improvement
of 50% in the generation of test cases. The tool
covers all paths of UCAD, thus enhancing the
quality of test case generation.
Some of the key issues of the software test planning phase that are still unaddressed are effective and efficient methodologies for (a) automatic generation of test data and scripts and (b) test effort estimation.
Figure 4: Test Case Table as Generated by Tool
Source: Infosys Research
REFERENCES
1. Planning Report 02-3: The Economic Impacts of Inadequate Infrastructure for Software Testing, U.S. Dept. of Commerce, National Institute of Standards and Technology (NIST), Technology Program Office, Strategic Planning and Economic Analysis Group, May 2002
2. C Heitmeyer, R Jeffords and B Labaw, Automated Consistency Checking of Requirements Specifications, ACM Transactions on Software Engineering, Vol. 5, No. 3, July 1996, pp. 231-261
3. P A V Hall, Relationship Between Specifications and Testing, Information and Software Technology, Vol. 33, No. 1, 1991, pp. 47-52
4. C Nebut et al., Automatic Test Generation: A Use-case Driven Approach, IEEE Transactions on Software Engineering, Vol. 32, No. 3, March 2006
5. R Boddu, L Guo and S Mukhopadhyay, RETNA: From Requirements to Testing in a Natural Way, 12th IEEE International Requirements Engineering Conference, 2004
6. Paul Gerrard, Testing Requirements, Systeme Evolutif, http://www.evolutif.co.uk/testReqs/TESTREQS.html
7. A Cockburn, Basic Use-case Template, http://alistair.cockburn.us/index.php/Basic_use_case_template
8. S Biswas and V Rajaraman, An Algorithm to Decide Feasibility of Linear Integer Constraints Occurring in Decision Tables, IEEE Transactions on Software Engineering, Vol. SE-13, No. 12, 1987
9. T J Ostrand and M J Balcer, The Category-Partition Method for Specifying and Generating Functional Tests, Communications of the ACM, Vol. 31, No. 6, 1988, pp. 676-686.
Testing Challenges
in an SOA Application
By Sandeep K Singhdeo, Lokesh Chawla and
Satish K Balasubramanian
Get face-to-face with some challenges that baulk
successful testing in an SOA environment
Why is testing an SOA application totally different from browser, client/server and mainframe testing? Primarily because a traditional testing life cycle involving unit, integration, system and acceptance tests simply won't work for Service Oriented Architecture (SOA) projects. SOA projects are as much about the assembly of system components into an integrated whole as they are about the development of the components themselves. This assembly, despite emerging standards, vendor support and best practices, is often far more technically complex than many other technologies. Since each system component comes with its own set of quality concerns, executing effective testing for a completely integrated service-oriented architecture is a technology nightmare.
SOA has helped in breaking down many complex business processes and has aided in easing many intricate business interactions. Many of the benefits of SOA have themselves become challenges to testing an SOA application. This paper details the key benefits of SOA and the challenges they pose in testing and validation.
SOA is Distributed by Definition: Services
are based on heterogeneous technologies. No
longer can we expect to test an application
that was developed by a unified group, as a
single project, sitting on a single application
server and delivered through a standardized
browser interface. The ability to string
together multiple types of components to form
a business workflow allows for unconstrained thinking from an architect's perspective and paranoia from a tester's perspective.
In SOA, application logic is in the middle tier, operating within any number of technologies, residing outside the department, or even outside the company.
Exception Management vs SLAs: Quality
governs the successful delivery of SOA
applications. Service availability and response
times are some of the critical factors driving SOA
testing effort. It will be a daunting task to manage exceptions unless Service Level Agreements (SLAs) specify performance guarantees associated with the web service operations and business processes that are essential for business-critical operations. With an SOA, application stress points can be anywhere, and they change as individual services are added to the workflow or changed.
Finding the root causes of problems
across the middle tiers of SOA applications
is difficult. Testing a front-end user interface
becomes irrelevant when it provides no insight
into what is actually happening at the back
end.
UNDERSTANDING SOA
From a technology perspective, SOA can
be characterized as an approach that treats
software resources as services, which are
essentially software components that:
Provide a discrete package of technology functions with a defined role, having a clearly defined purpose, with explicit duties or responsibilities;
Can be insulated from each other, so the internal design of the service can be fairly flexible; and
Can be integrated with each other in a standardized manner.
SOA is an implementation process of having services that are shared among applications. Service providers create services and service consumers utilize these services. Composite services are made up of two or more services, and middleware such as web services or WebSphere MQ creates a route that triggers these services. The services used in SOA are not limited to web services, but can include other technologies such as Distributed Component Object Model (DCOM) and XML over Remote Method Invocation (RMI).
At a high level, SOA can be divided into four
basic components (Fig. 1):
Figure 1: SOA Architecture
Source: Infosys Research
1. SOA Architecture provides the conceptual
and logical framework for services, their
use and integration.
2. SOA Process is a series of defined planning
and guiding processes to support SOA.
The process addresses each phase of
developing systems from identifying
the business needs to implementing and
using services.
3. SOA Management is the act of putting
these SOA processes into practice, from
the point at which the business needs are
identified, through to implementing and
using services as a natural part of our
technology and systems.
4. SOA Integration is the act of physically
implementing services - placing the
supporting technology and systems into
production to support everyday business.
This also encompasses the supporting
hardware, network and infrastructure
that enables and ensures the operation
and health of the technology.
Let us take a hypothetical example of a general SOA implementation architecture in a banking and financial organization. We will keep returning to this example as we explain the challenges encountered while testing SOA applications.
Figure 2: SOA Architecture in Banking / Financial
Industry
Source: Infosys Research
TESTING CHALLENGES IN SOA
APPLICATION
According to Gartner Inc., service-oriented architecture (SOA) will be used in more than 50 percent of new mission-critical operational applications and business processes designed in 2007 and in more than 80 percent by 2010, with success of implementation depending on managing the key challenges faced in testing, debugging, managing and securing a distributed SOA application.
Currently, SOA is the buzzword across industries and many organizations are looking forward to implementing a full-fledged, highly distributed SOA application that could help them integrate remote business applications and outside vendor applications and enable B2B connections. The distributed nature of the implementation by itself poses a very big challenge for successful implementation of SOA. Also, SOA implementation on the mainframe is catching attention in BFSI industries as a majority of their systems are still on the mainframe. With current business applications demanding more real-time transactions rather than legacy batch processing, combining the brilliance of the mainframe with the intricacies of service-oriented architecture seems to be the best way out. There are many legacy modernization projects that have been initiated in recent times to realize SOA benefits. Testing has historically been a complex phase in mainframe software development. Setting up test cases, debugging, and running the developed piece of code for all possible test scenarios has always been time consuming and demanding on mainframe developers.
For the financial industry, SOA applications are composed of loosely coupled business-level services, distributed over a network, with many disparate systems ranging from age-old legacy to current web applications. Testing thus needs to be conducted in the following scenarios:
End-to-end testing
Interface and third party system testing
Testing of each and every service
Ensuring correct functionality under load.
End-to-end testing exercises an entire business process path to assure that seamless integration has resulted in the intended execution of transactions, interactions, and data transformations between the financial organization's internal systems and all external services. In SOA, application logic is in the middle tier, operating within any number of technologies and residing outside the traditional setup or framework, so the challenge lies in simulating unavailable services.
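One common way to work around unavailable or external services is to stand in a stub that returns canned responses. The sketch below is a generic illustration only; the port, endpoint behavior and response payload are invented and do not correspond to any specific middleware or product.

# Sketch of a service stub used to simulate an unavailable external service
# during end-to-end testing; it answers every POST with a canned JSON reply.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class StubService(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)                      # consume the request body
        canned = {"status": "APPROVED", "reference": "STUB-0001"}
        body = json.dumps(canned).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)                       # always return the canned reply

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StubService).serve_forever()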
ISSUES IN SOA TESTING IN BFSI
INDUSTRY
The BFSI industry faces multiple testing challenges, some of which are listed below.
Complexity of the System: For SOA applications, business processes are asynchronous and span several applications. Many applications are mainframe legacy applications with legacy databases like IMS and DB2. Also, usage of web services is restricted on the mainframe, thereby increasing the complexity of testing.
Test Coverage: Traditional testing tools cannot isolate all SOA interactions across the different layers, which forces reliance on functional and black-box tests. As a result, test coverage cannot be reliably measured.
Business Process Visibility: In SOA applications, faceless services are extensively used. As a result there are numerous invisible hand-offs/failure points due to which plumbing-level errors are often untraceable, leading to unexpected downstream impacts.
Application Integration: An SOA solution is typically derived from components that are constructed, tested and hosted by multiple stakeholders. Testing must thus address functionality, availability and quality of service across these multiple stakeholders; since the service components are well-encapsulated, test coverage evaluation and problem determination are difficult during integration and system testing.
More complex scenarios uncover further integration testing needs if a service is unavailable at runtime or if it needs to be replaced. Involvement of middleware like IBM MQ or WebSphere MQ complicates the entire service orchestration and poses a challenge to testing.
Security Testing Challenges: The SOA
application for financial services deals with very sensitive client data, which needs authentication checks for the users before they access information or carry out real financial transactions through the application. The integration of security features in the SOA application in itself poses huge challenges for testing the overall security of the application.
Functional and Regression Testing: A service that does not respond with the expected quality of service can be considered to be violating SLA norms. External factors, such as heavy network or server load, can affect a service's performance. Test case generation is more complex and expensive for generating the combination of cases and inputs that can cause SLA violations.
Since services need to withstand high load and adverse environments, proper regression testing that closely mimics the production environment needs to be performed. It is often very difficult to generate production-like scenarios for testing.
Performance Testing: Performance testing
is critical for any application and integrated
systems are no exception. The more integrated
the system, the more complicated performance
testing is. Testing the scalability and robustness
of web services and determining performance
and endurance characteristics of their WSDL
operations is a challenge.
Determining response times, latency and throughput profiles for target web services in a loosely coupled environment is a real challenge. In addition to performance profiles, testing for a specified duration to measure endurance and robustness profiles is also required.
Scalability is another criterion, which is assessed by bombarding target web services with varying SOAP messages across a range of concurrent loading clients to determine behavior and endurance.
Interoperability Testing: Interoperability testing is the other type of SOA testing; it determines the design-time and run-time interoperability characteristics of the target web services while loading a web service WSDL into consumer applications. The testing task should also include running a set of comprehensive WS-I profile tests and reporting interoperability issues with the
Due to the sheer complexity of the system, an SOA
environment in BFSI industry can pose multiple testing
challenges to negotiate
web service's WSDL. Adhering to WS-I profiles ensures that SOA assets are interoperable and that the WSDL can work within a heterogeneous platform/framework.
Testing the interoperability of web services requires creating specialized test suites for a WSDL. These tests ensure that the target web services are interoperable, by actively sending specialized requests to the web services and determining whether the web service responds per the WS-I profile specification.
Vulnerability Testing: Vulnerability assessment is another important testing criterion for an SOA application. By creating specialized tests for a target web service, network and security testers can measure the vulnerability profile of the target web service. Testing for vulnerabilities such as buffer overflows, deeply nested nodes, recursive payloads, schema poisoning and malware traveling over SOAP messages ensures that these do not affect the critical web services.
The testing also includes the ability to
rapidly scan web services and assess areas of
exposure, determine severity levels, provide
vulnerability diagnosis and publish remediation
techniques.
CONCLUSION
The Service-oriented Architecture (SOA) vision
is everywhere, garnering almost universal
acceptance among vendors and customers alike.
Today's market dynamics make it sensible for organizations to implement service-oriented architecture to reduce operational costs, to combine products across lines of business, to distribute them over many evolving channels and to reduce time to market for new products.
Gone are the days when testing would be left to the end of a project plan and sacrificed when time was running out. Waiting to find out whether you have a problem, waiting for unavailable systems or waiting while you randomly try to discover the root cause of a problem in a complex system all compromise the speed to market and the intent of SOA.
SOA, with its advantages, brings in key challenges like coverage, automation and business process visibility, along with increased complexity of the systems. SOA quality assurance needs a special focus on the testing challenges of performance, interoperability, integration, security and vulnerability.
REFERENCES
1. Yen Natis and Roy Schulte, Introduction to Service-Oriented Architecture, Gartner Research, Id No. SPA 19-5971, April 2003
2. Pal Krogdahl, Gottfried Luef and Christoph Steindl, Basics of SOA and Agile Methods, IBM developerWorks, July 2005. Also available at http://www.ibm.com/developerworks/webservices/library/ws-agile1/
3. Scott Barber, SOA Testing Challenges, PerfTestPlus.com Webinar, May 2005. Also available at http://www.perftestplus.com/resources/SOA_challenges_ppt.pdf
4. Gerardo Canfora and Massimiliano Di Penta, Testing Services and Service-Centric Systems: Challenges and Opportunities, IT Professional, Vol. 8, No. 2, 2006, pp. 10-17
5. Bringing SOA Value Patterns to Life: An Oracle White Paper, June 2006
6. SOA Adoption Benefits and Challenges during Gartner Symposium/ITxpo: Emerging Trends, Press Release on SOA, April 2007.
How to Select Regression
Tests to Validate Applications
upon Deployment of Upgrades?
By Anjaneyulu Pasala PhD, Yannick L.H. Lew Yaw Fung and Ravi Prakash Gorthi PhD
Regression testing techniques based on dynamic
behavioral analysis promise to address
re-validation related pain-points
Component/application developers periodically release upgrades to their software due to bug fixes and enhancements. The current practice of executing the entire system test suite to validate applications upon deployment of these upgrades is both expensive and time consuming. The existing automatic regression test selection techniques that recommend smaller regression test suites either depend on the availability of source code or on version change information made available to testers. Further, techniques based on static information analysis are not efficient. Therefore, a regression test strategy based on capturing and analyzing the runtime behavior of the application has been proposed. This strategy recommends smaller test suites to validate applications upon deployment of upgrades to components. Methods based on dynamic analysis are more efficient, as they analyze the application by executing it according to its intended use. Based on the proposed approach, a prototype tool called InARTS (Infosys Automatic Regression Test Strategy), which determines the impact of upgrades on .NET based applications and suggests a reduced set of regression tests, has been developed. A case study has been performed on a practical application using InARTS to find the efficacy of the tool. The results of the case study show, on average, a reduction of 50% in test effort.
REGRESSION TESTS: SELECTION
Software maintenance consumes between 60% and 80% of the overall software life cycle expenditure [1]. Software maintenance typically involves code changes to satisfy customer requirements like the addition of new functionality, improving existing functionalities or fixing bugs. After incorporating each change, the impact of the change is analysed and the software is re-validated using regression testing. Regression testing involves selective re-testing of an application or component to verify that changes have not caused any unintended effects and that the application or component still complies with its specified requirements. The current practice of regression testing applications using the complete system test suite, called the retest-all strategy [2], upon deployment of upgrades to components is costly and time consuming, leading to delays in delivering the software applications to clients.
To minimize the time and cost
of regression testing of applications upon
deployment of upgrades, a number of regression
test selection techniques have been proposed
[3-8]. Largely, these approaches are based on static analysis of source code (analyzing the information available on changes made to components without actually executing the system) and are thus not suitable when source code is not available to test architects for analysis, as with upgrades of Commercial Off-The-Shelf (COTS) components. Further, some of these techniques make use of complex structures that represent program elements as a series of nodes and edges of a graph. This representation is slow to process. Moreover, these techniques rely on code coverage information analysed at statement-level granularity, involving the complete list of statements that are executed for a given test case. The process of registering information on statement execution is again slow and tedious. Though COUGAR uses runtime interactions at method level to analyse the impact of upgrades to COTS components built using procedural programming languages such as C, it is not suitable for analysing .NET applications built using C# because it cannot handle object-oriented features like dynamic binding, exceptions, etc. [7].
Often, for a large and complex system comprising thousands of test cases, the developers intuitively select the regression tests that need to be re-executed, based on their experience and the program change specifications. This technique is neither safe nor precise. Further, in practice, software product development organisations largely outsource the testing activities to third-party testing services providers.
Selecting regression tests intuitively and unscientifically is uncalled for in a large and complex system
In general, these third-party organisations do not have access to the source code or to any information on changes made to the software, making it difficult to analyse the changes. To make automation of regression test-case selection successful, the test-case selection techniques must be safe and precise [6]. A safe methodology ensures that all test-cases susceptible to behaving differently as a result of changes made to the software will be selected for re-execution. The precision criterion refers to the ability to specifically target only those test cases that will be affected by the changes. Therefore, if a technique is precise, then it must be safe as well.
An approach that overcomes these shortcomings by building a new representation based on class and method dependencies to analyse the impact of the changes and to select the test cases to be re-executed is presented here. The approach is based on capturing and analysing the dynamic behaviour of the software system. Techniques based on dynamic behavioural analysis are efficient, as the system is analysed by executing the application according to its intended use by the end user. The approach is also unique in that it supports .NET platform compliant object-oriented programs built using languages such as C#. Based on the proposed approach, a prototype tool called InARTS has been developed. A case study has been carried out on an industrial application to understand the practical implications of using the tool. Initial results have shown a considerable reduction in regression testing effort and hence a reduction in testing cost.
The subsequent sections describe in detail our approach to test case selection and the development of the InARTS prototype tool.
OVERVIEW OF THE APPROACH
The approach to regression test selection is composed of three core functional parts that perform several important tasks. These are:
(a) Capture dynamic interactions among all software components, along with their invoked methods, across all application processes by executing the complete system test suite.
(b) Find affected methods in a new software upgrade through change impact analysis, and
(c) Select test cases to be re-executed upon deployment of upgrades.
The InARTS tool is designed and developed to perform these tasks automatically. InARTS analyses changes made to software developed using .NET compliant object-oriented languages such as C# and selects a regression test suite based on the impact of those changes. The change impact is analysed both syntactically and semantically.
Figure 1: Relations Between Test Cases and Executed
Methods
Source: Infosys Research
Capture Dynamic Behavior
The dynamic behaviour of a software system is defined as a set of interactions among all components such as COTS, application and system components, along with their invoked functions, across all application processes. The dynamic behaviour of the system is captured by executing the complete system test suite developed and used during system testing of the application. For each test case, the runtime interactions at method level are captured. The captured behaviour is then modelled into a graph called a Function Interaction Graph (FIG), similar to the Component Interaction Graph (CIG) used in [7]. A typical FIG is shown in Figure 1 and depicts the relationships between test cases and the methods executed during execution of the respective test cases. For instance, execution of test case T1 involves the methods <N1.C1::F0, N1.C4::F3, and N2.C8::F22>. This graph needs to be built once for each software application to be analysed and hence involves a one-time cost. The graph thus built is analysed for change impact and appropriate test-cases are selected for re-execution.
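As a rough illustration of how such a graph drives test selection, the sketch below keeps the FIG as a simple mapping from test case to the methods it executed (T1 mirrors Figure 1; T2 and T3 are invented) and selects only the tests that exercised an affected method.

# Sketch of regression test selection from a function interaction graph,
# represented here simply as test case -> set of executed methods.
fig = {
    "T1": {"N1.C1::F0", "N1.C4::F3", "N2.C8::F22"},
    "T2": {"N1.C1::F0", "N2.C8::F22"},
    "T3": {"N1.C4::F3", "N3.C2::F10"},
}

def select_tests(fig, changed_methods):
    """Re-execute only the test cases that exercised at least one affected method."""
    return sorted(t for t, methods in fig.items() if methods & changed_methods)

print(select_tests(fig, {"N1.C4::F3"}))   # -> ['T1', 'T3']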
To build the FIG, InARTS makes use of the Common Language Runtime (CLR) profiling API available within the .NET framework. The CLR profiler based on the .NET framework 2.0 does not allow unmanaged code to be profiled. Therefore, InARTS analyses only code that is written using .NET supported managed languages, including C#, Visual Basic .NET, and managed C++.
The profiling API is able to capture only events associated with garbage collection, loading and unloading of methods, and just-in-time (JIT) compiler notifications. If the profiler were developed using a managed language such as C#, it would profile itself and would interfere with the execution of the managed code of the application being profiled. Thus, the profiler must be written in a non-managed language such as C++.
The CLR profiler tool does affect the performance of the running application as it relies heavily on interactions with the .NET CLR. Consequently, the application being tested may run at a reduced speed.
Find Affected Methods
Identification of affected functions in a new upgrade is a three-step process:
(1) Both old and new versions of the components, in binary form, are reverse engineered into either source or Intermediate Language (IL) code.
(2) By comparing the reverse engineered code of the components, the syntactical changes made to the code are identified and the corresponding methods are marked as changed.
(3) Semantic analysis is carried out by propagating the above syntactic changes, and all the affected methods are marked as changed.
InARTS converts the binaries into both source and IL code. Source code is easier to understand in terms of the logic and control flow of the program as compared to cryptic IL code instructions. However, the software vendor or customer might prevent reverse engineering of binary code into its equivalent source code. In such cases, InARTS uses IL code for change impact analysis. InARTS makes use of the reflection API as well as the metadata of the .NET framework to extract source code information such as class names, method definitions, and other statement-level instructions from binary assemblies. An implementation of the reflection API, the .NET Reflector [9], is used for this purpose.
Changes to code occur at the syntactic and semantic levels. Code changes due to a change in syntax refer to the textual differences between corresponding line statements of the code versions of a program. Pure comparison of strings is normally done to detect those changes; the diff tool in UNIX uses such a technique. Semantic analysis is concerned with understanding the program logic. A syntactic difference may not necessarily cause a change in the semantics of the program. For example, in Figure 2, there is a change between corresponding lines of code of the program. However, this does not cause a change in the semantics of the program: the final value assigned to the variable sum will be the sum of the variables a, b, c, and d in both cases. Hence, from the semantics point of view, this does not logically constitute a change in the behaviour of the program and, therefore,
should not be marked as a change. To resolve such problems, data flow analysis techniques based on program slicing have been devised [10, 11]. These techniques basically compute the variables that are going to be affected by a code change at a given point, so that if the change occurs on a statement that makes use of an affected variable, the code is considered as changed. Although slicing of program statements is a safe and precise method, it is overly complex and necessitates heavy usage of memory and processing time. Thus, scaling slicing techniques to large programs would be infeasible and too costly in terms of performance. Also, no commercial tool that identifies changes based on program slicing is available.
Figure 2: An Example of Syntactical Change with No
Semantic Change
Source: Infosys Research
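Since the body of Figure 2 is not reproduced here, the hypothetical C# fragment below illustrates the kind of change the figure depicts: the two versions differ textually, yet both assign a + b + c + d to sum, so the semantics are unchanged.

// Hypothetical illustration of a syntactic change with no semantic change.
// Old version: sum accumulated in two steps
static int SumOld(int a, int b, int c, int d)
{
    int sum = 0;
    sum = a + b;
    sum = sum + c + d;
    return sum;
}

// New version: textually different statements, but the final value of sum
// is still a + b + c + d, so the behaviour of the program does not change
static int SumNew(int a, int b, int c, int d)
{
    int sum = 0;
    sum = a + b + c;
    sum = sum + d;
    return sum;
}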
To make change impact analysis complete, both syntactic and semantic changes to a program should be considered. InARTS focuses on determining local changes through syntactic analysis and non-local changes through semantic analysis. The techniques to identify methods affected due to both local and non-local changes made to software written using any .NET based object-oriented language are presented. The fundamental techniques presented in this paper are generic enough to be implemented for analysing code changes made to software written using other object-oriented languages such as Java. Differences in language constructs and syntax will have to be compared and dealt with accordingly.
Hashing, as used in cryptography, is a reproducible technique that converts a block of data into a unique series of bytes or characters, called a hash value. The latter is much smaller than the original size of the hashed data. This hash value is used to determine whether the original data has been modified or not. InARTS makes use of the Secure Hashing Algorithm (SHA) with a key size of 256 bits, SHA-256. Using SHA-256 on the extracted method codes in the old and new versions of the component, the local changes are identified and those methods are marked as changed. Once the changed methods are identified, an extensive semantics-based change impact analysis is carried out to find non-local effects on methods, as discussed here.
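A minimal sketch of this hashing step is given below. It assumes the method bodies of the old and new component versions have already been extracted as text (source or IL); the method names and comparison strategy are illustrative assumptions, not the InARTS internals.

// Minimal sketch of local-change detection via SHA-256 hashing of extracted
// method bodies. Method names and inputs are assumptions for illustration.
using System;
using System.Security.Cryptography;
using System.Text;

static class MethodChangeDetector
{
    // Compute the SHA-256 hash value of an extracted method body
    static string HashMethodBody(string methodBody)
    {
        using (SHA256 sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(methodBody));
            return BitConverter.ToString(digest);
        }
    }

    // A method is marked as locally changed when the hash values differ
    public static bool IsLocallyChanged(string oldBody, string newBody)
    {
        return HashMethodBody(oldBody) != HashMethodBody(newBody);
    }
}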
Figures 3 and 4 provide an example of
changed code developed using C# that can have
both local and non-local propagation of changes.
The code in the new version of the program,
shown in grey, has been modified. The VAT rate
has been increased from 12.5% to 15%. When
executed, the new code will produce a different
amount to be paid by the customer. In this
case, there is a direct effect that involves only
the method _btnDisplayAmount(object sender,
EventArgs e) and it is marked as changed.
As shown in Figure 4, consider a new
method called calcCount() introduced inside
the class RetailCustomer, which overrides the
virtual method inside the superclass Customer.
It is observed that this overridden method now
will have an influence on code defined in method
_btnCCount_Click(object sender, EventArgs e). In
the original version of the program, an object
aRCust of type RetailCustomer makes a call
to the method calcCount(). Since the latter is not found inside class RetailCustomer, the C# compiler resolves the call to the method with the same signature found inside the superclass of RetailCustomer (i.e., in the class Customer).
Figure 3: Original Code of Customer Class
Source: Infosys Research
In the new version of the program, method
calcCount() is added to RetailCustomer and
overrides the already-defined virtual method calcCount() inside class Customer. In this case, the call aRCust.calcCount() now resolves to RetailCustomer's calcCount(), causing a different behaviour to be produced. Although the method _btnCCount_Click(object sender, EventArgs e) has not been modified, the output generated via the call to aRCust.calcCount() is changed.
Figure 4: New Version of Customer Class
Source: Infosys Research
The nature
of the change is due to the inheritance property
of the object-oriented paradigm as implemented
in the .NET platform. Thus, _btnCCount_
Click(object sender, EventArgs e) is marked as
changed. Hence, both _btnCCount_Click(..) and
_btnDisplayAmount(..) are marked as affected
functions. An algorithm has been devised and
implemented to identify automatically all the
affected functions from such changes.
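Since Figures 3 and 4 are not reproduced here, the condensed sketch below is written from the description above. Only the members named in the text appear; everything else (return values, bodies and the containing form class) is assumed for illustration.

// Condensed sketch of the change described above; member bodies and the
// containing form class are assumptions made for illustration.
using System;

class Customer
{
    public virtual int calcCount()
    {
        return 1;                        // original counting logic (assumed)
    }
}

class RetailCustomer : Customer
{
    // Present only in the NEW version: this override changes which method
    // the call aRCust.calcCount() resolves to at run time.
    public override int calcCount()
    {
        return 2;                        // new counting logic (assumed)
    }
}

class BillingForm
{
    void _btnDisplayAmount(object sender, EventArgs e)
    {
        double vatRate = 0.15;           // locally changed from 0.125 (12.5%)
        // ... compute and display the amount payable using vatRate ...
    }

    void _btnCCount_Click(object sender, EventArgs e)
    {
        RetailCustomer aRCust = new RetailCustomer();
        int count = aRCust.calcCount();  // resolves to the overriding method
                                         // in the new version, so this handler
                                         // is marked as affected although its
                                         // own code was never edited
    }
}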
Similarly, the non-local effects due to changes to virtual function code inside superclasses (dynamic binding through polymorphism), property methods, and class and structure attributes (both static and non-static) have been resolved with appropriate algorithms.
These algorithms have been designed and
implemented as part of InARTS tool.
SELECTION OF TEST CASES
On finding the affected methods, the impact of those methods on test case execution is identified by analysing the FIG. The FIG is analysed by mapping the test execution trace against the affected methods. If a given test-case trace contains at least one of the affected methods, that test case is recommended for re-execution. All such recommended test cases form the regression test-suite to be used to validate the software upon deployment of upgrades. For example, in Figure 1, if method F3(int), found in namespace N1 and class C4 [N1.C4.F3(int)], is marked as changed in the new version of the software, then test cases T1 and T2 are likely to produce different outputs. Therefore, T1 and T2 need to be re-executed. On the other hand, if function [N1.C1.F5(float)] has changed, only test case T2 needs to be re-executed.
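A minimal sketch of this selection step appears below, assuming each test case's execution trace has already been captured as a set of fully qualified method names; the data structures used are assumptions for illustration only.

// Minimal sketch of regression test selection from captured traces.
// The trace and affected-method collections are assumptions for illustration.
using System.Collections.Generic;
using System.Linq;

static class RegressionTestSelector
{
    public static List<string> SelectTests(
        IDictionary<string, HashSet<string>> testTraces,   // test id -> methods executed
        ISet<string> affectedMethods)                      // methods marked as changed
    {
        // A test case is recommended for re-execution if its trace contains
        // at least one affected method
        return testTraces
            .Where(trace => trace.Value.Overlaps(affectedMethods))
            .Select(trace => trace.Key)
            .ToList();
    }
}

With the traces of Figure 1, passing { "N1.C4.F3(int)" } as the affected set would return T1 and T2, while passing { "N1.C1.F5(float)" } would return only T2.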
CASE STUDY RESULTS
Using InARTS, a case study has been conducted on an existing application running on the MS Windows XP operating system and consisting of seven components (one EXE and six DLLs). Sixty-five test cases form the system test-suite. The case study consists of four upgrades released by the development team during the application regression testing cycle. The results of the case study are tabulated in Table 1. They show that InARTS achieves an average test effort reduction of ~64%. The effort is calculated assuming that the execution of each test case (test scenario) takes uniformly the same effort and time.
Table 1: Results of Case Study for 4 Upgrades
Source: Infosys Research
CONCLUSION
The technique presently handles change impact
analysis for object-oriented software components
built using any .NET compliant programming
language such as C# and supplied in binary
form. Based on the proposed regression test
selection technique, InARTS tool is designed and
developed. Some of the important extensions to
the work presented here are:
Investigating the techniques to capture
the interactions among components
involving both managed and unmanaged
code
Implementing the techniques to address
the regression tests selection to validate
applications developed on Linux and
embedded operating systems upon
deployment of upgrades to their
components
Investigating the techniques to address
the selection of regression tests to validate
systems upon deployment of upgrades to
web-services.
REFERENCES
1. George E Stark, Measurements to Manage Software Maintenance, MITRE Corporation. Available at http://www.stsc.hill.af.mil/crosstalk/1997/07/maintenance.asp. Accessed during July 2007
2. G Rothermel and M J Harrold, Analyzing Regression Test Selection Techniques, IEEE Transactions on Software Engineering, Vol. 22, No. 8, August 1996, pp. 529-551
3. T Apiwattanapong, A Orso and M J Harrold, A Differencing Algorithm for Object-Oriented Programs, 19th IEEE International Conference on Automated Software Engineering, September 2004
4. M J Harrold et al., Regression Test Selection for Java Software, International Conference on Object-Oriented Programming, Systems, Languages and Applications, October 2001, pp. 312-326
5. Xiaoxia Ren et al., Chianti: A Change Impact Analysis Tool for Java Programs, 27th International Conference on Software Engineering, May 2005, pp. 664-665
6. J Zheng et al., Applying Regression Test Selection for COTS-based Applications, 28th IEEE International Conference on Software Engineering, May 2006, pp. 512-521
7. Anjaneyulu Pasala et al., An Approach Based on Modeling Dynamic Behavior of the System to Assess the Impact of COTS Upgrades, Asia-Pacific Software Engineering Conference, December 2006, pp. 19-26
8. John Dunagan et al., Towards a Self-Managing Software Patching Process Using Black-Box Persistent-State Manifests, International Conference on Autonomic Computing (ICAC '04), May 2004
9. Reflector for .NET, http://www.aisto.com/roeder/dotnet/. Accessed during May 2007
10. K Pócza, M Biczó and Z Porkoláb, Cross-language Program Slicing in the .NET Framework, 3rd International Conference on .NET Technologies, May 2005
11. X Zhang and R Gupta, Cost Effective Dynamic Program Slicing, ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2004, pp. 94-106.
Reach Out to the Differently-Abled
Users: Be Accessible
By Shrirang P. Sahasrabudhe
Raise the accessibility quotient of your site
and attract more business
Are you a disabled-friendly organization? Have you been successful in navigating the surfeit of technology issues enswathing today's business? Have you harnessed your web presence to reach every possible customer, even the disabled ones? If yes, then you are of interest to me. If no, then read on.
It is to reach every nook and corner of
the world that you have to adapt yourself to the
wide range of delivery channels and consumer
devices that have emerged. Accessing the internet on a PDA or a cell phone is gaining popularity.
Today, an organization cannot sit back and
close its eyes to the increasing need of reaching
out to the Differently-Abled (DA) people, as
of late, DA people comprise a huge chunk of
online customers. Invention and proliferation of
assistive technologies like Screen Readers, Screen Magnifiers, and Voice Recognition Systems have
created enormous interaction possibilities for the
DA users. As a result they are participating as
potential consumers of online goods and services.
Handheld devices equipped with adaptive
technologies are opening up new avenues to
the DA users, thus providing them with better
consumption opportunities and a sense of
independence.
If consumer demography identifies catering to the varied requirements of consumers as the challenge, then accessibility can be a simple solution to it. This discussion will walk you through the DA user groups, their respective special needs and the issues they face on the web. We will also look at a few possible business impacts of accessibility.
WEB CONSUMER: WHO FITS THE BILL?
Anybody having internet connectivity, purchasing power and the willingness to spend qualifies as a customer. DA Screen Reader users, users of PDAs, users of different browsers, users of low/high bandwidth and users who are alien to the language of the content of the site all qualify as customers on the web. All of them have individual preferences and special needs. Only by choosing to be fairly accessible can one attract and retain them. At this juncture, we would like to understand what the term accessibility refers to and what a disability or different-ability means in the context of the web.
DISABILITIES IN THE CONTEXT OF WEB
Disability in the context of the web has a far wider meaning and much deeper penetration than is usually perceived. While the word disability for an average person may connote a person without eyesight, or someone who cannot hear, or someone who is bound to a wheelchair, in the context of the worldwide web it means much more. It can be defined as any substantial inability to accomplish a task as a result of one's physical, psychological, technical or situational status. This comprehensive definition makes everyone a part of the disability community at some point in time, reinforcing the need for accessible websites.
A link is an image, but visually-impaired Screen Reader users are unable to read the content of the image
Clicking a link requires using a mouse, but the user is motor-impaired; audio prompts are used to communicate different messages to visitors, but the visitor cannot hear
The monochrome device user cannot differentiate between colors, and the website says All products marked in RED are on sale at 30% discount; the text size is too small for elderly people to read
The site functions only with JavaScript-enabled browsers, but the user has scripts disabled for security reasons; or the language of the content is very complex and the user has cognitive and learning difficulties.
The scenarios discussed above necessitate that every organization restructure its websites, keeping in mind the accessibility levels of the users.
ACCESSIBILITY: THE GATEWAY TO ABLE
BUSINESS
Accessibility, within the context of this paper, can be defined as a collection of facilities and amenities that assist people with physical or cognitive disabilities to gain access to the services offered by an enterprise. As discussed earlier, the reasons for the inability to access can range from social, physical and economic to situational factors. Accessibility is about creating a barrier-free world where these factors have been appropriately taken care of. A few examples of accessibility are:
few examples of accessibility are:
Building a walk-over ramp next to a
staircase
Providing Braille labels in elevators
Marking goods with tactile codes.
Web accessibility is a special case of universal accessibility. It refers to the creation of web pages/sites that can be accessed and utilized by all users with reasonable ease, taking adequate care of their backgrounds. It comprises standards, techniques, technologies and legislation. Accessibility that is devoid of perceivable, operable, understandable and robust (POUR) characteristics is a bane to the differently-abled user.
Perceivable: Anything communicated to the human brain through one of the five senses can be called perceivable. The WWW is a digitized medium, and the visual, auditory and tactile senses are the most relevant in the context of the web, as they can absorb information from electronic means. Accessibility requires dissemination of data through multiple alternative modes, such as an audio clip along with its text transcript, or textual equivalents for images that are transformable into audio and tactile forms. Text is the most easily transformable form of information. It can be converted into speech by Screen Readers and into tactile form by refreshable Braille displays.
Operable: A website should be operable with a variety of devices and configurations. Users differ in their choice of input device and also in their abilities to understand and respond. Some might use only a keyboard for interaction, or need extra time to complete online transactions, whereas others might use some specialized input device to complete the job.
Understandable: Understandability of the content and the functions is essential from a user's point of view. One may design a perfectly perceivable and operable site, but the users might find the content within the site very difficult to understand. Complex language for the content, cryptic instructions, excessive use of jargon and difficult navigation reduce understandability. Accessibility requires clear and consistent navigation; use of simple and readable language; appropriate explanations of technical terms; and clear and precise instructions.
Robust: Technological advances characterize the changing business paradigms. While websites embrace the latest technologies, robustness should also be ensured on older and minimally equipped technologies. Users neither use similar browsers, nor are they equally equipped with the latest software and hardware to view the websites. While an accessible site must work with highly sophisticated browser technologies, it should at the same time be reasonably compatible and usable with older technologies. Having a site that works only with IE7 or only with Mozilla can be a bad idea.
ACCESSIBILITY: THE MAGIC WAND FOR YOUR BUSINESS
There is no gainsaying the fact that the better the accessibility of a website, the larger would be the quantum of its clicks. These clicks may as well translate into business transactions, thus keeping the firm in business.
Reach One and All: Accessibility would make your site easy to use, find and maintain, and would support easy upgrades. It would thus attract and retain more and more customers.
Statistics show that more than 750 million people worldwide have some sort of disability and their discretionary income amounts to $175 billion. Considering the comprehensive definition of web disability, the actual number could be significantly larger [1]. Only by being accessible can you reach all your users. Moreover, for DA users, online business and e-commerce is an essential need and not just a matter of convenience.
Serve Multimodal Delivery Channels: The web is multimodal. Advancements in electronics and wireless communication have created a wide range of devices supporting different modes of communication as well as various ways of presentation. The number of users accessing the
worldwide web over PDAs, mobile phones and WebTV is rising incessantly. It has been
projected that some 58 million PDAs would be
sold in 2008 alone [2]. Websites failing to meet
accessibility requirements will not be able to
serve this growing customer population. In such
a scenario, fair accessibility would prove to be a
differentiator for online business.
Be Usable and Lightweight: Implementing
accessibility practices always enhances usability of
the website. When you take care of disabled user
groups, it automatically provides better access to
regular users. Clear and consistent navigation and layout help users with cognitive disabilities and, at the same time, also help increase usability in general. The use of accessible design results in sites that are lightweight and get downloaded quickly. Several user surveys reveal that if a site takes more than 5 seconds to appear on the screen, the customers are likely to click away. Price being the first factor, download time stands as the next important factor for shoppers to decide whether or not to shop on the site. Disappointed shoppers not only desert you but narrate their negative experiences to their family, friends and associates, resulting in negative viral marketing for your organization.
Save on Maintenance and Updation Costs:
Most business websites are frequently updated
to meet agile business requirements, which add
to recurring costs. Accessible design mandates
use of style sheets for controlling presentation.
Making site-wide changes would then require modifications to a single style sheet and not to every page. Thus, maintaining a 2,000-plus page monster becomes quite cost effective. Use of style
sheets equips the users with more control to view
the site on any device without any changes to the
site. This would directly result in cost savings
over a period of time that may turn out to be a
substantial pecuniary gain.
Mom and Dad are Growing Older: The average
age of the world population is steadily rising. As
the incidence of disability is strongly correlated with the age factor, the number of disabled users would go up simultaneously. Their participation as potential
consumers of online businesses would entirely
depend on accessibility of the websites [3, 4].
DOES YOUR BUSINESS NEED
ACCESSIBILITY?
Accessibility is a way of life. Ideally, it should not be restricted to a specific business sector. But in today's scenario, what drives any and every business is ROI (Return on Investment). So any website, irrespective of the business sector it serves, should have basic accessibility in place. It is true that accessibility requirements would differ in intensity from business to business. Any business having a B2C portal must be highly accessible, compared to B2B portals that may have lower needs of accessibility. Similarly, sites made for a specific user group need not take care of all user scenarios; for instance, a site on core medical research may not be easily comprehensible to a layman. A site specifically designed for users with cognitive difficulties could have minimum text and maximum images, making it less usable to visually-impaired visitors. But if the site is for a bank or an online shop, it surely needs to be highly accessible. In other words, public-centric sites serving all types of users must be accessible.
MYTHS AND FACTS AROUND ACCESSIBILITY
Awareness levels on web accessibility are rising steadily, but many myths and misunderstandings are still predominant amongst the client and vendor communities.
Myth: Accessibility is only for blind users
Fact: Though blind users happen to be one of the prime beneficiaries of accessibility, web accessibility is not just about them. The concept is about respecting different people, having different needs and varied preferences. Not everyone would have similar equipment to use or the same abilities to perceive.
Myth: Accessibility and aesthetics are at
loggerheads
Fact: Accessibility in no sense opposes beauty. You can use beautiful colors, graphics, animations and all sorts of fancy multimedia on your site. The only thing you must ensure is that an accessible alternative has been provided for all graphics and multimedia that are informational or functional.
Myth: Accessibility is just an icing on the cake.
Bother about it only at the end.
Fact: The best time to think about accessibility is at the beginning of the design process and not at the end of coding. Accessibility should be built into your site right from the beginning and should not be added as a topping. Testing for accessibility and implementing accessibility features at later project phases often require redesign and fundamental changes, which is obviously time-, effort- and dollar-intensive.
Myth: Run Bobby the accessibility evaluator
and the job is done
Fact: Using Bobby or any automated accessibility tester is useful for reviewing thousands of web pages in a short span of time, but it is certainly not enough. The tool can check whether an accessibility technique has been used or not, but it lacks the necessary sensibility to judge whether the technique is helping the user or not. For example, the tool can check whether every image has alt text, but it is unable to judge whether the value of alt conveys the same information as the image. Thus, manual evaluation becomes inevitable.
Myth: Offering a text-only version is the best thing you can do
Fact: Separate text-only versions are considered
accessible but actually they are not. They tend to lack rich HTML markup, and so understanding content structure and layout becomes difficult. Users with partial visual impairments (who need very little adjustment, like changing font size or adjusting color contrast) and PDA users are also compelled to use the text-only site. Naturally, the text-only users are not exposed to the fancy graphical online branding you invest in, thus making you lose a good branding opportunity. Moreover, users get annoyed and feel marginalized when they are barred from using the ordinary site. Maintaining two versions of the site adds to the overhead costs as well. It often results in old, outdated information being published on the text-only site. If special offers are
not available on the text-only site when it is up on the usual website, the users of the text-only version would never come to know of the offer. This is certainly not desirable for any business. In fact, using accessible design practices, a single beautiful site can be designed that takes care of the maximum number of site visitors.
Myth: Accessibility is costly and requires too much intelligence
Fact: Using clean HTML/CSS and appropriate accessible design and coding techniques, an accessible site can be built without putting in any extra effort. But if you need to re-engineer an existing bad site and retrofit it to make it accessible, you could end up spending large sums. So it is better to be accessible from day one and have fair accessibility as a priority requirement.
ACCESSIBILITY ACTION PLAN
To bridge the gap between the Myths and Facts of
accessibility, you need to sketch an accessibility
plan.
Accessibility is wired through the site
design, coding and testing and thus cannot
be achieved merely by testing at the end. Any
organization aspiring to achieve fair accessibility
status needs to carefully look into the entire idea.
Commitment to accessibility is continuous, and
not a one-time phenomenon.
Organizations can follow a six-step process to tread the path of accessibility. W3C has suggested a similar model for accessibility implementation.
Understand your responsibilities
and jot down accessibility policy:
Understand the nature of your business
and regulatory compliance requirements
for your site across geographies you are
active in. Compile all the legislative points
required to conform with and formalize
the accessibility policy document.
Have a quick review of your website:
If you are redesigning your site, have a
quick look at your existing site and assess
the degree of your accessibility and how
far you still need to go.
Design accessibility requirements:
Once you know what exactly you need
to achieve, pen down all the accessibility
requirements in the form of checkpoints
and checklists.
Develop an accessible site: If you
are building the site from scratch,
incorporate accessibility requirements
right from the design phase of the project. If you plan to retrofit the existing website, incorporate appropriate changes to fill in the gaps. Use content authoring tools and technologies that create accessible content. Enable the development team with appropriate tools and facilities to effectively implement accessibility.
Monitor for accessibility periodically:
Monitor accessibility status of the website
during any updations or changes to the
site structure or content.
Take Remedial Actions: Take necessary remedial actions to maintain accessibility throughout the website's life cycle.
THE ROADMAP TO TESTING FOR
ACCESSIBILITY
Testing for accessibility is quite different from an ordinary functional testing exercise. Unlike other kinds of testing, accessibility evaluation should run in parallel with the design and coding phases. Accessibility testing puts
people at the center of the process. It requires a thorough understanding of DA user groups, the way they interact with the system, their respective special needs, and good knowledge of the assistive technologies they utilize. Getting DA users onboard to evaluate the site is the best thing you can do. The process has two phases: (i) algorithmic testing, and (ii) judgmental evaluation. For example, an accessibility checkpoint suggests that for every image there should be appropriate alt text. Algorithmic testing would check whether every image has an alt attribute, whereas judgment would evaluate whether the alt text makes sense to the user. The algorithmic part can be supported by automated accessibility checkers, while the judgmental part must be handled by a real human tester; a minimal sketch of such an automated check is shown below. The accessibility test methodology is then explained in the sections that follow.
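As a minimal sketch of the algorithmic part, the C# fragment below flags img tags that carry no alt attribute at all; whether an existing alt value actually makes sense is still left to the human evaluator. The regular expressions and the method name are assumptions for this illustration and are no substitute for a full accessibility checker.

// Minimal sketch of an algorithmic accessibility check: list <img> tags
// that have no alt attribute. Regular expressions and names are assumptions.
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class AltTextChecker
{
    public static List<string> FindImagesWithoutAlt(string html)
    {
        var offenders = new List<string>();
        foreach (Match img in Regex.Matches(html, @"<img\b[^>]*>",
                                            RegexOptions.IgnoreCase))
        {
            // Checkpoint violated: the image carries no alt attribute at all
            if (!Regex.IsMatch(img.Value, @"\balt\s*=", RegexOptions.IgnoreCase))
            {
                offenders.Add(img.Value);
            }
        }
        return offenders;
    }
}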
Test Requirements Gathering: The process begins with clearly understanding and documenting the accessibility compliance requirements for the site. The inputs for deciding on compliance levels depend on a few factors. To start with, gaining knowledge about your varied prospective users and the special needs they might have largely influences the level of accessibility you need to achieve. Without understanding the user, you may tend to design a compliant but less usable website. Once you know which different abilities you are planning to take care of, you can devise your tests to focus on the respective user groups. Similarly, looking at your competitors' and peers' websites can bring to light the issues you must tackle and the issues that are costliest to fix. It is equally important to know the minimum level of accessibility you must achieve in terms of compliance levels like A, AA and AAA. The level of compliance is generally mapped to the A, AA and AAA levels specified by the W3C Web Content Accessibility Guidelines (WCAG) [5]. These guidelines and their respective checkpoints state what is acceptable and what is necessary for accessibility. Your time and money constraints would drive your decision. If you know your budgets and timelines clearly, it will help you prioritize the
issues to test. Once you know what to test for, document it in the form of a Test Traceability Matrix (TTM). This matrix would give a complete picture of all the checkpoints to be verified and their association with the applicable accessibility guidelines.
Test Case Preparation: This phase is very important, as here we determine what to test. Ideally, every page needs to be tested for accessibility, but practically this may not happen. Two approaches, the user task approach and the page type approach, are to be reckoned with to make testing more comprehensive.
User Task Approach: List and prioritize the user
tasks as must, should and may. For example, on a banking site, a few must tasks for general customers are logging into the bank account, checking the balance and transferring money. Should tasks include, for example, viewing organizational policies, sending feedback and finding contact information, and may tasks include reading press releases, reading reports etc. Once the task list is ready, all the pages pertaining to the task flows should be selected.
Page Type Approach: At a higher level, pages on any website fall under certain page categories. Classify the pages into the following classes:
Home page/entry points
Forms
Information/content pages
Pages dynamically generated using database query output
Help/FAQ pages
Pages generated using templates
Pages having embedded interfaces and multimedia presentations
Usually the accessibility requirements for all page categories are the same, but the criticalities of the requirements differ according to the category. For example, audio captions and text transcripts are relevant in the context of audio-visual pages, whereas the use of structural markup is fit for almost all pages. So the accessibility checklist for every type of page will differ a bit as per the HTML elements used on the page. Using a wise mix of these approaches is a good idea to ensure 100% test coverage.
Test Environment Setup: In order to make testing realistic, we need to either get DA users to test or install simulators that mimic various DA users. Assistive technology software like JAWS and WindowEyes also needs to be installed and configured for executing the tests.
Test Execution and Defect Reporting: While actually executing the test, a tester will try accomplishing all the listed tasks for different user scenarios. Every screen thus viewed will be evaluated against the accessibility standards and the accessibility TTM. Wherever gaps are identified, they will be documented in terms of a general description of the problem, a reference to the respective checkpoint/standard, the affected user group etc. Automated tools would also be used to supplement the manual evaluation process. It is a wise decision to run a minimum of two tools in order to capture defects effectively.
Defect Analysis and Resolution: The problem is analyzed and possible solutions are suggested. The solutions are then evaluated for their possible impact on the User Interface (UI).
ACCESSIBILITY TESTING: POINTS TO
PONDER
AAA/Section 508 Compliance: This consists
of carefully looking at the code and verifying
adherence to AAA standards. It requires skilled
personnel with sound knowledge of accessibility
standards and techniques. Compliance testing
tools like Bobby can be of great help too.
Cross Browser Compatibility Testing: Though the HTML is the same for all, different browsers tend to differ while rendering the content. This makes it essential to ensure reasonable and uniform behavior of the site on a range of browser programs. To check for browser-related issues, one can browse the site on varied browsers like Internet Explorer, Firefox, Netscape, Opera etc., to verify compatibility.
Device Independence Testing: Verifying device
independence involves operating the site using
different input/output devices. The simplest
example is accessing the application using only the keyboard. Usually, ensuring keyboard access ensures access with other input devices such as puff-and-sip devices and single-switch devices, as these varied input mechanisms work in a way similar to that of a keyboard. One can also attempt an interaction with the site using handheld devices having tiny screens, different pointing mechanisms and touch screens.
Screen Reader or SR Testing: Visually-impaired Screen Reader (SR) users access the page in a fundamentally different way from a visual user. Unlike visual access, a Screen Reader has serial access (top left to bottom right). So, understanding the behavior of the Screen Reader is very important in order to execute the test successfully. One can make the Screen Reader read the entire page and find answers to the following questions:
Did the SR read everything on the page?
Did the reading order make sense?
Are the descriptions for the links and form fields appropriate and clearly conveyed?
Testing for Cognitive Disabilities: Unlike any other disability, cognitive difficulties are very difficult to understand. Therefore, testing your site for users having cognitive disabilities is not an easy job. Of course, AAA guidelines do state certain checkpoints regarding the clarity and simplicity of the language used, clear and consistent navigation and the like. Tests must be conducted to ensure that:
Content is present in multiple forms, such as text and images, and the instructions to the user are precise
The application flow is unambiguous
Destructive user actions, like delete or abort, ask for a user confirmation before proceeding, and the error messages are precise
The learning curve for the website is minimal
This can help users with cognitive disabilities to interact with the site in a better and more fruitful way.
Testing with Real Users: The success of a website largely depends on the number of customers that it attracts and the number that it retains. Hence, receiving direct user feedback is the best way to know customer needs. You may choose to put together a team of DA users with different or special needs to access your website along with your team of testers. Users can try to accomplish various tasks on the site. Their feedback in terms of ease or difficulty of use, availability of alternative forms for the presented content, learnability, distractibility, clarity of navigation and instructions can be recorded and appropriately acted upon. Involving DA users in the website design process is also a viable and beneficial proposition, as it could cut down on the rework costs, saving some valuable dollars.
A team may be formed based on the observations below.
Involve diverse disabilities (complete blindness, partial blindness, cognitive disability, motor impairment etc.)
Users differ in their competency in using assistive tools, so refrain from choosing only the expert users
Understand and focus on your target audience. If the user group comprises a high percentage of a specific disability, then choose relevant participants. For example, if your user group is mainly visually-challenged or low-vision individuals, then choose users having varied levels of blindness.
CHALLENGES IN ACCESSIBILITY
TESTING
Non-functional in nature, accessibility testing inevitably adds a subjective dimension to the entire gamut of processes. Anything that is subjective needs to be carefully evaluated, as the same object is perceived differently by different individuals. The major challenges faced during testing for accessibility are:
Difficulty in Understanding the Special Needs of Varied User Groups: Wearing the client's hat is very important, and particularly so in the accessibility testing space. Users with disabilities have special needs and special ways of accessing the internet. Gaining knowledge about user groups requires good observation of the real situation. Inability to understand and appreciate different abilities can hamper the sole purpose of testing.
Lack of Good Understanding of Accessibility Standards/Guidelines: Though W3C AAA compliance is the mother of all organizational accessibility policies and standards, policies may differ in their implementation of the guidelines. A logical and realistic understanding of the guidelines thus becomes imperative to achieve maximum accessibility compliance.
Not Well-Versed with Assistive Technology Products: Lack of proficiency with assistive tools can blur a tester's judgment. Usually this compels
the tester to approve or reject the tests without understanding the user's perspective. If the tester is unable to utilize the tool in an effective manner, he cannot mimic a realistic user scenario.
Overuse of Automated Testing Tools: As mentioned earlier, automated tools are helpful but not sufficient. Excessive reliance on automated accessibility checkers can result in overlooking the actual end user.
Typically, user scenarios in accessibility evaluation are various disability conditions or situations wherein the user faces an inability or difficulty in interacting with the system. To list a few:
User is unable to see the screen and is
using a Screen Reader to access it; the user has partial vision and is using screen magnification software to magnify the screen.
User is color blind and cannot differentiate between shades of violet and blue; the user has a learning difficulty and cannot understand written information.
User cannot hear at all or has difficulty in hearing; the user is suffering from repetitive strain injuries and thus is unable to use a mouse.
TOOLS TO AUTOMATE TESTING
There are a few automated accessibility testers, but they are of little or no use unless they are supplemented with real user evaluation. If it is not possible to involve DA users for testing, one can choose to utilize certain automated accessibility testers and assistive technology products. Tools utilized in the accessibility testing process fall into three categories:
Automated Accessibility Testers/Compliance Checkers: These are pieces of code that run through the underlying HTML code to find any coding-level errors. They check for the existence of specific tags and attributes.
Assistive Technologies: These are software, hardware or combinations of both that assist DA individuals in accomplishing tasks that they are otherwise unable to perform. During the test they would be utilized to evaluate the user experience; for instance, whether a JAWS Screen Reader user is able to access and use the site, or whether a screen magnifier tool can be used comfortably with the site. Screen Readers, Screen Magnifiers, Refreshable Braille Displays, Mouth Sticks and Head Wands are some examples of assistive tools and technologies.
Disability Simulators: These are programs which simulate a specific disability condition. For example, a color contrast analyzer tool can show how a page would look to a person having Tritanopia, Deuteranopia or Protanopia. A Visual Impairment Simulator can show a page as it is perceived by a person with different visual impairments. At times, the use of these tools turns out to be an eye-opener for the development team.
CONCLUSION
With business going online and governments tightening accessibility regulations, achieving accessibility can safeguard you from possible legal implications. The invention and expansion of diverse access devices and a variety of delivery channels are compelling websites to be more accessible than ever before. Recent breakthroughs in the field of assistive technologies have created unprecedented opportunities and possibilities for users with disabilities, and they are participating as strong, demanding and loyal customers of all sorts of e-commerce sites. As a result, your site needs to cater to large and diverse user groups by accommodating their special needs and individual preferences. In this scenario, religiously following accessible design practices and testing for accessibility and usability can differentiate you from all your competitors. Having an accessible web presence would essentially bring you higher revenue, good publicity, enhanced customer satisfaction and, most importantly, peace of mind.
REFERENCES
1. Mary Frances Theofanos and Janice Redish, Guidelines for Accessible and Usable Web Sites: Observing Users Who Work With Screen Readers, Interactions, Vol. 10, No. 6, 2003, pp. 38-51. Also available at http://www.redish.net/content/papers/interactions.html
2. Smart Phones Have Started to Impact PDA Sales, June 2003. Available at http://www.etforecasts.com/pr/pr0603.htm
3. Reaching Out to Customers with Disabilities, Americans with Disabilities Act Online Course for Businesses. Available at http://www.ada.gov/reachingout/intro1.htm
4. ADHD, Baby Boomers, Ageing Population drive Disability Increases, Release by Australian Institute of Health and Welfare, December 2003. Available at http://www.aihw.gov.au/mediacentre/2003/mr20031212.cfm
5. W3C Web Accessibility Initiative, http://www.w3.org/WAI/
THE LAST WORD
Junk the Manuals,
Understand Business
From a lifecycle stage to a full-grown industry, software validation has come a long way. To compete at the next level, consulting editor Shishank Gupta feels, firms should focus on innovations around testing
Over the last decade, the meaning of the term testing has undergone a marked shift. With software solutions now running businesses, testing for business functionalities, user experience and non-functional risks is a top priority for any enterprise. The cost of software failure today goes far beyond the bug-fixing cost and is therefore bringing about a drastic change in the importance attributed to effective testing before software is rolled out to production. From a mere lifecycle stage to a multi-billion dollar industry today, testing has surely come a long way.
The traditional in-a-box IT products are now giving way to complex heterogeneous solutions that involve integration of legacy systems with new-generation ERP packages. The task of defining the right test strategy, addressing cost, time to market and reuse for every single release/upgrade is no longer a case of reading from a manual, but requires an in-depth understanding of the business and vast experience in testing similar solutions. New software paradigms like Service Oriented Architecture (SOA) have their own set of challenges, as traditional testing approaches do not help effectively validate applications built using SOA concepts.
With testing gaining importance by the day, the need for innovation is paramount. Tools that bring about predictability and help improve productivity are gaining focus. In recent times, multiple IPs have been created with testing at their core. These include estimation models, automation accelerators and frameworks, and model-based test case generators, to name a few. For practitioners, it is very important to stay tuned to these innovations and look for opportunities to create new IP, as that is the only way ahead in today's dynamic world.
About the Author
Shishank Gupta is a Delivery Manager with
the Independent Validation Solutions, Infosys
Technologies Limited. He has over a decade of
experience in the IT industry and has worked on
technologies ranging from mainframes, Unix / C++,
J2EE, middleware like MQ, TIBCO and databases
including Oracle and MS SQL server. He holds a
patent for his Test Unit based estimation model and
has authored many articles on topics like optimization
of testing effort and knowledge management. An
engineering graduate, he is also a Certified Project
Management Professional (PMP). Shishank can be
reached at Shishankg@infosys.com.
Index
.NET 47-53, 55
Analysis
change impact 40-51, 53, 54
data flow 51
dynamic 47
semantic 49-51
static information 47
syntactic 49-51
UML 17, 19, 20, 33, 34
Commercial off-the-shelf, also COTS 48, 49, 54
Common Language Runtime, also CLR 50
Component Interaction Graph, also CIG 49
COUGAR 48
Coupling Between Objects, also CBO 19, 28
Customer Relationship Management,
also CRM 9, 11, 23, 26, 29
Data
warehouse, also DW 3-8
systems, also DWS 3-4
mart 3, 7
Design
component 23, 25, 26
business process 23, 25-28, 30, 41-46
business scenario 6-7, 23, 25, 27-29
metrics 17-18, 20-21
Enterprise Resource Planning,
also ERP 20, 22, 26
Extraction, Transformation and Loading,
also ETL 3-6
Function Interaction Graph, also FIG 48, 50, 52
InFlux 33, 34
Just-in-time, also JIT 50
OLAP 5, 6
for testing, see under Testing
Porting 29, 31
Proof-of-concept, also POC 34
Rational Rose 19-20, 34
Risk
assessment 12
implementation 10-13, 15
score 12-13, 15
Slicing
behavioral 33-35, 37
program 51, 54
Software Requirements Specications,
also SRS 33-38
Test
automation 5, 23-24, 29-30, 32
coverage matrix 24
data 4, 6-8, 39
functional 6, 24, 44-45, 60
planning activity 33
requirements 24, 61
unit 68
Testing
accessibility 60
algorithmic 61
application 3
automated 31, 64
back-end 3
business process 25, 30
compliance 62
cross browser compatibility 62
data quality 4
data warehouse system 3-4
device independence 62
end-to-end 3-4, 26, 44
end user
extraction 5
front-end 4, 42
functional 3, 5, 60-61
independent 11
interoperability 45
life cycle 41
loading 6
non-functional 3
OLAP 6
performance 45
regression 30, 32, 45, 47-48, 53
screen reader, also SR 63
security 44
service oriented architecture,
also SOA 41-46, 67
software 33, 39
system 3-4, 44, 49
test script 23, 25, 27-29
transformation 5
volume 4
vulnerability 45
Use-Case Activity Diagram,
also UCAD 33-39
Validation
functional 4
non-functional 4
data 5
independent 9-12, 16, 68
SETLabs Briefings
BUSINESS INNOVATION through TECHNOLOGY
Editor
Praveen B Malla PhD
Associate Editor
Srinivas Padmanabhuni PhD
Copy Editor
Sudarshana Dhar
Graphics/Web Editor
Ramesh Ramachandran
Ravishankar SL
ITLS Lead
Ajay Kolhatkar PhD
Program Manager
Naju Mohan
Marketing Manager
Vijayaraghavan T S
Production Manager
Sudarshan Kumar V S
Distribution Manager
Suresh Kumar V H
How to Reach Us:
Email:
SETLabsBriefings@infosys.com
Phone:
+91-080-41187792
Fax:
+91-080-28520740
Post:
SETLabs Briefings,
B-19, Infosys Technologies Ltd.
Electronics City, Hosur Road,
Bangalore 560100, India
Subscription:
vijaytsr@infosys.com
Rights, Permission, Licensing
and Reprints:
praveen_malla@infosys.com
Editorial Office: SETLabs Briefings, B-19, Infosys Technologies Ltd.
Electronics City, Hosur Road, Bangalore 560100, India
Email: SetlabsBriefings@infosys.com http://www.infosys.com/technology/SETLabs-briefings.asp
SETLabs Briefings is a journal published by Infosys Software Engineering
& Technology Labs (SETLabs) with the objective of offering fresh
perspectives on boardroom business technology. The publication aims at
becoming the most sought after source for thought leading, strategic and
experiential insights on business technology management.
SETLabs is an important part of Infosys' commitment to leadership
in innovation using technology. SETLabs anticipates and assesses the
evolution of technology and its impact on businesses and enables Infosys
to constantly synthesize what it learns and catalyze technology enabled
business transformation and thus assume leadership in providing best of
breed solutions to clients across the globe. This is achieved through research
supported by state-of-the-art labs and collaboration with industry leaders.
Infosys Technologies Ltd (NASDAQ: INFY) defines, designs and delivers IT-enabled business solutions that help Global 2000 companies win in a flat world. These solutions focus on providing strategic differentiation
and operational superiority to clients. Infosys creates these solutions
for its clients by leveraging its domain and business expertise along
with a complete range of services. With Infosys, clients are assured of a
transparent business partner, world-class processes, speed of execution
and the power to stretch their IT budget by leveraging the Global Delivery
Model that Infosys pioneered. To find out how Infosys can help businesses
achieve competitive advantage, visit www.infosys.com or send an email to
infosys@infosys.com
2008, Infosys Technologies Limited
Infosys acknowledges the proprietary rights of the trademarks and product names of the other companies
mentioned in this issue. The information provided in this document is intended for the sole use of the recipient
and for educational purposes only. Infosys makes no express or implied warranties relating to the information
contained herein or to any derived results obtained by the recipient from the use of the information in this
document. Infosys further does not guarantee the sequence, timeliness, accuracy or completeness of the
information and will not be liable in any way to the recipient for any delays, inaccuracies, errors in, or omissions
of, any of the information or in the transmission thereof, or for any damages arising there from. Opinions and
forecasts constitute our judgment at the time of release and are subject to change without notice. This document
does not contain information provided to us in confidence by our clients.
Authors featured in this issue
ANJANEYULU PASALA
Anjaneyulu Pasala PhD, is a Senior Research Associate with SETLabs of Infosys Technologies. His current research interests
include component-based software engineering, software testing, regression testing, upgrades impact analysis and UML
testing profilers. He can be contacted at anjaneyulu_pasala@infosys.com.
ANURADHA GOYAL
Anuradha Goyal was a Senior Consultant with Solutions and Consulting Group and led the go-to-market industry solutions
for Enterprise Solutions group at Infosys Technologies.
ASHWIN ANJANKAR
Ashwin Anjankar is an Offshore Project Lead for Enterprise Architecture team of the Resources, Energy & Utilities unit of
Infosys Technologies. He can be reached at Ashwin_anjankar@infosys.com.
GIRISH VISWANATHAN
Girish Viswanathan is a Project manager with Infosys Technologies. He has several years of experience in implementing data
warehousing and business intelligence projects. He can be contacted at girish_viswanathan@infosys.com.
KAILASH K P CHANDUKA
Kailash K P Chanduka PhD, is a Senior Research Associate with SETLabs of Infosys Technologies. His current research interests
include model based software testing, software test effort estimation, combinatorial designs and its applicability in optimization
of software test case generation. He can be contacted at chanduka@infosys.com.
LOKESH CHAWLA
Lokesh Chawla is a Project Manager with the Banking and Capital Markets unit at Infosys Technologies. He has several years
of experience in implementing SOA projects in financial services firms. He can be contacted at lokesh_chawla@infosys.com.
MANIKANDAN M
Manikandan M is an Associate Consultant with Solutions and Consulting Group at Infosys Technologies. He currently
anchors industry solutions for telecom vertical along with the Package Assurance Solution at Enterprise Solution in Infosys.
He can be reached at Manikandan_m02@infosys.com.
RAVI GORTHI
Ravi Gorthi PhD, is a Principal Researcher with SETLabs of Infosys Technologies. Currently he heads Artificial Intelligence
Labs and also Software Test Automation Labs and has several years of IT consultancy and R&D experience in these two areas.
He can be contacted at ravi_gorthi@infosys.com.
SANDEEP KUMAR SINGHDEO
Sandeep K Singhdeo is a Senior Project Manager with the Banking and Capital Markets unit of Infosys Technologies. He has
several years of experience in implementing IT projects that use data warehouse technology, client server and web technology.
He can be contacted at Sandeep_Singhdeo@infosys.com.
SATISH KUMAR BALASUBRAMANIAN
Satish K Balasubramanian is an Architect with the BCM unit of Infosys Technologies where he is involved in implementing
SOA framework for leading financial services firms. He can be contacted at Satishkumar_B@Infosys.com.
SHRIRANG P. SAHASRABUDHE
Shrirang P. Sahasrabudhe is a Software Engineer with Independent Validation Solutions unit of Infosys Technologies.
He provides web accessibility consultancy and accessibility evaluation services to Infosys projects. He can be contacted at
Shrirang_s@infosys.com.
SRIRAM SRIDHARAN
Sriram Sridharan is a Project Manager with the enterprise architecture team of Resources, Energy & Utilities unit of Infosys
Technologies. He can be reached at Sriram_Sridharan@infosys.com.
VINOTH MICHAEL PRO
Vinoth Michael Pro is a Program Manager with Infosys Quality Group. He has several years of experience in managing IT
projects and quality assurance initiatives. He can be contacted at Vinoth_MP@infosys.com.
YANNICK LEW YAW FUNG
Yannick Lew Yaw Fung was an InStep trainee, coming from the University of Mauritius, Mauritius. His research interests are
software testing and change impact analysis. He can be reached at yck.lew@gmail.com
Subu Goparaju
Vice President
and Head of SETLabs
At SETLabs, we constantly look for opportunities to leverage
technology while creating and implementing innovative business
solutions for our clients. As part of this quest, we develop engineering
methodologies that help Infosys implement these solutions right first
time and every time.
For information on obtaining additional copies, reprinting or translating articles, and all other correspondence,
please contact:
Telephone : 91-80-41187792
Email: SetlabsBriefings@infosys.com
SETLabs 2008, Infosys Technologies Limited.
Infosys acknowledges the proprietary rights of the trademarks and product names of the other
companies mentioned in this issue of SETLabs Briefings. The information provided in this document
is intended for the sole use of the recipient and for educational purposes only. Infosys makes no
express or implied warranties relating to the information contained in this document or to any
derived results obtained by the recipient from the use of the information in the document. Infosys
further does not guarantee the sequence, timeliness, accuracy or completeness of the information and
will not be liable in any way to the recipient for any delays, inaccuracies, errors in, or omissions of,
any of the information or in the transmission thereof, or for any damages arising there from. Opinions
and forecasts constitute our judgment at the time of release and are subject to change without notice.
This document does not contain information provided to us in confidence by our clients.