Testing DB
LEARNING GUIDE # 06
Module Title: Testing Physical Database Implementation
MODULE CODE ICT DBA3 06 0411
SYMBOLS
These symbols are located at the left margin of the module. They illustrate the actions to be
taken or the resources to be used at a particular stage in the module.
LO Learning
Outcome
Self-Check
Answer Key
Resources
Reading Assessment
Activity
Remember/Tips
Use Computer
LO
1. Undertake database management system
modeling
2. Test database performance
3. Seek client feedback and signoff
Testing Physical Database Implementation
1. Introduction to Testing
Definition
Testing is no longer an adjunct to the system development life cycle (SDLC), but rather a key
part of it.
Stakeholders in testing include:
Software customer
Software user
Software developer
Software tester
Information service management
Senior organization management
The goal of a software tester is:
To find bugs
To find them as early as possible
To make sure they get fixed
Quality Principles
What is Quality?
Quality is defined as meeting the customer's requirements the first time and every time.
Quality is much more than the absence of defects: it is what allows us to meet customer
expectations.
Why Quality?
Quality is the most important factor affecting an organization's long-term performance, and it
improves productivity and competitiveness in any organization.
Quality Assurance
Quality assurance is a planned and systematic set of activities necessary to provide adequate
confidence that products and services will conform to specified requirements and meet user
needs.
It is process oriented.
Defect prevention based.
Quality Control
Quality control is the process by which product quality is compared with applicable standards
and the action taken when non-conformance is detected.
It is product oriented.
Defect detection based.
Software Process
Plan (P): Devise a plan. Define your objective and determine the strategy and
supporting methods required to achieve that objective.
Do (D): Execute the plan. Create the conditions and perform the necessary training to
execute the plan.
Check (C): Check the results. Determine whether work is progressing
according to the plan and whether the expected results are being obtained.
Act (A): Take the necessary action. If your check reveals that the work is not being
performed according to plan, or that results are not what was anticipated, devise
measures for appropriate action.
Standards/ Certifications
Management Activities
Project Management
Project management is concerned with the overall planning and co-ordination of a project from
inception to completion aimed at meeting the client's requirements and ensuring completion on
time, within cost and to required quality standards.
Project Planning
Project Scheduling
Project Costing
Monitoring Reviews
Report Writing & Presentation
Risk Management
Risk Management is concerned with identifying risks and drawing up plans to minimize their
effect on a project.
Requirement Management
Testing fundamentals
Software testing is an investigation conducted to provide stakeholders with information about the
quality of the product or service under test. Software testing can also provide an objective,
independent view of the software to allow the business to appreciate and understand the risks of
software implementation. Test techniques include, but are not limited to, the process of executing
a program or application with the intent of finding software bugs (errors or other defects).
Software testing can be stated as the process of validating and verifying that a software
program/application/product:
meets the requirements that guided its design and development;
works as expected; and
can be implemented with the same characteristics.
Software testing, depending on the testing method employed, can be implemented at any time in
the development process. Traditionally most of the test effort occurs after the requirements have
been defined and the coding process has been completed, but in the Agile approaches most of the
test effort is on-going. As such, the methodology of the test is governed by the chosen software
development methodology.
Different software development models will focus the test effort at different points in the
development process. Newer development models, such as Agile, often employ test-driven
development and place an increased portion of the testing in the hands of the developer, before it
reaches a formal team of testers. In a more traditional model, most of the test execution occurs
after the requirements have been defined and the coding process has been completed.
Classifications of Defects
Classification of Testing
Static Testing
Testing techniques that involve analysis of application components and/or processing results
without the actual execution of the application. A quality assurance (QA) review is a form of
static testing.
Dynamic Testing
Testing, based on specific test cases, by execution of the test object or running programs.
The techniques used are determined by the type of testing that must be conducted.
Tests are frequently grouped by where they are added in the software development process, or by
the level of specificity of the test. The main levels during the development process as defined by
the SWEBOK guide are unit-, integration-, and system testing that are distinguished by the test
target without implying a specific process model. Other test levels are classified by the testing
objective.
1. Unit Testing
Unit testing, also known as component testing, refers to tests that verify the functionality of a
specific section of code, usually at the function level. In an object-oriented environment, this is
usually at the class level, and the minimal unit tests include the constructors and destructors.
These types of tests are usually written by developers as they work on code (white-box style), to
ensure that the specific function is working as expected. One function might have multiple tests,
to catch corner cases or other branches in the code. Unit testing alone cannot verify the
functionality of a piece of software, but rather is used to assure that the building blocks the
software uses work independently of each other.
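As an illustrative sketch (the function under test is invented for this guide, not taken from the module), a developer-written unit test for a single function might look like this:

```python
# Unit under test: a single function, tested in isolation.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Developer-written unit tests, including corner cases and an error branch.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0      # normal case
    assert apply_discount(100.0, 0) == 100.0      # boundary: no discount
    assert apply_discount(100.0, 100) == 0.0      # boundary: full discount
    try:
        apply_discount(100.0, 150)                # invalid branch
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
```

Note that one function gets several tests, one per branch or corner case, exactly as the paragraph above describes.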
Unit Testing:
Test each unit (function, class or module) in isolation.
Follows white box testing (testing the code).
2. Integration Testing
Integration testing is any type of software testing that seeks to verify the interfaces between
components against a software design. Software components may be integrated in an iterative
way or all together ("big bang"). Normally the former is considered a better practice since it
allows interface issues to be localized more quickly and fixed.
Integration testing works to expose defects in the interfaces and interaction between integrated
components (modules). Progressively larger groups of tested software components
corresponding to elements of the architectural design are integrated and tested until the software
works as a system.
Integration testing:
Integrate two or more modules and test for the communication between the modules.
Follows white box testing (testing the code).
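A minimal sketch of such an integration test in Python; both modules and their interface are invented for illustration:

```python
# Module A: parses raw input (already unit-tested on its own).
def parse_order(line):
    name, qty, price = line.split(",")
    return {"name": name, "qty": int(qty), "price": float(price)}

# Module B: computes a total from parsed orders (also unit-tested).
def order_total(orders):
    return sum(o["qty"] * o["price"] for o in orders)

# Integration test: exercises the interface between the two modules,
# checking that B accepts exactly what A produces.
def test_parse_and_total():
    raw = ["pen,2,1.50", "book,1,8.00"]
    orders = [parse_order(line) for line in raw]
    assert order_total(orders) == 11.00

test_parse_and_total()
```

A defect in the dictionary keys either module uses would pass both unit tests but fail here, which is the kind of interface issue integration testing localizes.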
3. System Testing
System testing tests a completely integrated system to verify that it meets its requirements.
System Testing:
Confirms that the system as a whole delivers the functionality originally required.
Follows black box testing.
4. Stub Testing
In computer science, test stubs are programs which simulate the behaviors of software
components (or modules) that the module being tested depends on. “Test
stubs provide canned answers to calls made during the test, usually not responding at all to
anything outside what's programmed in for the test.”
Test stubs are mainly used in the top-down approach of incremental testing. Stubs are software
programs which act as a module and give the same output as an actual
product/software. A test stub is also called a 'called' function.
Example
Consider a software program which queries a database to obtain the sum price total of all
products stored in the database. However, the query is slow and consumes a large number of
system resources. This reduces the number of test runs per day. Secondly, the tests need to be
conducted on values larger than what is currently in the database.
The method (or call) used to perform this is get_total(). For testing purposes, the source code in
get_total() could be temporarily replaced with a simple statement which returned a specific
value. This would be a test stub.
There are several testing frameworks available and there is software that can generate test stubs
based on existing source code and testing requirements.
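The get_total() replacement described above can be sketched in Python. This is a hedged illustration: the reporting function and the canned value are assumptions, not part of the module's example.

```python
# Real implementation (assumed): runs a slow, resource-hungry query,
# e.g. SELECT SUM(price) FROM products against a large database.
def get_total_from_db():
    raise RuntimeError("real database not available in the test environment")

# Test stub: returns a canned answer instead of querying the database,
# so tests run fast and can use values larger than the real data.
def get_total_stub():
    return 1_000_000.00

def report_total(get_total):
    """Code under test: formats whatever total the supplied function returns."""
    return f"Grand total: {get_total():.2f}"

# During testing, the stub is passed in place of the real function.
assert report_total(get_total_stub) == "Grand total: 1000000.00"
```

Swapping the stub for the real function at release time requires no change to the code under test, which is what makes the temporary replacement safe.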
Acceptance Testing
A smoke test is used as an acceptance test prior to introducing a new build to the main
testing process, i.e. before integration or regression testing.
Acceptance testing performed by the customer, often in their lab environment on their
own hardware, is known as user acceptance testing (UAT). Acceptance testing may be
performed as part of the hand-off process between any two phases of development.
Building the confidence of the client and users is the role of the acceptance test phase.
It depends on the business scenario.
A. Recovery testing
In software testing, recovery testing is the activity of testing how well an application is able to
recover from crashes, hardware failures and other similar problems.
Recovery testing is the forced failure of the software in a variety of ways to verify that recovery
is properly performed. It should not be confused with reliability testing, which
tries to discover the specific point at which failure occurs. Recovery testing is basically done
in order to check how fast and how well the application can recover from any type of crash or
hardware failure; the type or extent of recovery is specified in the requirement specifications.
Typical recovery-test scenarios include:
While an application is running, suddenly restart the computer, and afterwards check the
validity of the application's data integrity.
While an application is receiving data from a network, unplug the connecting cable. After
some time, plug the cable back in and analyze the application's ability to continue
receiving data from the point at which the network connection disappeared.
Restart the system while a browser has a definite number of sessions. Afterwards, check
that the browser is able to recover all of them.
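The first scenario above can be sketched as a tiny recovery test. The file name and the atomic write-then-rename strategy are illustrative assumptions, not prescribed by the module:

```python
import os
import tempfile

# Recovery-test sketch: "crash" in the middle of an update, then verify data
# integrity. The write-then-rename pattern guarantees a reader sees either
# the old value or the new one, never a torn, half-written file.
def save_atomically(path, data):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(data)
    os.replace(tmp, path)  # the rename step is atomic

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "balance.txt")
save_atomically(path, "100")

# Simulate a crash mid-update: the temp file is partially written,
# but the process "dies" before os.replace() runs.
with open(path + ".tmp", "w") as f:
    f.write("2")  # partial new value, then crash

# Recovery check: the application's data is still the last committed value.
with open(path) as f:
    assert f.read() == "100"
```

A real recovery test would kill an actual process or pull power, but the pass criterion is the same: committed data survives the interruption intact.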
B. Security testing
Security testing is a process to determine that an information system protects data and maintains
functionality as intended.
The six basic security concepts that need to be covered by security testing are:
confidentiality,
integrity,
authentication,
availability,
authorization, and
non-repudiation.
Security testing as a term has a number of different meanings and can be completed in a number
of different ways. As such a Security Taxonomy helps us to understand these different
approaches and meanings by providing a base level to work from.
Confidentiality
A security measure which protects against the disclosure of information to parties other than the
intended recipient that is by no means the only way of ensuring the security.
Integrity
A measure intended to allow the receiver to determine that the information provided to it is
correct.
Integrity schemes often use some of the same underlying technologies as confidentiality
schemes, but they usually involve adding additional information to a communication to form the
basis of an algorithmic check, rather than encoding all of the communication.
Authentication
This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring
that a product is what it’s packaging and labeling claims to be, or assuring that a computer
program is a trusted one.
Authorization
The process of determining that a requester is allowed to receive a service or perform an
operation. Access control is an example of authorization.
Availability
Assuring information and communications services will be ready for use when expected.
Information must be kept available to authorized persons when they need it.
Non-repudiation
In reference to digital security, nonrepudiation means to ensure that a transferred message has
been sent and received by the parties claiming to have sent and received the message.
Nonrepudiation is a way to guarantee that the sender of a message cannot later deny having sent
the message and that the recipient cannot deny having received the message.
C. Stress testing
Stress testing is a form of testing that is used to determine the stability of a given system or
entity. It involves testing beyond normal operational capacity, often to a breaking point, in order
to observe the results. Stress testing may have a more specific meaning in certain industries, such
as fatigue testing for materials.
Load testing
Load testing is primarily concerned with testing that the system can continue to operate under a
specific load, whether that is large quantities of data or a large number of users. This is generally
referred to as software scalability. When load testing is performed as a non-functional activity, it
is often referred to as endurance testing. Volume testing is a way to test
software functions even when certain components (for example a file or database) increase
radically in size. Stress testing is a way to test reliability under unexpected or rare workloads.
Stability testing (often referred to as load or endurance testing) checks whether the software can
continue to function well over an acceptable period of time.
There is little agreement on what the specific goals of performance testing are. The terms load
testing, performance testing, reliability testing, and volume testing, are often used
interchangeably.
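A minimal load-test sketch in Python; the operation under test and the throughput budget are illustrative placeholders, since in practice the threshold comes from the requirement specification:

```python
import time

# Operation under test (placeholder for a real query or transaction).
def operation():
    return sum(i * i for i in range(1000))

# Drive the operation many times and measure sustained throughput.
N = 1000
start = time.perf_counter()
for _ in range(N):
    operation()
elapsed = time.perf_counter() - start
throughput = N / elapsed

# Pass/fail against a budget; the number here is an assumed requirement.
assert throughput > 10, f"too slow: {throughput:.0f} ops/s"
```

Real load tests drive the system with many concurrent users or large data volumes rather than a single loop, but the pattern is the same: apply a defined load, then assert the system stays within its budget.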
Beta testing
Beta testing comes after alpha testing and can be considered a form of external user acceptance
testing. Versions of the software, known as beta versions, are released to a limited audience
outside of the programming team. The software is released to groups of people so that further
testing can ensure the product has few faults or bugs. Sometimes, beta versions are made
available to the open public to increase the feedback field to a maximal number of future users.
Other Testing Types
1. Black Box Testing
A testing method where the application under test is viewed as a black box and the internal
behavior of the program is completely ignored. Testing occurs based upon the external
specifications. Also known as behavioral testing, since only the external behaviors of the
program are evaluated and analyzed. Common black-box techniques include:
Equivalence Partitioning
Boundary Analysis
Error Guessing
Equivalence Partitioning
A subset of data that is representative of a larger class.
For example, a program which edits credit limits within a given range ($10,000 - $15,000) would
have 3 equivalence classes: less than $10,000, between $10,000 and $15,000, and greater than
$15,000.
Boundary Analysis
A technique that consists of developing test cases and data that focus on the input and output
boundaries of a given function.
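Both techniques can be sketched against the credit-limit example above; the validator function is an assumed implementation of the stated rule:

```python
# Assumed implementation of the rule: limits from $10,000 to $15,000 are valid.
def credit_limit_valid(amount):
    return 10_000 <= amount <= 15_000

# Equivalence partitioning: one representative value per class is enough,
# since every member of a class should behave the same way.
assert not credit_limit_valid(5_000)    # class 1: below the range
assert credit_limit_valid(12_500)       # class 2: inside the range
assert not credit_limit_valid(20_000)   # class 3: above the range

# Boundary analysis: values at and just beyond each boundary,
# where off-by-one defects (e.g. < instead of <=) typically hide.
assert not credit_limit_valid(9_999)
assert credit_limit_valid(10_000)
assert credit_limit_valid(15_000)
assert not credit_limit_valid(15_001)
```

Three partition values plus four boundary values give seven tests that would catch the common comparison mistakes, far fewer than testing every dollar amount.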
Error Guessing
Based on the theory that test cases can be developed from the experience of the test engineer.
For example, where one of the inputs is a date, a test engineer might try
February 29, 2012 to probe the leap-year handling.
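The date example above can be sketched as follows; the validator is an assumed wrapper around Python's standard date checking:

```python
import datetime

# Assumed code under test: reports whether a year/month/day is a real date.
def is_valid_date(year, month, day):
    try:
        datetime.date(year, month, day)
        return True
    except ValueError:
        return False

# Error guessing: a tester deliberately tries February 29, since
# leap-year handling is a classic source of date bugs.
assert is_valid_date(2012, 2, 29)       # 2012 is a leap year: valid
assert not is_valid_date(2011, 2, 29)   # 2011 is not: must be rejected
```

Unlike equivalence partitioning or boundary analysis, these inputs are not derived from a rule; they come from the tester's experience of where defects usually live.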
2. Incremental Testing
A disciplined method of testing the interfaces between unit-tested programs as well as between
system components
Top-down
Bottom-up
Top-Down
Begins testing from the top of the module hierarchy and works down to the bottom using interim
stubs to simulate lower interfacing modules or programs
Bottom-Up
* Begins testing from the bottom of the hierarchy and works up to the top
* Bottom-up testing requires the development of driver modules which provide the test input,
call the module or program being tested, and display the test output
3. Thread Testing
A variation of top-down testing where the progressive integration of components follows the
implementation of subsets of the requirements, as opposed to the integration of components by
successively lower levels.
* Demonstrates key functional capabilities by testing a string of units that accomplish a specific
function in the application
4. Usability Testing
Usability testing focuses on measuring a human-made product's capacity to meet its intended
purpose. Examples of products that commonly benefit from usability testing are foods, consumer
products, web sites or web applications, computer interfaces, documents, and devices. Usability
testing measures the usability, or ease of use, of a specific object or set of objects, whereas
general human-computer interaction studies attempt to formulate universal principles.
Usability testing determines how well the user will be able to understand and interact with the
system. It is typically done prior to the other testing levels.
5. Alpha Testing
Alpha testing is simulated or actual operational testing by potential users or an independent test
team at the developers' site.
* This can be conducted jointly by the software vendor (seller) and the team.
6. Sanity Testing
A sanity test or "smoke test" is a brief run-through of the main functionality of a computer
program or other product. It gives a measure of confidence that the system works as expected
prior to a more exhaustive round of testing.
7. Localization Testing
The process of adapting software for a particular country or region. For example, the software
must support the character set of the local language and must be configured to present numbers
and other values in the local format.
8. Regression Testing
The selective retesting of a software system that has been modified, to ensure that any bugs have
been fixed, that no other previously working functions have failed as a result of the
repairs, and that newly added features have not created problems with previous versions of
the software. Also referred to as verification testing.
Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such
regressions occur whenever software functionality that was previously working correctly stops
working as intended. Typically, regressions occur as an unintended consequence of program
changes, when the newly developed part of the software collides with the previously existing
code. Common methods of regression testing include re-running previously run tests and
checking whether previously fixed faults have re-emerged. The depth of testing depends on the
phase in the release process and the risk of the added features. Regression tests can range from
complete, for changes added late in the release or deemed risky, to very shallow, consisting of
positive tests on each feature, for changes made early in the release or deemed low risk.
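A minimal regression-test sketch; the function and its past bug are invented for illustration:

```python
# Regression testing in miniature: a previously fixed fault gets a permanent
# test so it cannot silently come back on a later change.
def average(values):
    # Historical bug (assumed): this crashed with ZeroDivisionError on an
    # empty list. The fix returns 0.0; the test below pins that down.
    if not values:
        return 0.0
    return sum(values) / len(values)

# Re-run with the whole suite on every change:
assert average([2, 4, 6]) == 4.0   # previously passing behaviour still works
assert average([]) == 0.0          # the fixed fault has not re-emerged
```

Keeping one such test per fixed defect, and re-running the accumulated suite after each change, is the re-running-previous-tests method the paragraph above describes.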
9. Compatibility Testing
A common cause of software failure (real or perceived) is a lack of its compatibility with other
application software, operating systems (or operating system versions, old or new), or target
environments that differ greatly from the original (such as a terminal or GUI application intended
to be run on the desktop now being required to become a web application, which must render in
a web browser). For example, in the case of a lack of backward compatibility, this can occur
because the programmers develop and test software only on the latest version of the target
environment, which not all users may be running. This results in the unintended consequence
that the latest work may not function on earlier versions of the target environment, or on older
hardware that earlier versions of the target environment were capable of using. Sometimes such
issues can be fixed by proactively abstracting operating system functionality into a separate
program module or library.
Testing whether the system is compatible with other systems with which it should communicate.