
Drug Management System

Chapter 1

Abstract

Drug Audition & Research Management is a system that concentrates on the
associative standards of medical diagnosis and research and development
environments. The major problem in the drugs and pharmaceuticals industry
is to design or invent a new bio-molecular combination of a chemical. The
new bio-molecular combination should have the ability to direct its action
towards the ailment that exists in the body and fight against that ailment.
In the initial, preparatory stages of experimentation, the drug is checked
in combination on living organisms belonging to the species of mammals.
Once the drug trial experiments on these animals reach a proper status of
precision and reliability, the drugs are checked again on human beings who
are physically affected by such problems. Individuals who suffer from the
relevant ailments are identified and requested to participate in the drug
trials voluntarily. Their participation is governed by the byelaws and legal
procedures that exist under the human and civil rights provisions of the
constitution governed by the European Union. The application's database
grows as the research activity within the organization increases; at a
certain point the search for required information takes a great deal of
time and costs the organization both time and money.

The present application concentrates on the relative information that is
stored at the level of the organization while the system is in execution.
To keep the latency of the system at the lowest profile, the system manages
all the information in an MS SQL Server 2000 database, meeting the database
standards that exist at the industrial level. The application has been
developed using the .NET technologies to keep pace with present industrial
requirements. The different standards of the .NET technology have been
adopted to cater to intranet-based standards and browser-specific user
interfaces. ADO.NET database connectivity has been employed for the
database interactions.

Chapter 2

Project Synopsis

The entire project has been developed with the distributed client/server
computing technology in mind. The specifications have been normalized up to
3NF to eliminate the anomalies that may arise from the database
transactions executed by the administrators and users. The user interfaces
are browser specific to give distributed accessibility to the overall
system. MS SQL Server 2000 was selected as the internal database, as it
provides constructs for high-level reliability and security. The front end
is built on HTML standards, applied with the dynamism of ASP.NET, and the
communication client was designed using C#.NET. At all proper levels, care
was taken to ensure that the system maintains data consistency with proper
business validations. Database connectivity was planned using ADO.NET, and
authorization is cross-checked at all stages. User-level accessibility has
been restricted to two zones: the administrative zone and the normal user
zone.

About the Organization


Human Life Innovators Pvt. Limited is a drug research and development
foundation, which has had its roots in bio-medical drugs research for 20
years. The organization has a vast database that has been collected from
the operation of the system over all those years. The present system is
flooded with a huge database, and it is an unmanageable task for the
existing staff to retrieve the applicable and required data within a
limited time frame. The system has a huge resource of drugs that are made
to participate in the trials. Each drug can have various reaction agents
that may be necessary, under some special circumstances, for the overall
chemical reaction to be analyzed and scheduled. The system holds very
specific processed information that gets revealed upon the execution of the
reactions and drug trial participations.

The overall scenario of a drug and the effectiveness of its usage has to be
recorded at every stage; any confusion or discrepancy in the manual process
can create havoc under the normal operational standards of the system. To
keep the standards of precision the system needs, the accumulated
information should be collected and integrated at all the different levels
of the organization for smooth functioning and coordination. The system
needs proper handling of the investigations and the drug trial
participants, along with the clinical conditions for which they are being
treated or experimented upon. The actual system, under the manual process,
needs a huge amount of manpower to manage and maintain the information; any
miscoordination among the existing users within the working system can
disturb the overall data management standards.

Manual Process

First Phase

1. Collect information about the new drug.
2. Identify the associated reaction agents of the drug.
3. Search the ledger of reaction agents.
4. Identify the drug usage conditions.
5. Register the reference of the drug along with its usage conditions.
6. Prepare a reference to the required drug.

Second Phase

1. Search the listed drugs that are authorized for trials.
2. Make a registration for a new drug trial.
3. Register the drug in one of the existing master ledgers.
4. Identify the individual on whom the trial has to be applied.
5. Register the information of the trial outcomes as and when the drug is
   applied for testing.
6. Check the outcome and make any required recommendations as necessary.

Why the New System

The development of the new system includes the following activities, which
automate the entire process with a database integration approach.

1. The administrators have greater accessibility in collecting the
   consistent information that is necessary for the system to exist and
   coordinate.

2. The system, at any point of time, can provide the details of all the
   drugs that exist within the system, along with their reaction agent
   combinations.

3. The system can provide the generic details of all the allergies and the
   associated drugs that can be applied to them, with a click of the mouse.

4. The system can provide instantaneous information related to the drugs
   and their usage conditions, along with any special instructions.

5. The system, as the need arises, can identify all the history details of
   the trial participants, along with the outcomes of their results.

6. The system, as the need arises, can provide the status of the research
   and development processes currently under schedule within the
   organization.

7. With proper storage of the data in a relational environment, the system
   can aggregate itself to provide a clear and easy path for future
   research standards that may arise due to organizational policies.

Chapter 3

Feasibility Report

Feasibility Study

(a) Technical Description

Databases: The total number of databases identified to build the system is
22. The databases fall broadly into two categories: administrative
components and general user components. The administrative components
manage the actual master data that is necessary to maintain the consistency
of the system. The administrative databases are used purely for internal
organizational needs, and only at the upper and middle management levels.

The user components are designed to handle the transactional states that
arise whenever a general employee within the organization visits the user
interface to enquire for required data. The normal user interfaces are
associated with the environment mostly for the sake of standardization. The
user components are scheduled to accept parametric information from the
user, as per the system's necessity.

GUIs

To keep the system highly flexible for the user, the overall interface has
been developed in graphical user interface mode, applied through a
browser-specific environment.

The GUIs at the top level have been categorized as:

1. The administrative user interface

2. The operational or generic user interface

The administrative user interface concentrates on the consistent
information that is practically part of the organizational activities, and
which needs proper authentication for data collection. The interfaces help
the administrators with all the transactional states, like data insertion,
data deletion, and data updating, along with extensive data search
capabilities.

The operational or generic user interface helps the users of the system in
transactions over the existing data and required services. The operational
user interface also helps ordinary users manage their own information in a
customized manner, as per the flexibilities provided.

Number of Modules

After careful analysis, the system has been identified to consist of the
following modules; a brief data-model sketch follows the list.

 Employees Information Module: This module manages the information of all
the employees who work for the organization. Each employee is associated
with a specific department and an authorized designation. The module
manages all the transactional relations that generically arise as and when
the system is executed, as per requirements.

 Drug Information Module: This module takes care of the information
related to all the drugs that are scheduled for investigation within the
system. The module integrates itself with all the areas where a drug is
associated and applied within the system. It also cross-checks its
references against the scheduled reaction agents and allergies, along with
the usage conditions.

 Allergic Information Module: This module manages the information related
to all the allergies and their associated anti-allergic medicines, which
must be on hand when an allergic reaction emerges during the drug trials.
Within this module the system also manages the referential information on
the different symptoms for which a drug should be applied.

 Drug Trials Information Module: This module manages the information
related to the drugs that are under trial registry within the system. It
keeps a reference to the individuals who are participating in the trials,
records each drug trial's starting and ending dates, and cross-checks and
verifies the authenticity of the individuals deputed to the drug trials.

 Individual Trials Information Module: This module manages the information
of the individuals who have been put on the drug trials. The related
information regarding their trials is recorded in a secured format; the
system, within the execution domain, allows only the specific information
to be viewed by the individual.

 Drug Trial History Module: This module maintains the standard information
related to the data generated through cross-checking the drug trials'
history. It helps the organization keep track of future execution plans for
the system.

 Security Module: This module maintains and manages the security standards
that are necessary for accessing the system with the required
authorization.
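
As a rough illustration of the data these modules handle, the sketch below
models two of the entities as C# classes. The class, property, and
identifier names are hypothetical, chosen here only for illustration; the
actual database design is detailed in the analysis chapter.

    // Hypothetical entity classes sketching the data handled by the
    // Drug Information and Drug Trials Information modules.
    using System;
    using System.Collections.Generic;

    public class Drug
    {
        public int DrugId;                   // primary key
        public string Name;
        public string UsageConditions;
        // Reaction agents to cross-check when the drug is applied.
        public List<string> ReactionAgents = new List<string>();
    }

    public class DrugTrial
    {
        public int TrialId;                  // primary key
        public int DrugId;                   // reference to the Drug on trial
        public DateTime StartDate;           // trial starting date
        public DateTime EndDate;             // trial ending date
        // Identifiers of the verified individuals deputed to this trial.
        public List<int> ParticipantIds = new List<int>();
    }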

SOFTWARE REQUIREMENTS

The software used in this project is:

Operating System : Windows 2000

Software : ASP.NET

Database : MS SQL Server 2000

HARDWARE REQUIREMENTS

The hardware used in this project is:

RAM : 256 MB

Processor : P-IV processor

Hard Disk : 20 GB

Memory : 32 MB

Feasibility

(i) Technical Feasibility

The system is self-explanatory and does not need any sophisticated
training. As the system has been built around graphical user interface
concepts, the application can be handled very easily even by a novice user.
The overall time a user needs to get trained is less than 15 minutes.

The system provides menu-driven and button-based interaction methods, which
make the user the master as he starts working through the environment. As
the software used in developing this application is very economical and
readily available in the market, the only time lost by the customer is the
installation time.

(ii) Financial Feasibility

From the customer view:

Time Based:

If a user has to view the details of a drug being put to test, he should
have information such as the chemical composition and the reaction agents
that can cause reactions readily available. In the manual process this
always costs extra time, and the amount of drug detail that can be
collected at one instance is always meager. Hence a user always spends a
lot of time and has to be content with whatever is within his reach.

The same scenario, if handled through the intranet application, will always
give the user a ready list displayed within a few seconds of the request
being placed. The user need not even move physically from his place (i.e.,
from his section) to get the required information on drugs, or other
details such as information from other sections. The user can process the
information of his choice irrespective of geographical barriers, which are
major hurdles in a manual system.

Cost Based:

If the physical system is established through a manual process, there is
much need of stationery that has to be managed and maintained as files. The
overall system, once implemented as an intranet-based web application, not
only saves time but also eliminates the latency that can exist within the
system, and saves the cost of stationery, which is an unforeseen overhead
within the manual system.

The administrative staff at the level of strategic decision-making are
greatly relieved of extensive data searching. The administrators are
greatly benefited by this system, as they need not collect all the
available information but can select only the information that is most
important to them. The administrative standards of the system become more
economical, as there is no need for stationery exchange within the
organization. As the system maintains statistics related to the information
that is collected, the overall system can also be used for forecasting
analysis in the research process.

In the manual process of the Drug Management System, information search and
storage also need extra manpower, which potentially costs the organization
a perennial investment of funds in salaries. The information interrelations
among the different areas of the system must be handled carefully;
otherwise inconsistencies can creep in.

Chapter 4

Analysis Report

SRS Document:
Intended Audience And Reading Suggestions

This document has been prepared keeping in view the academic constructs of
my Bachelors Degree / Masters Degree from the university, as partial
fulfillment of my academic requirements. The document specifies the general
procedure that has been followed by me while the system was studied and
developed. The general document was provided by the industry as a reference
guide, to help me understand my responsibilities in developing the system
with respect to the requirements that have been pinpointed to get the exact
structure of the system as stated by the actual client.

As stated by my project leader, the actual standards of the specification
were designed by conducting a series of interviews and questionnaires. The
collected information was organized to form the specification document and
then modeled to suit the standards of the system as intended.

Document Conventions:

The overall documents for this project use the modeling standards
recognized at the software industry level:

 ER modeling, to concentrate on the relational states existing within the
system with respect to cardinality.

 Physical design standards, which state how the overall data search is
performed through the relational keys when transactions are implemented on
the underlying entities.

 Unified Modeling Language concepts, to give a generalized blueprint for
the overall system.

 Flow chart standards, at the required places where the functionality of
the operations needs more attention.
Scope of The Development Project:

Database Tier: The concentration is applied by adopting MS SQL Server 2000,
with SQL taken as the standard query language.

User Tier: The user interface is developed in a browser-specific
environment to give a distributed architecture. The components are designed
using HTML standards, and ASP.NET powers the dynamic content generation in
the page design.

Database Connectivity Tier: The communication architecture is designed by
concentrating on the standards of ADO.NET technology for database
connectivity.
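
As a minimal sketch of how this connectivity tier might be exercised, the
following C# fragment opens an ADO.NET connection and runs a parameterized
query. The connection string, table, and column names are assumptions made
for illustration; SqlConnection, SqlCommand, and SqlDataReader are the
standard ADO.NET provider classes for SQL Server in the
System.Data.SqlClient namespace.

    // Minimal ADO.NET sketch: querying a hypothetical Drugs table.
    using System;
    using System.Data.SqlClient;

    class DrugQuery
    {
        static void Main()
        {
            // Hypothetical connection string; adjust server and credentials.
            string connStr =
                "Server=localhost;Database=DrugManagement;Integrated Security=true";

            using (SqlConnection conn = new SqlConnection(connStr))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT DrugName, UsageConditions FROM Drugs WHERE DrugId = @id",
                conn))
            {
                cmd.Parameters.AddWithValue("@id", 42); // parameterized, not concatenated
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader[0], reader[1]);
                }
            }
        }
    }

Parameterizing the query keeps the business validations intact on the
server side and avoids SQL injection, in line with the data-consistency
goals stated in the synopsis.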

Role of MS SQL Server 2000

MS SQL Server 2000 is one of the many database servers that plug into a
client/server model. It works efficiently to manage resources and database
information among the multiple clients requesting and sending data.

Structured Query Language (SQL)

SQL is an interactive language used to query the database and access data
in the database. SQL has the following features:

1. It is a unified language.

2. It is a common language for relational databases.

3. It is a non-procedural language.

Introduction to MS SQL Server 2000

Microsoft SQL Server 7.0 Storage Engine

Introduction
SQL Server™ 7.0 is a scalable, reliable, and easy-to-use product that will
provide a solid foundation for application design for the next 20 years.

Storage Engine Design Goals

Database applications can now be deployed widely due to intelligent,


automated storage engine operations. Sophisticated yet simplified architecture
improves performance, reliability, and scalability.

Feature: Description and Benefits

Reliability: Concurrency, scalability, and reliability are improved with
simplified data structures and algorithms. Run-time checks of critical data
structures make the database much more robust, minimizing the need for
consistency checks.

Scalability: The new disk format and storage subsystem provide storage that
is scalable from very small to very large databases. Specific changes
include:

 Simplified mapping of database objects to files eases management and
enables tuning flexibility. Database objects can be mapped to specific
disks for load balancing.

 More efficient space management, including increasing page size from 2 KB
to 8 KB, 64 KB I/O, variable-length character fields up to 8 KB, and the
ability to delete columns from existing tables without an unload/reload of
the data.

 Redesigned utilities support terabyte-sized databases efficiently.

Ease of Use: DBA intervention is eliminated for standard operations,
enabling branch office automation and desktop and mobile database
applications. Many complex server operations are automated.

Storage Engine Features

Feature: Description and Benefits

Data Type Sizes: Maximum size of character and binary data types is
dramatically increased.

Databases and Files: Database creation is simplified; databases now reside
on operating system files instead of logical devices.

Dynamic Memory: Improves performance by optimizing memory allocation and
usage. The simplified design minimizes contention with other resource
managers.

Dynamic Row-Level Locking: Full row-level locking is implemented for both
data rows and index entries. Dynamic locking automatically chooses the
optimal level of lock (row, page, multiple page, table) for all database
operations. This feature provides improved concurrency with no tuning. The
database also supports the use of "hints" to force a particular level of
locking.

Dynamic Space Management: A database can automatically grow and shrink
within configurable limits, minimizing the need for DBA intervention. It is
no longer necessary to preallocate space and manage data structures.

Evolution: The new architecture is designed for extensibility, with a
foundation for object-relational features.

Large Memory Support: SQL Server 7.0 Enterprise Edition will support memory
addressing greater than 4 GB, in conjunction with Windows NT Server 5.0,
Alpha processor-based systems, and other techniques.

Unicode: Native Unicode, with ODBC and OLE DB Unicode APIs, improves
multilingual support.

Storage Engine Architectural Overview

Overview
The original code was inherited from Sybase and designed for eight-megabyte
UNIX systems in 1983. The new on-disk formats improve manageability and
scalability, and allow the server to scale easily from low-end to high-end
systems with improved performance.

Benefits
There are many benefits of the new on-disk layout, including:
 Improved scalability and integration with Windows NT Server
 Better performance with larger I/Os
 Stable record locators allow more indexes
 More indexes speed decision support queries
 Simpler data structures provide better quality
 Greater extensibility, so that subsequent releases will have a cleaner
development process and new features are faster to implement

Storage Engine Subsystems

Most relational database products are divided into relational engine and
storage engine components. This document focuses on the storage engine,
which has a variety of subsystems:
 Mechanisms that store data in files and find pages, files, and extents.
 Record management for accessing the records on pages.
 Access methods using b-trees that are used to quickly find records
using record identifiers.

 Concurrency control for locking, used to implement the physical lock
manager and locking protocols for page- or record-level locking.
 I/O buffer management.
 Logging and recovery.
 Utilities for backup and restore, consistency checking, and bulk data
loading.

Databases, Files, and Filegroups

Overview

SQL Server 7.0 is much more integrated with Windows NT Server than any of
its predecessors. Databases are now stored directly in Windows NT Server
files. SQL Server is being stretched towards both the high and low end.

Files

SQL Server 7.0 creates a database using a set of operating system files, with
a separate file used for each database. Multiple databases can no longer
share the same file. There are several important benefits to this
simplification. Files can now grow and shrink, and space management is
greatly simplified. All data and objects in the database, such as tables, stored
procedures, triggers, and views, are stored only within these operating
system files:

File Type: Description

Primary data file: This file is the starting point of the database. Every
database has exactly one primary data file, and all system tables are
always stored in it.

Secondary data files: These files are optional and can hold all data and
objects that are not in the primary data file. Some databases may not have
any secondary data files, while others have multiple secondary data files.

Log files: These files hold all of the transaction log information used to
recover the database. Every database has at least one log file.

When a database is created, all the files that comprise the database are
zeroed out (filled with zeros) to overwrite any existing data left on the disk
by previously deleted files. This improves the performance of day-to-day
operations.

Filegroups
A database now consists of one or more data files and one or more log files.
The data files can be grouped together into user-defined filegroups. Tables
and indexes can then be mapped to different filegroups to control data
placement on physical disks. Filegroups are a convenient unit of
administration, greatly improving flexibility. SQL Server 7.0 will allow you to
back up a different portion of the database each night on a rotating schedule
by choosing which filegroups to back up. Filegroups work well for
sophisticated users who know where they want to place indexes and tables.
SQL Server 7.0 can work quite effectively without filegroups.
Log files are never a part of a filegroup. Log space is managed separately
from data space.

Using Files and Filegroups


Using files and filegroups improves database performance by allowing a
database to be created across multiple disks, multiple disk controllers, or
redundant array of inexpensive disks (RAID) systems. For example, if your
computer has four disks, you can create a database that comprises three
data files and one log file, with one file on each disk. As data is accessed,
four read/write heads can simultaneously access the data in parallel, which
speeds up database operations. Additionally, files and filegroups allow
better data placement, because a table can be created in a specific
filegroup. This improves performance, because all I/O for a specific table
can be directed at a specific disk. For example, a heavily used table can
be placed on one file in one filegroup, located on one disk, while the
other, less heavily accessed tables in the database are placed on other
files in another filegroup, located on a second disk.
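
As a sketch of how such a layout might be declared, the T-SQL below, issued
here through ADO.NET to stay with the project's C# client, creates a
hypothetical database with a primary data file, a secondary file in a
user-defined filegroup, and a log file, each of which could sit on a
different physical disk. The database name, logical names, paths, and sizes
are assumptions for illustration.

    // Sketch: creating a database spread across files and filegroups.
    using System.Data.SqlClient;

    class CreateDrugDb
    {
        static void Main()
        {
            string ddl = @"
                CREATE DATABASE DrugManagement
                ON PRIMARY
                  (NAME = DrugData1, FILENAME = 'C:\data\DrugData1.mdf', SIZE = 100MB),
                FILEGROUP DrugHistoryGroup
                  (NAME = DrugData2, FILENAME = 'D:\data\DrugData2.ndf', SIZE = 100MB)
                LOG ON
                  (NAME = DrugLog, FILENAME = 'E:\log\DrugLog.ldf', SIZE = 50MB)";

            using (SqlConnection conn = new SqlConnection(
                "Server=localhost;Database=master;Integrated Security=true"))
            using (SqlCommand cmd = new SqlCommand(ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery(); // note: log files are never part of a filegroup
            }
        }
    }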

Space Management
There are many improvements in the allocations of space and the
management of space within files. The data structures that keep track of
page-to-object relationships were redesigned. Instead of linked lists of
pages, bitmaps are used because they are cleaner and simpler and facilitate
parallel scans. Now each file is more autonomous; it has more data about
itself, within itself. This works well for copying or mailing database files.
SQL Server now has a much more efficient system for tracking table space.
The changes enable
 Growing and shrinking files
 Better support for large I/O
 Row space management within a table
 Less expensive extent allocations
SQL Server is very effective at quickly allocating pages to objects and reusing
space freed by deleted rows. These operations are internal to the system and
use data structures not visible to users, yet are occasionally referenced in
SQL Server messages.

File Shrink
The server checks the space usage in each database periodically. If a
database is found to have a lot of empty space, the size of the files in the
database will be reduced. Both data and log files can be shrunk. This activity
occurs in the background and does not affect any user activity within the
database. You can also use SQL Server Enterprise Manager to shrink files
individually or as a group, or use the DBCC SHRINKDATABASE and DBCC
SHRINKFILE commands.

SQL Server shrinks files by moving rows from pages at the end of the file
to pages allocated earlier in the file. In an index, nodes are moved from
the end of the file to pages at the beginning of the file. In both cases
pages are freed at the end of files and then returned to the file system.
Databases can only be shrunk to the point where no free space remains;
there is no data compression.
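
For instance, assuming a database named DrugManagement with a data file
whose logical name is DrugData2 (both names are illustrative), the two DBCC
commands mentioned above can be issued as follows.

    // Sketch: shrinking a whole database and a single file via DBCC.
    using System.Data.SqlClient;

    class ShrinkDemo
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection(
                "Server=localhost;Database=DrugManagement;Integrated Security=true"))
            {
                conn.Open();
                // Shrink the whole database, leaving 10 percent free space.
                new SqlCommand("DBCC SHRINKDATABASE (DrugManagement, 10)", conn)
                    .ExecuteNonQuery();
                // Shrink one data file, by logical name, to 50 MB.
                new SqlCommand("DBCC SHRINKFILE (DrugData2, 50)", conn)
                    .ExecuteNonQuery();
            }
        }
    }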

File Grow
Automated file growth greatly reduces the need for database management
and eliminates many problems that occur when logs or databases run out of
space. When creating a database, an initial size for the file must be given.
SQL Server creates the data files based on the sizes provided by the
database creator, and as data is added to the database these files fill. By
default, data files are allowed to grow as much as necessary until disk
space is exhausted. Alternatively, data files can be configured to grow
automatically, but only to a predefined maximum size. This prevents disk
drives from running out of space.
Allowing files to grow automatically can cause fragmentation of those files if
a large number of files share the same disk. Therefore, it is recommended
that files or file groups be created on as many different local physical disks as
available. Place objects that compete heavily for space in different file
groups.
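
A file's growth increment and maximum size can be set when the database is
created or altered later. The sketch below, reusing the hypothetical names
from the earlier examples, caps one data file at 500 MB while letting it
grow in 10 MB steps.

    // Sketch: bounding automatic file growth.
    using System.Data.SqlClient;

    class ConfigureGrowth
    {
        static void Main()
        {
            string ddl = @"
                ALTER DATABASE DrugManagement
                MODIFY FILE (NAME = DrugData1, FILEGROWTH = 10MB, MAXSIZE = 500MB)";

            using (SqlConnection conn = new SqlConnection(
                "Server=localhost;Database=master;Integrated Security=true"))
            using (SqlCommand cmd = new SqlCommand(ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }
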
Physical Database Architecture
Microsoft SQL Server version 7.0 introduces significant improvements in the
way data is stored physically. These changes are largely transparent to
general users, but do affect the setup and administration of SQL Server
databases.

Pages and Extents


The fundamental unit of data storage in SQL Server is the page. In SQL
Server version 7.0, the size of a page is 8 KB, increased from 2 KB. The start
of each page is a 96-byte header used to store system information, such as
the type of page, the amount of free space on the page, and the object ID of
the object owning the page.
There are seven types of pages in the data files of a SQL Server 7.0 database.

Page Type: Contains

Data: Data rows with all data except text, ntext, and image.

Index: Index entries.

Log: Log records recording data changes for use in recovery.

Text/Image: text, ntext, and image data.

Global Allocation Map: Information about allocated extents.

Page Free Space: Information about free space available on pages.

Index Allocation Map: Information about extents used by a table or index.

Torn Page Detection

Torn page detection helps ensure database consistency. In SQL Server 7.0,
pages are 8 KB, while Windows NT does I/O in 512-byte segments. This
discrepancy makes it possible for a page to be partially written. This could
happen if there is a power failure or other problem between the time when
the first 512-byte segment is written and the completion of the 8 KB of I/O.
There are several ways to deal with this. One way is to use battery-backed
cached I/O devices that guarantee all-or-nothing I/O. If you have one of
these systems, torn page detection is unnecessary.
In SQL Server 7.0, you can enable torn page detection for a particular
database by turning on a database option.
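
In the SQL Server 7.0 era that option is set per database with the
sp_dboption system stored procedure, as sketched below for the hypothetical
DrugManagement database.

    // Sketch: enabling torn page detection for one database.
    using System.Data.SqlClient;

    class EnableTornPageDetection
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection(
                "Server=localhost;Database=master;Integrated Security=true"))
            using (SqlCommand cmd = new SqlCommand(
                "EXEC sp_dboption 'DrugManagement', 'torn page detection', 'true'",
                conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }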

Locking Enhancements

Row-Level Locking

SQL Server 6.5 introduced a limited version of row locking on inserts. SQL
Server 7.0 now supports full row-level locking for both data rows and index
entries. Transactions can update individual records without locking entire
pages. Many OLTP applications can experience increased concurrency,
especially when applications append rows to tables and indexes.

Dynamic Locking

SQL Server 7.0 has a superior locking mechanism that is unique in the
database industry. At run time, the storage engine dynamically cooperates
with the query processor to choose the lowest-cost locking strategy, based
on the characteristics of the schema and query.

Dynamic locking has the following advantages:


 Simplified database administration, because database administrators
no longer need to be concerned with adjusting lock escalation
thresholds.
 Increased performance, because SQL Server minimizes system
overhead by using locks appropriate to the task.
 Application developers can concentrate on development, because SQL
Server adjusts locking automatically.
Multigranular locking allows different types of resources to be locked by a
transaction. To minimize the cost of locking, SQL Server automatically locks
resources at a level appropriate to the task. Locking at a smaller granularity,
such as rows, increases concurrency but has a higher overhead because
more locks must be held if many rows are locked. Locking at a larger
granularity, such as tables, is expensive in terms of concurrency. However,
locking a larger unit of data has a lower overhead because fewer locks are
being maintained.

Lock Modes
SQL Server locks resources using different lock modes that determine how
the resources can be accessed by concurrent transactions.

SQL Server uses several resource lock modes:

Lock Mode: Description

Shared: Used for operations that do not change or update data (read-only
operations), such as a SELECT statement.

Update: Used on resources that can be updated. Prevents a common form of
deadlock that occurs when multiple sessions are reading, locking, and then
potentially updating resources later.

Exclusive: Used for data-modification operations, such as UPDATE, INSERT,
or DELETE. Ensures that multiple updates cannot be made to the same
resource at the same time.

Intent: Used to establish a lock hierarchy.

Schema: Used when an operation dependent on the schema of a table is
executing. There are two types of schema locks: schema stability and schema
modification.
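
Although dynamic locking normally needs no tuning, the "hints" mentioned
earlier let an individual statement force a granularity or mode. The sketch
below assumes the hypothetical Drugs table (with an illustrative Verified
column) and shows two common table hints.

    // Sketch: overriding dynamic locking with table hints.
    using System.Data.SqlClient;

    class LockHintDemo
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection(
                "Server=localhost;Database=DrugManagement;Integrated Security=true"))
            {
                conn.Open();
                // Force row-level locks for this read instead of letting
                // the engine pick the granularity.
                new SqlCommand(
                    "SELECT DrugId, DrugName FROM Drugs WITH (ROWLOCK)", conn)
                    .ExecuteReader()
                    .Close();
                // Take an exclusive table-level lock for a bulk update.
                new SqlCommand(
                    "UPDATE Drugs WITH (TABLOCKX) SET Verified = 1", conn)
                    .ExecuteNonQuery();
            }
        }
    }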

Table and Index Architecture

Overview
Fundamental changes were made in table organization. This new
organization allows the query processor to make use of more nonclustered
indexes, greatly improving performance for decision support applications.
The query optimizer has a wide set of execution strategies and many of the
optimization limitations of earlier versions of SQL Server have been removed.
In particular, SQL Server 7.0 is less sensitive to index-selection issues,
resulting in less tuning work.

Table Organization
The data for each table is now stored in a collection of 8-KB data pages. Each
data page has a 96-byte header containing system information such as the
ID of the table that owns the page and pointers to the next and previous
pages for pages linked in a list. A row-offset table is at the end of the page.
Data rows fill the rest of the page.

SQL Server 7.0 tables use one of two methods to organize their data pages:

 Clustered tables are tables that have a clustered index. The data rows
are stored in order based on the clustered index key. The data pages
are linked in a doubly linked list. The index is implemented as a b-tree
index structure that supports fast retrieval of the rows based on their
clustered index key values.
 Heaps are tables that have no clustered index. There is no particular
order to the sequence of the data pages and the data pages are not
linked in a linked list.

Table Indexes
A SQL Server index is a structure associated with a table that speeds
retrieval of the rows in the table. An index contains keys built from one
or more columns in the table. These keys are stored in a b-tree structure
that allows SQL Server to quickly and efficiently find the row or rows
associated with the key values. The two types of SQL Server indexes are
clustered and nonclustered indexes.

Clustered Indexes
A clustered index is one in which the order of the values in the index is the
same as the order of the data stored in the table.
The clustered index contains a hierarchical tree. When searching for data
based on a clustered index value, SQL Server quickly isolates the page with
the specified value and then searches the page for the record or records with
the specified value. The lowest level, or leaf node, of the index tree is the
page that contains the data.

Nonclustered Indexes

A nonclustered index is analogous to an index in a textbook. The data is


stored in one place; the index is stored in another, with pointers to the
storage location of the indexed items in the data. The lowest level, or leaf
node, of a nonclustered index is the Row Identifier of the index entry, which
gives SQL Server the location of the actual data row. The Row Identifier can
have one of two forms. If the table has a clustered index, the identifier of the
row is the clustered index key. If the table is a heap, the Row Identifier is the
actual location of the data row, indicated with a page number and offset on
the page. Therefore, a nonclustered index, in comparison with a clustered
index, has an extra level between the index structure and the data itself.

When SQL Server searches for data based on a nonclustered index, it


searches the index for the specified value to obtain the location of the rows
of data and then retrieves the data from their storage locations. This makes
nonclustered indexes the optimal choice for exact-match queries.

Just as a book can contain multiple indexes, a table can have multiple
nonclustered indexes. Since nonclustered indexes frequently store clustered
index keys as their pointers to data rows, it is important to keep
clustered index keys as small as possible.

SQL Server supports up to 249 nonclustered indexes on each table. The


nonclustered indexes have a b-tree index structure similar to the one in
clustered indexes. The difference is that nonclustered indexes have no effect
on the order of the data rows. The collection of data pages for a heap is not
affected if nonclustered indexes are defined for the table.
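
As a concrete sketch on a hypothetical DrugTrials table: the clustered
index below dictates the physical order of the data rows by trial ID, while
the nonclustered index builds a separate b-tree over the drug ID whose leaf
entries point back at the rows. Table and index names are illustrative.

    // Sketch: one clustered and one nonclustered index.
    using System.Data.SqlClient;

    class IndexDemo
    {
        static void Main()
        {
            string ddl = @"
                CREATE CLUSTERED INDEX IX_DrugTrials_TrialId
                    ON DrugTrials (TrialId);
                CREATE NONCLUSTERED INDEX IX_DrugTrials_DrugId
                    ON DrugTrials (DrugId);";

            using (SqlConnection conn = new SqlConnection(
                "Server=localhost;Database=DrugManagement;Integrated Security=true"))
            using (SqlCommand cmd = new SqlCommand(ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }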

Data Type Changes

Unicode Data
SQL Server now supports Unicode data types, which makes it easier to store
data in multiple languages within one database by eliminating the problem of
converting characters and installing multiple code pages. Unicode stores
character data using two bytes for each character rather than one byte.
There are 65,536 different bit patterns in two bytes, so Unicode can use one
standard set of bit patterns to encode each character in all languages,
including languages such as Chinese that have large numbers of characters.
Many programming languages also support Unicode data types.

The new data types that support Unicode are ntext, nchar, and nvarchar.
They are the same as text, char, and varchar, except for the wider range of
characters supported and the increased storage space used.

Improved Data Storage


Data storage flexibility is greatly improved with the expansion of the
maximum limits for char, varchar, binary, and varbinary data types to 8,000
bytes, increased from 255 bytes. It is no longer necessary to use text and
image data types for data storage for anything but very large data values.
The Transact-SQL string functions also support these very long char and
varchar values, and the SUBSTRING function can be used to process text and
image columns. The handling of nulls and empty strings has been improved. A
new uniqueidentifier data type is provided for storing a globally unique
identifier (GUID).
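
The sketch below pulls these type changes together in one hypothetical
table: an nvarchar column for Unicode text, a varchar column sized well
past the old 255-byte limit, and a uniqueidentifier key defaulting to a
generated GUID.

    // Sketch: a table using the enlarged and Unicode-aware data types.
    using System.Data.SqlClient;

    class DataTypeDemo
    {
        static void Main()
        {
            string ddl = @"
                CREATE TABLE DrugNotes (
                    NoteId   UNIQUEIDENTIFIER DEFAULT NEWID() PRIMARY KEY,
                    DrugName NVARCHAR(200),  -- Unicode: two bytes per character
                    NoteText VARCHAR(8000)   -- far beyond the old 255-byte limit
                )";

            using (SqlConnection conn = new SqlConnection(
                "Server=localhost;Database=DrugManagement;Integrated Security=true"))
            using (SqlCommand cmd = new SqlCommand(ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }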

Normalization

Normalization is the concept of analyzing the “inherent” or normal


relationships between the various elements of a database. Data is normalized
in different forms.

First normal form: Data is in first normal form when repeating groups of
data are moved into separate tables, so that the data in each table is of a
similar type, and each table is given a primary key: a unique label or
identifier.

Second normal form: Involves removing data that depends on only part of a
composite key.

Third normal form: Involves removing transitive dependencies, that is,
getting rid of anything in the tables that does not depend solely on the
primary key. Thus, through normalization, effective data storage can be
achieved, eliminating redundancies and repeating groups.
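
As a small worked example under the hypothetical drug schema used in the
sketches above: suppose a Drugs table also stored each drug's manufacturer
and the manufacturer's city. The city depends on the manufacturer, not on
the drug key, which is exactly the kind of transitive dependency third
normal form removes.

    // Sketch: removing a transitive dependency to reach third normal form.
    using System.Data.SqlClient;

    class NormalizationDemo
    {
        static void Main()
        {
            string ddl = @"
                -- Before (violates 3NF): ManufacturerCity depends on
                -- Manufacturer, not on the DrugId primary key.
                --   Drugs(DrugId, DrugName, Manufacturer, ManufacturerCity)

                -- After: the dependent data moves to its own table.
                CREATE TABLE Manufacturers (
                    ManufacturerId INT PRIMARY KEY,
                    Name           VARCHAR(100),
                    City           VARCHAR(100));
                CREATE TABLE Drugs (
                    DrugId         INT PRIMARY KEY,
                    DrugName       VARCHAR(100),
                    ManufacturerId INT REFERENCES Manufacturers(ManufacturerId));";

            using (SqlConnection conn = new SqlConnection(
                "Server=localhost;Database=DrugManagement;Integrated Security=true"))
            using (SqlCommand cmd = new SqlCommand(ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }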

Fourth Generation Languages

The fourth generation languages were created to overcome the problems of
third generation languages, and these 4GLs are generally referred to as
high-productivity languages.

Objectives of Fourth Generation Languages

 To speed up the application building process

 To simplify the application building process

 To minimize debugging problems

 To generate bug-free code from high-level expressions of requirements

 To make languages easy to use and understand

All of these let end users solve their own problems and put computers to
work.

Characteristics of Fourth Generation Languages

 Simple Query facilities/ language

 Complex query and updating language

 Report generators

 Graphic languages

 Decision support languages

 Application generators

 Specification language

 Very high level language

 Parameterized application language

 Application language

Properties of Fourth Generation Languages


 Easy to use

 Employs a database management system directly

 Requires significantly fewer instructions than a third generation
language

 Makes intelligent default assumptions about what the user wants

 Easy to understand and maintain

 Enforces and encourages structured code

 A subset can be learnt by non-technical users in a short period

Client Server Technologies

MS.NET

Overview of the .NET Framework

The .NET Framework is a new computing platform that simplifies application


development in the highly distributed environment of the Internet. The .NET
Framework is designed to fulfill the following objectives:
 To provide a consistent object-oriented programming environment
whether object code is stored and executed locally, executed locally
but Internet-distributed, or executed remotely.
 To provide a code-execution environment that minimizes software
deployment and versioning conflicts.
 To provide a code-execution environment that guarantees safe
execution of code, including code created by an unknown or semi-
trusted third party.
 To provide a code-execution environment that eliminates the
performance problems of scripted or interpreted environments.
 To make the developer experience consistent across widely varying
types of applications, such as Windows-based applications and Web-
based applications.
 To build all communication on industry standards to ensure that code
based on the .NET Framework can integrate with any other code.

The .NET Framework has two main components: the common language
runtime and the .NET Framework class library. The common language
runtime is the foundation of the .NET Framework. You can think of the
runtime as an agent that manages code at execution time, providing core
services such as memory management, thread management, and remoting,
while also enforcing strict type safety and other forms of code accuracy that
ensure security and robustness. In fact, the concept of code management is
a fundamental principle of the runtime. Code that targets the runtime is
known as managed code, while code that does not target the runtime is
known as unmanaged code. The class library, the other main component of
the .NET Framework, is a comprehensive, object-oriented collection of
reusable types that you can use to develop applications ranging from
traditional command-line or graphical user interface (GUI) applications to
applications based on the latest innovations provided by ASP.NET, such as
Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the
common language runtime into their processes and initiate the execution of
managed code, thereby creating a software environment that can exploit
both managed and unmanaged features. The .NET Framework not only
provides several runtime hosts, but also supports the development of third-
party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side


environment for managed code. ASP.NET works directly with the runtime to
enable Web Forms applications and XML Web services, both of which are
discussed later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the


runtime (in the form of a MIME type extension). Using Internet Explorer to
host the runtime enables you to embed managed components or Windows
Forms controls in HTML documents. Hosting the runtime in this way makes
managed mobile code (similar to Microsoft® ActiveX® controls) possible, but
with significant improvements that only managed code can offer, such as
semi-trusted execution and secure isolated file storage.

Features of the Common Language Runtime

The common language runtime manages memory, thread execution, code


execution, code safety verification, compilation, and other system services.
These features are intrinsic to the managed code that runs on the common
language runtime.

With regards to security, managed components are awarded varying degrees


of trust, depending on a number of factors that include their origin (such as
the Internet, enterprise network, or local computer). This means that a
managed component might or might not be able to perform file-access
operations, registry-access operations, or other sensitive functions, even if it
is being used in the same active application.

The runtime enforces code access security. For example, users can trust that
an executable embedded in a Web page can play an animation on screen or
sing a song, but cannot access their personal data, file system, or network.
The security features of the runtime thus enable legitimate Internet-deployed
software to be exceptionally feature rich.

The runtime also enforces code robustness by implementing a strict type-


and code-verification infrastructure called the common type system (CTS).
The CTS ensures that all managed code is self-describing. The various
Microsoft and third-party language compilers generate managed code that
conforms to the CTS. This means that managed code can consume other
managed types and instances, while strictly enforcing type fidelity and type
safety.

In addition, the managed environment of the runtime eliminates many


common software issues. For example, the runtime automatically handles
object layout and manages references to objects, releasing them when they
are no longer being used. This automatic memory management resolves the
two most common application errors, memory leaks and invalid memory
references.

The runtime also accelerates developer productivity. For example,


programmers can write applications in their development language of choice,
yet take full advantage of the runtime, the class library, and components
written in other languages by other developers. Any compiler vendor who
chooses to target the runtime can do so. Language compilers that target
the .NET Framework make the features of the .NET Framework available to
existing code written in that language, greatly easing the migration process
for existing applications.

While the runtime is designed for the software of the future, it also supports
software of today and yesterday. Interoperability between managed and
unmanaged code enables developers to continue to use necessary COM
components and DLLs.

The runtime is designed to enhance performance. Although the common


language runtime provides many standard runtime services, managed code
is never interpreted. A feature called just-in-time (JIT) compiling enables all
managed code to run in the native machine language of the system on which
it is executing. Meanwhile, the memory manager removes the possibilities of
fragmented memory and increases memory locality-of-reference to further
increase performance.

Finally, the runtime can be hosted by high-performance, server-side


applications, such as Microsoft® SQL Server™ and Internet Information
Services (IIS). This infrastructure enables you to use managed code to write
your business logic, while still enjoying the superior performance of the
industry's best enterprise servers that support runtime hosting.

Common Type System

The common type system defines how types are declared, used, and
managed in the runtime, and is also an important part of the runtime's
support for cross-language integration. The common type system performs
the following functions:

Establishes a framework that enables cross-language integration, type


safety, and high performance code execution.

Provides an object-oriented model that supports the complete


implementation of many programming languages.

Defines rules that languages must follow, which helps ensure that objects
written in different languages can interact with each other.

In This Section

Common Type System Overview

Describes concepts and defines terms relating to the common type system.

Type Definitions

Describes user-defined types.

Type Members

Describes events, fields, nested types, methods, and properties, and


concepts such as member overloading, overriding, and inheritance.

Value Types

Describes built-in and user-defined value types.

Classes

Describes the characteristics of common language runtime classes.

Delegates

Describes the delegate object, which is the managed alternative to


unmanaged function pointers.

Arrays

Describes common language runtime array types.

Interfaces

Describes characteristics of interfaces and the restrictions on interfaces


imposed by the common language runtime.

Pointers

Describes managed pointers, unmanaged pointers, and unmanaged function


pointers.

Related Sections

.NET Framework Class Library

Provides a reference to the classes, interfaces, and value types included in


the Microsoft .NET Framework SDK.

Common Language Runtime

Describes the run-time environment that manages the execution of code and
provides application development services.

Cross-Language Interoperability

The common language runtime provides built-in support for language


interoperability. However, this support does not guarantee that developers
using another programming language can use code you write. To ensure that
you can develop managed code that can be fully used by developers using
any programming language, a set of language features and rules for using
them called the Common Language Specification (CLS) has been defined.
Components that follow these rules and expose only CLS features are
considered CLS-compliant.

This section describes the common language runtime's built-in support for
language interoperability and explains the role that the CLS plays in enabling
guaranteed cross-language interoperability. CLS features and rules are
identified and CLS compliance is discussed.

In This Section

Language Interoperability

Describes built-in support for cross-language interoperability and introduces


the Common Language Specification.

What is the Common Language Specification?

Explains the need for a set of features common to all languages and
identifies CLS rules and features.

Writing CLS-Compliant Code

Discusses the meaning of CLS compliance for components and identifies


levels of CLS compliance for tools.

Common Type System

Describes how types are declared, used, and managed by the common
language runtime.

Metadata and Self-Describing Components

Explains the common language runtime's mechanism for describing a type


and storing that information with the type itself.

.NET Framework Class Library

The .NET Framework class library is a collection of reusable types that tightly
integrate with the common language runtime. The class library is object
oriented, providing types from which your own managed code can derive
functionality. This not only makes the .NET Framework types easy to use, but
also reduces the time associated with learning new features of the .NET
Framework. In addition, third-party components can integrate seamlessly
with classes in the .NET Framework.

For example, the .NET Framework collection classes implement a set of


interfaces that you can use to develop your own collection classes. Your
collection classes will blend seamlessly with the classes in the .NET
Framework.

As you would expect from an object-oriented class library, the .NET


Framework types enable you to accomplish a range of common programming
tasks, including tasks such as string management, data collection, database
connectivity, and file access. In addition to these common tasks, the class
library includes types that support a variety of specialized development
scenarios. For example, you can use the .NET Framework to develop the
following types of applications and services:

 Console applications.

 Scripted or hosted applications.

 Windows GUI applications (Windows Forms).

 ASP.NET applications.

 XML Web services.

 Windows services.

For example, the Windows Forms classes are a comprehensive set of


reusable types that vastly simplify Windows GUI development. If you write
an ASP.NET Web Form application, you can use the Web Forms classes.

Client Application Development

Client applications are the closest to a traditional style of application in


Windows-based programming. These are the types of applications that
display windows or forms on the desktop, enabling a user to perform a task.
Client applications include applications such as word processors and
spreadsheets, as well as custom business applications such as data-entry
tools, reporting tools, and so on. Client applications usually employ windows,
menus, buttons, and other GUI elements, and they likely access local
resources such as the file system and peripherals such as printers.

Another kind of client application is the traditional ActiveX control (now


replaced by the managed Windows Forms control) deployed over the Internet
as a Web page. This application is much like other client applications: it is
executed natively, has access to local resources, and includes graphical
elements.

In the past, developers created such applications using C/C++ in conjunction


with the Microsoft Foundation Classes (MFC) or with a rapid application
development (RAD) environment such as Microsoft® Visual Basic®. The .NET
Framework incorporates aspects of these existing products into a single,
consistent development environment that drastically simplifies the
development of client applications.

The Windows Forms classes contained in the .NET Framework are designed
to be used for GUI development. You can easily create command windows,
buttons, menus, toolbars, and other screen elements with the flexibility
necessary to accommodate shifting business needs.

For example, the .NET Framework provides simple properties to adjust visual
attributes associated with forms. In some cases the underlying operating
system does not support changing these attributes directly, and in these
cases the .NET Framework automatically recreates the forms. This is one of
many ways in which the .NET Framework integrates the developer interface,
making coding simpler and more consistent.

Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code can access some of the resources on the user's system (such as GUI elements and limited file access) without being able to access or compromise other resources. Because of code access security, many applications that once needed to be installed on a user's system can now be safely deployed through the Web. Your applications can implement the features of a local application while being deployed like a Web page.

Managed Execution Process

The managed execution process includes the following steps:

Choosing a Compiler

To obtain the benefits provided by the common language runtime, you must
use one or more language compilers that target the runtime.

Compiling your code to Microsoft Intermediate Language (MSIL)

Compiling translates your source code into MSIL and generates the required metadata.

Compiling MSIL to native code

At execution time, a just-in-time (JIT) compiler translates the MSIL into native code. During this compilation, code must pass a verification process that examines the MSIL and metadata to find out whether the code can be determined to be type safe.

Executing your code

The common language runtime provides the infrastructure that enables execution to take place, as well as a variety of services that can be used during execution.
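
To make these steps concrete, here is a minimal sketch (the file and class names are illustrative): a runtime-targeting compiler emits MSIL plus metadata, and the JIT compiler translates that MSIL to native code at execution time.

// HelloTrial.cs -- compiled with a compiler that targets the runtime, e.g.:
//     csc /target:exe HelloTrial.cs
// The compiler emits MSIL and metadata into HelloTrial.exe; when the program
// runs, the JIT compiler verifies and translates the MSIL to native code.
using System;

public class HelloTrial
{
    public static void Main()
    {
        Console.WriteLine("Managed code executing under the CLR.");
    }
}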

Assemblies Overview

Assemblies are a fundamental part of programming with the .NET Framework. An assembly performs the following functions:

It contains code that the common language runtime executes. Microsoft intermediate language (MSIL) code in a portable executable (PE) file will not be executed if it does not have an associated assembly manifest. Note that each assembly can have only one entry point (that is, DllMain, WinMain, or Main).

It forms a security boundary. An assembly is the unit at which permissions are requested and granted. For more information about security boundaries as they apply to assemblies, see Assembly Security Considerations.

It forms a type boundary. Every type's identity includes the name of the
assembly in which it resides. A type called MyType loaded in the scope of one
assembly is not the same as a type called MyType loaded in the scope of
another assembly.

It forms a reference scope boundary. The assembly's manifest contains assembly metadata that is used for resolving types and satisfying resource requests. It specifies the types and resources that are exposed outside the assembly. The manifest also enumerates other assemblies on which it depends.

It forms a version boundary. The assembly is the smallest versionable unit in the common language runtime; all types and resources in the same assembly are versioned as a unit. The assembly's manifest describes the version dependencies you specify for any dependent assemblies. For more information about versioning, see Assembly Versioning.

It forms a deployment unit. When an application starts, only the assemblies that the application initially calls must be present. Other assemblies, such as localization resources or assemblies containing utility classes, can be retrieved on demand. This allows applications to be kept simple and thin when first downloaded. For more information about deploying assemblies, see Deploying Applications.

It is the unit at which side-by-side execution is supported. For more information about running multiple versions of the same assembly, see Side-by-Side Execution.

Assemblies can be static or dynamic. Static assemblies can include .NET Framework types (interfaces and classes), as well as resources for the assembly (bitmaps, JPEG files, resource files, and so on). Static assemblies are stored on disk in PE files. You can also use the .NET Framework to create dynamic assemblies, which are run directly from memory and are not saved to disk before execution. You can save dynamic assemblies to disk after they have executed.

There are several ways to create assemblies. You can use development tools, such as Visual Studio .NET, that you have used in the past to create .dll or .exe files. You can use tools provided in the .NET Framework SDK to create assemblies with modules created in other development environments. You can also use common language runtime APIs, such as Reflection.Emit, to create dynamic assemblies.
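
As a brief sketch of the dynamic case (the assembly, module and type names are illustrative), Reflection.Emit can define an assembly that runs directly from memory:

using System;
using System.Reflection;
using System.Reflection.Emit;

public class DynamicAssemblyDemo
{
    public static void Main()
    {
        // Define an in-memory assembly that is never saved to disk.
        AssemblyName name = new AssemblyName("TrialDynamicAssembly");
        AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(
            name, AssemblyBuilderAccess.Run);

        // Define one module and one empty public type inside it.
        ModuleBuilder module = assembly.DefineDynamicModule("MainModule");
        TypeBuilder builder = module.DefineType("Demo.GeneratedType",
            TypeAttributes.Public);

        Type generated = builder.CreateType();
        Console.WriteLine("Created type: " + generated.FullName);
    }
}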

Server Application Development

Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language runtime, which allows your custom managed code to control the behavior of the server. This model provides you with all the features of the common language runtime and class library while gaining the performance and scalability of the host server.

The following illustration shows a basic network schema with managed code running in different server environments. Servers such as IIS and SQL Server can perform standard operations while your application logic executes through the managed code.

[Illustration: server-side managed code.]

ASP.NET is the hosting environment that enables developers to use the .NET
Framework to target Web-based applications. However, ASP.NET is more
than just a runtime host; it is a complete architecture for developing Web
sites and Internet-distributed objects using managed code. Both Web Forms
and XML Web services use IIS and ASP.NET as the publishing mechanism for
applications, and both have a collection of supporting classes in the .NET
Framework.

XML Web services, an important evolution in Web-based technology, are distributed, server-side application components similar to common Web sites.
However, unlike Web-based applications, XML Web services components
have no UI and are not targeted for browsers such as Internet Explorer and
Netscape Navigator. Instead, XML Web services consist of reusable software
components designed to be consumed by other applications, such as
traditional client applications, Web-based applications, or even other XML
Web services. As a result, XML Web services technology is rapidly moving
application development and deployment into the highly distributed
environment of the Internet.

If you have used earlier versions of ASP technology, you will immediately
notice the improvements that ASP.NET and Web Forms offers. For example,
you can develop Web Forms pages in any language that supports the .NET
Framework. In addition, your code no longer needs to share the same file
with your HTTP text (although it can continue to do so if you prefer). Web
Forms pages execute in native machine language because, like any other
managed application, they take full advantage of the runtime. In contrast,
unmanaged ASP pages are always scripted and interpreted. ASP.NET pages
are faster, more functional, and easier to develop than unmanaged ASP
pages because they interact with the runtime like any managed application.
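
A minimal sketch of such a compiled Web Forms page, written as a code-behind class (the page and control names are illustrative assumptions):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Code-behind for a hypothetical TrialList.aspx page; the page logic is
// compiled and JIT-executed by the runtime rather than interpreted as script.
public class TrialListPage : Page
{
    protected Label StatusLabel;   // wired to a control declared in the .aspx markup

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            StatusLabel.Text = "Rendered by compiled managed code.";
        }
    }
}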

The .NET Framework also provides a collection of classes and tools to aid in development and consumption of XML Web services applications. XML Web services are built on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format), and WSDL (the Web Services Description Language). The .NET Framework is built on these standards to promote interoperability with non-Microsoft solutions.

For example, the Web Services Description Language tool included with
the .NET Framework SDK can query an XML Web service published on the
Web, parse its WSDL description, and produce C# or Visual Basic source code
that your application can use to become a client of the XML Web service. The
source code can create classes derived from classes in the class library that
handle all the underlying communication using SOAP and XML parsing.
Although you can use the class library to consume XML Web services directly,
the Web Services Description Language tool and the other tools contained in
the SDK facilitate your development efforts with the .NET Framework.

If you develop and publish your own XML Web service, the .NET Framework
provides a set of classes that conform to all the underlying communication
standards, such as SOAP, WSDL, and XML. Using those classes enables you
to focus on the logic of your service, without concerning yourself with the
communications infrastructure required by distributed software development.
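
A minimal sketch of publishing such a service (the service name, method and return value are illustrative): deriving from WebService and marking a method with the WebMethod attribute is enough for the framework to expose it over SOAP and generate its WSDL description.

using System.Web.Services;

// Hypothetical .asmx code-behind exposing one operation over SOAP.
public class DrugLookupService : WebService
{
    [WebMethod]
    public string GetDrugStatus(int drugId)
    {
        // A real implementation would query the trials database.
        return "Drug " + drugId + ": under trial";
    }
}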

Finally, like Web Forms pages in the managed environment, your XML Web
service will run with the speed of native machine language using the scalable
communication of IIS.

Programming with the .NET Framework

This section describes the programming essentials you need to build .NET
applications, from creating assemblies from your code to securing your
application. Many of the fundamentals covered in this section are used to
create any application using the .NET Framework. This section provides
conceptual information about key programming concepts, as well as code
samples and detailed explanations.

Accessing Data with ADO.NET

Describes the ADO.NET architecture and how to use the ADO.NET classes to
manage application data and interact with data sources including Microsoft
SQL Server, OLE DB data sources, and XML.
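
As a minimal sketch of this pattern (the connection string, table and column names are assumptions for illustration, not the project's actual schema):

using System;
using System.Data.SqlClient;

public class DrugReaderDemo
{
    public static void Main()
    {
        // Hypothetical local SQL Server database and table.
        string connectionString =
            "server=(local);database=DrugDB;integrated security=SSPI";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = new SqlCommand(
                "SELECT DrugId, DrugName FROM DrugMaster", connection);
            connection.Open();

            SqlDataReader reader = command.ExecuteReader();
            while (reader.Read())
            {
                Console.WriteLine(reader["DrugId"] + " - " + reader["DrugName"]);
            }
            reader.Close();
        }
    }
}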

Accessing Objects in Other Application Domains using .NET Remoting

Describes the various communications methods available in the .NET Framework for remote communications.

Accessing the Internet

Shows how to use Internet access classes to implement both Web- and
Internet-based applications.

Creating Active Directory Components

Discusses using the Active Directory Services Interfaces.

Creating Scheduled Server Tasks

Discusses how to create events that are raised on recurring intervals.

Developing Components

Provides an overview of component programming and explains how those concepts work with the .NET Framework.

Developing World-Ready Applications

Explains the extensive support the .NET Framework provides for developing
international applications.

Discovering Type Information at Runtime

Explains how to get access to type information at run time by using reflection.

Drawing and Editing Images

Discusses using GDI+ with the .NET Framework.

Emitting Dynamic Assemblies

Describes the set of managed types in the System.Reflection.Emit namespace.

Employing XML in the .NET Framework

Provides an overview of a comprehensive and integrated set of classes that work with XML documents and data in the .NET Framework.

Extending Metadata Using Attributes

Describes how you can use attributes to customize metadata.

Generating and Compiling Source Code Dynamically in Multiple Languages

Explains the .NET Framework SDK mechanism called the Code Document
Object Model (CodeDOM) that enables the output of source code in multiple
programming languages.

Grouping Data in Collections

Discusses the various collection types available in the .NET Framework, including stacks, queues, lists, arrays, and structs.

Handling and Raising Events

Provides an overview of the event model in the .NET Framework.

Handling and Throwing Exceptions

Describes error handling provided by the .NET Framework and the fundamentals of handling exceptions.
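
A short sketch of this model (the exception type and method are illustrative): a framework exception is caught and re-thrown as an application-specific one.

using System;

// Hypothetical application-specific exception type.
public class TrialDataException : ApplicationException
{
    public TrialDataException(string message, Exception inner)
        : base(message, inner)
    {
    }
}

public class ExceptionDemo
{
    public static int ParseDosage(string text)
    {
        try
        {
            return Int32.Parse(text);
        }
        catch (FormatException ex)
        {
            // Wrap the low-level error in a domain-level exception.
            throw new TrialDataException("Invalid dosage value: " + text, ex);
        }
    }
}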

Overall Description:

Product Perspective:

The software application has been developed to act as an easy monitoring tool for the investigators to access the necessary information and to update other information regarding drug trials. The normal latency that exists in the system is eliminated, and job scheduling becomes much faster within the system.

Basic Structure Of The System

 Maintains and manages the information of all the drugs in the Institute, both those about to be put to trial and those that have passed trials.

 Maintains and manages the list of all reaction agents which comprise each drug.

 Maintains and manages the list of all the generally and occasionally occurring allergic symptoms and the drugs to counter them.

 Maintains and manages the information regarding the various usage conditions associated with various drugs.

 Maintains and manages the information of all the individuals who express their willingness to participate in the drug trial programs.

 Specifically maintains the information regarding the drug trial programs pertaining to each and every drug, along with the individuals who are participating in the trials and their outcomes.

Product Functions

The major functions that the product executes are divided into two categories.

1. Administrative Functions.
2. User Interface Functions.

Administrative Functions:

These functions take care of the actual data interpretation standards at the level of the administrative officer. All transactions that need consistency run through this part of the system. All master table transactions with respect to data insertion, deletion and updation are totally managed by the system administrators. The generic information maintained by the administrators is listed below; a brief data-access sketch follows the list:

 Drugs information management

 Reaction Agents information management

 Allergies information management

 Individuals information management

 Drug Trials management

 Security information management
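
As a minimal data-access sketch of such a master-table transaction (the table, columns and connection string are illustrative assumptions, not the project's actual schema):

using System.Data.SqlClient;

public class DrugMasterAdmin
{
    // Inserts one row into a hypothetical DrugMaster table.
    public static void InsertDrug(int drugId, string drugName)
    {
        string connectionString =
            "server=(local);database=DrugDB;integrated security=SSPI";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = new SqlCommand(
                "INSERT INTO DrugMaster (DrugId, DrugName) VALUES (@id, @name)",
                connection);
            command.Parameters.Add(new SqlParameter("@id", drugId));
            command.Parameters.Add(new SqlParameter("@name", drugName));

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}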

User Interface Functions

The general functions taken care of at the user level are as follows: the Drug Trial Investigators can view information regarding the various drugs existing within the institute at any time. They can also view the drug trial information on individuals. The system also helps the agents to study the outcomes of each trial.

Chapter 5

Design Document

 The entire system is projected with a physical diagram which specifies the actual storage parameters that are physically necessary for any database to be stored on the disk. The overall system's existential idea is derived from this diagram.

 The relations within the system are structured through a conceptual ER-diagram, which not only specifies the existential entities but also the standard relations through which the system exists and the cardinalities that are necessary for the system state to continue.

 The context-level DFD is provided to give an idea of the functional inputs and outputs that are achieved through the system. It depicts the input and output standards at the high level of the system's existence.

Data Flow Diagrams

 This diagram serves two purposes:

 Provides an indication of how data is transformed as it moves through the system.

 Depicts the functions and sub-functions that transform the dataflow.

 The data flow diagram provides additional information that is used during the analysis of the information domain, and serves as a basis for the modeling of functions.

 The description of each function presented in the DFD is contained in a process specification called a PSPEC.

[Context-level DFD: the Employee Information, Drug Information, Allergic Information, Drug Trials Information, Individual Trial Information, Drug Trial History and Security modules feed the Drug Trials Management System, which in turn produces reports on employee information, registered drugs, allergic information, drug trials management, trial participation, trials history and security information.]

[Level-1 DFD, administration: a new investigator record is verified and checked against the Designation master before it is inserted.]

[Level-1 DFD, employee registration: a new employee record is verified against the Department master and the Designation master before the commit; drug records are similarly verified against the Drug master and the Reaction Agent master.]

[Level-1 DFD, individual registration: a new individual's condition status is verified and checked against the Individual master and the Clinical Condition master's condition codes before the commit.]

[Level-1 DFD, drug trials: a new drug trials record is verified and checked against the Drug master before the commit.]

[Level-1 DFD, trial participation: registered individuals are validated, the signed consent form is checked, initial conditions are recorded, a drug trial ID is generated and validated against the Drug master, and the trial participation is stored in the database.]

ER-Diagrams

 The Entity Relationship Diagram (ERD) depicts the relationships between the data objects. The ERD is the notation that is used to conduct the data modeling activity; the attributes of each data object noted in the ERD can be described using a data object description.

 The set of primary components that are identified by the ERD are data objects, attributes, relationships and various types of indicators.

 The primary purpose of the ERD is to represent data objects and their relationships.

Physical Diagram

[The physical diagram appears here in the original document.]

ER Diagram

[The ER diagram appears here in the original document.]

Unified Modeling Language Diagrams

 The Unified Modeling Language allows the software engineer to express an analysis model using a modeling notation that is governed by a set of syntactic, semantic and pragmatic rules.

 A UML system is represented using five different views that describe the system from distinctly different perspectives. Each view is defined by a set of diagrams, as follows.

 User Model View

i. This view represents the system from the user's perspective.

ii. The analysis representation describes a usage scenario from the end-user's perspective.

Structural Model View

 In this model the data and functionality are viewed from inside the system.

 This model view models the static structures.

Behavioral Model View

 It represents the dynamic or behavioral aspects of the system, depicting the interactions and collaborations between the various structural elements described in the user model and structural model views.

Implementation Model View

 In this view the structural and behavioral aspects of the system are represented as they are to be built.

Environmental Model View

In this view the structural and behavioral aspects of the environment in which the system is to be implemented are represented.

UML is specifically constructed through two different domains:

 UML analysis modeling, which focuses on the user model and structural model views of the system.

 UML design modeling, which focuses on the behavioral modeling, implementation modeling and environmental model views.

Use Case Diagrams

Use cases model the system from the end user's point of view, with the following objectives:

 To define the functional and operational requirements of the system by defining a scenario of usage.

 To provide a clear and unambiguous description of how the end user and the system interact with one another.

 To provide a basis for validation testing.

[Use case diagram (a), high level, Administrator: the administrator manages employees information; departments and designations information; drug registration and usage information; allergies and anti-allergic drugs information; individuals information; drug trials information; and security information.]

[Use case diagram (b), elaborated, Administrator: for each request (new employee registration, new drug registration, new allergy registration, new individual registration, login), the required information is collected, cross-checked against its governing parameters (department and designation for employees, implementation parameters for drugs, consistency parameters for allergies, signed consent forms for individuals, login name and password for logins), and then stored.]

[Use case diagram (a), high level, Investigator: the investigator works with drug trials information, clinical conditions information, individual clinical condition information, drug reaction agents information, and individual trial and history information.]

[Use case diagram (b), elaborated, Investigator: after an authenticated login, the investigator raises requests to register drug trials, clinical conditions, individual clinical conditions, reaction agents information, and individual trial and history information; each request is cross-checked against the related records (trial participants, consistent clinical conditions, the drug and the individual's initial condition, the associated cross-referenced drugs, and the applicable trials and individuals) before the information is stored.]

Class Collaboration Diagrams

1) Employees Information Collaboration

Employees Master
Attributes: emp-no, emp-name, emp-dob, emp-addr, emp-phone, emp-email, emp-gender, emp-doj, emp-dept-no, emp-desig-id, emp-mgr-no
Operations: Insert(), Delete(), Update(), Search(), Validate-dept-no(), Validate-desig-id(), Validate-mgr-no()

Department Master
Attributes: dept-no, dept-name, dept-descrip, highest-desig-id
Operations: Insert(), Delete(), Update(), Search(), Validate-desig-id()

Designation Master
Attributes: desig-id, desig-name, desig-descrip
Operations: Insert(), Delete(), Update(), Search()

Sequence Diagram

[Sequence diagram, trial registration: after login, an individual is registered and the initial conditions are recorded against the Conditions master, or the registration is denied; the consent form is validated, a drug trial ID is generated and validated, and the trial participation ID is validated before the record is stored in the database.]

Chapter 6

Coding

Program Design Language

 The program design language is also called structured English or pseudocode. PDL is a generic reference for a design language; PDL looks like a modern programming language. The difference between PDL and a real programming language lies in the narrative text embedded directly within PDL statements.

The characteristics required by a design language are:

 A fixed system of keywords that provide for all structured constructs, data declaration and modularity characteristics.

 A free syntax of natural language that describes processing features.

 Data declaration facilities that should include both simple and complex data structures.

 Subprogram definition and calling techniques that support various modes of interface description.

PDL syntax should include constructs for subprogram definition, interface description, data declaration, techniques for structuring, condition constructs, repetition constructs and I/O constructs.

PDL can be extended to include keywords for multitasking and/or concurrent processing, interrupt handling and interprocess synchronization. The application design for which PDL is to be used should dictate the final form for the design language.
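
As a brief illustrative PDL fragment (the procedure and data names are hypothetical), note how structured keywords and narrative text mix freely:

PROCEDURE RegisterTrialParticipant;
BEGIN
    obtain the individual's details and the signed consent form;
    IF the consent form is not signed THEN
        reject the registration and record the reason;
    ELSE
        validate the individual's clinical condition against the condition master;
        generate a drug trial participation ID;
        store the participation record in the database;
    ENDIF
END RegisterTrialParticipant;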

Chapter 7

Testing & Debugging Strategies

Testing

Testing is the process of detecting errors. Testing performs a very critical role for quality assurance and for ensuring the reliability of software. The results of testing are also used later on, during maintenance.

Psychology of Testing
The aim of testing is often taken to be demonstrating that a program works by showing that it has no errors. However, the basic purpose of the testing phase is to detect the errors that may be present in the program. Hence one should not start testing with the intent of showing that a program works; the intent should be to show that a program doesn't work. Testing is the process of executing a program with the intent of finding errors.

Testing Objectives
The main objective of testing is to uncover a host of errors, systematically
and with minimum effort and time. Stating formally, we can say,
 Testing is a process of executing a program with the intent of
finding an error.
 A successful test is one that uncovers an as yet undiscovered
error.
 A good test case is one that has a high probability of finding error,
if it exists.
 The tests are inadequate to detect possibly present errors.
 The software more or less confirms to the quality and reliable
standards.

Levels of Testing
In order to uncover the errors present in different phases we have the
concept of levels of testing. The basic levels of testing are as shown below…

Client Needs    →    Acceptance Testing
Requirements    →    System Testing
Design          →    Integration Testing
Code            →    Unit Testing

System Testing

The philosophy behind testing is to find errors. Test cases are devised with
this in mind. A strategy employed for system testing is code testing.

Code Testing:

This strategy examines the logic of the program. To follow this method we developed test data that resulted in executing every instruction in the program and module, i.e. every path is tested. Systems are not designed as entire systems, nor are they tested as single systems. To ensure that the coding is perfect, two types of testing are performed on all systems.

Types Of Testing

 Unit Testing
 Link Testing

Unit Testing
Unit testing focuses verification effort on the smallest unit of software i.e.
the module. Using the detailed design and the process specifications testing
is done to uncover errors within the boundary of the module. All modules
must be successful in the unit test before integration testing begins.

In this project each service can be thought of as a module. There are several modules, such as Login, HWAdmin, MasterAdmin, Normal User, and PManager. Each module has been tested by giving it different sets of inputs, both while developing the module and after finishing its development, so that each module works without any error. The inputs are validated when accepted from the user.

In this application the developer tests the programs built up as a system. Software units in a system are the modules and routines that are assembled and integrated to form a specific function. Unit testing is first done on the modules, independently of one another, to locate errors. This enables errors to be detected early; through this, errors resulting from interactions between modules are initially avoided.
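
A minimal sketch of such a module-level test, assuming the NUnit test framework and a hypothetical input-validation helper:

using NUnit.Framework;

// Hypothetical validation helper of the kind each module applies to its inputs.
public class DosageValidator
{
    public static bool IsValid(int dosage)
    {
        return dosage > 0 && dosage <= 1000;   // assumed valid range
    }
}

[TestFixture]
public class DosageValidatorTests
{
    [Test]
    public void AcceptsInRangeDosage()
    {
        Assert.IsTrue(DosageValidator.IsValid(250));
    }

    [Test]
    public void RejectsNegativeDosage()
    {
        Assert.IsFalse(DosageValidator.IsValid(-5));
    }
}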

Link Testing

Link testing does not test the software itself but rather the integration of each module in the system. The primary concern is the compatibility of each module. The programmer tests cases where modules are designed with different parameters (length, type, etc.).

Integration Testing

After the unit testing we have to perform integration testing. The goal here
is to see if modules can be integrated properly, the emphasis being on
testing interfaces between modules. This testing activity can be considered
as testing the design and hence the emphasis on testing module
interactions.

In this project, integrating all the modules forms the main system. When integrating all the modules I have checked whether the integration affects the working of any of the services, by giving different combinations of inputs with which the services ran perfectly before integration.

System Testing

Here the entire software system is tested. The reference document for this process is the requirements document, and the goal is to see if the software meets its requirements.

Here the entire system has been tested against the requirements of the project, and it has been checked whether all the requirements of the project have been satisfied or not.

Acceptance Testing

Acceptance Test is performed with realistic data of the client to demonstrate that the software is working satisfactorily. Testing here is focused on the external behavior of the system; the internal logic of the program is not emphasized.

In this project I have collected some realistic data and tested whether the project is working correctly or not.

Test cases should be selected so that the largest number of attributes of an equivalence class are exercised at once. The testing phase is an important part of software development. It is the process of finding errors and missing operations, and also a complete verification to determine whether the objectives are met and the user requirements are satisfied.

White Box Testing

This is a unit testing method where one unit is taken at a time and tested thoroughly at the statement level to find the maximum possible errors. I tested stepwise every piece of code, taking care that every statement in the code is executed at least once. White box testing is also called Glass Box Testing.

I have generated a list of test cases, sample data, which is used to check all
possible combinations of execution paths through the code at every module
level.

Black Box Testing


This testing method considers a module as a single unit and checks the unit at the interface and communication level with other modules, rather than getting into details at the statement level. Here the module is treated as a black box that takes some input and generates output. Output for a given set of input combinations is forwarded to other modules.

Criteria Satisfied by Test Cases

1) Test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing.

2) Test cases that tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

Chapter 8

User Manual

Installation

 The database, as it is developed in MS SQL Server 2000, can be installed only by using the export and import concepts.

 Using .NET components like ASP.NET and VB.NET needs proper deployment as per the general specifications developed.
Chapter 9

Conclusions And Recommendations

The entire project has been developed and deployed as per the requirements stated by the user, and it is found to be bug-free as per the testing standards that were implemented. Any errors not traced against the specification will be addressed in the coming versions, which are planned to be developed in the near future. The system at present does not take care of money payment methods, as the consolidated constructs need SSL standards and are critical to initiate in the first phase; the application of credit card transactions is planned as a development phase in the coming days. The system needs more elaborate technical management for its inception and evolution.



Bibliography:
References for the Project Development were taken from the
following Books and Web Sites.

SQL Server

Mastering SQL Server 2000 by Gunderloy and Jorden, BPB Publications

Beginning SQL Server 2000 by Thearon Willis, Wrox Publications

Visual Basic .NET

Programming Visual Basic .NET, Microsoft Press

Visual Basic .NET by McDonald, Microsoft Press
