Drug Management System

Chapter 1
Abstract
Drug Audition & Research Management is a system that concentrates on the associated standards of the medical diagnosis and research and development environments. The major problem in the drugs and pharmaceuticals industry is to design or invent a new bio-molecular combination of a chemical. The new bio-molecular combination should have the ability to target the ailment that exists in the body and fight against it. In the initial stages, while the drug is under the preparatory stages of experiment, it is tested on living organisms belonging to mammalian species. Once the drug trial experiments on these animals reach a proper status of precision and reliability, the drugs are checked once again upon human beings who are actually affected by such problems. Individuals who suffer from the relevant ailments are identified and requested to participate in the drug trials voluntarily. The participation of the individuals is governed by the bylaws and legal procedures that exist under the human and civilian rights of the constitution governed by the European Union. The application grows in size through the database as the research activity within the organization increases. After a point, the search for required information takes a great deal of time and costs the organization both time and money.
Chapter 2

Project Synopsis
The entire project has been developed keeping the distributed client-server computing technology in mind. The specifications have been normalized up to 3NF to eliminate all the anomalies that may arise due to the database transactions executed by the actual administrators and users. The user interfaces are browser-based to give distributed accessibility to the overall system. The internal database has been selected as MS SQL Server 2000, which was chosen because it provides constructs for high-level reliability and security. The front end was built using HTML standards applied with the dynamism of ASP.NET, and the communicating client was designed using C#.NET. At all levels, care was taken to check that the system maintains data consistency with proper business validations. The database connectivity was planned using ADO.NET database connectivity. Authorization was cross-checked at all stages, and user-level accessibility has been restricted into two zones: the administrative zone and the normal user zone.
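As a sketch of the planned ADO.NET connectivity, the following minimal C# fragment opens a SQL Server connection and runs one parameterized query; the server, database, table, and column names are hypothetical placeholders, not the project's actual schema.

```
using System;
using System.Data.SqlClient;

class ConnectivityDemo
{
    static void Main()
    {
        // Connection-string values are placeholders for this sketch.
        string connStr = "Server=localhost;Database=DrugDB;Integrated Security=SSPI;";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            // A simple parameterized lookup against a hypothetical Drugs table.
            SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Drugs WHERE DrugName = @name", conn);
            cmd.Parameters.Add("@name", System.Data.SqlDbType.VarChar, 50).Value = "Aspirin";
            int matches = (int)cmd.ExecuteScalar();
            Console.WriteLine("Matching drugs: " + matches);
        }
    }
}
```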
Information should be collected and integrated at all the different levels of the organization for smooth functioning and coordination of the system. The system needs proper handling of the investigation and of the drug-trial participants, along with the clinical condition for which they are being treated or experimented upon. The actual system, under the manual process, needs a huge amount of manpower for managing and maintaining the information; any miscoordination among the existing users within the working system can cause an overall disturbance in the data management standards.
Manual Process

First Phase

[Flowchart: collect information about the new drug; identify the associated reaction agents; search the ledger of reaction agents; identify the drug along with its usage conditions; register the reference of the drug along with the usage conditions; prepare a reference to the required drug.]
Second Phase

[Flowchart: make a registration for a new drug trial; search the listed drugs that are authorized for trials; register the drug in one of the existing master ledgers.]
The development of the new system contains the following activities, which try to automate the entire process keeping the database integration approach in view.

2. The system at any point of time can provide the details of all the drugs that exist within the system, along with their reaction-agent combinations.

3. The system can provide the generic details of all the allergies and the drugs that can be applied against them, with a click of the mouse.
5. The system, with respect to the necessities, can identify all the history details of the real participants along with the outcomes of their results.

6. The system, with respect to the necessities, can identify all the history details of the trial participants along with the outcomes of their results.

7. The system, with respect to the necessities, can provide the status of the research and development process that is currently under schedule within the organization.
Chapter 3

Feasibility Report
Feasibility Study

The user components are designed to handle the transactional states that arise upon the system whenever a general employee within the organization visits the user interface to enquire for the required data. The normal user interfaces are associated with the environment mostly for the sake of standardization. The user components are scheduled to accept parameterized information from the user as per the system's necessity.
GUI's

To give the user high flexibility upon the system, the overall interface has been developed in graphical user interface mode. The applied interface is through a browser-based environment.

The GUIs at the top level have been categorized as:
The operational or generic user interface helps the users of the system in transactions through the existing data and required services. The operational user interface also helps the ordinary users in managing their own information in a customized manner, as per the assisted flexibilities.
Number of Modules

The system, after careful analysis, has been identified to be presented with the following modules.

The first module maintains the information of all the employees who work for this organization. Each employee is exclusively associated with a specific department and an authorized designation. The module manages all the transactional relations that generically arise as and when the system is executed, as per the requirements.

A further module checks and verifies the authenticity of the individuals who are deputed to the drug trials.

Another module records the information of the individuals who have been put on the drug trials, together with the related information regarding their trials, in a secured format; within the execution domain, the system allows only the specific information to be viewed by the individual concerned.
SOFTWARE REQUIREMENTS

HARDWARE REQUIREMENTS

Hard Disk : 20 GB
Feasibility

The system is self-explanatory and does not need any sophisticated training. As the system has been built by concentrating on graphical user interface concepts, the application can be handled very easily even by a novice user. The overall time a user needs to get trained on it is less than 15 minutes.

The system has been provided with menu-driven and button interaction methods, which make the user a master of it as he starts working through the environment. As the software packages used in developing this application are very economical and readily available in the market, the only time lost by the customer is the installation time.
Time Based:

If the user has to view the details of a drug being put to test, he should have readily available information such as the chemical composition and the reaction agents that can cause reactions. Under the manual process this always costs extra time, and the amount of drug detail that can be collected in one instance is always meagre. Hence a user always spends a lot of time and has to be content with whatever is within his reach.
But the same scenario, if handled through the intranet application, will always give the user a ready list displayed within a few seconds of the request being placed. The user need not even move physically from his place (i.e., from his section) to get the required information about drugs, or other details such as information from other sections. The user has the satisfaction of processing the information of his choice irrespective of geographical distances, which are major hurdles in a manual system.
Cost Based:

In the manual process of the DRUG MANAGEMENT SYSTEM, the information search and storage needs extra manpower assignment, which potentially costs the organization a perennial investment of funds for the sake of salaries. The information interrelations among the different areas of the system must therefore be handled carefully.
Chapter 4

Analysis Report
SRS Document:

Intended Audience and Reading Suggestions

Document Conventions:

The overall documentation for this project uses the modeling standards recognized at the software-industry level:

The physical design, which states the overall data search through the relational keys whenever a transaction is implemented on the weak entities.

The flow-chart standards, applied at the required states where the functionality of the operations needs more concentration.
Scope of The Development Project:

MS SQL Server 2000 is one of the many database services that plug into a client/server model. It works efficiently to manage resources, such as database information, among the multiple clients requesting and sending data.

SQL is an interactive language used to query the database and access its data. SQL has the following features:

1. It is a unified language.

3. It is a non-procedural language: a query states what data is wanted, not how it is to be retrieved.
Introduction to MS SQL Server 2000

Introduction

SQL Server™ 7.0 is a scalable, reliable, and easy-to-use product that will provide a solid foundation for application design for the next 20 years.

Scalability: The new disk format and storage subsystem provide storage that is scalable from very small to very large databases. Specific changes include simplified mapping of database objects to files, which eases management and enables tuning flexibility; database objects can be mapped to specific disks for load balancing.
database applications. Many complex server operations are automated.

Dynamic Row-Level Locking: Full row-level locking is implemented for both data rows and index entries. Dynamic locking automatically chooses the optimal level of lock (row, page, multiple page, table) for all database operations. This feature provides improved concurrency with no tuning. The database also supports the use of "hints" to force a particular level of locking.

Large Memory Support: SQL Server 7.0 Enterprise Edition will support memory addressing greater than 4 GB, in conjunction with Windows NT Server 5.0, Alpha processor-based systems, and other techniques.
Overview

The original code was inherited from Sybase and designed for eight-megabyte Unix systems in 1983. The new formats improve manageability and scalability and allow the server to easily scale from low-end to high-end systems, improving performance and manageability.
Benefits
There are many benefits of the new on-disk layout, including:
Improved scalability and integration with Windows NT Server
Better performance with larger I/Os
Stable record locators allow more indexes
More indexes speed decision support queries
Simpler data structures provide better quality
Greater extensibility, so that subsequent releases will have a cleaner
development process and new features are faster to implement
Most relational database products are divided into relational engine and
storage engine components. This document focuses on the storage engine,
which has a variety of subsystems:
Mechanisms that store data in files and find pages, files, and extents.
Record management for accessing the records on pages.
Access methods using b-trees that are used to quickly find records
using record identifiers.
Concurrency control for locking, used to implement the physical lock
manager and locking protocols for page- or record-level locking.
I/O buffer management.
Logging and recovery.
Utilities for backup and restore, consistency checking, and bulk data
loading.
Overview
SQL Server 7.0 is much more integrated with Windows NT Server than any of
its predecessors. Databases are now stored directly in Windows NT Server
files. SQL Server is being stretched towards both the high and low end.
Files
SQL Server 7.0 creates a database using a set of operating system files, with
a separate file used for each database. Multiple databases can no longer
share the same file. There are several important benefits to this
simplification. Files can now grow and shrink, and space management is
greatly simplified. All data and objects in the database, such as tables, stored
procedures, triggers, and views, are stored only within these operating
system files:
Primary data file: This file is the starting point of the database. Every database has only one primary data file, and all system tables are always stored in the primary data file.

Secondary data files: These files are optional and can hold all data and objects that are not in the primary data file. Some databases may not have any secondary data files, while others have multiple secondary data files.
Log files: These files hold all of the transaction log information used to recover the database. Every database has at least one log file.
When a database is created, all the files that comprise the database are
zeroed out (filled with zeros) to overwrite any existing data left on the disk
by previously deleted files. This improves the performance of day-to-day
operations.
Filegroups
A database now consists of one or more data files and one or more log files.
The data files can be grouped together into user-defined filegroups. Tables
and indexes can then be mapped to different filegroups to control data
placement on physical disks. Filegroups are a convenient unit of
administration, greatly improving flexibility. SQL Server 7.0 will allow you to
back up a different portion of the database each night on a rotating schedule
by choosing which filegroups to back up. Filegroups work well for
sophisticated users who know where they want to place indexes and tables.
SQL Server 7.0 can work quite effectively without filegroups.
Log files are never a part of a filegroup. Log space is managed separately
from data space.
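As an illustration of the rotating filegroup backup described above, the following sketch issues a filegroup backup through ADO.NET; the database, filegroup, and disk path are hypothetical.

```
using System.Data.SqlClient;

class FilegroupBackup
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=localhost;Database=master;Integrated Security=SSPI;"))
        {
            conn.Open();
            // Back up only one filegroup tonight; the other filegroups
            // take their turn on other nights of the rotation.
            SqlCommand cmd = new SqlCommand(
                @"BACKUP DATABASE DrugDB
                  FILEGROUP = 'TrialsFG'
                  TO DISK = 'D:\Backups\DrugDB_TrialsFG.bak'", conn);
            cmd.ExecuteNonQuery();
        }
    }
}
```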
The rest of the data in the database can be placed on other files in another filegroup, located on a second disk.
Space Management
There are many improvements in the allocation of space and the management of space within files. The data structures that keep track of
page-to-object relationships were redesigned. Instead of linked lists of
pages, bitmaps are used because they are cleaner and simpler and facilitate
parallel scans. Now each file is more autonomous; it has more data about
itself, within itself. This works well for copying or mailing database files.
SQL Server now has a much more efficient system for tracking table space.
The changes enable
Growing and shrinking files
Better support for large I/O
Row space management within a table
Less expensive extent allocations
SQL Server is very effective at quickly allocating pages to objects and reusing
space freed by deleted rows. These operations are internal to the system and
use data structures not visible to users, yet are occasionally referenced in
SQL Server messages.
File Shrink
The server checks the space usage in each database periodically. If a
database is found to have a lot of empty space, the size of the files in the
database will be reduced. Both data and log files can be shrunk. This activity
occurs in the background and does not affect any user activity within the
database. You can also shrink files individually or as a group using SQL Server Enterprise Manager, or with the DBCC commands SHRINKDATABASE and SHRINKFILE.
SQL Server shrinks files by moving rows from pages at the end of the file to
pages allocated earlier in the file. In an index, nodes are moved from the end
of the file to pages at the beginning of the file. In both cases pages are freed
at the end of files and then returned to the file system. Databases can only be shrunk to the point at which no free space remains; there is no data compression.
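For example, the DBCC commands named above can be issued directly; the database and logical file names here are hypothetical.

```
using System.Data.SqlClient;

class ShrinkDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=localhost;Database=DrugDB;Integrated Security=SSPI;"))
        {
            conn.Open();
            // Shrink the whole database, leaving 10 percent free space...
            new SqlCommand("DBCC SHRINKDATABASE (DrugDB, 10)", conn).ExecuteNonQuery();
            // ...or shrink a single file to a 100 MB target.
            new SqlCommand("DBCC SHRINKFILE (DrugDB_Data, 100)", conn).ExecuteNonQuery();
        }
    }
}
```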
File Grow
Automated file growth greatly reduces the need for database management
and eliminates many problems that occur when logs or databases run out of
space. When creating a database, an initial size for the file must be given.
SQL Server creates the data files based on the size provided by the database creator; as data is added to the database, these files fill. By default, data files are allowed to grow as much as necessary until disk space is exhausted.
Alternatively, data files can be configured to grow automatically, but only to
a predefined maximum size. This prevents disk drives from running out of
space.
Allowing files to grow automatically can cause fragmentation of those files if
a large number of files share the same disk. Therefore, it is recommended
that files or file groups be created on as many different local physical disks as
available. Place objects that compete heavily for space in different file
groups.
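A sketch of such a definition follows, with hypothetical names, paths, and sizes: the SIZE, FILEGROWTH, and MAXSIZE clauses give the initial size, the automatic growth increment, and the predefined maximum discussed above.

```
using System.Data.SqlClient;

class CreateDbDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=localhost;Database=master;Integrated Security=SSPI;"))
        {
            conn.Open();
            new SqlCommand(
                @"CREATE DATABASE DrugDB
                  ON ( NAME = DrugDB_Data, FILENAME = 'D:\Data\DrugDB.mdf',
                       SIZE = 50MB, FILEGROWTH = 10MB, MAXSIZE = 200MB )
                  LOG ON ( NAME = DrugDB_Log, FILENAME = 'E:\Logs\DrugDB.ldf',
                       SIZE = 10MB, FILEGROWTH = 5MB, MAXSIZE = 50MB )",
                conn).ExecuteNonQuery();
        }
    }
}
```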
Physical Database Architecture
Microsoft SQL Server version 7.0 introduces significant improvements in the
way data is stored physically. These changes are largely transparent to
general users, but do affect the setup and administration of SQL Server
databases.
Data pages hold data rows with all data except text, ntext, and image data.
Torn page detection helps ensure database consistency. In SQL Server 7.0,
pages are 8 KB, while Windows NT does I/O in 512-byte segments. This
discrepancy makes it possible for a page to be partially written. This could
happen if there is a power failure or other problem between the time when
the first 512-byte segment is written and the completion of the 8 KB of I/O.
There are several ways to deal with this. One way is to use battery-backed
cached I/O devices that guarantee all-or-nothing I/O. If you have one of
these systems, torn page detection is unnecessary.
In SQL Server 7.0, you can enable torn page detection for a particular
database by turning on a database option.
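In SQL Server 2000 syntax this option can be set as below (SQL Server 7.0 exposed the same option through sp_dboption); the database name is hypothetical.

```
using System.Data.SqlClient;

class TornPageOption
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=localhost;Database=master;Integrated Security=SSPI;"))
        {
            conn.Open();
            // Turn on the torn page detection database option.
            new SqlCommand("ALTER DATABASE DrugDB SET TORN_PAGE_DETECTION ON",
                           conn).ExecuteNonQuery();
        }
    }
}
```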
Locking Enhancements
Row-Level Locking
SQL Server 6.5 introduced a limited version of row locking on inserts. SQL
Server 7.0 now supports full row-level locking for both data rows and index
entries. Transactions can update individual records without locking entire
pages. Many OLTP applications can experience increased concurrency,
especially when applications append rows to tables and indexes.
Dynamic Locking
SQL Server 7.0 has a superior locking mechanism that is unique in the
database industry. At run time, the storage engine dynamically cooperates
with the query processor to choose the lowest-cost locking strategy, based
on the characteristics of the schema and query.
Lock Modes
SQL Server locks resources using different lock modes that determine how
the resources can be accessed by concurrent transactions.
SQL Server uses several resource lock modes, such as shared, update, exclusive, intent, and schema locks.
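As a sketch of the locking hints mentioned earlier, the query below forces row-level update locks inside a transaction; the table and column names are hypothetical.

```
using System.Data.SqlClient;

class LockHintDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=localhost;Database=DrugDB;Integrated Security=SSPI;"))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            // ROWLOCK and UPDLOCK hint the engine to take row-level
            // update locks instead of letting it choose the granularity.
            SqlCommand cmd = new SqlCommand(
                "SELECT TrialID FROM DrugTrials WITH (ROWLOCK, UPDLOCK) " +
                "WHERE TrialID = 1", conn, tx);
            cmd.ExecuteScalar();
            // ... perform the update under the held lock, then:
            tx.Commit();
        }
    }
}
```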
Overview
Fundamental changes were made in table organization. This new
organization allows the query processor to make use of more nonclustered
indexes, greatly improving performance for decision support applications.
The query optimizer has a wide set of execution strategies and many of the
optimization limitations of earlier versions of SQL Server have been removed.
In particular, SQL Server 7.0 is less sensitive to index-selection issues,
resulting in less tuning work.
Table Organization
The data for each table is now stored in a collection of 8-KB data pages. Each
data page has a 96-byte header containing system information such as the
ID of the table that owns the page and pointers to the next and previous
pages for pages linked in a list. A row-offset table is at the end of the page.
Data rows fill the rest of the page.
SQL Server 7.0 tables use one of two methods to organize their data pages:
Clustered tables are tables that have a clustered index. The data rows
are stored in order based on the clustered index key. The data pages
are linked in a doubly linked list. The index is implemented as a b-tree
index structure that supports fast retrieval of the rows based on their
clustered index key values.
Heaps are tables that have no clustered index. There is no particular
order to the sequence of the data pages and the data pages are not
linked in a linked list.
Table Indexes

A SQL Server index is a structure associated with a table that speeds retrieval of the rows in the table. An index contains keys built from one or more columns in the table. These keys are stored in a structure, a b-tree, that allows SQL Server to quickly and efficiently find the row or rows associated with the key values. The two types of SQL Server indexes are clustered and nonclustered indexes.
Clustered Indexes
A clustered index is one in which the order of the values in the index is the
same as the order of the data stored in the table.
The clustered index contains a hierarchical tree. When searching for data
based on a clustered index value, SQL Server quickly isolates the page with
the specified value and then searches the page for the record or records with
the specified value. The lowest level, or leaf node, of the index tree is the
page that contains the data.
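For example, a clustered index on a hypothetical Drugs table would be created as follows; the data rows themselves are then kept in DrugID order.

```
using System.Data.SqlClient;

class ClusteredIndexDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=localhost;Database=DrugDB;Integrated Security=SSPI;"))
        {
            conn.Open();
            // The leaf level of this b-tree is the data pages themselves.
            new SqlCommand(
                "CREATE CLUSTERED INDEX IX_Drugs_DrugID ON Drugs (DrugID)",
                conn).ExecuteNonQuery();
        }
    }
}
```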
Nonclustered Indexes
Unicode Data
SQL Server now supports Unicode data types, which makes it easier to store
data in multiple languages within one database by eliminating the problem of
converting characters and installing multiple code pages. Unicode stores
character data using two bytes for each character rather than one byte.
There are 65,536 different bit patterns in two bytes, so Unicode can use one
standard set of bit patterns to encode each character in all languages,
including languages such as Chinese that have large numbers of characters.
Many programming languages also support Unicode data types.
The new data types that support Unicode are ntext, nchar, and nvarchar.
They are the same as text, char, and varchar, except for the wider range of
characters supported and the increased storage space used.
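A sketch using a hypothetical table: nvarchar stores two bytes per character, and C# strings (which are already Unicode) map to it directly through a parameter.

```
using System.Data;
using System.Data.SqlClient;

class UnicodeDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=localhost;Database=DrugDB;Integrated Security=SSPI;"))
        {
            conn.Open();
            new SqlCommand(
                "CREATE TABLE DrugNames (DrugID int, LocalName nvarchar(100))",
                conn).ExecuteNonQuery();
            SqlCommand insert = new SqlCommand(
                "INSERT INTO DrugNames (DrugID, LocalName) VALUES (1, @name)", conn);
            // NVarChar keeps the full Unicode string without code-page conversion.
            insert.Parameters.Add("@name", SqlDbType.NVarChar, 100).Value = "阿司匹林";
            insert.ExecuteNonQuery();
        }
    }
}
```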
Normalization
First normal form: Data is in first normal form if the data of the tables is moved into separate tables where the data in each table is of a similar type, giving each table a primary key, a unique label or identifier. This eliminates repeating groups of data.
Second normal form: Involves taking out data that is only dependent on part of the key.
Third normal form: Involves removing the transitive dependencies. This means getting rid of anything in the tables that does not depend solely on the primary key. Thus, through normalization, effective data storage can be achieved, eliminating redundancies and repeating groups.
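As a small worked example under hypothetical names: a single table holding a drug, its repeated reaction agents, and each agent's supplier would violate the rules above; splitting it as below reaches third normal form, since the supplier moves out with the agent it depends on.

```
using System.Data.SqlClient;

class NormalizeDemo
{
    static void Main()
    {
        string ddl =
            // 1NF: repeating agent columns become rows of a separate table.
            "CREATE TABLE Drugs (DrugID int PRIMARY KEY, DrugName varchar(50));" +
            // 3NF: Supplier depends on the agent alone, so it lives with the agent.
            "CREATE TABLE ReactionAgents (AgentID int PRIMARY KEY, " +
            "    AgentName varchar(50), Supplier varchar(50));" +
            // 2NF: the link table carries no data that depends on part of its key.
            "CREATE TABLE DrugAgents (DrugID int, AgentID int, " +
            "    PRIMARY KEY (DrugID, AgentID));";
        using (SqlConnection conn = new SqlConnection(
            "Server=localhost;Database=DrugDB;Integrated Security=SSPI;"))
        {
            conn.Open();
            new SqlCommand(ddl, conn).ExecuteNonQuery();
        }
    }
}
```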
All these make the end users solve their own problems and put computers to work:

Report generators
Graphics languages
Application generators
Specification languages
Very high level languages
Application languages
Client Server Technologies
MS.NET
The .NET Framework has two main components: the common language
runtime and the .NET Framework class library. The common language
runtime is the foundation of the .NET Framework. You can think of the
runtime as an agent that manages code at execution time, providing core
services such as memory management, thread management, and remoting,
while also enforcing strict type safety and other forms of code accuracy that
ensure security and robustness. In fact, the concept of code management is
a fundamental principle of the runtime. Code that targets the runtime is
known as managed code, while code that does not target the runtime is
known as unmanaged code. The class library, the other main component of
the .NET Framework, is a comprehensive, object-oriented collection of
reusable types that you can use to develop applications ranging from
traditional command-line or graphical user interface (GUI) applications to
applications based on the latest innovations provided by ASP.NET, such as
Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the
common language runtime into their processes and initiate the execution of
managed code, thereby creating a software environment that can exploit
both managed and unmanaged features. The .NET Framework not only
provides several runtime hosts, but also supports the development of third-
party runtime hosts.
Managed components can be granted varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a
managed component might or might not be able to perform file-access
operations, registry-access operations, or other sensitive functions, even if it
is being used in the same active application.
The runtime enforces code access security. For example, users can trust that
an executable embedded in a Web page can play an animation on screen or
sing a song, but cannot access their personal data, file system, or network.
The security features of the runtime thus enable legitimate Internet-deployed
software to be exceptionally feature rich.
While the runtime is designed for the software of the future, it also supports
software of today and yesterday. Interoperability between managed and
unmanaged code enables developers to continue to use necessary COM
components and DLLs.
The common type system defines how types are declared, used, and
managed in the runtime, and is also an important part of the runtime's
support for cross-language integration. The common type system performs
the following functions:
Defines rules that languages must follow, which helps ensure that objects
written in different languages can interact with each other.
Describes concepts and defines terms relating to the common type system.
Type Definitions
Type Members
Value Types
Classes
Delegates
Arrays
Interfaces
Pointers
Related Sections
Common Language Runtime
Describes the run-time environment that manages the execution of code and
provides application development services.
Cross-Language Interoperability
This section describes the common language runtime's built-in support for
language interoperability and explains the role that the CLS plays in enabling
guaranteed cross-language interoperability. CLS features and rules are
identified and CLS compliance is discussed.
In This Section
Language Interoperability
Explains the need for a set of features common to all languages and
identifies CLS rules and features.
Common Type System
Describes how types are declared, used, and managed by the common
language runtime.
The .NET Framework class library is a collection of reusable types that tightly
integrate with the common language runtime. The class library is object
oriented, providing types from which your own managed code can derive
functionality. This not only makes the .NET Framework types easy to use, but
also reduces the time associated with learning new features of the .NET
Framework. In addition, third-party components can integrate seamlessly
with classes in the .NET Framework.
For example, you can use the .NET Framework to develop the following types of applications and services:

Console applications
Windows GUI applications (Windows Forms)
ASP.NET applications
Windows services
The Windows Forms classes contained in the .NET Framework are designed
to be used for GUI development. You can easily create command windows,
buttons, menus, toolbars, and other screen elements with the flexibility
necessary to accommodate shifting business needs.
For example, the .NET Framework provides simple properties to adjust visual
attributes associated with forms. In some cases the underlying operating
system does not support changing these attributes directly, and in these
cases the .NET Framework automatically recreates the forms. This is one of
many ways in which the .NET Framework integrates the developer interface,
making coding simpler and more consistent.
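A minimal Windows Forms sketch of the kind of screen-element creation described here; the form title and button are illustrative only.

```
using System;
using System.Windows.Forms;

// A form with one button wired to a click handler.
class DemoForm : Form
{
    DemoForm()
    {
        Text = "Drug Management System";
        Button search = new Button();
        search.Text = "Search";
        search.Click += new EventHandler(OnSearch);
        Controls.Add(search);
    }

    void OnSearch(object sender, EventArgs e)
    {
        MessageBox.Show("Search clicked.");
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new DemoForm());
    }
}
```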
Choosing a Compiler
To obtain the benefits provided by the common language runtime, you must
use one or more language compilers that target the runtime.
Compiling translates your source code into MSIL and generates the required
metadata.
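For instance, a runtime-targeting compiler such as the C# compiler csc.exe turns a source file like the one below into MSIL plus metadata rather than native code; the JIT compiler then translates the MSIL at execution time.

```
// Compile with: csc Hello.cs
// The resulting assembly contains MSIL and metadata, not native code.
using System;

class Hello
{
    static void Main()
    {
        Console.WriteLine("Compiled to MSIL, JIT-compiled at run time.");
    }
}
```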
Before executing MSIL, the runtime performs a verification process that examines the MSIL and metadata to find out whether the code can be
determined to be type safe.
Assemblies Overview
An assembly forms a type boundary. Every type's identity includes the name of the
assembly in which it resides. A type called MyType loaded in the scope of one
assembly is not the same as a type called MyType loaded in the scope of
another assembly.
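This can be observed directly: a type's assembly-qualified name includes its assembly, as the one-line program below prints.

```
using System;

class TypeIdentityDemo
{
    static void Main()
    {
        // Prints something like:
        // "System.String, mscorlib, Version=..., Culture=..., PublicKeyToken=..."
        Console.WriteLine(typeof(string).AssemblyQualifiedName);
    }
}
```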
The assembly's manifest describes the version dependencies you specify for any dependent assemblies. For more information about versioning, see Assembly Versioning.
There are several ways to create assemblies. You can use development tools,
such as Visual Studio .NET, that you have used in the past to create .dll
or .exe files. You can use tools provided in the .NET Framework SDK to
create assemblies with modules created in other development environments.
You can also use common language runtime APIs, such as Reflection.Emit,
to create dynamic assemblies.
All your application code can use the features of the common language runtime and class library while gaining the performance and
scalability of the host server.
The following illustration shows a basic network schema with managed code
running in different server environments. Servers such as IIS and SQL Server
can perform standard operations while your application logic executes
through the managed code.
Server-side managed code
ASP.NET is the hosting environment that enables developers to use the .NET
Framework to target Web-based applications. However, ASP.NET is more
than just a runtime host; it is a complete architecture for developing Web
sites and Internet-distributed objects using managed code. Both Web Forms
and XML Web services use IIS and ASP.NET as the publishing mechanism for
applications, and both have a collection of supporting classes in the .NET
Framework.
If you have used earlier versions of ASP technology, you will immediately notice the improvements that ASP.NET and Web Forms offer. For example,
you can develop Web Forms pages in any language that supports the .NET
Framework. In addition, your code no longer needs to share the same file
with your HTTP text (although it can continue to do so if you prefer). Web
Forms pages execute in native machine language because, like any other
managed application, they take full advantage of the runtime. In contrast,
unmanaged ASP pages are always scripted and interpreted. ASP.NET pages
are faster, more functional, and easier to develop than unmanaged ASP
pages because they interact with the runtime like any managed application.
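A sketch of a compiled Web Forms code-behind class; the page class, label control, and matching .aspx file are hypothetical.

```
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Compiled like any managed code, unlike interpreted ASP script.
public class DrugListPage : Page
{
    protected Label StatusLabel;   // declared in the matching .aspx page

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        if (!IsPostBack)
        {
            StatusLabel.Text = "Page compiled and served at " + DateTime.Now;
        }
    }
}
```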
The .NET Framework also provides a collection of classes and tools to aid in
development and consumption of XML Web services applications. XML Web
services are built on standards such as SOAP (a remote procedure-call
protocol), XML (an extensible data format), and WSDL (the Web Services
Description Language). The .NET Framework is built on these standards to
promote interoperability with non-Microsoft solutions.
For example, the Web Services Description Language tool included with
the .NET Framework SDK can query an XML Web service published on the
Web, parse its WSDL description, and produce C# or Visual Basic source code
that your application can use to become a client of the XML Web service. The
source code can create classes derived from classes in the class library that
handle all the underlying communication using SOAP and XML parsing.
Although you can use the class library to consume XML Web services directly,
the Web Services Description Language tool and the other tools contained in
the SDK facilitate your development efforts with the .NET Framework.
If you develop and publish your own XML Web service, the .NET Framework
provides a set of classes that conform to all the underlying communication
standards, such as SOAP, WSDL, and XML. Using those classes enables you
to focus on the logic of your service, without concerning yourself with the
communications infrastructure required by distributed software development.
Finally, like Web Forms pages in the managed environment, your XML Web
service will run with the speed of native machine language using the scalable
communication of IIS.
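A sketch of such a service class (the service and method names are hypothetical); the WebService base class and the [WebMethod] attribute take care of the SOAP, WSDL, and XML plumbing described above.

```
using System.Web.Services;

public class DrugLookupService : WebService
{
    // Exposed over SOAP; the WSDL description is generated automatically.
    [WebMethod]
    public string GetDrugStatus(int drugId)
    {
        // A real implementation would query the trials database here.
        return "Drug " + drugId + " is under trial";
    }
}
```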
This section describes the programming essentials you need to build .NET
applications, from creating assemblies from your code to securing your
application. Many of the fundamentals covered in this section are used to
create any application using the .NET Framework. This section provides
conceptual information about key programming concepts, as well as code
samples and detailed explanations.
Accessing Data with ADO.NET
Describes the ADO.NET architecture and how to use the ADO.NET classes to
manage application data and interact with data sources including Microsoft
SQL Server, OLE DB data sources, and XML.
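A minimal sketch of the disconnected ADO.NET pattern against SQL Server; the connection string, table, and columns are hypothetical.

```
using System;
using System.Data;
using System.Data.SqlClient;

class DataSetDemo
{
    static void Main()
    {
        string connStr = "Server=localhost;Database=DrugDB;Integrated Security=SSPI;";
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT DrugID, DrugName FROM Drugs", connStr);
        DataSet ds = new DataSet();
        adapter.Fill(ds, "Drugs");   // disconnected, in-memory copy of the data
        foreach (DataRow row in ds.Tables["Drugs"].Rows)
        {
            Console.WriteLine("{0}: {1}", row["DrugID"], row["DrugName"]);
        }
    }
}
```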
Accessing the Internet

Shows how to use Internet access classes to implement both Web- and Internet-based applications.

Developing World-Ready Applications

Explains the extensive support the .NET Framework provides for developing international applications.
Drawing and Editing Images

Generating and Compiling Source Code Dynamically

Explains the .NET Framework SDK mechanism called the Code Document Object Model (CodeDOM) that enables the output of source code in multiple programming languages.
Overall Description:

Product Perspective:

Product Functions

The major functions that the product executes are divided into two categories:

1. Administrative Functions.
2. User Interface Functions.

User Interface Functions:

The general functions taken care of at the user level are these: the Drug Trial Investigators can view information regarding the various drugs that exist within the institute at any time; they can also view the drug-trial information on individuals; and the system helps the agents to study the outcomes of each trial.
Chapter 5

Design Document
The entire system is projected with a physical diagram, which specifies the actual storage parameters that are physically necessary for any database to be stored on the disk. The overall system's existential idea is derived from this diagram.
[Context diagram: the Drug Trials Management System takes in Employee Information (Employee Information Module), Drug Information and Drug Trials Information (Drug Trials Information Module), and Individual Information (Trial Participation Module), together with a Security Module, and produces reports on the employee information, the drug trials, the trial participation, and the trials history.]
[DFD, Administration: check the Designation master, verify the data, and insert the investigators (1.2).]
[DFD, inserting new employee information: verify the data (2.1), check for the department in the Department master and for the designation in the Designation master (2.2), and commit (2.3). A parallel flow verifies the data against the Drug master and the Reaction Agent master before committing.]
[DFD, inserting a new individual's condition status: verify the data (2.1), check for the individual in the Individual master and for the condition codes in the Clinical Condition master (2.2), and commit (2.3). DFD, inserting a new record of the drug-trials master: verify the data (2.1), check for the drugs in the Drug master, and commit (2.2).]
[DFD, registering individuals for trials: validate the individual against the Individual master and record the initial conditions (2.1), validate the concerned form (2.2), generate the drug-trial ID against the Drug master (2.3), record the conditions and validate the trial participations (2.4), then validate the drug ID and store the record in the database (2.5).]
ER-Diagrams

Physical Diagram

ER Diagram

Unified Modeling Language Diagrams

Here the structural and behavioral aspects of the environment in which the system is to be implemented are represented.
UML design modeling focuses on the behavioral modeling, implementation modeling, and environmental model views.

Use cases model the system from the end user's point of view, with the following objectives:
[Use case diagram: the Administrator manages Employees information, Departments & Designations information, Drug registration & usage information, Allergies and anti-allergic drugs information, Individuals information, Drug Trials information, and Security information.]
[Use case diagram: the Administrator raises a request for new employee registration, which uses collecting the required information and storing it; raises a request for new individuals' registration, which uses cross-checking authenticated parameters such as the signed consent form, checking the cross-references of the consistent information required, and storing the information; and raises a request for login, which uses authenticating and validating the login name and password.]
[Use case diagram: the Investigator logs in, which uses authenticating the login name and password and providing accessibility, and then accesses Drug trials information, Clinical conditions information, Individual clinical condition information, Drug reaction agents information, and Individual trial information & history information.]
Class Collaboration Diagrams

Employees Master

Attributes: emp-no : Number; emp-name : Varchar2; emp-dob : Date; emp-addr : Varchar2; emp-phone : Varchar2; emp-email : Varchar2; emp-gender : Char; emp-doj : Date; emp-dept-no : Number; emp-desig-id : Number; emp-mgr-no : Number.

Operations: Insert(); Delete(); Update(); Search(); Validate-deptno(); Validate-desig-id(); Validate-mgr-no().
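Read as C#, the Employees Master class above might look like the sketch below; the field types are partly assumed, since the diagram extraction scrambled them.

```
using System;

// Sketch of the Employees Master entity from the class diagram.
class EmployeesMaster
{
    public int EmpNo;
    public string EmpName;
    public DateTime EmpDob;
    public string EmpAddr;
    public string EmpPhone;
    public string EmpEmail;
    public char EmpGender;
    public DateTime EmpDoj;
    public int EmpDeptNo;
    public int EmpDesigId;
    public int EmpMgrNo;

    // Operations named in the diagram; bodies are omitted in this sketch.
    public void Insert() { }
    public void Delete() { }
    public void Update() { }
    public void Search() { }
    public bool ValidateDeptNo() { return EmpDeptNo > 0; }
    public bool ValidateDesigId() { return EmpDesigId > 0; }
    public bool ValidateMgrNo() { return EmpMgrNo > 0; }
}
```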
Sequence Diagram

[Sequence diagram: validate the consent form, validate the drug ID, generate the drug-trial ID, and validate the trial-participation ID.]
Chapter 6

Coding

Program Design Language
Chapter 7

Testing & Debugging Strategies
Testing

Psychology of Testing

The aim of testing is often taken to be demonstrating that a program works by showing that it has no errors. However, the basic purpose of the testing phase is to detect the errors that may be present in the program. Hence one should not start testing with the intent of showing that a program works; the intent should rather be to show that a program does not work. Testing is the process of executing a program with the intent of finding errors.
Testing Objectives

The main objective of testing is to uncover a host of errors, systematically and with minimum effort and time. Stated formally:

Testing is a process of executing a program with the intent of finding an error.

A successful test is one that uncovers an as-yet-undiscovered error.

A good test case is one that has a high probability of finding an error, if it exists.

If the tests are inadequate, possibly present errors go undetected.

If no errors are found, the software more or less conforms to the quality and reliability standards.
Levels of Testing

In order to uncover the errors present in different phases, we have the concept of levels of testing. The basic levels of testing, and the phase each validates, are:

Client Needs - Acceptance Testing
Requirements - System Testing
Design - Integration Testing
Code - Unit Testing
System Testing

The philosophy behind testing is to find errors. Test cases are devised with this in mind. A strategy employed for system testing is code testing.

Code Testing:

This strategy examines the logic of the program. To follow this method we developed test data that resulted in executing every instruction in the program and module, i.e., every path is tested. Systems are not designed as entire systems, nor are they tested as single systems. To ensure that the coding is perfect, two types of testing are performed on all systems.
Types Of Testing
Unit Testing
Link Testing
Unit Testing
Unit testing focuses verification effort on the smallest unit of software i.e.
the module. Using the detailed design and the process specifications testing
is done to uncover errors within the boundary of the module. All modules
must be successful in the unit test before integration testing begins.
In this project each service can be thought of as a module. There are several modules, such as Login, HWAdmin, MasterAdmin, Normal User, and PManager. Each module has been tested by giving different sets of inputs, both while developing the module and after finishing its development, so that each module works without any error. The inputs are validated when they are accepted from the user.
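A minimal, framework-free sketch of such a unit test, using a hypothetical login-validation module and different sets of inputs:

```
using System;

// The module under test: simple input validation, as described above.
class LoginValidator
{
    public static bool IsValid(string user, string password)
    {
        return user != null && user.Length > 0
            && password != null && password.Length >= 6;
    }
}

class LoginValidatorTests
{
    static void Check(bool condition, string name)
    {
        Console.WriteLine((condition ? "PASS " : "FAIL ") + name);
    }

    static void Main()
    {
        Check(LoginValidator.IsValid("admin", "secret1"), "valid input accepted");
        Check(!LoginValidator.IsValid("", "secret1"), "empty user rejected");
        Check(!LoginValidator.IsValid("admin", "123"), "short password rejected");
    }
}
```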
Link Testing

Link testing does not test the software itself, but rather the integration of each module into the system. The primary concern is the compatibility of each module. The programmer tests where modules are designed with different parameters: length, type, etc.
Integration Testing
After the unit testing we have to perform integration testing. The goal here
is to see if modules can be integrated properly, the emphasis being on
testing interfaces between modules. This testing activity can be considered
as testing the design and hence the emphasis on testing module
interactions.
In this project, integrating all the modules forms the main system. When integrating all the modules, I have checked whether the integration affects the working of any of the services, by giving different combinations of inputs with which the two services ran perfectly before integration.
System Testing

Here the entire software system is tested. The reference document for this process is the requirements document, and the goal is to see whether the software meets its requirements.

Here the entire Drug Management System has been tested against the requirements of the project, and it has been checked whether all the requirements of the project have been satisfied or not.
Acceptance Testing

White Box Testing

This is a unit testing method where a unit is taken at a time and tested thoroughly at a statement level to find the maximum possible errors. I tested step-wise every piece of code, taking care that every statement in the code is executed at least once. White box testing is also called glass box testing.

I have generated a list of test cases with sample data, which is used to check all possible combinations of execution paths through the code at every module level.

1) Test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing.
Chapter 8

User Manual

Installation
Chapter 9

Conclusion

The entire project has been developed and deployed as per the requirements; the features that remain can be developed in the near future. The system at present does not take care of all the standards, which are critical and are to be initiated in the first phase of the application in the coming days. The system needs more elaborate technical management for SQL Server.