Original DBMS Lab

The document outlines a series of experiments focused on database design and management, including problem formulation for ER diagrams, conceptual design, normalization, SQL table creation, and advanced querying techniques. Each experiment includes an aim, theoretical background, required components, procedures, results, and conclusions, emphasizing the importance of structured data management and integrity. The experiments cover practical applications such as creating triggers, using DML commands, and implementing aggregate functions.


Experiment No. 1

Title: Case Study and Problem Formulation for ER Diagram


Aim:
To select a real-world scenario and formulate a clear, detailed problem statement that
can be implemented as an ER Diagram.

Theory:
Before creating any database, we must understand what information we need to store and
how it will be used. This is done through a problem statement.
For example, a Library Management System keeps track of books, students, issuing and
returning of books, penalties, etc.
A good problem statement explains:

• What the system is for
• Who will use it
• What entities and data fields are needed
• What operations will be done

This step ensures that the database design will be relevant and covers all necessary
information.

Components Required:

• Notebook
• Computer with word processing tool

Procedure:

1. Identify a suitable domain (e.g., Library, Hospital, College).
2. Note down possible users and processes.
3. List main activities: issuing books, keeping student records, etc.
4. Write a clear, detailed statement covering scope, users, and operations.

Result:
A detailed problem statement for the Library Management System was created which
clearly defines its purpose, required data, and operations.

Conclusion:
Understanding and defining the problem in detail helps build a correct and complete
database structure. Without a clear problem statement, database design may miss key
details.

Experiment No. 2

Title: Conceptual Designing using ER Diagrams


Aim:
To identify entities, attributes, keys, and relationships for the selected system and
represent them in an ER Diagram.

Theory:
The Entity-Relationship (ER) Model is a popular high-level data model used in the first
stage of database design.
Key points:

• Entity: Object or concept about which data is stored (e.g., Book, Student).
• Attribute: Property of an entity (e.g., Book_ID, Student_Name).
• Keys: Uniquely identify records (Primary Key).
• Relationships: Show how entities are connected (Student issues Book).
• Cardinality: Defines how many instances participate in the relationship.

An ER Diagram helps designers visualize the entire data structure.

Components Required:

• Notebook
• Paper or ER Diagram software

Procedure:

1. Read the problem statement carefully.
2. List all possible entities and attributes.
3. Identify the primary key for each entity.
4. Determine relationships between entities (one-to-one, one-to-many).
5. Draw the ER Diagram with symbols for entities, attributes, and relationships.

Result:
A complete ER Diagram for the Library System was created, showing Students, Books,
Librarians, and transactions with attributes and clear relationships.

Conclusion:
Conceptual designing helps visualize and plan how data will be structured and connected.
It forms the base for physical database design.

Experiment No. 3

Title: Conversion of ER Model to Relational Model


Aim:
To convert the ER Diagram into a relational schema which can be implemented in a
database.

Theory:
The Relational Model represents data as tables (relations).

• Each entity becomes a table.
• Attributes become columns.
• Relationships use Foreign Keys to connect tables.

A relational model must remove redundant data and show clear connections between
tables. It is the bridge between design and implementation.
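
For illustration, a minimal sketch of how the Student-issues-Book relationship might map to tables, assuming MySQL-style syntax (all table and column names here are illustrative):

CREATE TABLE Student (
    Student_ID   INT PRIMARY KEY,
    Student_Name VARCHAR(100)
);

CREATE TABLE Book (
    Book_ID INT PRIMARY KEY,
    Title   VARCHAR(200)
);

-- A many-to-many relationship becomes its own table, holding
-- foreign keys that reference both participating entities.
CREATE TABLE Issue (
    Issue_ID   INT PRIMARY KEY,
    Student_ID INT,
    Book_ID    INT,
    Issue_Date DATE,
    FOREIGN KEY (Student_ID) REFERENCES Student(Student_ID),
    FOREIGN KEY (Book_ID)    REFERENCES Book(Book_ID)
);

Keeping the relationship in its own table avoids repeating student or book details for every transaction.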

Components Required:

• Notebook
• Computer

Procedure:

1. Take each entity in the ER Diagram and make a table for it.
2. Define all attributes as columns.
3. Identify primary keys.
4. Use Foreign Keys for relationships.
5. Write schema with table names, columns, and keys.

Result:
The ER Diagram was successfully converted into relational tables with clear primary and
foreign key constraints.

Conclusion:
The relational model shows how the conceptual design will actually be stored and
managed in a real DBMS.

Experiment No. 4

Title: Normalization up to Third Normal Form


Aim:
To remove data redundancy and anomalies by normalizing the relational schema up to
3NF.

Theory:
Normalization is the process of organizing data to reduce redundancy and dependency.

• 1NF: Remove repeating groups so that every attribute holds atomic values.
• 2NF: Remove partial dependencies; every non-key attribute must depend on the whole primary key.
• 3NF: Remove transitive dependencies; non-key attributes must not depend on other non-key attributes.

A well-normalized table is easier to maintain and less error-prone.
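
As a sketch of the 3NF step (the tables and columns are assumptions for illustration, not the lab's actual schema): if a student table stores both a branch code and the branch name that the code determines, the name depends on the key only transitively, so 3NF moves it to its own table.

-- Before 3NF: Branch_Name depends on Branch_Code, which in turn
-- depends on the key Student_ID (a transitive dependency).
CREATE TABLE Student_2NF (
    Student_ID  INT PRIMARY KEY,
    Name        VARCHAR(100),
    Branch_Code INT,
    Branch_Name VARCHAR(100)
);

-- After 3NF: the transitively dependent attribute moves out.
CREATE TABLE Branch (
    Branch_Code INT PRIMARY KEY,
    Branch_Name VARCHAR(100)
);

CREATE TABLE Student_3NF (
    Student_ID  INT PRIMARY KEY,
    Name        VARCHAR(100),
    Branch_Code INT,
    FOREIGN KEY (Branch_Code) REFERENCES Branch(Branch_Code)
);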

Components Required:

• Notebook
• Computer

Procedure:

1. Take relational tables and check for repeating columns (apply 1NF).
2. Check for partial dependency (apply 2NF).
3. Check for transitive dependency (apply 3NF).
4. Rewrite tables for each stage.

Result:
Tables were normalized up to 3NF, removing redundant columns and ensuring
dependencies were proper.

Conclusion:
Normalization helped organize the database in a logical way, ensuring minimum
redundancy and maximum consistency.

Experiment No. 5

Title: Creating Tables using SQL


Aim:
To create tables with appropriate constraints using SQL DDL commands.

Theory:
SQL provides DDL (Data Definition Language) commands like CREATE TABLE to define
tables, columns, and constraints. Constraints like PRIMARY KEY, FOREIGN KEY, UNIQUE
ensure data integrity.
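
A minimal sketch of such definitions, assuming MySQL syntax (the library tables and their columns are illustrative):

CREATE TABLE Book (
    Book_ID INT PRIMARY KEY,          -- uniquely identifies each row
    Title   VARCHAR(200) NOT NULL,    -- a title must always be supplied
    ISBN    VARCHAR(20)  UNIQUE,      -- no two books may share an ISBN
    Copies  INT DEFAULT 1
);

CREATE TABLE Issue (
    Issue_ID INT PRIMARY KEY,
    Book_ID  INT,
    -- Referential integrity: every issued book must exist in Book.
    FOREIGN KEY (Book_ID) REFERENCES Book(Book_ID)
);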

Components Required:

• Computer with SQL Tool (MySQL/Oracle)

Procedure:

1. Open the SQL tool and connect to the database.
2. Write CREATE TABLE for each entity.
3. Define columns with data types.
4. Add PRIMARY and FOREIGN KEY constraints.
5. Execute commands and verify table structure.

Result:
Tables were created in SQL with valid constraints and relationships.

Conclusion:
The experiment showed practical implementation of table design using SQL commands
and constraints.

Experiment No. 6

Title: Practicing DML Commands (INSERT, SELECT, UPDATE, DELETE)

Aim:
To practice Data Manipulation Language commands for adding, retrieving, modifying, and
deleting data from database tables.

Theory:
DML (Data Manipulation Language) is used to handle the actual data stored in tables.

• INSERT adds new records.
• SELECT retrieves specific records.
• UPDATE modifies existing records.
• DELETE removes unwanted records.

These commands help maintain and manage the data effectively. Mastery of DML ensures the database remains current and accurate.
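
A short sketch of the four commands in sequence, reusing the illustrative Book table from Experiment 5:

INSERT INTO Book (Book_ID, Title, Copies)
VALUES (1, 'Database System Concepts', 3);        -- add a record

SELECT Title, Copies FROM Book WHERE Copies > 0;  -- retrieve records

UPDATE Book SET Copies = Copies - 1
WHERE Book_ID = 1;                                -- modify a record

DELETE FROM Book WHERE Book_ID = 1;               -- remove a record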

Components Required:

• Computer with SQL tool (MySQL/Oracle)

Procedure:

1. Open the SQL editor and connect to the database.
2. Use INSERT to add multiple records into tables.
3. Run SELECT queries to display data.
4. Modify some records using UPDATE with conditions.
5. Use DELETE to remove unwanted records.
6. Use SELECT again to verify all changes.

Result:
All DML commands were successfully practiced and verified. Data was added, viewed,
modified, and deleted as intended.

Conclusion:
The experiment showed how real-time data is handled practically using DML. It reinforced
the importance of maintaining accurate data in the database.

Experiment No. 7

Title: Advanced Queries (ANY, ALL, IN, EXISTS, NOT EXISTS, UNION, INTERSECT, Constraints)

Aim:
To practice advanced SQL query operators and set operations to handle complex data
scenarios.

Theory:
Advanced SQL queries help deal with multiple conditions and large data sets:

• ANY and ALL compare a value against each value returned by a subquery.
• IN checks if a value exists in a list.
• EXISTS/NOT EXISTS test subqueries for row existence.
• UNION and INTERSECT combine the results of two SELECT statements.
• Constraints like CHECK, NOT NULL, and UNIQUE enforce data rules.

These features make SQL more powerful for real-world applications.
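
A few examples, reusing the illustrative library tables sketched in earlier experiments (the Fine table below is an extra assumption):

-- ALL: books with more copies than every 'Intro...' title.
SELECT Title FROM Book
WHERE Copies > ALL (SELECT Copies FROM Book WHERE Title LIKE 'Intro%');

-- IN: membership test against a fixed list of values.
SELECT Student_Name FROM Student WHERE Student_ID IN (1, 2, 3);

-- EXISTS: students who have issued at least one book.
SELECT Student_Name FROM Student s
WHERE EXISTS (SELECT 1 FROM Issue i WHERE i.Student_ID = s.Student_ID);

-- UNION: student IDs appearing in either table, duplicates removed.
SELECT Student_ID FROM Issue
UNION
SELECT Student_ID FROM Fine;

-- CHECK constraint: reject negative copy counts (enforced in MySQL 8.0.16+).
ALTER TABLE Book ADD CONSTRAINT chk_copies CHECK (Copies >= 0);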

Components Required:

• Computer with SQL tool

Procedure:

1. Create sample tables with data.
2. Write queries using ANY and ALL with the WHERE clause.
3. Use IN to filter data.
4. Use EXISTS and NOT EXISTS with subqueries.
5. Combine multiple queries with UNION and INTERSECT.
6. Apply constraints like CHECK and UNIQUE to maintain data validity.

Result:
Complex queries and constraints were executed successfully, producing the expected
results.

Conclusion:
This experiment developed skills to write more complex and efficient SQL queries to
handle different practical requirements.

Experiment No. 8

Title: Subqueries and Joins

Aim:
To use nested and correlated subqueries and perform different joins to retrieve data from
multiple tables.

Theory:
Subqueries are queries within another query, allowing step-by-step data fetching.

• Nested Subquery: Runs independently inside the main query.
• Correlated Subquery: Depends on the main query for each row.

Joins merge rows from two or more tables:

• Inner Join: Retrieves matching rows.
• Outer Join (Left, Right): Includes unmatched rows too.
• Equi Join: Uses an equality condition to join tables.
These techniques help work with complex data stored in multiple tables.
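
Illustrative queries against the assumed Student, Book, and Issue tables from earlier experiments:

-- Nested subquery: the inner SELECT runs once, independently.
SELECT Student_Name FROM Student
WHERE Student_ID IN (SELECT Student_ID FROM Issue);

-- Correlated subquery: re-evaluated for every row of the outer query;
-- here it finds books issued more than twice.
SELECT Title FROM Book b
WHERE 2 < (SELECT COUNT(*) FROM Issue i WHERE i.Book_ID = b.Book_ID);

-- INNER JOIN (an equi join, since the condition uses equality).
SELECT s.Student_Name, b.Title
FROM Issue i
JOIN Student s ON s.Student_ID = i.Student_ID
JOIN Book b    ON b.Book_ID    = i.Book_ID;

-- LEFT OUTER JOIN: every student, with NULLs where no issue exists.
SELECT s.Student_Name, i.Issue_Date
FROM Student s
LEFT JOIN Issue i ON i.Student_ID = s.Student_ID;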

Components Required:

• Computer with SQL tool

Procedure:

1. Create related tables and insert sample data.
2. Write nested subqueries to filter data.
3. Write correlated subqueries to fetch row-wise results.
4. Perform INNER JOIN to merge matching rows.
5. Perform LEFT OUTER and RIGHT OUTER joins.
6. Use EQUI JOIN to join tables based on equality.

Result:
Subqueries and joins were implemented correctly and multiple tables were connected
effectively.

Conclusion:
The experiment enhanced understanding of multi-table operations and efficient data
extraction using joins and subqueries.

Experiment No. 9

Title: Aggregation Functions, GROUP BY, HAVING, and Views

Aim:
To practice aggregate functions (COUNT, SUM, AVG, MAX, MIN), use GROUP BY and HAVING
clauses, and create/drop views.

Theory:
Aggregate functions calculate summary values:

• COUNT counts rows, SUM totals values, AVG finds the mean, and MAX/MIN find the highest/lowest values.

GROUP BY groups rows with the same values, and HAVING filters the grouped results.
Views are virtual tables created by a query. They display data without storing it separately, improving security and convenience.
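
A brief sketch using the assumed Issue table (the view name is illustrative):

-- How many books has each student issued? Keep only frequent borrowers.
SELECT Student_ID, COUNT(*) AS Books_Issued
FROM Issue
GROUP BY Student_ID
HAVING COUNT(*) > 5;

-- A view stores the query definition, not the data itself.
CREATE VIEW Frequent_Borrowers AS
SELECT Student_ID, COUNT(*) AS Books_Issued
FROM Issue
GROUP BY Student_ID;

DROP VIEW Frequent_Borrowers;   -- remove the view after testing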

Components Required:

• Computer with SQL tool

Procedure:

1. Create tables with numeric data.
2. Write queries using COUNT, SUM, AVG, MAX, MIN.
3. Use GROUP BY to group results by category.
4. Use HAVING to filter groups by conditions.
5. Create a view for selected data.
6. Drop the view after testing.

Result:
Aggregate functions and views were implemented and verified successfully.

Conclusion:
The experiment showed how to summarize data and manage access using views,
improving database flexibility.

Experiment No. 10

Title: Triggers for Insert, Delete, and Update

Aim:
To create and test triggers that perform automatic actions on insert, delete, and update
events in tables.

Theory:
Triggers are special programs that automatically execute when specific database
operations occur. They help automate tasks like logging changes, enforcing security, or
auditing records.
For example, when new data is inserted, a trigger can create a log entry. When a record is
deleted, it can archive the data before deletion.
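
A minimal sketch of an insert trigger, assuming MySQL syntax (the Issue_Log table and trigger name are assumptions for illustration):

-- Assumed log table for the trigger to write into.
CREATE TABLE Issue_Log (
    Log_ID   INT AUTO_INCREMENT PRIMARY KEY,
    Book_ID  INT,
    Action   VARCHAR(10),
    Log_Time TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER trg_issue_insert
AFTER INSERT ON Issue
FOR EACH ROW
BEGIN
    -- NEW refers to the row that was just inserted into Issue.
    INSERT INTO Issue_Log (Book_ID, Action)
    VALUES (NEW.Book_ID, 'INSERT');
END //
DELIMITER ;

Analogous AFTER DELETE and AFTER UPDATE triggers would read the OLD row values, for example to archive a record before it disappears.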

Components Required:

• Computer with SQL tool

Procedure:

1. Write CREATE TRIGGER statements for insert, delete, and update events.
2. Insert sample data and observe the trigger action.
3. Delete data and check if the trigger action works.
4. Update data and verify if logs are created.
5. Test all actions to ensure triggers work properly.

Result:
Triggers were created and tested. Automatic logging and actions were performed
successfully on insert, delete, and update.

Conclusion:
The experiment showed how triggers help automate database activities, improve security, and maintain data integrity.
