DBMS LAB MANUAL
Prepared By-
Roll No
Name
Sr. No.   Particulars
1         Scheme
2         List of Experiments
3         Experiments
2. LIST OF EXPERIMENTS

Sr. No.   Particulars                                          Date of Experiment   Faculty Signature

1   Draw database & types of database using Smart Draw Tool.
2   Draw 2 Tier - 3 Tier Architecture using Smart Draw Tool.
3   Draw Schema Architecture with Smart Draw Tool.
4   Draw DBMS detailed architecture with Smart Draw Tool.
5   Draw Data Independence architecture using Smart Draw Tool.
6   Draw Data Models Category using Smart Draw Tool.
7   Draw Specialization, Generalization & Aggregation with Smart Draw Tool.
8   Draw ER Diagram of College Database, Railway Reservation Database, Hospital Management System and Bank Database with Smart Draw Tool.
Experiment - One
(a) Objective: Draw database & types of database using Smart Draw tool.
(b) Description:
Centralised - the database is physically stored at a single location, and users typically
use an Internet connection to access it. Banks (such as ANZ) tend to use centralised
databases.
Distributed - the database is spread across many locations, typically where a national
or international company's customers regularly interact with a local branch. For
example, Google uses Bigtable, a distributed storage system, since searches tend to
come from users in a particular region of the world.
Experiment - Two
(a) Objective: Draw 2 Tier- 3 Tier Architecture with Smart Draw tool.
(b) Description:
Two-Tier Architecture:
The two-tier architecture is a client-server architecture. Communication takes place
directly between the client and the server; there is no intermediary between them. The
two tiers are:
1. Database (Data tier)
2. Client Application (Client tier)
In the client application, the client writes the program for saving records in SQL Server,
thereby saving the data in the database.
Advantages:
1. Understanding and maintenance are easier.
Disadvantages:
1. Performance will be reduced when there are more users.
Three-Tier Architecture:
Client layer: Here we design the form using textboxes, labels, etc.
Business layer: The intermediate layer, which holds the functions used by the client
layer and makes communication between the client and data layers faster. It provides
the business process logic and the data access.
Data layer: It contains the database and the operations that insert, update, delete and
retrieve the data.
Advantages
1. Easy to modify without affecting other modules.
2. Fast communication.
3. Performance is good in a three-tier architecture.
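The three-layer separation above can be sketched in Python. This is an illustrative sketch only: the class names, methods, and the in-memory dictionary standing in for the database are assumptions, not part of any prescribed framework.

```python
# Data tier: owns storage and the low-level read/write operations.
class DataLayer:
    def __init__(self):
        self._rows = {}                      # stands in for a database table

    def save(self, key, record):
        self._rows[key] = record

    def load(self, key):
        return self._rows.get(key)

# Business tier: validation and process logic between client and data tiers.
class BusinessLayer:
    def __init__(self, data):
        self._data = data

    def register_student(self, roll_no, name):
        if not name:                         # a sample business rule
            raise ValueError("name is required")
        self._data.save(roll_no, {"name": name})

    def lookup(self, roll_no):
        return self._data.load(roll_no)

# Client tier: only formats input/output; it never touches storage directly.
def client_form(business, roll_no, name):
    business.register_student(roll_no, name)
    return "Saved: " + business.lookup(roll_no)["name"]
```

Because each layer talks only to the one below it, the data tier could later be swapped for a real DBMS without changing the client code.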
(c) Result (Snapshot)
Experiment - Three
(a) Objective: Draw Schema Architecture with Smart Draw tool.
(b) Description:
Data are actually stored as bits, or numbers and strings, but it is difficult to work with data at
this level. It is necessary to view data at different levels of abstraction.
Schema:
A description of the data at some level. Each level has its own schema. The three levels are:
physical,
conceptual, and
external.
The physical schema describes details of how data is stored: files, indices, etc. on the
random access disk system. It also typically describes the record layout of files and type of
files (hash, b-tree, flat).
In the relational model, the conceptual schema presents data as a set of tables.
In the relational model, the external schema also presents data as a set of relations. An
external schema specifies a view of the data in terms of the conceptual level, tailored to
the needs of a particular category of users. Portions of the stored data should not be seen
by some users; hiding them behind an external schema implements a level of security and
simplifies the view for those users.
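The external level can be demonstrated with a view. The sketch below uses Python's built-in sqlite3 module (SQLite stands in for a full DBMS here, and the employee table contents are assumed): the table is the conceptual schema, and a view hiding the salary column acts as an external schema for ordinary users.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Conceptual schema: the full employee relation.
conn.execute("CREATE TABLE employee (empno INTEGER, ename TEXT, salary REAL)")
conn.execute("INSERT INTO employee VALUES (1, 'Asha', 50000), (2, 'Ravi', 60000)")

# External schema: a tailored view that omits data these users must not see.
conn.execute("CREATE VIEW emp_public AS SELECT empno, ename FROM employee")

public_rows = conn.execute("SELECT * FROM emp_public").fetchall()
```

Users querying emp_public never see the salary column, even though it exists at the conceptual level.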
Experiment - Four
(a) Objective: Draw DBMS detailed architecture with Smart Draw tool
(b) Description:
A database system is partitioned into modules that deal with each of the responsibilities of
the overall system.
The functional components of a database system can be broadly divided into the storage
manager and the query processor components.
Storage manager: The storage manager is important because databases typically require
a large amount of storage space. Corporate databases range in size from hundreds of
gigabytes to, for the largest databases, terabytes of data. Since the main memory of a
computer cannot hold this much information, the information is stored on disks. Data are
moved between disk storage and main memory as needed. Since the movement of data to
and from disk is slow relative to the speed of the central processing unit, it is imperative
that the database system structure the data so as to minimize the need to move data
between disk and main memory.
A storage manager is a program module that provides the interface between the low level data
stored in the database and the application programs and queries submitted to the system. The
storage manager is responsible for the interaction with the file manager. The raw data are
stored on the disk using the file system, which is usually provided by a conventional
operating system. The storage manager translates the various DML statements into
low-level file-system commands. Thus, the storage manager is responsible for storing,
retrieving, and updating data in the database.
i) Authorization and integrity manager: Which tests for the satisfaction of integrity
constraints and checks the authority of users to access data.
ii) Transaction manager: Which ensures that the database remains in a consistent (correct)
state despite system failures, and that concurrent transaction executions proceed without
conflicting.
iii) File manager: Which manages the allocation of space on disk- storage and the data
structures used to represent information stored on disk.
iv) Buffer manager: Which is responsible for fetching data from disk storage into main
memory and deciding what data to cache in main memory. The buffer manager is a critical
part of the database system, since it enables the database to handle data sizes that are much
larger than the size of main memory.
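The buffer manager's caching role can be illustrated with a toy sketch. This is an assumption-laden simplification: the "disk" is a dictionary, pages are strings, and a least-recently-used (LRU) policy is assumed, one of several policies real buffer managers use.

```python
from collections import OrderedDict

class BufferManager:
    """Toy buffer manager: caches a fixed number of pages, evicts by LRU."""

    def __init__(self, disk, capacity):
        self.disk = disk                  # simulated disk: page_id -> contents
        self.capacity = capacity          # max pages held in main memory
        self.buffer = OrderedDict()       # page_id -> page, in LRU order
        self.disk_reads = 0

    def get_page(self, page_id):
        if page_id in self.buffer:
            self.buffer.move_to_end(page_id)      # hit: refresh LRU position
        else:
            self.disk_reads += 1                  # miss: slow disk access
            if len(self.buffer) >= self.capacity:
                self.buffer.popitem(last=False)   # evict least recently used
            self.buffer[page_id] = self.disk[page_id]
        return self.buffer[page_id]
```

Repeated accesses to a cached page cost no disk reads, which is exactly why the buffer manager lets the database handle data far larger than main memory.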
The storage manager implements several data structures as part of the physical system
implementation:
i) Data files, which store the database itself.
ii) Data dictionary, which stores metadata about the structure of the database, in particular
the schema of the database.
iii) Indices, which provide fast access to data items that hold particular values.
The query processor: The query processor helps the database system simplify and facilitate
access to data. The job of the database system is to translate updates and queries written in a
nonprocedural language, at the logical level, into an efficient sequence of operations at the
physical level.
Experiment - Five
(a) Objective: Draw Data Independence architecture with Smart Draw tool
(b) Description:
Data independence is the type of data transparency that matters for a centralized DBMS. It
refers to the immunity of user applications to changes made in the definition and organization
of data. Data independence can be explained as follows: Each higher level of the data
architecture is immune to changes of the next lower level of the architecture.
The ability to modify schema definition in one level without affecting schema definition in
the next higher level is called data independence. There are two levels of data independence,
they are Physical data independence and Logical data independence.
1. Physical data independence is the ability to modify the physical schema without
causing application programs to be rewritten. Modifications at the physical level are
occasionally necessary to improve performance. It means we change the physical
storage/level without affecting the conceptual or external view of the data. The new
changes are absorbed by mapping techniques.
2. Logical data independence is the ability to modify the logical schema without causing
application programs to be rewritten. Modifications at the logical level are necessary
whenever the logical structure of the database is altered (for example, when money-
market accounts are added to a banking system). Logical data independence means that
if we add some new columns to a table, or remove some columns from it, the user views
and programs should not have to change. For example, consider two users A and B, both
selecting empno and ename. If user B adds a new column salary to the table, it does not
affect the external view of user A, although the internal view of the database has changed
for both users. User A can now also choose to print the salary. In other words, when the
underlying schema changes, programs that use a view of it need not be changed.
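The empno/ename example above can be replayed with Python's sqlite3 module (SQLite and the sample rows are assumptions for illustration): user A's view is defined first, user B then adds a salary column, and A's view still works unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")
conn.execute("INSERT INTO emp VALUES (1, 'Asha')")

# User A's external view, defined before the schema change.
conn.execute("CREATE VIEW user_a AS SELECT empno, ename FROM emp")

# Logical-level change: user B adds a salary column to the table.
conn.execute("ALTER TABLE emp ADD COLUMN salary REAL")
conn.execute("UPDATE emp SET salary = 50000 WHERE empno = 1")

# User A's view (and any program written against it) is unaffected.
view_rows = conn.execute("SELECT * FROM user_a").fetchall()
```

This is logical data independence in miniature: the mapping from the view to the changed table absorbs the modification.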
Experiment - Six
(a) Objective: Draw Data Models Category using Smart Draw tool
(b) Description:
A database model is a type of data model that determines the logical structure of a database
and fundamentally determines in which manner data can be stored, organized, and
manipulated. The most popular example of a database model is the relational model, which
uses a table-based format. Common logical data models for databases include:
Hierarchical model
A hierarchical database model is a data model in which the data is organized into a tree-like
structure. The data is stored as records which are connected to one another through links. A
record is a collection of fields, with each field containing only one value. The entity type of a
record defines which fields the record contains.
Network model
The network model is a database model conceived as a flexible way of representing objects
and their relationships. Its distinguishing feature is that the schema, viewed as a graph in
which object types are nodes and relationship types are arcs, is not restricted to being a
hierarchy or lattice.
Relational model
The relational model for database management is a database model based on first-order
predicate logic, first formulated and proposed in 1969 by Edgar F. Codd. In the relational
model of a database, all data is represented in terms of tuples, grouped into relations. A
database organized in terms of the relational model is a relational database.
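The contrast between the hierarchical and relational models above can be sketched with the same sample data in both forms. The college/department/student data here is an assumption for illustration.

```python
# Hierarchical model: records linked into a tree; each child has one parent.
college_tree = {
    "name": "City College",
    "departments": [
        {"name": "CS", "students": [{"rollno": 1, "sname": "Asha"}]},
        {"name": "EE", "students": [{"rollno": 2, "sname": "Ravi"}]},
    ],
}

# Relational model: the same facts as flat tuples grouped into relations,
# linked by shared values (the department name) rather than parent-child links.
department = [("CS", "City College"), ("EE", "City College")]
student    = [(1, "Asha", "CS"), (2, "Ravi", "EE")]

def students_of(deptname):
    """Value-based lookup, the way a relational query relates the tuples."""
    return [s for s in student if s[2] == deptname]
```

In the tree, reaching a student requires navigating from the root through its department; in the relational form, any relation can be queried directly by value.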
Entity–relationship model
The entity–relationship model views the real world as a set of basic objects, called
entities, and of relationships among these objects; entities are described in the database
by a set of attributes.
Experiment - Seven
(a) Objective: Draw Specialization Generalization & Aggregation with Smart Draw tool.
(b) Description:
Specialization works in a top-down manner: an entity set may include sub-groupings of entities
that are distinct in some way from the other entities in the set. For instance, a subset of entities
within an entity set may have attributes that are not shared by all the entities in the entity set.
Generalization works in a bottom-up manner: multiple entity sets are synthesized into a
higher-level entity set on the basis of common features.
Aggregation is an abstraction in which relationship sets (along with their associated entity sets)
are treated as higher-level entity sets, and can participate in relationships.
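Specialization and generalization loosely parallel class inheritance, so they can be sketched in Python. This is an analogy only, not ER-model semantics; the entity and attribute names are assumptions.

```python
class Person:                      # higher-level (generalized) entity set
    def __init__(self, name):
        self.name = name           # attribute shared by all entities

class Employee(Person):            # specialization: adds a salary attribute
    def __init__(self, name, salary):
        super().__init__(name)
        self.salary = salary

class Student(Person):             # specialization: adds a roll number
    def __init__(self, name, roll_no):
        super().__init__(name)
        self.roll_no = roll_no
```

Read top-down, Person is specialized into Employee and Student; read bottom-up, the two lower sets generalize into Person on the basis of their common name attribute.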
Experiment - Eight
(a) Objective: Draw ER Diagram of College Database, Railway Reservation Database,
Hospital Management System and Bank Database
(b) Description:
Assume all entities, attributes and relationships. Represent all cardinalities and constraints
for the following databases:
1) College Database
2) Railway Reservation Database
3) Hospital Management System
4) Bank Database
Experiment - Nine
(a) Objective: Study basic DDL commands.
(b) Description:
Table Creation: CREATE TABLE tablename (column1 datatype, column2 datatype, ...);
Table Description: DESC tablename;
Table Deletion: DROP TABLE tablename;
Table Renaming: RENAME oldname TO newname;
Rename Column: ALTER TABLE tablename RENAME COLUMN oldcolumn TO newcolumn;
Table Truncation: TRUNCATE TABLE tablename; (removes all rows but keeps the table structure)
Example:-
(Output snapshots: the student, class and employ tables are created and altered, with
DESC shown after each change, then truncated and finally dropped; the messages
"Table created.", "Table altered." and "Table dropped." are displayed.)
Result:
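The DDL sequence above can be replayed with Python's sqlite3 module. Note the assumptions: SQLite stands in for Oracle here, DESC becomes PRAGMA table_info, TRUNCATE becomes DELETE (SQLite has no TRUNCATE), and RENAME COLUMN needs SQLite 3.25 or later.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Table creation
conn.execute("CREATE TABLE student (rollno INTEGER, name TEXT)")

# Rename column (SQLite 3.25+), then describe the table
conn.execute("ALTER TABLE student RENAME COLUMN name TO sname")
columns = [row[1] for row in conn.execute("PRAGMA table_info(student)")]

# Table truncation: remove all rows but keep the structure
conn.execute("INSERT INTO student VALUES (1, 'Asha')")
conn.execute("DELETE FROM student")
remaining = conn.execute("SELECT COUNT(*) FROM student").fetchone()[0]

# Table deletion
conn.execute("DROP TABLE student")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
```

After the DROP, the catalog query returns no tables, mirroring the "Table dropped." message in the snapshots.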
Experiment - Ten
(a) Objective: Study basic DML commands (Select, Insert, Update, Delete, Group by, Order by).
(b) Description:
1. Select:
Syntax:
Select A1, A2, ..., An
From r1, r2, ..., rn
Where P
Description: It selects the rows from the table(s) that satisfy the predicate P.
2. Insert:
Syntax:
Insert into tablename
values (value1, value2, ...)
3. Update:
Syntax 1:
Update tablename
Set A = value
Where P
Syntax 2:
Update tablename
Set A = case
when condition then value
else 0
end
Description: It changes the value of column A in the rows that satisfy the predicate P.
4. Delete:
Syntax:
Delete from tablename
Where P
Description: It removes the rows that satisfy the predicate P.
5. Group by:
Syntax:
Select <(columns)>
From <(tables)>
Where <(condition)>
Group by <groupcolumns>
Description: It is used to group the rows that have certain properties and then to apply an
aggregate function on one column for each group separately.
6. Order by:
Syntax:
Select [distinct]<columns>
From <table>
Where <condition>
[order by <columns[asc/desc]>]
Output (snapshots):
1. Select loannumber, amount*1000
   From loan;
   (columns: LOANNUMBER, AMOUNT*1000)
2. Select loannumber
   from loan;
   (column: LOANNUMBER)
3. Select customername, loan.loannumber, amount
   From loan, Borrower
   Where loan.loannumber = Borrower.loannumber and
   Branchname = 'Perryridge';
   (columns: CUSTOMERNAME, LOANNUMBER)
   Select customername
   from Borrower
   order by Customername;
7. Select customername
   from customer;
   (column: CUSTOMERNAME)
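One possible run of these DML commands, using Python's sqlite3 module. The loan and borrower tables and their contents are assumptions based on the sample queries in this experiment.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loan (loannumber TEXT, branchname TEXT, amount REAL)")
conn.execute("CREATE TABLE borrower (customername TEXT, loannumber TEXT)")

# 2. Insert
conn.executemany("INSERT INTO loan VALUES (?, ?, ?)",
                 [("L-11", "Perryridge", 900), ("L-15", "Downtown", 1500)])
conn.executemany("INSERT INTO borrower VALUES (?, ?)",
                 [("Adams", "L-11"), ("Curry", "L-15")])

# 3. Update: raise every Perryridge loan by 100
conn.execute(
    "UPDATE loan SET amount = amount + 100 WHERE branchname = 'Perryridge'")

# 1./6. Select over two tables with an Order by
rows = conn.execute("""
    SELECT customername, loan.loannumber, amount
    FROM loan, borrower
    WHERE loan.loannumber = borrower.loannumber
    ORDER BY customername""").fetchall()

# 5. Group by with an aggregate per branch
totals = dict(conn.execute(
    "SELECT branchname, SUM(amount) FROM loan GROUP BY branchname").fetchall())
```

The joined SELECT pairs each borrower with the (updated) amount of their loan, sorted by customer name.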
Experiment - Eleven
(a) Objective: Study aggregate functions (count, avg, min, max, sum).
Queries recoverable from the output snapshots:
Select count(*) from custrelation;   (column: COUNT(*))
Select min(amount) from loan;        (column: MIN(AMOUNT))
The remaining snapshots show the output columns CNAME, LNUMBER; BRANCHNAME,
BRANCHCITY, ASSETS; AVG(BALANCE) grouped by BRANCHNAME (including a variant
restricted with having avg(balance) > 700); COUNT(DISTINCT BRANCHNAME);
MAX(BALANCE); MAX(BALANCE)-MIN(BALANCE); and SUM(AMOUNT).
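The aggregate queries above can be reproduced with sqlite3; the account table and its rows are assumptions chosen so every aggregate has something to compute.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (accno TEXT, branchname TEXT, balance REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?, ?)",
                 [("A-101", "Downtown",   500),
                  ("A-102", "Perryridge", 900),
                  ("A-201", "Perryridge", 700),
                  ("A-215", "Mianus",     700)])

total_rows  = conn.execute("SELECT COUNT(*) FROM account").fetchone()[0]
branches    = conn.execute(
    "SELECT COUNT(DISTINCT branchname) FROM account").fetchone()[0]
min_balance = conn.execute("SELECT MIN(balance) FROM account").fetchone()[0]
spread      = conn.execute(
    "SELECT MAX(balance) - MIN(balance) FROM account").fetchone()[0]

# Branch averages restricted with HAVING, as in avg(balance) > 700
rich = conn.execute("""
    SELECT branchname, AVG(balance) FROM account
    GROUP BY branchname HAVING AVG(balance) > 700""").fetchall()
```

Note that HAVING filters whole groups after aggregation, whereas WHERE would filter individual rows before it.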
Experiment - Twelve
(a) Objective: Study set operations and set membership (minus, in, not in).
Select cname from depositer minus (Select cname from borrower);   (column: CNAME)
Select cname from depositer where cname in (select cname from borrower);   (column: CNAME)
Select cname from depositer where cname not in (select cname from borrower);   (column: CNAME)
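The membership queries above can be run in sqlite3. Two assumptions: SQLite spells Oracle's MINUS as EXCEPT, and the depositer/borrower contents are invented sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE depositer (cname TEXT)")
conn.execute("CREATE TABLE borrower (cname TEXT)")
conn.executemany("INSERT INTO depositer VALUES (?)",
                 [("Hayes",), ("Johnson",), ("Jones",)])
conn.executemany("INSERT INTO borrower VALUES (?)", [("Jones",), ("Smith",)])

# Depositors who are not borrowers (MINUS in Oracle, EXCEPT in SQLite)
only_dep = conn.execute(
    "SELECT cname FROM depositer EXCEPT SELECT cname FROM borrower").fetchall()

# Depositors who are also borrowers (set membership with IN)
both = conn.execute("""
    SELECT cname FROM depositer
    WHERE cname IN (SELECT cname FROM borrower)""").fetchall()

# NOT IN selects the same customers as EXCEPT on this data
not_borrowers = conn.execute("""
    SELECT cname FROM depositer
    WHERE cname NOT IN (SELECT cname FROM borrower)""").fetchall()
```

EXCEPT also removes duplicate names from its result, which NOT IN does not; on this sample data the two agree.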
Experiment - Thirteen
(a) Objective: Study join operations.
INNER JOIN:
DESCRIPTION: Returns the matching rows from the tables that are being joined.
OUTER JOIN:
SYNTAX:
SELECT *
FROM tablename1 [LEFT | RIGHT | FULL] OUTER JOIN tablename2
ON tablename1.attribute = tablename2.attribute;
LEFT OUTER JOIN: Returns the matching rows from the tables that are being joined and
the non-matching rows from the left table.
RIGHT OUTER JOIN: Returns the matching rows from the tables that are being joined and
the non-matching rows from the right table.
FULL OUTER JOIN: Returns the matching rows from the tables that are being joined and
the non-matching rows from both the left and right tables.
INITIAL TABLES (snapshot): columns CNAME, LOANNO.
INNER JOIN and outer join outputs (snapshots).
SYNTAX:
SELECT * FROM tablename WHERE columnname = expressionlist;
OUTPUT (snapshots):
select * from account;
select * from depositor;   (columns: CNAME, ACCNO)
A view is then created ("view created") and queried:
select * from a;   (columns: CNAME, BNAME)
QUERY (snapshot): column CNAME.
Result:
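The join flavours above can be demonstrated with sqlite3 on small invented tables. One caveat as an assumption: SQLite gained RIGHT and FULL OUTER JOIN only in version 3.39, so this sketch sticks to INNER and LEFT OUTER JOIN, which every version supports.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE depositor (cname TEXT, accno TEXT)")
conn.execute("CREATE TABLE borrower (cname TEXT, loanno TEXT)")
conn.executemany("INSERT INTO depositor VALUES (?, ?)",
                 [("Jones", "A-101"), ("Hayes", "A-102")])
conn.executemany("INSERT INTO borrower VALUES (?, ?)", [("Jones", "L-17")])

# INNER JOIN: only the rows that match on cname
inner = conn.execute("""
    SELECT depositor.cname, accno, loanno
    FROM depositor INNER JOIN borrower
    ON depositor.cname = borrower.cname""").fetchall()

# LEFT OUTER JOIN: matching rows plus unmatched rows from the left table,
# padded with NULL (Python None) on the right
left = conn.execute("""
    SELECT depositor.cname, accno, loanno
    FROM depositor LEFT OUTER JOIN borrower
    ON depositor.cname = borrower.cname""").fetchall()
```

Hayes has no loan, so the inner join drops him while the left outer join keeps him with a NULL loan number.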
Program 1: table creation and value insertion (output snapshots).
1. What is a database?
A database is a logically coherent collection of related data, representing some aspect of
the real world.
2. What is DBMS?
It is a collection of programs that enables users to create and maintain a database. In other
words, it is general-purpose software that provides users with the processes of defining,
constructing and manipulating the database for various applications.
3. What are the advantages of DBMS?
1. Redundancy is controlled.
2. Unauthorised access is restricted.
3. Providing multiple user interfaces.
4. Enforcing integrity constraints.
5. Providing backup and recovery.
4. What are the three levels of data abstraction?
1. Physical level: The lowest level of abstraction; describes how data are stored.
2. Logical level: The next higher level of abstraction; describes what data are stored in
the database and what relationships exist among those data.
3. View level: The highest level of abstraction; describes only part of the entire database.
5. What are integrity rules?
1. Entity Integrity: States that "a Primary Key cannot have a NULL value."
2. Referential Integrity: States that "a Foreign Key can be either a NULL value or must
be the Primary Key value of another relation."
6. Define Extension and Intension.
1. Extension: The set of tuples present in a table at a particular instant; it is time-
dependent.
2. Intension: A constant value that gives the name and structure of the table and the
constraints laid on it.
7. What is Data Independence?
Data independence means that "the application is independent of the storage structure and
access strategy of data". In other words, the ability to modify the schema definition at one
level should not affect the schema definition at the next higher level.
Two types of Data Independence:
1. Physical Data Independence: Modification at the physical level should not affect the
logical level.
2. Logical Data Independence: Modification at the logical level should not affect the
view level.
8. What is a view?
A view may be thought of as a virtual table, that is, a table that does not really exist in its
own right but is instead derived from one or more underlying base tables. In other words,
there is no stored file that directly represents the view; instead, a definition of the view is
stored in the data dictionary.
9. What is a data model?
A collection of conceptual tools for describing data, data relationships, data semantics and
constraints.
10. What is an entity-relationship (E-R) model?
This data model is based on the real world, which consists of basic objects called entities
and of relationships among these objects. Entities are described in a database by a set of
attributes.
11. What is an object-oriented model?
This model is based on a collection of objects. An object contains values stored in instance
variables within the object. An object also contains bodies of code that operate on the
object; these bodies of code are called methods. Objects that contain the same types of
values and the same methods are grouped together into classes.
12. What is DML (Data Manipulation Language)?
A language that enables users to access or manipulate data as organised by the appropriate
data model. There are two types:
1. Procedural DML or Low level: requires a user to specify what data are needed
and how to get those data.
2. Non-Procedural DML or High level: requires a user to specify what data are
needed without specifying how to get those data.
13. What is a DML compiler?
It translates DML statements in a query language into low-level instructions that the query
evaluation engine can understand.
23. Are the resulting relations of the PRODUCT and JOIN operations the same?
No.
PRODUCT: Concatenation of every row in one relation with every row in another.
JOIN: Concatenation of rows from one relation and related rows from another.
24. What is the lossless (non-additive) join property of decomposition?
It guarantees that spurious tuple generation does not occur with respect to the relation
schemas after decomposition.
25. What is First Normal Form (1NF)?
The domain of an attribute must include only atomic (simple, indivisible) values.
26. What is Third Normal Form (3NF)?
A relation schema R is in 3NF if it is in 2NF and, for every FD X → A, either of the
following is true:
1. X is a super-key of R.
2. A is a prime attribute of R.
In other words, every non-prime attribute is non-transitively dependent on the primary key.
27. What is BCNF (Boyce-Codd Normal Form)?
A relation schema R is in BCNF if it is in 3NF and satisfies the additional constraint that,
for every FD X → A, X must be a candidate key.
28. What is Relational Algebra?
It is a procedural query language. It consists of a set of operations that take one or two
relations as input and produce a new relation.
29. What are the two forms of Relational Calculus?
1. The tuple-oriented calculus uses tuple variables, i.e., variables whose only permitted
values are tuples of a given relation. E.g. QUEL.
2. The domain-oriented calculus has domain variables, i.e., variables that range over the
underlying domains instead of over relations. E.g. ILL, DEDUCE.
30. What is a Functional Dependency?
A functional dependency, denoted by X → Y between two sets of attributes X and Y that
are subsets of R, specifies a constraint on the possible tuples that can form a relation state
r of R. The constraint is: for any two tuples t1 and t2 in r, if t1[X] = t2[X] then
t1[Y] = t2[Y]. This means the value of the X component of a tuple uniquely determines
the value of the Y component.
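The definition above can be checked mechanically. The sketch below tests whether X → Y holds in a relation state r, represented here (an assumption for illustration) as a list of dictionaries mapping attribute names to values.

```python
def fd_holds(r, X, Y):
    """Return True if every pair of tuples in r agreeing on X also agrees on Y."""
    seen = {}                                  # X-value -> Y-value observed so far
    for t in r:
        x_val = tuple(t[a] for a in X)
        y_val = tuple(t[a] for a in Y)
        if x_val in seen and seen[x_val] != y_val:
            return False                       # t1[X] = t2[X] but t1[Y] != t2[Y]
        seen[x_val] = y_val
    return True
```

For example, in a relation where empno determines ename but the same empno appears with two departments, fd_holds reports empno → ename as satisfied and empno → dept as violated.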