
DATABASE MANAGEMENT SYSTEM (10CS54) QP

SOLVED DEC-15
1 a) Write a note on various types of end users who use DBMS.

(08 M)

Sol: Actors on the Scene


These apply to "large" databases, not "personal" databases that are defined, constructed, and
used by a single person via, say, Microsoft Access.
Users may be divided into:
those who actually use and control the database content, and those who design, develop and maintain database applications (called Actors on the Scene), and
those who design and develop the DBMS software and related tools, and the computer systems operators (called Workers Behind the Scene).
1. Database Administrator (DBA): This is the chief administrator, who oversees
and manages the database system (including the data and software). Duties
include authorizing users to access the database, coordinating/monitoring its use,
acquiring hardware/software for upgrades, etc. In large organizations, the DBA
might have a support staff.
2. Database Designers: They are responsible for identifying the data to be stored and
for choosing an appropriate way to organize it. They also define views for
different categories of users. The final design must be able to support the
requirements of all the user sub-groups.
3. End Users: These are persons who access the database for querying, updating, and
report generation. They are the main reason for the database's existence!
o Casual end users: use database occasionally, needing different information
each time; use query language to specify their requests; typically middle- or
high-level managers.
o Naive/Parametric end users: Typically the biggest group of users; they
frequently query/update the database using standard canned transactions
that have been carefully programmed and tested in advance. Examples:
bank tellers, who check account balances and post withdrawals/deposits;
reservation clerks for airlines, hotels, etc., who check availability
of seats/rooms and make reservations;
shipping clerks (e.g., at UPS), who use buttons, bar code scanners,
etc., to update the status of in-transit packages.
o Sophisticated end users: engineers, scientists, business analysts who implement
their own applications to meet their complex needs.
o Stand-alone users: use "personal" databases, possibly employing a
special-purpose (e.g., financial) software package; they mostly maintain personal
databases using ready-to-use packaged applications.
o An example is the user of a tax package that creates its own internal database.
o Another example is maintaining an address book.
4. System Analysts, Application Programmers, Software Engineers:
o System Analysts: determine needs of end users, especially naive and
parametric users, and develop specifications for canned transactions that meet
these needs.
o Application Programmers: Implement, test, document, and maintain
programs that satisfy the specifications mentioned above.
Workers behind the Scene:
DBMS system designers/implementers: provide the DBMS software that is at the
foundation of all this!
Tool developers: design and implement software tools facilitating database
system design, performance monitoring, creation of graphical user interfaces,
prototyping, etc.
Operators and maintenance personnel: responsible for the day-to-day operation of the
system.

b) Explain the three level DBMS architecture, with a neat diagram. Why do we need
mappings between schema levels? Explain mapping in DBMS architecture.
(12 M)
Sol:

The goal of the three-schema architecture, illustrated in Figure 2.2, is to separate the user
applications and the physical database. In this architecture, schemas can be defined at the
following three levels:
Internal level: has an internal/physical schema that describes the physical storage
structure of the database using a low-level (physical) data model.
Conceptual level: has a conceptual schema describing the (logical) structure of the whole
database for a community of users. The conceptual schema hides the details of physical
storage structures and concentrates on describing entities, data types, relationships, user
operations, and constraints. It can be described using a high-level or an implementation
data model.

External/view level: includes a number of external schemas (or user views), each
of which describes the part of the database that a particular category of users is interested in,
hiding the rest of the database. Each external schema can be described using a high-level or an
implementation data model (in practice, it is usually described using the same model as the
conceptual schema).
Mappings between the schema levels are needed because most users and application programs
refer to an external schema, whereas the data is actually stored only at the internal level;
every request and its result must therefore be transformed between the levels.

The processes of transforming requests and results between levels are called mappings.
These mappings may be time-consuming, so some DBMSs-especially those that are meant to
support small databases-do not support external views. Even in such systems, however, a certain
amount of mapping is necessary to transform requests between the conceptual and internal
levels.
By virtue of the mappings between the levels we have:
the external/conceptual mapping (providing logical data independence), and
the conceptual/internal mapping (providing physical data independence).
Data independence is the capacity to change the schema at one level of the architecture
without having to change the schema at the next higher level.
We distinguish between logical and physical data independence according to which two
adjacent levels are involved.
1. Logical data independence is the capacity to change the conceptual schema without having
to change external schemas or application programs. We may change the conceptual schema
to expand the database (by adding a record type or data item), to change constraints, or to
reduce the database (by removing a record type or data item). In the last case, external
schemas that refer only to the remaining data should not be affected.
2. Physical data independence is the capacity to change the internal schema without having
to change
the conceptual schema. Hence, the external schemas need not be changed as
well. Changes to the internal schema may be needed because some physical files had to be
reorganized-for example, by creating additional access structures-to improve the performance of
retrieval or update. If the same data as before remains in the database, we should not have to
change the conceptual schema.
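As a concrete illustration (not part of the original answer; table, view, and column names are hypothetical), a relational DBMS realizes the external/conceptual mapping through views: the base table plays the role of the conceptual schema and the view acts as an external schema.

-- Conceptual level: a base table (hypothetical schema)
CREATE TABLE EMPLOYEE (
    SSN     CHAR(9) PRIMARY KEY,
    FNAME   VARCHAR(15),
    LNAME   VARCHAR(15),
    SALARY  DECIMAL(10,2),
    DNO     INT
);

-- External level: a view exposing only what one user group needs
CREATE VIEW EMP_PUBLIC AS
    SELECT SSN, FNAME, LNAME, DNO
    FROM EMPLOYEE;

-- Logical data independence: expanding the conceptual schema does not
-- require changing the external schema (the view) or its users.
ALTER TABLE EMPLOYEE ADD COLUMN BDATE DATE;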
2 a) Explain the ER notations used for various constructs in database in database schema.
(10M)
Sol:


1) Entity: An entity represents some "thing" (in the miniworld) that is of interest to us,
i.e., about which we want to maintain some data. An entity could represent a physical
object (e.g., house, person, automobile, widget) or a less tangible concept (e.g., company,
job, academic course).
2) Weak Entity: An entity type may also have no key, in which case it is called a weak entity
type.
3) Relationship: This is an association between two entities.
As an example, one can imagine a STUDENT entity being associated to an
ACADEMIC_COURSE entity via, say, an ENROLLED_IN relationship.
4) Identifying Relationship: An entity of a weak entity type is uniquely identified by the
specific entity to which it is related (by a so-called identifying relationship that relates
the weak entity type with its so-called identifying or owner entity type) in
combination with some set of its own attributes (called a partial key).
5) Attribute: An entity is described by its attributes, which are properties characterizing
it. Each attribute has a value drawn from some domain (set of meaningful values).
6) Key Attribute: A minimal collection of attributes (often only one) that, by design,
distinguishes any two (simultaneously-existing) entities of that type.
7) Multivalued Attribute: An attribute that can have more than one value for a single entity
(e.g., the degrees of a person) is called multivalued. A multivalued attribute may have lower
and upper bounds to constrain the number of values allowed for each individual entity.
8) Composite attributes: A composite attribute is one that is composed of smaller parts. It
can be divided into smaller subparts, which represent more basic attributes with
independent meanings.
9) Derived Attribute: It is one whose value can be calculated from the values of other
attributes, and hence need not be stored.
10) Total Participation: To say that entity type A is constrained to participate totally in
relationship R is to say that if (at some moment in time) R's instance set is
{ (a1, b1), (a2, b2), ..., (am, bm) },
then (at that same moment) A's instance set must be { a1, a2, ..., am }.
11) Cardinality ratio: The cardinality ratio for a binary relationship specifies the maximum
number of relationship instances that an entity can participate in.
For example, in the WORKS_FOR binary relationship type, DEPARTMENT : EMPLOYEE is of
cardinality ratio 1:N, meaning that each department can be related to (that is, employs)
any number of employees, but an employee can be related to (work for) only one department.
The possible cardinality ratios for binary relationship types are 1:1, 1:N, N:1, and M:N.
12) Structural Constraint: This notation involves associating a pair of integer numbers (min,
max) with each participation of an entity type E in a relationship type R, where
0 ≤ min ≤ max and max ≥ 1.
b) With respect to ER model, Explain with example i) Composite Attributes ii) Cardinality
ratio iii) Participation Constraints iv) Binary relationship.
(10M)
Sol:

i) Composite Attributes: A composite attribute is one that is composed of smaller parts. It can
be divided into smaller subparts, which represent more basic attributes with independent
meanings.
ii) Cardinality ratio: The cardinality ratio for a binary relationship specifies the maximum
number of relationship instances that an entity can participate in.
iii) Participation Constraints: The participation constraint specifies whether the existence of an
entity depends on its being related to another entity via the relationship type. This constraint
specifies the minimum number of relationship instances that each entity can participate in, and is
sometimes called the minimum cardinality constraint.
iv) Binary relationship: A relationship type of degree two is called a binary relationship.
For example, the WORKS_FOR relationship between EMPLOYEE and DEPARTMENT is of degree two.
3 a) Discuss the various type of JOIN operations. Why is theta join required?
(6M)
Sol:
The JOIN operation, denoted by ⋈, is used to combine related tuples from two relations into
single tuples. This operation is very important for any relational database with more than a single
relation, because it allows us to process relationships among relations. Two special types of
JOIN are the EQUIJOIN and the NATURAL JOIN.
EQUIJOIN: a JOIN that involves join conditions with equality comparisons only. Such a JOIN, where
the only comparison operator used is =, is called an EQUIJOIN.

NATURAL JOIN: because one of each pair of attributes with identical values is superfluous, a new
operation called NATURAL JOIN, denoted by *, was created to get rid of the second
(superfluous) attribute in an EQUIJOIN condition.
For example, we can first rename the DNUMBER attribute of DEPARTMENT to
DNUM so that it has the same name as the DNUM attribute in PROJECT, and then apply
NATURAL JOIN; the join is then performed on the common attribute DNUM.

THETA JOIN: the most general form, R1 ⋈θ R2, produces all combinations of tuples from R1 and R2
that satisfy the join condition θ. The theta join is required because related tuples sometimes
have to be matched using a general comparison (<, ≤, >, ≥, ≠) rather than simple equality;
EQUIJOIN and NATURAL JOIN are just special cases of it.
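The same ideas in SQL, for illustration only (the tables DEPT(DNO, DNAME, MGR_SSN) and EMP(SSN, LNAME, SALARY, DNO) are assumed, not given in the question):

-- EQUIJOIN: the join condition uses equality between (possibly differently named) attributes
SELECT DNAME, LNAME
FROM DEPT JOIN EMP ON DEPT.MGR_SSN = EMP.SSN;

-- NATURAL JOIN: joins on all identically named attributes (here DNO) and keeps one copy of each
SELECT *
FROM DEPT NATURAL JOIN EMP;

-- Theta join: a general comparison condition, which EQUIJOIN cannot express
SELECT E1.LNAME, E2.LNAME
FROM EMP E1 JOIN EMP E2 ON E1.SALARY > E2.SALARY;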
b) Consider the following relation schema:
USERS (uid,uname,cost)
GROUPS (gid,title,category,n,gsize,owner)
POSTS (pid,uid,gid,tid,ptext,pdate)
Write the following queries in relational algebra.
i) Show the text and number of all the posts made by user number 4 before March 1, 2007.
ii) Show the names of all the users who responded to post number 2.
iii) Show the uid and cost of all the users who are group owners and posted a thread on 1.1.2003.
(9M)
Sol:
i) π_{pid, ptext} ( σ_{uid = 4 AND pdate < '2007-03-01'} ( POSTS ) )

ii) π_{uname} ( σ_{tid = 2} ( USERS ⋈ POSTS ) )

iii) π_{uid, cost} ( σ_{pdate = '2003-01-01' AND uid = owner} ( (GROUPS ⋈ POSTS) ⋈ USERS ) )
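For comparison, query (i) expressed in SQL (the date literal format is an assumption):

SELECT pid, ptext
FROM POSTS
WHERE uid = 4 AND pdate < '2007-03-01';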

c. Explain the SELECT and PROJECT operations in relational algebra with example.
(5M)
Sol:
SELECT :
The SELECT operation is used to select a subset of the tuples from a relation that satisfy a
selection condition. One can consider the SELECT operation to be a filter that keeps only those
tuples that satisfy a qualifying condition.
The SELECT operation can also be visualized as a horizontal partition of the relation into two
sets of tuples-those tuples that satisfy the condition and are selected, and those tuples that do not
satisfy the condition and are discarded.
For example, to select the EMPLOYEE tuples whose department is 4, or those whose salary is
greater than $30,000, we can individually specify each of these two conditions with a SELECT
operation as follows:
σ_{DNO = 4} ( EMPLOYEE )
σ_{SALARY > 30000} ( EMPLOYEE )
PROJECT:
The PROJECT operation selects certain columns from the table and discards the other columns.
If we are interested in only certain attributes of a relation,
we use the PROJECT operation to project the relation over these attributes only. The result of the
PROJECT operation can hence be visualized as a vertical partition of the relation into two
relations: one has the needed columns (attributes) and contains the result of the operation, and
the other contains the discarded columns. For example, to list each employee's first and last name
and salary,
we can use the PROJECT operation as follows:

π_{LNAME, FNAME, SALARY} ( EMPLOYEE )
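For reference, the corresponding SQL (using the same EMPLOYEE attribute names as above):

-- SELECT (σ) corresponds to the WHERE clause
SELECT * FROM EMPLOYEE WHERE DNO = 4;
SELECT * FROM EMPLOYEE WHERE SALARY > 30000;

-- PROJECT (π) corresponds to the SELECT list (DISTINCT gives true set semantics)
SELECT DISTINCT LNAME, FNAME, SALARY FROM EMPLOYEE;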

4 a) Explain the following:


i) Primary key ii) Foreign key iii) Candidate key
(6M)
Sol:
i) Primary key: a key chosen to act as the means by which to identify tuples in a
relation. Typically, one prefers a primary key to be one having as few attributes as possible.
ii) Foreign key: A foreign key of relation R is a set of its attributes intended to be used (by
each tuple in R) for identifying/referring to a tuple in some relation S. (R is called the
referencing relation and S the referenced relation.) For this to make sense, the set of attributes
of R forming the foreign key should "correspond to" some superkey of S. Indeed, by definition
we require this superkey to be the primary key of S.
iii) Candidate key: In general, a relation schema may have more than one key. In this case, each
of the keys is called a candidate key.
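A brief SQL sketch of how these keys are declared (the tables are hypothetical):

CREATE TABLE DEPARTMENT (
    DNUMBER INT PRIMARY KEY,       -- primary key: the candidate key chosen to identify tuples
    DNAME   VARCHAR(25) UNIQUE     -- another candidate key of the same relation
);

CREATE TABLE EMPLOYEE (
    SSN  CHAR(9) PRIMARY KEY,
    DNO  INT REFERENCES DEPARTMENT(DNUMBER)   -- foreign key referring to DEPARTMENT's primary key
);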
b) Consider the following relations:
Hotel(Hotelno,name,address)
Room( Roomno,hotelno,type,price)
Booking(Hotelno,guestno,datefrom,dateto,roomno)
Guest(guestno,name,address)
Write the SQL statements for the following:
i) List the names and addresses of all guests booked into a hotel located in Chandigarh,
alphabetically ordered by name.
ii) List all family rooms with a price below Rs 400 per night, in ascending order of price,
in hotel RVH.
iii) How many hotels are there?
(9M)
Sol: i) SELECT guest.name, guest.address
FROM guest, booking, hotel
WHERE guest.guestno = booking.guestno AND booking.hotelno = hotel.hotelno
      AND hotel.address LIKE '%Chandigarh%'
ORDER BY guest.name;
ii) SELECT *
FROM room, hotel
WHERE room.hotelno = hotel.hotelno AND room.price < 400
      AND hotel.name = 'RVH' AND room.type = 'family'
ORDER BY room.price;

iii) SELECT COUNT(*)
FROM hotel;

c. Explain with example in SQL


i) Drop Command ii) Delete Command

(5M)

Sol:
i) Drop Command: The DROP command can be used to drop named schema elements, such as
tables, domains, or constraints. One can also drop a schema.
For example, to remove the COMPANY database schema and all its tables, domains, and other
elements, the CASCADE option is used as follows:
DROP SCHEMA COMPANY CASCADE;
ii) Delete Command: The DELETE command removes tuples from a relation; a WHERE clause selects
the tuples to be deleted. A Delete operation can violate only referential integrity, if the tuple
being deleted is referenced by foreign keys from other tuples in the database.
For example: delete the WORKS_ON tuple with ESSN = '999887777' and PNO = 10, as shown below.
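The corresponding SQL statement for this example is:

DELETE FROM WORKS_ON
WHERE ESSN = '999887777' AND PNO = 10;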
5 a) What is a view? Explain how a view is created and how a view can be dropped.
(8M)
Sol:
A view in SQL terminology is a single table that is derived from other tables. A view does
not necessarily exist in physical form; it is considered a virtual table, in contrast to base tables,
whose tuples are actually stored in the database.
In SQL, the command to specify a view is CREATE VIEW. The view is given a (virtual) table
name (or view name), a list of attribute names, and a query to specify the contents of the view.

CREATE VIEW WORKS_ON1


AS SELECT FNAME, LNAME, PNAME, HOURS
FROM EMPLOYEE, PROJECT, WORKS_ON
WHERE SSN=ESSN AND PNO=PNUMBER;
If we do not need a view any more, we can use the DROP VIEW command to dispose of it. For
example, to get rid of the view WORKS_ON1, we can use the SQL statement:
DROP VIEW WORKS_ON1;
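Once created, the view can be queried like a base table; an illustrative query:

SELECT FNAME, LNAME
FROM WORKS_ON1
WHERE HOURS > 20;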
b) Explain the following
i) Embedded SQL ii) Database stored procedure

(12M)

Sol:
i) Embedded SQL:
In the program segment below, the only embedded SQL commands are lines 1 and 7, which tell the
precompiler to take note of the C variable names between BEGIN DECLARE SECTION and END
DECLARE SECTION because they can be included in embedded SQL statements, as long as they are
preceded by a colon (:). Lines 2 through 5 are regular C program declarations. The C program
variables declared in lines 2 through 5 correspond to the attributes of the EMPLOYEE and
DEPARTMENT tables from the COMPANY database.
0) int loop ;
1) EXEC SQL BEGIN DECLARE SECTION ;
2) varchar dname [16], fname [16], lname [16], address [31] ;
3) char ssn [10], bdate [11], sex [2], minit [2] ;
4) float salary, raise ;
5) int dno, dnumber ;
6) int SQLCODE ; char SQLSTATE [6] ;
7) EXEC SQL END DECLARE SECTION ;
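A sketch of how these shared variables might then be used in an embedded SELECT (the control flow is illustrative and assumes the usual <stdio.h> include; only the EMPLOYEE.SSN and SALARY columns are assumed):

/* Program segment continuing the declarations above */
loop = 1 ;
while (loop) {
    printf("Enter a Social Security Number: ") ;
    scanf("%9s", ssn) ;
    EXEC SQL SELECT SALARY INTO :salary        /* host variables are prefixed with ':' */
             FROM EMPLOYEE WHERE SSN = :ssn ;
    if (SQLCODE == 0)
        printf("Salary: %.2f\n", salary) ;
    else
        printf("No employee with that SSN (SQLCODE = %d)\n", SQLCODE) ;
    printf("More (1 = yes, 0 = no)? ") ;
    scanf("%d", &loop) ;
}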
ii) Database stored procedure: it is sometimes useful to create database program modules
(procedures or functions) that are stored and executed by the DBMS at the database server. These
are historically known as database stored procedures, although they can be functions or
procedures.
Stored procedures are useful in the following circumstances:
If a database program is needed by several applications, it can be stored at the server and
invoked by any of the application programs. This reduces duplication of effort and improves
software modularity.
Executing a program at the server can reduce data transfer and hence communication cost
between the client and server in certain situations.
These procedures can enhance the modeling power provided by views by allowing more
complex types of derived data to be made available to the database users. In addition, they can be
used to check for complex constraints that are beyond the specification power of assertions and
triggers.
The general form of declaring a stored procedure is as follows:
CREATE PROCEDURE <procedure name> ( <parameters> )
<local declarations>
<procedure body> ;
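For instance, a minimal SQL/PSM-style sketch (the procedure name, parameters, and table are illustrative, not taken from the question):

CREATE PROCEDURE RAISE_SALARY (IN emp_ssn CHAR(9), IN amount DECIMAL(10,2))
    UPDATE EMPLOYEE
    SET SALARY = SALARY + amount
    WHERE SSN = emp_ssn;

-- An application or SQL session would then invoke it with:
-- CALL RAISE_SALARY('123456789', 500.00);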
6 a) Explain the informal design guidelines for relation schemas.
(6M)
Sol:

The four informal measures of quality for a relation schema are:


Semantics of the attributes
Reducing the redundant values in tuples
Reducing the null values in tuples
Disallowing the possibility of generating spurious tuples

6.1.1 Semantics of the relation attributes

This specifies how to interpret the attribute values stored in a tuple of the relation; in other
words, how the attribute values in a tuple relate to one another.
Guideline 1: Design a relation schema so that it is easy to explain its meaning. Do not combine
attributes from multiple entity types and relationship types into a single relation.
Reducing redundant values in tuples saves storage space and avoids update anomalies. Redundancy
can cause three kinds of update anomalies:


Insertion anomalies.
Deletion anomalies.
Modification anomalies.

Insertion Anomalies
To insert a new employee tuple into EMP_DEPT, we must include either the attribute values for
the department that the employee works for, or nulls. It is difficult to insert a new department
that has as yet no employees into the EMP_DEPT relation; the only way to do this is to place null
values in the employee attributes. This causes a problem because SSN is the primary key of
EMP_DEPT, and each tuple is supposed to represent an employee entity, not a department entity.
Deletion Anomalies
If we delete from EMP_DEPT an employee tuple that happens to represent the last employee
working for a particular department, the information concerning that department is lost from the
database.
Modification Anomalies
In EMP_DEPT, if we change the value of one of the attributes of a particular department- say the
manager of department 5- we must update the tuples of all employees who work in that
department.
Guideline 2: Design the base relation schemas so that no insertion, deletion, or modification
anomalies occur.
Reducing the null values in tuples: e.g., if 10% of employees have offices, it is
better to have a separate relation, EMP_OFFICE, rather than an attribute OFFICE_NUMBER in
EMPLOYEE.
Guideline 3: Avoid placing attributes in a base relation whose values are mostly null.
Disallowing spurious tuples: spurious tuples are tuples that are not in the original relation but
are generated by the natural join of badly decomposed subrelations. Example: decomposing
EMP_PROJ into EMP_LOCS and EMP_PROJ1.

Guideline 4: Design relation schemas so that they can be naturally JOINed on primary keys or
foreign keys in a way that guarantees no spurious tuples are generated.
b) What is the need for Normalization? Explain the first, second, and third normal forms
with examples.
(14M)
Sol:
Normalization is needed to produce relation schemas of good quality. Understanding it involves:

The problems associated with redundant data.


The identification of various types of update anomalies such as insertion, deletion,
and modification anomalies.
How to recognize the appropriateness or quality of the design of relations.
The concept of functional dependency, the main tool for measuring the appropriateness
of attribute groupings in relations.
How functional dependencies can be used to group attributes into relations that are in a
known normal form.
How to define normal forms for relations.
How to undertake the process of normalization.
How to identify the most commonly used normal forms, namely first (1NF), second (2NF),
and third (3NF) normal forms, and Boyce-Codd normal form (BCNF).
How to identify fourth (4NF), and fifth (5NF) normal forms.
First Normal Form (1NF)
First normal form is now considered to be part of the formal definition of a relation;
historically, it was defined to disallow multivalued attributes, composite attributes, and their
combinations. It states that the domains of attributes must include only atomic (simple,
indivisible) values and that the value of any attribute in a tuple must be a single value from the
domain of that attribute.
Practical Rule: "Eliminate Repeating Groups," i.e., make a separate table for each set of
related attributes, and give each table a primary key.
Formal Definition: A relation is in first normal form (1NF) if and only if all underlying
simple domains contain atomic values only.

Second Normal Form (2NF)


Second normal form is based on the concept of full functional dependency. A functional
dependency X → Y is a full functional dependency if removal of any attribute A from X means
that the dependency does not hold any more. A relation schema is in 2NF if every nonprime
attribute in the relation is fully functionally dependent on the primary key of the relation. This
can also be restated as: a relation schema is in 2NF if no nonprime attribute is partially
dependent on any key of the relation.
Practical Rule: "Eliminate Redundant Data," i.e., if an attribute depends on only part of
a multivalued key, remove it to a separate table.
Formal Definition: A relation is in second normal form (2NF) if and only if it is in 1NF
and every nonkey attribute is fully dependent on the primary key.
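A standard illustration (attribute names assumed, following the usual EMP_PROJ example): in EMP_PROJ(SSN, PNUMBER, HOURS, ENAME, PNAME, PLOCATION) the key is {SSN, PNUMBER}; ENAME depends only on SSN, and PNAME, PLOCATION depend only on PNUMBER, so these are partial dependencies and the relation is not in 2NF. Decomposing removes them:
EP1(SSN, PNUMBER, HOURS)        -- HOURS is fully dependent on the whole key
EP2(SSN, ENAME)
EP3(PNUMBER, PNAME, PLOCATION)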


Third Normal Form (3NF)


Third normal form is based on the concept of transitive dependency. A functional
dependency X → Y in a relation is a transitive dependency if there is a set of attributes Z that
is not a subset of any key of the relation, and both X → Z and Z → Y hold. In other words, a
relation is in 3NF if, whenever a functional dependency X → A holds in the relation, either
(a) X is a superkey of the relation, or (b) A is a prime attribute of the relation.
Practical Rule: "Eliminate Columns not Dependent on Key," i.e., if attributes do not contribute
to a description of a key, remove them to a separate table.
Formal Definition: A relation is in third normal form (3NF) if and only if it is in 2NF and
every nonkey attribute is nontransitively dependent on the primary key.
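Continuing with the EMP_DEPT relation mentioned earlier (its attributes are assumed here): in EMP_DEPT(SSN, ENAME, BDATE, DNUMBER, DNAME, DMGRSSN), SSN → DNUMBER and DNUMBER → {DNAME, DMGRSSN}, so DNAME and DMGRSSN are transitively dependent on the key SSN and the relation is not in 3NF. Decomposing removes the transitive dependency:
ED1(SSN, ENAME, BDATE, DNUMBER)
ED2(DNUMBER, DNAME, DMGRSSN)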

6.4 Boyce-Codd Normal Form (BCNF)


A relation schema R is in Boyce-Codd Normal Form (BCNF) if whenever a nontrivial FD X → A holds
in R, then X is a superkey of R.

Each normal form is strictly stronger than the previous one:
Every 2NF relation is in 1NF.
Every 3NF relation is in 2NF.
Every BCNF relation is in 3NF.
There exist relations that are in 3NF but not in BCNF.

A relation is in BCNF if and only if every determinant is a candidate key. Additional criteria
may be needed to ensure that the set of relations in a relational database is satisfactory.

7 a) Consider the schema


R = (A, B, C, D, E). Suppose the following functional dependencies hold:
E → A
CD → E
A → BC
B → D
State whether the following decompositions of R are lossless-join decompositions or not;
justify. {(A,B,C), (A,D,E)}
{(A,B,C), (C,D,E)}
(10M)
Sol:
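A sketch of the standard test (reading the dependencies as A → BC, CD → E, B → D, E → A): a binary decomposition {R1, R2} of R is lossless if and only if the common attributes R1 ∩ R2 form a superkey of R1 or of R2.
{(A,B,C), (A,D,E)}: R1 ∩ R2 = {A}. Computing the closure, A+ = {A, B, C, D, E} (A → BC gives B and C, B → D gives D, CD → E gives E), so A is a key of R and in particular a superkey of (A,B,C). Hence this decomposition is a lossless-join decomposition.
{(A,B,C), (C,D,E)}: R1 ∩ R2 = {C}. The closure C+ = {C}, since no dependency has a left-hand side contained in {C}; C is not a superkey of either relation, so this decomposition is not lossless (joining the two projections can generate spurious tuples).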

b) Explain the following


i) Inclusion dependencies
ii) Domain key normal form.

(10M)

Sol:
i) Inclusion dependencies: An inclusion dependency R.X < S.Y between two sets of attributes
(X of relation schema R, and Y of relation schema S) specifies the constraint that, at any
specific time when r is a relation state of R and s is a relation state of S, we must have
π_X(r(R)) ⊆ π_Y(s(S)).
The ⊆ (subset) relationship does not necessarily have to be a proper subset.
Obviously, the sets of attributes on which the inclusion dependency is specified (X of R and Y of
S) must have the same number of attributes. In addition, the domains for each pair of
corresponding attributes should be compatible.
For example, if X = {A1, A2, ..., An} and Y = {B1, B2, ..., Bn}, one possible correspondence is to
have dom(Ai) compatible with dom(Bi) for 1 ≤ i ≤ n; in this case, we say that Ai corresponds to
Bi.
For example, we can specify the following inclusion dependencies on the relational
schema in Figure 10.1:
DEPARTMENT.DMGRSSN < EMPLOYEE.SSN
WORKS_ON.SSN < EMPLOYEE.SSN
EMPLOYEE.DNUMBER < DEPARTMENT.DNUMBER
PROJECT.DNUM < DEPARTMENT.DNUMBER
WORKS_ON.PNUMBER < PROJECT.PNUMBER
DEPT_LOCATIONS.DNUMBER < DEPARTMENT.DNUMBER
ii) Domain key normal form:
There is no hard and fast rule about defining normal forms only up to 5NF. Historically,
the process of normalization and the process of discovering undesirable dependencies was
carried through 5NF, but it has been possible to define stricter normal forms that take into
account additional types of dependencies and constraints. The idea behind domain-key normal
form (DKNF) is to specify (theoretically, at least) the "ultimate normal form" that takes into
account all possible types of dependencies and constraints. A relation schema is said to be in
DKNF if all constraints and dependencies that should hold on the valid relation states can be
enforced simply by enforcing the domain constraints and key constraints on the relation. For a
relation in DKNF, it becomes very straightforward to enforce all database constraints by simply
checking that each attribute value in a tuple is of the appropriate domain and that every key
constraint is enforced.
However, because of the difficulty of including complex constraints in a DKNF relation,
its practical utility is limited, since it may be quite difficult to specify general integrity
constraints. For example, consider a relation CAR (MAKE, VIN#) (where VIN# is the vehicle
identification number) and another relation MANUFACTURE (VIN# , COUNTRY) (where
COUNTRY is the country of manufacture). A general constraint may be of the following form:
"If the MAKE is
either Toyota or Lexus, then the first character of the VIN# is a 'T' if the country of manufacture
is Japan; if the MAKE is Honda or Acura, the second character of the VIN# is a 'T' if the country
of manufacture is Japan." There is no simplified way to represent such constraints short of
writing a procedure (or general assertions) to test them.
8 a) Explain why a transaction execution should be atomic? Explain ACID properties by
considering the following transaction
Ti: read(A)
A:=A-50;
write(A);
read(B);
B:=B+50;
write(B);

(10M)

Sol:
A transaction is an atomic unit of work that is either completed in its entirety or not done at all.
For recovery purposes, the system needs to keep track of when the transaction starts, terminates,
and commits or aborts.
Transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A - 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
Atomicity requirement:

o if the transaction fails after step 3 and before step 6, money will be lost leading
to an inconsistent database state
Failure could be due to software or hardware
o the system should ensure that updates of a partially executed transaction are not
reflected in the database
Durability requirement: once the user has been notified that the transaction has
completed (i.e., the transfer of the $50 has taken place), the updates to the database by the
transaction must persist even if there are software or hardware failures.
Consistency requirement: in the above example, the sum of A and B is unchanged by the execution
of the transaction.
In general, consistency requirements include
Explicitly specified integrity constraints such as primary keys and foreign
keys
Implicit integrity constraints
e.g. sum of balances of all accounts, minus sum of loan amounts
must equal value of cash-in-hand
o A transaction must see a consistent database.
o During transaction execution the database may be temporarily inconsistent.
o When the transaction completes successfully the database must be consistent
Erroneous transaction logic can lead to inconsistency
Isolation requirement: if between steps 3 and 6 another transaction T2 is allowed to
access the partially updated database, it will see an inconsistent database (the sum
A + B will be less than it should be).
T1                              T2
1. read(A)
2. A := A - 50
3. write(A)
                                read(A), read(B), print(A+B)
4. read(B)
5. B := B + 50
6. write(B)
Isolation can be ensured trivially by running transactions serially, that is, one after the other.
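For illustration, the same transfer written as an SQL transaction (the ACCOUNT table and its columns are assumptions made for this sketch):

-- Assumed table: ACCOUNT(acc_no, balance)
BEGIN TRANSACTION;                                   -- START TRANSACTION in some DBMSs
UPDATE ACCOUNT SET balance = balance - 50 WHERE acc_no = 'A';
UPDATE ACCOUNT SET balance = balance + 50 WHERE acc_no = 'B';
COMMIT;      -- makes both updates durable; ROLLBACK instead would undo both (atomicity)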
b) Briefly discuss on the two phase locking protocol used in concurrency control.
(10M)
Sol:
Locking is an operation which secures (a) permission to read or (b) permission to write a data
item for a transaction.
Example: lock(X). Data item X is locked on behalf of the requesting transaction.
Unlocking is an operation which removes these permissions from the data item.
Example: unlock(X). Data item X is made available to all other transactions.
Lock and Unlock are atomic operations.
There are two lock modes: (a) shared (read) and (b) exclusive (write).
Shared mode: read_lock(X). More than one transaction can apply a shared lock on X for
reading its value, but no write lock can be applied on X by any other transaction.
Exclusive mode: write_lock(X). Only one write lock on X can exist at any time, and no
shared lock can be applied by any other transaction on X.
Lock Manager: the DBMS component that manages locks on data items.
Lock table: the lock manager uses it to store the identity of the transaction locking a data item,
the data item, the lock mode, and a pointer to the next data item locked. One simple way to
implement a lock table is through a linked list:
<Transaction ID, Data item id, Lock mode, Ptr to next data item>

Two-Phase Locking Techniques: Essential components
A transaction follows the two-phase locking (2PL) protocol if all locking operations (read_lock,
write_lock) precede the first unlock operation in the transaction. Such a transaction has an
expanding (growing) phase, during which locks are acquired, followed by a shrinking phase, during
which locks are released; once a transaction releases any lock, it cannot request a new lock. If
every transaction in a schedule follows the 2PL protocol, the schedule is guaranteed to be
serializable.

The following code performs the lock operation (for a binary lock):
B:  if LOCK(X) = 0                      (* item is unlocked *)
        then LOCK(X) ← 1                (* lock the item *)
    else begin
        wait (until LOCK(X) = 0 and the lock manager wakes up the transaction);
        go to B
    end;
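As an illustration (a sketch; read_item/write_item denote reading and writing the data item), the following transaction obeys the two-phase locking protocol because every lock is acquired before any lock is released:

T1:  read_lock(Y);
     read_item(Y);
     write_lock(X);      (* last lock request: end of the growing phase *)
     unlock(Y);          (* first unlock: start of the shrinking phase *)
     read_item(X);
     X := X + Y;
     write_item(X);
     unlock(X);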
