DBMS Viva Questions
A list of the top frequently asked DBMS interview questions and answers is given below.
1) What is DBMS?
DBMS stands for Database Management System. It is system software used to define, create, store, retrieve, update and manage the data in a database, and it acts as an interface between the database and its end users or application programs.
2) What is a database?
A database is a logical, consistent and organized collection of data that can easily be
accessed, managed and updated. Databases, also known as electronic databases, are
structured to support the efficient creation, insertion and updating of data, and are
stored in the form of a file or set of files on magnetic disk, tape or other kinds of
secondary storage devices. A database mostly consists of objects (tables), and tables
consist of records and fields. Fields are the basic units of data storage; they contain
information about a particular aspect or attribute of the entity described by the
database. A DBMS is used for extracting data from the database in the form of
queries.
The data can be stored in the database with ease, and there are no issues of data
redundancy and data inconsistency.
The data can be extracted from the database using the DBMS software whenever required.
So, the combination of the database and the DBMS software enables one to store, retrieve
and access data with considerable accuracy and security.
● Redundancy control
● Easy accessibility
● Easy data extraction and data processing due to the use of queries
A checkpoint is a mechanism at which all the previous logs are removed from the
system and stored permanently on the storage disk.
There are two ways which can help the DBMS in recovering and maintaining the ACID
properties: maintaining a log of each transaction, and maintaining shadow pages.
Checkpoints belong to the log-based recovery scheme. A checkpoint is a point to which
the database engine can recover after a crash; it is the minimal point from which the
transaction log records can be used to recover all the committed data up to the moment
of the crash.
A checkpoint is like a snapshot of the DBMS state. Using checkpoints, the DBMS can
reduce the amount of work to be done during a restart in the event of subsequent
crashes. Checkpoints are used for recovery of the database after a system crash and
are part of the log-based recovery system. When the system has to be restarted after a
crash, recovery begins from the most recent checkpoint, so the transactions do not have
to be replayed from the very beginning.
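As a concrete illustration, some systems expose checkpointing directly. Assuming a PostgreSQL or SQL Server installation (where the statement below exists and normally requires administrative privileges), a checkpoint can also be forced manually, although the engine issues them automatically:

```sql
-- Force an immediate checkpoint: dirty pages in the buffer cache are flushed
-- to disk and a checkpoint record is written to the transaction log.
CHECKPOINT;
```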
The transparent DBMS is a type of DBMS which keeps its physical structure hidden from
users. The physical structure, or physical storage structure, refers to the memory
manager of the DBMS and describes how the data is stored on disk.
PROJECTION and SELECTION are unary operations in relational algebra. Unary
operations are those operations which use a single operand. The unary operations are
SELECTION, PROJECTION, and RENAME.
● Data Control Language (DCL), e.g., GRANT and REVOKE. These commands
are used for granting and removing user access to the database, so they are
part of the Data Control Language.
A database language comprises the statements that are used to update, modify and
manipulate the data.
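A minimal sketch of DCL in use (the employees table and report_user account are hypothetical):

```sql
-- Allow report_user to read the employees table.
GRANT SELECT ON employees TO report_user;

-- Take that access away again.
REVOKE SELECT ON employees FROM report_user;
```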
11) What do you understand by Data Model?
The Data model is specified as a collection of conceptual tools for describing data, data
relationships, data semantics and constraints. These models are used to describe the
relationship between the entities and their attributes.
● network model
● relational model
A relation is specified as a set of tuples. A relation is a set of related attributes with
identifying key attributes.
Let r be a relation which contains a set of tuples (t1, t2, t3, ..., tn). Each tuple is an
ordered list of n values t = (v1, v2, ..., vn).
A relationship is defined as an association among two or more entities. There are
three types of relationships in DBMS:
One-to-one: Here one record of an object can be related to one record of another
object.
One-to-many (many-to-one): Here one record of an object can be related to many
records of another object, and vice versa.
Many-to-many: Here more than one record of an object can be related to many records
of another object, and vice versa.
● Inconsistent
● Not secure
● Data redundancy
● Data isolation
● Data integrity
● Atomicity problem
16) What is data abstraction in DBMS?
Data abstraction in DBMS is the process of hiding irrelevant details from users. Database
systems are made up of complex data structures, so abstraction is what makes user
interaction with the database manageable.
For example: we know that most users prefer systems with a simple GUI, that is, no
complex processing. So, to keep users comfortable and to make access to the data easy,
it is necessary to perform data abstraction. In addition, data abstraction divides the
system into different layers so that the work is specified and well defined.
Physical level: It is the lowest level of abstraction. It describes how data are stored.
Logical level: It is the next higher level of abstraction. It describes what data are stored
in the database and what the relationship among those data is.
View level: It is the highest level of data abstraction. It describes only part of the entire
database.
For example: the user interacts with the system using the GUI and fills in the required
details, but the user doesn't have any idea how the data is being used. So the level of
abstraction at the VIEW LEVEL is entirely high.
The next level is for PROGRAMMERS, as at this level the fields and records are
visible and the programmers have knowledge of this layer. So the level of
abstraction here is a little lower than at the VIEW LEVEL.
Data Definition Language (DDL) is a standard for commands which define the different
structures in a database. The most common DDL statements are CREATE, ALTER, and
DROP. These commands are used for creating and changing the structure of database
objects rather than the data inside them.
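A small DDL sketch (the table and column names are purely illustrative):

```sql
-- Define a new table (structure only, no data).
CREATE TABLE employees (
    emp_id   INT PRIMARY KEY,
    emp_name VARCHAR(100)
);

-- Modify the structure: add a salary column.
ALTER TABLE employees ADD salary DECIMAL(10, 2);

-- Remove the table (and all of its data) entirely.
DROP TABLE employees;
```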
Data Manipulation Language (DML) is a language that enables the user to access or
manipulate data as organized by the appropriate data model. For example: SELECT,
UPDATE, INSERT, DELETE.
Procedural DML or Low level DML: It requires a user to specify what data are needed
and how to get those data.
Non-Procedural DML or High-level DML: It requires a user to specify what data are
needed without specifying how to get those data.
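Typical DML statements, reusing the hypothetical employees table from the DDL sketch above:

```sql
-- Add a row.
INSERT INTO employees (emp_id, emp_name, salary) VALUES (1, 'Asha', 55000);

-- Read data back (non-procedural: we say what we want, not how to fetch it).
SELECT emp_name, salary FROM employees WHERE salary > 50000;

-- Change existing data.
UPDATE employees SET salary = 60000 WHERE emp_id = 1;

-- Remove data.
DELETE FROM employees WHERE emp_id = 1;
```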
The DML compiler translates DML statements into a query language that the query
evaluation engine can understand. A DML compiler is required because DML is a family
of syntax elements, very similar to other programming languages, which requires
compilation. So it is essential to compile the code into a language the query evaluation
engine can understand and then work on those queries to produce the proper output.
● project
● set difference
● union
● rename, etc.
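Each of these relational algebra operations has a rough SQL counterpart; a sketch assuming hypothetical, union-compatible employee relations:

```sql
-- Projection: keep only some columns.
SELECT emp_name FROM employees;

-- Union of two union-compatible relations.
SELECT emp_id FROM current_employees
UNION
SELECT emp_id FROM retired_employees;

-- Set difference (EXCEPT; Oracle calls it MINUS).
SELECT emp_id FROM current_employees
EXCEPT
SELECT emp_id FROM suspended_employees;

-- Rename: give the relation and an attribute new names via aliases.
SELECT e.emp_name AS name FROM employees AS e;
```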
The term query optimization refers to choosing an efficient execution plan for evaluating
a query, one that has the least estimated cost. The concept of query optimization came
into the frame because a number of methods and algorithms exist for the same task,
which raises the question of which one is more efficient; the process of determining the
most efficient way is known as query optimization.
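Most products let you inspect the plan the optimizer picked. A sketch assuming PostgreSQL or MySQL and the hypothetical tables used earlier (the exact syntax and output differ between systems):

```sql
-- Show the execution plan the optimizer chose for this query.
EXPLAIN
SELECT e.emp_name
FROM employees e
JOIN departments d ON e.dept_id = d.dept_id
WHERE d.dept_name = 'Sales';
```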
Once the DBMS informs the user that a transaction has completed successfully, its
effect should persist even if the system crashes before all its changes are reflected on
disk. This property is called durability. Durability ensures that once the transaction is
committed into the database, it will be stored in the non-volatile memory and after that
system failure cannot affect that data anymore.
E-R model is a short name for the Entity-Relationship model. This model is based on
the real world. It contains the necessary objects (known as entities) and the relationships
among these objects. Here the primary objects are the entity, the attributes of that entity,
the relationship set and the attributes of that relationship set, all of which can be mapped
in the form of an E-R diagram.
The Entity is a set of attributes in a database. An entity can be a real-world object which
physically exists in this world. All entities have attributes, which in the real world are
considered the characteristics of the object.
For example: In the employee database of a company, the employee, department, and
the designation can be considered as the entities. These entities have some
characteristics which will be the attributes of the corresponding entity.
An entity type is specified as a collection of entities, having the same attributes. Entity
type typically corresponds to one or several related tables in the database. A
characteristic or trait which defines or uniquely identifies the entity is called entity type.
For example, a student has student_id, department, and course as its characteristics.
The entity set specifies the collection of all entities of a particular entity type in the
database. An entity set is known as the set of all the entities which share the same
properties.
An entity set that doesn't have sufficient attributes to form a primary key is referred to as
a weak entity set. The members of a weak entity set are known as subordinate entities.
A weak entity set does not have a primary key, but we need a means to differentiate
among all those entries in the entity set that depend on one particular strong entity set.
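A sketch of how a weak entity set is commonly mapped to tables (the names are hypothetical): the weak entity borrows the key of its owning strong entity and adds its own partial key:

```sql
-- Strong entity set.
CREATE TABLE employee (
    emp_id   INT PRIMARY KEY,
    emp_name VARCHAR(100)
);

-- Weak entity set: a dependent is identified only together with its employee.
CREATE TABLE dependent (
    emp_id         INT,
    dependent_name VARCHAR(100),        -- partial key (discriminator)
    birth_date     DATE,
    PRIMARY KEY (emp_id, dependent_name),
    FOREIGN KEY (emp_id) REFERENCES employee (emp_id)
);
```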
For example: If a student is an entity in the table then age will be the attribute of that
student.
Data integrity is one significant aspect of maintaining a database. Data integrity is
enforced in the database system by imposing a series of rules; this set of rules is
known as the integrity rules.
Entity Integrity: It specifies that "a primary key cannot have a NULL value."
Referential Integrity: It specifies that "a foreign key can be either a NULL value or
should be the primary key value of another relation."
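Both rules can be stated directly as constraints; a minimal sketch with hypothetical customer and orders tables:

```sql
CREATE TABLE customer (
    customer_id INT PRIMARY KEY       -- entity integrity: a primary key can never be NULL
);

CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT,                  -- referential integrity: this value may be NULL,
    FOREIGN KEY (customer_id)         -- or it must match an existing customer_id
        REFERENCES customer (customer_id)
);
```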
36) What do you mean by extension and intension?
Extension: The extension is the set of tuples present in a table at any instance. It
changes as the tuples are created, updated and destroyed. The actual data in the
database change quite frequently, so the data in the database at a particular moment
in time is known as the extension, database state or snapshot. It is time dependent.
Intension: Intension is also known as Data Schema and defined as the description of
the database, which is specified during database design and is expected to remain
unchanged. The Intension is a constant value that gives the name, structure of tables
and the constraints laid on it.
System R was designed and developed from 1974 to 1979 at IBM San Jose Research
Centre. System R is the first implementation of SQL, which is the standard relational
data query language, and it was also the first to demonstrate that RDBMS could provide
better transaction processing performance. It is a prototype which is formed to show
that it is possible to build a Relational System that can be used in a real-life environment
to solve real-life problems.
It is structured into two subsystems:
● Research Storage System (RSS)
● Relational Data System (RDS)
Data independence means that a modification of the schema definition at one level
does not affect the schema definition at the next higher level.
Physical Data Independence: Physical data describes how the data is actually stored on
disk. A modification at the physical level should not affect the logical level.
For example: if we change how the data inside a table is stored, that change should not
alter the format of the table.
Logical Data Independence: Logical data is the data about the database; it basically
defines the structure, such as the tables stored in the database. A modification at the
logical level should not affect the view level.
For example: if we need to modify the format of any table, that modification should not
affect the data inside it.
The Join operation is one of the most useful activities in relational algebra. It is the most
commonly used way to combine information from two or more relations. A join is
always performed on the basis of the same or related columns. Most complex SQL
queries involve the JOIN command.
○ Theta join
○ Natural join
○ Equi join
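The three variants differ only in the join condition; a sketch assuming hypothetical employees and departments tables:

```sql
-- Theta join: any comparison operator in the join condition.
SELECT * FROM employees e JOIN departments d ON e.salary > d.min_salary;

-- Equi join: a theta join whose condition uses only equality.
SELECT * FROM employees e JOIN departments d ON e.dept_id = d.dept_id;

-- Natural join: implicit equi join on all columns with the same name,
-- with the duplicate column removed (not supported by every product).
SELECT * FROM employees NATURAL JOIN departments;
```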
1NF is the First Normal Form. It is the simplest type of normalization that you can
implement in a database. The primary objectives of 1NF are to:
● Eliminate repeating groups and duplicate columns from the same table
● Create separate tables for each group of related data and identify each row with
a unique column
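A small sketch of bringing a design into 1NF (the student schema is hypothetical): a multi-valued column is replaced by one atomic value per row in a separate table:

```sql
-- Not in 1NF: several phone numbers crammed into one field, e.g. '111, 222'.
-- student(student_id, name, phone_numbers)

-- In 1NF: every field holds a single atomic value.
CREATE TABLE student (
    student_id INT PRIMARY KEY,
    name       VARCHAR(100)
);

CREATE TABLE student_phone (
    student_id INT REFERENCES student (student_id),
    phone      VARCHAR(20),
    PRIMARY KEY (student_id, phone)
);
```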
42) What is 2NF?
2NF is the Second Normal Form. A table is said to be in 2NF if it satisfies the following
conditions:
● The table is in 1NF, i.e., it is first necessary that the table follows the rules
of 1NF.
● Every non-prime attribute is fully functionally dependent on the primary key, i.e.,
every non-key attribute must depend on the whole primary key rather than on
just a part of it (there is no partial dependency).
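A sketch of a 2NF violation and its fix, assuming a hypothetical order_item schema:

```sql
-- order_item(order_id, product_id, product_name, quantity) with key (order_id, product_id)
-- violates 2NF because product_name depends only on product_id (a partial dependency).

-- 2NF decomposition: move the partially dependent attribute into its own table.
CREATE TABLE product (
    product_id   INT PRIMARY KEY,
    product_name VARCHAR(100)            -- now depends on the whole key
);

CREATE TABLE order_item (
    order_id   INT,
    product_id INT REFERENCES product (product_id),
    quantity   INT,
    PRIMARY KEY (order_id, product_id)   -- every non-key column depends on the full key
);
```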
3NF stands for Third Normal Form. A table is said to be in 3NF if it satisfies the
following conditions:
● It is in 2NF.
● There is no transitive functional dependency, i.e., no non-prime attribute depends
on another non-prime attribute (if X -> Y and Y -> Z hold, Z would depend on the
key X only transitively).
BCNF (Boyce-Codd Normal Form) is a stricter version of 3NF. A table is said to be in
BCNF if:
● It is in 3NF.
● For every functional dependency X -> Y, X should be a super key of the table. It
merely means that X cannot be a non-prime attribute if Y is a prime attribute.
ACID properties are some basic rules which have to be satisfied by every transaction to
preserve integrity. These properties and rules are:
ATOMICITY: Atomicity is more generally known as the 'all or nothing' rule. It implies that
all the operations of a transaction are treated as one unit, and they either run to
completion or are not executed at all.
CONSISTENCY: This property refers to the uniformity of the data. Consistency implies
that the database is consistent before and after the transaction.
ISOLATION: This property states that a number of transactions can be executed
concurrently without leading to inconsistency of the database state.
DURABILITY: This property ensures that once the transaction is committed, it will be
stored in non-volatile memory, and a system crash cannot affect that data anymore.
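A transaction sketch that relies on these properties (the accounts table is hypothetical; the exact transaction syntax varies slightly between products):

```sql
BEGIN;  -- start the transaction (START TRANSACTION in some systems)

UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;

COMMIT; -- atomicity/durability: both updates persist together,
        -- or (after ROLLBACK or a failure before commit) neither does
```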
A stored procedure is a group of SQL statements that has been created and stored in
the database. Stored procedures increase reusability: the code is stored in the system
and used again and again, which makes the work easier, takes less processing time and
decreases the complexity of the system. So, if you have code which you need to use
again and again, save that code as a stored procedure and call it whenever it is required.
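A minimal stored procedure sketch in MySQL-style syntax (the procedure name and employees table are illustrative; SQL Server and PostgreSQL use somewhat different syntax):

```sql
DELIMITER //

-- Reusable routine: raise one employee's salary by a given amount.
CREATE PROCEDURE raise_salary(IN p_emp_id INT, IN p_amount DECIMAL(10,2))
BEGIN
    UPDATE employees
    SET    salary = salary + p_amount
    WHERE  emp_id = p_emp_id;
END //

DELIMITER ;

-- Call it whenever needed instead of repeating the UPDATE statement.
CALL raise_salary(1, 5000);
```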
47) What is the difference between a DELETE command and
TRUNCATE command?
DELETE command: The DELETE command is used to delete rows from a table based on
the condition that we provide in a WHERE clause.
● The DELETE command deletes only those rows which are specified with the WHERE
clause.
TRUNCATE command: The TRUNCATE command is used to remove all the data from a table.
● The TRUNCATE command removes all the rows from the table, and no WHERE clause
can be used.
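For example, on a hypothetical employees table:

```sql
-- DELETE: removes only the rows matching the WHERE condition; each deletion is
-- logged row by row and can be rolled back inside a transaction.
DELETE FROM employees WHERE dept_id = 10;

-- TRUNCATE: removes every row from the table at once; no WHERE clause is allowed.
TRUNCATE TABLE employees;
```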
The 2-Tier architecture is the same as basic client-server. In the two-tier architecture,
applications on the client end can directly communicate with the database at the server
side.
49) What is the 3-Tier architecture?
The 3-Tier architecture contains another layer between the client and the server. The
3-tier architecture was introduced for the ease of users, as it provides a GUI that makes
the system more secure and much more accessible. In this architecture, the application
on the client end interacts with an application on the server, which in turn communicates
with the database system.
You have to use Structured Query Language (SQL) to communicate with the RDBMS.
Using SQL queries, we give input to the database, and after processing the queries the
database provides us with the required output.
51) What is the difference between a shared lock and exclusive
lock?
Shared lock: A shared lock is required for reading a data item. With a shared lock, many
transactions may hold a lock on the same data item; when more than one transaction is
allowed to read the data item, that is known as a shared lock.
Exclusive lock: When a transaction is about to perform a write operation, it takes an
exclusive lock on the data item, because allowing more than one transaction to write the
same item at the same time would lead to inconsistency in the database.
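Many systems let a transaction request these locks explicitly; a sketch assuming PostgreSQL or MySQL 8 row-locking syntax and a hypothetical accounts table:

```sql
-- Shared lock: several transactions may read the same row at the same time.
SELECT balance FROM accounts WHERE account_id = 1 FOR SHARE;

-- Exclusive lock: taken before writing, so no other transaction can lock the row
-- (shared or exclusive) until this transaction ends.
SELECT balance FROM accounts WHERE account_id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
```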
Primary key: The Primary key is an attribute in a table that can uniquely identify each
record in the table. Every table should have one.
Candidate key: A Candidate key is an attribute or set of attributes which can
uniquely identify a tuple. The Primary key is selected from among the candidate keys.
Super key: A Super key is a set of attributes which can uniquely identify a tuple. A
super key is a superset of a candidate key.
Foreign key: A Foreign key is an attribute (or set of attributes) in one table that refers
to the primary key of another table. It acts as a cross-reference between tables.
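A sketch tying these keys together on a hypothetical schema:

```sql
CREATE TABLE departments (
    dept_id   INT PRIMARY KEY,        -- primary key: uniquely identifies each department
    dept_name VARCHAR(100) UNIQUE     -- another candidate key that was not chosen as primary
);

CREATE TABLE employees (
    emp_id  INT PRIMARY KEY,
    dept_id INT,
    FOREIGN KEY (dept_id) REFERENCES departments (dept_id)  -- cross-reference to departments
);
```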