
Unit 4-DBMS


UNIT 4

RECOVERY AND SECURITY

DATABASE RECOVERY TECHNIQUES - DATABASE SECURITY - DEBATE ON THE
DISTRIBUTED DATABASES AND CLIENT-SERVER ARCHITECTURE WITH REFERENCE
TO INDIAN RAILWAY RESERVATION SYSTEM.

Database Recovery Techniques in DBMS


Database systems, like any other computer systems, are subject to failures, but the data stored
in them must be available as and when required. When a database fails, it must possess the
facilities for fast recovery. It must also guarantee atomicity: either a transaction completes
successfully and is committed (its effect is recorded permanently in the database), or the
transaction has no effect on the database at all. There are both automatic and non-automatic
mechanisms for backing up data and for recovering from failure situations. The techniques
used to recover data lost to system crashes, transaction errors, viruses, catastrophic failures,
incorrect command execution, and so on are called database recovery techniques. To prevent
data loss, recovery techniques based on deferred update, immediate update, or data backups
can be used. Recovery techniques depend heavily on the existence of a special file known as
the system log. It contains information about the start and end of each transaction and any
updates which occur during the transaction. The log keeps track of all transaction operations
that affect the values of database items; this information is needed to recover from
transaction failure. The log itself is kept on disk. The main types of log entries are:
● start_transaction(T): This log entry records that transaction T starts execution.
● read_item(T, X): This log entry records that transaction T reads the value of
database item X.
● write_item(T, X, old_value, new_value): This log entry records that transaction T
changes the value of database item X from old_value to new_value. The old
value is sometimes known as the before-image of X, and the new value as the
after-image of X.
● commit(T): This log entry records that transaction T has completed all accesses to
the database successfully and its effect can be committed (recorded permanently)
to the database.
● abort(T): This records that transaction T has been aborted.
● checkpoint: A checkpoint is a mechanism by which all previous log records are
flushed from the system and stored permanently on disk. A checkpoint declares a
point before which the DBMS was in a consistent state and all transactions
had been committed.
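To make these log entries concrete, here is a minimal write-ahead logging sketch in Python. The JSON record layout and the names LOG_PATH and write_log are illustrative assumptions, not the API of any particular DBMS; the essential point is that each record is forced to disk before the corresponding database change is applied.

```python
# A minimal write-ahead logging sketch; the JSON record layout and the
# names LOG_PATH / write_log are assumptions made for illustration.
import json
import os

LOG_PATH = "system.log"

def write_log(entry: dict) -> None:
    """Append one log record and force it to disk before the
    corresponding database change is applied (write-ahead rule)."""
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(entry) + "\n")
        log.flush()
        os.fsync(log.fileno())   # the record must survive a crash

# Records for a transaction T1 that changes item X from 100 to 150:
write_log({"type": "start_transaction", "txn": "T1"})
write_log({"type": "write_item", "txn": "T1", "item": "X",
           "old_value": 100, "new_value": 150})
write_log({"type": "commit", "txn": "T1"})
```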
A transaction T reaches its commit point when all its operations that access the database have
been executed successfully, i.e. the transaction has reached the point at which it will
not abort (terminate without completing). Once committed, the transaction is permanently
recorded in the database. Commitment always involves writing a commit entry to the log and
writing the log to disk. After a system crash, the log is searched backwards for all
transactions T that have written a start_transaction(T) entry but have not yet written a
commit(T) entry; these transactions may have to be rolled back to undo their effect on the
database during the recovery process.
● Undoing – If a transaction crashes, the recovery manager may undo the
transaction, i.e. reverse its operations. This involves examining the log for each
write_item(T, X, old_value, new_value) entry of the transaction and setting the
value of item X in the database back to old_value. There are two major techniques
for recovery from non-catastrophic transaction failures, deferred updates and
immediate updates; a short sketch after this list illustrates the resulting undo
and redo phases.
● Deferred update – This technique does not physically update the database on
disk until a transaction has reached its commit point. Before reaching commit, all
transaction updates are recorded in the local transaction workspace. If a
transaction fails before reaching its commit point, it will not have changed the
database in any way so UNDO is not needed. It may be necessary to REDO the
effect of the operations that are recorded in the local transaction workspace,
because their effect may not yet have been written in the database. Hence, a
deferred update is also known as the No-undo/redo algorithm.
● Immediate update – In the immediate update, the database may be updated by
some operations of a transaction before the transaction reaches its commit point.
However, these operations are recorded in a log on disk before they are applied to
the database, making recovery still possible. If a transaction fails to reach its
commit point, the effect of its operation must be undone i.e. the transaction must
be rolled back; hence we require both undo and redo. This technique is known
as the undo/redo algorithm.
● Caching/Buffering – In this technique, one or more disk pages that include the
data items to be updated are cached into main-memory buffers and updated in
memory before being written back to disk. A collection of in-memory buffers
called the DBMS cache is kept under the control of the DBMS for holding these
pages. A directory is used to keep track of which database items are in the
buffers. A dirty bit is associated with each buffer; it is 0 if the buffer has not
been modified and 1 if it has.
● Shadow paging – This technique provides atomicity and durability. A directory
with n entries is constructed, where the ith entry points to the ith database page
on disk. When a transaction begins executing, the current directory is copied into
a shadow directory. When a page is to be modified, a new page is allocated and
the changes are made there; when the transaction is ready to become durable, all
references to the original page are updated to refer to the new replacement page.
● Backward Recovery – The terms "rollback" and "UNDO" can also refer to
backward recovery. This technique is helpful when a backup of the data is not
available and previous modifications need to be undone. With the backward
recovery method, unwanted modifications are removed and the database is
returned to its prior condition; all changes made during the erroneous transaction
are reversed. In other words, it preserves the valid transactions and undoes the
erroneous database updates.
● Forward Recovery – "Roll forward" and "REDO" refer to forward recovery.
This technique is helpful when a database needs to be brought forward again
with all verified changes.
The committed transactions recorded in the log are reapplied to the restored
database to roll those modifications forward. In other words, the database is
restored from preserved backup data and the valid transactions recorded since
that backup are reprocessed.
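The undoing, deferred-update, and immediate-update ideas above can be condensed into one simplified sketch. Assuming in-memory log records shaped like the entries listed earlier, recovery redoes the writes of committed transactions in forward order and undoes the writes of uncommitted ones in reverse order. This is the undo/redo (immediate update) algorithm in miniature only; a real recovery manager also handles checkpoints, buffering, and repeated crashes.

```python
# A simplified sketch of undo/redo recovery after a crash.
# `log` is a list of dicts shaped like the log entries described above;
# `db` is an in-memory stand-in for the database on disk.
def recover(log, db):
    committed = {e["txn"] for e in log if e["type"] == "commit"}

    # REDO phase: reapply, in forward order, every write made by a
    # transaction that reached its commit point.
    for e in log:
        if e["type"] == "write_item" and e["txn"] in committed:
            db[e["item"]] = e["new_value"]

    # UNDO phase: roll back, in reverse order, every write made by a
    # transaction that never committed.
    for e in reversed(log):
        if e["type"] == "write_item" and e["txn"] not in committed:
            db[e["item"]] = e["old_value"]
    return db

# Example: T1 committed, T2 did not; X is redone and Y is undone.
log = [
    {"type": "start_transaction", "txn": "T1"},
    {"type": "write_item", "txn": "T1", "item": "X",
     "old_value": 100, "new_value": 150},
    {"type": "commit", "txn": "T1"},
    {"type": "start_transaction", "txn": "T2"},
    {"type": "write_item", "txn": "T2", "item": "Y",
     "old_value": 5, "new_value": 9},
]
print(recover(log, {"X": 100, "Y": 9}))   # {'X': 150, 'Y': 5}
```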
Some of the backup techniques are as follows:

● Full database backup – The full database, including the data and the database
meta-information needed to restore the whole database (such as full-text
catalogs), is backed up on a predefined schedule.
● Differential backup – It stores only the data changes that have occurred since the
last full database backup. When some data has changed many times since the last
full database backup, a differential backup stores only the most recent version of
the changed data. To restore from it, we first need to restore the full database backup.
● Transaction log backup – In this, all events that have occurred in the database,
i.e. a record of every single statement executed, are backed up. It is a backup of
the transaction log entries and contains all transactions that have happened to the
database. Through this, the database can be recovered to a specific point in time.
It is even possible to recover from a transaction log backup when the data files
are destroyed, without losing a single committed transaction.
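These three backup types compose in a fixed order at restore time: the full backup is restored first, then the most recent differential is applied, and finally the transaction log is rolled forward to the desired point in time. The toy model below illustrates only that ordering; the data shapes and the name restore are assumptions made for the example.

```python
# A toy model of restore ordering: a full backup is a complete snapshot,
# a differential holds the items changed since the full backup, and a
# log backup is an ordered list of (timestamp, item, value) changes.
def restore(full, differential, log_entries, target_time):
    db = dict(full)              # 1. restore the full database backup
    db.update(differential)      # 2. apply the most recent differential
    for ts, item, value in log_entries:
        if ts <= target_time:    # 3. roll the log forward to the target
            db[item] = value
    return db

full = {"X": 1, "Y": 1}                   # taken at time 0
differential = {"Y": 4}                   # changes since the full, taken at time 5
log_entries = [(6, "X", 7), (8, "Y", 9)]  # statements executed after time 5
print(restore(full, differential, log_entries, target_time=7))  # {'X': 7, 'Y': 4}
```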
Database Security

Security of databases refers to the array of controls, tools, and procedures designed to ensure
and safeguard confidentiality, integrity, and availability. This section concentrates on
confidentiality because it is the component most at risk in data security breaches.

Security for databases must cover and safeguard the following aspects:

o The database containing data.

o Database management systems (DBMS)

o Any applications that are associated with it.

o Physical database servers or the database server virtual, and the hardware that runs it.

o The infrastructure for computing or network that is used to connect to the database.

Securing databases is a complicated and challenging task that involves all aspects of
information security practices and technologies. It is also inherently at odds with database
accessibility: the more usable and accessible the database is, the more susceptible it is to
security threats; the better protected it is against threats, the more difficult it is to access
and use.

Why is Database Security Important?

By definition, a data breach is a failure to maintain the confidentiality of data in a database.
The amount of damage a data breach can cause our business depends on various
consequences and factors:
o Compromised intellectual property: Our intellectual property -- trade secrets,
inventions, or proprietary methods -- may be vital to our ability to maintain an
advantage in our industry. If that intellectual property is stolen or disclosed, our
competitive advantage may be difficult or impossible to maintain or recover.
o Damage to our brand's reputation: Customers or partners may be unwilling to
purchase goods or services from us (or deal with our business) if they do not feel
they can trust our company to protect their data.
o The concept of business continuity (or lack of it): Some businesses cannot continue
to function until a breach has been resolved.
o Penalties or fines for non-compliance: The cost of failing to comply with global
regulations like the Sarbanes-Oxley Act (SOX) or the Payment Card Industry
Data Security Standard (PCI DSS), industry-specific data privacy regulations like
HIPAA, or regional privacy laws like the European Union's General Data
Protection Regulation (GDPR) can be severe, with fines in the worst cases
exceeding several million dollars per violation.
o Costs of repairing breaches and notifying consumers: Alongside notifying
customers of a breach, the breached company must pay for investigative and
forensic services, crisis management, triage, repairs to the affected systems, and
much more.

Common Threats and Challenges

Many software misconfigurations, vulnerabilities, or patterns of carelessness or abuse can
lead to a security breach. The following are some of the most prevalent types and causes of
attacks on database security.

Insider Dangers

An insider threat is a security threat from any of three kinds of actors who hold privileged
access to the database:

o A malicious insider who wants to cause harm

o An insider who is negligent and makes mistakes that leave the database
vulnerable to attack.
o An infiltrator, an outsider who acquires credentials through a scheme such as
phishing or by gaining access to the credential database itself.

Insider threats are among the most frequent causes of database security breaches. They
often occur because too many employees have been granted privileged user access
credentials.

Human Error

Accidental mistakes, weak passwords, password sharing, and other negligent or uninformed
user behaviours remain the root cause of almost half (49 percent) of all data security
breaches.

Exploitation of Database Software Vulnerabilities

Hackers make their living by finding and exploiting vulnerabilities in software, including
database management software. The major database software vendors and open-source
database management platforms release regular security patches to fix these weaknesses;
however, failing to apply the patches promptly increases the risk of being hacked.

SQL/NoSQL Injection Attacks

A threat specific to databases is the insertion of arbitrary SQL (or non-SQL) attack strings
into database queries served by web applications or HTTP headers. Organizations that do
not follow secure coding practices for web applications and do not conduct regular
vulnerability testing are open to these attacks; the sketch below contrasts an injectable
query with a parameterized one.
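The core safe-coding defence against injection is the parameterized query. The sketch below uses Python's built-in sqlite3 module as a stand-in for any database driver; the table, data, and payload are invented for the example.

```python
# Contrast between an injectable query and a parameterized one.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"   # a classic injection payload

# UNSAFE: the payload becomes part of the SQL text, so the WHERE
# clause is always true and every row is returned.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())               # leaks all rows

# SAFE: the driver binds user_input as a literal value, never as SQL,
# so the query matches no user with that name.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```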

Buffer Overflow Exploits

A buffer overflow happens when a program attempts to copy more data into a fixed-length
block of memory than the block can hold. Attackers may use the excess data, which is stored
in adjacent memory addresses, as a foundation from which to launch attacks.

Denial of Service (DoS/DDoS) Attacks

In a denial-of-service (DoS) attack, the attacker floods the target server -- in this case, the
database server -- with so many requests that the server can no longer service legitimate
requests from actual users. In many cases, the server becomes unstable or crashes
altogether.

Malware

Malware is software written specifically to exploit vulnerabilities or otherwise cause damage
to the database. Malware can arrive via any endpoint device that connects to the database's
network.

Attacks on Backups

Companies that do not protect backup data with the same rigorous controls used to protect
the databases themselves are vulnerable to attacks on their backups.

The following factors amplify the threats:

o Data volumes are growing: Data capture, storage, and processing continue to
grow exponentially in almost all organizations. Any data security tools or
practices must be highly scalable to meet both current and distant future needs.
o The infrastructure is sprawling: Network environments are becoming more
complicated, particularly as companies move workloads into multi-cloud and
hybrid-cloud architectures, making the selection, deployment, and management
of security solutions more difficult.
o More stringent regulatory compliance requirements: The worldwide
regulatory compliance landscape continues to grow in complexity, making
adherence to every mandate more challenging.

Database Security Best Practices

As databases are almost always accessible over the network, any security threat to any
component of the network infrastructure is also a threat to the database, and any attack that
impacts a user's device or workstation can endanger the database. Therefore, database
security must extend far beyond the confines of the database alone.

When evaluating database security in our environment to decide on our organization's top
priorities, consider each of the following areas:
o Physical security: Whether the database server is on-premises or in a cloud data
centre, it should be located within a secure, climate-controlled environment. (If
our database server is in a cloud data centre, the cloud provider handles this
security on our behalf.)
o Administrative and network access controls: The practical minimum number
of users should be granted access to the database, and their permissions should be
restricted to the minimum levels required to fulfil their tasks. Likewise, network
access should be limited to the minimum level of permissions needed.
o End-user account and device security: Always be aware of who is accessing the
database, and when and how the data is being used. Data monitoring tools can
notify us of data activities that are unusual or appear risky. Any device that
connects to the network hosting the database should be physically secure (in the
sole control of the right user) and subject to security controls at all times.
o Encryption: All data -- including data in the database and credential data --
should be protected with best-in-class encryption while at rest and in transit. All
encryption keys should be handled in accordance with best-practice guidelines.
(A minimal encryption sketch appears after this list.)
o Database software security: Always use the latest version of our database
management software, and apply all patches as soon as they are released.
o Application and web server security: Any application or web server that
interacts with the database can be a channel for attack and should be subject to
ongoing security testing and best-practice management.
o Backup security: All backups, copies, or images of the database must be subject
to the same (or equally rigorous) security controls as the database itself.
o Auditing: Audits of database security standards should be conducted regularly,
for example every few months. Record all logins to the database server and
operating system, and log all operations performed on sensitive data as well.
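As a minimal illustration of the encryption practice above, the sketch below uses Fernet (authenticated symmetric encryption) from the widely used third-party cryptography package (pip install cryptography). In a real deployment the key would be held in a key-management service or HSM, never stored beside the data.

```python
# A minimal sketch of encrypting sensitive data before it is stored.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this in a KMS/HSM, not on disk
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"card=4111-1111-1111-1111")
print(ciphertext)                  # safe to store in the database
print(fernet.decrypt(ciphertext))  # b'card=4111-1111-1111-1111'
```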
Data protection tools and platforms

Today, a variety of companies provide data protection platforms and tools. A comprehensive
solution should have all of the following features:

o Discovery: Discovery capabilities are often required to meet regulatory
compliance requirements. Look for a tool that can detect and categorize
vulnerabilities across all our databases, whether they are hosted in the cloud or
on-premises, and that offers recommendations to remediate any vulnerabilities
identified.
o Data Activity Monitoring: The solution should be able to monitor and analyse
all data activity across all databases, whether our deployment is on-premises, in
the cloud, or in a container. It should alert us to suspicious activity in real time so
that we can respond to threats more quickly, and it should provide visibility into
the state of our data through an integrated and comprehensive user interface. It is
also important to choose a solution that can enforce rules governing policies,
procedures, and the separation of duties, and to make sure the solution we select
can generate the reports we need to comply with regulations.
o Encryption and Tokenization Capabilities: In the event of an incident,
encryption is an additional line of defence against compromise. Any tool we
choose must be able to protect data flexibly in cloud, on-premises, hybrid, or
multi-cloud environments. Look for a tool with volume, file, and application
encryption features that meet our company's compliance requirements, which
may call for tokenization (data masking) or advanced management of security keys.
o Data Security Optimization and Risk Analysis: A tool that provides contextual
insights by combining security data with advanced analytics lets users perform
optimization, risk assessment, and reporting with ease. Select a tool that can
retain and combine large amounts of recent and historical data about the security
and state of our databases, and choose a solution that offers data exploration,
auditing, and reporting capabilities via an extensive but user-friendly self-service
dashboard.

Distributed databases
A distributed database is a database that is not limited to one system; it is spread over
different sites, i.e., on multiple computers or over a network of computers. A distributed
database system is located on various sites that don't share physical components. This may
be required when a particular database needs to be accessed by various users globally. It
must be managed in such a way that to the users it looks like one single database.
Types of Distributed Databases

Distributed databases can be broadly classified into homogeneous and heterogeneous
distributed database environments, each with further sub-divisions.

Homogeneous Distributed Databases

In a homogeneous distributed database, all the sites use identical DBMS and operating
systems. Its properties are −

● The sites use very similar software.


● The sites use identical DBMS or DBMS from the same vendor.
● Each site is aware of all other sites and cooperates with other sites to process
user requests.
● The database is accessed through a single interface as if it is a single database.
Types of Homogeneous Distributed Database

There are two types of homogeneous distributed database −

● Autonomous − Each database is independent and functions on its own. They


are integrated by a controlling application and use message passing to share
data updates.
● Non-autonomous − Data is distributed across the homogeneous nodes and a
central or master DBMS co-ordinates data updates across the sites.
Heterogeneous Distributed Databases

In a heterogeneous distributed database, different sites have different operating systems,


DBMS products and data models. Its properties are −

● Different sites use dissimilar schemas and software.


● The system may be composed of a variety of DBMSs like relational, network,
hierarchical or object-oriented.
● Query processing is complex due to dissimilar schemas.
● Transaction processing is complex due to dissimilar software.
● A site may not be aware of other sites and so there is limited co-operation in
processing user requests.
Types of Heterogeneous Distributed Databases
● Federated − The heterogeneous database systems are independent in nature
and integrated together so that they function as a single database system.
● Un-federated − The database systems employ a central coordinating module
through which the databases are accessed.
Distributed DBMS Architectures

DDBMS architectures are generally developed depending on three parameters −

● Distribution − It states the physical distribution of data across the different


sites.
● Autonomy − It indicates the distribution of control of the database system and
the degree to which each constituent DBMS can operate independently.
● Heterogeneity − It refers to the uniformity or dissimilarity of the data models,
system components and databases.
Architectural Models

Some of the common architectural models are −

● Client - Server Architecture for DDBMS


● Peer - to - Peer Architecture for DDBMS
● Multi - DBMS Architecture
Client - Server Architecture for DDBMS

This is a two-level architecture in which the functionality is divided between servers and
clients. The server functions primarily encompass data management, query processing,
optimization and transaction management, while client functions mainly include the user
interface. Clients do, however, carry some functions such as consistency checking and
transaction management. The sketch after the list below shows this division of labour in
miniature.

The two different client-server architectures are −

● Single Server Multiple Client


● Multiple Server Multiple Client
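To make the division of labour concrete, here is a toy single-server, multiple-client sketch: the server owns the data and does the query processing, while the client only formats requests and displays results. The one-line "key in, value out" protocol, the sample train data, and all names are assumptions made for the example.

```python
# A toy client-server sketch: the server holds the data and answers
# lookup queries; the client handles only the user-facing side.
import socket
import threading

DATA = {"train_101": "available", "train_102": "waitlisted"}  # server-side

def serve(sock):
    while True:                              # server: data management and
        conn, _ = sock.accept()              # query processing live here
        with conn:
            key = conn.recv(1024).decode().strip()
            conn.sendall(DATA.get(key, "unknown").encode())

server = socket.socket()
server.bind(("localhost", 0))                # pick any free port
server.listen()
threading.Thread(target=serve, args=(server,), daemon=True).start()

def query(key):                              # client: user interface only
    with socket.create_connection(server.getsockname()) as c:
        c.sendall(key.encode())
        return c.recv(1024).decode()

print(query("train_101"))   # available
print(query("train_103"))   # unknown
```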

Peer- to-Peer Architecture for DDBMS

In these systems, each peer acts both as a client and a server for imparting database services.
The peers share their resources with other peers and co-ordinate their activities.

This architecture generally has four levels of schemas −

● Global Conceptual Schema − Depicts the global logical view of data.


● Local Conceptual Schema − Depicts logical data organization at each site.
● Local Internal Schema − Depicts physical data organization at each site.
● External Schema − Depicts user view of data.
Multi - DBMS Architectures

This is an integrated database system formed by a collection of two or more autonomous


database systems.

Multi-DBMS can be expressed through six levels of schemas −

● Multi-database View Level − Depicts multiple user views, each comprising a
subset of the integrated distributed database.
● Multi-database Conceptual Level − Depicts the integrated multi-database,
comprising global logical multi-database structure definitions.
● Multi-database Internal Level − Depicts the data distribution across different
sites and multi-database to local data mapping.
● Local database View Level − Depicts public view of local data.
● Local database Conceptual Level − Depicts local data organization at each
site.
● Local database Internal Level − Depicts physical data organization at each
site.

There are two design alternatives for multi-DBMS −

● Model with multi-database conceptual level.


● Model without multi-database conceptual level.
Design Alternatives

The distribution design alternatives for the tables in a DDBMS are as follows −

● Non-replicated and non-fragmented


● Fully replicated
● Partially replicated
● Fragmented
● Mixed
Non-replicated & Non-fragmented

In this design alternative, different tables are placed at different sites. Data is placed in close
proximity to the site where it is used most. This alternative is most suitable for database
systems where the percentage of queries that need to join information in tables placed at
different sites is low. If an appropriate distribution strategy is adopted, this design
alternative helps to reduce communication costs during data processing.

Fully Replicated

In this design alternative, one copy of all the database tables is stored at each site. Since
each site has its own copy of the entire database, queries are very fast and require negligible
communication cost. On the other hand, the massive redundancy in data incurs a huge cost
during update operations. Hence, this alternative is suitable for systems where a large
number of queries must be handled while the number of database updates is low.

Partially Replicated

Copies of tables, or portions of tables, are stored at different sites. The distribution of the
tables is done in accordance with the frequency of access, taking into account the fact that
the frequency of access varies considerably from site to site. The number of copies of a
table (or portion) depends on how frequently the access queries execute and which sites
generate those queries.

Fragmented

In this design, a table is divided into two or more pieces referred to as fragments or partitions,
and each fragment can be stored at a different site. This reflects the fact that it seldom
happens that all the data stored in a table is required at a given site. Moreover, fragmentation
increases parallelism and provides better disaster recovery. There is only one copy of each
fragment in the system, i.e., no redundant data.
The three fragmentation techniques, illustrated in the toy sketch after this list, are −

● Vertical fragmentation
● Horizontal fragmentation
● Hybrid fragmentation
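As a toy illustration of the first two techniques, the sketch below fragments a small bookings table (held as a list of dictionaries) horizontally by zone and vertically by column. The table contents and the zone predicate are assumptions made for the example; hybrid fragmentation would simply apply one technique to the fragments produced by the other.

```python
# Horizontal and vertical fragmentation of a toy bookings table.
bookings = [
    {"pnr": 1, "name": "Asha",  "zone": "North", "fare": 550},
    {"pnr": 2, "name": "Ravi",  "zone": "South", "fare": 720},
    {"pnr": 3, "name": "Meena", "zone": "North", "fare": 310},
]

# Horizontal fragmentation: split the rows by a predicate (here, by
# zone) so each site stores only the bookings it serves.
north = [r for r in bookings if r["zone"] == "North"]
south = [r for r in bookings if r["zone"] == "South"]

# Vertical fragmentation: split the columns, repeating the key (pnr)
# in every fragment so the original rows can be rejoined.
ids_names = [{"pnr": r["pnr"], "name": r["name"]} for r in bookings]
ids_fares = [{"pnr": r["pnr"], "fare": r["fare"]} for r in bookings]

print(north)      # rows stored at the northern site
print(ids_fares)  # fare fragment; joining on pnr rebuilds the table
```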
Mixed Distribution

This is a combination of fragmentation and partial replication. Here, the tables are initially
fragmented in any form (horizontal or vertical), and these fragments are then partially
replicated across the different sites according to the frequency with which they are accessed.

Advantages of Distributed database


Distributed databases bring the advantages of distributed computing to the database
management domain. We can define a distributed database as a collection of multiple
interrelated databases distributed over a computer network, and a distributed database
management system as a software system that manages a distributed database while making
the distribution transparent to the user. Distributed database management has been proposed
for various reasons, ranging from organizational decentralization and economical processing
to greater autonomy. Some of these advantages are as follows:

1. Management of data with different levels of transparency –
Ideally, a database should be distribution transparent in the sense of hiding the details of
where each file is physically stored within the system. The following types of transparency
are possible in a distributed database system:
● Network transparency:
This refers to the user's freedom from the operational details of the network. It is
of two types: location transparency and naming transparency.
● Replication transparency:
This makes the user unaware of the existence of copies; copies of data may be
stored at multiple sites for better availability, performance, and reliability.
● Fragmentation transparency:
This makes the user unaware of the existence of fragments, whether the
fragmentation is vertical or horizontal.
2. Increased Reliability and Availability –
Reliability is defined as the probability that a system is running at a certain time, whereas
availability is defined as the probability that the system is continuously available during a
time interval. When the data and DBMS software are distributed over several sites, one site
may fail while the other sites continue to operate; only the data at the failed site becomes
inaccessible. This leads to improved reliability and availability.
3. Easier Expansion –
In a distributed environment, expanding the system in terms of adding more data, increasing
database sizes, or adding more processors is much easier.
4. Improved Performance –
We can achieve inter-query and intra-query parallelism by executing multiple queries at
different sites and by breaking up a query into a number of subqueries that execute in
parallel, which leads to improved performance.
Case Study/Debate - Railway Reservation System
