
Token Based Algorithms vs. Non-Token Based Algorithms

1. In a token-based algorithm, a unique token is shared among all the sites in the distributed system. In a non-token-based algorithm, there is no token and no concept of sharing a token for access.
2. In a token-based algorithm, a site is allowed to enter the critical section only if it possesses the token. In a non-token-based algorithm, two or more successive rounds of messages are exchanged between sites to determine which site is to enter the critical section next.
3. A token-based algorithm uses sequence numbers to order requests for the critical section and to resolve conflicts between simultaneous requests. A non-token-based algorithm uses timestamps instead to order the requests and resolve such conflicts.
4. Token-based algorithms produce less message traffic than non-token-based algorithms.
5. Token-based algorithms are free from deadlock (no set of processes waits indefinitely for messages that can never arrive) because of the existence of a unique token in the distributed system. Non-token-based algorithms are not inherently free from deadlock, as they rely on timestamps alone.
6. In token-based algorithms, requests are guaranteed to be executed exactly in the order in which they are made. In non-token-based algorithms, there is no such guarantee on execution order.
7. Token-based algorithms are more scalable, since the server does not need to store session state and the token carries all the information needed for authentication. Non-token-based algorithms are less scalable because the server is not freed from these tasks.
8. With token-based algorithms the access control is quite fine-grained, because roles, permissions and resources for the user can easily be specified inside the token. With non-token-based algorithms the access control is not as fine, since there is no token in which to specify roles, permissions and resources.
9. Token-based algorithms make authentication quite easy; non-token-based algorithms do not.
10. Examples of token-based algorithms: (i) Singhal's Heuristic Algorithm, (ii) Raymond's Tree-Based Algorithm, (iii) Suzuki-Kasami Algorithm. Examples of non-token-based algorithms: (i) Lamport's Algorithm, (ii) Ricart-Agrawala Algorithm, (iii) Maekawa's Algorithm.

Performance Metrics For Mutual Exclusion Algorithm





Mutual exclusion is the requirement that no two concurrent processes be in a critical section at the same time. It is introduced to prevent race conditions: if one process is currently accessing the critical section, no other concurrent process is allowed to enter it. In short, only one process is permitted to execute the critical section at any given instant of time.
Requirements of Mutual Exclusion
1. No deadlock – Sites should not wait indefinitely for a message that will never arrive.
2. No starvation – No site should be able to execute the critical section repeatedly while another site waits indefinitely for its turn.
3. Fault tolerance – The algorithm should detect failures on its own and continue functioning with little or no interruption.
There are three fundamental approaches for implementing distributed mutual exclusion:
1. Token-based approach – A unique token (also called the PRIVILEGE message) is shared among the sites. A site is allowed to enter its critical section only if it holds the token, and it keeps the token until it finishes executing the critical section. Mutual exclusion is guaranteed because the token is unique. Algorithms based on this approach differ mainly in how a site carries out its search for the token (a toy sketch of the token rule follows this list).
2. Non-token-based approach – Two or more successive rounds of messages are exchanged among the sites to decide which site will enter the critical section next. A site enters the critical section when an assertion defined over its local variables becomes true.
3. Quorum-based approach – Each site requests permission to execute the critical section from a subset of sites called a quorum. Quorums are constructed so that any two of them intersect; hence, when two sites concurrently request the critical section, at least one site receives both requests and can ensure that only one of them executes the critical section at a time.
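As a toy illustration of the token rule above (this snippet is not from the article and does not reproduce any specific published algorithm such as Suzuki-Kasami; the site names and functions are invented for the sketch), a site may execute the critical section only while it holds the single token, and it passes the token to the next waiting site when it leaves:

```python
# Toy illustration of the token-based rule: exactly one token (the
# "PRIVILEGE" message) exists, and only its holder may enter the CS.
from collections import deque

token_holder = "S1"            # the single token currently resides at S1
waiting = deque(["S2", "S3"])  # sites that have asked for the token

def enter_cs(site):
    # Mutual exclusion follows from the uniqueness of the token.
    return site == token_holder

def exit_cs():
    global token_holder
    if waiting:                # pass the PRIVILEGE to the next requester
        token_holder = waiting.popleft()

print(enter_cs("S2"))  # False: S2 does not hold the token
print(enter_cs("S1"))  # True
exit_cs()
print(enter_cs("S2"))  # True: the token has been passed to S2
```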

Performance Metrics for Mutual Exclusion

The performance of mutual exclusion algorithms in a distributed system is measured using the following metrics:
1. Response time
The time interval a request waits for its critical section execution to finish after its request messages have been sent out.

2. Synchronization Delay
The time required for the next process to enter the critical section after a
process leaves the critical section is known as Synchronization delay.

3. Message complexity
The number of messages required per critical section execution by a process.
4. Throughput
Throughput is the rate at which the system executes requests for the critical section (a short worked example follows this list):
Throughput = 1 / (Synchronization delay + Average critical section execution time)
5. Low and high load performance
The number of requests arriving for critical section execution defines the load. Under low load, there is seldom more than one pending request for the critical section at a time; under high load, there is always a pending request. In heavy load conditions, as soon as a site finishes executing one request it immediately initiates the activities to execute its next critical section request, so a site is rarely idle. For some mutual exclusion algorithms, the performance metrics can be computed easily under low and heavy loads through simple mathematical reasoning.
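As a quick worked example of the throughput formula above (the numbers are purely illustrative), suppose the synchronization delay is 2 ms and the average critical section execution time is 8 ms. Then Throughput = 1/(2 + 8) = 0.1 requests per millisecond, i.e. the system can serve about 100 critical section requests per second.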

How Do Load Conditions Affect the Performance Metrics of Mutual Exclusion?

Most mutual exclusion algorithms have best and worst cases for each performance metric. In the best case, prevailing conditions are such that a performance metric attains the best possible value. For example, in most mutual exclusion algorithms the best-case response time is one round-trip message delay plus the critical section execution time, i.e. 2T + E, where T is the message delay and E is the critical section execution time. Frequently, the best and worst cases of a mutual exclusion algorithm coincide with low and high loads, respectively. For example, the best and worst response times are achieved at low and high load, respectively; and in some mutual exclusion algorithms the least and the most message traffic are generated under low and heavy load conditions, respectively.
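For instance, with an assumed message delay T = 10 ms and critical section execution time E = 5 ms, the best-case response time works out to 2T + E = 2(10) + 5 = 25 ms.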

Lamport’s Algorithm for Mutual Exclusion in Distributed System



Prerequisite: Mutual exclusion in distributed systems


Lamport’s Distributed Mutual Exclusion Algorithm is a permission-based algorithm proposed by Lamport as an illustration of his synchronization scheme for distributed systems. In permission-based algorithms, a timestamp is used to order critical section requests and to resolve any conflict between requests. In Lamport’s algorithm, critical section requests are executed in increasing order of timestamps, i.e. a request with a smaller timestamp is given permission to execute the critical section before a request with a larger timestamp. In this algorithm:
 Three types of messages (REQUEST, REPLY and RELEASE) are used, and communication channels are assumed to follow FIFO order.
 A site sends a REQUEST message to all other sites to get their permission to enter the critical section.
 A site sends a REPLY message to the requesting site to give its permission to enter the critical section.
 A site sends a RELEASE message to all other sites upon exiting the critical section.
 Every site Si keeps a queue to store critical section requests ordered by their timestamps; request_queuei denotes the queue of site Si.
 A timestamp is assigned to each critical section request using Lamport’s logical clock.
 Timestamps are used to determine the priority of critical section requests: a smaller timestamp gets higher priority than a larger one, and critical section requests are always executed in timestamp order.
Algorithm:
 To enter the critical section:
o When a site Si wants to enter the critical section, it sends a timestamped REQUEST(tsi, i) message to all other sites and places the request on request_queuei. Here, tsi denotes the timestamp of site Si’s request.
o When a site Sj receives the REQUEST(tsi, i) message from site Si, it returns a timestamped REPLY message to site Si and places the request of site Si on request_queuej.
 To execute the critical section:
o A site Si can enter the critical section once it has received a message with timestamp larger than (tsi, i) from every other site and its own request is at the top of request_queuei.
 To release the critical section:
o When a site Si exits the critical section, it removes its own request from the top of its request queue and sends a timestamped RELEASE message to all other sites.
o When a site Sj receives the timestamped RELEASE message from site Si, it removes the request of Si from its request queue.
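The following is a minimal, single-process Python sketch of these steps. It is illustrative only and not from the original article: the class and method names (Site, request_cs, on_request, and so on) are invented for this example, messages are assumed to be delivered instantly and in FIFO order, and no failures are modelled.

```python
# Minimal single-process simulation of Lamport's mutual exclusion algorithm.
class Site:
    def __init__(self, sid):
        self.sid = sid
        self.peers = {}            # site id -> Site, filled in after construction
        self.clock = 0             # Lamport logical clock
        self.queue = []            # request_queue: list of (timestamp, site id)
        self.last_ts_from = {}     # latest timestamp received from each other site
        self.pending = None        # this site's own outstanding request, if any

    def _tick(self, received_ts=0):
        # Lamport clock rule: advance past any timestamp carried by a message.
        self.clock = max(self.clock, received_ts) + 1

    def request_cs(self):
        # Send REQUEST(tsi, i) to all other sites and enqueue it locally.
        self._tick()
        self.pending = (self.clock, self.sid)
        self.queue.append(self.pending)
        for peer in self.peers.values():
            peer.on_request(self.pending)

    def on_request(self, req):
        # Enqueue the incoming request and answer with a timestamped REPLY.
        ts, sender = req
        self._tick(ts)
        self.queue.append(req)
        self.peers[sender].on_reply(self.clock, self.sid)

    def on_reply(self, ts, sender):
        self._tick(ts)
        self.last_ts_from[sender] = ts

    def can_enter_cs(self):
        # L1: a message with a larger timestamp has arrived from every other site.
        # L2: our own request is at the head of the timestamp-ordered queue.
        if self.pending is None:
            return False
        l1 = all(self.last_ts_from.get(j, 0) > self.pending[0] for j in self.peers)
        l2 = min(self.queue) == self.pending
        return l1 and l2

    def release_cs(self):
        # Remove our request and notify everyone with a timestamped RELEASE.
        self.queue.remove(self.pending)
        self.pending = None
        self._tick()
        for peer in self.peers.values():
            peer.on_release(self.clock, self.sid)

    def on_release(self, ts, sender):
        self._tick(ts)
        self.last_ts_from[sender] = ts
        self.queue = [r for r in self.queue if r[1] != sender]


# Wire up three sites and let S1 and S2 both ask for the critical section.
sites = {i: Site(i) for i in (1, 2, 3)}
for i, s in sites.items():
    s.peers = {j: t for j, t in sites.items() if j != i}

sites[1].request_cs()
sites[2].request_cs()
print(sites[1].can_enter_cs(), sites[2].can_enter_cs())  # True False (S1 asked first)
sites[1].release_cs()
print(sites[2].can_enter_cs())                           # True (S1's request removed)
```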
Message Complexity: Lamport’s Algorithm requires 3(N – 1) messages per critical section execution. These 3(N – 1) messages involve:
 (N – 1) request messages
 (N – 1) reply messages
 (N – 1) release messages
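For example, in a system of N = 5 sites, each critical section execution costs 3(5 – 1) = 12 messages: 4 REQUEST, 4 REPLY, and 4 RELEASE messages.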
Drawbacks of Lamport’s Algorithm:
 Unreliable approach: the failure of any one process halts the progress of the entire system.
 High message complexity: the algorithm requires 3(N – 1) messages per critical section invocation.
Performance:
 The synchronization delay is equal to the maximum message transmission time.
 It requires 3(N – 1) messages per CS execution.
 The algorithm can be optimized to 2(N – 1) messages by omitting the REPLY message in some situations.
Advantages of Lamport’s Algorithm for Mutual Exclusion in Distributed System:
1. Simplicity: Lamport’s algorithm is relatively easy to understand and
implement compared to other algorithms for mutual exclusion in distributed
systems.
2. Fairness: The algorithm guarantees fairness by providing a total order of
events that is used to determine the next process that can enter the critical
section.
3. Scalability: Lamport’s algorithm applies uniformly to any number of processes, since every site runs the same symmetric protocol; note, however, that each critical section entry still requires messages to be exchanged with all other processes in the system.
4. Compatibility: The algorithm is compatible with a wide range of distributed
systems and can be adapted to different network topologies and
communication protocols.

Disadvantages of Lamport’s Algorithm for Mutual Exclusion in Distributed System:
1. Message Overhead: The algorithm requires a lot of message passing between processes to maintain the total order of events, which can lead to increased network traffic and latency.
2. Delayed Execution: The algorithm can lead to delays in the execution of
critical sections because a process may have to wait for messages to arrive
from other processes before entering the critical section.
3. Limited Performance: The algorithm may not perform well in systems with
a high number of processes because of the increased message overhead and
delayed execution.
4. No Fault Tolerance: The algorithm does not provide any fault tolerance
mechanism, which means that if a process fails or crashes, it may cause the
entire system to fail or become unstable.
FAQs:
How does Lamport’s algorithm ensure mutual exclusion in a distributed
system?
Lamport’s algorithm uses a permission-based approach and timestamps to order
critical section requests and resolve any conflicts between them. Each process
sends a request message to all other processes to enter the critical section, and
the request is placed in the request queue of each process in the order of its
timestamp. A process can enter the critical section if it has received a reply
message from all other processes with a timestamp greater than its own, and its
own request is at the top of its request queue.
What is the message complexity of Lamport’s algorithm?
Lamport’s algorithm requires 3(N-1) messages per critical section execution, where N is the number of processes in the system. These messages involve (N-1) request messages, (N-1) reply messages, and (N-1) release messages.
Is Lamport’s algorithm fault-tolerant?
No, Lamport’s algorithm does not provide any fault tolerance mechanism. If a
process fails or crashes, it may cause the entire system to fail or become
unstable.
