Chapter 7
Deadlocks
System Model
• A system consists of a finite number of resources to be distributed
among a number of competing processes.
• If a system has two CPUs, then the resource type CPU has two
instances.
• A process must request a resource before using it and must
release the resource after using it.
• Each process utilizes a resource as follows:
– Request: The process requests the resource. If the request
cannot be granted immediately, then the requesting process must
wait until it can acquire the resource.
– Use: The process can operate on the resource.
– Release: The process releases the resource.
Deadlock Example
• Deadlock: It is a situation in which a process has acquired some
resources and is waiting for more resources held by another process,
which in turn is waiting for resources held by the first process. None
of them can proceed, and the operating system cannot do any useful
work.
• Different resource types: P1 holds a DVD drive and P2 holds a printer.
P1 requests the printer and P2 requests the DVD drive.
P1 waits for P2 to release the printer, and P2 waits for P1 to release
the DVD drive.
• Same resource type: P1, P2, and P3 each hold one of three CD-RW
drives; if each now requests another drive, all three wait forever.
Deadlock Characterization
• Mutual exclusion: Only one process at a time can use a resource.
If a process requests that resource, the requesting process must
be delayed until the resource has been released.
• Hold and wait: A process holding at least one resource is waiting
to acquire additional resources held by other processes.
• No preemption: Resources cannot be preempted; a resource can be
released only voluntarily by the process holding it, after that process
has completed its task.
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting
processes such that P0 is waiting for a resource that is held by P1,
P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for
a resource that is held by Pn, and Pn is waiting for a resource that
is held by P0.
A deadlock can arise only if all four of these conditions hold simultaneously.

Resource-Allocation Graph
• It consists of set of vertices V and a set of edges E.
• V is partitioned into two types:
– P = {P1, P2, …, Pn}, the set consisting of all the processes in the
system
– R = {R1, R2, …, Rm}, the set consisting of all resource types in
the system
• Request edge – Directed edge Pi → Rj : process Pi has requested
an instance of resource type Rj and is currently waiting for that
resource.
• Assignment edge – Directed edge Rj → Pi : an instance of resource
type Rj has been allocated to process Pi.
Resource-Allocation Graph (Cont.)
• Process Pi
• Resource type Rj with 4 instances
• Pi requests an instance of Rj : request edge Pi → Rj
• Pi is holding an instance of Rj : assignment edge Rj → Pi
Resource Allocation Graph with a Deadlock
Resource Allocation Graph with no deadlock
Resource-Allocation Graph (Cont.)
Process states:
• Process P1 is holding an instance of resource type R2 and is waiting
for an instance of resource type R1 .
• Process P2 is holding an instance of R1 and an instance of R2 and is
waiting for an instance of R3.
• Process P3 is holding an instance of R3 .
Methods for Handling Deadlocks:
• Ensure that the system will never enter a deadlock state:
– Deadlock prevention
– Deadlock avoidance
• Allow the system to enter a deadlock state and then recover.
• Ignore the problem and pretend that deadlocks never occur in the
system; this approach is used by most operating systems, including UNIX.

Deadlock Prevention
• Mutual Exclusion – It must hold for non-sharable resources. We
cannot prevent deadlocks by denying the mutual-exclusion
condition, because some resources are intrinsically non-sharable.
E.g.: A printer cannot be simultaneously shared by several processes
(non-sharable).
• Hold and Wait – To ensure that this condition does not occur in the
system, we must guarantee that whenever a process requests a
resource, it does not hold any other resources.
– One protocol requires each process to request and be allocated all
its resources before it begins execution.
– Another protocol allows a process to request resources only when
it has none allocated to it.
Deadlock Prevention (Cont.)
• No Preemption: To ensure that this condition does not hold, use the
following protocol.
– If a process that is holding some resources requests another
resource that cannot be immediately allocated to it, then all
resources the process is currently holding are preempted.
– The preempted resources are added to the list of resources for
which the process is waiting.
– The process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
– This protocol is applied to resources whose state can be easily
saved and restored later, such as CPU registers and memory space.
Deadlock Prevention (Cont.)
• Circular Wait
– Impose a total ordering of all resource types, and require that
each process requests resources in an increasing order of
enumeration.
– E.g.: We define a one-to-one function F: R → N, where N is the
set of natural numbers and R is the set of resource types (tape
drives, disk drives, printers, and so on).
– Initially a process may request any number of instances of a
resource type Ri. After that, the process can request instances of
resource type Rj if and only if F(Rj) > F(Ri).
– Alternatively, we can require that a process requesting an instance
of resource type Rj must have released any resources Ri such that
F(Ri) >= F(Rj).
Deadlock Example with Lock Ordering
void transaction(Account from, Account to, double amount)
{
    mutex lock1, lock2;
    lock1 = get_lock(from);   /* lock guarding the source account      */
    lock2 = get_lock(to);     /* lock guarding the destination account */

    acquire(lock1);
    acquire(lock2);

    withdraw(from, amount);
    deposit(to, amount);

    release(lock2);
    release(lock1);
}
Transactions 1 and 2 execute concurrently. Transaction 1 transfers $25
from account A to account B, and Transaction 2 transfers $50 from
account B to account A. Deadlock is possible because the two transactions
acquire the locks on A and B in opposite orders.
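One common way to break the circular wait in this example is to impose a
global lock order, acquiring locks by account number regardless of transfer
direction. The sketch below is a minimal illustration in C with POSIX
threads, not the textbook's code: the id field and the use of pthread
mutexes are assumptions made for the example.

    /* Minimal sketch: fixed global lock order (lower account id first)
     * removes the circular-wait condition, so two concurrent transfers
     * between the same accounts can no longer deadlock. */
    #include <pthread.h>

    typedef struct {
        int             id;       /* unique account number (assumption) */
        double          balance;
        pthread_mutex_t lock;
    } Account;

    void transaction(Account *from, Account *to, double amount)
    {
        /* Always lock the account with the smaller id first. */
        Account *first  = (from->id < to->id) ? from : to;
        Account *second = (from->id < to->id) ? to   : from;

        pthread_mutex_lock(&first->lock);
        pthread_mutex_lock(&second->lock);

        from->balance -= amount;   /* withdraw(from, amount) */
        to->balance   += amount;   /* deposit(to, amount)    */

        pthread_mutex_unlock(&second->lock);
        pthread_mutex_unlock(&first->lock);
    }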

Deadlock Avoidance
• The simplest and most useful model requires that each process
declare the maximum number of resources of each type that it
may need.
• Given this a priori information, it is possible to construct an
algorithm which ensures that the system will never enter a
deadlocked state.
• Resource-allocation state is defined by the number of available
and allocated resources, and the maximum demands of the
processes.
• The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a
circular-wait condition.
Safe State
• A state is safe if the system can allocate resources to each process in
some order and still avoid a deadlock.
• A system is in a safe state if there exists a sequence <P1, P2, …, Pn> of
all the processes in the system such that, if the resource needs of Pi
are not immediately available, then Pi can wait until all Pj, with j < i,
have finished.
– When Pj is finished, Pi can obtain the needed resources, execute,
return the allocated resources, and terminate.
– When Pi terminates, Pi+1 can obtain its needed resources, and
so on.
Basic Facts
• If a system is in a safe state ⇒ no deadlocks.
• If a system is in an unsafe state ⇒ possibility of deadlock.
• Avoidance ⇒ ensure that the system will never enter an unsafe
state.
Avoidance Algorithms :
• Single instance of a resource type
Use a resource-allocation graph algorithm
• Multiple instances of a resource type
Use the banker’s algorithm
Resource-Allocation Graph Algorithm
• Claim edge Pi → Rj indicates that process Pi may request resource Rj
at some time in the future; it is represented by a dashed line.
• Claim edge converts to request edge when a process requests a
resource.
• Request edge converted to an assignment edge when the resource
is allocated to the process.
• When a resource is released by a process, assignment edge
reconverts to a claim edge.
• Resources must be claimed a priori in the system.

Deadlock Avoidance
Unsafe State In Resource-Allocation Graph
Banker’s Algorithm
• It is applicable to a resource-allocation system with multiple instances
of each resource type.
• The name was chosen because the algorithm could be used in a
banking system to ensure that the bank never allocated its available
cash in such a way that it could no longer satisfy the needs of all its
customers.
Banker’s Algorithm
• Available: Vector of length m. If available [j] = k, there are k
instances of resource type Rj available
• Max: n x m matrix. If Max [i,j] = k, then process Pi may request at
most k instances of resource type Rj
• Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently
allocated k instances of Rj
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more
instances of Rj to complete its task
Need [i,j] = Max[i,j] – Allocation [i,j]
Let n = number of processes and m = number of resource types.
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n – 1
2. Find an i such that both:
(a) Finish[i] == false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
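A minimal C sketch of the safety algorithm above, assuming the Available
vector and the Allocation and Need matrices have already been filled in;
the constants NPROC (n) and NRES (m) are illustrative, not from the slides.

    #include <stdbool.h>

    #define NPROC 5   /* n: number of processes (illustrative)      */
    #define NRES  3   /* m: number of resource types (illustrative) */

    /* Returns true if the state described by available, allocation and
     * need is safe, i.e. some ordering of the processes lets all finish. */
    bool is_safe(int available[NRES],
                 int allocation[NPROC][NRES],
                 int need[NPROC][NRES])
    {
        int  work[NRES];
        bool finish[NPROC] = { false };

        for (int j = 0; j < NRES; j++)        /* Step 1: Work = Available */
            work[j] = available[j];

        bool progress = true;
        while (progress) {                    /* Step 2: find i with      */
            progress = false;                 /* Finish[i]==false, Needi <= Work */
            for (int i = 0; i < NPROC; i++) {
                if (finish[i])
                    continue;
                bool can_run = true;
                for (int j = 0; j < NRES; j++)
                    if (need[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {
                    for (int j = 0; j < NRES; j++)   /* Step 3: Work += Allocationi */
                        work[j] += allocation[i][j];
                    finish[i] = true;
                    progress  = true;
                }
            }
        }
        for (int i = 0; i < NPROC; i++)       /* Step 4: safe iff all finished */
            if (!finish[i])
                return false;
        return true;
    }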

Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti [j] = k then
process Pi wants k instances of resource type Rj
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error
condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait,
since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the
state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
• If safe ⇒ the resources are allocated to Pi.
• If unsafe ⇒ Pi must wait, and the old resource-allocation
state is restored.
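The resource-request check can be sketched on top of the is_safe() routine
shown after the safety algorithm, using the same illustrative constants;
the integer return codes here are assumptions made for the example.

    /* Returns 0 if the request is granted, -1 if Pi must wait,
     * and -2 if Pi has exceeded its maximum claim. */
    int request_resources(int pi, int request[NRES],
                          int available[NRES],
                          int allocation[NPROC][NRES],
                          int need[NPROC][NRES])
    {
        for (int j = 0; j < NRES; j++)
            if (request[j] > need[pi][j])
                return -2;                      /* exceeded maximum claim  */
        for (int j = 0; j < NRES; j++)
            if (request[j] > available[j])
                return -1;                      /* resources not available */

        /* Pretend to allocate, then test safety. */
        for (int j = 0; j < NRES; j++) {
            available[j]      -= request[j];
            allocation[pi][j] += request[j];
            need[pi][j]       -= request[j];
        }
        if (is_safe(available, allocation, need))
            return 0;                           /* grant the request       */

        /* Unsafe: roll the pretend-allocation back; Pi must wait. */
        for (int j = 0; j < NRES; j++) {
            available[j]      += request[j];
            allocation[pi][j] -= request[j];
            need[pi][j]       += request[j];
        }
        return -1;
    }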
Deadlock Detection
• Allow system to enter deadlock state
• Detection algorithm
• Recovery scheme
Single Instance of Each Resource Type
• If all resources have only a single instance, then we use a variant of
the resource-allocation graph called a wait-for graph.
• We obtain this graph from the resource-allocation graph by removing
the resource nodes and collapsing the appropriate edges.
• The system maintains the wait-for graph:
– Nodes are processes.
– An edge Pi → Pj means Pi is waiting for Pj to release a resource
that Pi needs.
• Periodically invoke an algorithm that searches for a cycle in the
graph. If there is a cycle, there exists a deadlock.
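A minimal sketch of the cycle search on a wait-for graph, assuming the
graph is stored as an adjacency matrix in which waits_for[i][j] is true
when Pi waits for Pj; any cycle found by the depth-first search indicates
a deadlock.

    #include <stdbool.h>

    #define NPROC 5          /* number of processes (illustrative) */

    /* waits_for[i][j] is true when Pi waits for Pj. */
    static bool dfs(int u, bool waits_for[NPROC][NPROC],
                    bool visited[NPROC], bool on_stack[NPROC])
    {
        visited[u] = on_stack[u] = true;
        for (int v = 0; v < NPROC; v++) {
            if (!waits_for[u][v])
                continue;
            if (on_stack[v])                 /* back edge: cycle found */
                return true;
            if (!visited[v] && dfs(v, waits_for, visited, on_stack))
                return true;
        }
        on_stack[u] = false;
        return false;
    }

    bool has_deadlock(bool waits_for[NPROC][NPROC])
    {
        bool visited[NPROC] = { false }, on_stack[NPROC] = { false };
        for (int i = 0; i < NPROC; i++)
            if (!visited[i] && dfs(i, waits_for, visited, on_stack))
                return true;
        return false;
    }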
Resource-Allocation Graph and Wait-for Graph
Resource-Allocation Graph Corresponding wait-for graph

Several Instances of a Resource Type: Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a
deadlocked state. Moreover, if Finish[i] == false, then Pi is
deadlocked.
Detection-Algorithm Usage
• When, and how often, to invoke depends on:
– How often is a deadlock likely to occur?
– How many processes will be affected by a deadlock when it
happens?
• If detection algorithm is invoked arbitrarily, there may be many
cycles in the resource graph and so we would not be able to tell
which of the many deadlocked processes “caused” the deadlock.
Recovery from Deadlock:
• Inform the operator that a deadlock has occurred and let the
operator deal with the deadlock manually.
• Let the system recover from the deadlock automatically.
Recovery from Deadlock: Process Termination
• Abort all deadlocked processes: The deadlocked processes may have
computed for a long time, and the results of these partial
computations must be discarded and probably will have to be
recomputed later.
• Abort one process at a time until the deadlock cycle is eliminated:
after each process is aborted, a deadlock-detection algorithm must
be invoked to determine whether any processes are still deadlocked.
• In which order should we choose to abort?
1. Priority of the process.
2. How long process has computed, and how much longer to
completion.
3. How many resources the process has used.
4. How many resources the process needs in order to complete.
5. How many processes will need to be terminated.
Recovery from Deadlock: Resource Preemption
• Resource Preemption: We successively preempt some resources
from processes and give these resources to other processes until
the deadlock cycle is broken.
• Selecting a victim – Which resources and which processes are to
be preempted? As in process termination, we must determine
the order of preemption to minimize cost.
• Rollback – If we preempt a resource from a process, what
should be done with that process? Since it is missing a needed
resource, we must roll the process back to some safe state and
restart it from that state.
• Starvation – How can we guarantee that resources will not
always be preempted from the same process?

Chapter 8
Memory management
Background
• Memory consists of a large array of words, each with its own
address.
• Instruction-execution cycle: The CPU fetches an instruction from
memory. The instruction is then decoded and may cause operands
to be fetched from memory.
• After the instruction has been executed on the operands, results
may be stored back into memory.
• A program must be brought (from disk) into memory and placed
within a process for it to be run.
• Main memory and the registers built into the processor itself are
the only storage that the CPU can access directly.
• If the data are not in memory, they must be moved there before
the CPU can operate on them.
• Make sure that each process has a separate memory space.
• To do this, we need the ability to determine the range of legal
addresses that the process may access and to ensure that the
process can access only these legal addresses.
• Legal addresses – protection is provided through two registers:
the base register and the limit register.
Basic Hardware
Basic Hardware
• A pair of base and limit registers define the logical address
space.
• Logical address : It is the address generated by the CPU.
• Base register holds the smallest legal physical memory address.
• Limit register specifies the size of the range.
• CPU must check every memory access generated in user mode
to be sure that it is between base and limit for that user.

Hardware Address Protection
• Base – smallest legal physical address.
• Limit – size of the range.
• E.g.: Base = 256000 and Limit = 44040, so legal addresses run from
256000 up to (but not including) Base + Limit = 300040. A CPU-generated
address of 256002 is legal because 256000 ≤ 256002 < 300040.
• The base and limit registers can be loaded only by the
operating system, which uses a special privileged instruction.
• Privileged instructions can be executed only in kernel mode, and
the operating system executes in kernel mode.
• Only the operating system can load base and limit registers.
• OS can change the value of the registers but user programs
cannot change the register contents.
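The check performed by the hardware on every user-mode access can be
sketched as below, using the numbers from the example above; returning a
boolean stands in for the trap to the operating system.

    #include <stdbool.h>
    #include <stdio.h>

    /* Legal when base <= address < base + limit; otherwise the hardware
     * would trap to the operating system (addressing error). */
    bool address_is_legal(unsigned base, unsigned limit, unsigned address)
    {
        return address >= base && address < base + limit;
    }

    int main(void)
    {
        unsigned base = 256000, limit = 44040;
        printf("%d\n", address_is_legal(base, limit, 256002)); /* 1: legal   */
        printf("%d\n", address_is_legal(base, limit, 300040)); /* 0: illegal */
        return 0;
    }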
Binding of Instructions and Data to Memory
• Binding of instructions and data to memory addresses can
happen at three different stages.
• Compile time: If it is known at compile time where the process will
reside in memory, then absolute code can be generated.
E.g.: MS-DOS .COM-format programs.
• Load time: If it is not known at compile time where the process
will reside in memory, then the compiler must generate
relocatable code.
• Execution time: If the process can be moved during its execution
from one memory segment to another then binding will be
delayed until run time.
Multistep Processing of a User Program

Memory-Management Unit (MMU)
• The run-time mapping from virtual to physical addresses is done by
a hardware device called the MMU (memory-management unit).
Dynamic relocation using a relocation register
Logical versus Physical address space
• The address generated by CPU is known as logical address(virtual
address).
• The address which is seen by the memory unit is known as
physical address.
• The compile-time and load-time address-binding methods generate
identical logical and physical addresses.
• The set of all logical addresses generated by a program is a logical
address space.
• The set of all physical addresses corresponding to these logical
addresses is a physical address space.
Swapping
• A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for continued
execution.
• Backing store : Fast disk which is large enough to accommodate
copies of all memory images for all users; must provide direct
access to these memory images.
• Round robin scheduling algorithm : When a time quantum
expires, the memory manager will start to swap out the process
that just finished and to swap another process into the memory
space that has been freed.
Schematic View of Swapping
• Roll out, roll in – A variant of swapping used for priority-based
scheduling algorithms; a lower-priority process is swapped out so
that a higher-priority process can be loaded and executed.

Contiguous Allocation
• The memory is usually divided into two partitions: one for the
resident operating system and one for the user processes.
• Contiguous allocation is one of the early methods. The resident
operating system is usually held in low memory, and user processes
reside in high memory.
• Consider how to allocate available memory to the processes that
are in the input queue waiting to be brought into memory.
• In contiguous memory allocation, each process is contained in a
single contiguous section of memory.
Memory Allocation
• Simplest method for allocating memory is to divide memory into
several fixed-sized partitions.
• Each partition may contain exactly one process. The degree of
multiprogramming is bound by the number of partitions.
• In the multiple-partition method, when a partition is free, a process
is selected from the input queue and is loaded into the free partition.
When the process terminates, the partition becomes available for
another process.
• In variable partition scheme, the operating system keeps a table
indicating which parts of memory are available and which are
occupied.
• Initially, all memory is available for user processes and is considered
one large block of available memory, called a hole.
Dynamic Storage-Allocation Problem
• First-fit: Allocate the first hole that is big enough. Searching can
start either at the beginning of the set of holes or at the location
where the previous first-fit search ended.
• Best-fit: Allocate the smallest hole that is big enough; must
search entire list, unless ordered by size
– Produces the smallest leftover hole.
• Worst-fit: Allocate the largest hole; must also search entire list
– Produces the largest leftover hole.
– First fit and best fit are better than worst fit in terms of storage
utilization, and first fit is generally faster.
How do we satisfy a request of size n from the list of free holes? First fit and best fit are sketched below.
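First fit and best fit can be sketched as below, assuming the free holes
are kept in a simple array of sizes; each function returns the index of
the chosen hole, or -1 when no hole is large enough.

    /* First fit: take the first hole that is big enough. */
    int first_fit(const int hole_size[], int nholes, int request)
    {
        for (int i = 0; i < nholes; i++)
            if (hole_size[i] >= request)
                return i;
        return -1;
    }

    /* Best fit: take the smallest hole that is big enough,
     * which requires scanning the whole list. */
    int best_fit(const int hole_size[], int nholes, int request)
    {
        int best = -1;
        for (int i = 0; i < nholes; i++)
            if (hole_size[i] >= request &&
                (best == -1 || hole_size[i] < hole_size[best]))
                best = i;
        return best;
    }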
Fragmentation
• External Fragmentation – It exists when enough total memory space
exists to satisfy a request, but the available space is not contiguous.
• Internal Fragmentation – allocated memory may be slightly
larger than requested memory; this size difference is the
memory internal to a partition, but not being used.
Eg: Multiple partition scheme with a hole of 18464 bytes. Suppose
process requests 18462 bytes, if we allocate the requested block
we are left with a hole of 2 bytes(unused memory).
• Statistical analysis of first fit reveals that, given N allocated blocks,
another 0.5 N blocks will be lost to fragmentation; that is, one-third
of memory may be unusable. This is known as the 50-percent rule.
• Both first fit and best fit suffer from external fragmentation.

Fragmentation (Cont.)
• Reduce external fragmentation by compaction.
– Shuffle the memory contents to place all free memory together
in one large block.
– Compaction is possible only if relocation is dynamic and is done
at execution time.
– Relocation then requires only moving the program and data and
changing the base register to reflect the new base address.
• Another possible solution is to permit the logical address space to
be noncontiguous, thus allowing physical memory to be allocated to
a process wherever such memory is available.
Paging
• Paging is a memory-management scheme that permits the physical
address space of a process to be noncontiguous.
– Avoids external fragmentation
– Avoids problem of varying sized memory chunks.
• Basic method :
--Divide physical memory into fixed-sized blocks called frames.
--Divide logical memory into blocks of same size called pages.
– When a process is to be executed, its pages are loaded into any
available memory frames from their source.
Paging
• Every address generated by the CPU is divided into two parts: a
page number (p) and a page offset (d).
• The page number is used as an index into the page table. The page
table contains the base address of each page in physical memory.
• This base address is combined with the page offset to define the
physical address that is sent to the memory unit.
• If the size of the logical address space is 2^m and the page size is
2^n, then the high-order m – n bits of a logical address designate
the page number, and the n low-order bits designate the page
offset.
        page number | page offset
             p      |      d
        (m – n bits)   (n bits)
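The page-number/offset split can be sketched as below for an assumed
32-bit logical address and a 4 KB page size (n = 12); the shift and mask
mirror the hardware translation.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12u                     /* n: page size = 2^12 = 4 KB */
    #define PAGE_SIZE (1u << PAGE_BITS)

    int main(void)
    {
        uint32_t logical = 0x00012ABC;        /* example 32-bit logical address */
        uint32_t page    = logical >> PAGE_BITS;        /* high-order m-n bits */
        uint32_t offset  = logical & (PAGE_SIZE - 1);   /* low-order n bits    */
        printf("page = %u, offset = %u\n", page, offset);
        return 0;
    }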
Paging Hardware

Paging Model of Logical and Physical Memory
Paging Example
• n = 2 and m = 4: 32-byte physical memory and 4-byte pages.
• Logical address space: 2^m = 2^4 = 16 addresses (0 to 15).
• Page size: 2^n = 2^2 = 4 bytes.
Paging
• When a process arrives in the system to be executed, its size is
expressed in pages. Each page of the process needs one frame.
• The first page of the process is loaded into one of the available
frames, and the frame number is put into the page table.
• Logical addresses are mapped into physical addresses; this mapping
is hidden from the user and is controlled by the operating system.
• Because the OS manages physical memory, it must be aware of its
allocation details, and this information is kept in a data structure
called a frame table.
Free Frames
Before allocation After allocation

Implementation of Page Table
• In the simplest case, the page table is implemented as a set of
dedicated registers. These registers should be built with very
high-speed logic to make the paging-address translation efficient.
• If the page table is large (millions of entries), the use of fast
registers is not feasible; instead, the page table is kept in main
memory.
• Page-table base register (PTBR) points to the page table
• Page-table length register (PTLR) indicates size of the page table.
• Changing page tables requires changing only this one register,
substantially reducing context-switch time.
• The problem with this approach is the time required to access
a user memory location.
• If we want to access location i, we must first index into the
page table, using the value in the PTBR offset by the page
number for i. This task requires a memory access.
• It provides us with the frame number, which is combined with
the page offset to produce the actual address. We can then
access the desired place in memory.
• The standard solution to this problem is to use a special, small,
fast-lookup hardware cache called the translation look-aside
buffer (TLB). The TLB is associative, high-speed memory.
Paging Hardware With TLB
Effective Access Time
• Hit ratio (α): Percentage of times that a page number is found in
the TLB.
• Consider α = 80%, a 20 ns TLB search, and a 100 ns memory
access.
• Miss ratio (β = 1 – α): Percentage of times that a page number is
not found in the TLB.
• Consider β = 20%: we fail to find the page number in the TLB (20 ns),
access memory for the page table (100 ns), and then spend 100 ns
to access the desired byte in memory.
• Effective Access Time (EAT):
EAT = hit ratio × (TLB search + memory access)
    + miss ratio × (TLB search + page-table access + memory access)
EAT = 0.80 × (20 + 100) + 0.20 × (20 + 100 + 100) = 140 ns
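The same arithmetic as a small C sketch, with the hit ratio, TLB search
time, and memory-access time from the slide passed in as parameters.

    #include <stdio.h>

    /* Effective access time in ns for a paged system with a TLB. */
    double eat(double hit_ratio, double tlb_ns, double mem_ns)
    {
        double hit  = tlb_ns + mem_ns;            /* frame found in TLB          */
        double miss = tlb_ns + mem_ns + mem_ns;   /* extra access for page table */
        return hit_ratio * hit + (1.0 - hit_ratio) * miss;
    }

    int main(void)
    {
        printf("EAT = %.0f ns\n", eat(0.80, 20.0, 100.0));  /* prints 140 ns */
        return 0;
    }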

Memory Protection
• Memory protection in a paging environment is accomplished by
protection bits associated with each frame. These bits are kept in
the page table.
• Valid-invalid bit attached to each entry in the page table:
– “valid” indicates that the associated page is in the process’
logical address space, and is thus a legal page.
– “invalid” indicates that the page is not in the process’ logical
address space
• The OS sets this bit for each page to allow or disallow access to
the page.
Valid (v) or Invalid (i) Bit in a Page Table
Shared Pages
• Advantage of paging is the possibility of sharing common code.
• Suppose the system supports 40 users, each of whom executes a text
editor. If the text editor consists of 150 KB of code and 50 KB of data
space, we would need 40 × 200 KB = 8000 KB to support the 40 users.
• If the code is reentrant (sharable) code, it never changes during
execution, so it can be shared.
• To support 40 users, we then need only one copy of the editor (150 KB)
plus 40 copies of the 50 KB data space, i.e. 2150 KB, a significant saving.
• In the figure, the editor consists of three pages, each 50 KB in size,
shared among three processes; each process has its own data page.
Shared Pages Example

Segmentation
• Segmentation is a memory-management scheme that supports
user view of memory .
• A program is a collection of segments
– A segment is a logical unit such as: main program, procedure,
function, method, object, local variables, global variables,
common block, stack, symbol table, arrays.
User’s view of a program
Segmentation Architecture
• Logical address consists of a two tuple:
<segment-number, offset>
• Segment table – maps two-dimensional user-defined addresses into
one-dimensional physical addresses; each table entry has:
– base – contains the starting physical address where the
segments reside in memory
– limit – specifies the length of the segment
• Segment-table base register (STBR) points to the segment
table’s location in memory
• Segment-table length register (STLR) indicates number of
segments used by a program.
Segmentation Hardware
Structure of the Page Table
• Memory structures for paging can get huge using straightforward
methods:
– Consider a 32-bit logical address space, as on modern
computers.
– Page size of 4 KB (2^12).
– The page table would have 1 million entries (2^32 / 2^12).
– If each entry is 4 bytes, that is 4 MB of memory for the page
table alone.
• That amount of memory used to cost a lot.
• We don't want to allocate the page table contiguously in main memory.
Two-Level Paging Example
• A logical address (on 32-bit machine with 1K page size) is divided
into:
– a page number consisting of 22 bits
– a page offset consisting of 10 bits
• Since the page table is paged, the page number is further divided
into:
– a 12-bit page number
– a 10-bit page offset
• Thus, a logical address is as follows:
        p1 (12 bits) | p2 (10 bits) | d (10 bits)
where p1 is an index into the outer page table, and p2 is the
displacement within the page of the inner page table.
• This is known as a forward-mapped page table.
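A sketch of the three-way split for the 32-bit example above (12-bit p1,
10-bit p2, 10-bit offset); the sample address is arbitrary.

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 10u   /* d : 1 KB pages                  */
    #define P2_BITS     10u   /* p2: index into inner page table */
    #define P1_BITS     12u   /* p1: index into outer page table */

    int main(void)
    {
        uint32_t logical = 0x12345678;
        uint32_t d  =  logical        & ((1u << OFFSET_BITS) - 1);
        uint32_t p2 = (logical >> OFFSET_BITS) & ((1u << P2_BITS) - 1);
        uint32_t p1 =  logical >> (OFFSET_BITS + P2_BITS);
        printf("p1 = %u, p2 = %u, d = %u\n", p1, p2, d);
        return 0;
    }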
Two-Level Page-Table Scheme
Address-Translation Scheme

64-bit Logical Address Space
• Even the two-level paging scheme is not sufficient.
• If the page size is 4 KB (2^12):
– Then the page table has 2^52 entries.
– With a two-level scheme, the inner page tables could be 2^10
4-byte entries.
– The address would look like
– The outer page table has 2^42 entries, or 2^44 bytes.
– One solution is to add a 2nd outer page table.
– But in the following example the 2nd outer page table is still
2^34 bytes in size.
• And possibly 4 memory accesses are needed to get to one physical
memory location.
Three-level Paging Scheme
Hashed Page Table
• The virtual page number in the virtual address is hashed into
the hash table.
• The virtual page number is compared with field 1 in the first
element in the linked list.
• If there is a match, the corresponding page frame (field 2) is
used to form the desired physical address.
• If there is no match, subsequent entries in the linked list are
searched for a matching virtual page number.
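A minimal sketch of this lookup, assuming each hash bucket is a linked
list of (virtual page number, frame number) entries; the hash function
and table size are illustrative.

    #include <stdint.h>
    #include <stddef.h>

    #define HASH_SIZE 1024u   /* number of buckets (illustrative) */

    struct hpt_entry {
        uint32_t vpn;               /* field 1: virtual page number */
        uint32_t frame;             /* field 2: page frame number   */
        struct hpt_entry *next;     /* field 3: next entry in chain */
    };

    /* Returns the frame for vpn, or -1 if the page is not resident. */
    long hpt_lookup(struct hpt_entry *table[HASH_SIZE], uint32_t vpn)
    {
        for (struct hpt_entry *e = table[vpn % HASH_SIZE]; e != NULL; e = e->next)
            if (e->vpn == vpn)          /* match: use this frame */
                return e->frame;
        return -1;                      /* no match: page fault  */
    }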

Hashed Page Table
Inverted Page Table
• Rather than each process having a page table and keeping track of all
possible logical pages, track all physical pages.
• One entry for each real page of memory.
• Entry consists of the virtual address of the page stored in that real
memory location, with information about the process that owns the
page.
• Each virtual address in the system consists of a triple:
<process-id, page-number, offset>.
• Each inverted page-table entry is a pair <process-id, page-number>
where the process-id assumes the role of the address-space
identifier.
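The search implied above can be sketched as a linear scan: the table has one entry per physical frame, translation looks for the <process-id, page-number> pair, and the index of the matching entry is the frame number. The table size and contents below are made-up illustration values.

#include <stdio.h>
#include <stdint.h>

#define NFRAMES 8   /* assumed number of physical frames */

struct ipt_entry {
    int      pid;   /* owning process (address-space identifier) */
    uint32_t page;  /* virtual page number stored in this frame  */
};

static struct ipt_entry ipt[NFRAMES] = {
    [0] = { 1, 5 },   /* frame 0 holds page 5 of process 1 */
    [3] = { 2, 9 },   /* frame 3 holds page 9 of process 2 */
};

/* Scan the whole table for <pid, page>; the index of the match is the
   frame number.  Returns -1 when the page is not resident (page fault). */
int ipt_lookup(int pid, uint32_t page) {
    for (int i = 0; i < NFRAMES; i++)
        if (ipt[i].pid == pid && ipt[i].page == page)
            return i;
    return -1;
}

int main(void) {
    printf("<2, 9> -> frame %d\n", ipt_lookup(2, 9));   /* 3  */
    printf("<1, 7> -> frame %d\n", ipt_lookup(1, 7));   /* -1 (not resident) */
    return 0;
}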
Inverted Page Table
