Operating System Concepts
[Figure: a computer system consists of hardware, software, and the user. Hardware: input units, the CPU (control unit, ALU, and processor memory: RAM, cache memory, ROM), output units, and secondary storage (magnetic tapes, magnetic disks such as floppy disks and hard disks, and compact disks). Software: the operating system, application programs, and programming languages (high level and low level). The system layers, from bottom to top, are: hardware, operating system, application programs, user.]
Application programs
Are programs produced by software companies to help computer users perform useful tasks.
[Figure: instructions enter through the input units to the control unit, which issues control signals. Data moves between memory, the cache memory, and the ALU inside the processor, and results are stored to, or loaded from, secondary storage.]
RAM: Random Access Memory. Volatile (loses its contents when power is off). Can be modified by the user.
ROM:
Read-Only Memory. Non-volatile. Contains fixed programs used by the computer.
Cache Memory:
A memory inside the processor chip. Used to store frequently used data. Increases the processor speed.
Definition: Secondary storage units are the units used to store data permanently.
Secondary storage types: magnetic tapes, magnetic disks (floppy disks and hard disks), and compact disks.
Magnetic Tapes:
Used to store large volumes of data in large computers (such as mainframes) for a long time. A tape consists of a plastic film coated with a magnetic material (iron oxide).
Advantages: compact (can store a huge amount of data), economical (low cost), no loss of data. Disadvantage: sequential access only.
Magnetic Disks:
A surface of metal (in hard disks) or plastic (in floppy disks) coated with a magnetic material. It rotates at high speed and is divided into tracks and sectors.
Compact Disks:
CD-ROM (Compact Disk, Read-Only Memory). A plastic surface coated with a reflective material; a laser beam is used to write on the CD-ROM. It can store up to about 600 megabytes.
Q: Define the OS, then what are the different OS goals? Sol: Operating System (OS):
is the program running at all times on the computer to coordinate all computer components (usually called the kernel).
The OS does not perform a useful task by itself, but it creates a suitable environment in which other programs can operate efficiently.
OS goals: convenience for the user; efficient use of the system components.
Batch System:
Users send their jobs to the computer operator. The operator organizes the jobs into a set of batches (each containing similar jobs). Each batch is then run as a separate set of jobs.
Multi-Programming System:
A number of processes are kept in memory in the ready queue, waiting for the CPU (there is one user). Windows uses this concept.
[Figures: in multi-programming, the processes in the ready queue in memory share a single CPU; in a multi-user (time-sharing) setting, the CPU switches between the processes of several users.]
Multi-Processor System:
A system with more than one processor, used to maximize the system speed.
[Figure: processes from the ready queue in memory are dispatched to CPU 1, CPU 2, and CPU 3.]
Network System:
Systems that operate over a network in order to achieve: resource sharing, computation speedup, load balancing, and communication between hosts. Common network topologies are star, bus, and ring.
Real-Time System:
Hard real-time systems: critical tasks must be performed on time. Soft real-time systems: critical tasks get priority over other tasks.
OS components:
1. Process Management.
2. Memory Management.
3. File Management.
4. Storage Management.
5. I/O Management.
6. Protection Management.
7. Networking Management.
Process Management
Q: Define the following: process, resource. Sol: Process: is a program in execution; it is an active entity in memory, while the program is the passive copy on the hard disk.
Resource: anything in the system that may be used by active processes. Resources may be hardware (printers, tape drives, memory) or software (files, databases).
Resource Types
Preemptive resources: a resource that can be taken away from the process holding it when a higher-priority process arrives. Example: memory.
Non-preemptive resources: a resource that cannot be taken away from the process holding it when a higher-priority process arrives. Example: CD recorder.
Memory is preemptive because a low-priority process can be swapped out of it to the disk when a higher-priority process arrives and needs more memory than is available. The low-priority process can resume execution later (it is swapped in again).
Example: 5 KB is used by the OS and 10 KB by a low-priority process, so the available space is 17 KB. A higher-priority process arrives and needs 20 KB. The low-priority process is swapped out to the disk, making 27 KB available; the high-priority process is then swapped in, leaving 7 KB available. When the high-priority process finishes execution, the low-priority process is swapped back in.
Because: If a process has begun to burn a CD-ROM, suddenly taking the CD recorder away from it and giving it to another process will result in a bad CD.
A process uses a resource in the following sequence: request, use, release.
Q: Explain the main differences between the input (job) queue and the ready queue.
Sol: Input (job) queue: holds the programs that will be brought into memory soon (located on the disk). Ready queue: contains the active processes that are ready to be executed (located in memory).
Q: Explain what is meant by process states, and then draw the process state diagram. Sol: As a process executes, it changes its state. The process state may be:
New: the process is being created.
Ready: the process is waiting for the processor.
Running: the process instructions are being executed.
Waiting: the process is waiting for an event to happen (such as I/O completion).
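The state diagram can also be written as a small transition table; the following Python sketch is illustrative only (the event names are assumptions, not from the notes):

TRANSITIONS = {
    ("new", "admitted"): "ready",
    ("ready", "dispatched"): "running",
    ("running", "interrupt"): "ready",            # e.g. time slice expired
    ("running", "wait_for_event"): "waiting",     # e.g. an I/O request
    ("waiting", "event_completed"): "ready",      # e.g. I/O completion
}

def next_state(state, event):
    # Return the next process state, or fail if the move is not in the diagram.
    return TRANSITIONS[(state, event)]

s = "new"
for e in ("admitted", "dispatched", "wait_for_event", "event_completed"):
    s = next_state(s, e)
print(s)   # ready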
Q: Explain why each process must have a process control block (PCB) in memory. Sol:
Because: the process control block (PCB) is the way the operating system represents a process.
It contains information about the process such as: the process state (new, ready, ...), the address of the next instruction to be executed in the process, the process priority, the memory assigned to the process, and I/O information (such as I/O devices and opened files).
Q: Show graphically how the OS uses the PCB to switch between active processes. Sol:
When an interrupt happens, the OS saves the state of the currently running process in its PCB so that it can continue correctly when it resumes execution later. To resume execution, the OS reloads the process state from its PCB and continues execution.
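A PCB can be sketched as a simple record, and a context switch as saving one PCB and loading another; the Python below is only an illustration of the idea (the field and function names are assumptions):

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"            # new, ready, running, waiting
    next_instruction: int = 0     # address of the next instruction to execute
    priority: int = 0
    memory_base: int = 0          # memory assigned to the process
    open_files: list = field(default_factory=list)   # I/O information

def save_context(cpu, pcb):
    # On an interrupt, copy the CPU's view of the process into its PCB.
    pcb.next_instruction = cpu["program_counter"]
    pcb.state = "ready"

def load_context(cpu, pcb):
    # Reload the saved state so the process resumes where it stopped.
    cpu["program_counter"] = pcb.next_instruction
    pcb.state = "running"

cpu = {"program_counter": 120}
p1, p2 = PCB(pid=1), PCB(pid=2, next_instruction=40)
save_context(cpu, p1)     # switch away from P1 ...
load_context(cpu, p2)     # ... and resume P2
print(p1.next_instruction, cpu["program_counter"])   # 120 40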
Q: Give a suitable definition for the "context switch", and then explain why a context switch is pure overhead. Sol: A context switch is the time needed by the OS to switch the processor from one process to another.
TX: the time needed to save the state of PX into PCBX. TY: the time needed to load the state of PY from PCBY.
A context switch is pure overhead because the system does no useful work while switching between processes.
Q: What is meant by schedulers? Discuss their different types. Sol: The long-term scheduler (or job scheduler) selects which processes should be brought into the ready queue from the job queue (it determines the degree of multi-programming).
The short-term scheduler (or CPU scheduler) selects which process should be executed next and assigns the CPU to it.
The medium-term scheduler swaps processes out of memory (out of the ready queue) and swaps them in again later (it decreases the degree of multiprogramming).
[Figure: the long-term scheduler brings passive programs from the disk into memory; the short-term scheduler assigns the CPU to a process from the ready queue.]
Q: Explain why?
The long-term scheduler increases the degree of multiprogramming, while the medium-term scheduler decreases the degree of multiprogramming.
Sol: The degree of multi-programming is the number of processes that are placed in the ready queue waiting for execution by the CPU.
Since the long-term scheduler selects which processes are brought into the ready queue, it increases the degree of multiprogramming.
Since the medium-term scheduler picks some processes from the ready queue and swaps them out of memory, it decreases the degree of multiprogramming.
CPU Scheduling
Q: Explain what is meant by CPU scheduling, and then discuss the difference between the loader and the dispatcher. Sol:
CPU scheduling is the method used to select a process from the ready queue to be executed by the CPU whenever the CPU becomes idle.
Difference between loader and dispatcher: the loader brings a program from the disk into memory, while the dispatcher gives control of the CPU to the process selected by the CPU scheduler.
Some examples: First Come First Served (FCFS) scheduling, Shortest Job First (SJF) scheduling, and Priority scheduling.
[Figure: the CPU scheduling algorithm selects a process from the ready queue in memory, and the dispatcher hands it to the CPU.]
Q: Explain the main differences between preemptive and non-preemptive scheduling?
Sol:
Preemptive scheduling: the currently executing process can be released from the CPU when another process with a higher priority arrives and needs execution.
Non-preemptive scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU.
Q: Discuss briefly the following CPU scheduling parameters: CPU utilization, system throughput, turnaround time, waiting time, and response time.
Sol:
CPU Utilization:
The percentage of time the CPU is busy out of the total time (time the CPU is busy + time it is idle). Hence, it measures the benefit obtained from the CPU.
CPU Utilization = (Time CPU is busy / Total time) x 100%
To maximize utilization, keep the CPU as busy as possible. CPU utilization ranges from about 40% (for lightly loaded systems) to 90% (for heavily loaded systems). (Explain why CPU utilization cannot reach 100%: because of the context switches between active processes.)
System Throughput:
The number of processes that are completed per time unit (e.g., per hour).
Turnaround time:
For a particular process, it is the total time from the submission of the process to its completion. It is the sum of the process execution time and its waiting times (to get memory, perform I/O, ...).
Waiting time:
The waiting time for a specific process is the sum of all the periods it spends waiting in the ready queue.
Response time:
It is the time from the submission of a process until the first response is produced.
It is desirable to:
Maximize: CPU utilization and system throughput. Minimize: turnaround time, waiting time, and response time.
FCFS Scheduling
The process that comes first will be executed first. It is not preemptive.
Example: consider the following set of processes, with the length of the CPU burst (execution) time given in milliseconds. The processes arrive in the order P1, P2, P3, all at time 0.

Process   Burst Time
P1        24
P2        3
P3        3

Gantt chart: P1 runs from 0 to 24, P2 from 24 to 27, and P3 from 27 to 30.
Waiting times: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27) / 3 = 17 ms.
Repeat the previous example, assuming that the processes arrive in the order P2, P3, P1, all at time 0.

Process   Burst Time
P1        24
P2        3
P3        3

Gantt chart: P2 runs from 0 to 3, P3 from 3 to 6, and P1 from 6 to 30.
Waiting times: P1 = 6, P2 = 0, P3 = 3. Average waiting time = (6 + 0 + 3) / 3 = 3 ms.
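The waiting times in the two examples above can be checked with a short simulation; this Python sketch is not part of the original notes:

def fcfs_waiting_times(processes):
    # processes: list of (name, burst) in arrival order; all arrive at time 0.
    waits, clock = {}, 0
    for name, burst in processes:
        waits[name] = clock        # a process waits until all earlier ones finish
        clock += burst
    return waits

for order in ([("P1", 24), ("P2", 3), ("P3", 3)],
              [("P2", 3), ("P3", 3), ("P1", 24)]):
    w = fcfs_waiting_times(order)
    print(w, "average =", sum(w.values()) / len(w))
# {'P1': 0, 'P2': 24, 'P3': 27} average = 17.0
# {'P2': 0, 'P3': 3, 'P1': 6} average = 3.0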
Q: Explain why? FCFS CPU scheduling algorithm introduces a long average waiting time?
Sol: Because it suffers from the convoy effect: all other processes must wait for a big process to finish if this big process comes first. This results in a long waiting time for the small processes, and hence a long average waiting time.
SJF Scheduling
When the CPU is available, it is assigned to the process with the smallest next CPU burst (non-preemptive). If two processes have the same next CPU burst, FCFS is used between them.
Example: consider the following set of processes, with the length of the CPU burst time given in milliseconds. The processes arrive in the order P1, P2, P3, P4, all at time 0.

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

Using SJF, the Gantt chart is: P4 runs from 0 to 3, P1 from 3 to 9, P3 from 9 to 16, and P2 from 16 to 24.
Waiting times: P1 = 3, P2 = 16, P3 = 9, P4 = 0. Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms.
Q: Explain why? SJF CPU scheduling algorithm introduces the minimum average waiting time for a set of processes? Give an example.
Sol: Because: by moving a short process before a long one, the waiting time of the short process decreases more than the waiting time of the long process increases. Hence, the average waiting time decreases. Example: assume two processes P1 and P2.

Process   Burst Time
P1        30
P2        2

Using FCFS: P1 runs from 0 to 30, then P2 from 30 to 32.
Waiting time(P1) = 0, waiting time(P2) = 30. Average waiting time = (0 + 30) / 2 = 15 ms.

Using SJF: P2 runs from 0 to 2, then P1 from 2 to 32.
Waiting time(P2) = 0, waiting time(P1) = 2. Average waiting time = (0 + 2) / 2 = 1 ms.
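Since all processes arrive at time 0, non-preemptive SJF is simply FCFS applied to the processes sorted by burst time; a small illustrative sketch:

def sjf_waiting_times(bursts):
    # bursts: dict name -> burst time; all processes arrive at time 0.
    waits, clock = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda p: p[1]):
        waits[name] = clock        # the shortest remaining job runs next
        clock += burst
    return waits

w = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(w, "average =", sum(w.values()) / len(w))
# {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16} average = 7.0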
Shortest-Remaining-Time-First (SRTF)
It is a preemptive version of Shortest Job First (SJF). It allows a newly arriving process to take the processor if its execution time is less than the remaining time of the currently running process.

Example: consider the following set of processes, with the length of the CPU burst time and the arrival time given in milliseconds.

Process   Burst Time   Arrival Time
P1        7            0
P2        4            2
P3        1            4
P4        4            5

Using SRTF, the Gantt chart is: P1 runs from 0 to 2, P2 from 2 to 4, P3 from 4 to 5, P2 from 5 to 7, P4 from 7 to 11, and P1 from 11 to 16.
Waiting times: P1 = 9, P2 = 1, P3 = 0, P4 = 2. Average waiting time = (9 + 1 + 0 + 2) / 4 = 3 ms.
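A minimal preemptive (SRTF) simulation that reproduces the schedule above, stepping one millisecond at a time; the function and variable names are assumptions used only for illustration:

def srtf_waiting_times(procs):
    # procs: list of (name, burst, arrival); simulate one millisecond at a time.
    remaining = {name: burst for name, burst, _ in procs}
    arrival = {name: arr for name, _, arr in procs}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:                                  # CPU idle until an arrival
            t += 1
            continue
        n = min(ready, key=lambda x: remaining[x])     # smallest remaining time wins
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    # waiting time = turnaround time - burst time
    return {name: finish[name] - arr - burst for name, burst, arr in procs}

w = srtf_waiting_times([("P1", 7, 0), ("P2", 4, 2), ("P3", 1, 4), ("P4", 4, 5)])
print(w, "average =", sum(w.values()) / len(w))
# {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2} average = 3.0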
Priority Scheduling
A priority number (an integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer). There are two types: preemptive and non-preemptive.
Problem (starvation): low-priority processes may never execute. Solution (aging): as time progresses, increase the priority of the waiting process.
Example: consider the following set of processes, with the length of the CPU burst time given in milliseconds. The processes arrive in the order P1, P2, P3, P4, P5, all at time 0.

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Using non-preemptive priority scheduling, the Gantt chart is: P2 runs from 0 to 1, P5 from 1 to 6, P1 from 6 to 16, P3 from 16 to 18, and P4 from 18 to 19.
Waiting times: P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1. Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms.
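Non-preemptive priority scheduling with a common arrival time works like SJF, but the processes are ordered by their priority number instead of their burst time; an illustrative sketch:

def priority_waiting_times(processes):
    # processes: dict name -> (burst, priority); all arrive at time 0.
    # Non-preemptive: the smallest priority number is served first.
    waits, clock = {}, 0
    for name, (burst, _prio) in sorted(processes.items(), key=lambda p: p[1][1]):
        waits[name] = clock
        clock += burst
    return waits

w = priority_waiting_times({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
                            "P4": (1, 5), "P5": (5, 2)})
print(w, "average =", sum(w.values()) / len(w))
# {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18} average = 8.2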
Round Robin (RR) Scheduling
Allocate the CPU to each process in the ready queue for one quantum time (also called a time slice) Q. This scheme is repeated until all processes are finished. A new process is added to the end of the ready queue.
Example: consider the following set of processes, with the length of the CPU burst time given in milliseconds. The processes arrive in the order P1, P2, P3, all at time 0. Use RR scheduling with Q = 4 and Q = 2.

Process   Burst Time
P1        24
P2        3
P3        3

RR with Q = 4: the Gantt chart is P1 from 0 to 4, P2 from 4 to 7, P3 from 7 to 10, then P1 from 10 to 30.
Waiting times: P1 = 6, P2 = 4, P3 = 7. Average waiting time = (6 + 4 + 7) / 3 = 5.67 ms.

RR with Q = 2: the Gantt chart is P1 from 0 to 2, P2 from 2 to 4, P3 from 4 to 6, P1 from 6 to 8, P2 from 8 to 9, P3 from 9 to 10, then P1 from 10 to 30.
Waiting times: P1 = 6, P2 = 6, P3 = 7. Average waiting time = (6 + 6 + 7) / 3 = 6.33 ms.
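A small round-robin sketch with a configurable quantum; running it with Q = 4 and Q = 2 reproduces the time slices above (consecutive slices of the same process appear separately):

from collections import deque

def round_robin(processes, quantum):
    # processes: list of (name, burst), all arriving at time 0.
    # Returns the Gantt chart as a list of (name, start, end) time slices.
    queue = deque(processes)
    gantt, t = [], 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run one quantum, or less if it finishes
        gantt.append((name, t, t + run))
        t += run
        if remaining > run:                  # not finished: back to the tail
            queue.append((name, remaining - run))
    return gantt

for q in (4, 2):
    print("Q =", q, round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q))
# Q = 4 starts with P1 0-4, P2 4-7, P3 7-10, then P1 10-14, 14-18, ...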
Q: Explain why decreasing the quantum time slows down the execution of the processes.
Sol:
Because decreasing the quantum time increases the number of context switches (the time needed by the processor to switch between the processes in the ready queue), which increases the time needed to finish the execution of the active processes; hence, it slows down the system.
Multilevel (Feedback) Queue Scheduling
Divide the ready queue into several queues (Queue 0, Queue 1, Queue 2). Each queue has its own quantum time, as shown in the figure. Processes are allowed to move between the queues.
Deadlock
Deadlock: a set of blocked processes, each (1) holding a resource and (2) waiting to acquire a resource held by another process in the set.
Hence, the blocked processes will never change state (Explain why?) because the resource each one has requested is held by another waiting process. A deadlock can lead to a system breakdown.
Q: Discuss briefly the different deadlock conditions. Sol: Deadlock arises if four conditions hold simultaneously:
1. Mutual exclusion: only one process can use a resource at a time.
2. Hold and wait: a process holding at least one resource is waiting for additional resources held by other processes.
3. No preemption: a resource is released only by the process holding it, after it has completed its task.
4. Circular wait: there exists a set {P0, P1, P2, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3, ..., and Pn is waiting for a resource held by P0.
Resource Allocation Graph
The resource allocation graph describes:
- All active processes in the system.
- The available system resources.
- The interconnections between the active processes and the system resources.
Request edge (Pi -> Rj): process Pi requests an instance of resource Rj.
Assignment edge (Rj -> Pi): an instance of resource Rj is assigned to process Pi.
E (edge set): the set of all edges in the resource allocation graph, e.g. E = {Pn -> Rm, Rx -> Py, ...}.
Example: R = {R1, R2, R3, R4}, E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P2, R2 -> P1, R3 -> P3}; P1 and P2 are waiting for the resources they requested.
Q: Explain why: although the graph contains a cycle, the system may not be in a deadlock state.
Sol: Case 1: the system has one instance per resource type.
Cycle: P1 -> R1 -> P2 -> R2 -> P1.
As shown, there is no chance to break the cycle because no process can finish execution, so the cycle is a deadlock.
Case 2: the system has more than one instance per resource type.
If a cycle exists, the system may or may not be in a deadlock state.
Cycle: P1 -> R1 -> P3 -> R2 -> P1.
There is no deadlock here because P4 may release its instance of R2; that instance can then be allocated to P3, which breaks the cycle. Also, P2 may release its instance of R1, which can then be allocated to P1.
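Detecting a cycle in the resource allocation graph can be sketched with a depth-first search; the graph below encodes the Case 1 edges, and with one instance per resource type the detected cycle means a deadlock (with several instances it only signals a possible deadlock):

def has_cycle(graph):
    # graph: dict node -> list of successors (request and assignment edges).
    # Depth-first search; a back edge to a node still on the stack means a cycle.
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, []):
            if nxt in visiting:
                return True
            if nxt not in done and dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in list(graph) if n not in done)

# Case 1: one instance per resource type, so this cycle is a deadlock.
rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(rag))   # True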
Deadlock Avoidance
Q: Describe in detail the basic rules used for deadlock avoidance. Sol: The avoidance algorithm examines the system state so that the system never enters an unsafe state:
Safe state: no deadlock is possible.
Unsafe state: there is a possibility of deadlock.
The system is in a safe state if there exists a safe sequence of all processes, i.e. the processes can be ordered so that the resources each process may still request can be satisfied by the resources it already holds, plus the system's available resources, plus the resources held by the processes that come before it in the sequence.
For a single instance per resource type, a resource allocation graph with claim edges is used:
1. A claim edge Pi -> Rj indicates that process Pi may request resource Rj in the future.
2. A claim edge is converted to a request edge when the process actually requests the resource.
3. A request edge is converted to an assignment edge only when converting it to an assignment edge does not result in a cycle in the resource allocation graph.
Example: Is the system in a safe state? What is the safe sequence (if one exists)? Assume that at time T1, P3 requests R2; is it reasonable to grant this request?
Steps:
1. Find the available resources.
2. Construct the table:

Process   Max. Need          Hold
P1        R1, R2, R3         R1, R3
P2        R3, R4             ---
P3        R1, R2, R3, R4     R4

Available resources: R2.
P1 still needs at most R2, which is available, so P1 can finish and release R1, R2, and R3. Then P3 can obtain R1, R2, and R3 (it already holds R4) and finish, releasing everything. Finally P2 can obtain R3 and R4 and finish. Hence the system is in a safe state, with safe sequence <P1, P3, P2>.
[Figure: if P3 were assigned R2, a cycle would appear in the resource allocation graph.]
It is not reasonable to grant the request, because the system would then be in an unsafe state (granting it may lead to a future cycle if P1 requests R2).
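The safe-sequence check used in the table above can be sketched as: repeatedly pick a process whose outstanding need fits in the available set, let it finish, and release what it would hold. The set-based model (one instance per resource) and the names are assumptions:

def safe_sequence(available, max_need, hold):
    # available: set of free resources; max_need, hold: dict process -> set of resources.
    # Single-instance resources: a process can finish if its outstanding need is available.
    available = set(available)
    sequence, pending = [], set(max_need)
    while pending:
        runnable = next((p for p in pending
                         if max_need[p] - hold[p] <= available), None)
        if runnable is None:
            return None                       # no process can finish: unsafe state
        available |= max_need[runnable]       # it finishes and releases everything
        sequence.append(runnable)
        pending.remove(runnable)
    return sequence

print(safe_sequence({"R2"},
                    {"P1": {"R1", "R2", "R3"}, "P2": {"R3", "R4"},
                     "P3": {"R1", "R2", "R3", "R4"}},
                    {"P1": {"R1", "R3"}, "P2": set(), "P3": {"R4"}}))
# ['P1', 'P3', 'P2']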
Deadlock Recovery: free some resources from the deadlocked processes and give those resources to other processes until the deadlock is eliminated.
Q: Explain what is meant by memory management, then discuss why we manage memory. Sol:
Memory management is how to organize the active processes (the processes currently in the ready queue) so that:
1. Processes can be easily reached.
2. Memory space utilization is maximized.
Q: Show how a loader stores an executable file into memory, assuming:
A file of size 20 memory words (instructions and data), using the contiguous allocation method. Repeat the problem three times using: first fit, best fit, and worst fit.
Sol:
The loader stores the executable file in one contiguous block of memory; it searches for a large-enough hole using first fit, best fit, or worst fit.
After loading, the loader sets the base register to the start address of the process and the limit register to its size in words; a logical address generated by the process is translated to a physical address by adding it to the base register, after checking that it lies within the limit (the legal range).
[Figures: memory holds the OS in addresses 0 to 999 plus some existing processes, with several free holes. Using first fit, the 20-word process is loaded at address 1040 (base register = 1040, limit register = 20). Using best fit, it is loaded at address 1100. Using worst fit, it is loaded at address 1150.]
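A sketch of the base/limit address translation that the loader sets up; the register values follow the first-fit figure, and the exception used is illustrative:

def translate(logical_address, base, limit):
    # Map a logical address to a physical one using the base and limit registers.
    if not 0 <= logical_address < limit:
        raise MemoryError("addressing error: outside the legal range")   # trap to the OS
    return base + logical_address

# The 20-word process loaded at address 1040 (first fit): legal logical addresses are 0..19.
print(translate(5, base=1040, limit=20))     # 1045
# translate(25, base=1040, limit=20) would raise the addressing error.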
Q: Explain what is meant by bootstrapping, then what is the difference between the loader and the bootstrap loader? Sol:
Bootstrapping: the actions taken when a computer is first powered on until it is ready to be used. The computer reads a program from a ROM (Read-Only Memory) which is installed by the manufacturer and which contains the bootstrap program and some other routines that control the hardware (the BIOS).
Bootstrap loader: stored in ROM; it loads the operating system into memory starting at address 0, so it is an absolute (simple) loader.
Loader: a part of the OS; it performs loading and relocation for user programs.
Q: Explain why the bootstrap loader is an absolute (simple) loader. Sol: Assume the OS requires 1000 words (addresses 0 to 999).
As shown, the range of the logical addresses in the executable copy of the OS on the disk is the same as the physical address range in memory (0 to 999), so no relocation is needed: the bootstrap loader simply copies the OS into memory.
Q: Explain in detail how memory is managed in a multi-programming environment. Sol: Memory management in a multi-programming environment uses:
Swapping, contiguous allocation, and paging.
Swapping:
Q: Explain what is meant by swapping, and give an example. Sol: A process can be swapped out of memory to the disk, and then brought back into memory to continue execution.
[Figure: as in the earlier example, the 10 KB low-priority process is swapped out to the disk to leave 27 KB available for a 20 KB high-priority process (7 KB then remains); when the high-priority process finishes, the low-priority process is swapped back in.]
Contiguous Allocation
Q: Explain what is meant by contiguous allocation, and what are its different types?
Sol: There are two types: the simple method and the general method.
Simple Method
Divide memory into several fixed-sized partitions; each partition contains one process.
When a process terminates, its partition becomes available for another process.
It is no longer used (Explain why?) because it has several drawbacks:
1. The degree of multiprogramming is bounded by the number of partitions.
2. Internal fragmentation.
[Figure: memory is divided into three fixed 10 KB partitions holding Process 1 (7 KB), Process 2 (9 KB), and Process 3 (8 KB); the unused space inside each partition is internal fragmentation.]
As shown, this method suffers from internal fragmentation, and the degree of multiprogramming is bounded to 3 although it could be 4.
General Method
Initially, all memory is available for user processes and is considered as one large block of available memory (a hole). When a process arrives and needs memory, we search for a hole large enough for this process using one of:
First Fit
Best Fit
Worst Fit
If we find one, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests.
There are three different methods to find a suitable hole for a process:
First fit: allocate the first hole that is big enough (the fastest method).
Best fit: allocate the smallest hole that is big enough (produces the smallest leftover hole).
Worst fit: allocate the largest hole (produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach).
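The three placement strategies differ only in which hole they choose; a sketch over a list of (start, size) holes, where the hole sizes are illustrative:

def place(holes, size, strategy):
    # holes: list of (start_address, hole_size). Returns the chosen start address or None.
    candidates = [h for h in holes if h[1] >= size]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][0]                          # first hole that fits
    if strategy == "best":
        return min(candidates, key=lambda h: h[1])[0]    # smallest hole that fits
    if strategy == "worst":
        return max(candidates, key=lambda h: h[1])[0]    # largest hole
    raise ValueError(strategy)

# Illustrative holes of 30, 25 and 50 words starting at 1040, 1100 and 1150.
holes = [(1040, 30), (1100, 25), (1150, 50)]
for s in ("first", "best", "worst"):
    print(s, place(holes, 20, s))    # first 1040, best 1100, worst 1150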
[Figure: memory holds the OS and several loaded processes, with free holes of 4, 8, 9, and 7 KB; the input queue on the disk holds the processes waiting for memory, such as Process 4 (4 KB) and Process 5 (9 KB).]
As shown, the degree of multiprogramming changes according to the number of processes in memory (in the ready queue).
External Fragmentation
[Figure: as processes terminate and new ones are loaded, the free memory breaks into several small holes (4, 8, 9, and 7 KB) scattered between the processes; this is external fragmentation. Compaction moves the memory contents so that all the free memory forms one large block.]
Compaction: a movement of the memory contents to place all free memory together in one large block sufficient to store a new process. It is a solution for external fragmentation, but it is expensive and is not always possible.
Paging
Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous. It is commonly used in most operating systems.
Divide physical memory into fixed-sized blocks called frames. Divide the process into blocks of the same size called pages. Use a page table, which contains the base address (frame number) of each page in physical memory.
[Figure: a process P with pages 0 to 3 is mapped through a page table (page number to frame number) onto frames scattered in physical memory.]
Paging Example
A 32-byte memory, where each memory word is of size 1 byte (can store only one character), and the size of a page (and of a frame) is 4 bytes. Show how to store a 4-page process into memory using a page table. According to your page table, what are the physical addresses corresponding to the logical addresses 4 and 13?
Sol: Memory size = 32 bytes = 32 words. Page size = frame size = 4 bytes = 4 words. Number of frames = 32 / 4 = 8 frames (numbered 0 to 7).
For a logical address: page number = logical address div page size, offset = logical address mod page size, and physical address = (frame of that page) x page size + offset.
For example, if the page table maps page 0 to frame 5, page 1 to frame 6, page 2 to frame 1, and page 3 to frame 2, then logical address 4 (page 1, offset 0) maps to physical address 6 x 4 + 0 = 24, and logical address 13 (page 3, offset 1) maps to physical address 2 x 4 + 1 = 9.
Advantages of paging:
1. No external fragmentation. 2. It allows the process components (pages) to be noncontiguous in memory.
Problems in paging:
1. A possibility of internal fragmentation (space in the last frame that cannot be used).
Q: Explain when internal fragmentation occurs when using the paging technique. Sol: When the content of the last page of the process is smaller than the frame size, the remaining part of that frame is internal fragmentation that cannot be used.
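The translation rule from the example can be written as a short helper; the page table below is the illustrative mapping used above:

def to_physical(logical, page_table, page_size=4):
    # Split the logical address into (page, offset), then map the page to its frame.
    page, offset = divmod(logical, page_size)
    frame = page_table[page]
    return frame * page_size + offset

page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page number -> frame number (illustrative)
print(to_physical(4, page_table))     # page 1, offset 0 -> frame 6 -> physical 24
print(to_physical(13, page_table))    # page 3, offset 1 -> frame 2 -> physical 9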
Any Questions?