Operating System: Semester 5
UNIT - 2
INTRODUCTION
Process Concept
A process is a program in execution: a sequence of instructions being carried
out by the CPU at a given point in time. Each process has its own unique
memory space, which allows multiple processes to run concurrently without
interfering with each other. Processes can be created in several ways, such as
by executing a new program or by forking an existing process.
Process Scheduling
1. First-come, first-served (FCFS):
Processes are executed in the order in which they arrive at the ready queue.
For example, suppose three processes, P1, P2, and P3, arrive at the ready
queue at times 0, 2, and 4, respectively. Since the CPU handles one process at
a time, it runs P1 from time 0 to completion, then P2 (which has been waiting
since time 2), and finally P3, regardless of how long each process's CPU burst
is. (A code sketch of the waiting-time arithmetic follows this list.)
2. Round-robin:
Each process is assigned a fixed amount of CPU time (known as a time slice or
time quantum), and the CPU cycles through the ready queue.
For example, suppose we have three processes, P1, P2, and P3, with a time
slice of 5 milliseconds. The CPU runs P1 for 5 milliseconds starting at time 0,
switches to P2 for 5 milliseconds at time 5, switches to P3 for 5 milliseconds
at time 10, and then returns to P1 at time 15. This cycle continues until every
process has finished its CPU burst.
3. Priority scheduling:
Each process is assigned a priority based on its importance or urgency, and
the highest-priority ready process is executed first.
For example, suppose we have three processes, P1 (high priority), P2
(medium priority), and P3 (low priority), arriving at the ready queue at times
0, 2, and 4, respectively. The CPU runs P1 from time 0 to completion (since it
has the highest priority), then P2 (which outranks P3), and finally P3, once no
higher-priority process is waiting.
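To make the FCFS arithmetic concrete, here is a minimal sketch in C; the process names and arrival/burst times are assumptions chosen for illustration, not data from the text:

/* FCFS scheduling: compute per-process and average waiting and
   turnaround times. Hypothetical example data. */
#include <stdio.h>

struct proc { const char *name; int arrival, burst; };

int main(void) {
    struct proc p[] = { {"P1", 0, 7}, {"P2", 2, 4}, {"P3", 4, 1} };
    int n = 3, time = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < p[i].arrival)          /* CPU idles until the process arrives */
            time = p[i].arrival;
        int wait = time - p[i].arrival;   /* time spent in the ready queue */
        int tat  = wait + p[i].burst;     /* turnaround = waiting + burst  */
        time += p[i].burst;               /* run the process to completion */
        total_wait += wait;
        total_tat  += tat;
        printf("%s: waiting=%d turnaround=%d\n", p[i].name, wait, tat);
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}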
Operations on Processes
In computer science, an operating system manages multiple processes running
on a computer system. These processes can interact with each other in various
ways, and the operating system provides a set of operations to manipulate
them. Here are some common operations on processes:
1. Creation: This operation creates a new process in the system. The operating
system allocates resources such as memory, CPU time, and I/O devices to the
new process.
2. Termination: This operation destroys a process and releases all the resources
it has been using.
3. Context Switch: This operation saves the current state of a running process
(known as its context) and restores the saved context of another process.
Context switches are necessary when the operating system needs to switch
between processes for scheduling or resource-allocation reasons.
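Here is a minimal sketch in C (Unix-like systems) of the creation and termination operations, using fork(), exit(), and wait(); error handling is kept to a minimum:

/* The parent creates a child with fork(); the child terminates with
   exit(); the parent reclaims it with wait(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* creation: duplicate this process */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {              /* child branch */
        printf("child %d running\n", (int)getpid());
        exit(42);                /* termination: release all resources */
    }
    int status;
    wait(&status);               /* parent blocks until the child exits */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}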
The following terms come up repeatedly when discussing process scheduling:
1. Ready Queue: This is the list of processes that are waiting to be executed by
the CPU. Processes enter the ready queue once they have been loaded into
memory and are ready to run.
2. Turnaround Time: This is the total time it takes for a process to complete
execution, from the time it enters the system until it exits. Turnaround time
includes both the CPU burst and any wait times for I/O operations or resource
allocation.
3. Response Time: This is the time from when a process is submitted to the
system until it produces its first response, i.e., until it is first allocated the
CPU. Unlike turnaround time, it does not cover the whole execution; it mainly
reflects delays caused by other processes running ahead of this one.
4. Scheduling Algorithm: This is the set of rules used by the operating system to
select which process should be executed next by the CPU. Scheduling
algorithms can be classified into several categories, such as first-come, first-
served (FCFS), round-robin, priority scheduling, and multilevel feedback queue
(MLFQ). Each algorithm has its own trade-offs between CPU utilization,
response time, and wait times for processes.
5. Context Switch: This is the process of saving a running process's current
state (known as its context) in memory and restoring another process's context
when switching between processes for scheduling or resource-allocation
reasons. Context switches add overhead to CPU scheduling, as they require
additional memory accesses and processing time.
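For reference, these timing quantities are related by a few standard formulas (summarized here; "completion time" is the moment a process finishes):

Turnaround time = Completion time - Arrival time
Waiting time = Turnaround time - CPU burst time
Response time = Time of first CPU allocation - Arrival time

For example, a process that arrives at time 2, needs a 4-unit CPU burst, and finishes at time 10 has a turnaround time of 10 - 2 = 8 units and a waiting time of 8 - 4 = 4 units.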
Scheduling Criteria
Scheduling criteria are the factors that operating systems use to select which
process should be executed next by the CPU. These criteria help balance the
competing demands of CPU utilization, response time, and wait times for
processes. Here are some common scheduling criteria:
1. Response Time: This criterion aims to minimize the amount of time it takes
for a process to respond after it has been submitted to the system. Short
response times are critical for interactive processes, such as those used in
graphical user interfaces (GUIs).
2. Fairness: This criterion aims to ensure that all processes receive a fair share of
CPU resources over time, regardless of their initial priority or arrival time.
Fairness is critical for ensuring that no single process monopolizes the CPU and
prevents other processes from completing in a reasonable amount of time.
3. Deadline: This criterion is used for real-time processes that have strict timing
constraints, such as those used in embedded systems or scientific computing
applications. Deadline scheduling algorithms aim to ensure that all real-time
processes complete within their specified deadlines, even in the presence of
other processes with less stringent timing requirements.
GANTT CHART
If the CPU scheduling policy is Round Robin with time quantum = 2 units,
calculate the average waiting time and average turnaround time.
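The process table for this exercise is not reproduced here, so the following worked example uses assumed data: P1 (arrival 0, burst 5), P2 (arrival 1, burst 3), and P3 (arrival 2, burst 1), with the convention that a newly arriving process enters the ready queue ahead of a preempted one. With time quantum = 2 units, the Gantt chart is:

| P1 | P2 | P3 | P1 | P2 | P1 |
0    2    4    5    7    8    9

Turnaround time = completion time - arrival time: P1 = 9 - 0 = 9, P2 = 8 - 1 = 7, P3 = 5 - 2 = 3, so average turnaround time = (9 + 7 + 3) / 3 ≈ 6.33 units.
Waiting time = turnaround time - burst time: P1 = 9 - 5 = 4, P2 = 7 - 3 = 4, P3 = 3 - 1 = 2, so average waiting time = (4 + 4 + 2) / 3 ≈ 3.33 units.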
PRIORITY SCHEDULING
This algorithm assigns each process a priority level based on its importance or
urgency, and executes the process with the highest priority first. High-priority
processes are executed before lower-priority processes, even if they have
shorter CPU bursts or require less CPU time overall. However, lower-priority
processes may experience starvation: if higher-priority processes keep arriving,
a low-priority process can wait indefinitely for the CPU.
PROCESS SYNCHRONIZATION
Mutual Exclusion: This mechanism ensures that only one process or thread
can access a shared resource at a time. It prevents data races and ensures that
the resource is not corrupted by multiple processes accessing it simultaneously.
The critical section of a process is the portion of code that accesses the shared
resource. To prevent multiple processes from accessing the resource
simultaneously, mutual exclusion is required: it guarantees that only one
process can be inside its critical section at a time, preventing data races and
keeping the shared data consistent.
A semaphore is a variable that can take integer values, and it is used to control
access to a shared resource. A semaphore is initialized with a non-negative
integer value, which represents the number of processes that can access the
shared resource simultaneously.
To access the shared resource, a process must first acquire the semaphore. If
the semaphore value is greater than zero, the process can enter its critical
section and access the shared resource. The semaphore value is then
decremented to reflect that the resource is now being used by one process.
Semaphores are useful in solving the critical section problem because they
provide a mechanism for mutual exclusion: a binary semaphore (initialized to 1)
guarantees that only one process can be in its critical section at a time. A
counting semaphore (initialized to some N > 1) also supports resource sharing,
allowing up to N processes to use the shared resource simultaneously, as long
as resources remain available.
If several processes try to enter the critical section at the same time, only one
succeeds, because the semaphore's value is decremented to zero by the first
acquisition. The others must wait until the value becomes greater than zero
again, indicating that the resource is available.
When a process is done using the shared resource, it releases the semaphore by
incrementing its value, which allows one of the waiting processes to enter its
critical section and use the resource. (A minimal code sketch follows.)
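Here is a minimal sketch of this idea in C, using POSIX semaphores and two threads on Linux (error checking omitted for brevity; compile with gcc demo.c -pthread):

/* Two threads increment a shared counter; a binary semaphore makes
   the increment a critical section. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;          /* binary semaphore, initialized to 1 */
static long counter = 0;     /* the shared resource */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* acquire: decrement, block if value is 0 */
        counter++;           /* critical section */
        sem_post(&mutex);    /* release: increment, wake one waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);  /* 0 = shared among threads; initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000 every run, thanks to the semaphore */
    sem_destroy(&mutex);
    return 0;
}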
Linux also exposes process priorities through two shell commands:
1. nice: This command starts a new process with a lower priority than normal,
which can help prevent it from monopolizing system resources and causing
other processes to slow down or freeze. Use nice with an optional niceness
value given via -n, from -20 (highest priority) to 19 (lowest priority), followed
by the command you want to run.
2. renice: This command changes the nice value of an already running process,
which can help stop it from hogging resources or slowing down other
processes. Use renice with the new nice value (again between -20 and 19) and
the PID of the process you want to modify.
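The same adjustment can be made programmatically; here is a minimal C sketch using getpriority() and setpriority() from <sys/resource.h> (the target niceness of 10 is just an example value):

/* Lower the calling process's own priority, roughly what `renice`
   does for another PID. */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* A `who` argument of 0 means "the calling process". */
    int before = getpriority(PRIO_PROCESS, 0);
    setpriority(PRIO_PROCESS, 0, 10);    /* niceness 10 = lower priority */
    int after = getpriority(PRIO_PROCESS, 0);
    printf("nice value: %d -> %d\n", before, after);
    return 0;
}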
SYSTEM CALLS
In computer systems, system calls are requests made by a running program to
the operating system (OS) to perform privileged tasks that the program cannot
carry out directly in user mode. These tasks may include reading or writing
data on a disk, sending or receiving data over a network, or managing system
resources like memory and CPU time. Common examples on Unix-like systems
include:
read() and write(): Used for input/output (I/O) operations, such as reading
data from a file or writing data to a network socket.
open() and close(): Used for managing file descriptors, such as opening a file
for reading or writing, and closing it when done.
fork() and execve(): Used for process management, such as creating a new
process (fork()) and replacing the current process image with a new
program (execve()).
malloc() and free(): Used for memory management, such as allocating and
deallocating dynamic memory. Strictly speaking, these are C library
functions rather than system calls; under the hood they rely on system
calls such as brk() and mmap().
signal() and sigaction(): Used for handling signals (interrupts), such as
registering a function to be called when a specific signal is received.
waitpid() and kill(): Used for process control, such as waiting for a specific
child process to complete (waitpid()), and sending a signal to another
process (kill()).
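As a small illustration of the I/O-related calls above, this sketch copies one file to another using open(), read(), write(), and close(); the file names are hypothetical and error handling is minimal:

/* Copy src.txt to dst.txt with raw system calls. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    ssize_t n;
    int in  = open("src.txt", O_RDONLY);                            /* hypothetical input  */
    int out = open("dst.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* hypothetical output */
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(in, buf, sizeof buf)) > 0)  /* read a chunk...          */
        write(out, buf, (size_t)n);              /* ...and write it back out */
    close(in);                                   /* release both descriptors */
    close(out);
    return 0;
}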