Assignment#2
Information Sharing:
Cooperating processes can share information with one another, allowing for more efficient data
exchange and improved functionality in applications that rely on shared data sources.
Computation Speed-Up:
By breaking down a task into smaller sub-tasks that can run concurrently, cooperating
processes can significantly reduce the overall time required to complete complex
computations.
Modularity:
Cooperation among processes allows for modular system design, where different functionalities
are handled by separate processes. This modularity makes the system easier to manage,
update, and debug.
Convenience:
Cooperative processing can enhance user convenience by allowing processes to perform tasks
such as running automatic backups, updating applications in the background, or maintaining a
responsive user interface while other intensive tasks run.
Preemptive Scheduling:
Definition: Preemptive scheduling allows the operating system to interrupt a running process and move
it back to the ready state so that another process can use the CPU.
Context Switch: A context switch can occur whenever a process moves from the running state to the
ready state or from the waiting state to the ready state.
Efficiency: It provides better responsiveness and ensures that high-priority processes get CPU time when
needed.
Examples: Round Robin, Shortest Remaining Time First (SRTF), Priority Scheduling (preemptive).
Non-Preemptive Scheduling:
Definition: In non-preemptive scheduling, once a process starts its execution, it cannot be interrupted
and must run until it completes or moves to a waiting state.
Context Switch: It occurs only when a process terminates or voluntarily moves to the waiting state.
Fairness: It can lead to longer wait times for high-priority processes if a low-priority process is running.
Examples: First-Come, First-Served (FCFS), Shortest Job Next (SJN), Priority Scheduling (non-
preemptive).
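As a rough illustration of preemptive scheduling, the sketch below simulates Round Robin with hypothetical burst times and a time quantum of 2 units; the process count, burst values, and quantum are assumptions chosen only for illustration. Each process runs for at most one quantum before being preempted and sent back to the ready queue.

/* Minimal sketch of Round Robin (preemptive) scheduling, assuming
 * hypothetical burst times and a time quantum of 2 units. */
#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};          /* remaining burst time of P1, P2, P3 */
    int n = 3, quantum = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0)
                continue;             /* this process has already finished */
            int run = burst[i] < quantum ? burst[i] : quantum;
            time += run;              /* run for at most one quantum */
            burst[i] -= run;
            if (burst[i] == 0) {
                done++;
                printf("P%d finishes at time %d\n", i + 1, time);
            }                         /* otherwise it is preempted and requeued */
        }
    }
    return 0;
}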
Q3: Describe the functionality of the fork system call. How can we know whether the
child process was created or not, and whether a process is the newly created child?
Ans : Functionality of fork System Call:
The fork system call is used in Unix and Linux systems to create a new process. This new process is called
the child process, and it runs concurrently with the parent process.
When fork is called, it creates a duplicate of the calling process, copying the entire process image,
including the program counter, stack, and variables.
The child process receives a unique process ID (PID), but it is an exact copy of the parent process at the
moment of fork.
The fork system call returns different values in the parent and child processes:
In the parent process, fork returns the PID of the child process.
In the child process, fork returns 0.
If fork returns a negative value, it indicates that the creation of the child process was unsuccessful.
Newly Created Child Process:
The newly created child process can be identified by checking the return value of the fork call:
If the return value is 0, it indicates that the process is the child process.
This distinction allows the parent and child processes to execute different code based on the return
value of fork.
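A minimal sketch of this check is shown below; the printed messages are illustrative only, while the return-value tests follow the rules described above.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create a child process */

    if (pid < 0) {
        perror("fork failed");        /* negative return: child was not created */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* return value 0: this branch runs in the newly created child */
        printf("Child process, PID = %d\n", getpid());
    } else {
        /* positive return value: this branch runs in the parent,
         * and pid holds the child's PID */
        printf("Parent process, child PID = %d\n", pid);
        wait(NULL);                   /* wait for the child to terminate */
    }
    return 0;
}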
Q4: What is the difference between the shared memory and message passing
models? Describe the situations in which each model is suitable.
Ans: Differences Between Shared Memory and Message Passing Models
Communication Mechanism:
Shared Memory: Processes communicate by reading and writing to a shared region of memory. The
operating system is involved only in setting up the shared region; afterwards, the processes access it
directly without kernel intervention.
Message Passing: Processes communicate by sending and receiving messages. The operating system
provides the necessary facilities to handle message exchange, ensuring synchronization and
communication.
Data Sharing:
Shared Memory: Allows maximum speed and convenience since processes directly access the shared
memory. However, it requires proper synchronization mechanisms like semaphores or mutexes to avoid
race conditions.
Message Passing: Involves overhead due to message copying and the system calls involved in sending
and receiving messages. It inherently provides synchronization as messages are atomic and follow a
specific order.
Suitability:
Shared Memory: Best suited for applications that require high-speed communication and can efficiently
handle synchronization. Typical examples include real-time systems and applications with significant
data sharing between processes, such as multimedia processing and scientific simulations.
Message Passing: Ideal for distributed systems where processes might reside on different machines. It is
also suitable for environments where ease of use and modularity are more critical than raw
performance, such as network services and inter-process communication in microservices architectures.
Typical Use Cases:
Shared Memory:
High-performance computing applications where speed and efficient data sharing are critical.
Applications requiring frequent and large data exchanges, such as graphics rendering and large-scale
simulations.
Message Passing:
Systems requiring strong modularity and ease of maintenance, like microservices and network services.
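As a small sketch of the message-passing model, the code below uses an anonymous pipe between a parent and its child; the message text is an arbitrary example, and the operating system handles copying the data between the two processes.

/* Minimal sketch of message passing via an anonymous pipe:
 * the parent sends a message and the child receives it. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) {             /* fd[0] = read end, fd[1] = write end */
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                   /* child: the receiver */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("Child received: %s\n", buf);
        }
        close(fd[0]);
    } else if (pid > 0) {             /* parent: the sender */
        close(fd[0]);
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}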
Q5: Which parameters are required to measure the efficiency of any algorithm?
Ans : To measure the efficiency of an algorithm, two primary parameters are considered:
Time Complexity:
This parameter measures the amount of time an algorithm takes to complete as a function of the size of
the input. Time complexity is usually expressed using Big O notation, which provides an upper bound on
the running time as the input size grows. It helps in comparing the performance of different algorithms
by giving a theoretical estimate of their running time for large inputs.
Space Complexity:
This parameter measures the amount of memory space an algorithm uses as a function of the size of the
input. Like time complexity, space complexity is also expressed in Big O notation. It considers both the
memory needed for the algorithm's variables and the additional space used by the data structures
within the algorithm.
These two metrics are critical in determining the overall efficiency of an algorithm because they provide
insights into the resource usage (time and memory) that an algorithm requires, allowing for better
optimization and selection of appropriate algorithms for different applications.
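As a simple illustration, the function below sums an array of n elements: it runs in O(n) time because the loop body executes once per element, and uses O(1) extra space because only a single accumulator is kept regardless of n. The example values are arbitrary.

/* Summing an array: O(n) time, O(1) extra space. */
#include <stdio.h>

long sum_array(const int *a, int n) {
    long total = 0;                   /* O(1) extra space: one accumulator */
    for (int i = 0; i < n; i++)       /* loop body executes n times -> O(n) time */
        total += a[i];
    return total;
}

int main(void) {
    int data[] = {3, 1, 4, 1, 5, 9};
    printf("sum = %ld\n", sum_array(data, 6));
    return 0;
}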
Q6: Write at least four conditions in which the short-term scheduler is invoked. Draw
its diagram.
Ans :
The short-term scheduler, also known as the CPU scheduler, is responsible for selecting which process in
the ready queue should be executed next by the CPU. Here are at least four conditions under which the
short-term scheduler is invoked:
Process Creation:
When a new process is created, it may be added to the ready queue, necessitating a decision about
whether it should be executed immediately or later.
Process Termination:
When a process terminates, the CPU becomes available, and the scheduler must select the next process
from the ready queue to execute.
I/O Completion:
When a process completes an I/O operation and becomes ready to execute, the scheduler may need to
decide if this process should be given CPU time.
Interrupt Occurrence:
When an interrupt occurs (e.g., from a timer), the current process may be preempted, and the scheduler
must decide which process should execute next.
These conditions help in managing the CPU's time efficiently, ensuring that processes are executed in a
fair and timely manner.
Q7: Discuss how the FCFS algorithm leads processes into the situation called the
convoy effect.
Ans : The First Come, First Served (FCFS) scheduling algorithm can lead to a situation known as the
convoy effect. This phenomenon occurs when a large or resource-intensive process ties up the system
resources, causing a delay for all other processes that arrive later. Here’s how it happens:
Non-Preemptive Nature:
Processes are executed in the order they arrive. If a long process arrives first, it will occupy the
CPU for its entire burst time, regardless of the length of subsequent processes.
When a long process is running, shorter processes that arrive afterward must wait until the long
process finishes. This creates a queue where all processes are delayed by the initial long
process.
Resource Utilization:
The convoy effect can lead to inefficient resource utilization, as I/O-bound processes may spend
a significant amount of time waiting for the CPU, causing an overall increase in system response
time and reducing throughput.
For example, if a CPU-intensive process with a long burst time is followed by several short
processes, the short processes are forced to wait for the long process to complete, creating a
"convoy" behind the long process. This can significantly degrade system performance,
especially in time-sharing systems where responsiveness is critical.
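A rough numeric sketch of this effect is shown below, using hypothetical burst times of 24, 3, and 3 units chosen to mirror the scenario above; under FCFS each later arrival waits behind everything that came before it.

/* Sketch of the convoy effect under FCFS with hypothetical burst times:
 * one long process (24 units) arrives first, followed by two short ones. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};         /* P1 is CPU-intensive, P2 and P3 are short */
    int n = 3, wait = 0;
    double total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d units\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];             /* later arrivals wait behind earlier ones */
    }
    printf("Average waiting time = %.2f units\n", total_wait / n);
    /* P2 and P3 wait 24 and 27 units, stuck in a "convoy" behind P1,
     * giving an average wait of 17 units. If the short processes ran
     * first, the average would drop to (0 + 3 + 6) / 3 = 3 units. */
    return 0;
}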