Oops Java Advance
Process Management and Synchronization - The Critical Section Problem, Synchronization Hardware,
Semaphores, and Classical Problems of Synchronization, Critical Regions, Monitors.
Inter-process Communication Mechanisms: IPC between processes on a single computer system, IPC
between processes on different systems, using pipes, FIFOs, message queues, shared memory.
The process synchronization problem also arises with cooperating processes, because cooperating processes share resources.
EXAMPLE:
The Producer-Consumer problem is a classic synchronization issue in operating systems. It involves two
types of processes: producers, which generate data, and consumers, which process that data. Both share
a common buffer. The challenge is to ensure that the producer doesn’t add data to a full buffer and the
consumer doesn’t remove data from an empty buffer while avoiding conflicts when accessing the
buffer.
Although the producer and consumer routines are correct separately, they may not function correctly when executed concurrently. As an illustration, suppose that the value of the variable counter is currently 5 and that the producer and consumer processes execute the statements "counter++" and "counter--" concurrently. Following the execution of these two statements, the value of counter may be 4, 5, or 6! The only correct result, though, is counter == 5, which is generated correctly if the producer and consumer execute separately.
We can show that the value of counter may be incorrect as follows. Note that the statement "counter++" may be implemented in machine language as
register1 = counter
register1 = register1 + 1
counter = register1
where register1 is a local CPU register. Similarly, the statement "counter--" is implemented as follows:
register2 = counter
register2 = register2 - 1
counter = register2
where again register2 is a local CPU register. Even though register1 and register2 may be the same physical register (an accumulator, say), remember that the contents of this register will be saved and restored by the interrupt handler. The concurrent execution of "counter++" and "counter--" is equivalent to a sequential execution in which the lower-level statements presented previously are interleaved in some arbitrary order (but the order within each high-level statement is preserved). One such interleaving is:
T0: producer executes register1 = counter       {register1 = 5}
T1: producer executes register1 = register1 + 1 {register1 = 6}
T2: consumer executes register2 = counter       {register2 = 5}
T3: consumer executes register2 = register2 - 1 {register2 = 4}
T4: producer executes counter = register1       {counter = 6}
T5: consumer executes counter = register2       {counter = 4}
Notice that this interleaving leaves counter with the incorrect value 4; a different interleaving could leave it at 6.
PETERSON’S SOLUTION
Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s
solution, we have two shared variables:
boolean flag[2]: initialized to FALSE; flag[i] is set to TRUE when process Pi wants to enter the critical section (initially no one is interested).
int turn: indicates the process whose turn it is to enter the critical section.
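The entry and exit code built from these two variables can be sketched in C-like pseudocode as follows, for process Pi with Pj as the other process (this follows the standard statement of Peterson's algorithm):
do
{
    flag[i] = true;                 /* Pi is interested in entering */
    turn = j;                       /* give priority to the other process */
    while (flag[j] && turn == j)
        ;                           /* busy-wait while Pj is interested and it is Pj's turn */
    /* critical section */
    flag[i] = false;                /* Pi leaves the critical section */
    /* remainder section */
} while (true);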
SYNCHRONIZATION HARDWARE
Uniprocessor systems: a process can prevent preemption while it manipulates shared data simply by disabling interrupts around the critical section:
Disable interrupts
    critical section
Enable interrupts
    remainder section
Multiprocessor systems: special hardware instructions. Disabling interrupts on a multiprocessor can be time consuming, since the message must be passed to all the processors. This message passing delays entry into each critical section, and system efficiency decreases. Many modern computer systems therefore provide special hardware instructions such as test_and_set() and compare_and_swap() that allow us either to test and modify the content of a word or to swap the contents of two words atomically, that is, as one uninterruptible unit.
do
{
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
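1. Test and Set Instruction
The test_and_set() instruction mentioned above is conventionally defined as shown below (a C-style sketch; the hardware executes the entire function atomically). Acquiring and releasing the lock in the loop above can then be done with it, where lock is a shared boolean initialized to false.
boolean test_and_set(boolean *target)
{
    boolean rv = *target;   /* remember the old value */
    *target = true;         /* set the lock */
    return rv;              /* the whole function executes as one uninterruptible unit */
}
Mutual exclusion using test_and_set:
do
{
    while (test_and_set(&lock))
        ;                   /* busy-wait until the lock was previously false */
    /* critical section */
    lock = false;           /* release the lock */
    /* remainder section */
} while (true);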
2. Swap Instruction
The swap instruction can also be used for mutual exclusion.
Definition
void swap(boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}
Algorithm
do
{
    key = true;
    while (key == true)
        swap(lock, key);
    critical section
    lock = false;
    remainder section
} while (1);
lock is a global variable initialized to false; each process has a local variable key. When a process wants to enter the critical section, the value of lock is false and key is set to true:
lock = false
key = true
After the swap instruction,
lock = true
key = false
Since key is now false, the process exits the loop and enters its critical section. While that process is in the critical section, lock = true, so other processes wanting to enter the critical section will find
lock = true
key = true
after the swap, and will therefore busy-wait in the loop until the first process exits the critical section and sets lock back to false.
SEMAPHORES
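A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
1) wait operation: Wait decrements the semaphore by 1; if the value is not positive, the process waits (busy-waits, in the simple definition sketched here) until it becomes positive.
Definition of the wait() operation (a standard busy-waiting sketch):
wait (S)
{
    while (S <= 0)
        ;       /* busy-wait until the semaphore becomes positive */
    S--;
}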
2) signal operation: Signal increments the semaphore by 1; if the value was not positive, one of the processes blocked (or busy-waiting) in a wait operation can now proceed.
Definition of the signal() operation
signal (S)
{
    S++;
}
Types of Semaphores
Semaphores are of two Types:
Binary Semaphore: This is also known as a mutex lock. It can have only two values – 0 and 1. Its value
is initialized to 1. It is used to implement the solution of critical section problems with multiple processes.
Counting Semaphore: Its value can range over an unrestricted domain. It is used to control access to a
resource that has multiple instances.
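As a sketch, if a resource has three identical instances, a counting semaphore S can be initialized to 3; each process acquires an instance with wait() and releases it with signal():
semaphore S = 3;   /* three instances of the resource */

wait (S);          /* acquire one instance; a process waits when all three are in use */
/* ... use one instance of the resource ... */
signal (S);        /* release the instance */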
Semaphores Implementation
do
{
    wait (mutex);
    // Critical Section
    signal (mutex);
    // remainder section
} while (TRUE);
The first process that executes the wait() operation is granted entry immediately, and sem.count drops to 0. If another process then wants the critical section and executes wait(), it is blocked, since the value becomes -1. When the first process exits the critical section it executes signal(); sem.count is incremented by 1, and a blocked process is removed from the semaphore's queue and added to the ready queue.
Advantages
1. Semaphores allow only one process into the critical section. They follow the mutual exclusion principle strictly and are much more efficient than some other methods of synchronization.
2. There is no resource wastage because of busy waiting in semaphores, as processor time is not wasted unnecessarily to check if a condition is fulfilled to allow a process to access the critical section.
3. Semaphores are implemented in the machine-independent code of the microkernel, so they are machine independent.
Problems:
1) Deadlock: Deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other blocked processes.
2) Starvation: One or more processes get blocked forever and never get a chance to take their turn in the critical section.
3) Priority inversion: If a low-priority process is running, medium-priority processes are waiting for the low-priority process, and high-priority processes are waiting for the medium-priority processes, this situation is called priority inversion.
1. Bounded-buffer problem
Two processes share a common, fixed-size buffer. The producer puts information into the buffer and the consumer takes it out. A problem arises when the producer wants to put a new item in the buffer but it is already full; the solution is for the producer to wait until the consumer has consumed at least one item. Similarly, if the consumer wants to remove an item from the buffer and sees that the buffer is empty, it goes to sleep until the producer puts something in the buffer and wakes it up.
The producer and consumer processes share the following data structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
We assume that the pool consists of n buffers, each capable of holding one item. The mutex semaphore
provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and
full semaphores count the number of empty and full buffers. The semaphore empty is initialized to the
value n; the semaphore full is initialized to the value 0.
The structure of the producer process:
do
{
    ...
    /* produce an item in next_produced */
    ...
    wait (empty);
    wait (mutex);
    ...
    /* add next_produced to the buffer */
    ...
    signal (mutex);
    signal (full);
} while (true);
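The consumer is symmetric: it waits on full before removing an item and signals empty afterwards. A sketch of the consumer process:
do
{
    wait (full);
    wait (mutex);
    ...
    /* remove an item from the buffer into next_consumed */
    ...
    signal (mutex);
    signal (empty);
    ...
    /* consume the item in next_consumed */
    ...
} while (true);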
3. The Dining-Philosophers Problem
Consider five philosophers who spend their lives thinking and eating. The philosophers share a circular
table surrounded by five chairs, each belonging to one philosopher. In the centre of the table is a bowl of
rice, and the table is laid with five single chopsticks. When a philosopher thinks, she does not interact with
her colleagues. From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that
are closest to her (the chopsticks that are between her and her left and right neighbours). A philosopher
may pick up only one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the
hand of a neighbour. When a hungry philosopher has both her chopsticks at the same time, she eats
without releasing the chopsticks. When she is finished eating, she puts down both chopsticks and starts
thinking again.
Solution
One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore and releases her chopsticks by executing the signal() operation on the appropriate semaphores. Thus, the shared data are
semaphore chopstick[5];
where all the elements of chopstick are initialized to 1.
The structure of philosopher i:
do
{
    wait (chopstick[i]);
    wait (chopstick[(i + 1) % 5]);
    // eat for a while
    signal (chopstick[i]);
    signal (chopstick[(i + 1) % 5]);
    // think for a while
} while (TRUE);
Although this solution guarantees that no two neighbours are eating simultaneously, it nevertheless must
be rejected because it could create a deadlock. Suppose that all five philosophers become hungry at the
same time and each grabs her left chopstick. All the elements of chopstick will now be equal to 0. When
each philosopher tries to grab her right chopstick, she will be delayed forever.
Several possible remedies to the deadlock problem are the following:
Allow at most four philosophers to be sitting simultaneously at the table.
Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to
do this, she must pick them up in a critical section).
Use an asymmetric solution—that is, an odd-numbered philosopher picks up first her
left chopstick and then her right chopstick, whereas an even numbered philosopher
picks up her right chopstick and then her left chopstick.
MONITORS
Although semaphores provide a convenient and effective mechanism for process synchronization, using them incorrectly can result in errors such as the following.
Suppose that a process interchanges the order in which the wait() and signal() operations on the semaphore mutex are executed, resulting in the following execution:
signal (mutex);
...
critical section
...
wait (mutex);
In this situation, several processes may be executing in their critical sections simultaneously, violating the
mutual-exclusion requirement. This error may be discovered only if several processes are simultaneously
active in their critical sections.
Suppose that a process replaces signal (mutex) with wait (mutex). That is, it executes
wait (mutex);
...
critical section
...
wait (mutex);
In this case, a deadlock will occur.
Suppose that a process omits the wait (mutex), or the signal (mutex), or both. In this case, either mutual
exclusion is violated or a deadlock will occur. To deal with such errors, researchers have developed one
fundamental high-level synchronization construct—the monitor type.
1. Monitor Usage
A monitor type is an abstract data type (ADT) that includes a set of programmer-defined operations that are provided with mutual exclusion within the monitor. The monitor type also declares the variables whose values define the state of an instance of that type, along with the bodies of functions that operate on those variables.
monitor monitor_name
{
    /* shared variable declarations */
    function P1 ( . . . ) { . . . }
    function P2 ( . . . ) { . . . }
    .
    .
    .
    function Pn ( . . . ) { . . . }
    initialization code ( . . . ) { . . . }
}
The monitor construct ensures that only one process at a time is active within the monitor. Consequently,
the programmer does not need to code this synchronization constraint explicitly.
Condition Variables: The monitor construct is not sufficiently powerful for modelling some synchronization schemes. For this purpose, we need to define additional synchronization mechanisms. These mechanisms are provided by the condition construct:
condition x, y;
The only operations that can be invoked on a condition variable are wait () and signal (). The operation
x.wait ();
means that the process invoking this operation is suspended until another process invokes
x.signal ();
The x.signal () operation resumes exactly one suspended process. If no process is suspended,
then the signal () operation has no effect.
Implementing Condition Variables
For each condition x, we introduce a semaphore x_sem and an integer variable x_count, both initialized to 0 (the semaphore next, its counter next_count and the semaphore mutex belong to the standard semaphore-based implementation of the monitor itself).
The operation x.wait() can now be implemented as:
x_count++;
if (next_count > 0)
    signal (next);
else
    signal (mutex);
wait (x_sem);
x_count--;
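For completeness, the corresponding x.signal() operation can be implemented in the same scheme as follows (a sketch of the standard semaphore-based monitor implementation):
if (x_count > 0)
{
    next_count++;
    signal (x_sem);   /* resume one process suspended on condition x */
    wait (next);      /* the signaling process waits until the resumed process leaves */
    next_count--;
}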
PIPES
A pipe is a communication medium between two or more related or interrelated processes. It can be either within one process or a communication between the child and the parent processes. Communication can also be multi-level, such as communication between the parent, the child and the grand-child, etc. Communication is achieved by one process writing into the pipe and the other reading from the pipe. The pipe system call creates two file descriptors: one for reading from the pipe and one for writing into it.
Pipe mechanism can be viewed with a real-time scenario such as filling water with the pipe into some
container, say a bucket, and someone retrieving it, say with a mug. The filling process is nothing but
writing into the pipe and the reading process is nothing but retrieving from the pipe. This implies that one
output (water) is input for the other (bucket).
#include<unistd.h>
int pipe(int pipedes[2]);
This system call creates a pipe for one-way communication, i.e., it creates two descriptors: the first one is connected to read from the pipe and the other one is connected to write into the pipe.
Descriptor pipedes[0] is for reading and pipedes[1] is for writing. Whatever is written into pipedes[1] can be read
from pipedes[0].
This call would return zero on success and -1 in case of failure. To know the cause of failure, check with errno
variable or perror() function.
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
int open(const char *pathname, int flags);
int open(const char *pathname, int flags, mode_t mode);
Even though the basic operations for file are read and write, it is essential to open the file before performing the
operations and closing the file after completion of the required operations.
#include<unistd.h>
int close(int fd);
The above system call closes an already opened file descriptor. This implies the file is no longer in use and the resources associated with it can be reused by another process. This system call returns zero on success and -1 in case of error. The cause of the error can be identified with the errno variable or the perror() function.
#include<unistd.h>
ssize_t read(int fd, void *buf, size_t count);
The above system call reads from the specified file, with arguments of the file descriptor fd, a proper buffer with allocated memory (either static or dynamic) and the size of the buffer.
The file descriptor fd identifies the file; it is returned by the open() or pipe() system call. The file needs to be opened before reading from it; it is opened automatically when the pipe() system call is used.
This call returns the number of bytes read (or zero on encountering the end of the file) on success and -1 in case of failure. The number of bytes returned can be smaller than the number requested, in case less data is available or the file is closed. A proper error number is set in case of failure; to know the cause of failure, check the errno variable or use the perror() function.
#include<unistd.h>
ssize_t write(int fd, const void *buf, size_t count);
The above system call writes to the specified file, with arguments of the file descriptor fd, a proper buffer with allocated memory (either static or dynamic) and the size of the buffer.
The file descriptor fd identifies the file; it is returned by the open() or pipe() system call. The file needs to be opened before writing to it; it is opened automatically when the pipe() system call is used.
This call would return the number of bytes written (or zero in case nothing is written) on success and -1 in case of
failure. Proper error number is set in case of failure.
To know the cause of failure, check with errno variable or perror() function.
Example Program
Algorithm
Step 1 − Create a pipe.
Step 2 − Send a message to the pipe.
Step 3 − Retrieve the message from the pipe and write it to the standard output.
Step 4 − Send another message to the pipe.
Step 5 − Retrieve the message from the pipe and write it to the standard output.
Note − Retrieving messages can also be done after sending all messages.
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#define MSG_LEN 64
int main(){
int result;
int fd[2];
char message[MSG_LEN];
char recvd_msg[MSG_LEN];
result = pipe (fd);
if (result < 0)
{
perror("pipe ");
exit(1);
}
strncpy(message,"Linux World!! ",MSG_LEN);
result=write(fd[1],message,strlen(message));
if (result < 0)
{
perror("write");
exit(2);
}
strncpy(message,"Understanding ",MSG_LEN);
result=write(fd[1],message,strlen(message));
if (result < 0)
{
perror("write");
exit(2);
}
strncpy(message,"Concepts of ",MSG_LEN);
result=write(fd[1],message,strlen(message));
if (result < 0)
{
perror("write");
exit(2);
}
strncpy(message,"Piping ", MSG_LEN);
result=write(fd[1],message,strlen(message));
if (result < 0)
{
perror("write");
exit(2);
}
result=read(fd[0],recvd_msg,MSG_LEN-1);
if (result < 0)
{
perror("read");
exit(3);
}
recvd_msg[result]='\0'; /* null-terminate the bytes read from the pipe before printing */
printf("%s\n",recvd_msg);
return 0;
}
OUTPUT: Linux World!! Understanding Concepts of Piping
Two-way Communication Using Pipes
Pipe communication as described so far is one-way communication, i.e., either the parent process writes and the child process reads, or vice-versa, but not both. If both the parent and the child need to write to and read from the pipes simultaneously, the solution is two-way communication using pipes. Two pipes are required to establish two-way communication.
Following are the steps to achieve two-way communication −
Step 1 − Create two pipes. First one is for the parent to write and child to read, say as pipe1. Second one is
for the child to write and parent to read, say as pipe2.
Step 2 − Create a child process.
Step 3 − Close unwanted ends as only one end is needed for each communication.
Step 4 − Close unwanted ends in the parent process, read end of pipe1 and write end of pipe2.
Step 5 − Close the unwanted ends in the child process, write end of pipe1 and read end of pipe2.
Step 6 − Perform the communication as required.
Sample Program − Achieving two-way communication using pipes.
Algorithm
Step 1 − Create pipe1 for the parent process to write and the child process to read.
Step 2 − Create pipe2 for the child process to write and the parent process to read.
Step 3 − Close the unwanted ends of the pipe from the parent and child side.
Step 4 − Parent process to write a message and child process to read and display on the screen.
Step 5 − Child process to write a message and parent process to read and display on the screen.
#include<stdio.h>
#include<unistd.h>
int main() {
int pipefds1[2], pipefds2[2];
int returnstatus1, returnstatus2;
int pid;
char pipe1writemessage[20] = "Hi";
char pipe2writemessage[20] = "Hello";
char readmessage[20];
returnstatus1 = pipe(pipefds1);
if (returnstatus1 == -1) {
printf("Unable to create pipe 1 \n");
return 1;
}
returnstatus2 = pipe(pipefds2);
if (returnstatus2 == -1) {
printf("Unable to create pipe 2 \n");
return 1;
}
pid = fork();
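   /* The remainder of the program is not shown in the source; a minimal sketch of the
      fork branches, consistent with the algorithm steps and the output below: */
   if (pid != 0) {            /* parent process */
      close(pipefds1[0]);     /* parent does not read from pipe 1 */
      close(pipefds2[1]);     /* parent does not write to pipe 2 */
      printf("In Parent: Writing to pipe 1 - Message is %s\n", pipe1writemessage);
      write(pipefds1[1], pipe1writemessage, sizeof(pipe1writemessage));
      read(pipefds2[0], readmessage, sizeof(readmessage));
      printf("In Parent: Reading from pipe 2 - Message is %s\n", readmessage);
   } else {                   /* child process */
      close(pipefds1[1]);     /* child does not write to pipe 1 */
      close(pipefds2[0]);     /* child does not read from pipe 2 */
      read(pipefds1[0], readmessage, sizeof(readmessage));
      printf("In Child: Reading from pipe 1 - Message is %s\n", readmessage);
      printf("In Child: Writing to pipe 2 - Message is %s\n", pipe2writemessage);
      write(pipefds2[1], pipe2writemessage, sizeof(pipe2writemessage));
   }
   return 0;
}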
OUTPUT
In Parent: Writing to pipe 1 – Message is Hi
In Child: Reading from pipe 1 – Message is Hi
In Child: Writing to pipe 2 – Message is Hello
In Parent: Reading from pipe 2 – Message is Hello
Named pipes
Named pipes, also referred to as FIFOs (First In, First Out), are an essential IPC mechanism. They offer a quick and effective method for transferring information between processes. Named pipes are special kinds of files that serve as a means of interaction among unrelated processes, whether those processes run on the same system or, via a shared file system, on different ones.
A named pipe (FIFO) guarantees that data written to the pipe by one process is read from the pipe by another process in the same order. Named pipes are therefore particularly useful when processes must communicate without sharing memory or having a direct dependency on each other.
In the example sketched below, we create a named pipe using the mkfifo function with a specified path (/tmp/myfifo). Then we fork a child process, where the child acts as the writer and the parent acts as the reader. The writer process opens the named pipe for writing (O_WRONLY), writes a message to the pipe using the write function, and then closes the pipe. The reader process opens the named pipe for reading (O_RDONLY), reads the message from the pipe using the read function, and then closes the pipe. The parent process waits for the child to finish using wait(NULL), and then removes the named pipe using unlink.
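A minimal sketch of that program (the path /tmp/myfifo and the parent/child roles come from the description above; the message text is only illustrative):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
    const char *fifo_path = "/tmp/myfifo";
    mkfifo(fifo_path, 0666);                 /* create the named pipe */

    pid_t pid = fork();
    if (pid == 0)                            /* child: writer */
    {
        int fd = open(fifo_path, O_WRONLY);
        const char *msg = "Hello through the FIFO";   /* illustrative message */
        write(fd, msg, strlen(msg) + 1);
        close(fd);
        exit(0);
    }
    else                                     /* parent: reader */
    {
        char buf[64];
        int fd = open(fifo_path, O_RDONLY);
        read(fd, buf, sizeof(buf));
        printf("Reader received: %s\n", buf);
        close(fd);
        wait(NULL);                          /* wait for the child to finish */
        unlink(fifo_path);                   /* remove the named pipe */
    }
    return 0;
}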
#include <sys/types.h>
#include <sys/stat.h>
int mkfifo(const char *pathname, mode_t mode);
mkfifo() makes a FIFO special file with the name specified by pathname and the permissions specified by mode. On success mkfifo() returns 0, while on error it returns -1.
The advantage is that this FIFO special file can be used by any process for reading or writing just like a normal file. This means the sender process can use the write() system call to write data into the pipe and the receiver process can use the read() system call to read data from the pipe, hence completing the communication.
It is the same as a pipe except that it is accessed as part of the filesystem. Multiple processes can access it for writing and reading. When the FIFO special file is used for exchange of data between processes, the data is passed internally without being written to the filesystem. Hence, if you open this special file, there will be no content written in it.
Note: The FIFO pipe works in blocking mode (by default), i.e., the writing process must be present on one end while the reading process is present on the other end at the same time, else the communication will not happen. Operating the FIFO special file in non-blocking mode is also possible.
The entire IPC process will consist of three programs:
Program1: to create a named pipe
Program2: process that will write into the pipe (sender process)
Program3: process that will receive data from pipe (receiver process)
ADVANTAGES
Simple and easy to use − Named pipes offer a simple and uncomplicated method for communication between processes. They follow a straightforward FIFO model, in which data written by one process is read by another process in exactly the same order.
Process independence − Unrelated processes can communicate with one another through a named pipe. The processes need no prior knowledge of, or dependence on, each other, as long as they are granted the right permissions to use the named pipe.
Interprocess communication − A named pipe allows interaction between processes running on the same machine and, when it exists on a file system that both systems can access, between processes running on different systems.
Efficiency − Named pipes are an efficient way to communicate between processes. They rely on a buffering mechanism and can handle large quantities of data effectively.
DISADVANTAGES
Unidirectional communication − A named pipe is unidirectional, so it supports a one-way exchange from the writer to the reader (or the reverse), but not both at once.
Limited functionality − Named pipes offer a comparatively simple, straightforward way of transmitting data. Advanced features such as message boundaries, structured data and complex data serialization are not supported.
Blocking behavior − By default, both reading and writing operations on a named pipe block: a program that attempts to read from an empty pipe, or write to a full one, is suspended until data becomes available or space is freed.
Limited error handling − Named pipes have only limited built-in error handling. There is no integrated mechanism to detect or recover from communication errors such as a broken pipe or a process terminating abruptly.
MESSAGE QUEUE
Why do we need message queues when shared memory is already available? There are multiple reasons:
Once a message is received by a process, it is no longer available to any other process, whereas in shared memory the data remains available for multiple processes to access.
Message queues are appropriate if we want to communicate with small message formats.
Shared memory data needs to be protected with synchronization when multiple processes communicate at the same time.
If the frequency of writing and reading through shared memory is high, it becomes very complex to implement the functionality, and it is not worthwhile in such cases.
If all the processes do not need to access the shared memory and only a few processes need it, it is better to implement the communication with message queues.
If we want to communicate with different data packets, say process A sending message type 1 to process B, message type 10 to process C, and message type 20 to process D, it is simpler to implement with message queues. The message type given here as 1, 10 or 20 can in general be 0, positive or negative, as discussed below.
Of course, the order of the message queue is FIFO (First In First Out): the first message inserted in the queue is the first one to be retrieved.
Choosing between shared memory and message queues depends on the needs of the application and how effectively each can be utilized.
Communication using message queues can happen in the following ways −
Writing into the message queue by one process and reading from it by another process. As we are aware, reading can be done by multiple processes as well.
Writing into the message queue by one process with different data packets and reading from it by multiple processes, i.e., as per message type.
Having seen certain information on message queues, now it is time to check for the system call (System V) which
supports the message queues.
To perform communication using message queues, following are the steps −
Step 1 − Create a message queue or connect to an already existing message queue (msgget())
Step 2 − Write into message queue (msgsnd())
Step 3 − Read from the message queue (msgrcv())
Step 4 − Perform control operations on the message queue (msgctl())
The following control commands can be passed to msgctl() −
IPC_STAT − Copies the current values of each member of struct msqid_ds into the structure pointed to by buf. This command requires read permission on the message queue.
IPC_SET − Sets the user ID and group ID of the owner, the permissions, etc., from the structure pointed to by buf.
IPC_RMID − Removes the message queue immediately.
IPC_INFO − Returns information about the message queue limits and parameters in the structure pointed to by buf, which is of type struct msginfo.
MSG_INFO − Returns an msginfo structure containing information about the system resources consumed by the message queue.
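As a small illustration (msgid here is assumed to come from an earlier msgget() call), a queue can be removed with the IPC_RMID command:
msgctl(msgid, IPC_RMID, NULL);   /* remove the message queue immediately */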
EXAMPLE
C Program for Message Queue (Writer Process)
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
// structure for message queue
struct msg_buffer {
long msg_type;
char msg_text[100];
} message;
int main()
{
key_t key;
int msgid;
// ftok to generate unique key
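    /* The rest of the writer is not shown in the source; a sketch completing it
       (the key file "progfile" and message type 1 are assumptions for illustration): */
    key = ftok("progfile", 65);                /* generate a unique key */
    msgid = msgget(key, 0666 | IPC_CREAT);     /* create or connect to the message queue */
    message.msg_type = 1;
    snprintf(message.msg_text, sizeof(message.msg_text), "Hello via message queue");
    msgsnd(msgid, &message, sizeof(message.msg_text), 0);   /* write into the queue */
    printf("Data sent: %s\n", message.msg_text);
    return 0;
}
A matching reader process would use msgrcv() to take the message out of the queue; a sketch:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
// structure for message queue (same as in the writer)
struct msg_buffer {
    long msg_type;
    char msg_text[100];
} message;
int main()
{
    key_t key = ftok("progfile", 65);          /* same key as the writer */
    int msgid = msgget(key, 0666 | IPC_CREAT);
    msgrcv(msgid, &message, sizeof(message.msg_text), 1, 0);   /* receive a type-1 message */
    printf("Data received: %s\n", message.msg_text);
    msgctl(msgid, IPC_RMID, NULL);             /* destroy the message queue */
    return 0;
}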
Shared memory
Shared memory is a memory segment that multiple processes can access concurrently. It is one of the fastest IPC methods, because the processes communicate by reading from and writing to a shared block of memory. Unlike other IPC mechanisms, which involve more complex synchronization and data exchange procedures, shared memory provides a straightforward way for processes to share data.
How Shared Memory IPC Works?
Shared memory IPC works by creating a memory segment that is accessible by multiple processes. Here's a basic outline of how it operates:
Creation of Shared Memory Segment: A process, usually the parent, creates a shared memory segment using a system call such as shmget() on Unix-like systems. This segment is assigned a unique identifier (shmid).
Attaching to the Shared Memory Segment: Processes that need to access the shared memory attach themselves to this segment using the shmat() system call. Once attached, the processes can directly read from and write to the shared memory.
Synchronization: Since multiple processes can access the shared memory simultaneously, synchronization mechanisms such as semaphores are often used to prevent race conditions and ensure data consistency.
Detaching and Deleting the Segment: When a process no longer needs access to the shared memory, it can detach from the segment using the shmdt() system call. The shared memory segment can be removed entirely from the system using shmctl() once all processes have detached.
Used System Calls
The system calls used in the programs below are:
ftok() − generates a unique IPC key from a file path and a project identifier.
shmget() − creates a shared memory segment (or obtains an existing one) and returns its identifier.
shmat() − attaches the shared memory segment to the address space of the calling process and returns a pointer to it.
shmdt() − detaches the segment from the calling process.
shmctl() − performs control operations on the segment, e.g. IPC_RMID to remove it.
Writer process:
#include <iostream>
#include <cstring>
#include <sys/ipc.h>
#include <sys/shm.h>
using namespace std;
int main()
{
    // ftok to generate unique key
    key_t key = ftok("shmfile", 65);
    // shmget returns an identifier of the segment; shmat attaches it
    int shmid = shmget(key, 1024, 0666 | IPC_CREAT);
    char *str = (char *) shmat(shmid, (void *)0, 0);
    strcpy(str, "Hello from the writer");    // example data written into shared memory
    cout << "Data written in memory: " << str << endl;
    shmdt(str);                              // detach from the segment
    return 0;
}
Reader process:
#include <iostream>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
using namespace std;
int main()
{
    // ftok to generate unique key
    key_t key = ftok("shmfile", 65);
    // locate the segment created by the writer and attach to it
    int shmid = shmget(key, 1024, 0666 | IPC_CREAT);
    char *str = (char *) shmat(shmid, (void *)0, 0);
    cout << "Data read from memory: " << str << endl;
    shmdt(str);                              // detach from the segment
    shmctl(shmid, IPC_RMID, NULL);           // destroy the shared memory segment
    return 0;
}