Race Conditions: Process Synchronization

Process synchronization is required when threads or processes need to access shared resources, such as memory or data, to avoid race conditions. There are different types of synchronization including barriers, locks, and semaphores. Critical sections identify the code accessing shared resources, and mutual exclusion is needed to ensure only one process is in the critical section at a time. Solutions to achieve mutual exclusion include disabling interrupts, using lock variables, taking turns, and using sleep and wakeup system calls. The bounded buffer problem demonstrates how producers and consumers can be synchronized using a fixed-size buffer and sleep/wakeup calls.


Process synchronization:

In computer science, especially parallel computing, synchronization means the coordination of
simultaneous threads or processes to complete a task, in order to get the correct runtime order and to
avoid unexpected race conditions.

Process synchronization is required when one process must wait for another to complete some
operation before proceeding. For example, one process (called a writer) may be writing data to a
certain main memory area, while another process (a reader) may be reading data from that area and
sending it to the printer. The reader and writer must be synchronized so that the writer does not
overwrite data that the reader has not yet read.

Types of synchronization:

* Barrier
* Lock/semaphore
* Non-blocking synchronization
* Synchronous communication operations

The advantages of synchronous communication are:

* No need for start and stop bits
* Higher data rates are possible
* The actual data length need not be fixed

RACE CONDITIONS

* In operating systems, processes that are working together share some common storage
(main memory, files, etc.) that each process can read and write. When two or more processes
read or write shared data and the final result depends on exactly who runs when, the situation
is called a race condition. Concurrently executing threads that share data need to synchronize
their operations and processing in order to avoid race conditions on shared data. Only one
thread at a time should be allowed to examine and update a shared variable.

* Race conditions are also possible in operating systems. If the ready queue is implemented
as a linked list, and the ready queue is being manipulated during the handling of an interrupt,
then interrupts must be disabled to prevent another interrupt from arriving before the first one
completes. If interrupts are not disabled, the linked list could become corrupted.

1. count++ could be implemented as


register1 = count
register1 = register1 + 1
count = register1

2. count-- could be implemented as


register2 = count
register2 = register2 - 1
count = register2

3. Consider this execution interleaving with “count = 5” initially:


S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
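
To make this concrete, here is a minimal C sketch (an illustration added here, not part of the original
notes) that runs count++ and count-- from two POSIX threads with no synchronization at all. Because
each ++ and -- expands to a read-modify-write sequence like the one above, updates can be lost and
the program frequently prints a value other than 0.

/* race.c - build with: cc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

static long count = 0;                /* shared, unprotected counter */

static void *producer(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        count++;                      /* load count, add 1, store it back */
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        count--;                      /* load count, subtract 1, store it back */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("count = %ld\n", count);   /* often non-zero: some updates were lost */
    return 0;
}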

Critical Section
The key to preventing trouble involving shared storage is to find some way to prohibit more than
one process from reading and writing the shared data simultaneously. The part of the program
where the shared memory is accessed is called the Critical Section. To avoid race conditions and
flawed results, one must identify the code that forms a critical section in each thread.

Solution to Critical-Section Problem

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes
can be executing in their critical sections

2. Progress - If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the processes that will enter the
critical section next cannot be postponed indefinitely

3. Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted

* Assume that each process executes at a nonzero speed

* No assumption concerning relative speed of the N processes

The characteristic properties of the code that forms a Critical Section are:

* Code that references one or more variables in a “read-update-write” fashion while any of
those variables is possibly being altered by another thread.

* Code that alters one or more variables that are possibly being referenced in a “read-update-
write” fashion by another thread.

* Code that uses a data structure while any part of it is possibly being altered by another thread.

* Code that alters any part of a data structure while it is possibly in use by another thread.

Here, the important point is that when one process is accessing shared modifiable data in its
critical section, no other process is allowed to execute in its own critical section. Thus, the
execution of critical sections by the processes is mutually exclusive in time.

MUTUAL EXCLUSION

Mutual exclusion is a way of making sure that if one process is using shared modifiable data, the
other processes will be excluded from doing the same thing. Formally, while one process is
accessing the shared variable, all other processes desiring to do so at the same moment should be
kept waiting; when that process has finished accessing the shared variable, one of the waiting
processes should be allowed to proceed. In this fashion, each process accessing the shared data
(variables) excludes all others from doing so simultaneously. This is called Mutual Exclusion.

Note that mutual exclusion needs to be enforced only when processes access shared modifiable
data; when processes are performing operations that do not conflict with one another, they should
be allowed to proceed concurrently.

MUTUAL EXCLUSION CONDITIONS

If we could arrange matters such that no two processes were ever in their critical sections
simultaneously, we could avoid race conditions. We need four conditions to hold to have a
good
solution for the critical section problem (mutual exclusion).

* No two processes may be inside their critical sections at the same moment.

* No assumptions may be made about the relative speeds of processes or the number of CPUs.

* No process outside its critical section should block other processes.

* No process should have to wait arbitrarily long to enter its critical section.

PROPOSALS FOR ACHIEVING MUTUAL EXCLUSION

The mutual exclusion problem is to devise a pre-protocol (or entry protocol) and a post-protocol
(or exit protocol) to keep two or more threads from being in their critical sections at the same time.
Tanenbaum examines several proposals for solving the critical-section (mutual exclusion) problem.

Problem
When one process is updating shared modifiable data in its critical section, no other process
should be allowed to enter its own critical section.
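
Each of the proposals below is, in effect, one way of implementing the entry and exit protocol in
the following skeleton (an illustrative C-style sketch added here; enter_region, leave_region,
critical_section and noncritical_section are placeholder names, not part of the original notes):

while (TRUE) {
    enter_region();         /* pre-protocol: wait until it is safe to proceed     */
    critical_section();     /* read and write the shared data                     */
    leave_region();         /* post-protocol: allow another process to enter      */
    noncritical_section();  /* remainder: code that touches no shared data        */
}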

PROPOSAL 1 - DISABLING INTERRUPTS (HARDWARE SOLUTION)

Each process disables all interrupts just after entering its critical section and re-enables them just
before leaving it. With interrupts turned off, the CPU cannot be switched to another process.
Hence, no other process will enter its critical section, and mutual exclusion is achieved.
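
Under this proposal the entry and exit protocol reduce to a pair of privileged operations, roughly as
in the sketch below (illustrative only; disable_interrupts and enable_interrupts are placeholder
names for instructions such as CLI and STI on x86):

disable_interrupts();    /* entry: no clock interrupt, so no context switch on this CPU */
/* ... critical section: access the shared data ... */
enable_interrupts();     /* exit: interrupts, and therefore preemption, resume          */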

Conclusion

Disabling interrupts is sometimes a useful technique within the kernel of an operating system,
but it is not appropriate as a general mutual exclusion mechanism for user processes. The reason
is that it is unwise to give user processes the power to turn off interrupts.

PROPOSAL 2 - LOCK VARIABLE (SOFTWARE SOLUTION)

In this solution, we consider a single shared lock variable, initially 0. When a process wants to
enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then
enters the critical section. If the lock is already 1, the process just waits until the lock variable
becomes 0. Thus, a 0 means that no process is in its critical section, and a 1 means hold your
horses - some process is in its critical section.
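
A minimal C sketch of this proposal (added here for illustration; the names follow the skeleton
above) makes the flaw easy to see: the test of lock and the assignment to it are two separate steps,
and a process can be preempted in between, exactly as the conclusion below describes.

int lock = 0;               /* shared; 0 = free, 1 = some process is inside       */

void enter_region(void)
{
    while (lock == 1)
        ;                   /* busy-wait until the lock appears to be free        */
    lock = 1;               /* NOT atomic with the test above - the fatal window  */
}

void leave_region(void)
{
    lock = 0;               /* release the lock on the way out                    */
}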

Conclusion

The flaw in this proposal can best be explained by an example. Suppose process A sees that the
lock is 0. Before it can set the lock to 1, another process B is scheduled, runs, and sets the lock
to 1. When process A runs again, it will also set the lock to 1, and two processes will be in their
critical sections simultaneously.

PROPOSAL 3 - STRICT ALTERNATION

In this proposed solution, the integer variable 'turn' keeps track of whose turn it is to enter the
critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical section.
Process B also finds it to be 0 and sits in a loop, continually testing 'turn' to see when it becomes 1.
Continuously testing a variable while waiting for some value to appear is called busy waiting.
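
In C, strict alternation between two processes looks roughly like this (an illustrative sketch added
here; critical_section and noncritical_section are placeholders, and process 1 is the mirror image
of process 0):

int turn = 0;                    /* whose turn it is to enter: 0 or 1 */

/* Process 0 */
while (TRUE) {
    while (turn != 0)
        ;                        /* busy-wait until it is our turn    */
    critical_section();
    turn = 1;                    /* hand the turn over to process 1   */
    noncritical_section();
}

/* Process 1 runs the same loop with the test (turn != 1) and the assignment turn = 0. */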

Conclusion

Taking turns is not a good idea when one of the processes is much slower than the other. Suppose
process 0 finishes its critical section quickly, so both processes are now in their noncritical
sections. If process 0 then wants to re-enter its critical section, it cannot: turn is 1, and only
process 1 - which is still busy in its noncritical section - can hand the turn back. A process
outside its critical section is blocking another, which violates the above-mentioned condition 3.

USING SYSTEM CALLS 'SLEEP' AND 'WAKEUP'

Basically, what the above-mentioned solutions do is this: when a process wants to enter its critical
section, it checks to see whether entry is allowed. If it is not, the process goes into a tight loop and
waits (i.e., starts busy waiting) until it is allowed to enter. This approach wastes CPU time.
Now look at a pair of interprocess communication primitives: sleep and wakeup.

* Sleep

It is a system call that causes the caller to block, that is, be suspended until some other
process wakes it up.

* Wakeup

It is a system call that wakes up a sleeping process.

Both the 'sleep' and 'wakeup' system calls have one parameter: a memory address that is used
to match up 'sleeps' with 'wakeups'.

THE BOUNDED BUFFER PRODUCERS AND CONSUMERS

The bounded-buffer producer-consumer problem assumes that there is a fixed buffer size, i.e., a
finite number of slots is available.

Statement

The goal is to suspend the producer when the buffer is full, to suspend the consumer when the
buffer is empty, and to make sure that only one process at a time manipulates the buffer, so that
there are no race conditions or lost updates.

As an example of how the sleep and wakeup calls are used, consider the producer-consumer
problem, also known as the bounded-buffer problem. Two processes share a common, fixed-size
(bounded) buffer. The producer puts information into the buffer and the consumer takes
information out.

Trouble arises when

1. The producer wants to put new data in the buffer, but the buffer is already full.
Solution: The producer goes to sleep, to be awakened when the consumer has removed some data.

2. The consumer wants to remove data from the buffer, but the buffer is already empty.
Solution: The consumer goes to sleep until the producer puts some data in the buffer and wakes the
consumer up.
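
A C-like sketch of this design follows (added here for illustration, in the style of the classic
formulation; sleep, wakeup, produce_item, insert_item, remove_item and consume_item are the
hypothetical primitives and helpers described above, not real library calls):

#define N 100                         /* number of slots in the buffer            */
int count = 0;                        /* number of items currently in the buffer  */

void producer(void)
{
    int item;
    while (TRUE) {
        item = produce_item();        /* generate the next item                   */
        if (count == N)
            sleep();                  /* buffer full: block until woken           */
        insert_item(item);            /* put the item in the buffer               */
        count = count + 1;
        if (count == 1)
            wakeup(consumer);         /* buffer was empty: wake the consumer      */
    }
}

void consumer(void)
{
    int item;
    while (TRUE) {
        if (count == 0)
            sleep();                  /* buffer empty: block until woken          */
        item = remove_item();         /* take an item out of the buffer           */
        count = count - 1;
        if (count == N - 1)
            wakeup(producer);         /* buffer was full: wake the producer       */
        consume_item(item);
    }
}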

Conclusion

This approach also leads to the same kind of race condition seen in the earlier approaches,
because access to 'count' is unconstrained. The essence of the problem is that a wakeup call sent
to a process that is not (yet) sleeping is lost. For example, the consumer may read count as 0 and
be preempted just before calling sleep; the producer then inserts an item, sets count to 1, and
sends a wakeup to the consumer, which is not yet asleep, so the wakeup is lost. When the
consumer runs again it goes to sleep anyway, the producer eventually fills the buffer and sleeps
too, and both sleep forever.
