Mobile Computing Lecture Notes
UNIT-I
WIRELESS LANS AND PANS
1.1 INTRODUCTION
The field of computer networks has grown significantly in the last three
decades. An interesting usage of computer networks is in offices and
educational institutions, where tens (sometimes hundreds) of personal
computers (PCs) are interconnected, to share resources (e.g., printers) and
exchange information, using a high-bandwidth communication medium (such as
the Ethernet). These privately-owned networks are known as local area
networks (LANs), which come under the category of small-scale
networks (networks within a single building or campus, with a size of a few
kilometres). To do away with the wiring associated with the interconnection of
PCs in LANs, researchers have explored the possible usage of radio waves and
infrared light for interconnection. This has resulted in the emergence of wireless
LANs (WLANs), where wireless transmission is used at the physical layer of
the network. Wireless personal area networks (WPANs) are the next step down
from WLANs, covering smaller areas with low-power transmission, for
networking portable and mobile computing devices such as PCs, personal
digital assistants (PDAs) (essentially very small computers designed to consume
as little power as possible so as to increase the lifetime of their batteries), cell
phones, printers, speakers, microphones, and other consumer electronics.
• Dynamic topology and restricted connectivity: The mobile nodes may often
go out of reach of each other. This means that network connectivity is partial at
times.
Uses of WLANs
Wireless computer networks are capable of offering versatile functionalities.
WLANs are very flexible and can be configured in a variety of topologies based
on the application. Some possible uses of WLANs are mentioned below.
• Users would be able to surf the Internet, check e-mail, and receive Instant
Messages on the move.
• In areas affected by earthquakes or other such disasters, no suitable
infrastructure may be available on the site. WLANs are handy in such locations
to set up networks on the fly.
• There are many historic buildings where there has been a need to set up
computer networks. In such places, wiring may not be permitted or the building
design may not be conducive to efficient wiring. WLANs are very good
solutions in such places.
Design Goals
The following are some of the goals which have to be achieved while designing
WLANs:
• License-free operation: One of the major factors that affects the cost of
wireless access is the license fee for the spectrum in which a particular wireless
access technology operates. Low cost of access is an important aspect for
popularizing a WLAN technology. Hence the design of a WLAN should consider
operating in those parts of the frequency spectrum (e.g., the ISM band) that do
not require explicit licensing.
• Tolerance to interference: The proliferation of different wireless networking
technologies both for civilian and military applications and the use of the
microwave frequency spectrum for non-communication purposes (e.g.,
microwave ovens) have led to a significant increase in the interference level
across the radio spectrum. The WLAN design should account for this and take
appropriate measures by way of selecting technologies and protocols to operate
in the presence of interference.
• Global usability: The design of the WLAN, the choice of technology, and the
selection of the operating frequency spectrum should take into account the
prevailing spectrum restrictions in countries across the world. This ensures the
acceptability of the technology across the world.
Figure 1.1 gives a schematic picture of what a typical ESS looks like.
The following are the STA services, which are provided by every
station, including APs:
• Authentication: Authentication is done in order to establish the identity of
stations to each other. The authentication schemes range from relatively
insecure handshaking to public-key encryption schemes.
The three choices for the physical layer in the original 802.11 standard are as
follows:
(i) Frequency Hopping Spread Spectrum (FHSS) operating in the 2.4 GHz ISM
band, at data rates of 1 Mbps and 2 Mbps;
(ii) Direct Sequence Spread Spectrum (DSSS) operating in the 2.4 GHz ISM
band, at data rates of 1 Mbps [using the Differential Binary Phase Shift Keying
(DBPSK) modulation scheme] and 2 Mbps [using Differential Quadrature Phase
Shift Keying (DQPSK)];
(iii) Infrared, operating at data rates of 1 Mbps and 2 Mbps.
The primary function of this layer is to arbitrate and statistically multiplex the
transmission requests of various wireless stations that are operating in an area.
This assumes importance because wireless transmissions are inherently
broadcast in nature and contentions to access the shared channel need to be
resolved prudently in order to avoid collisions, or at least to reduce the number
of collisions. The MAC layer also supports many auxiliary functionalities such
as offering support for roaming, authentication, and taking care of power
conservation. The basic services supported are the mandatory asynchronous data
service and an optional real-time service. The asynchronous data service is
supported for unicast packets as well as for multicast packets. The real-time
service is supported only in infrastructure-based networks where APs control
access to the shared medium.
Distributed Foundation Wireless Medium Access Control (DFWMAC)
• Short Inter-Frame Spacing (SIFS) is the shortest of all the IFSs and denotes
highest priority to access the medium. It is defined for short control messages
such as acknowledgments for data packets and polling responses. The
transmission of any packet should begin only after the channel is sensed to be
idle for a minimum time period of at least SIFS.
• PCF Inter-Frame Spacing (PIFS) is the waiting time whose value lies
between SIFS and DIFS. This is used for real-time services.
• DCF Inter-Frame Spacing (DIFS) is used by stations that are operating
under the DCF mode to transmit packets. This is for asynchronous data transfer
within the contention period.
• Extended Inter-Frame Spacing (EIFS) is the longest of all the IFSs and
denotes the least priority to access the medium. EIFS is used for
resynchronization whenever physical layer detects incorrect MAC frame
reception.
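The priority ordering of the four IFSs can be made concrete using the numeric spacings of one particular PHY. The sketch below uses the 802.11b DSSS figures (SIFS = 10 μs, slot time = 20 μs); other PHYs use different constants, but the ordering SIFS < PIFS < DIFS < EIFS always holds.

```python
# Illustrative inter-frame spacing (IFS) relationships in IEEE 802.11.
# Numeric values are those of the 802.11b DSSS PHY; other PHYs differ.
SIFS_US = 10          # shortest gap: ACKs, CTS, polling responses
SLOT_US = 20          # slot time of the DSSS PHY

PIFS_US = SIFS_US + SLOT_US        # PCF access
DIFS_US = SIFS_US + 2 * SLOT_US    # DCF access

# EIFS is derived from the time to send an ACK at the lowest PHY rate;
# the exact formula is PHY-dependent, so only the ordering is shown here.
assert SIFS_US < PIFS_US < DIFS_US
print(SIFS_US, PIFS_US, DIFS_US)   # 10 30 50
```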
If the medium is busy, the node backs off: the station defers channel
access by a random amount of time chosen within a contention window (CW).
The value of CW can vary between CWmin and CWmax. The time intervals are all
integral multiples of slot times, which are chosen judiciously using the propagation
delay, the delay in the transmitter, and other physical layer dependent parameters.
As soon as the back-off counter expires (reaches zero), the station can access
the medium. During the back-off process, if a node detects a busy channel, it
freezes the back-off counter, and the process is resumed once the channel
becomes idle for a period of DIFS. Each station executes the back-off procedure
at least once between every two successive transmissions.
In the scheme discussed so far, each station has the same chances for
transmitting data next time, independent of the overall waiting time for
transmission. Such a system is clearly unfair. Ideally, one would like to give
stations that wait longer a higher priority service in order to ensure that they are
not starved. The back-off timer incorporated into the above mechanism tries to
make it fair. Longer waiting stations, instead of choosing another random
interval from the contention window, wait only for a residual amount of time
that is specified by the back-off timer.
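The back-off procedure described above can be sketched as a small model. This is an illustrative simplification, not a full 802.11 implementation; the CWmin and CWmax values used are those of 802.11b.

```python
import random

# Sketch of the DCF back-off procedure: the CW doubles on failure up to
# CW_MAX, resets on success, and the residual counter is frozen (not
# redrawn) while the channel is busy, which keeps the scheme fair.
CW_MIN, CW_MAX = 31, 1023

class Station:
    def __init__(self):
        self.cw = CW_MIN
        self.backoff = None        # residual back-off counter, in slots

    def start_backoff(self):
        if self.backoff is None:   # resume a frozen counter if one exists
            self.backoff = random.randint(0, self.cw)

    def tick(self, channel_idle):
        """Advance one slot; return True when the station may transmit."""
        if not channel_idle:
            return False           # counter frozen: residual value kept
        self.backoff -= 1
        if self.backoff < 0:
            self.backoff = None
            return True
        return False

    def on_collision(self):
        self.cw = min(2 * self.cw + 1, CW_MAX)   # exponential increase

    def on_success(self):
        self.cw = CW_MIN                          # reset after an ACK

s = Station()
s.start_backoff()
while not s.tick(channel_idle=True):
    pass                           # counts down one slot per idle tick
```

Freezing the residual counter, rather than redrawing it, is what gives longer-waiting stations their priority in the text above.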
RTS-CTS Mechanism
The hidden terminal problem is a major problem that is observed in wireless
networks. This is a classic example of problems arising due to incomplete
topology information in wireless networks that was mentioned initially. It also
highlights the non-transitive nature of wireless transmission. In some situations,
one node can receive from two other nodes, which cannot hear each other. In
such cases, the receiver may be bombarded by both the senders, resulting in
collisions and reduced throughput. But the senders, unaware of this, may get the
impression that the receiver can clearly listen to them without interference from
anyone else. This is called the hidden terminal problem. To alleviate this
problem, the RTS-CTS mechanism has been devised as shown in Figure 1.2 (b).
If a node has a packet to send and is in the IDLE state, it goes into the
WAIT_FOR_NAV state. After the on-going transmissions (if any) in the
neighborhood are over, the node goes to the WAIT_FOR_DIFS state. After
waiting for DIFS amount of time, if the medium continues to be idle, the station
enters the BACKING_OFF state. Otherwise, the station sets its back-off
counter(if the counter value is zero) and goes back to the IDLE state. During
back-off, if the node senses a busy channel, the node saves the back-off counter
and goes back to the IDLE state. Otherwise, it goes into one of three states. If
the packet type is broadcast, the node enters the TRANSMITTING_BCAST
state where it transmits the broadcast packet. If the packet type is unicast and
the packet size is less than the RTS threshold, the node enters the
TRANSMITTING_UNICAST state and starts transmitting data. If the packet
size is greater than the RTS threshold, the node enters the
TRANSMITTING_RTS state and starts transmitting the RTS packet. After the
RTS transmission is over, the node enters the WAITING_FOR_CTS state. If the
CTS packet is not received within a specified time, the node times out and goes
back to the IDLE state, and increases the CW value exponentially up to a
maximum of CWmax. If the CTS packet is received, the node enters the
TRANSMITTING_UNICAST state and starts transmitting data. After the
unicast packet is transmitted, the node enters the WAITING_FOR_ACK state.
When the node receives the ACK, it goes back to the IDLE state and reduces the
CW value to CWmin. If a node receives an RTS packet when in the IDLE state and if
the NAV of the node indicates that no other on-going transmissions exist, the
node enters the TRANSMITTING_CTS state and starts transmitting the CTS
packet. After the CTS packet is transmitted, the node enters the
WAITING_FOR_DATA state and waits for the data packet from the sender. On
receiving the data packet, the node enters the TRANSMITTING_ACK state and
starts transmitting the ACK for the data packet. When the ACK has been
transmitted, the node goes back to the IDLE state. If the data packet is not
received, the receiver returns to the IDLE state.
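The medium reservation that the RTS-CTS exchange achieves can be sketched via the Network Allocation Vector (NAV) that every overhearing node maintains. This is a simplified model; the time values are illustrative.

```python
# Sketch of virtual carrier sensing with the Network Allocation Vector
# (NAV): every overheard RTS/CTS carries a duration field, and a node
# treats the medium as busy until its NAV expires, even if it cannot
# hear the data transmission itself (the hidden terminal case).
class Node:
    def __init__(self):
        self.nav_until = 0.0       # time until which the medium is reserved

    def overhear(self, now, duration_field):
        """Extend the NAV if the new reservation ends later."""
        self.nav_until = max(self.nav_until, now + duration_field)

    def medium_virtually_idle(self, now):
        return now >= self.nav_until

hidden = Node()                    # hears the CTS but not the RTS
hidden.overhear(now=0.0, duration_field=300.0)   # CTS reserves 300 us
assert not hidden.medium_virtually_idle(100.0)   # must stay silent
assert hidden.medium_virtually_idle(300.0)       # reservation expired
```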
Fragmentation
Bit error rates in the wireless medium are much higher than in other media. The
bit error rate in fiber optics is only about 10^-9, whereas in wireless, it is as large
as 10^-4. One way of decreasing the frame error rate is by using shorter frames.
IEEE 802.11 specifies a fragmentation mode where user data packets are split
into several smaller parts transparent to the user. This will lead to shorter
frames, and frame error will result in retransmission of a shorter frame. The
RTS and CTS messages carry duration values for the current fragment and
estimated time for the next fragment. The medium gets reserved for the
successive frames until the last fragment is sent. The length of each fragment is
the same for all the fragments except the last fragment. The fragments contain
information to allow the complete MAC Protocol Data Unit (MPDU, informally
referred to as packet) to be reassembled from the fragments that constitute it.
The frame type, sender address, destination address, sequence control field, and
indicator for more fragments to come are all present in the fragment header. The
destination constructs the complete packet by reassembling the fragments in the
order of the sequence number field. The receiving station ensures that all
duplicate fragments are discarded and only one copy of each fragment is
integrated. Acknowledgments for the duplicates may, however, be sent.
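Both halves of the argument above, that shorter frames fail less often and that fragments are reassembled in sequence order with duplicates discarded, can be sketched as follows. The fragment size used is illustrative; in practice the fragmentation threshold is a configurable MAC parameter.

```python
# Sketch of 802.11-style fragmentation. FRAG_SIZE is illustrative only.
FRAG_SIZE = 256

def frame_error_rate(ber, n_bits):
    """P(at least one bit error), assuming independent bit errors."""
    return 1 - (1 - ber) ** n_bits

# Shorter frames fail less often, so less data is retransmitted per error:
assert frame_error_rate(1e-4, 256 * 8) < frame_error_rate(1e-4, 2048 * 8)

def fragment(mpdu: bytes):
    """Split an MPDU into (seq, more_fragments, payload) tuples."""
    parts = [mpdu[i:i + FRAG_SIZE] for i in range(0, len(mpdu), FRAG_SIZE)]
    return [(seq, seq < len(parts) - 1, p) for seq, p in enumerate(parts)]

def reassemble(fragments):
    """Rebuild the MPDU, discarding duplicates by sequence number."""
    seen = {}
    for seq, _more, payload in fragments:
        seen.setdefault(seq, payload)     # keep only the first copy
    return b"".join(seen[seq] for seq in sorted(seen))

data = bytes(1000)
frags = fragment(data)
assert reassemble(frags + frags[:2]) == data   # duplicates are ignored
```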
Synchronization
Synchronization of clocks of all the wireless stations is an important function to
be performed by the MAC layer. Each node has an internal clock, and clocks are
all synchronized by a Timing Synchronization Function (TSF). Synchronized
clocks are required for power management, PCF coordination, and Frequency
Hopping Spread Spectrum (FHSS) hopping sequence synchronization. Without
synchronization, clocks of the various wireless nodes in the network may not
have a consistent view of the global time. Within a BSS, quasi periodic beacon
frames are transmitted by the AP, that is, one beacon frame is sent every Target
Beacon Transmission Time (TBTT) and the transmission of a beacon is deferred
if the medium is busy. A beacon contains a time-stamp that is used by the node
to adjust its clock. The beacon also contains some management information for
power optimization and roaming. Not all beacons need to be heard for achieving
synchronization.
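The beacon-based adjustment can be sketched for the infrastructure case, where the AP acts as the time master. This is a minimal model that ignores propagation delay; the IBSS rules are different and not shown.

```python
# Sketch of infrastructure-BSS timing synchronization: on each received
# beacon, the station adopts the AP's TSF timestamp as its own clock.
class StationClock:
    def __init__(self, tsf=0):
        self.tsf = tsf             # local timer value, in microseconds

    def on_beacon(self, beacon_timestamp):
        # In a BSS the AP is the time master, so its timestamp is adopted.
        self.tsf = beacon_timestamp

sta = StationClock(tsf=12_000)      # the local clock has drifted
sta.on_beacon(beacon_timestamp=12_450)
assert sta.tsf == 12_450
```

Because every beacon carries a full timestamp, a station that misses some beacons still resynchronizes on the next one it hears, which is why not all beacons need to be received.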
Power Management
Usage of power cords restricts the mobility that wireless nodes can potentially
offer. The usage of battery-operated devices calls for power management
because battery power is expensive. Stations that are always ready to receive
data consume more power (the receiver current may be as high as 100 mA). The
transceiver must be switched off whenever carrier sensing is not needed. But
this has to be done in a manner that is transparent to the existing protocols. It is
for this reason that power management is an important functionality in the MAC
layer. Therefore, two states of the station are defined: sleep and awake. The
sleep state refers to the state where the transceiver cannot receive or send
wireless signals. Longer periods in the sleep state mean that the average
throughput will be low. On the other hand, shorter periods in the sleep state
consume a lot of battery power and are likely to reduce battery life. If a sender
wants to communicate with a sleeping station, it has to buffer the data it wishes
to send. It will have to wait until the sleeping station wakes up, and then send
the data. Sleeping stations wake up periodically, when senders can announce the
destinations of their buffered data frames. If any node is a destination, then that
node has to stay awake until the corresponding transmission takes place.
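The buffer-and-announce scheme described above can be sketched as follows. This is an illustrative model; the announcement step corresponds to the traffic indication map carried in 802.11 beacons.

```python
# Sketch of power-save buffering: frames destined for a dozing station
# are buffered at the sender and announced at the periodic wake-up
# instants; an announced destination must stay awake until delivery.
from collections import defaultdict

class AccessPoint:
    def __init__(self):
        self.buffered = defaultdict(list)   # destination -> pending frames

    def send(self, dest, frame, dest_asleep):
        if dest_asleep:
            self.buffered[dest].append(frame)   # hold until dest wakes
            return None
        return frame                            # deliver immediately

    def announce(self):
        """Destinations with pending traffic, announced at wake-up."""
        return {d for d, q in self.buffered.items() if q}

    def deliver(self, dest):
        frames, self.buffered[dest] = self.buffered[dest], []
        return frames

ap = AccessPoint()
ap.send("sta1", b"hello", dest_asleep=True)
assert "sta1" in ap.announce()          # sta1 must stay awake
assert ap.deliver("sta1") == [b"hello"]
```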
Roaming
Each AP may have a range of up to a few hundred meters where its transmission
will be heard well. The user may, however, walk around so that he goes from
the BSS of one AP to the BSS of another AP. Roaming refers to providing
uninterrupted service when the user walks around with a wireless station. When
the station realizes that the quality of the current link is poor, it starts scanning
for another AP. This scanning can be done in two ways: active scanning and
passive scanning. Active scanning refers to sending a probe on each channel and
waiting for a response. Passive scanning refers to listening into the medium to
find other networks. The information necessary for joining the new BSS can be
obtained from the beacon and probe frames.
Newer Standards
The original standards for IEEE 802.11 came out in 1997 and promised a data
rate of 1-2 Mbps in the license-free 2.4 GHz ISM band. Since then, several
improvements in technology have called for newer and better standards that
offer higher data rates. This has manifested in the form of the IEEE 802.11a and
IEEE 802.11b standards, both of which came out in 1999. IEEE 802.11b, an
extension of IEEE 802.11 DSSS scheme, defines operation in the 2.4GHz ISM
band at data rates of 5.5 Mbps and 11 Mbps, and is trademarked commercially
by the Wireless Ethernet Compatibility Alliance (WECA) as Wi-Fi. It achieves
high data rates due to the use of Complementary Code Keying (CCK). IEEE
802.11a operates in the 5 GHz band (unlicensed national information
infrastructure band), and uses orthogonal frequency division multiplexing
(OFDM) at the physical layer. IEEE 802.11a supports data rates up to 54 Mbps
and is the fast Ethernet analogue to IEEE 802.11b. Other IEEE 802.11 (c, d, and
h) task groups are working on special regulatory and networking issues. IEEE
802.11e deals with the requirements of time-sensitive applications such as voice
and video. IEEE 802.11f deals with inter-AP communication to handle roaming.
IEEE 802.11g aims at providing the high speed of IEEE 802.11a in the ISM
band. IEEE 802.11i deals with advanced encryption standards to support better
privacy.
The tasks of the physical layer are modulation and demodulation of a radio
carrier with a bit stream, forward error-correction mechanisms, signal strength
measurement, and synchronization between the sender and the receiver. The
standard uses the CCA scheme (similar to IEEE 802.11) to sense whether the
channel is idle or busy.
The MAC Sublayer
The CAC sublayer offers a connectionless data service to the MAC sublayer.
The MAC layer uses this service to specify a priority (called the CAM
priority), which is the QoS parameter for the CAC layer. This is crucial in the
resolution of contention in the CAM.
EY-NPMA
After a packet with an associated CAM priority has been chosen in the CAC
sublayer for transmission, the next phase is to compete with packets of other
nodes for channel access. The channel access mechanism is a dynamic,
listen-and-then-talk protocol that is very similar to the CSMA/CA used in 802.11
and is called the elimination yield non-pre-emptive multiple access
(EY-NPMA) mechanism. Figure 1.5 shows the
operation of the EY-NPMA mechanism in which the nodes 1, 2, 3, and 4 have
packets to be sent to the AP. The CAM priority for nodes 2 and 4 is the highest,
at priority 2, followed by node 3 with priority 3, and node 1 with the least
priority of 4. The prioritization phase will have k slots, where k refers to the
number of priority levels (k can vary from 1 to 5, with priority k − 1 higher than
priority k).
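The prioritization phase can be sketched as follows. This is a simplified model resolving only this first phase; ties among equal-priority survivors are broken later by the elimination and yield phases.

```python
# Sketch of the EY-NPMA prioritization phase: a node of priority p
# (1 is highest) listens during slots 1..p-1 and withdraws if it hears
# a higher-priority burst; otherwise it bursts in slot p. Only nodes
# holding the highest priority present survive this phase.
def prioritization_phase(priorities):
    """priorities: node -> CAM priority (1..5). Returns surviving nodes."""
    best = min(priorities.values())   # first slot in which a burst occurs
    return {node for node, p in priorities.items() if p == best}

# Nodes 2 and 4 hold priority 2, node 3 priority 3, node 1 priority 4,
# matching the Figure 1.5 scenario described in the text:
survivors = prioritization_phase({1: 4, 2: 2, 3: 3, 4: 2})
assert survivors == {2, 4}            # the tie is broken in later phases
```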
3. Transmission: This is the final stage in the channel access where the
transmission of the selected packet takes place. During this phase, the successful
delivery of a data packet is acknowledged with an ACK packet. The
performance of EY-NPMA protocol suffers from major factors such as packet
length, number of nodes, and the presence of hidden terminals. The efficiency
of this access scheme varies from 8% to 83% with variation of packet sizes from
50 bytes to 2 Kbytes. The above-described channel access takes place during
what is known as the channel synchronization condition. The other two
conditions during which channel access can take place are
(a) the channel free condition, when the node senses the channel free for some
amount of time and then gains access, and
(b) the hidden terminal condition, when a node is eliminated from contention,
but still does not sense any data transmission, indicating the presence of a
hidden node.
Failure of HIPERLAN/1
In spite of the high data rate that it promised, the HIPERLAN/1 standard has
always been considered unsuccessful. This is because IEEE Ethernet was already
prevalent and hence, for its wireless counterpart too, everybody turned toward
the IEEE, which came out with its IEEE 802.11 standard. As a result, hardly any
manufacturer adopted the HIPERLAN/1 standard for product development.
However, the standard is still studied for the stability it provides and for the fact
that many of the principles followed have been adopted in the other standards.
1.4.2 HIPERLAN/2
The IEEE 802.11 standard offers data rates of 1-2 Mbps,
while the newer standard IEEE 802.11a offers rates up to 54 Mbps. However,
there was a necessity to support QoS, handoff (the process of transferring an
MT from one channel/AP to another), and data integrity in order to satisfy the
requirements of wireless LANs. This demand was the motivation behind the
emergence of HIPERLAN/2. The standard has become very popular owing to
the significant support it has received from cellular manufacturers such as Nokia
and Ericsson. The HIPERLAN/2 tries to integrate WLANs into the next-
generation cellular systems. It aims at converging IP and ATM type services at a
high data rate of 54 Mbps for indoor and outdoor applications. The
HIPERLAN/2, an ATM-compatible WLAN, is a connection-oriented system,
which uses fixed-size packets and makes QoS applications easy to implement.
The HIPERLAN/2 network has a typical topology as shown in Figure 1.6. The
figure shows MTs being centrally controlled by the APs which are in turn
connected to the core network (infrastructure-based network). It is to be noted
that, unlike the IEEE standards, the core network for HIPERLAN/2 is not just
restricted to Ethernet. Also, the AP used in HIPERLAN/2 consists of one or
many transceivers called Access Point Transceivers (APTs) which are
controlled by a single Access Point Controller (APC).
The physical layer is responsible for the conversion of the PDU train from the
DLC layer to physical bursts that are suitable for radio transmission.
HIPERLAN/2, like IEEE 802.11a, uses OFDM for transmission. The
HIPERLAN/2 allows bit rates from 6 Mbps to 54 Mbps using a scheme called
link adaptation. This scheme allows the selection of a suitable modulation
method for the required bit rate. This scheme is unique to HIPERLAN/2 and is
not available in the IEEE standards or HIPERLAN/1.
The CL
The topmost layer in the HIPERLAN/2 protocol stack is the CL. The functions
of the layer are to adapt the requirements of the different higher layers of the
core network with the services provided by the lower layers of HIPERLAN/2,
and to convert the higher layer packets into ones of fixed size that can be used
by the lower layers. A CL is defined for every type of core network supported.
In short, this layer is responsible for the network-independent feature of
HIPERLAN/2. The CL is classified into two types, namely, the packet-based
CL and the cell-based CL. The packet-based CL processes variable-length
packets (such as IEEE 802.3, IP, and IEEE 1394). The cell-based CL processes
fixed sized ATM cells. The CL has two sublayers, namely, the common part
(CP) and the service-specific convergence sublayer (SSCS). The CP is
independent of the core network. It allows parallel segmentation and reassembly
of packets. The CP comprises two sublayers, namely, the common part
convergence sublayer (CPCS) and the segmentation and reassembly (SAR)
sublayer. The CPCS processes the packets from the higher layer and adds
padding and additional information, so that they can be segmented in the SAR.
The SSCS consists of
functions that are specific to the core network. For example, the Ethernet SSCS
has been standardized for Ethernet core networks. The SSCS adapts the
different data formats to the HIPERLAN/2 DLC format. It is also responsible
for mapping the QoS requests of the higher layers to the QoS parameters of
HIPERLAN/2 such as data rate, delay, and jitter.
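The padding-and-segmentation step performed by the CPCS and SAR can be sketched as follows. The 48-byte segment payload is an illustrative figure, not the exact HIPERLAN/2 PDU format.

```python
# Sketch of the CL common part's segmentation: a variable-length higher
# layer packet is padded and cut into fixed-size segments for the DLC.
SEG_PAYLOAD = 48    # illustrative segment payload size

def segment(packet: bytes):
    """Pad to a multiple of SEG_PAYLOAD, then cut into fixed-size parts.
    Returns the segments and the original length (needed to strip padding)."""
    pad_len = (-len(packet)) % SEG_PAYLOAD
    padded = packet + bytes(pad_len)
    return [padded[i:i + SEG_PAYLOAD]
            for i in range(0, len(padded), SEG_PAYLOAD)], len(packet)

def reassemble(segments, original_len):
    return b"".join(segments)[:original_len]   # strip the padding again

pkt = b"x" * 100                    # e.g. a short higher layer packet
segs, n = segment(pkt)
assert len(segs) == 3 and all(len(s) == SEG_PAYLOAD for s in segs)
assert reassemble(segs, n) == pkt
```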
The DLC Layer
The DLC layer constitutes the logical link between the AP and the MTs. This
ensures a connection-oriented communication in a HIPERLAN/2 network, in
contrast to the connectionless service offered by the IEEE standards. The DLC
layer is organized into three functional units, namely, the Radio Link Control
(RLC) sublayer on the control plane, the error control (EC) sublayer on the user
plane, and the MAC sublayer.
• Association Control Function (ACF): The ACF handles the registration and
the authentication functions of an MT with an AP within a radio cell. Only after
the ACF procedure has been carried out can the MT communicate with the
AP.
• DLC user Connection Control (DCC): The DCC function is used to control
DLC user connections. It can set up new connections, modify existing
connections, and terminate connections.
• Radio Resource Control (RRC): The RRC is responsible for the surveillance
and efficient utilization of the available frequency resources.
The MAC protocol is used for access to the medium, resulting in the
transmission of data through that channel. However, unlike the IEEE standards
and HIPERLAN/1, in which channel access is made by sensing the medium, the
MAC protocol here follows a dynamic time division multiple access/time division
duplexing (TDMA/TDD) scheme with centralized control. The protocol
supports both AP-MT unicast and multicast transfer, and at the same time MT-
MT peer-to-peer communication. The centralized AP scheduling provides QoS
support and collision-free transmission. The MAC protocol provides a
connection-oriented communication between the AP and the MT (or between
MTs).
Security Issues
1.5 BLUETOOTH
WLAN technology enables device connectivity to infrastructure-based services
through a wireless carrier provider. However, the need for personal devices to
communicate wirelessly with one another, without an established infrastructure,
has led to the emergence of Personal Area Networks (PANs). The first attempt to
define a standard for PANs dates back to Ericsson's Bluetooth project in 1994 to
enable communication between mobile phones using low power and low-cost
radio interfaces. In May 1998, several companies such as Intel, IBM, Nokia, and
Toshiba joined Ericsson to form the Bluetooth Special Interest Group (SIG)
whose aim was to develop a de facto standard for PANs. Recently, IEEE has
approved a Bluetooth-based standard (IEEE 802.15.1) for wireless personal area
networks (WPANs). The standard covers only the MAC and the physical layers
while the Bluetooth specification details the whole protocol stack. Bluetooth
employs radio frequency (RF) technology for communication. It makes use of
frequency modulation to generate radio waves in the ISM band. The low power
consumption of Bluetooth technology and its offered range of up to ten meters
have paved the way for several usage models. One can have an interactive
conference by establishing an ad hoc network of laptops. Cordless computer,
instant postcard [sending digital photographs instantly (a camera is cordlessly
connected to a mobile phone)], and three-in-one phone [the same phone
functions as an intercom (at the office, no telephone charge), cordless phone (at
home, a fixed-line charge), and mobile phone (on the move, a cellular charge)]
are other indicative usage models.
The radio part of the specification deals with the characteristics of the
transceivers and design specifications such as frequency accuracy, channel
interference, and modulation characteristics. The Bluetooth system operates in
the globally available ISM frequency band and the frequency modulation is
GFSK. It supports 64 Kbps voice channels and asynchronous data channels with
a peak rate of 1 Mbps. The data channels are either asymmetric (in one
direction) or symmetric (in both directions). The Bluetooth transceiver is a
FHSS system operating over a set of m channels, each of width 1 MHz. In most
countries, the value of m is 79. Frequency hopping is used and hops are
made at a rapid rate across the possible 79 channels in the band, starting at
2.4 GHz and stopping at 2.480 GHz. The choice of frequency hopping has been made to
provide protection against interference. The Bluetooth air interface is based on a
nominal antenna power of 0 dBm (1 mW) with extensions for operating at up to
20 dBm (100 mW) worldwide. The nominal link range is from 10 centimetres to
10 meters, but can be extended to more than 100 meters by increasing the
transmit power (using the 20 dBm option). It should be noted here that a WLAN
cannot use an antenna power of less than 0 dBm (1 mW) and hence an 802.11
solution might not be apt for power-constrained devices.
Baseband Layer
The key functions of this layer are frequency hop selection, connection creation,
and medium access control. Bluetooth communication takes place by the ad hoc
creation of a network called a piconet. The address and the clock associated
with each Bluetooth device are the two fundamental elements governing the
formation of a piconet. Every device is assigned a single 48-bit address which is
similar to the addresses of IEEE 802.xx LAN devices. The address field is
partitioned into three parts and the lower address part (LAP) is used in several
baseband operations such as piconet identification, error checking, and security
checks. The remaining two parts are proprietary addresses of the manufacturing
organizations. LAP is assigned internally by each organization. Every device
also has a 28-bit clock (called the native clock) that ticks 3,200 times per second
or once every 312.5 μs. It should be noted that this is twice the normal hopping
rate of 1,600 hops per second.
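The clock figures quoted above are mutually consistent, as a quick check shows.

```python
# Arithmetic behind the Bluetooth native clock: the 28-bit clock ticks
# 3,200 times per second, i.e. once every 312.5 us, which is twice the
# nominal hopping rate of 1,600 hops (625 us slots) per second.
TICKS_PER_SECOND = 3_200
TICK_US = 1_000_000 / TICKS_PER_SECOND
assert TICK_US == 312.5

HOPS_PER_SECOND = 1_600
SLOT_US = 1_000_000 / HOPS_PER_SECOND          # one hop per 625 us slot
assert SLOT_US == 625.0
assert TICKS_PER_SECOND == 2 * HOPS_PER_SECOND

# The 28-bit clock wraps roughly once a day:
wrap_seconds = 2 ** 28 / TICKS_PER_SECOND
assert 23 * 3600 < wrap_seconds < 24 * 3600    # about 23.3 hours
```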
Piconet
The initiator for the formation of the network assumes the role of the master (of
the piconet). All the other members are termed as slaves of the piconet. A
piconet can have up to seven active slaves at any instant. For the purpose of
identification, each active slave of the piconet is assigned a locally unique active
member address AM_ADDR. Other devices could also be part of the piconet by
being in the parked mode (explained later). A Bluetooth device not associated
with any piconet is said to be in standby mode. Figure 1.8 shows a piconet with
several devices.
Communication Channel
The channel is divided into time slots, each 625 μs in length. The time slots are
numbered according to the Bluetooth clock of the piconet master. A time
division duplex (TDD) scheme is used where master and slave alternately
transmit. The master starts its transmission in even-numbered time slots only,
and the slave starts its transmission in odd-numbered time slots only. This is
clearly illustrated in Figure 1.10 (a). The packet start shall be aligned with the
slot start. A Bluetooth device determines slot parity by looking at the least
significant bit (LSB) in the bit representation of its clock. If the LSB is set to 1, it is
a possible transmission slot for the slave. A slave in normal circumstances is
allowed to transmit only if in the preceding slot it has received a packet from the
master. A slave should know the master's clock and address to determine the
next frequency (from the FSM). This information is exchanged during paging.
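The slot rules above can be sketched at the level of slot numbers. This is a simplified model of the TDD scheme; the real rule is evaluated on bits of the Bluetooth clock.

```python
# Sketch of Bluetooth TDD slot alternation: slots are numbered from the
# master's clock; the master starts transmissions in even-numbered slots,
# and a slave in odd-numbered slots, but only after having received a
# packet from the master in the preceding slot.
def may_start_transmission(slot_number, role, slave_polled_last_slot=False):
    if role == "master":
        return slot_number % 2 == 0
    # slave: odd slots only, and only if the master addressed it just before
    return slot_number % 2 == 1 and slave_polled_last_slot

assert may_start_transmission(4, "master")
assert not may_start_transmission(5, "master")
assert may_start_transmission(5, "slave", slave_polled_last_slot=True)
assert not may_start_transmission(5, "slave")   # was not polled
```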
Inquiry State
As shown in Figure 1.9, a device which is initially in the standby state enters the
inquiry state. As its name suggests, the sole purpose of this state is to collect
information about other Bluetooth devices in its vicinity. This information
includes the Bluetooth address and the clock value, as these form the crux of the
communication between the devices. This state is classified into three substates:
inquiry, inquiry scan, and inquiry response. A potential master sends an inquiry
packet in the inquiry state on the inquiry hop sequence of frequencies. This
sequence is determined by feeding a common address as one of the inputs to the
FSM. A device (slave) that wants to be discovered will periodically enter the
inquiry scan state and listen for these inquiry packets. When an inquiry message
is received in the inquiry scan state, a response packet called the Frequency
Hopping Sequence (FHS) containing the responding device address must be
sent. Devices respond after a random jitter to reduce the chances of collisions.
Page State
A device enters this state to invite other devices to join its piconet. A device
could invite only the devices known to itself. So normally the inquiry operation
would precede this state. This state also is classified into three sub-states: page,
page scan, and page response. In the page mode, the master estimates the slave's
clock based on the information received during the inquiry state, to determine
where in the hop sequence the slave might be listening in the page scan mode.
In order to account for inaccuracies in estimation, the master also transmits the
page message through frequencies immediately preceding and succeeding the
estimated one. On receiving the page message, the slave enters the slave page
response substate. It sends back a page response consisting of its ID packet
which contains its Device Access Code (DAC). Finally, the master (after
receiving the response from a slave) enters the master page response substate
and informs the slave about its clock and address so that the slave can
participate in the piconet. The slave now calculates an offset to synchronize
with the master clock, and uses that to determine the hopping sequence for
communication in the piconet.
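The inquiry and page procedures above can be sketched as a small simulation. The class and function names below are illustrative, not from the Bluetooth specification; only the flow of address, clock, and offset follows the text.

```python
# Illustrative sketch of the Bluetooth inquiry and page procedures.
# Names are hypothetical; only the information flow follows the text.

import random

class Device:
    def __init__(self, address, clock):
        self.address = address   # Bluetooth device address
        self.clock = clock       # free-running native clock
        self.state = "standby"

def inquiry(master, slaves):
    """Master collects (address, clock) pairs from discoverable slaves.
    Slaves reply with an FHS packet after a random jitter to reduce collisions."""
    master.state = "inquiry"
    responses = []
    for s in slaves:
        s.state = "inquiry_scan"
        jitter = random.uniform(0, 0.01)          # random backoff before replying
        responses.append({"fhs": (s.address, s.clock), "jitter": jitter})
    # earlier responders are heard first
    return [r["fhs"] for r in sorted(responses, key=lambda r: r["jitter"])]

def page(master, slave, known):
    """Master pages a previously discovered slave; the slave answers with its
    DAC and then synchronizes to the master's clock via an offset."""
    assert slave.address in dict(known), "can only page a discovered device"
    est_clock = dict(known)[slave.address]        # clock estimate from inquiry
    slave.state = "page_response"                 # slave answers the page
    offset = master.clock - est_clock             # slave stores this offset to
    return offset                                 # follow the piconet hop sequence

m = Device("00:11:22", clock=1000)
s = Device("33:44:55", clock=400)
found = inquiry(m, [s])
off = page(m, s, found)
print(found, off)
```

The offset computed in the page step is what lets the slave derive the piconet hopping sequence from the master's clock.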
Power Management
The Bluetooth units can be in several modes of operation during the connection
state, namely, active mode, sniff mode, hold mode, and park mode. These
modes are now described.
• Active mode: In this mode, the Bluetooth unit actively participates in the
piconet. Various optimizations are provided to save power. For instance, if the
master informs the slave when it will be addressed, the slave may sleep until
then. The active slaves are polled by the master for transmissions.
• Sniff mode: This is a low-power mode in which the listening activity of the
slave is reduced. The LMP in the master issues a command to the slave to enter
the sniff mode, giving it a sniff interval, and the slave listens for transmissions
only at these fixed intervals.
• Hold mode: In this mode, the slave temporarily does not support ACL packets
on the channel (possible SCO links will still be supported). In this mode,
capacity is made available for performing other functions such as scanning,
paging, inquiring, or attending another piconet.
• Park mode: This is a very low-power mode. The slave gives up its active
member address and is given an eight-bit parked member address. The slave,
however, stays synchronized to the channel. Any messages to be sent to a
parked member are sent over the broadcast channel characterized by an active
member address of all zeros. Apart from saving power, the park mode helps the
master to have more than seven slaves (limited by the three-bit active member
address space) in the piconet.
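The addressing arithmetic behind the park mode can be made explicit. The reservation of one parked-address value below is an assumption for illustration; the all-zeros active member address being broadcast comes from the text.

```python
# Sketch of piconet addressing limits implied by the text: a 3-bit active
# member address (all-zeros reserved for broadcast) allows at most seven
# active slaves, while an 8-bit parked member address allows many more
# (assuming, hypothetically, that one value is reserved).

AM_ADDR_BITS = 3
PM_ADDR_BITS = 8

max_active_slaves = 2 ** AM_ADDR_BITS - 1   # 0b000 is the broadcast address
max_parked_slaves = 2 ** PM_ADDR_BITS - 1   # one reserved value assumed

print(max_active_slaves, max_parked_slaves)  # 7 255
```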
Bluetooth Security
In Bluetooth communications, devices may be authenticated and links may be
encrypted. The authentication of devices is carried out by means of a challenge
response mechanism which is based on a commonly shared secret link key
generated through a user-provided personal identification number (PIN). The
authentication starts with the transmission of an LMP challenge packet and ends
with the verification of result returned by the claimant. Optionally, the link
between them could also be encrypted.
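The challenge-response idea can be sketched as follows. Bluetooth actually uses its own E1 algorithm over SAFER+; SHA-256 below is a stand-in, and the PIN, key-derivation, and packet names are illustrative assumptions.

```python
# Illustrative challenge-response authentication over a shared link key.
# SHA-256 stands in for Bluetooth's E1 algorithm; names are hypothetical.

import hashlib, os

def derive_link_key(pin, rand):
    # link key derived from the user-provided PIN plus a shared random value
    return hashlib.sha256(pin + rand).digest()

def respond(link_key, challenge):
    # response = function of the shared secret and the challenge
    return hashlib.sha256(link_key + challenge).digest()

pin = b"1234"
init_rand = os.urandom(16)
key_a = derive_link_key(pin, init_rand)   # verifier's copy of the link key
key_b = derive_link_key(pin, init_rand)   # claimant's copy (same PIN)

challenge = os.urandom(16)                # LMP challenge packet payload
result = respond(key_b, challenge)        # claimant computes the response
assert result == respond(key_a, challenge)   # verifier checks the result
print("authenticated")
```

Only a device that derived the same link key from the same PIN can produce the expected response, which is exactly what the verification of the claimant's result checks.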
4. Serial and object exchange profiles: The serial port profile emulates a serial
line (RS232 and USB serial ports) for (legacy) applications that require a serial
line. The other profiles, generic object exchange, object push, file transfer, and
synchronization, are for exchanging objects between two wireless devices.
Bluetooth is the first wireless technology that has actually attempted to make
all household consumer electronics devices follow one particular communication
paradigm. It has been partially successful, but it does have its limitations.
Bluetooth communication currently does not provide support for
limitations. Bluetooth communication currently does not provide support for
routing. It should be noted that some research efforts are under way to
accommodate this in the Bluetooth specification. Once the routing provision is
given, inter-piconet communication could be enhanced. The issue of handoffs
has also not yet been dealt with. Although the master–slave architecture
has aided low cost, the master becomes the bottleneck for the whole piconet in
terms of performance, fault tolerance, and bandwidth utilization. Most
importantly, Bluetooth communication takes place in the same frequency band
as that of WLAN and hence robust coexistence solutions need to be developed
to avoid interference. The technology is still under development. Currently,
there are nearly 1,800 adopter companies which are contributing toward the
development of the technology.
1.6 HOME RF
Wireless home networking represents the use of the radio frequency
(RF) spectrum to transmit voice and data in confined areas such as homes and
small offices. One of the visionary concepts that home networking intends to
achieve is the establishment of communication between home appliances such
as computers, TVs, telephones, refrigerators, and air conditioners. Wireless
home networks have an edge over their wired counterparts because features
such as flexibility (enabling of file and drive sharing) and interoperability that
exist in the wired networks are coupled with those in the wireless domain,
namely, simplicity of installation and mobility. The HIPERLAN/2, as
mentioned earlier, has provisions for direct communication between the mobile
terminals (the home environment). The home environment enables election of a
central controller (CC) which coordinates the communication process. This
environment is helpful in setting up home networks. Apart from this, an industry
consortium known as the Home RF Working Group has developed a technology
that is termed HomeRF. This technology intends to integrate devices used in
homes into a single network and utilize RF links for communication. HomeRF
is a strong competitor to Bluetooth as it operates in the ISM band.
Technical Features
The HomeRF provides data rates of 1.6 Mbps, a little
higher than the Bluetooth rate, supporting both infrastructure-based and ad hoc
communications. It provides a guaranteed QoS delivery to voice-only devices
and best-effort delivery for data-only devices. The devices need to be plug-and-
play enabled; this needs automatic device discovery and identification in the
network. A typical HomeRF network consists of resource providers (through
which communication to various resources such as the cable modem and phone
lines is effected), and the devices connected to them (such as the cordless
phone, printers, and file servers). The HomeRF technology follows a protocol
called the shared wireless access protocol (SWAP). The protocol is used to set
up a network that provides access to a public network telephone, the Internet
(data), entertainment networks (cable television, digital audio, and video),
transfer and sharing of data resources (such as disks and printers), and home
control and automation. The SWAP has been derived from the IEEE 802.11 and
the European digitally enhanced cordless telephony (DECT) standards. It
employs a hybrid TDMA/CSMA scheme for channel access. While TDMA
handles isochronous transmission (similar to synchronous transmission,
isochronous transmission is also used for multimedia communication where
both the schemes have stringent timing constraints, but isochronous
transmission is not as rigid as synchronous transmission in which data streams
are delivered only at specific intervals), CSMA supports asynchronous
transmission (in a manner similar to that of the IEEE 802.11 standard), thereby
making the actual framing structure more complex. The SWAP, however,
differs from the IEEE 802.11 specification by not having the RTS-CTS
handshake since it is more economical to do away with the expensive
handshake; moreover, the hidden terminal problem does not pose a serious
threat in the case of small-scale networks such as the home networks. The
SWAP can support up to 127 devices, each identified uniquely by a 48-bit
network identifier. The supported devices can fall into one (or more) of the
following four basic types:
• Connection point that provides a gateway to the public switched telephone
network (PSTN), hence supporting voice and data services.
• Asynchronous data node that uses the CSMA/CA mechanism to communicate
with other nodes.
Infrared
The infrared technology, standardized by the Infrared Data Association (IrDA),
uses the infrared region of light for communication. Some of the characteristics
of these communications are as follows:
• The infrared rays can be blocked by obstacles, such as walls and buildings.
• The effective range of infrared communications is about one meter. But when
high power is used, it is possible to achieve better ranges.
• The cost of infrared devices is very low compared to that of Bluetooth devices.
Although the restriction of line of sight (LoS) is there on the infrared devices,
they are extremely popular because they are cheap and consume less power. The
infrared technology has been prevalent for a longer time than Bluetooth wireless
communications, and so has more widespread usage than Bluetooth. Table 1.2
compares the technical features of the Bluetooth, HomeRF, and IrDA technologies.
1.9 MOBILE IP
Each computer connected to the Internet has a unique IP address, which helps
not only in identifying the computer on the network but also routing the data to
the computer. The problem of locating a mobile host now becomes apparent, as
the assigned IP address can no longer be restricted to one region. The
first conceivable solution to the above problem would be to change the IP
address when the host moves from one subnet to another. In this way, its
address is consistent with the subnet it is currently in. The problem with
changing the IP address as the host moves is that TCP identifies its connection
with another terminal based on the IP address. Therefore, if the IP address itself
changes, the TCP connection must be reestablished. Another method would be
to continue to use the same IP address and add special routing entries for
tracking the current location of the user. This solution is practical if the number
of mobile users is small. The quick-fix solutions are inadequate, but they give
valuable insight into the nature of the mobility problem and offer certain
guidelines for the actual solution. Before providing the solution to the problem,
some issues of utmost importance need to be enumerated. These are as follows:
1.9.1 MobileIP
The essence of the MobileIP scheme is the use of the old IP address, but with a
few additional mechanisms to provide mobility support. The mobile node (MN) is
assigned another address, the care-of address (COA).
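The tunneling that makes the COA work can be sketched as follows. The dictionary fields are illustrative stand-ins for real IPv4 headers; the 20-byte figure is the per-packet encapsulation overhead quoted later in Section 1.9.4.

```python
# Sketch of the IP-in-IP encapsulation used to tunnel packets to the COA.
# Field layout is simplified; a minimal IPv4 header is 20 bytes, which is
# the per-packet overhead mentioned in the text.

IPV4_HEADER_BYTES = 20

def encapsulate(packet, ha_addr, coa):
    """HA wraps the original packet in an outer header addressed to the COA."""
    return {"src": ha_addr, "dst": coa, "payload": packet}

def decapsulate(outer):
    """The FA (or the MN itself) strips the outer header to recover the packet."""
    return outer["payload"]

original = {"src": "CN", "dst": "MN_home_addr", "payload": b"data"}
tunneled = encapsulate(original, "HA", "COA")
assert decapsulate(tunneled) == original
print(tunneled["dst"], IPV4_HEADER_BYTES)   # COA 20
```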
Reverse Tunneling
It appears that there should not be any problem for the MN in sending a packet
to the CN following path III. However, there are other practical constraints that
play an important role here.
1. Ingress filtering: There are some routers which filter the packets going out
of the network if the source IP address associated with them is not the subnet's
IP address. This is known as ingress filtering where the MN's packet may get
filtered in the foreign network if it uses its home IP address directly.
2. Firewalls: As a security measure, most firewalls will filter and drop packets
that originate from outside the local network, but appear to have a source
address of a node that belongs to the local network. Hence if MN uses its home
IP address and if these packets are sent to the home network, then they will be
filtered.
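The effect of ingress filtering can be shown with a short sketch: the border router forwards a packet only if its source address belongs to the router's own subnet, which is why a reverse tunnel (sourcing packets from the COA) is needed. The addresses below are example prefixes, not from the text.

```python
# Sketch of why ingress filtering drops an MN's packets in a foreign network:
# the border router only forwards packets whose source lies in its own subnet.

import ipaddress

def ingress_filter(packet_src, subnet):
    """Return True if the packet is allowed to leave the network."""
    return ipaddress.ip_address(packet_src) in ipaddress.ip_network(subnet)

foreign_subnet = "192.0.2.0/24"      # illustrative foreign-network prefix
home_addr = "198.51.100.7"           # MN's home address, not in the subnet
coa = "192.0.2.42"                   # COA obtained in the foreign network

assert not ingress_filter(home_addr, foreign_subnet)  # dropped: needs reverse tunnel
assert ingress_filter(coa, foreign_subnet)            # passes when sent via the COA
print("ingress filtering demonstrated")
```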
1.9.3 Route Optimization
The packets sent to and from the HA are routed on non-optimal paths, hence
the need for optimizations. The CN is assumed to be
mobility-aware, that is, it has the capability to de-encapsulate the packets from
the MN and send packets to the MN, bypassing the HA. The following are some
of the concepts related to optimization strategies.
• Binding cache: The CN can keep the mapping of MN's IP address and COA
in a cache. Such a cache is called a binding cache. Binding cache is used by the
CN to find the COA of the MN in order to optimize the path length. Like any
other cache, this may follow the update policies such as least recently used and
first-in-first-out.
• Binding request and binding update: The CN can find the binding using a
binding request message, to which the HA responds with a binding update
message.
• Binding warning: In some cases, a handoff may occur, but CN may continue
to use the old mapping. In such situations, the old FA sends a binding warning
message to HA, which in turn informs the CN about the change, using a binding
update message.
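A binding cache with a least-recently-used update policy, as mentioned above, can be sketched minimally. The capacity and entry names are illustrative.

```python
# Minimal LRU binding cache at the CN, mapping an MN's home address to its
# COA so packets can bypass the HA. Capacity and names are illustrative.

from collections import OrderedDict

class BindingCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()          # home address -> COA

    def update(self, home_addr, coa):          # binding update from the HA
        self.entries.pop(home_addr, None)
        self.entries[home_addr] = coa
        if len(self.entries) > self.capacity:  # evict the least recently used
            self.entries.popitem(last=False)

    def lookup(self, home_addr):               # None means: route via the HA
        coa = self.entries.get(home_addr)
        if coa is not None:
            self.entries.move_to_end(home_addr)  # refresh its LRU position
        return coa

cache = BindingCache(capacity=2)
cache.update("MN1", "COA-a")
cache.update("MN2", "COA-b")
cache.lookup("MN1")                # MN1 becomes most recently used
cache.update("MN3", "COA-c")       # evicts MN2, the least recently used
print(cache.lookup("MN2"), cache.lookup("MN1"))   # None COA-a
```

A lookup miss simply means the CN falls back to sending via the HA, so a stale or evicted entry costs only path length, not correctness.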
1.9.4 MobileIP Variations – The 4 × 4 Approach
As discussed in Section 1.9.1, MobileIP is a general-purpose solution to the
mobility problem over IPv4. It
uses encapsulation as a primary technique and thus introduces a huge overhead
(approximately 20 bytes per packet). In the MobileIP scheme, the MN is
dependent on the FA to provide a COA. The presence of the FA in all
transactions prevents the MN from being able to perform any kind of
optimization, and it is unable to forgo the MobileIP support even when it is not
required. The key factors that affect any optimization scheme are the
permissiveness of the network and the capabilities of the communicating nodes.
In the following strategy presented, it is presumed that the MN does not depend
on the FA for any support and it is able to acquire a COA from the subnet that it
is present in.
Goals of Optimizations
Any optimization scheme should try to ensure guaranteed delivery, low latency,
and low overhead. Deliverability is to be understood in terms of the traditional
datagram network that provides only a best-effort service. The latency issue
mainly deals with the route that is being followed by the packet from the source
to the destination, either in terms of the hop count or the delay. The overhead in
the MobileIP scheme is essentially the packet encapsulation overhead.
The 4 × 4 Approach
The strategy presented here provides four options for packets directed from the
MN to the CN (OUT approaches) and four more options for packets directed
from the CN to the MN (IN approaches). The set of options can be provided as
a 4 × 4 matrix to the hosts, which can decide on the appropriate combination
depending on the situation. The IN and OUT strategies are summarized in
Tables 1.4 and 1.5, respectively. s and d represent the outer source and
destination in the encapsulated packet while S and D represent the inner source
and destination of the packet (refer to Figure 1.14). Indirect transmission refers
to the routing of packets between the CN and MN involving the HA, whereas
direct transmission bypasses the HA. In Table 1.4 the four IN strategies are
listed along with the respective source and destination fields, and the
assumptions made and restrictions on usage of the strategies. For example, IN-IE
uses the traditional MobileIP mechanism and works in all network environments
irrespective of security considerations, while IN-DT is applicable for short-term
communication wherein the mobility support is compromised. In Table 1.5, the
four OUT strategies are listed. For example, OUT-IE uses the traditional
MobileIP reverse tunneling mechanism and works in all network scenarios,
while OUT-DH avoids encapsulation overhead but can be used only when the
MN and CN are in the same IP subnet.
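The s/d (outer) versus S/D (inner) notation for the named strategies can be illustrated in code. Since Tables 1.4 and 1.5 are not reproduced here, the exact field assignments below are an illustrative reading of the strategies named in the text, not the tables themselves.

```python
# Illustrative addressing for three of the strategies named in the text.
# s/d are the outer (encapsulating) source and destination, S/D the inner
# ones; outer fields of None mean direct, unencapsulated transmission.

def in_ie(cn, ha, coa, mn_home):
    # traditional MobileIP: CN -> HA, then HA tunnels to the COA
    return {"s": ha, "d": coa, "S": cn, "D": mn_home}

def out_ie(mn_home, coa, ha, cn):
    # reverse tunneling: MN encapsulates to the HA, which forwards to the CN
    return {"s": coa, "d": ha, "S": mn_home, "D": cn}

def out_dh(mn_addr, cn):
    # direct, no encapsulation: usable only when MN and CN share a subnet
    return {"s": None, "d": None, "S": mn_addr, "D": cn}

pkt = in_ie("CN", "HA", "COA", "MN")
print(pkt["d"], pkt["D"])   # COA MN
```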
1.9.5 Handoffs
A handoff is required when the MN is moving away from the FA it is connected
to, and as a result the signals transmitted to and from the current FA become
weak. If the MN can receive clearer signals from another FA, it breaks its
connection with the current FA and establishes a connection with the new one.
The typical phases involved in handoff are measuring the signal strength,
decisions regarding where and when to hand off, and the establishment of a new
connection breaking the old one.
Classification of Handoffs
The issues in handoffs are on the same lines as those in cellular networks.
Handoffs can be classified in three ways based on functionalities of the entities
involved, signaling procedure, and number of active connections. Function-
based classification is based on the roles of the MN and FA during the handoff.
Figure 1.15 shows the MN, BS, FA, and CN in the handoff scenario.
1. Mobile initiated handoff: In this case, the handoff is managed by the MN.
The MN measures the signal strength, decides the target base station (BS), and
triggers the handoff.
2. Mobile evaluated handoff: This is similar to the previous case except that
the decision on the handoff lies within the network, perhaps with the BS.
3. Network initiated handoff: In this case, the network (BS) decides where the
MN should be handed over. Also, only the network measures the signal strength
of the uplink and the MN has very little role to play.
4. Mobile assisted handoff: The MN assists the network in the network
initiated scenario by measuring the downlink signal strength. This is typically to
avoid a black hole scenario. A black hole scenario occurs when the channel
properties tend to be asymmetric. (Usually wireless channels are assumed to
have the same properties in both uplink and downlink, but in certain
circumstances the throughput on one of the directions may be significantly less
than the other. This scenario is referred to as a black hole.)
The second kind of classification is based on the number of active connections,
where the handoffs are classified into two types: the hard handoff (only one
active connection, to either the new or the old FA) and the soft handoff (two
active connections during the handoff).
Signaling procedure-based handoffs are classified into two types depending on
which FA (old FA or new FA) triggers the handoff along with the MN.
• Forward handoff: In this case, MN decides the target BS and then requests
the target BS to contact the current BS to initiate the handoff procedure.
• Backward handoff: In this case, MN decides the target BS and then requests
the current BS to contact the new one.
Fast Handoffs
A typical handoff takes a few seconds to break the old connection and establish
the new one. This delay may be split into three components: delay in detection
of a need for a handoff, layer 2 handoff (a data link connection that needs to be
established between the new FA and MN), and layer 3 handoff or registration
with HA. The first two components cannot be avoided; however, the delay due
to the third can be reduced. Also, if the above operations are parallelized, the
total delay will be reduced. Two techniques called pre- and post-registration
handoffs are employed to perform the above operations. The difference lies in
the order in which the operations are performed. In the case of the pre-
registration handoff, the registration with the HA takes place before the handoff
while the MN is still attached to the old FA, while in the case of the post-
registration handoff, registration takes place after the MN is connected to the
new FA. In this case, the MN continues to use the old FA, tunnelling data via
the new FA until the process of registration is completed.
• Detection of black holes: Sometimes it might happen that the signals of one of
the links (uplink or downlink) become weak while the other link has a good
signal strength. Such a phenomenon is known as a black hole because data can
go in one direction but cannot come out in the other. In such cases, a handoff
may be required. IPv6 allows both MN and BS to detect the need for a handoff
due to creation of black holes.
• IPv6 avoids overheads due to encapsulation because both the COA and the
original IP address are included in the same packet in two different fields. Apart
from these, IPv6 allows 2^128 addresses, thereby solving the IP address shortage
problem, and includes advanced QoS features. It also supports encryption and
decryption options to provide authentication and integrity.
CellularIP
CellularIP offers an alternative to the handoff detection problem by using the
MAC layer information based on the received signal strengths to detect
handoffs, instead of using the network layer information. The routing nodes
maintain both a paging cache and a routing cache; the routing cache is a
mapping between an MN's IP address and its current location in the CellularIP
domain. The paging cache is preferred for nodes that receive or send packets
relatively infrequently, and it is maintained by paging update packets sent by the
MN whenever it crosses between two APs. The routing cache will be updated
whenever the MN has a packet to send. The MN will send the packet to the
closest AP and this will update all routing caches all the way up to the ANG. It
is to be noted that during a handoff or just after the handoff, packets meant for
the MN will be routed to both the old as well as the current AP in charge of the
MN for a time interval equal to the routing cache timeout.
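The routing-cache behavior around a handoff can be sketched with timeouts. The timeout value and names below are illustrative assumptions; only the both-APs-during-timeout behavior follows the text.

```python
# Sketch of a CellularIP routing cache with a timeout: just after a handoff,
# both the old and the new mapping are still live, so packets are routed to
# both APs until the old entry expires. The timeout value is illustrative.

ROUTE_TIMEOUT = 5.0   # seconds; hypothetical value

class RoutingCache:
    def __init__(self):
        self.entries = {}          # MN address -> {AP name: last update time}

    def update(self, mn, ap, now):
        # refreshed whenever the MN sends a packet via this AP
        self.entries.setdefault(mn, {})[ap] = now

    def destinations(self, mn, now):
        """All APs whose mapping has not yet timed out."""
        live = {ap: t for ap, t in self.entries.get(mn, {}).items()
                if now - t < ROUTE_TIMEOUT}
        self.entries[mn] = live
        return sorted(live)

rc = RoutingCache()
rc.update("MN1", "AP-old", now=0.0)
rc.update("MN1", "AP-new", now=2.0)           # handoff: MN now sends via new AP
print(rc.destinations("MN1", now=3.0))        # both entries still live
print(rc.destinations("MN1", now=6.0))        # old entry has timed out
```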
TIMIP
In the Terminal Independent Mobility for IP (TIMIP) approach, the emphasis is
on providing a uniform service to both MobileIP-capable MNs as well as the
legacy terminals. The MobileIP capability of the legacy terminals will be
provided by the ANG. The ANG will keep track of the information regarding
each of the MNs in its domain such as the MN's MAC and IP addresses, the
MobileIP capabilities, and the authentication parameters. Whenever an MN
arrives in the TIMIP domain, a routing path has to be created in the domain so
that all packets intended for this host can be efficiently routed. This will cause a
trigger of updates to ensure route reconfiguration in the entire hierarchy. The
ARs not involved in the route will be unaware of the new path to the MN. As a
result, the default strategy for any packet in the TIMIP domain, that is, for any
IP address that is unknown at a particular AP or AR, will be to route it to the
ANG.
The common security problems that may arise in wireless networks are as
follows:
• Replay attacks: Many times the above problem is solved by making the
registration process encrypted. Though this appears to avoid the first problem,
the malicious node may copy the MN's registration packet, which is encrypted
when the MN tries to register with the FA. Though this packet cannot be
decoded by this malicious node, it can certainly use this packet for registering
itself as the MN at a later point of time, and hence enjoy all the facilities at the
cost of the MN.
• Tunnel hijacking: In this case, the malicious node uses the tunnel built by the
MN to break through the firewalls.
• FA can itself be a malicious node. The MN and HA share the same security
association and use message digest 5 (MD5), which produces a 128-bit digest. To
circumvent the problem of replay attacks, the MN and HA use a shared random
number (called a nonce), and this random number is sent along with the encrypted
registration request. On registration, the HA verifies the random number and
issues a new random number to be used for the next registration. Hence, even if
the packet is copied by the malicious node, it becomes useless for a replay
attack, as at the time of the next registration the random number would have
changed anyway.
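The nonce mechanism against replay attacks can be sketched as follows. The message layout and key-handling details are illustrative; only the verify-then-reissue logic follows the text.

```python
# Sketch of nonce-based replay protection: the HA accepts a registration only
# if it carries the current nonce, then issues a fresh one, so a copied
# registration packet is useless later. Message fields are hypothetical.

import os, hashlib

class HomeAgent:
    def __init__(self, shared_key):
        self.key = shared_key
        self.nonce = os.urandom(8)        # current nonce shared with the MN

    def register(self, request):
        expected = hashlib.md5(self.key + request["nonce"]).digest()
        if request["nonce"] != self.nonce or request["digest"] != expected:
            return None                    # stale nonce or bad digest: rejected
        self.nonce = os.urandom(8)         # fresh nonce for the next registration
        return self.nonce

key = b"shared-secret"
ha = HomeAgent(key)
req = {"nonce": ha.nonce, "digest": hashlib.md5(key + ha.nonce).digest()}
replay_copy = dict(req)                    # malicious node copies the packet

assert ha.register(req) is not None        # genuine registration accepted
assert ha.register(replay_copy) is None    # replayed copy rejected: nonce changed
print("replay defeated")
```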
The usual QoS parameters are delay, loss, throughput, and delay jitter.
Whenever an MN moves across from one agent to another, there is obviously a
change in the data flow path due to the handoff. The delay is likely to change
due to the change in the data flow path and also due to the fact that the new
location may vary from the old location with respect to congestion
characteristics. Again, if the new location is highly congested, the available
bandwidth is less, hence the throughput guarantees that were provided earlier
may be violated. In addition, under extreme cases there may be temporary
disconnections immediately following a handoff, which causes significant data
loss during the transit.
Requirements of a Mobility-Aware RSVP
MRSVP – Implementation
We have to identify proxy agents (PAs) that will make reservations on behalf of
mobile senders and receivers. There are two types of PAs: remote and local. A
local proxy agent (LPA) is that to which the MN is currently attached. Every
other agent in the MSPEC will be a remote proxy agent (RPA). The sender
periodically generates ACTIVE PATH messages, and for a mobile sender the
PAs will send PASSIVE PATH messages along the flow path to the destination.
Similarly, the PAs for a mobile receiver send the PASSIVE RESV messages
while the receiver itself sends the ACTIVE RESV message. The framework also
defines additional messages such as JoinGroup, RecvSpec, SenderSpec, and
SenderMSpec. The key issues in the implementation are as follows:
• The identification of proxy agents (local and remote) that will perform the
reservations on behalf of an MN.
• The identification of flow anchors (proxy agents), a Sender Anchor when the
MN is a sender and a Receiver Anchor when the MN is a receiver, that will act
as fixed points in the flow path.
• The establishment of both active and passive reservations (by the remote
proxy agents) for the MN according to the MSPEC.
• The actual message sequences that lead to the reservation depend on the type
of the flow and the strategy adopted.
The MRSVP scheme is an initial approach to providing QoS guarantees within
the MobileIP framework. The scheme considers both unicast and multicast
traffic for all types of senders and receivers. The significant contribution of the
approach is the notion of PASSIVE reservations that exist virtually on future
routers that the MN's data flow is likely to use, but will turn into real
reservations when the MN moves into the new domain.
1.10.1 Traditional TCP
TCP provides a connection-oriented, reliable, and byte stream service. The term
connection-oriented means the two applications using TCP must establish a
TCP connection with each other before they can exchange data. It is a full
duplex protocol, meaning that each TCP connection supports a pair of byte
streams, one flowing in each direction. TCP includes a flow-control mechanism
for each of these byte streams that allows the receiver to limit how much data
the sender can transmit. TCP also implements a congestion-control
mechanism. TCP divides the data stream to be sent into smaller segments and
assigns sequence numbers to them. The sequence number helps the receiver to
provide the higher layers with in-order packet delivery, and also detect losses.
The sliding window mechanism employed by TCP guarantees the reliable
delivery of data, ensures that the data is delivered in order, and enforces flow
control between the sender and the receiver. In the sliding-window process, the
sender sends several packets before awaiting acknowledgment of any of them,
and the receiver acknowledges several packets at a time by sending to the
transmitter the relative byte position of the last byte of the message that it has
received successfully. The number of packets to be sent before the wait for
acknowledgment (window size) is set dynamically, that is, it can change from
time to time depending on network conditions. Because the major cause of
packet loss in the wired domain is congestion, TCP assumes that any loss is due
to congestion. The TCP congestion control mechanism works as below.
Initially, the TCP sender sets the congestion window to the size of one
maximum TCP segment [also known as maximum segment size (MSS)]. The
congestion window gets doubled for each successful transmission of the current
window. This process continues until the size of the congestion window exceeds
the size of the receiver window or the TCP sender notices a timeout for any TCP
segment. The TCP sender interprets the timeout event as network congestion,
initializes a parameter called slow start threshold to half the current congestion
window size, and resets the congestion window size to one MSS. It then
continues to double the congestion window on every successful transmission
and repeats the process until the congestion window size reaches the slow start
window threshold. Once the threshold is reached, the TCP sender increases the
congestion window size by one MSS for each successful transmission of the
window. This mechanism whereby the congestion window size is brought down
to one MSS each time network congestion is detected and then is incremented
as described above is referred to as slow start. Another important characteristic
of TCP is fast retransmit and recovery. If the receiver receives packets out of
order, it continues to send the acknowledgment for the last packet received in
sequence. This indicates to the sender that some intermediate packet was lost
and the sender need not invoke the congestion control mechanism. The sender
then reduces the window size by half and retransmits the missing packet. This
avoids the slow start phase.
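The congestion-window evolution described above can be simulated in a few lines. The event-per-window granularity is a simplification: real TCP grows the window per acknowledged segment, but the shape of the curve is the same.

```python
# Simulation of the congestion-window evolution described in the text:
# exponential growth up to the slow start threshold, linear growth of one
# MSS per window after that, and a reset to one MSS on timeout. Units: MSS.

def tcp_cwnd_trace(events, ssthresh=64):
    cwnd, trace = 1, []
    for ev in events:
        if ev == "ack":                       # a whole window delivered
            if cwnd < ssthresh:
                cwnd *= 2                     # slow start: exponential growth
            else:
                cwnd += 1                     # congestion avoidance: linear
        elif ev == "timeout":                 # interpreted as congestion
            ssthresh = max(cwnd // 2, 1)      # threshold = half of current cwnd
            cwnd = 1                          # restart from one MSS
        trace.append(cwnd)
    return trace

events = ["ack"] * 4 + ["timeout"] + ["ack"] * 5
print(tcp_cwnd_trace(events))
# [2, 4, 8, 16, 1, 2, 4, 8, 9, 10]
```

Note how after the timeout the window doubles only up to 8 (half the window at which congestion was detected) and then grows by one MSS per window.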
1.10.5 Indirect TCP
This approach involves splitting the TCP connection into two distinct
connections: one TCP connection between the MN and the BS (in this context of
TCP over wireless, the terms BS and AP are used interchangeably), and another
TCP connection between the BS and the CN. Such a division splits the TCP
connection based on domain: the wireless domain and the wired domain.
The traditional TCP can be used in the wired part of the connection and some
optimized version of TCP can be used in the wireless counterpart. In this case,
the intermediate agent commonly known as the access point (AP) acts as a
proxy for MN. The indirect TCP (ITCP) mechanism is shown in Figure 1.18.
Loss of packets in the wireless domain, which would otherwise cause a
retransmission in the wired domain, is now avoided by using a customized
transport protocol between the AP and MN which accounts for the vagaries of
the wireless medium. The AP acknowledges CN for the data sent to MN and
buffers this data until it is successfully transmitted to MN. MN acknowledges
the AP alone for the data received. Handoff may take a longer time as all the
data acknowledged by AP and not transmitted to MN must be buffered at the
new AP.
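The AP's buffer-and-acknowledge behavior, and the handoff cost it creates, can be sketched minimally; the class and method names are illustrative.

```python
# Sketch of the I-TCP split: the AP acknowledges the CN immediately and
# buffers data until the wireless side delivers it to the MN. On handoff,
# the still-undelivered buffer must move to the new AP.

class AccessPoint:
    def __init__(self, name):
        self.name = name
        self.buffer = []                 # data ACKed to the CN but not yet at the MN

    def recv_from_cn(self, segment):
        self.buffer.append(segment)      # buffer first...
        return "ACK"                     # ...then acknowledge the CN at once

    def deliver_to_mn(self):
        return self.buffer.pop(0) if self.buffer else None

    def handoff_to(self, new_ap):
        new_ap.buffer.extend(self.buffer)   # undelivered data follows the MN
        self.buffer = []

ap1, ap2 = AccessPoint("AP1"), AccessPoint("AP2")
assert ap1.recv_from_cn(b"seg1") == "ACK"
assert ap1.recv_from_cn(b"seg2") == "ACK"
ap1.deliver_to_mn()                      # seg1 reaches the MN
ap1.handoff_to(ap2)                      # seg2 must be transferred: handoff cost
print(ap2.buffer)                        # [b'seg2']
```

The transfer in the last step is exactly why the text says handoffs may take longer under I-TCP: everything acknowledged to the CN but not yet delivered must be re-buffered at the new AP.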
1.10.6 Mobile TCP
The most common problem associated with the wireless domain is that quite
often the connection between MN and BS is lost for small intervals of time.
This typically happens when MN moves behind a huge building or MN enters
offices where the signals are filtered. In such cases, the sender will keep
transmitting and will eventually time out. In the case of ITCP, the data
buffered at the AP may grow too large. It may also lead to slow start. In such situations the
sender needs to be informed. This situation is handled in mobile TCP (M-TCP)
by the supervisory host (the node in the wired network that controls a number of
APs) which advertises the window size to be one, thus choking the sender and
hence avoiding slow start. Connection may be resumed when MN can be
contacted again. When the supervisory host receives a TCP packet, it forwards it
to the M-TCP client. Upon reception of an ACK from M-TCP client, the
supervisory host forwards the ACK to the TCP sender. Hence M-TCP maintains
the end-to-end TCP semantics even though the TCP connection is split at the
supervisory host. When the M-TCP client undergoes a temporary link break, the
supervisory host avoids forwarding the ACK of the last byte to the sender and
hence the sender TCP goes to the persist state by setting the window size to
zero. This avoids retransmission, closing of the congestion window, and slow
start at the sender.
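The M-TCP freeze trick amounts to manipulating the advertised window seen by the sender; the sketch below shows only that state transition, with illustrative names.

```python
# Sketch of M-TCP's freeze: on a link break the supervisory host withholds
# the last ACK and advertises a zero window, so the sender enters the persist
# state instead of timing out into slow start. Names are hypothetical.

class Sender:
    def __init__(self):
        self.state, self.cwnd = "sending", 16

    def on_ack(self, advertised_window):
        if advertised_window == 0:
            self.state = "persist"        # frozen: no retransmits, cwnd intact
        else:
            self.state = "sending"        # nonzero window reopens the flow

sender = Sender()
sender.on_ack(advertised_window=0)        # link break: zero window advertised
assert sender.state == "persist" and sender.cwnd == 16   # cwnd preserved
sender.on_ack(advertised_window=8)        # MN reachable again: resume at once
print(sender.state, sender.cwnd)          # sending 16
```

The point is that the congestion window survives the disconnection untouched, so transmission resumes at full rate instead of from one MSS.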
1.10.7 Explicit Loss Notification
Typically, the problem with TCP lies in the fact that it does not know the exact
cause of a packet loss, and hence has to invariably assume congestion loss.
Ideally, TCP would simply retransmit the lost packets without invoking the
congestion control mechanism. The MAC layer, however,
can identify the reason for the packet loss. Once the MAC layer detects that
either a handoff is about to occur or realizes that the actual cause of the packet
loss is not congestion, then it immediately informs the TCP layer of the
possibility of a non-congestion loss. The crux of the strategy is to detect loss at
MN and send an explicit loss notification (ELN) to the sender. The sender does
not reduce window size on receiving the ELN as this message implies that there
was an error and not congestion. This technique avoids slow start and can
handle encrypted data. However, the protocol layer software at the MAC layer
of MN needs to be changed. Further, the information conveyed by the MAC
layer may not always be reliable.
1.10.8 WTCP
WTCP aims at revamping the transport protocol for the wireless domain using
(c) mechanisms for detecting the reason for packet loss, and
1.10.10 Transaction-Oriented TCP
The TCP connection setup and connection tear-down phases involve a huge
overhead in terms of time and also in terms of the number of packets sent. This
overhead is very costly, especially if the size of the data is small. An alternative
for such transactions is transaction-oriented TCP (T/TCP). The motivation
behind this approach is to integrate the call setup, the call tear-down, and the
actual data transfer into a single transaction, thereby avoiding separate packets
for connecting and disconnecting. However, the flip side to the strategy is that
changes must be made to TCP, which goes against some of the fundamental
objectives that the changes to TCP must be transparent and must not affect the
existing framework. Table 1.6 shows a summary of the various approaches
discussed so far. The next section briefly describes the impact of mobility on the
performance of TCP.
Fast Retransmit/Recovery
The usual problem associated with handoffs is that the handoff may lead to
packet loss during transit, either as a result of the intermediary routers' failure to
allocate adequate buffers or their inability to forward the packets meant for the
MN to the new BS. The result of the packet loss during handoff is slow start.
The solution involves artificially forcing the sender to go into fast
retransmission mode immediately, by sending duplicate acknowledgments after
the handoff, instead of going into slow start. The advantage of the strategy is its
simplicity and the fact that it requires minimal changes to the existing TCP
structure. However, the scheme does not consider the fact that there may be
losses over the wireless links.
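The trick described above can be sketched as follows (illustrative names, not a real implementation): after a handoff the mobile node re-sends three duplicate acknowledgments for the last in-order byte, which trips the sender's standard fast-retransmit threshold instead of letting it time out into slow start.

```python
def acks_after_handoff(last_in_order_ack, dup_count=3):
    """ACK segments the MN emits immediately after completing a handoff."""
    return [last_in_order_ack] * dup_count


class Sender:
    """Toy sender: counts duplicate ACKs and switches to fast retransmit."""

    def __init__(self):
        self.dup_acks = 0
        self.mode = "normal"

    def on_ack(self, ack_no, last_seen):
        if ack_no == last_seen:          # duplicate of the highest ACK seen
            self.dup_acks += 1
            if self.dup_acks >= 3:
                self.mode = "fast_retransmit"   # retransmit, no slow start


snd = Sender()
for a in acks_after_handoff(5000):
    snd.on_ack(a, last_seen=5000)
print(snd.mode)   # fast_retransmit
```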
1.11 WAP
WAP stands for wireless application protocol. This name is a misnomer,
because WAP represents a suite of protocols rather than a single protocol. WAP
has today become the de facto standard for providing data and voice services to
wireless handheld devices. WAP aims at integrating a simple lightweight
browser also known as a micro-browser into handheld devices, thus requiring
minimal amounts of resources such as memory and CPU at these devices. WAP
tries to compensate for the shortfalls of the wireless handheld devices and the
wireless link (low bandwidth, low processing capabilities, high bit-error rate,
and low storage availability) by incorporating more intelligence into the
network nodes such as the routers, Web servers, and BSs. The primary
objectives of the WAP protocol suite are independence from the wireless
network standards, interoperability among service providers, overcoming the
shortfalls of the wireless medium (such as low bandwidth, high latency, low
connection stability, and high transmission cost per bit), overcoming the
drawbacks of handheld devices (small display, low memory, limited battery
power, and limited CPU power), increasing efficiency and reliability, and
providing security, scalability, and extensibility.
1.12.1 HTTP Drawbacks
The main protocol on which the Web operates today is the hypertext transfer
protocol (HTTP), which is optimized mainly for the wired world. It carries
considerable overhead, which is acceptable when network bandwidth is an
inexpensive resource, as in typical wired networks, but not over wireless links. HTTP has
drawbacks such as high connection overhead (a new TCP socket is opened for
every new HTML object), redundant capabilities transmission (information
regarding the browser capabilities is included in every HTTP request), and
verbosity (HTTP is ASCII-encoded and hence inherently verbose). The
WebExpress system suggests that an Intercept model be applied for Web access
over wireless interfaces. This allows the number of requests sent over the
wireless channel to be optimized, and also avoids the connection setup overhead
over the wireless interface. There are two main entities that are introduced into
the system: the client side interface (CSI) and the server side interface (SSI).
The CSI appears as a local Web proxy co-resident with the Web browser on the
wireless rendering device, say, a mobile phone or a PDA. The communication
between the CSI and the Web browser takes place through the loopback feature
of the TCP/IP suite (wherein the host sends a packet to itself using an IP address
like 127.0.0.1). The communication between the CSI and the SSI is the only
interaction over the wireless network and this uses a reduced HTTP, as
discussed later. The SSI communicates with the Web server over the wired
network. The SSI could typically be resident at the network gateway or at the
foreign agent (FA) in Mobile IP. The intercept model (Figure 4.10) is transparent to browsers and
servers, and is also insensitive to changes in HTTP/HTML technology.
• Header reduction: HTTP requests are prefixed with headers that indicate to
the origin server the rendering capabilities of the browser and also the various
content formats handled by it. The alternative to this is that the CSI sends this
information only in the first request, and the SSI records it. For every
subsequent request sent by the CSI, the SSI automatically inserts this capability
list into each packet meant for the origin server.
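The header-reduction exchange above can be sketched in a few lines. This is a hypothetical illustration (the CSI/SSI names come from the text, but the header set and values are made up): the CSI sends the capability headers once, the SSI caches them and re-inserts them into every later request before it reaches the origin server, so only the short form crosses the wireless link.

```python
CAPABILITY_HEADERS = {"User-Agent", "Accept", "Accept-Language"}


class CSI:
    """Client side interface: strips capability headers after the first request."""

    def __init__(self):
        self.sent_capabilities = False

    def reduce(self, request):
        if not self.sent_capabilities:
            self.sent_capabilities = True
            return request                                   # first request: full
        return {k: v for k, v in request.items()
                if k not in CAPABILITY_HEADERS}              # later: reduced


class SSI:
    """Server side interface: caches capabilities and restores them."""

    def __init__(self):
        self.cached = {}

    def restore(self, request):
        for k in CAPABILITY_HEADERS & request.keys():
            self.cached[k] = request[k]          # learn from the full request
        return {**self.cached, **request}        # re-insert for origin server


csi, ssi = CSI(), SSI()
req = {"Host": "example.com", "User-Agent": "micro-browser", "Accept": "text/html"}
ssi.restore(csi.reduce(dict(req)))   # first request crosses in full
wire = csi.reduce(dict(req))         # later requests are reduced
print(sorted(wire))                  # ['Host'] -- only this crosses the air
restored = ssi.restore(wire)
print(sorted(restored))              # ['Accept', 'Host', 'User-Agent']
```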
UNIT-II
Table 2.1. Differences between cellular networks and ad hoc wireless networks
2.1.2 Applications of Ad Hoc Wireless Networks
Ad hoc wireless networks, due to their quick and economically less demanding
deployment, find applications in several areas.
Some of these include: military applications, collaborative and distributed computing, emergency
operations, wireless mesh networks, wireless sensor networks, and hybrid wireless network
architectures.
Military Applications
Ad hoc wireless networks can be very useful in establishing communication among a group of
soldiers for tactical operations. Setting up a fixed infrastructure for communication among a
group of soldiers in enemy territories or in inhospitable terrains may not be possible. In such
environments, ad hoc wireless networks provide the required communication mechanism
quickly. Another application in this area can be the coordination of military objects moving at
high speeds such as fleets of airplanes or warships. Such applications require quick and reliable
communication. Secure communication is of prime importance as eavesdropping or other
security threats can compromise the purpose of communication or the safety of personnel
involved in these tactical operations. They also require the support of reliable and secure
multimedia multicasting. For example, the leader of a group of soldiers may want to give an
order to all the soldiers or to a set of selected personnel involved in the operation. Hence, the
routing protocol in these applications should be able to provide quick, secure, and reliable
multicast communication with support for real-time traffic.
As the military applications require very secure communication at any cost, the vehicle-mounted
nodes can be assumed to be very sophisticated and powerful. They can have multiple high-power
transceivers, each with the ability to hop between different frequencies for security reasons. Such
communication systems can be assumed to be equipped with long-life batteries that might not be
economically viable for normal usage. They can even use other services such as location tracking
[using the global positioning system (GPS)] or other satellite-based services for efficient
communication and coordination. Resource constraints such as battery life and transmitting
power may not exist in certain types of applications of ad hoc wireless networks. For example,
the ad hoc wireless network formed by a fleet of military tanks may not suffer from the power
source constraints present in the ad hoc network formed by a set of wearable devices used by the
foot soldiers. In short, the primary nature of the communication required in a military
environment enforces certain important requirements on ad hoc wireless networks, namely,
reliability, efficiency, secure communication, and support for multicast routing.
Collaborative and Distributed Computing
Another domain in which ad hoc wireless networks find application is collaborative
computing. The requirement of a temporary communication infrastructure for quick
communication with minimal configuration among a group of people in a conference or
gathering necessitates the formation of an ad hoc wireless network. For example, consider a
group of researchers who want to share their research findings or presentation materials during a
conference, or a lecturer distributing notes to the class on the fly. In such cases, the formation of
an ad hoc wireless network with the necessary support for reliable multicast routing can serve the
purpose. The distributed file sharing applications utilized in such situations do not require the
level of security expected in a military environment. But the reliability of data transfer is of high
importance. Consider the example where a node that is part of an ad hoc wireless network has to
distribute a file to other nodes in the network. Though this application does not demand the
communication to be interruption-free, the goal of the transmission is that all the desired
receivers must have the replica of the transmitted file. Other applications such as streaming of
multimedia objects among the participating nodes in an ad hoc wireless network may require
support for soft real-time communication. The users of such applications prefer economical and
portable devices, usually powered by battery sources. Hence, a mobile node may drain its battery
and can have varying transmission power, which may result in unidirectional links with its
neighbors. Devices used for such applications could typically be laptops with add-on wireless
interface cards, enhanced personal digital assistants (PDAs), or mobile devices with high
processing power. In the presence of such heterogeneity, interoperability is an important issue.
Emergency Operations
Ad hoc wireless networks are very useful in emergency operations such as search and rescue,
crowd control, and commando operations. The major factors that favor ad hoc wireless networks
for such tasks are self-configuration of the system with minimal overhead, independence from fixed
or centralized infrastructure, the nature of the terrain of such applications, the freedom and
flexibility of mobility, and the unavailability of conventional communication infrastructure. In
environments where the conventional infrastructure-based communication facilities are
destroyed due to a war or due to natural calamities such as earthquakes, immediate deployment
of ad hoc wireless networks would be a good solution for coordinating rescue activities. Since
the ad hoc wireless networks require minimum initial network configuration for their
functioning, very little or no delay is involved in making the network fully operational. The
above-mentioned scenarios are unexpected, in most cases unavoidable, and can affect a large
number of people. Ad hoc wireless networks employed in such circumstances should be
distributed and scalable to a large number of nodes. They should also be able to provide fault
tolerant communication paths. Real-time communication capability is also important since voice
communication predominates data communication in such situations.
Wireless Mesh Networks
Wireless mesh networks are ad hoc wireless networks that are formed to provide an alternate
communication infrastructure for mobile or fixed nodes/users, without the spectrum reuse
constraints and the requirements of network planning of cellular networks. The mesh topology of
wireless mesh networks provides many alternate paths for a data transfer session between a
source and destination, resulting in quick reconfiguration of the path when the existing path fails
due to node failures. Wireless mesh networks provide the most economical data transfer
capability coupled with the freedom of mobility. Since the infrastructure built is in the form of
small radio relaying devices fixed on the rooftops of the houses in a residential zone as shown in
Figure 2.10, or similar devices fitted on the lamp posts as depicted in Figure 2.11, the investment
required in wireless mesh networks is much less than what is required for the cellular network
counterparts. Such networks are formed by placing wireless relaying equipment spread across
the area to be covered by the network. The possible deployment scenarios of wireless mesh
networks include: residential zones (where broadband Internet connectivity is required),
highways (where a communication facility for moving automobiles is required), business zones
(where an alternate communication system to cellular networks is required), important civilian
regions (where a high degree of service availability is required), and university campuses (where
inexpensive campus-wide network coverage can be provided). Wireless mesh networks should
be capable of self-organization and maintenance. The ability of the network to overcome single
or multiple node failures resulting from disasters makes it convenient for providing the
communication infrastructure for strategic applications. The major advantages of wireless mesh
networks are support for a high data rate, quick and low cost of deployment, enhanced services,
high scalability, easy extendability, high availability, and low cost per bit. Wireless mesh
networks operate at the license-free ISM bands around 2.4 GHz and 5 GHz. Depending on the
technology used for the physical layer and MAC layer communication, data rates of 2 Mbps to
60 Mbps can be supported. For example, if IEEE 802.11a is used, a maximum data rate of 54
Mbps can be supported. The deployment time required for this network is much less than that
provided by other infrastructure-based networks. Incremental deployment or partial batch
deployment can also be done. Wireless mesh networks provide a very economical
communication infrastructure in terms of both deployment and data transfer costs. Services such
as smart environments that update information about the environment or locality to the visiting
nodes are also possible in such an environment. A truck driver, for example, can use enhanced
location discovery services to spot his location on an updated digital map. Mesh
networks scale well to provide support to a large number of nodes. Even at a very high density of
mobile nodes, by employing power control at the mobile nodes and relay nodes, better system
throughput and support for a large number of users can be achieved. But in the case of cellular
networks, improving scalability requires additional infrastructural nodes, which in turn involves
high cost. As mentioned earlier, mesh networks provide expandability of service in a cost
effective manner. Partial roll out and commissioning of the network and extending the service in
a seamless manner without affecting the existing installation are the benefits from the viewpoint
of service providers. Wireless mesh networks provide very high availability compared to the
existing cellular architecture, where the presence of a fixed base station that covers a much larger
area involves the risk of a single point of failure.
Wireless Sensor Networks
Sensor networks are a special category of ad hoc wireless networks. The issues that make
them a distinct category include the following:
• Mobility of nodes: Mobility of nodes is not a mandatory requirement in sensor networks. For
example, the nodes deployed for periodic monitoring of soil properties are not required to be
mobile. However, the sensor nodes that are fitted on the bodies of patients in a post-surgery ward
of a hospital may be designed to support limited or partial mobility. In general, sensor networks
need not in all cases be designed to support mobility of sensor nodes.
• Size of the network: The number of nodes in the sensor network can be much larger than that
in a typical ad hoc wireless network.
• Density of deployment: The density of nodes in a sensor network varies with the domain of
application. For example, military applications require high availability of the network, making
redundancy a high priority.
• Power constraints: The power constraints in sensor networks are much more stringent than
those in ad hoc wireless networks. This is mainly because the sensor nodes are expected to
operate in harsh environmental or geographical conditions, with minimum or no human
supervision and maintenance. In certain cases, the recharging of the energy source is impossible.
Running such a network, with nodes powered by a battery source with limited energy, demands
very efficient protocols at network, data link, and physical layer. The power sources used in
sensor networks can be classified into the following three categories:
– Replenishable power source: In certain applications of sensor networks, the power source can
be replaced when the existing source is fully drained (e.g., wearable sensors that are used to
sense body parameters).
– Regenerative power source: Power sources employed in sensor networks that belong to this
category have the capability of regenerating power from the physical parameter under
measurement. For example, the sensor employed for sensing temperature at a power plant can
use power sources that can generate power by using appropriate transducers.
– Non-replenishable power source: The power source cannot be replenished once the network has
been deployed; the node becomes non-functional on the exhaustion of the source.
• Data/information fusion: The limited bandwidth and power constraints demand aggregation
of bits and information at the intermediate relay nodes that are responsible for relaying. Data
fusion refers to the aggregation of multiple packets into one before relaying it. This mainly aims
at reducing the bandwidth consumed by redundant headers of the packets and reducing the media
access delay involved in transmitting multiple packets. Information fusion aims at processing the
sensed data at the intermediate nodes and relaying the outcome to the monitor node.
• Traffic distribution: The communication traffic pattern varies with the domain of application
in sensor networks. For example, the environmental sensing application generates short periodic
packets indicating the status of the environmental parameter under observation to a central
monitoring station. This kind of traffic demands low bandwidth. The sensor network employed
in detecting border intrusions in a military application generates traffic on detection of certain
events; in most cases these events might have time constraints for delivery. In contrast, ad hoc
wireless networks generally carry user traffic such as digitized and packetized voice stream or
data traffic, which demands higher bandwidth.
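The saving from data fusion described above can be quantified with a small back-of-the-envelope calculation. The header and reading sizes below are assumed values, chosen only to make the arithmetic concrete:

```python
HEADER_BYTES = 20     # assumed per-packet header size
READING_BYTES = 4     # assumed payload per sensor reading


def bytes_without_fusion(n_readings):
    """Each reading travels in its own packet: one header per reading."""
    return n_readings * (HEADER_BYTES + READING_BYTES)


def bytes_with_fusion(n_readings):
    """All readings are aggregated into one packet: one header total."""
    return HEADER_BYTES + n_readings * READING_BYTES


n = 10
print(bytes_without_fusion(n), bytes_with_fusion(n))   # 240 60
```

With these assumed sizes, fusing ten readings cuts the relayed traffic to a quarter, besides reducing the number of medium-access attempts from ten to one.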
Hybrid Wireless Networks
One of the major application areas of ad hoc wireless networks is in hybrid wireless architectures
such as multi-hop cellular networks (MCNs) and integrated cellular ad hoc relay (iCAR)
networks. The tremendous growth in the subscriber base of existing cellular networks has
shrunk the cell size up to the pico-cell level. The primary concept behind cellular networks is
geographical channel reuse. Several techniques such as cell sectoring, cell resizing, and multitier
cells have been proposed to increase the capacity of cellular networks. Most of these schemes
also increase the equipment cost. The capacity (maximum throughput) of a cellular network can
be increased if the network incorporates the properties of multi-hop relaying along with the
support of existing fixed infrastructure. MCNs combine the reliability and support of fixed base
stations of cellular networks with flexibility and multi-hop relaying of ad hoc wireless networks.
The MCN architecture is depicted in Figure 2.6. In this architecture, when two nodes (which are
not in direct transmission range) in the same cell want to communicate with each other, the
connection is routed through multiple wireless hops over the intermediate nodes. The base
station maintains the information about the topology of the network for efficient routing. The
base station may or may not be involved in this multi-hop path. Suppose node A wants to
communicate with node B. If all nodes are capable of operating in MCN mode, node A can reach
node B directly if the node B is within node A's transmission range. When node C wants to
communicate with node E and both are in the same cell, node C can reach node E through node
D, which acts as an intermediate relay node. Such hybrid wireless networks can provide higher
capacity, lowering the cost of communication below that of single-hop cellular
networks. The major advantages of hybrid wireless networks are as follows:
• Higher capacity than cellular networks obtained due to the better channel reuse provided by
reduction of transmission power, as mobile nodes use a power range that is a fraction of the cell
radius.
• Increased flexibility and reliability in routing. The flexibility is in terms of selecting the best
suitable nodes for routing, which is done through multiple mobile nodes or through base stations,
or by a combination of both. The increased reliability is in terms of resilience to failure of base
stations, in which case a node can reach other nearby base stations using multi-hop paths.
• Better coverage and connectivity in holes (areas that are not covered due to transmission
difficulties such as antenna coverage or the direction of antenna) of a cell can be provided by
means of multiple hops through intermediate nodes in the cell.
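The A-to-B and C-to-E scenarios above can be sketched as a small graph search. The topology, coordinates, and transmission range below are invented for illustration: a node reaches another directly if it is within range; otherwise a breadth-first search over the in-range neighbour graph finds a relay path such as C -> D -> E.

```python
from collections import deque
from math import dist

RANGE = 100.0   # assumed transmission range in metres

positions = {"A": (0, 0), "B": (80, 0),                        # A reaches B directly
             "C": (0, 200), "D": (90, 200), "E": (180, 200)}   # C needs D to reach E


def neighbours(u):
    """Nodes within transmission range of u."""
    return [v for v in positions if v != u and
            dist(positions[u], positions[v]) <= RANGE]


def relay_path(src, dst):
    """Shortest multi-hop path over in-range links, or None if unreachable."""
    queue, parent = deque([src]), {src: None}
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in neighbours(u):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None


print(relay_path("A", "B"))   # ['A', 'B'] -- single hop
print(relay_path("C", "E"))   # ['C', 'D', 'E'] -- relayed through D
```

In a real MCN the base station, which maintains the topology information, would run a search of this kind to pick the relay nodes.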
2.2 Issues in Ad Hoc Wireless Networks
The major issues that must be addressed in designing an ad hoc wireless network include
the following:
• Medium access scheme
• Routing
• Multicasting
• Pricing scheme
• Self-organization
• Security
• Energy management
• Scalability
• Deployment considerations
2.2.1 Medium Access Scheme
The primary responsibility of a medium access control (MAC) protocol in ad hoc
wireless networks is the distributed arbitration for the shared channel
for transmission of packets. The performance of any wireless network hinges on the MAC
protocol, more so for ad hoc wireless networks. The major issues to be considered in designing a
MAC protocol for ad hoc wireless networks are as follows:
• Distributed operation: The ad hoc wireless networks need to operate in environments where
no centralized coordination is possible. The MAC protocol design should be fully distributed
involving minimum control overhead. In the case of polling-based MAC protocols, partial
coordination is required.
• Synchronization: The MAC protocol design should take into account the requirement of time
synchronization. Synchronization is mandatory for TDMA-based systems for management of
transmission and reception slots. Synchronization involves usage of scarce resources such as
bandwidth and battery power. The control packets used for synchronization can also increase
collisions in the network.
• Hidden terminals: Hidden terminals are nodes that are hidden (or not reachable) from the
sender of a data transmission session, but are reachable to the receiver of the session. In such
cases, the hidden terminal can cause collisions at the receiver node. The presence of hidden
terminals can significantly reduce the throughput of a MAC protocol used in ad hoc wireless
networks. Hence the MAC protocol should be able to alleviate the effects of hidden terminals.
• Exposed terminals: Exposed terminals are nodes that, being within the transmission range of the
sender of an on-going session, are needlessly prevented from making a transmission. In order to improve
the efficiency of the MAC protocol, the exposed nodes should be allowed to transmit in a
controlled fashion without causing collision to the on-going data transfer.
• Throughput: The MAC protocol employed in ad hoc wireless networks should attempt to
maximize the throughput of the system. The important considerations for throughput
enhancement are minimizing the occurrence of collisions, maximizing channel utilization, and
minimizing control overhead.
• Access delay: The access delay refers to the average delay that any packet experiences to get
transmitted. The MAC protocol should attempt to minimize the delay.
• Fairness: Fairness refers to the ability of the MAC protocol to provide an equal share or
weighted share of the bandwidth to all competing nodes. Fairness can be either node-based or
flow-based. The former attempts to provide an equal bandwidth share for competing nodes
whereas the latter provides an equal share for competing data transfer sessions. In ad hoc
wireless networks, fairness is important due to the multi-hop relaying done by the nodes. An
unfair relaying load for a node results in draining the resources of that node much faster than that
of other nodes.
• Ability to measure resource availability: In order to handle the resources such as bandwidth
efficiently and perform call admission control based on their availability, the MAC protocol
should be able to provide an estimation of resource availability at every node. This can also be
used for making congestion-control decisions.
• Capability for power control: The transmission power control reduces the energy
consumption at the nodes, causes a decrease in interference at neighboring nodes, and increases
frequency reuse. Support for power control at the MAC layer is very important in the ad hoc
wireless environment.
• Adaptive rate control: This refers to the variation in the data bit rate achieved over a channel.
A MAC protocol that has adaptive rate control can make use of a high data rate when the sender
and receiver are nearby and adaptively reduce the data rate as they move away from each other.
• Use of directional antennas: This has many advantages that include increased spectrum reuse,
reduction in interference, and reduced power consumption. Most of the existing MAC protocols
that use omnidirectional antennas do not work with directional antennas.
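The hidden- and exposed-terminal definitions given above can be checked geometrically. All coordinates and the range value below are invented: for a transmission S -> R, a node is hidden if it can reach the receiver R but not the sender S (so it may cause collisions at R), and exposed if it hears the sender S but is out of range of the receiver R (so it could safely transmit).

```python
from math import dist

RANGE = 100.0   # assumed transmission range
nodes = {"S": (0, 0), "R": (90, 0),     # sender and receiver
         "H": (180, 0),                 # near R, far from S
         "E": (-90, 0)}                 # near S, far from R


def in_range(a, b):
    return dist(nodes[a], nodes[b]) <= RANGE


def classify(n, sender="S", receiver="R"):
    """Classify node n relative to the ongoing transmission sender -> receiver."""
    if in_range(n, receiver) and not in_range(n, sender):
        return "hidden"     # may collide at the receiver
    if in_range(n, sender) and not in_range(n, receiver):
        return "exposed"    # silenced although it would not interfere
    return "neither"


print(classify("H"), classify("E"))   # hidden exposed
```

Mechanisms such as a control handshake before data transfer attempt to silence exactly the "hidden" class while freeing the "exposed" class to transmit.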
2.2.2 Routing
The responsibilities of a routing protocol include exchanging the route
information; finding a feasible path to a destination based on criteria such as hop length,
minimum power required, and lifetime of the wireless link; gathering information about the path
breaks; mending the broken paths expending minimum processing power and bandwidth; and
utilizing minimum bandwidth. The major challenges that a routing protocol faces are as follows:
• Mobility: One of the most important properties of ad hoc wireless networks is the mobility
associated with the nodes. The mobility of nodes results in frequent path breaks, packet
collisions, transient loops, stale routing information, and difficulty in resource reservation. A
good routing protocol should be able to efficiently solve all the above issues.
• Bandwidth constraint: Since the channel is shared by all nodes in the broadcast region (any
region in which all nodes can hear all other nodes), the bandwidth available per wireless link
depends on the number of nodes and the traffic they handle. Thus only a fraction of the total
bandwidth is available for every node.
• Error-prone and shared channel: The bit error rate (BER) in a wireless channel is very high
(of the order of 10^-5 to 10^-3) compared to that in its wired counterparts (of the order of 10^-12 to 10^-9).
Routing protocols designed for ad hoc wireless networks should take this into account.
Consideration of the state of the wireless link, signal-to-noise ratio, and path loss for routing in
ad hoc wireless networks can improve the efficiency of the routing protocol.
• Location-dependent contention: The load on the wireless channel varies with the number of
nodes present in a given geographical region. This makes the contention for the channel high
when the number of nodes increases. The high contention for the channel results in a high
number of collisions and a subsequent wastage of bandwidth. A good routing protocol should
have built-in mechanisms for distributing the network load uniformly across the network so that
the formation of regions where channel contention is high can be avoided.
• Other resource constraints: The constraints on resources such as computing power, battery
power, and buffer storage also limit the capability of a routing protocol.
The major requirements of a routing protocol in ad hoc wireless networks are the following:
• Minimum route acquisition delay: The route acquisition delay for a node that does not have a
route to a particular destination node should be as minimal as possible. This delay may vary with
the size of the network and the network load.
• Quick route reconfiguration: The unpredictable changes in the topology of the network
require that the routing protocol be able to quickly perform route reconfiguration in order to
handle path breaks and subsequent packet losses.
• Minimum control overhead: The control packets exchanged for finding a new route and
maintaining existing routes should be kept as minimal as possible. The control packets consume
precious bandwidth and can cause collisions with data packets, thereby reducing network
throughput.
• Scalability: Scalability is the ability of the routing protocol to scale well (i.e., perform
efficiently) in a network with a large number of nodes. This requires minimization of control
overhead and adaptation of the routing protocol to the network size.
• Provisioning of QoS: The routing protocol should be able to provide a certain level of QoS as
demanded by the nodes or the category of calls. The QoS parameters can be bandwidth, delay,
jitter, packet delivery ratio, and throughput. Supporting differentiated classes of service may be
of importance in tactical operations.
• Support for time-sensitive traffic: Tactical communications and similar applications require
support for time-sensitive traffic. The routing protocol should be able to support both hard
realtime and soft real-time traffic.
• Security and privacy: The routing protocol in ad hoc wireless networks must be resilient to
threats and vulnerabilities. It must have inbuilt capability to avoid resource consumption,
denial-of-service, impersonation, and similar attacks possible against an ad hoc wireless network.
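The BER figures quoted under the routing challenges above translate directly into packet-level loss rates. Assuming independent bit errors, a packet of n bits is lost with probability 1 - (1 - BER)^n; the packet size below is an assumed example value:

```python
def packet_loss_prob(ber, packet_bits):
    """Probability a packet is corrupted, assuming independent bit errors."""
    return 1 - (1 - ber) ** packet_bits


n = 8 * 1000                           # a 1000-byte packet
wired = packet_loss_prob(1e-9, n)      # BER typical of wired links
wireless = packet_loss_prob(1e-4, n)   # BER typical of wireless links
print(f"wired: {wired:.2e}  wireless: {wireless:.2f}")   # wireless losses dominate
```

With a BER of 10^-4, more than half of the 1000-byte packets are corrupted, which is why routing (and transport) protocols for ad hoc wireless networks must account for link error state rather than treating every loss as congestion.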
2.2.3 Multicasting
Multicasting plays an important role in the typical applications of ad hoc wireless networks,
namely, emergency search-and-rescue operations and military communication. In such an
environment, nodes form groups to carry out certain tasks that require point-to-multipoint and
multipoint-to-multipoint voice and data communication. The arbitrary movement of nodes
changes the topology dynamically in an unpredictable manner. The mobility of nodes, with the
constraints of power source and bandwidth, makes multicast routing very challenging.
Traditional wired network multicast protocols such as core based trees (CBT), protocol
independent multicast (PIM), and distance vector multicast routing protocol (DVMRP) do not
perform well in ad hoc wireless networks because a tree-based multicast structure is highly
unstable and needs to be frequently readjusted to repair broken links. Use of any global routing
structure such as the link-state table results in high control overhead. The use of single-link
connectivity among the nodes in a multicast group results in a tree-shaped multicast routing
topology. Such a tree-shaped topology provides high multicast efficiency, but a low packet
delivery ratio due to the frequent tree breaks. Provisioning of multiple links among the nodes in
an ad hoc wireless network results in a mesh-shaped structure. The mesh-based multicast routing
structure may work well in a high-mobility environment. The major issues in designing multicast
routing protocols are as follows:
• Robustness: The multicast routing protocol must be able to recover and reconfigure quickly
from potential mobility-induced link breaks thus making it suitable for use in highly dynamic
environments.
• Control overhead: The scarce bandwidth availability in ad hoc wireless networks demands
minimal control overhead for the multicast session.
• Quality of service: QoS support is essential in multicast routing because, in most cases, the
data transferred in a multicast session is time-sensitive.
• Efficient group management: Group management refers to the process of accepting multicast
session members and maintaining the connectivity among them until the session expires. This
process of group management needs to be performed with minimal exchange of control
messages.
• Scalability: The multicast routing protocol should be able to scale for a network with a large
number of nodes.
• QoS parameters: As different applications have different requirements, their level of QoS and
the associated QoS parameters also differ from application to application. For example, for
multimedia applications, the bandwidth and delay are the key parameters, whereas military
applications have the additional requirements of security and reliability. For defense
applications, finding trustworthy intermediate hosts and routing through them can be a QoS
parameter. For applications such as emergency search-and-rescue operations, availability is the
key QoS parameter. Multiple link disjoint paths can be the major requirement for such
applications. Applications for hybrid wireless networks can have maximum available link life,
delay, channel utilization, and bandwidth as the key parameters for QoS. Finally, applications
such as communication among the nodes in a sensor network require that the transmission
among them results in minimum energy consumption, hence battery life and energy conservation
can be the prime QoS parameters here.
• QoS-aware routing: The first step toward a QoS-aware routing protocol is to have the routing
use QoS parameters for finding a path. The parameters that can be considered for routing
decisions are network throughput, packet delivery ratio, reliability, delay, delay jitter, packet loss
rate, bit error rate, and path loss. Decisions on the level of QoS and the related parameters for
such services in ad hoc wireless networks are application-specific and are to be met by the
underlying network. For example, in the case where the QoS parameter is bandwidth, the routing
protocol utilizes the available bandwidth at every link to select a path with necessary bandwidth.
This also demands the capability to reserve the required amount of bandwidth for that particular
connection.
• QoS framework: A framework for QoS is a complete system that attempts to provide the
promised services to each user or application. All the components within this subsystem should
cooperate in providing the required services. The key component of QoS framework is a QoS
service model which defines the way user requirements are served. The key design issue is
whether to serve the user on a per-session basis or a per-class basis. Each class represents an
aggregation of users based on certain criteria. The other key components of this framework are
QoS routing to find all or some feasible paths in the network that can satisfy user requirements,
QoS signaling for resource reservation required by the user or application, QoS medium access
control, connection admission control, and scheduling schemes pertaining to that service model.
The QoS modules such as routing protocol, signaling protocol, and resource management should
react promptly according to changes in the network state (topology change in ad hoc wireless
networks) and flow state (change in end-to-end view of service delivered).
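To make the QoS-aware routing idea above concrete, the sketch below selects the path whose bottleneck link has the largest available bandwidth, the kind of decision a bandwidth-aware routing protocol must make before reserving resources. The topology and bandwidth figures are purely hypothetical.

```python
def widest_path(links, src, dst):
    """links: dict mapping (u, v) -> available bandwidth on that link.
    Returns (path, bottleneck) maximizing the minimum per-link bandwidth."""
    # Build an adjacency list from the undirected link set.
    adj = {}
    for (u, v), bw in links.items():
        adj.setdefault(u, []).append((v, bw))
        adj.setdefault(v, []).append((u, bw))
    # Dijkstra-like search that maximizes the bottleneck bandwidth.
    best = {src: float("inf")}
    prev = {}
    visited = set()
    while True:
        frontier = [n for n in best if n not in visited]
        if not frontier:
            break
        u = max(frontier, key=lambda n: best[n])
        visited.add(u)
        for v, bw in adj.get(u, []):
            bottleneck = min(best[u], bw)
            if bottleneck > best.get(v, 0):
                best[v] = bottleneck
                prev[v] = u
    if dst not in best:
        return None, 0
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], best[dst]
```

For instance, with links A-B (2 Mbps), B-C (5 Mbps), and A-C (1 Mbps), the two-hop path A-B-C is preferred over the direct A-C link because its bottleneck (2 Mbps) is wider.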
2.2.7 Self-Organization
One very important property that an ad hoc wireless network
should exhibit is organizing and maintaining the network by itself. The major activities that an ad
hoc wireless network is required to perform for self-organization are neighbor discovery,
topology organization, and topology reorganization. During the neighbor discovery phase, every
node in the network gathers information about its neighbors and maintains that information in
appropriate data structures. This may require periodic transmission of short packets named
beacons, or promiscuous snooping on the channel for detecting activities of neighbors. Certain
MAC protocols permit varying the transmission power to improve upon spectrum reusability. In
the topology organization phase, every node in the network gathers information about the entire
network or a part of the network in order to maintain topological information. During the
topology reorganization phase, the ad hoc wireless networks require updating the topology
information by incorporating the topological changes occurred in the network due to the mobility
of nodes, failure of nodes, or complete depletion of power sources of the nodes. The
reorganization consists of two major activities. The first is the periodic or aperiodic exchange of
topological information. Second is the adaptability (recovery from major topological changes in
the network). Similarly, network partitioning and merging of two existing partitions require
major topological reorganization. Ad hoc wireless networks should be able to perform self-
organization quickly and efficiently in a way transparent to the user and the application.
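The beacon-driven neighbor discovery described above can be sketched as follows; the class and its timeout policy are illustrative, not taken from any particular protocol. Each node records when it last heard a beacon from a neighbor and purges entries that have not been refreshed within a timeout, which is one simple way the reorganization phase can detect node departures.

```python
class NeighborTable:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heard = {}   # neighbor id -> time of last beacon

    def on_beacon(self, neighbor, now):
        # Refresh (or create) the entry for this neighbor.
        self.last_heard[neighbor] = now

    def neighbors(self, now):
        # Drop stale entries, then report the live neighbor set.
        self.last_heard = {n: t for n, t in self.last_heard.items()
                           if now - t <= self.timeout}
        return set(self.last_heard)
```

A node that misses several consecutive beacons from a neighbor simply ages out of the table, triggering a topology update.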
2.2.8 Security
The security of communication in ad hoc wireless networks is very important,
especially in military applications. The lack of any central coordination and shared wireless
medium makes them more vulnerable to attacks than wired networks. The attacks against ad hoc
wireless networks are generally classified into two types: passive and active attacks. Passive
attacks refer to the attempts made by malicious nodes to perceive the nature of activities and to
obtain information transacted in the network without disrupting the operation. Active attacks
disrupt the operation of the network. Those active attacks that are executed by nodes outside the
network are called external attacks, and those that are performed by nodes belonging to the same
network are called internal attacks. Nodes that perform internal attacks are compromised nodes.
The major security threats that exist in ad hoc wireless networks are as follows:
• Denial of service: The attack effected by making the network resource unavailable for service
to other nodes, either by consuming the bandwidth or by overloading the system, is known as
denial of service (DoS). A simple scenario in which a DoS attack interrupts the operation of ad
hoc wireless networks is by keeping a target node busy by making it process unnecessary
packets.
• Resource consumption: The scarce availability of resources in ad hoc wireless network makes
it an easy target for internal attacks, particularly aiming at consuming resources available in the
network. The major types of resource-consumption attacks are the following:
– Energy
depletion: Since the nodes in ad hoc wireless networks are highly constrained by the energy
source, this type of attack is basically aimed at depleting the battery power of critical nodes by
directing unnecessary traffic through them.
– Buffer overflow: The buffer overflow attack is carried out either by filling the routing table
with unwanted routing entries or by consuming the data packet buffer space with unwanted data.
Such attacks can lead to a large number of data packets being dropped, leading to the loss of
critical information. Routing table attacks can lead to many problems, such as preventing a node
from updating route information for important destinations and filling the routing table with
routes for nonexistent destinations.
• Host impersonation: A compromised internal node can act as another node and respond with
appropriate control packets to create wrong route entries, and can terminate the traffic meant for
the intended destination node.
• Information disclosure: A compromised node can act as an informer by deliberate disclosure
of confidential information to unauthorized nodes. Information such as the amount and the
periodicity of traffic between a selected pair of nodes and pattern of traffic changes can be very
valuable for military applications. The use of filler traffic (traffic generated for the sole purpose
of changing the traffic pattern) may not be suitable in resource-constrained ad hoc wireless
networks.
• Transmission power management: The power consumed by the radio frequency (RF) module
of a mobile node is determined by several factors such as the state of operation, the transmission
power, and the technology used for the RF circuitry. The state of operation refers to the transmit,
receive, and sleep modes of the operation. The transmission power is determined by the
reachability requirement of the network, the routing protocol, and the MAC protocol employed.
The RF hardware design should ensure minimum power consumption in all the three states of
operation. Going to the sleep mode when not transmitting or receiving can be done by additional
hardware that can wake up on reception of a control signal. Power conservation responsibility
lies across the data link, network, transport, and application layers. By designing a data link layer
protocol that reduces unnecessary retransmissions, by preventing collisions, by switching to
standby mode or sleep mode whenever possible, and by reducing the transmit/receive switching,
power management can be performed at the data link layer. The use of a variable power MAC
protocol can lead to several advantages that include energy saving at the nodes, increase in
bandwidth reuse, and reduction in interference. Also, MAC protocols for directional antennas are
in their infancy. The network layer routing protocols can consider battery life and relaying load
of the intermediate nodes while selecting a path so that the load can be balanced across the
network, in addition to optimizing and reducing the size and frequency of control packets. At the
transport layer, reducing the number of retransmissions, and recognizing and handling the reason
behind the packet losses locally, can be incorporated into the protocols. At the application layer,
the power consumption varies with applications. In a mobile computer, the image/video
processing/playback software and 3D gaming software consume higher power than other
applications. Hence application software developed for mobile computers should take into
account the aspect of power consumption as well.
• Battery energy management: The battery management is aimed at extending the battery life
of a node by taking advantage of its chemical properties, discharge patterns, and by the selection
of a battery from a set of batteries available for redundancy. Recent studies have shown that
pulsed discharge of a battery gives longer life than continuous discharge. Controlling the
charging rate and discharging rate of the battery is important in avoiding early charging to the
maximum charge or full discharge below the minimum threshold. This can be achieved by
means of embedded charge controllers in the battery pack. Also, the protocols at the data link
layer and network layer can be designed to make use of the discharge models. Monitoring of the
battery for voltage levels, remaining capacity, and temperature so that proactive actions (such as
incremental powering off of certain devices, or shutting down of the mobile node when the
voltage crosses a threshold) can be taken is required.
• Processor power management: The clock speed and the number of instructions executed per
unit time are some of the processor parameters that affect power consumption. The CPU can be
put into different power saving modes during low processing load conditions. The CPU power
can be completely turned off if the machine is idle for a long time. In such cases, interrupts can
be used to turn on the CPU upon detection of user interaction or other events.
• Devices power management: Intelligent device management can reduce power consumption
of a mobile node significantly. This can be done by the operating system (OS) by selectively
powering down interface devices that are not used or by putting devices into different power
saving modes, depending on their usage. Advanced power management features built into the
operating system and application software for managing devices effectively are required.
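The claim above that sleep-mode time dominates the power savings can be illustrated with a small energy budget. The per-state power figures below are hypothetical, chosen only to show the relative magnitudes of transmit, receive, and sleep power.

```python
# Hypothetical per-state power draw of an RF module, in milliwatts.
POWER_MW = {"tx": 1400, "rx": 900, "sleep": 15}

def energy_mj(schedule):
    """schedule: list of (state, seconds) pairs. Returns energy in mJ."""
    return sum(POWER_MW[state] * secs for state, secs in schedule)

# Ten seconds of operation: always-on listening versus duty cycling.
always_on = energy_mj([("rx", 9.0), ("tx", 1.0)])
duty_cycled = energy_mj([("sleep", 8.0), ("rx", 1.0), ("tx", 1.0)])
```

Even with these rough numbers, the duty-cycled node consumes only about a quarter of the energy of the always-on node, which is why data link protocols that switch to sleep mode whenever possible pay off.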
2.2.11 Scalability
Even though the number of nodes in an ad hoc wireless network does not grow to the same
magnitude as today's Internet, the operation of a large number of nodes in ad hoc mode is not
far away. Traditional applications such as military, emergency operations, and crowd control
may not lead to such a big ad hoc wireless network. Commercial deployments of ad hoc wireless
networks that include wireless mesh networks show early trends for a widespread installation of
ad hoc wireless networks for mainstream wireless communication. Scaling to such sizes raises several problems. For example, the latency of
path-finding involved with an on-demand routing protocol in a large ad hoc wireless network
may be unacceptably high. Similarly, the periodic routing overhead involved in a table-driven
routing protocol may consume a significant amount of bandwidth in such large networks. Also a
large ad hoc wireless network cannot be expected to be formed by homogeneous nodes, raising
issues such as widely varying resource capabilities across the nodes. A hierarchical topology-
based system and addressing may be more suitable for large ad hoc wireless networks. Hybrid
architectures that combine multi-hop radio relaying with fixed infrastructure may improve
scalability.
• Low cost of deployment: The use of multi-hop wireless relaying essentially eliminates the
requirement of laying cables and maintenance in a commercial deployment of communication
infrastructure. Hence the cost involved is much lower than that of wired networks.
• Short deployment time: Compared to wired networks, the deployment time is considerably
less due to the absence of any wired links. Also, wiring a dense urban region is extremely
difficult and time-consuming in addition to the inconvenience caused.
• Reconfigurability: The cost involved in reconfiguring a wired network covering a
metropolitan area network (MAN) is very high compared to that of an ad hoc wireless network
covering the same service area. Also, the incremental deployment of ad hoc wireless networks
might demand changes in the topology of the fixed part (e.g., the relaying devices fixed on lamp
posts or rooftops) of the network at a later stage. The issues and solutions for deployment of ad
hoc wireless networks vary with the type of applications and the environment in which the
networks are to be deployed. The following are the major issues to be considered in deploying an
ad hoc wireless network:
– Commercial wide-area deployment: One example of this deployment scenario is the wireless
mesh networks. The aim of the deployment is to provide an alternate communication
infrastructure for wireless communication in urban areas and areas where a traditional cellular
base station cannot handle the traffic volume. This scenario assumes significance as it provides
very low cost per bit transferred compared to the wide-area cellular network infrastructure.
Another major advantage of this application is the resilience to failure of a certain number of
nodes. Addressing, configuration, positioning of relaying nodes, redundancy of nodes, and power
sources are the major issues in deployment. Billing, provisioning of QoS, security, and handling
mobility are major issues that the service providers need to address.
– Home network deployment: The deployment of a home area network needs to consider the
limited range of the devices that are to be connected by the network. Given the short
transmission ranges of a few meters, it is essential to avoid network partitions. Positioning of
relay nodes at certain key locations of a home area network can solve this. Also, network
topology should be decided so that every node is connected through multiple neighbors for
availability.
• Required longevity of network: The deployment of ad hoc wireless networks should also
consider the required longevity of the network. If the network is required for a short while (e.g.,
the connectivity among a set of researchers at a conference and the connectivity required for
coordination of a crowd control team), battery-powered mobile nodes can be used. When the
connectivity is required for a longer duration of time, fixed radio relaying equipment with
regenerative power sources can be deployed. A wireless mesh network with roof-top antennas
deployed at a residential zone requires weather-proof packages so that the internal circuitry
remains unaffected by the environmental conditions. In such an environment, the mesh
connectivity is planned in such a way that the harsh atmospheric factors do not create network
partitions.
• Area of coverage: In most cases, the area of coverage of an ad hoc wireless network is
determined by the nature of application for which the network is set up. For example, the home
area network is limited to the surroundings of a home. The mobile nodes' capabilities such as the
transmission range and associated hardware, software, and power source should match the area
of coverage required. In some cases where some nodes can be fixed and the network topology is
partially or fully fixed, the coverage can be enhanced by means of directional antennas.
• Service availability: The availability of network service is defined as the ability of an ad hoc
wireless network to provide service even with the failure of certain nodes. Availability assumes
significance both in a fully mobile ad hoc wireless network used for tactical communication and
in partially fixed ad hoc wireless networks used in commercial communication infrastructure
such as wireless mesh networks. In the case of wireless mesh networks, the fixed nodes need to
be placed in such a way that the failure of multiple nodes does not lead to lack of service in that
area. In such cases, redundant inactive radio relaying devices can be placed in such a way that on
the event of failure of an active relaying node, the redundant relaying device can take over its
responsibilities.
• Operational integration with other infrastructure: Operational integration of ad hoc
wireless networks with other infrastructures can be considered for improving the performance or
gathering additional information, or for providing better QoS. In the military environment,
integration of ad hoc wireless networks with satellite networks or unmanned aerial vehicles
(UAVs) improves the capability of the ad hoc wireless networks. Several routing protocols
assume the availability of the global positioning system (GPS), which is a satellite-based
infrastructure by which the geographical location information can be obtained as a resource for
network synchronization and geographical positioning. In the commercial world, the wireless
mesh networks that service a given urban region can interoperate with the wide-area cellular
infrastructure in order to provide better QoS and smooth handoffs across the networks. Handoff
to a different network can be done in order to avoid call drops when a mobile node with an active
call moves into a region where service is not provided by the current network.
• Choice of protocols: The choice of protocols at different layers of the protocol stack is to be
done taking into consideration the deployment scenario. A TDMA-based and insecure MAC
protocol may not be best suited, compared to a CDMA-based MAC protocol, for a military
application. The MAC protocol should ensure provisioning of security at the link level. At the
network layer, the routing protocol has to be selected with care. A routing protocol that uses
geographical information (GPS information) may not work well in situations where such
information is not available. For example, the search-and-rescue operation teams that work in
extreme terrains or underground or inside a building may not be able to use such a routing
protocol. An ad hoc wireless network with nodes that cannot have their power sources
replenished should use a routing protocol that does not employ periodic beacons for routing.
The periodic beacons, or routing updates, drain the battery with time. In situations of high
mobility, for example, an ad hoc wireless network formed by devices connected to military
vehicles, power consumption may not be very important and hence one can employ beacon-based
routing protocols for them. The updated information about connectivity leads to improved
performance. In the case of deployment of wireless mesh networks, the protocols should make
use of the fixed nodes to avoid unstable paths due to the mobility of the relaying nodes. At the
transport layer, the connection-oriented or connectionless protocol should be adapted to work in
the environment in which the ad hoc wireless network is deployed. In a high-mobility
environment, path breaks, network partitions, and remerging of partitions are to be considered,
and appropriate actions should be taken at the higher layers. This can be extended to
connectionless transport protocols to avoid congestion. Also, packet loss arising out of
congestion, channel error, link break, and network partition is to be handled differently in
different applications. The timer values at different layers of the protocol stack should be adapted
to the deployment scenarios.
The major issues to be considered for a successful ad hoc wireless Internet are the following:
• Gateways: Gateway nodes in the ad hoc wireless Internet are the entry points to the wired
Internet. The major part of the service provisioning lies with the gateway nodes. Generally
owned and operated by a service provider, gateways perform the following tasks: keeping track
of the end users, bandwidth management, load balancing, traffic shaping, packet filtering,
bandwidth fairness, and address, service, and location discovery.
• Address mobility: Similar to Mobile IP, the ad hoc wireless Internet also faces the
challenge of address mobility. This problem is worse here as the nodes operate over multiple
wireless hops. Solutions such as Mobile IP can provide temporary alternatives for this.
• Routing: Routing is a major problem in the ad hoc wireless Internet, due to the dynamic
topological changes, the presence of gateways, multi-hop relaying, and the hybrid character of
the network. A possible solution is the use of a separate routing protocol for the wireless part of
the ad hoc wireless Internet; routing protocols that exploit the presence of the gateway nodes are
more suitable.
• Transport layer protocol: Even though several solutions for transport layer protocols exist for
ad hoc wireless networks, unlike other layers, the choice lies in favor of TCP's extensions
proposed for ad hoc wireless networks. Split approaches that use traditional wired TCP for the
wired part and a specialized transport layer protocol for the ad hoc wireless network part can also
be considered where the gateways act as the intermediate nodes at which the connections are
split. Several factors are to be considered here, the major one being the state maintenance
overhead at the gateway nodes.
• Load balancing: It is likely that the ad hoc wireless Internet gateways experience heavy
traffic. Hence the gateways can be saturated much earlier than other nodes in the network. Load
balancing techniques are essential to distribute the load so as to avoid the situation where the
gateway nodes become bottleneck nodes. Gateway selection strategies and load balancing
schemes can be used for this purpose.
• Provisioning of security: The inherent broadcast nature of the wireless medium attracts not
just the mobility seekers but also potential hackers who love to snoop on important information
sent unprotected over the air. Hence security is a prime concern in the ad hoc wireless Internet.
Since the end users can utilize the ad hoc wireless Internet infrastructure to make e-commerce
transactions, it is important to include security mechanisms in the ad hoc wireless Internet.
• QoS support: With the widespread use of voice over IP (VoIP) and growing multimedia
applications over the Internet, provisioning of QoS support in the ad hoc wireless Internet
becomes a very important issue. As discussed in Chapter 10, this is a challenging problem in the
wired part as well as in the wireless part.
• Service, address, and location discovery: Service discovery in any network refers to the
activity of discovering or identifying the party which provides a particular service or resource. In
wired networks, service location protocols exist to do the same, and similar systems need to be
extended to operate in the ad hoc wireless Internet as well. Address discovery refers to the
services such as those provided by address resolution protocol (ARP) or domain name service
(DNS) operating within the wireless domain. Location discovery refers to different activities
such as detecting the location of a particular mobile node in the network or detecting the
geographical location of nodes. Location discovery services can provide enhanced services such
as routing of packets, location-based services, and selective region-wide broadcasts. Figure 2.8
shows a wireless mesh network that connects several houses to the Internet through a gateway
node. Such networks can provide highly reliable broadband wireless networks for the urban as
well as the rural population in a cost-effective manner with fast deployment and reconfiguration.
This wireless mesh network is a special case of the ad hoc wireless Internet where mobility of
nodes is not a major concern as most relay stations and end users use fixed transceivers. Figure
2.8 shows that house A is connected to the Internet over multiple paths (path 1 and path 2).
Figure 2.8. An illustration of the ad hoc wireless Internet implemented by a wireless mesh
network.
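Returning to the load-balancing issue noted earlier for gateway nodes, one plausible (purely illustrative) selection strategy is to route each new flow through the least-loaded reachable gateway, so that no single gateway saturates ahead of the others. The gateway names and load values below are invented.

```python
def select_gateway(loads, reachable):
    """loads: dict mapping gateway id -> current load; reachable: the
    gateways this node can reach. Returns the least-loaded reachable
    gateway, or None if no gateway is reachable."""
    candidates = [g for g in reachable if g in loads]
    return min(candidates, key=lambda g: loads[g]) if candidates else None
```

Real deployments would combine such a rule with hop count, link quality, and the other gateway tasks listed above (traffic shaping, packet filtering, and so on).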
MAC PROTOCOLS FOR AD HOC WIRELESS
NETWORKS
2.4.1 Bandwidth Efficiency
As mentioned earlier, since the radio spectrum is limited, the
bandwidth available for communication is also very limited. The MAC protocol must be
designed in such a way that the scarce bandwidth is utilized in an efficient manner. The control
overhead involved must be kept as minimal as possible. Bandwidth efficiency can be defined as
the ratio of the bandwidth used for actual data transmission to the total available bandwidth. The
MAC protocol must try to maximize this bandwidth efficiency.
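The definition above translates directly into a one-line computation; the sample figures are illustrative. A channel that spends 2 Mbps of a 10 Mbps capacity on control overhead achieves an efficiency of 0.8.

```python
def bandwidth_efficiency(data_bw, total_bw):
    """Ratio of the bandwidth used for actual data transmission
    to the total available bandwidth."""
    return data_bw / total_bw
```

A MAC protocol with heavy control traffic (many reservation or synchronization packets) pushes this ratio down even when the raw channel rate is high.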
2.4.2 Quality of Service Support
Due to the inherent nature of the ad hoc wireless
network, where nodes are usually mobile most of the time, providing quality of service (QoS)
support to data sessions in such networks is very difficult. Bandwidth reservation made at one
point of time may become invalid once the node moves out of the region where the reservation
was made. QoS support is essential for supporting time-critical traffic sessions such as in
military communications. The MAC protocol for ad hoc wireless networks that are to be used in
such real-time applications must have some kind of a resource reservation mechanism that takes
into consideration the nature of the wireless channel and the mobility of nodes.
2.4.3 Synchronization
The MAC protocol must take into consideration the synchronization
between nodes in the network. Synchronization is very important for bandwidth (time slot)
reservations by nodes. Exchange of control packets may be required for achieving time
synchronization among nodes. The control packets must not consume too much of network
bandwidth.
2.4.4 Hidden and Exposed Terminal Problems
The hidden and exposed terminal
problems are unique to wireless networks. The hidden terminal problem refers to the collision of
packets at a receiving node due to the simultaneous transmission of those nodes that are not
within the direct transmission range of the sender, but are within the transmission range of the
receiver. Collision occurs when both nodes transmit packets at the same time without knowing
about the transmission of each other. For example, consider Figure 2.9. Here, if both node S1
and node S2 transmit to node R1 at the same time, their packets collide at node R1. This is
because both nodes S1 and S2 are hidden from each other as they are not within the direct
transmission range of each other and hence do not know about the presence of each other.
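The Figure 2.9 scenario can be modeled in a few lines; the link set below is assumed from the description (S1 and S2 each reach R1 but not each other). Because neither sender can carrier-sense the other, simultaneous transmissions reach R1 together and collide.

```python
# Links assumed from the text: S1-R1 and S2-R1 exist, S1-S2 does not.
IN_RANGE = {("S1", "R1"), ("S2", "R1")}

def hears(a, b):
    # Links are bidirectional.
    return (a, b) in IN_RANGE or (b, a) in IN_RANGE

def collision_at(receiver, senders):
    # A collision occurs when two or more simultaneous transmissions
    # reach the same receiver.
    return sum(1 for s in senders if hears(s, receiver)) >= 2
```

Here S1 and S2 are hidden from each other (`hears("S1", "S2")` is false), yet both transmissions arrive at R1, so `collision_at("R1", ["S1", "S2"])` holds.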
2.4.5 Error-Prone Shared Broadcast Channel
Another important factor in the design
of a MAC protocol is the broadcast nature of the radio channel, that is, transmissions made by a
node are received by all nodes within its direct transmission range. When a node is receiving
data, no other node in its neighborhood, apart from the sender, should transmit. A node should
get access to the shared medium only when its transmissions do not affect any ongoing session.
Since multiple nodes may contend for the channel simultaneously, the possibility of packet
collisions is quite high in wireless networks. A MAC protocol should grant channel access to
nodes in such a manner that collisions are minimized. Also, the protocol should ensure that all
nodes are treated fairly with respect to bandwidth allocation.
2.4.7 Mobility of Nodes
This is a very important factor affecting the performance
(throughput) of the protocol. Nodes in an ad hoc wireless network are mobile most of the time.
The bandwidth reservations made or the control information exchanged may end up being of no
use if the node mobility is very high. The MAC protocol obviously has no role to play in
influencing the mobility of the nodes. The protocol design must take this mobility factor into
consideration so that the performance of the system is not significantly affected due to node
mobility.
The following are some of the design goals of a MAC protocol for ad hoc wireless networks:
• The access delay, which refers to the average delay experienced by any packet to get
transmitted, must be kept low.
• The protocol should ensure fair allocation (either equal allocation or weighted allocation) of
bandwidth to nodes.
• The protocol should minimize the effects of hidden and exposed terminal problems.
• It should have power control mechanisms in order to efficiently manage energy consumption of
the nodes.
• The protocol should have mechanisms for adaptive data rate control (adaptive rate control
refers to the ability to control the rate of outgoing traffic from a node after taking into
consideration such factors as load in the network and the status of neighbor nodes).
• It should try to use directional antennas which can provide advantages such as reduced
interference, increased spectrum reuse, and reduced power consumption.
• Since synchronization among nodes is very important for bandwidth reservations, the protocol
should provide time synchronization among nodes.
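The adaptive rate control mentioned in the list above might be sketched as follows; the load thresholds and rate bounds are invented purely for illustration. The node throttles its outgoing rate when the observed neighborhood load is high and probes upward when the channel is lightly loaded.

```python
def adapt_rate(rate, load, min_rate=1, max_rate=54):
    """Adjust a node's outgoing rate (in Mbps) from the observed
    neighborhood load, a fraction in [0, 1]."""
    if load > 0.8:                       # congested: back off sharply
        return max(rate // 2, min_rate)
    if load < 0.3:                       # lightly loaded: probe upward
        return min(rate + 1, max_rate)
    return rate                          # otherwise hold steady
```

The asymmetry (halving versus incrementing) mirrors the common additive-increase, multiplicative-decrease pattern used for congestion control.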
MAC protocols for ad hoc wireless networks can be classified into three major types:
• Contention-based protocols
• Contention-based protocols with reservation mechanisms
• Contention-based protocols with scheduling mechanisms
Apart from these three major types, there exist other MAC protocols that cannot be classified
clearly under any one of the above three types of protocols.
• Synchronous protocols: Synchronous protocols require time synchronization among all nodes
in the network, so that reservations made by a node are known to other nodes in its
neighborhood. Global time synchronization is generally difficult to achieve.
• Asynchronous protocols: They do not require any global synchronization among nodes in the
network. These protocols usually use relative time information for effecting reservations.
2.6.4 Other Protocols
There are several other MAC protocols that do not strictly fall under
the above categories.
The binary exponential back-off mechanism used in MACA at times starves flows. For example,
consider Figure 2.6. Here both nodes S1 and S2 keep generating a high volume of traffic. The
node that first captures the channel (say, node S1) starts transmitting packets. The packets
transmitted by the other node S2 collide, and that node keeps incrementing its back-off
window according to the BEB algorithm. As a result, the probability of node S2 acquiring the
channel keeps decreasing, and over a period of time it gets completely blocked. To overcome
this problem, the back-off algorithm has been modified in MACAW. The packet header now has
an additional field carrying the current back-off counter value of the transmitting node. A node
receiving the packet copies this value into its own back-off counter. This mechanism allocates
bandwidth in a fair manner. Another problem with BEB algorithm is that it adjusts the back-off
counter value very rapidly, both when a node successfully transmits a packet and when a
collision is detected by the node. The back-off counter is reset to the minimum value after every
successful transmission. In the modified back-off process, this would require a period of
contention to be repeated after each successful transmission in order to build up the back-off
timer values. To prevent such large variations in the back-off values, a multiplicative increase
and linear decrease (MILD) back-off mechanism is used in MACAW. Here, upon a collision, the
back-off is increased by a multiplicative factor (1.5), and upon a successful transmission, it is
decremented by one. This eliminates contention and hence long contention periods after every
successful transmission, at the same time providing a reasonably quick escalation in the back-off
values when the contention is high.
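The difference between the two back-off rules can be sketched as follows. The multiplicative
factor 1.5 and the decrement of one are from the text; the minimum and maximum bounds are
illustrative assumptions.

```python
# BEB (MACA) vs. MILD (MACAW) back-off updates.
# BO_MIN and BO_MAX are illustrative bounds, not protocol constants.
BO_MIN, BO_MAX = 2, 64

def beb_collision(bo):
    return min(bo * 2, BO_MAX)      # BEB: double on collision

def beb_success(bo):
    return BO_MIN                   # BEB: reset to minimum on success

def mild_collision(bo):
    return min(int(bo * 1.5), BO_MAX)  # MILD: multiplicative increase (x1.5)

def mild_success(bo):
    return max(bo - 1, BO_MIN)         # MILD: linear decrease by one
```

After a success, BEB throws away all congestion history at once, while MILD releases it one
step at a time, which is exactly why MACAW avoids a fresh contention build-up after every
successful transmission.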
The MACAW protocol uses one more control packet called the request-for-request-to-send
(RRTS) packet. The following example shows how this RRTS packet proves to be useful.
Consider Figure 2.8. Here assume transmission is going on between nodes S1 and R1. Now node
S2 wants to transmit to node R2. But since R2 is a neighbor of R1, it receives CTS packets from
node R1, and therefore it defers its own transmissions. Node S2 has no way to learn about the
contention periods during which it can contend for the channel, and so it keeps on trying,
incrementing its back-off counter after each failed attempt. Hence the main reason for this
problem is the lack of synchronization information at source S2. MACAW overcomes this
problem by using the RRTS packet. In the same example shown in Figure 2.8, receiver node R2
contends for the channel on behalf of source S2. If node R2 had received an RTS previously for
which it was not able to respond immediately because of the on-going transmission between
nodes S1 and R1, then node R2 waits for the next contention period and transmits the RRTS
packet. Neighbor nodes that hear the RRTS packet (including node R1) are made to wait for two
successive slots (for the RTS-CTS exchange to take place). The source node S2, on receiving the
RRTS from node R2, transmits the regular RTS packet to node R2, and the normal packet
exchange (RTS-CTS-Data-ACK) continues from here. Figure 2.9 shows the operation of the
MACAW protocol. In the figure, S is the source node and R denotes the receiver node. N1 and
N2 are neighbor nodes. When RTS transmitted by node S is overheard by node N1, it refrains
from transmitting until node S receives the CTS. Similarly, when the CTS transmitted by node R
is heard by neighbor node N2, it defers its transmissions until the data packet is received by
receiver R. On receiving this CTS packet, node S immediately transmits the data sending (DS) message
carrying the expected duration of the data packet transmission. On hearing this packet, node N1
backs off until the data packet is transmitted. Finally, after receiving the data packet, node R
acknowledges the reception by sending node S an ACK packet.
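The MACAW exchange and the deferral rules just described can be summarized in a small
table-driven sketch. Node names follow the example in the text (S sends to R; N1 hears only S,
N2 hears only R); the mapping itself is only an illustration of the rules, not part of the protocol.

```python
# The MACAW five-way exchange and which neighbor defers on overhearing
# each packet, per the description of Figure 2.9.
EXCHANGE = ["RTS", "CTS", "DS", "DATA", "ACK"]

DEFER_RULE = {
    "RTS": ("N1", "until S receives the CTS"),
    "CTS": ("N2", "until R receives the DATA packet"),
    "DS":  ("N1", "until the DATA packet is transmitted"),
}

for pkt in EXCHANGE:
    if pkt in DEFER_RULE:
        node, period = DEFER_RULE[pkt]
        print(f"{pkt}: neighbor {node} defers {period}")
    else:
        print(f"{pkt}: no new deferral")
```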
Multiple access collision avoidance (MACA) , which was discussed earlier in this chapter,
belongs to the category of FAMA protocols. In MACA, a ready node transmits an RTS packet. A
neighbor node receiving the RTS defers its transmissions for the period specified in the RTS. On
receiving the RTS, the receiver node responds by sending back a CTS packet, and waits for a
long enough period of time in order to receive a data packet. Neighbor nodes of the receiver
which hear this CTS packet defer their transmissions for the time duration of the impending data
transfer. In MACA, nodes do not sense the channel. A node defers its transmissions only if it
receives an RTS or CTS packet. In MACA, data packets are prone to collisions with RTS
packets.
According to the FAMA principle, in order for data transmissions to be collision-free, the
duration of an RTS must be at least twice the maximum channel propagation delay.
Transmission of bursts of packets is not possible in MACA. In FAMA-NTR (discussed below)
the MACA protocol is modified to permit transmission of packet bursts by enforcing waiting
periods on nodes, which are proportional to the channel propagation time.
This variant of FAMA, called FAMA – non-persistent transmit request (FAMA-NTR), combines
non-persistent carrier-sensing along with the RTS-CTS control packet exchange mechanism.
Before sending a packet, the sender node senses the channel. If the channel is found to be busy,
then the node backs off for a random time period and retries later. If the channel is found to be
free, it transmits the RTS packet. After transmitting the RTS, the sender listens to the channel for
one round-trip time in addition to the time required by the receiver node to transmit a CTS. If it
does not receive the CTS within this time period or if the CTS received is found to be corrupted,
then the node takes a random back-off and retries later. Once the sender node receives the CTS
packet without any error, it can start transmitting its data packet burst. The burst is limited to a
maximum number of data packets, after which the node releases the channel, and contends with
other nodes to again acquire the channel. In order to allow the sender node to send a burst of
packets once it acquires the floor, the receiver node is made to wait for a time duration of τ
seconds after processing each data packet received. Here, τ denotes the maximum channel
propagation time. A waiting period of 2τ seconds is enforced on a transmitting node after
transmitting any control packet. This is done to allow the RTS-CTS exchange to take place
without any error. A node transmitting an RTS is required to wait for 2τ seconds after
transmitting the RTS in order to enable the receiver node to receive the RTS and transmit the
corresponding CTS packet. After sending the final data packet, a sender node is made to wait for
τ seconds in order to allow the destination to receive the data packet and to account for the
enforced waiting time at the destination node.
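The FAMA-NTR waiting periods above can be written directly in terms of the maximum channel
propagation time τ. The numeric values used in the example call are illustrative assumptions.

```python
# FAMA-NTR timing rules, with tau = maximum channel propagation time (s).

def cts_timeout(tau, cts_time):
    # After an RTS, the sender listens for one round-trip time (2*tau)
    # plus the time the receiver needs to transmit the CTS.
    return 2 * tau + cts_time

def post_control_wait(tau):
    # Wait enforced after transmitting any control packet, so the
    # RTS-CTS exchange can complete without error.
    return 2 * tau

def inter_packet_wait(tau):
    # Receiver waits tau after processing each data packet of a burst.
    return tau

tau = 1e-4  # assumed 100 microseconds, for illustration only
print(cts_timeout(tau, cts_time=5e-4))
```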
The busy tone multiple access (BTMA) protocol is one of the earliest protocols proposed for
overcoming the hidden terminal problem faced in wireless environments. The transmission
channel is split into two: a data channel and a control channel. The data channel is used for data
packet transmissions, while the control channel is used to transmit the busy tone signal. When a
node is ready for transmission, it senses the channel to check whether the busy tone is active. If
not, it turns on the busy tone signal and starts data transmission; otherwise, it reschedules the
packet for transmission after some random rescheduling delay. Any other node which senses the
carrier on the incoming data channel also transmits the busy tone signal on the control channel.
Thus, when a node is transmitting, no other node in the two-hop neighborhood of the
transmitting node is permitted to simultaneously transmit. Though the probability of collisions is
very low in BTMA, the bandwidth utilization is very poor. Figure 2.10 shows the worst-case
scenario where the node density is very high; the dotted circle shows the region in which nodes
are blocked from simultaneously transmitting when node N1 is transmitting packets.
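The poor utilization of BTMA can be quantified with a back-of-the-envelope estimate. Assuming
an idealized circular radio range R for every node (an assumption for illustration, not stated in
the text), the busy tone silences all nodes within two hops, i.e., within radius 2R, so the blocked
area is roughly four times the transmitter's own coverage area.

```python
import math

def coverage_area(r):
    # Area covered by one transmitter with circular range r.
    return math.pi * r * r

def btma_blocked_area(r):
    # Two-hop silencing: the blocked region has radius 2r,
    # hence four times the single-transmitter coverage area.
    return coverage_area(2 * r)
```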
The dual busy tone multiple access protocol (DBTMA) is an extension of the BTMA scheme.
Here again, the transmission channel is divided into two: the data channel and the control
channel. As in BTMA, the data channel is used for data packet transmissions. The control
channel is used for control packet transmissions (RTS and CTS packets) and also for transmitting
the busy tones. DBTMA uses two busy tones on the control channel, BTt and BTr . The BTt tone is
used by the node to indicate that it is transmitting on the data channel. The BTr tone is turned on
by a node when it is receiving data on the data channel. The two busy tone signals are two sine
waves at different well-separated frequencies. When a node is ready to transmit a data packet, it
first senses the channel to determine whether the BTr signal is active. An active BTr signal
indicates that a node in the neighborhood of the ready node is currently receiving packets. If the
ready node finds that there is no BTr signal, it transmits the RTS packet on the control channel.
On receiving the RTS packet, the node to which the RTS was destined checks whether the BTt
tone is active in its neighborhood. An active BTt implies that some other node in its neighborhood
is transmitting packets and so it cannot receive packets for the moment. If the node finds no BTt
signal, it responds by sending a CTS packet and then turns on the BTr signal (which informs other
nodes in its neighborhood that it is receiving). The sender node, on receiving this CTS packet,
turns on the BTt signal (to inform nodes in its neighborhood that it is transmitting) and starts
transmitting data packets. After completing transmission, the sender node turns off the BTt signal.
The receiver node, after receiving all data packets, turns off the BTr signal. The above process is
depicted in Figure 2.11.
When compared to other RTS/CTS-based medium access control schemes (such as MACA and
MACAW ), DBTMA exhibits better network utilization. This is because the other schemes block
both the forward and reverse transmissions on the data channel when they reserve the channel
through their RTS or CTS packets. But in DBTMA, when a node is transmitting or receiving,
only the reverse (receive) or forward (transmit) channels, respectively, are blocked. Hence the
bandwidth utilization of DBTMA is nearly twice that of other RTS/CTS-based schemes.
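The DBTMA sender/receiver checks described above reduce to two predicates over the set of
busy tones currently heard on the control channel. This is a minimal sketch of that logic, with
the channel state modeled simply as a Python set.

```python
# DBTMA decision logic: "BTt" = a neighbor is transmitting,
# "BTr" = a neighbor is receiving.

def sender_may_send_rts(tones_heard):
    # A ready node transmits the RTS only if no neighbor is receiving.
    return "BTr" not in tones_heard

def receiver_may_send_cts(tones_heard):
    # The destination responds with a CTS only if no neighbor is transmitting.
    return "BTt" not in tones_heard
```

Note how the two tones decouple the directions: a sender is blocked only by nearby receivers,
and a receiver only by nearby transmitters, which is the source of DBTMA's utilization gain
over single-reservation RTS/CTS schemes.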
Receiver-Initiated Busy Tone Multiple Access Protocol
In the receiver-initiated busy tone multiple access protocol (RI-BTMA), similar to BTMA, the
available bandwidth is divided into two
channels: a data channel for transmitting data packets and a control channel. The control channel
is used by a node to transmit the busy tone signal. A node can transmit on the data channel only
if it finds the busy tone to be absent on the control channel. The data packet is divided into two
portions: a preamble and the actual data packet. The preamble carries the identification of the
intended destination node. Both the data channel and the control channel are slotted, with each
slot equal to the length of the preamble. Data transmission consists of two steps. First, the
preamble needs to be transmitted by the sender. Once the receiver node acknowledges the
reception of this preamble by transmitting the busy tone signal on the control channel, the actual
data packet is transmitted. A sender node that needs to transmit a data packet first waits for a free
slot, that is, a slot in which the busy tone signal is absent on the control channel. Once it finds
such a slot, it transmits the preamble packet on the data channel. If the destination node receives
this preamble packet correctly without any error, it transmits the busy tone on the control
channel. It continues transmitting the busy tone signal as long as it is receiving data from the
sender. If preamble transmission fails, the receiver does not acknowledge with the busy tone, and
the sender node waits for the next free slot and tries again. The operation of the RI-BTMA
protocol is shown in Figure 2.12. The busy tone serves two purposes. First, it acknowledges the
sender about the successful reception of the preamble. Second, it informs the nearby hidden
nodes about the impending transmission so that they do not transmit at the same time.
There are two types of RI-BTMA protocols: the basic protocol and the controlled protocol. The
basic packet transmission mechanism is the same in both protocols. In the basic protocol, nodes
do not have backlog buffers to store data packets. Hence packets that suffer collisions cannot be
retransmitted. Also, when the network load increases, packets cannot be queued at the nodes.
This protocol would work only when the network load is not high; when network load starts
increasing, the protocol becomes unstable. The controlled protocol overcomes this problem. This
protocol is the same as the basic protocol, the only difference being the availability of backlog
buffers at nodes. Therefore, packets that suffer collisions, and those that are generated during
busy slots, can be queued at nodes. A node is said to be in the backlogged mode if its backlog
buffer is non-empty. When a node in the backlogged mode receives a packet from its higher
layers, the packet is put into the buffer and transmitted later. Suppose the packet arrives at a node
when it is not in the backlogged mode, then if the current slot is free, the preamble for the packet
is transmitted with probability p in the current slot itself (not transmitted in the same slot with
probability (1 - p)). If the packet was received during a busy slot, the packet is just put into the
backlog buffer, where it waits until the next free slot. A backlogged node transmits a backlogged
packet in the next idle slot with a probability q. All other packets in the backlog buffer just keep
waiting until this transmission succeeds. This protocol can work for multi-hop radio networks as
well as for single-hop fully connected networks.
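The transmission rule of the controlled RI-BTMA protocol can be sketched as a single decision
function: in a free slot, a new packet's preamble is sent with probability p and a backlogged
packet with probability q. The values of p and q below are illustrative assumptions, not protocol
constants.

```python
import random

def should_transmit(slot_free, backlogged, p=0.6, q=0.3, rng=random.random):
    """Return True if the node transmits a preamble in this slot."""
    if not slot_free:
        return False  # busy tone heard: the packet is queued instead
    # Backlogged packets use probability q; fresh packets use p.
    return rng() < (q if backlogged else p)
```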
2.7.4 MACA-By Invitation
MACA-by invitation (MACA-BI) is a receiver-initiated MAC protocol. It reduces the number of
control packets used in the MACA protocol. MACA, which is a sender-initiated protocol, uses
the three-way handshake mechanism (which was shown in Figure 2.11), where first the RTS and
CTS control packets are exchanged, followed by the actual DATA packet transmission. MACA-
BI eliminates the need for the RTS packet. In MACA-BI the receiver node initiates data
transmission by transmitting a ready to receive (RTR) control packet to the sender (Figure 2.13).
If it is ready to transmit, the sender node responds by sending a DATA packet. Thus data
transmission in MACA-BI occurs through a two-way handshake mechanism.
The receiver node may not have an exact knowledge about the traffic arrival rates at its
neighboring sender nodes. It needs to estimate the average arrival rate of packets. For providing
necessary information to the receiver node for this estimation, the DATA packets are modified to
carry control information regarding the backlogged flows at the transmitter node, number of
packets queued, and packet lengths. Once this information is available at the receiver node, the
average rate of the flows can be easily estimated. Suppose the estimation is incorrect or is not
possible (when the first data packet of the session is to be transmitted), the MACA-BI protocol
can be extended by allowing the sender node to declare its backlog through an RTS control
packet, if an RTR packet is not received within a given timeout period. In MACA, the CTS
packet was used to inform the hidden terminals (nodes) about the impending DATA packet
transmission, so that they do not transmit at the same time and disrupt the session. This role is
played in MACA-BI by the RTR packets. An RTR packet carries information about the time
interval during which the DATA packet would be transmitted. When a node hears RTR packets
transmitted by its neighbors, it can obtain information about the duration of DATA packet
transmissions by nodes that may be either its direct one-hop neighbors or its two hop neighbors,
that is, hidden terminals. Since it has information about transmissions by the hidden terminals, it
refrains from transmitting during those periods (Figure 2.13). Hence the hidden terminal problem
is overcome in MACA-BI, and collisions among DATA packets are avoided. However, the hidden
terminal problem still affects control packet transmissions. This can lead to protocol failure, as
in certain cases the RTR packets can collide with DATA packets. One such scenario is depicted
in Figure 2.92. Here, RTR packets transmitted by receiver nodes R1 and R2 collide at node A.
So node A is not aware of the transmissions from nodes S1 and S2. When node A transmits RTR
packets, they collide with DATA packets at receiver nodes R1 and R2.
The efficiency of the MACA-BI scheme is mainly dependent on the ability of the receiver node
to predict accurately the arrival rates of traffic at the sender nodes.
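The text leaves the receiver's estimation method open; it only says the receiver averages the
backlog information carried in DATA packets. One simple way to do this is an exponentially
weighted moving average. Both the estimator and the smoothing factor alpha below are
illustrative assumptions, not part of the MACA-BI specification.

```python
# Hypothetical arrival-rate estimator for a MACA-BI receiver: each DATA
# packet reports the sender's current backlog, and the receiver smooths
# these observations with an exponentially weighted moving average.

def update_rate_estimate(current_estimate, observed_backlog, alpha=0.2):
    return (1 - alpha) * current_estimate + alpha * observed_backlog

est = 0.0
for backlog in [4, 4, 4, 4]:        # four DATA packets, each reporting backlog 4
    est = update_rate_estimate(est, backlog)
```

A larger alpha tracks bursty senders faster but makes the RTR schedule noisier; a smaller alpha
smooths more at the cost of slower adaptation.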
The CTS packet carries the MAC addresses of the sender and the receiver node, and the route
identification number (RTid) for that flow. The RTid is used by nodes in order to avoid
misinterpretation of CTS packets and initiation of false CTS-only handshakes. Consider Figure
2.16. Here there are two routes – Route 1: A-B-C-D-E and Route 2: X-C-Y. When node C hears
a CTS packet transmitted by node B, by means of the RTid field on the packet, it understands that
the CTS was transmitted by its upstream node (upstream node refers to the next hop neighbor
node on the path from the current node to the source node of the data session) on Route 1. It
invokes a timer T which is set to expire after a certain period of time, long enough for node B to
receive a packet from node A. A CTS packet is transmitted by node C once the timer expires.
This CTS is overheard by node Y also, but since the RTid carried on the CTS is different from the
RTid corresponding to Route 2, node Y does not respond. In MARCH, the MAC layer has access
to tables that maintain routing information (such as RTid), but the protocol as such does not get
involved in routing.
In order to prioritize nodes transmitting voice traffic (voice nodes) over nodes transmitting
normal data traffic (data nodes), two rules are followed in D-PRMA. According to the first rule,
the voice nodes are allowed to start contending from minislot 1 with probability p = 1; data
nodes can start contending only with probability p < 1. For the remaining (m - 1) minislots, both
the voice nodes and the data nodes are allowed to contend with probability p < 1. This is because
the reservation process for a voice node is triggered only after the arrival of voice traffic at the
node; this avoids unnecessary reservation of slots. According to the second rule, only if the node
winning the minislot contention is a voice node, is it permitted to reserve the same slot in each
subsequent frame until the end of the session. If a data node wins the contention, then it is
allowed to use only one slot, that is, the current slot, and it has to make fresh reservations for
each subsequent slot. Nodes that are located within the radio coverage of the receiver should not
be permitted to transmit simultaneously when the receiver is receiving packets. If permitted,
packets transmitted by them may collide with the packets of the on-going traffic being received
at the receiver. Though a node which is located outside the range of a receiver is able to hear
packets transmitted by the sender, it should still be allowed to transmit simultaneously. The
above requirements, in essence, mean that the protocol must be free of the hidden terminal and
exposed terminal problems. In D-PRMA, when a node wins the contention in minislot 1, other
terminals must be prevented from using any of the remaining (m - 1) minislots in the same slot
for contention (requirement 1). Also, when a slot is reserved in subsequent frames, other nodes
should be prevented from contending for those reserved slots (requirement 2). The RTS-CTS
exchange mechanism taking place in the reservation process helps in trying to satisfy
requirement 1. A node that wins the contention in minislot 1 starts transmitting immediately
from minislot 2. Any other node that wants to transmit will find the channel to be busy from
minislot 2. Since an RTS can be sent only when the channel is idle, other neighboring nodes
would not contend for the channel until the on-going transmission gets completed. A node sends
an RTS in the RTS/BI part of a minislot. Only a node that receives an RTS destined to it is
allowed to use the CTS/BI part of the slot for transmitting the CTS. So the CTS packet does not
suffer any collision due to simultaneous RTS packet transmissions. This improves the probability
for a successful reservation. In order to avoid the hidden terminal problem, all nodes hearing the
CTS sent by the receiver are not allowed to transmit during the remaining period of that same
slot. In order to avoid the exposed terminal problem, a node hearing the RTS but not the CTS
(sender node's neighbor) is still allowed to transmit. But, if the communication is duplex in
nature, where a node may transmit and receive simultaneously, even such exposed nodes (that
hear RTS alone) should not be allowed to transmit. Therefore, D-PRMA makes such a node
defer its transmissions for the remaining time period of the same slot. If an RTS or CTS packet
collides, and a successful reservation cannot be made in the first minislot, then the subsequent (m
- 1) minislots of the same slot are used for contention. For satisfying requirement 2, the
following is done. The receiver of the reserved slot transmits a busy indication (BI) signal
through the RTS/BI part of minislot 1 of the same slot in each of the subsequent frames, without
performing a carrier-sense. The sender also performs a similar function, transmitting the BI
through the CTS/BI part of minislot 1 of the same slot in each subsequent frame. When any node
hears a BI signal, it does not further contend for that slot in the current frame. Because of this,
the reserved slot in each subsequent frame is made free of contention. Also, making the receiver
transmit the BI signal helps in eliminating the hidden terminal problem, since not all neighbors
of the receiver can hear from the sender. Finally, after a node that had made the reservation
completes its data transmission and does not anymore require a reserved slot, it just stops
transmitting the BI signal. D-PRMA is more suited for voice traffic than for data traffic
applications.
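The two D-PRMA contention rules can be captured in a couple of lines: voice nodes contend in
minislot 1 with probability 1, and every other case (data nodes, or any node in minislots 2..m)
contends with probability p < 1. The value p = 0.3 is an illustrative assumption.

```python
# D-PRMA contention rules from the text.

def contention_probability(is_voice, minislot, p=0.3):
    # Rule 1: only a voice node gets probability 1, and only in minislot 1.
    if is_voice and minislot == 1:
        return 1.0
    return p  # all other cases contend with p < 1

def keeps_reservation(is_voice):
    # Rule 2: only a voice node that wins the contention may reserve the
    # same slot in every subsequent frame until its session ends.
    return is_voice
```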
• Usage of negative acknowledgments for reservation requests, and control packet transmissions
at the beginning of each slot, for distributing slot reservation information to senders of broadcast
or multicast sessions. Time is divided into equal-sized frames, and each frame consists of S slots
(Figure 2.18). Each slot is further divided into five minislots. The first four minislots are used for
transmitting control packets and are called control minislots (CMS1, CMS2, CMS3, and CMS4).
The fifth and last minislot, called data minislot (DMS), is meant for data transmission. The data
minislot is much longer than the control minislots as the control packets are much smaller in size
compared to data packets.
Each node that receives data during the DMS of the current slot transmits a slot reservation (SR)
packet during the CMS1 of the slot. This serves to inform other neighboring potential sender
nodes about the currently active reservation. The SR packet is either received without error at the
neighbor nodes or causes noise at those nodes, in both cases preventing such neighbor nodes
from attempting to reserve the current slot. Every node that transmits data during the DMS of the
current slot transmits a request-to-send (RTS) packet during CMS2 of the slot. This RTS packet,
when received by other neighbor nodes or when it collides with other RTS packets at the
neighbor nodes, causes the neighbor nodes to understand that the source node is scheduled to
transmit during the DMS of the current slot. Hence they defer their transmissions during the
current slot. The control minislots CMS3 and CMS4 are used as follows. The sender of an
intended reservation, if it senses the channel to be idle during CMS1, transmits an RTS packet
during CMS2. The receiver node of a unicast session transmits a clear-to-send (CTS) packet
during CMS3. On receiving this packet, the source node understands that the reservation was
successful and transmits data during the DMS of that slot, and during the same slot in subsequent
frames, until the unicast flow gets terminated. Once the reservation has been made successfully
in a slot, from the next slot onward, both the sender and receiver do not transmit anything during
CMS3, and during CMS4 the sender node alone transmits a not-to-send (NTS) packet. The
purpose of the NTS packet is explained below. If a node receives an RTS packet for broadcast or
multicast during CMS2, or if it finds the channel to be free during CMS2, it remains idle and
does not transmit anything during CMS3 and CMS4. Otherwise, it sends a not-to-send (NTS)
packet during CMS4. The NTS packet serves as a negative acknowledgment; a potential
multicast or broadcast source node that receives the NTS packet during CMS4, or that detects
noise during CMS4, understands that its reservation request had failed, and it does not transmit
during the DMS of the current slot. If it finds the channel to be free during CMS4, which implies
that its reservation request was successful, it starts transmitting the multicast or broadcast packets
during the DMS of the slot. The length of the frame is very important in CATA. For any node
(say, node A) to broadcast successfully, there must be no other node (say, node B) in its two-hop
neighborhood that transmits simultaneously. If such a node B exists, then if node B is within
node A's one-hop neighborhood, node A and node B cannot hear the packets transmitted by each
other. If node B is within the two-hop neighborhood of node A, then the packets transmitted by
nodes A and B would collide at their common neighbor nodes. Therefore, for any node to
transmit successfully during one slot in every frame, the number of slots in each frame must be
larger than the number of two-hop neighbor nodes of the transmitting node. The worst-case value
of the frame length, that is, the number of slots in the frame, would be min(d² + 1, N), where d is
the maximum degree of a node in the network (the degree of a node refers to its count of one-hop
neighbors), and N is the total number of nodes in the network. CATA works well with
simple single-channel half-duplex radios. It is simple and provides support for collision-free
broadcast and multicast traffic.
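The worst-case CATA frame length given above is a one-line computation:

```python
# Worst-case CATA frame length: min(d^2 + 1, N), where d is the maximum
# node degree and N the total number of nodes in the network.

def cata_frame_length(d, n):
    return min(d * d + 1, n)
```

For a sparse network the d² + 1 term dominates, while in a dense network the frame can never
need more slots than there are nodes.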
2.8.3 Hop Reservation Multiple Access Protocol The hop reservation multiple access
protocol (HRMA) is a multichannel MAC protocol which is based on simple half-duplex, very
slow frequency-hopping spread spectrum (FHSS) radios. It uses a reservation and handshake
mechanism to enable a pair of communicating nodes to reserve a frequency hop, thereby
guaranteeing collision-free data transmission even in the presence of hidden terminals. HRMA
can be viewed as a time slot reservation protocol where each time slot is assigned a separate
frequency channel. Out of the available L frequency channels, HRMA uses one frequency
channel, denoted by f0 , as a dedicated synchronizing channel. The nodes exchange
synchronization information on f0. The remaining L - 1 frequencies are divided into M frequency
pairs (denoted by (fi, f̄i), i = 1, 2, 3, ..., M), thereby restricting the length of the hopping sequence
to M. fi is used for transmitting and receiving hop-reservation (HR) packets, request-to-send
(RTS) packets, clear-to-send (CTS) packets, and data packets. f̄i is used for sending and receiving
acknowledgment (ACK) packets for the data packets received or transmitted on frequency fi. In
HRMA, time is slotted, and each slot is assigned a separate frequency hop, which is one among
the M frequency hops in the hopping sequence. Each time slot is divided into four periods,
namely, synchronizing period, HR period, RTS period, and CTS period, each period meant for
transmitting or receiving the synchronizing packet, HR packet, RTS packet, and CTS packet,
respectively. All idle nodes, that is, nodes that do not currently transmit or receive packets, hop
together. During the synchronizing period of each slot, all idle nodes hop to the synchronizing
frequency f0 and exchange synchronization information. During the HR, RTS, and CTS periods,
they just stay idle, dwelling on the common frequency hop assigned to each slot. In addition to
the synchronization period used for synchronization purposes, an exclusive synchronization slot
is also defined at the beginning of each HRMA frame (Figure 2.19). This slot is of the same size
as that of the other normal slots. All idle nodes dwell on the synchronizing frequency f0 during
the synchronizing slot and exchange synchronization information that may be used to identify
the beginning of a frequency hop in the common hopping sequence, and also the frequency to be
used in the immediately following hop. Thus the HRMA frame, as depicted in Figure 2.19, is
composed of the single synchronizing slot, followed by M consecutive normal slots.
When a new node enters the network, it remains on the synchronizing frequency f0 for a long
enough period of time so as to gather synchronization information such as the hopping pattern
and the timing of the system. If it receives no synchronization information, it assumes that it is
the only node in the network, broadcasts its own synchronization information, and forms a one
node system. Since synchronization information is exchanged during every synchronization slot,
new nodes entering the system can easily join the network. If μ is the length of each slot and μs
is the length of the synchronization period in each slot, then the dwell time on f0 at the beginning
of each frame would be μ + μs. Consider the case where nodes from two different
disconnected network partitions come near each other. Figure 2.20 depicts the worst-case frequency
overlap scenario. In the figure, the maximum number of frequency hops M = 5. It is evident from
the figure that within any time period equal to the duration of a HRMA frame, any two nodes
from the two disconnected partitions always have at least two overlapping time periods of length
μs on the synchronizing frequency f0. Therefore, nodes belonging to disconnected network
components can easily merge into a single network.
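The HRMA timing quantities above can be sketched directly. This assumes, as the text states,
that the synchronizing slot has the same length μ as each of the M normal slots.

```python
# HRMA timing sketch: mu = slot length, mu_s = synchronizing-period length.

def f0_dwell_time(mu, mu_s):
    # Dwell time on f0 at the start of each frame: the synchronizing slot's
    # own length plus the synchronizing period of the first normal slot.
    return mu + mu_s

def frame_length(mu, m):
    # One synchronizing slot followed by M normal slots, all of length mu.
    return (m + 1) * mu
```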
2.8.4 Soft Reservation Multiple Access with Priority Assignment Soft reservation
multiple access protocol with priority assignment (SRMA/PA) was developed with the main
objective of supporting integrated services of real-time and non-realtime applications in ad hoc
wireless networks, while at the same time maximizing the statistical multiplexing gain. Nodes use
a collision-avoidance handshake mechanism and a soft reservation mechanism in order to contend
for and effect reservation of time slots. The soft reservation mechanism allows any urgent node,
transmitting packets generated by a real-time application, to take over the radio resource from
another node of a non-real-time application on an on-demand basis. SRMA/PA is a TDMA-
based protocol in which nodes are allocated different time slots so that the transmissions are
collision-free. The main features of SRMA/PA are a unique frame structure and soft reservation
capability for distributed and dynamic slot scheduling, dynamic and distributed access priority
assignment and update policies, and a time-constrained back-off algorithm. Time is divided into
frames, with each frame consisting of a fixed number (N) of time slots. The frame structure is
shown in Figure 2.21. Each slot is further divided into six different fields, SYNC, soft
reservation (SR), reservation request (RR), reservation confirm (RC), data sending (DS), and
acknowledgment (ACK). The SYNC field is used for synchronization purposes. The SR, RR,
RC, and ACK fields are used for transmitting and receiving the corresponding control packets.
The DS field is used for data transmission. The SR packet serves as a busy tone. It informs the
nodes in the neighborhood of the transmitting node about the reservation of the slot. The SR
packet also carries the access priority value assigned to the node that has reserved the slot. When
an idle node receives a data packet for transmission, the node waits for a free slot and transmits
the RR packet in the RR field of that slot. A node determines whether or not a slot is free through
the SR field of that slot. In case of a voice terminal node, the node tries to take control of the slot
already reserved by a data terminal if it finds its priority level to be higher than that of the data
terminal. This process is called soft reservation. This makes the SRMA/PA different from other
protocols where even if a node has lower access priority compared to other ready nodes, it
proceeds to complete the transmission of the entire data burst once it has reserved the channel.
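The soft reservation rule described above can be sketched as follows. This is a minimal illustration, not part of the protocol specification: the slot representation and the numeric priority values are assumptions.

```python
# Sketch of the SRMA/PA soft reservation decision: a slot advertises, in its
# SR field, the access priority of the node that reserved it; a contender may
# take the slot over only if its own access priority is strictly higher.

def can_take_over(slot_sr_priority, contender_priority):
    """slot_sr_priority: None if the slot is free, otherwise the access
    priority carried in the slot's SR (soft reservation) field.
    contender_priority: access priority of the node wanting the slot."""
    if slot_sr_priority is None:
        return True                                # free slot: send RR directly
    return contender_priority > slot_sr_priority   # soft reservation takeover

# A voice terminal (assumed priority 2) may take over a slot soft-reserved by
# a data terminal (assumed priority 1), but not the other way around.
voice_takes_data_slot = can_take_over(1, 2)
data_takes_voice_slot = can_take_over(2, 1)
```

Here a free slot is always usable, while a reserved slot changes hands only through the priority comparison, which is what distinguishes SRMA/PA from protocols in which a reservation, once made, is held for the entire data burst.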
The five-phase reservation protocol (FPRP) assumes the availability of global time at all nodes. Each node therefore knows when a five-phase cycle would start. The five phases of the reservation process are as follows:
1. Reservation request phase: Nodes that need to transmit packets send reservation request (RR)
packets to their destination nodes.
2. Collision report phase: If a collision is detected by any node during the reservation request phase,
then that node broadcasts a collision report (CR) packet. The corresponding source nodes, upon
receiving the CR packet, take necessary action.
3. Reservation confirmation phase: A source node is said to have won the contention for a slot if it
does not receive any CR messages in the previous phase. In order to confirm the reservation request
made in the reservation request phase, it sends a reservation confirmation (RC) message to the
destination node in this phase.
4. Reservation acknowledgment phase: In this phase, the destination node acknowledges reception of
the RC by sending back a reservation acknowledgment (RA) message to the source. The hidden
nodes that receive this message defer their transmissions during the reserved slot.
5. Packing and elimination (P/E) phase: Two types of packets are transmitted during this phase:
packing packet and elimination packet. The details regarding the use of these packets will be
described later in this section. Each of the above five phases is described below.
Reservation request phase: In this phase, each node that needs to transmit packets sends an RR packet to the intended destination node with a contention probability p, in order to reserve an information slot (IS). Such nodes that send RR packets are called requesting nodes (RNs). Other nodes just keep listening during this phase.
Collision report phase: If any of the listening nodes detects collision of RR packets transmitted in the previous phase, it broadcasts a collision report (CR) packet. By listening for CR packets in this phase, an RN
comes to know about collision of the RR packet it had sent. If no CR is heard by the RN in this
phase, then it assumes that the RR packet did not collide in its neighborhood. It then becomes a
transmitting node (TN). Once it becomes a transmitting node, the node proceeds to the next
phase, the reservation confirmation phase. On the other hand, if it hears a CR packet in this
phase, it waits until the next reservation request phase, and then tries again. Thus, if two RNs are
hidden from each other, their RR packets collide, both receive CR packets, and no reservation is
made, thereby eliminating the hidden terminal problem.
Reservation confirmation phase: An RN that does not receive any CR packet in the previous phase, that is, a TN, sends an RC
packet to the destination node. Each neighbor node that receives this packet understands that the
slot has been reserved, and defers its transmission during the corresponding information slots in
the subsequent information frames until the next reservation frame.
Reservation acknowledgment phase: On receiving the RC packet, the intended receiver node responds by sending an RA packet back
to the TN. This is used to inform the TN that the reservation has been established. In case the TN
is isolated and is not connected to any other node in the network, then it would not receive the
RA packet, and thus becomes aware of the fact that it is isolated. Thus the RA packet prevents such isolated nodes from transmitting further. The reservation acknowledgment phase also serves
another purpose. Other two-hop neighbor nodes that receive this RA packet get blocked from
transmitting. Therefore, they do not disturb the transmission that is to take place in the reserved
slots. When more than two TNs are located nearby, it results in a deadlock condition. Such
situations may occur when there is no common neighbor node present when the RNs transmit
RR packets. Collisions are not reported in the next phase, and so each node claims success and
becomes a TN. Deadlocks are of two types: isolated and non-isolated. An isolated deadlock is a
condition where none of the deadlocked nodes is connected to any non-deadlocked node. In the
non-isolated deadlock situation, at least one deadlocked node is connected to a non-deadlocked neighbor node. The RA phase can resolve isolated deadlocks: none of the nodes transmits RA, and hence the TNs abort their transmissions.
Packing/elimination (P/E) phase: In this phase, a
packing packet (PP) is sent by each node that is located within two hops from a TN, and that had
made a reservation since the previous P/E phase. A node receiving a PP understands that there
has been a recent success in slot reservation three hops away from it, and because of this some of
its neighbors would have been blocked during this slot. The node can take advantage of this and
adjust its contention probability p, so that convergence is faster. In an attempt to resolve a non-
isolated deadlock, each TN is required to transmit an elimination packet (EP) in this phase, with
a probability 0.5. A deadlocked TN, on receiving an EP before transmitting its own EP, gets to
know about the deadlock. It backs off by marking the slot as reserved and does not transmit
further during the slot. Consider Figure 2.24. Here nodes 1, 7, and 9 have packets ready to be
transmitted to nodes 4, 8, and 10, respectively. During the reservation request phase, all three
nodes transmit RR packets. Since no other node in the two-hop neighborhood of node 1 transmits
simultaneously, node 1 does not receive any CR message in the collision report phase. So node 1
transmits an RC message in the next phase, for which node 4 sends back an RA message, and the
reservation is established. Node 7 and node 9 both transmit the RR packet in the reservation
request phase. Here node 9 is within two hops from node 7. So if both nodes 7 and 9 transmit
simultaneously, their RR packets collide at common neighbor node 11. Node 11 sends a CR
packet which is heard by nodes 7 and 9. On receiving the CR packet, nodes 7 and 9 stop
contending for the current slot.
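The first two phases of the reservation cycle in the example above can be sketched as follows. The topology dictionary is an assumption mirroring the Figure 2.24 fragment; the sketch only models RR transmission and CR reporting, not the later phases.

```python
# Sketch of the reservation request and collision report phases: requesting
# nodes send RR packets; any listening node that hears two or more RRs
# broadcasts a CR packet, and requesters hearing a CR withdraw. Requesters
# hearing no CR become transmitting nodes (TNs).

def contend(neighbors, requesters):
    """neighbors: dict node -> set of one-hop neighbors (symmetric links).
    requesters: set of nodes transmitting an RR packet in this slot.
    Returns the set of requesters that become TNs."""
    # Collision report phase: a listening node that heard >= 2 RRs reports.
    reporters = {n for n in neighbors
                 if n not in requesters and len(neighbors[n] & requesters) >= 2}
    # A requester that hears a CR from any neighbor backs off for this cycle.
    return {r for r in requesters if not (neighbors[r] & reporters)}

# Topology fragment: nodes 7 and 9 are hidden from each other but share
# neighbor 11; node 1 contends with no nearby competitor.
topo = {1: {4}, 4: {1}, 7: {8, 11}, 8: {7},
        9: {10, 11}, 10: {9}, 11: {7, 9}}
tns = contend(topo, {1, 7, 9})   # node 11 reports; nodes 7 and 9 withdraw
```

Node 1 wins its slot, while the hidden pair 7 and 9 are both eliminated by node 11's collision report, which is exactly how the protocol removes the hidden terminal problem.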
In MACA/PR, best-effort packet transmissions and real-time packet transmissions can be interleaved at nodes, with higher priority being given to real-time packets. For real-time packets, MACA/PR effectively works as a TDM system, with a superframe time of CYCLE. The best-effort packets are transmitted in the empty slots (which have not been reserved) of the cycle.
When a new node joins the network, it initially remains in the listening mode during which it
receives reservation tables from each of its neighbors and learns about the reservations made in
the network. After this initial period, the node shifts to its normal mode of operation. The QoS
routing protocol used with MACA/PR is the destination sequenced distance vector (DSDV)
routing protocol [16] (described in detail in Chapter 7). Bandwidth constraint has been
introduced in the routing process. Each node periodically broadcasts to its neighbors the
(bandwidth, hop distance) pairs for each preferred path, that is, for each bandwidth value, to each
destination. The number of preferred paths is equal to the maximum number of slots in a cycle.
After this is done, if a node receives a real-time packet with a certain bandwidth requirement that
cannot be satisfied using the current available paths, the packet is dropped and no ACK packet is
sent. The sender node would eventually reroute the packet. Thus, MACA/PR is an efficient
bandwidth reservation protocol that can support real-time traffic sessions. One of the important
advantages of MACA/PR is that it does not require global synchronization among nodes. A
drawback of MACA/PR is that a free slot can be reserved only if it can fit in the entire RTS-
CTS-DATA-ACK exchange. Therefore, there is a possibility of many fragmented free slots not
being used at all, reducing the bandwidth efficiency of the protocol.
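The fragmentation drawback of MACA/PR can be illustrated with a short sketch. The interval representation and the time units are assumptions for illustration only.

```python
# Sketch of the MACA/PR constraint that a free gap in the cycle is usable
# only if the entire RTS-CTS-DATA-ACK exchange fits inside it.

def usable_gaps(reserved, cycle_len, exchange_len):
    """reserved: sorted, non-overlapping (start, end) windows in one cycle.
    Returns the free gaps long enough for one complete exchange."""
    gaps, prev_end = [], 0
    for start, end in reserved + [(cycle_len, cycle_len)]:  # sentinel at end
        if start - prev_end >= exchange_len:
            gaps.append((prev_end, start))
        prev_end = end
    return gaps

# Reservations at (3, 6) and (9, 12) in a 15-unit cycle leave 9 free units,
# but split into three 3-unit gaps: a 4-unit exchange fits in none of them.
fragmented = usable_gaps([(3, 6), (9, 12)], cycle_len=15, exchange_len=4)
# The same total reservation placed contiguously leaves a usable gap.
contiguous = usable_gaps([(3, 9)], cycle_len=15, exchange_len=4)
```

The fragmented case wastes all of its free time even though more than twice the exchange length is free in total, which is the bandwidth-efficiency loss described above.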
The distributed priority scheduling scheme (DPS) is based on the IEEE 802.11 distributed
coordination function. DPS uses the same basic RTS-CTS-DATA-ACK packet exchange
mechanism. The RTS packet transmitted by a ready node carries the priority tag/priority index
for the current DATA packet to be transmitted. The priority tag can be the delay target for the
DATA packet. On receiving the RTS packet, the intended receiver node responds with a CTS
packet. The receiver node copies the priority tag from the received RTS packet and piggybacks it
along with the source node id, on the CTS packet. Neighbor nodes receiving the RTS or CTS
packets (including the hidden nodes) retrieve the piggy-backed priority tag information and make
a corresponding entry for the packet to be transmitted, in their scheduling tables (STs). Each
node maintains an ST holding information about packets, which were originally piggy-backed on
control and data packets. The entries in the ST are ordered according to their priority tag values.
When the source node transmits a DATA packet, its head-of-line packet information (consisting
of the destination and source ids along with the priority tag) is piggy-backed on the DATA
packet (head-of-line packet of a node refers to the packet to be transmitted next by the node).
This information is copied by the receiver onto the ACK packet it sends in response to the
received DATA packet. Neighbor nodes receiving the DATA or ACK packets retrieve the piggy-
backed information and update their STs accordingly. When a node hears an ACK packet, it
removes from its ST any entry made earlier for the corresponding DATA packet. Figure 2.27
illustrates the piggy-backing and table update mechanism. Node 1 needs to transmit a DATA
packet (with priority index value 9) to node 2. It first transmits an RTS packet carrying piggy-backed information about this DATA packet. The initial state of the ST of node 4, which is a neighbor of nodes 1 and 2, is shown in ST (a). Node 4, on hearing this RTS packet, retrieves the
piggybacked priority information and makes a corresponding entry in its ST, as shown in ST (b).
The destination node 2 responds by sending a CTS packet. The actual DATA packet is sent by
the source node once it receives the CTS packet. This DATA packet carries piggy-backed
priority information regarding the head-of-line packet at node 1. On hearing this DATA packet,
neighbor node 4 makes a corresponding entry for the head-of-line packet of node 1, in its ST. ST
(c) shows the new updated status of the ST at node 4. Finally, the receiver node sends an ACK
packet to node 1. When this packet is heard by node 4, it removes the entry made for the
corresponding DATA packet from its ST. The state of the scheduling table at the end of this data
transfer session is depicted in ST (d).
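The scheduling-table maintenance traced through ST (a)-(d) above can be sketched as follows. The entry layout and the convention that a lower priority tag means an earlier deadline are assumptions for illustration.

```python
# Sketch of a DPS scheduling table (ST) kept by an overhearing node.
# Entries are (priority_tag, src, dst) tuples learned from information
# piggy-backed on RTS/CTS/DATA/ACK packets, kept sorted by priority tag.
import bisect

class SchedulingTable:
    def __init__(self):
        self.entries = []                       # sorted by priority tag

    def on_announce(self, tag, src, dst):
        """Heard an RTS/CTS (current packet) or DATA/ACK (head-of-line
        packet) carrying a piggy-backed priority tag: record it."""
        bisect.insort(self.entries, (tag, src, dst))

    def on_ack(self, src, dst):
        """Heard the ACK: the transfer finished, so drop its entry."""
        self.entries = [e for e in self.entries if (e[1], e[2]) != (src, dst)]

    def rank(self, src):
        """1-based rank of src's best packet, or None if absent."""
        for i, (_, s, _) in enumerate(self.entries, 1):
            if s == src:
                return i
        return None
```

A node's own rank in such a table is what drives its back-off choice, as described next.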
In essence, each node's scheduling table gives the rank of the node with respect to other nodes in
its neighborhood. This rank information is used to determine the back-off period to be taken by
the node. The back-off distribution is given by

beta = Uniform[(r - 1) * alpha * CWmin, r * alpha * CWmin - 1] for n = 1, and
beta = Uniform[0, 2^(n-1) * gamma * CWmin - 1] for 1 < n <= nmax,

where CWmin is the minimum size of the contention window, r is the rank in the scheduling table of the node's highest priority packet, n is the current number of transmission attempts made by the node, nmax is the maximum number of retransmissions permitted, alpha is a constant, and gamma is a constant that is used to control the congestion in the second attempt for the highest ranked nodes.
Multi-Hop Coordination
By means of the multi-hop coordination mechanism, the excess delay incurred by a packet at the
upstream nodes is compensated for at the downstream nodes. When a node receives a packet, it
would have already received the priority index of the packet piggy-backed on the previous RTS
packet. In case the node is an intermediate node which has to further forward the packet, the
node calculates the new priority index of the DATA packet in a recursive fashion, based on the
received value of the priority index. If p(k, i, j) is the priority index assigned to the kth packet of flow i with size L(k, i) at its jth hop, and if t(k, i) is the time at which the kth packet of flow i arrives at its first hop (the next-hop node to the source node on the path to the destination), then the new priority index assigned to the received packet at intermediate node j is given as

p(k, i, j) = p(k, i, j - 1) + delta(k, i, j),

where the increment delta(k, i, j) of the priority index is a non-negative function of i, j, L(k, i), and t(k, i). Because of this
mechanism, if a packet suffers due to excess delay at the upstream nodes, the downstream nodes
increase the priority of the packet so that the packet is able to meet its end-to-end delay target.
Similarly, if a packet arrives very early due to lack of contention at the upstream nodes, then the
priority of that packet would be reduced at the downstream nodes. Any suitable scheme can be
used for obtaining the values of the increments. One simple scheme, called uniform delay budget allocation, works as follows. For a flow i with an end-to-end delay target of D, the increment in priority index value for a packet belonging to that flow, at hop j, is given as D/K, where K is the length of the
flow's path. The distributed priority scheduling and multi-hop coordination schemes described
above are fully distributed schemes. They can be utilized for carrying time-sensitive traffic on ad
hoc wireless networks.
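The uniform delay budget allocation can be sketched as follows, interpreting the priority index as a per-hop deadline expressed as a time value (an assumption made for illustration).

```python
# Sketch of uniform delay budget allocation in multi-hop coordination:
# each of the K hops of a flow adds an equal share D/K of the end-to-end
# delay target D to the packet's priority index.

def next_priority_index(prev_index, end_to_end_target, path_length):
    """Recursive priority-index update applied at each downstream node."""
    return prev_index + end_to_end_target / path_length

# A packet on a 4-hop path with target D = 100 (ms, assumed) gains a
# 25-unit budget per hop; after the last hop the full target is allotted.
index = 0.0
for hop in range(4):
    index = next_priority_index(index, 100.0, 4)
```

A packet delayed at an upstream hop thus carries a tighter (less incremented) effective deadline downstream, which is precisely the compensation mechanism described above.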
In the receiver participation mechanism of DWOP, when a receiver node, using its ST information, finds that the sender is transmitting out of order, that is, the reference FIFO schedule is being violated, it piggy-backs an out-of-order notification on the control packets (CTS/ACK) it sends to the sender. In essence, information regarding the transmissions taking place in the two-hop neighborhood of the sender is propagated by the receiver node whenever it detects a FIFO schedule violation. Since the notification is sent only when a FIFO violation is detected, the
actual transmission may not strictly follow the FIFO schedule; rather, it approximates the FIFO
schedule. On receiving an out-of-order packet from a sender node, the receiver node transmits a
notification to the sender node carrying the actual rank R of the sender with respect to the
receiver's local ST. On receiving this out-of-order notification, the sender node goes into a back-
off state after completing the transmission of its current packet. The back-off period T_back-off is given by

T_back-off = (R - 1) * T_success,

where T_success is the longest possible time required to transmit a packet successfully, including the
RTS-CTS-DATA-ACK handshake. Thus the node backs off, allowing higher priority packets in
the neighborhood of the receiver to get transmitted first. In order to obtain a perfect FIFO
schedule, the receiver can very well be made not to reply to the out-of-order requests (RTS) of
the sender. This would cause the sender to time out and back off, thereby avoiding any out-of-
order transmission. But since the sender has already expended its resources in transmitting the
RTS successfully, it is allowed to complete the transmission of its current packet. This is a trade-
off between achieving perfect FIFO scheduling and high system utilization. Since in DWOP a
node's access to the medium is dependent on its rank in the receiver node's ST (the rank of a
node denotes the position of the node's entry in the receiver node's ST as per the priority of the
corresponding packet), information maintained in the ST must be consistent with the actual
network scenario. The stale entry elimination mechanism makes sure that the STs are free of
stale entries. An entry is deleted from the ST only after an ACK packet for the corresponding
entry is heard by the node. In case the ACK packet collides at the node, the corresponding entry
in the ST will never be removed. This may cause a large deviation from the ideal FIFO schedule.
Figure 2.28 (b) shows an example scenario of such perceived collisions. The sender and receiver of flow
B might have stale entries because of collisions caused by packets belonging to flow A and flow
C at the sender and receiver of flow B. It can be observed that, in case there is a stale entry in the
ST of a node, the node's own head-of-line packet position remains fixed, while other entries
below the head-of-line entry keep changing. The above observation is used as a stale entry
detection method. Thus, when a node observes that its rank remains fixed while packets whose
priorities are below the priority of its head-of-line packet are being transmitted, it concludes that
it may have one or more stale entries in its ST. The node simply deletes the oldest entry from its
ST, assuming it to be the stale entry. This mechanism thus eliminates stale entries from the STs
of nodes. In summary, DWOP tries to ensure that packets get access to the channel according to
the order defined by a reference scheduler. The above discussion was with respect to the
FIFO scheduler. Though the actual schedule deviates from the ideal FIFO schedule due to
information asymmetry and stale information in STs, the receiver participation and the stale
entry elimination mechanisms try to keep the actual schedule as close as possible to the ideal
schedule.
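The stale-entry detection heuristic described above can be sketched as follows. The entry shape (priority, source, insertion order) and the thresholds used to decide that a rank is "stuck" are illustrative assumptions, not protocol constants.

```python
# Sketch of the DWOP stale-entry heuristic: if a node's own rank in its ST
# stays fixed while packets of lower priority than its head-of-line packet
# are heard completing, the oldest entry is assumed stale and deleted.

def purge_stale(st, own_rank_history, lower_priority_completions):
    """st: list of (priority, source, insertion_order) entries.
    own_rank_history: recent observations of this node's own rank in st.
    lower_priority_completions: transmissions heard whose priority was
    below that of this node's head-of-line packet.
    Returns the deleted entry, or None if nothing looked stale."""
    rank_stuck = len(own_rank_history) >= 3 and len(set(own_rank_history)) == 1
    if rank_stuck and lower_priority_completions > 0:
        oldest = min(st, key=lambda entry: entry[2])  # assumed to be stale
        st.remove(oldest)
        return oldest
    return None
```

Deleting the oldest entry is exactly the guess the protocol makes; if the guess is wrong, subsequent overheard ACKs simply fail to match and the table converges again.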
Figure 2.29. Feedback mechanism. Reproduced with permission from [20], © Elsevier, 2004.
Using the count of DATA packets transmitted (pktsSent) and the count information carried by ACK packets (acksRcvd), available in the PDT, the packet delivery ratio (PDR) of the flow at any given time is computed as

PDR = acksRcvd / pktsSent. (2.9.1)
The protocol works as follows. The packet-exchange mechanism followed for transmitting each
data packet is depicted in Figure 2.31. In the example shown in the figure, each node is assumed
to have six directional antennas. The main concept in this protocol is the mechanism used by the
transmitting and receiving nodes to determine the directions of each other. The MAC layer at the
source node must be able to find the direction of the intended next-hop receiver node so that the
data packet could be transmitted through a directional antenna. It is the same case with the
receiver. The receiver node must be able to determine the direction of the sender node before
starting to receive data packets. This is performed in the following manner. An idle node is
assumed to be listening to the on-going transmissions on all its antennas. The sender node first
transmits an RTS packet addressed to the receiver. This RTS is transmitted through all the
antennas of the node (omnidirectional transmission). The intended receiver node, on receiving
this RTS packet, responds by transmitting a CTS packet, again on all its antennas
(omnidirectional transmission). The receiver node also notes down the direction of the sender by
identifying the antenna that received the RTS packet with maximum power. The source, on
receiving the CTS packet, determines the direction of the receiver node in a similar manner. The
neighbor nodes that receive the RTS or CTS packets defer their transmissions for appropriate
periods of time. After receiving the CTS, the source node transmits the next data packet through
the chosen directional antenna. All other antennas are switched off and remain idle. The receiver
node receives this data packet only through its selected antenna.
Since a node transmits packets only through directional antennas, the interference caused to
nodes in its direct transmission range is reduced considerably, which in turn leads to an increase
in the overall throughput of the system.
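The antenna-selection step described above can be sketched as follows. The six-antenna configuration and the signal-strength values are assumptions taken only for illustration.

```python
# Sketch of directional antenna selection: the antenna that received the
# RTS (or CTS) with maximum power is chosen for the data transfer, and all
# other antennas are switched off.

def select_antenna(rx_power):
    """rx_power: received signal strength per antenna (e.g., dBm).
    Returns the index of the antenna facing the peer node."""
    return max(range(len(rx_power)), key=lambda i: rx_power[i])

# Six antennas, as in the Figure 2.31 example; antenna 2 heard the RTS
# with the highest (least negative) power, so it is selected.
chosen = select_antenna([-82, -74, -61, -90, -79, -85])
```

Both sender and receiver apply the same rule, the sender on the CTS it receives and the receiver on the RTS, so each ends up pointing at the other.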
In the second directional MAC scheme (DMAC-2) proposed in [23], both directional
RTS (DRTS) as well as omnidirectional RTS (ORTS) transmissions are used. In DMAC-1, the
usage of DRTS may increase the probability of control packet collisions. For example, consider
Figure 2.34. Here node A initiates a DRTS transmission to node B. The DRTS packet is not
heard by node E, and so it is not aware of the transmission between nodes A and B. Suppose
node E sends a packet to node A, then that packet may collide with the OCTS or DACK packets
transmitted by node B. The probability of control packet collisions is reduced in DMAC-2. In
DMAC-2, a node that wants to initiate a data transfer may send an ORTS or a DRTS as per the
following two rules. (1) If none of the directional antennas at the node are blocked, then the node
sends an ORTS packet. (2) Otherwise, the node sends a DRTS packet, provided the desired
directional antenna is not blocked. Consider the same example in Figure 2.34. Here when node A
initiates data transfer to node B, assuming all its antennas are not blocked, it sends an ORTS
packet to node B. Node E would now receive this packet, and the antenna on which the ORTS
packet was received would remain blocked for the duration of the transmission from node A to
node B. If node E wants to send a packet to node A, it needs to wait for the duration of the
transmission between nodes A and B, so that its directional antenna pointing toward node A
becomes unblocked; only then can it start transmitting packets to node A. Thus, the combination
of ORTS and DRTS packets in DMAC-2 reduces collisions between control packets. Consider
Figure 2.35. Node B sends an OCTS packet to node A after receiving DRTS from node A. Node
C would be aware of the transmission and its antenna pointing toward node B would remain
blocked for the duration of the transmission. Now suppose node D sends an ORTS to node C.
Since one of node C's antennas is blocked currently, it would not respond to the ORTS packet.
This results in node D timing out and unnecessary retransmissions of ORTS packets to node C. To avoid this situation, another packet called directional wait-to-send (DWTS) is used. On receiving the ORTS from node D, node C transmits the DWTS packet using a directional antenna
toward node D. This DWTS packet carries the expected duration of the on-going transmission
between nodes A and B. Node D, on receiving this packet, waits for the specified interval of time
and then tries again.
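The two DMAC-2 transmit rules can be sketched as follows. The antenna-state bookkeeping (a set of blocked antenna indices) is an assumption made for illustration.

```python
# Sketch of the DMAC-2 rules for choosing the RTS type:
# rule 1 - if no directional antenna is blocked, send an omnidirectional
#          RTS (ORTS);
# rule 2 - otherwise send a directional RTS (DRTS), provided the antenna
#          pointing toward the intended receiver is not blocked;
# otherwise the node must defer until that antenna becomes unblocked.

def rts_mode(blocked, desired):
    """blocked: set of currently blocked antenna indices.
    desired: index of the antenna pointing toward the receiver."""
    if not blocked:
        return "ORTS"          # rule 1: all antennas free
    if desired not in blocked:
        return "DRTS"          # rule 2: desired antenna still free
    return "DEFER"             # desired antenna blocked: wait
```

Sending the ORTS whenever possible is what lets nodes like E in the Figure 2.34 example learn about on-going transmissions and avoid the control packet collisions seen in DMAC-1.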
2.11 OTHER MAC PROTOCOLS
There are several other MAC protocols that do not strictly fall under the categories discussed above. Some of these MAC protocols are described in this section.
2.11.1 Multichannel MAC Protocol
The multichannel MAC protocol (MMAC) uses
multiple channels for data transmission. There is no dedicated control channel. N channels that
have enough spectral separation between each other are available for data transmission. Each
node maintains a data structure called PreferableChannelList (PCL). The usage of the channels
within the transmission range of the node is maintained in the PCL. Based on their usage,
channels can be classified into three types.
• High preference channel (HIGH): The channel has been selected by the current node and is
being used by the node in the current beacon interval (beacon interval mechanism will be
explained later). Since a node has only one transceiver, there can be only one HIGH channel at a
time.
• Medium preference channel (MID): A channel which is free and is not being currently used in
the transmission range of the node is said to be a medium preference channel. If there is no
HIGH channel available, a MID channel would get the next preference.
• Low preference channel (LOW): Such a channel is already being used in the transmission range
of the node by other neighboring nodes. A counter is associated with each LOW state channel.
For each LOW state channel, the count of source-destination pairs which have chosen the
channel for data transmission in the current beacon interval is maintained. Time is divided into
beacon intervals and every node is synchronized by periodic beacon transmissions. So, for every
node, the beacon interval starts and ends almost at the same time. At the start of every beacon
interval, there exists a time interval called the ad hoc traffic indication messages (ATIM)
window. This window is used by the nodes to negotiate for channels for transmission during the
current beacon interval. ATIM messages such as ATIM, ATIM-ACK (ATIM-acknowledgment),
and ATIM-RES (ATIM-reservation) are used for this negotiation. The exchange of ATIM
messages takes place on a particular channel called the default channel. The default channel is
one of the multiple available channels. This channel is used for sending DATA packets outside
the ATIM window, like any other channel. A node that wants to transmit in the current beacon
interval sends an ATIM packet to the intended destination node. The ATIM message carries the
PCL of the transmitting node. The destination node, upon receiving the packet, uses the PCL
carried on the packet and its own PCL to select a channel. It includes this channel information in
the ATIM-ACK packet it sends to the source node. The source node, on receiving the ATIM-
ACK packet, determines whether it can transmit on the channel indicated in the ATIM-ACK
message. If so, it responds by sending the destination node an ATIM-RES packet. The ATIM-
ACK and ATIM-RES packets are also used to notify the neighbor nodes of the receiver and
sender nodes, respectively, about the channel that is going to be used for transmission in the
current beacon interval. The nodes that hear these packets update their PCLs accordingly. At the
end of the ATIM window, the source and destination nodes switch to the agreed-upon channel
and start communicating by exchanging RTS/CTS control packets. If the source node is not able
to use the channel selected by the destination, it cannot transmit packets to that destination in the
current beacon interval. It has to wait for the next beacon interval for again negotiating channels.
The ATIM packets themselves may be lost due to collisions; in order to prevent this, each node
waits for a randomly chosen back-off period (between 0 and CWmin) before transmitting the
ATIM packet. Operation of the MMAC protocol is illustrated in Figure 2.36. At the beginning of
the beacon interval, source node S1 sends an ATIM message to receiver R1. Receiver R1
responds by sending an ATIM-ACK packet (ATIM-ACK(1)) carrying the ID 1 of the channel it
prefers (in Figure 2.36 the number within parentheses indicates the ID of the preferred channel).
Node S1, on receiving this packet, confirms the reservation by sending an ATIM-RES packet
(ATIM-RES(1)) for channel 1. The ATIM-ACK(1) packet sent by receiver R1 is also overheard
by node R2. When node R2 receives an ATIM packet from source S2, it chooses a different
channel with ID 2, and sends the channel information to source S2 through the ATIM-ACK
packet (ATIM-ACK(2)). Since channel 2 is agreeable to node S2, it responds by sending the ATIM-RES(2) packet, and the reservation gets established. Once the ATIM window finishes,
the data transmission (through RTS-CTS-DATA-ACK packet exchange) between node pairs S1-
R1 and S2-R2 takes place on the corresponding reserved channels, channel 1 and channel 2,
respectively.
The destination node R selects the channel, using its own PCL and the PCL received from the source node S, as per the following rules.
• If there exists a HIGH state channel in node R's PCL, then that channel is selected.
• Else if there exists a HIGH state channel in the PCL of node S, then this channel is selected.
• Else if there exists a common MID state channel in the PCLs of both node S and node R, then that channel is selected. If many such channels exist, one of them is selected randomly.
• Else if there exists a channel which is in the MID state at only one of the two nodes, then that channel is chosen. If many such channels exist, one of them is selected randomly.
• If all channels in both PCLs are in the LOW state, the counters of the corresponding channels at nodes S and R are added, and the channel with the least count is selected. Ties are broken arbitrarily.
MMAC uses simple hardware. It requires only a single transceiver. It does not have any dedicated control channel. The throughput of MMAC is higher than that of IEEE 802.11 when the network load is high. This higher throughput is in spite of the fact that in MMAC only a single transceiver is used at each node. Unlike other protocols, the packet size in MMAC need not be increased in order to take advantage of the presence of an increased number of channels.
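The channel-selection rules can be sketched as follows. The PCL representation (a dict mapping channel id to "HIGH", "MID", or a ("LOW", count) pair), the assumption that both PCLs cover the same channel set, and the deterministic tie-breaking (in place of the random choice the protocol allows) are all made for illustration.

```python
# Sketch of MMAC channel selection at the receiver R, given the sender S's
# PCL (carried on the ATIM packet) and R's own PCL.

def select_channel(pcl_s, pcl_r):
    for pcl in (pcl_r, pcl_s):                  # HIGH at R first, then at S
        for ch, state in pcl.items():
            if state == "HIGH":
                return ch
    mid_r = {ch for ch, st in pcl_r.items() if st == "MID"}
    mid_s = {ch for ch, st in pcl_s.items() if st == "MID"}
    if mid_r & mid_s:                           # common MID channel
        return min(mid_r & mid_s)               # deterministic tie-break
    if mid_r | mid_s:                           # MID at only one node
        return min(mid_r | mid_s)
    # All channels LOW: add the counters at S and R, pick the least loaded.
    return min(pcl_r, key=lambda ch: pcl_s[ch][1] + pcl_r[ch][1])
```

For example, a channel that is HIGH at R wins outright, while in the all-LOW case the pair of counters is summed per channel before choosing.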
In the receiver-based autorate protocol (RBAR), if the rates chosen by the sender and receiver are different, then the reservation duration D_RTS calculated by the neighbor nodes of the sender would not be valid. The D_RTS time period, which is calculated based on the information carried initially by the RTS packet, is referred to as a tentative reservation. In order to overcome this problem, the sender node sends the data packet with a special MAC header containing a reservation subheader (RSH). The RSH contains a subset of the header fields already present in the IEEE 802.11 data frame, along with a check sequence for protecting the subheader. The fields in the RSH contain control information for determining the duration of the transmission. A neighbor node with tentative reservation entries in its NAV, on hearing the data packet, calculates D_RSH, the new reservation period, and updates its NAV to account for the difference between D_RTS and D_RSH. For the channel quality estimation and prediction
algorithm, the receiver node uses a sample of the instantaneous received signal strength at the
end of RTS reception. For the rate selection algorithm, a simple threshold-based technique is
used. Here the rate is chosen by comparing the channel quality estimate [e.g., signal to noise
ratio (SNR)] against a series of thresholds representing the desired performance bounds (e.g., a
series of SNR thresholds). The modulation scheme with the highest data rate, satisfying the
performance objective for the channel quality estimate, is chosen. RBAR employs an efficient
quality estimation mechanism, which leads to a high overall system throughput. RBAR can be
easily incorporated into many existing medium access control protocols.
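The threshold-based rate selection can be sketched as follows. The SNR thresholds and the rate values in the table are illustrative assumptions, not figures from the RBAR design.

```python
# Sketch of RBAR's threshold-based rate selection: the channel quality
# estimate (here, SNR from the RTS reception) is compared against a series
# of thresholds, and the modulation scheme with the highest data rate whose
# threshold is met is chosen.

RATE_TABLE = [            # (minimum SNR in dB, rate in Mbps), best rate first
    (25.0, 11.0),
    (18.0, 5.5),
    (10.0, 2.0),
    (4.0, 1.0),
]

def select_rate(snr_db):
    for threshold, rate in RATE_TABLE:
        if snr_db >= threshold:
            return rate
    return RATE_TABLE[-1][1]   # channel very poor: fall back to the most
                               # robust (lowest) rate
```

The receiver runs this selection and returns the chosen rate to the sender on the CTS packet, which is what makes the scheme receiver-based.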
In ICSMA, the total available bandwidth is split into two equal channels (say, channel 1 and
channel 2). The handshaking process is interleaved between the two channels, hence the name
interleaved carrier-sense multiple access. The working of ICSMA is very simple. It uses the
basic RTS-CTS-DATA-ACK exchange mechanism used in IEEE 802.11 DCF. If the source
node transmits the RTS packet on channel 1, the receiver node, if it is ready to accept packets
from the sender, responds by transmitting the CTS packet on channel 2. Each node maintains a
data structure called extended network allocation vector (E-NAV), which is analogous to the
network allocation vector (NAV) used in IEEE 802.11 DCF. On receiving an RTS packet, the
receiver node checks its E-NAV and finds out whether free time slots are available. It sends the
CTS only if free slots are available. The source node, on receiving this CTS, transmits the DATA
packet on channel 1. The receiver acknowledges the reception of the DATA packet by
transmitting the ACK on channel 2. The ICSMA channel access mechanism is illustrated
in Figure 2.41. Figure 2.41 (a) shows the RTS-CTS-DATA-ACK exchange of 802.11 DCF.
Figure 2.41 (b) shows simultaneous data packet transmissions between two nodes in ICSMA.
After transmitting an RTS or a DATA packet on a channel, the sender node waits on the other
channel for the CTS or ACK packet. If it does not receive any packet on the other channel, it
assumes that the RTS or DATA packet it transmitted was lost, and retries again. Similarly at the
receiver, after transmitting a CTS frame on one of the channels, the receiver node waits on the
other channel for the DATA packet. If the DATA packet is not received within the timeout
period, it retransmits the CTS packet.
Figure 2.41. Packet transmissions in (a) 802.11 DCF and (b) ICSMA. Reproduced with
permission from [29], © IEEE, 2003.
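The interleaving of the handshake across the two channels can be sketched as follows; the list representation of the exchange is an assumption for illustration.

```python
# Sketch of ICSMA channel interleaving: each packet of the handshake is sent
# on the channel complementary to the one carrying the previous packet, so
# the sender of a packet on one channel awaits the reply on the other.

HANDSHAKE = ["RTS", "CTS", "DATA", "ACK"]

def channel_schedule(first_channel=1):
    """Returns [(packet_type, channel), ...] with channels alternating."""
    other = {1: 2, 2: 1}
    plan, ch = [], first_channel
    for pkt in HANDSHAKE:
        plan.append((pkt, ch))
        ch = other[ch]          # interleave: reply goes on the other channel
    return plan
```

With the RTS on channel 1, the CTS and ACK travel on channel 2 while the DATA returns to channel 1, which is the pattern the exposed-terminal arguments below rely on.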
The performance improvement of ICSMA is attributed to the following facts:
• Nodes that hear RTS in a particular channel (say, channel 1) and do not hear the corresponding CTS on the other channel (channel 2) conclude that they are only sender-exposed in channel 1. Therefore, if they have packets to send, they can use channel 1 to transmit RTS to other nodes. This would not have been possible in 802.11 DCF, where transmissions by a sender-exposed node would have collided with the corresponding currently active sender node.
• Nodes that hear only the CTS in a particular channel (say, channel 1) and had not heard the corresponding RTS on the other complementary channel (channel 2) realize that they are only receiver-exposed on channel 1 to the on-going transmission. If they receive any RTS on channel 2, they would not refrain from sending a CTS on channel 1 for the received RTS. This would also not have been possible in 802.11 DCF, where there would have been collision at the receiver of the on-going session between the CTS packet transmitted by this node and the DATA packets belonging to the on-going session. Also, if this CTS transmission is successful, then there might have been collisions between DATA packets belonging to the two sessions at both the receiver nodes.
The E-NAV
used in ICSMA is implemented as two linked lists of blocks, namely, the SEList and the REList.
Each block in each linked list has a start time and an end time. A typical list looks like s1, f1; s2, f2;
...; sk, fk where Si denotes the start time of the ith block in the list and fidenotes the finish time of the
ith block in the list. The SEList is used to determine if the node would be sender-exposed at any
given instant of time in the future. A node is predicted to be senderexposed at any time t if there
is a block sj, fj in the SEList such that sj < t < fj. Similarly, the REList tells if the node would be
receiver-exposed at any time in the future. A node is predicted to be receiver-exposed at any time
t if there exists a block sj, fj in the REList such that sj < t < fj. The SEList and the REList are
updated whenever the RTS and CTSpackets are received by the node. The SEList is modified
when an RTS packet is received, and the REList is modified when a CTS packet is received by
the node. The modification in the list might be adding a new block, modifying an existing block,
or merging two or more existing blocks and modifying the resulting block. ICSMA is a simple
two-channel MAC protocol for ad hoc wireless networks that reduces the number of exposed
terminals and tries to maximize the number of simultaneous sessions.ICSMA was found to
perform better than the 802.11 DCF protocol is terms of metrics such as throughput and channel
access delay.
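The E-NAV bookkeeping described above can be sketched in Python. This is an illustrative simplification, not the protocol's specified implementation: the block lists are kept as sorted Python lists rather than linked lists, and the merge rule is an assumption consistent with the description.

```python
class ExposureList:
    """A list of (start, finish) blocks, kept sorted and merged,
    standing in for the SEList/REList of the ICSMA E-NAV."""
    def __init__(self):
        self.blocks = []  # sorted, non-overlapping (start, finish) pairs

    def add(self, start, finish):
        """Insert a new block, absorbing any existing blocks it overlaps."""
        merged = []
        for s, f in self.blocks:
            if f < start or finish < s:          # disjoint: keep as-is
                merged.append((s, f))
            else:                                 # overlapping: merge into the new block
                start, finish = min(start, s), max(finish, f)
        merged.append((start, finish))
        self.blocks = sorted(merged)

    def exposed_at(self, t):
        """A node is predicted exposed at time t if some block (s, f) has s < t < f."""
        return any(s < t < f for s, f in self.blocks)


class ENAV:
    """Extended NAV: SEList updated on overheard RTS, REList on overheard CTS."""
    def __init__(self):
        self.se_list = ExposureList()   # sender-exposed intervals
        self.re_list = ExposureList()   # receiver-exposed intervals

    def on_rts(self, start, finish):
        self.se_list.add(start, finish)

    def on_cts(self, start, finish):
        self.re_list.add(start, finish)
```

On receiving an RTS, the receiver would consult such a structure to find free slots before replying with a CTS.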
UNIT-III
ROUTING PROTOCOLS FOR AD HOC WIRELESS
NETWORKS
3.1 INTRODUCTION
An ad hoc wireless network consists of a set of mobile nodes (hosts) that are
connected by wireless links. The network topology (the physical connectivity of
the communication network) in such a network may keep changing randomly.
Routing protocols used in traditional wired networks, which find a path to be
followed by data packets from a source node to a destination node, cannot be
directly applied in ad hoc wireless networks due to their highly dynamic topology,
absence of established infrastructure for centralized administration (e.g., base
stations or access points), bandwidth-constrained wireless links, and resource
(energy)-constrained nodes. A variety of routing protocols for ad hoc wireless
networks have been proposed in the recent past. This chapter first presents the
issues involved in designing a routing protocol and then the different
classifications of routing protocols for ad hoc wireless networks. It then
discusses the working of several existing routing protocols with illustrations.
The exposed terminal problem refers to the inability of a node which is blocked
due to transmission by a nearby transmitting node to transmit to another node.
Consider the example in Figure 3.3. Here, if a transmission from node B to
another node A is already in progress, node C cannot transmit to node D, as it
concludes that its neighbor, node B, is in transmitting mode and hence should
not interfere with the on-going transmission. Thus, reusability of the radio
spectrum is affected. For node C to transmit simultaneously when node B is
transmitting, the transmitting frequency of node C must be different from its
receiving frequency.
Figure 3.2. Hidden terminal problem with RTS-CTS-Data-ACK scheme.
1. Flat topology routing protocols: Protocols that fall under this category make
use of a flat addressing scheme similar to the one used in IEEE 802.3 LANs. They
assume the presence of a globally unique (or at least unique to the connected
part of the network) addressing mechanism for nodes in an ad hoc wireless
network.
2. Hierarchical topology routing protocols: Protocols belonging to this
category make use of a logical hierarchy in the network and an associated
addressing scheme. The hierarchy could be based on geographical information
or it could be based on hop distance.
When a node detects a link break, it sends an update message to its neighbors
with the link cost of the broken link set to ∞. After receiving
the update message, all affected nodes update their minimum distances to the
corresponding nodes (including the distance to the destination). The node that
initiated the update message then finds an alternative route, if available from its
DT. Note that this new route computed will not contain the broken link.
Consider the scenario shown in Figure 3.8. When the link between nodes 12 and
15 breaks, all nodes having a route to the destination with predecessor as node
12 delete their corresponding routing entries. Both node 12 and node 15 send
update messages to their neighbors indicating that the cost of the link between
nodes 12 and 15 is ∞. If the nodes have any other alternative
route to the destination node 15, they update their routing tables and
indicate the changed route to their neighbors by sending an update message. A
neighbor node, after receiving an update message, updates its routing table only
if the new path is better than the previously existing paths. For example, when
node 12 finds an alternative route to the destination through node 13, it
broadcasts an update message indicating the changed path. After receiving the
update message from node 12, neighboring nodes 8, 14, 15, and 13 do not
change their routing entry corresponding to destination 15 while node 4 and
node 10 modify their entries to reflect the new updated path. Nodes 4 and 10
again send an update message to indicate the correct path to the destination for
their respective neighbors. When node 10 receives node 4's update message, it
again modifies its routing entry to optimize the path to the destination node (15)
while node 4 discards the update entry it received from node 10.
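The link-break handling described above can be sketched as follows. This is a simplified illustration: the dictionaries standing in for the distance table (DT) and the neighbor information are assumptions, not the protocol's exact data structures.

```python
INF = float('inf')

def best_route(link_cost, dist_via_neighbor, dest):
    """Pick the neighbor giving the minimum total cost to dest.
    link_cost: neighbor -> cost of the direct link to that neighbor
    dist_via_neighbor: neighbor -> {dest: that neighbor's distance to dest}"""
    best = (INF, None)
    for nbr, c in link_cost.items():
        d = c + dist_via_neighbor.get(nbr, {}).get(dest, INF)
        if d < best[0]:
            best = (d, nbr)
    return best  # (cost, next hop)

def on_link_break(link_cost, dist_via_neighbor, broken_nbr, dest):
    """Set the broken link's cost to infinity (as in the update message)
    and look for an alternative route from the distance table."""
    link_cost[broken_nbr] = INF
    return best_route(link_cost, dist_via_neighbor, dest)
```

In the Figure 3.8 scenario, node 12 would set the cost of the link to node 15 to infinity and recompute, finding the alternative route through node 13 in this manner.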
STAR has very low communication overhead among all the table-driven routing
protocols. The use of the LORA approach in this table-driven routing protocol
reduces the average control overhead compared to several other on-demand
routing protocols.
Optimizations
Several optimization techniques have been incorporated into the basic DSR
protocol to improve the performance of the protocol. DSR uses the route cache
at intermediate nodes. The route cache is populated with routes that can be
extracted from the information contained in data packets that get forwarded.
This cache information is used by the intermediate nodes to reply to the source
when they receive a RouteRequest packet and if they have a route to the
corresponding destination. By operating in the promiscuous mode, an
intermediate node learns about route breaks. Information thus gained is used to
update the route cache so that the active routes maintained in the route cache do
not use such broken links. During network partitions, the affected nodes initiate
RouteRequest packets. An exponential backoff algorithm is used to avoid
frequent RouteRequest flooding in the network when the destination is in
another disjoint set. DSR also allows piggy-backing of a data packet on
theRouteRequest so that a data packet can be sent along with the RouteRequest.
If optimization is not allowed in the DSR protocol, the route construction phase
is very simple. All the intermediate nodes flood the RouteRequest packet if it is
not redundant. For example, after receiving the RouteRequest packet from node
1 (refer to Figure 3.10), all its neighboring nodes, that is, nodes 2, 5, and 6,
forward it. Node 4 receives the RouteRequest from both nodes 2 and 5. Node 4
forwards the first RouteRequest it receives from any one of the nodes 2 and 5
and discards the other redundant/duplicate RouteRequest packets. The
RouteRequest is propagated till it reaches the destination which initiates the
RouteReply. As part of optimizations, if the intermediate nodes are also allowed
to originate RouteReply packets, then a source node may receive multiple replies
from intermediate nodes. For example, in Figure 3.11, if the intermediate node
10 has a route to the destination via node 14, it also sends the RouteReply to the
source node. The source node selects the latest and best route, and uses that for
sending data packets. Each data packet carries the complete path to its
destination.
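The route construction phase described above amounts to a flood with duplicate suppression, which can be sketched as follows. This is a simplified model: the adjacency-map representation and breadth-first ordering are assumptions; real DSR forwards asynchronously over the radio.

```python
from collections import deque

def dsr_route_discovery(adj, src, dst):
    """Breadth-first flood of a RouteRequest. Each copy accumulates the path it
    traversed, and redundant copies are discarded at intermediate nodes.
    adj: node -> iterable of neighbors. Returns the source route that would be
    carried back in the RouteReply, or None if the destination is unreachable."""
    seen = {src}                    # nodes that have already forwarded the request
    queue = deque([[src]])          # each entry is the path recorded so far
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path             # destination originates the RouteReply
        for nbr in adj.get(node, ()):
            if nbr not in seen:     # duplicate RouteRequests are dropped
                seen.add(nbr)
                queue.append(path + [nbr])
    return None
```

Because the returned route is the complete node sequence, each subsequent data packet can carry it as a source route, exactly as the text describes.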
This protocol uses a reactive approach which eliminates the need to periodically
flood the network with table update messages which are required in a table-
driven approach. In a reactive (on-demand) approach such as this, a route is
established only when it is required and hence the need to find routes to all other
nodes in the network as required by the table-driven approach is eliminated. The
intermediate nodes also utilize the route cache information efficiently to reduce
the control overhead. The disadvantage of this protocol is that the route
maintenance mechanism does not locally repair a broken link. Stale route cache
information could also result in inconsistencies during the route reconstruction
phase. The connection setup delay is higher than in table-driven protocols. Even
though the protocol performs well in static and low-mobility environments, the
performance degrades rapidly with increasing mobility. Also, considerable
routing overhead is involved due to the source-routing mechanism employed in
DSR. This routing overhead is directly proportional to the path length.
The main advantage of this protocol is that routes are established on demand
and destination sequence numbers are used to find the latest route to the
destination. The connection setup delay is less. One of the disadvantages of this
protocol is that intermediate nodes can lead to inconsistent routes if the source
sequence number is very old and the intermediate nodes have a higher but not
the latest destination sequence number, thereby having stale entries. Also
multiple RouteReply packets in response to a single RouteRequest packet can
lead to heavy control overhead. Another disadvantage of AODV is that the
periodic beaconing leads to unnecessary bandwidth consumption.
The route establishment function is performed only when a node requires a path
to a destination but does not have any directed link. This process establishes a
destination-oriented directed acyclic graph (DAG) using a Query/Update
mechanism. Consider the network topology shown in Figure 3.14. When node 1
has data packets to be sent to the destination node 7, a Query packet is
originated by node 1 with the destination address included in it. This Query
packet is forwarded by intermediate nodes 2, 3, 4, 5, and 6, and reaches the
destination node 7, or any other node which has a route to the destination. The
node that terminates (in this case, node 7) the Query packet replies with an
Update packet containing its distance from the destination (it is zero at the
destination node). In the example, the destination node 7 originates an Update
packet. Each node that receives the Update packet sets its distance to a value
higher than the distance of the sender of the Update packet. By doing this, a set
of directed links from the node which originated the Query to the destination
node 7 is created. This forms the DAG depicted in Figure 3.14. Once a path to
the destination is obtained, it is considered to exist as long as the path is
available, irrespective of the path length changes due to the reconfigurations that
may take place during the course of the data transfer session. When an
intermediate node (say, node 5) discovers that the route to the destination node
is invalid, as illustrated in Figure 3.15, it changes its distance value to a higher
value than its neighbors and originates an Update packet. The neighboring node
4 that receives the Update packet reverses the link between 1 and 4 and
forwards the Update packet. This is done to update the DAG corresponding to
destination node 7. This results in a change in the DAG. If the source node has
no other neighbor that has a path to the destination, it initiates a fresh
Query/Update procedure. Assume that the link between nodes 1 and 4 breaks.
Node 4 reverses the path between itself and node 5, and sends an update
message to node 5. Since this conflicts with the earlier reversal, a partition in the
network can be inferred. If the node detects a partition, it originates a Clear
message, which erases the existing path information in that partition related to
the destination.
LAR reduces the control overhead by limiting the search area for finding a path.
The efficient use of geographical position information, reduced control
overhead, and increased utilization of bandwidth are the major advantages of
this protocol. The applicability of this protocol depends heavily on the
availability of GPS infrastructure or similar sources of location information.
Hence, this protocol cannot be used in situations where there is no access to
such information.
The main advantage of this protocol is that it finds more stable routes when
compared to the shortest path route selection protocols such as DSR and
AODV. This protocol accommodates temporal stability by using beacon counts
to classify a link as stable or weak. The main disadvantage of this protocol is
that it puts a strong RouteRequest forwarding condition which results in
RouteRequest failures. A failed RouteRequest attempt initiates a similar path-
finding process for a new path without considering the stability criterion. Such
multiple flooding of RouteRequest packets consumes a significant amount of
bandwidth in the already bandwidth-constrained network, and also increases the
path setup time. Another disadvantage is that the strong links criterion increases
the path length, as shorter paths may be ignored for more stable paths.
The use of LET and RET estimates reduces path breaks and their associated ill
effects such as reduction in packet delivery, increase in the number of
out-of-order packets, and non-optimal paths resulting from local reconfiguration
attempts. The proactive route reconfiguration mechanism adopted here works
well when the topology is highly dynamic. The requirement of time
synchronization increases the control overhead. Dependency on the GPS
infrastructure affects the operability of this protocol in environments where such
infrastructure may not be available.
3.6 HYBRID ROUTING PROTOCOLS
In hybrid routing protocols, each node maintains the network topology information up to m hops. The different
existing hybrid protocols are presented below.
The main advantage of CEDAR is that it performs both routing and QoS path
computation very efficiently with the help of core nodes. The increase- and
decrease-waves help in appropriate propagation of the stable high-bandwidth
link information and the unstable low-bandwidth link information, respectively.
Core broadcasts provide a reliable mechanism for establishing paths with QoS
support. A disadvantage of this protocol is that since route computation is
carried out at the core nodes only, the movement of the core nodes adversely
affects the performance of the protocol. Also, the core node update information
could cause a significant amount of control overhead.
When an intermediate node in an active path detects a broken link in the path, it
performs a local path reconfiguration in which the broken link is bypassed by
means of a short alternate path connecting the ends of the broken link. A path
update message is then sent to the sender node to inform it about the change in
path. This results in a sub-optimal path between two end points, but achieves
quick reconfiguration in case of link failures. To obtain an optimal path, the
sender reinitiates the global path-finding process after a number of local
reconfigurations.
Advantages and Disadvantages
By combining the best features of proactive and reactive routing schemes, ZRP
reduces the control overhead compared to the RouteRequest flooding
mechanism employed in on-demand approaches and the periodic flooding of
routing information packets in table-driven approaches. But in the absence of a
query control, ZRP tends to produce higher control overhead than the
aforementioned schemes. This can happen due to the large overlapping of
nodes' routing zones. The query control must ensure that redundant or duplicate
RouteRequests are not forwarded. Also, the decision on the zone radius has a
significant impact on the performance of the protocol.
The hierarchical approach used in this protocol significantly reduces the storage
requirements and the communication overhead created because of mobility. The
zone-level topology is robust and resilient to path breaks due to mobility of
nodes. Intra-zonal topology changes do not generate network-wide control
packet transmissions. A main disadvantage of this protocol is the additional
overhead incurred in the creation of the zone-level topology. Also the path to
the destination is sub-optimal. The geographical information required for the
creation of the zone level topology may not be available in all environments.
Two different algorithms have been proposed by Sisodia et al. for finding
preferred links. The first algorithm selects the route based on the degree of neighbor
nodes (degree of a node is the number of neighbors). Preference is given to
nodes whose neighbor degree is higher. As higher degree neighbors cover more
nodes, only a few of them are required to cover all the nodes of the NNT. This
reduces the number of broadcasts. The second algorithm gives preference to
stable links. Links are not explicitly classified as stable or unstable. The notion
of stability is based on the weight given to links.
Neighbor Degree-Based Preferred Link Algorithm (NDPL)
Let d be the node that calculates the preferred list table PLT. TP is the traversed path and OLDPL is the preferred list of the received RouteRequest packet. The NNT of node d is denoted by NNTd. N(i) denotes the neighbors of node i and node i itself. The include list (INL) is a set containing all neighbors reachable by transmitting the RouteRequest packet after execution of the algorithm, and the exclude list (EXL) is a set containing all neighbors that are unreachable by transmitting the RouteRequest packet after execution of the algorithm.
1. In this step, node d marks the nodes that are not eligible for further forwarding the RouteRequest packet.
(a) If a node i of TP is a neighbor of node d, mark all neighbors of i as reachable, that is, add N(i) to INL.
(b) If a node i of OLDPL is a neighbor of node d, and i < d, mark all neighbors of node i as reachable, that is, include N(i) in INL.
(c) If neighbor i of node d has a neighbor n present in TP, mark all neighbors of i as reachable by adding N(i) to INL.
(d) If neighbor i of node d has a neighbor n present in OLDPL, and n < d, here again add N(i) to INL, thereby marking all neighbors of node i as reachable.
2. If neighbor i of node d is not in INL, put i in the preferred list table PLT and mark all neighbors of i as reachable. If i is present in INL, mark the neighbors of i as unreachable by adding them to EXL, as N(i) may not be included in this step. Here the neighbors i of d are processed in decreasing order of their degrees. After execution of this step, the RouteRequest is guaranteed to reach all neighbors of d. If EXL is not empty, some neighbors' neighbors n of node d are currently unreachable, and they are included in the next step.
3. If neighbor i of d has a neighbor n present in EXL, put i in PLT and mark all neighbors of i as reachable. Delete all neighbors of i from EXL. Neighbors are processed in decreasing order of their degrees. After execution of this step, all the nodes in NNTd are reachable.
4. Reduction steps are applied here in order to remove overlapping neighbors from PLT without compromising on reachability.
(a) Remove each neighbor i from PLT if N(i) is covered by the remaining neighbors of PLT. Here the minimum-degree neighbor is selected every time.
(b) Remove neighbor i from PLT whose N(i) is covered by node d itself.
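The core idea of NDPL, covering all nodes of the NNT with as few high-degree neighbors as possible, can be sketched as a greedy set cover. This is a simplification: the INL/EXL bookkeeping and the reduction steps of the full algorithm are collapsed into a single greedy pass.

```python
def preferred_links(neighbors, cover):
    """Greedy sketch of NDPL's core: pick neighbors in decreasing order of
    degree until every node of the neighbors'-neighbors topology (NNT) is
    covered, so that few rebroadcasts reach all two-hop nodes.
    neighbors: iterable of neighbor ids
    cover: neighbor i -> set N(i), i.e. i's neighbors and i itself"""
    uncovered = set().union(*cover.values())
    plt = []
    for i in sorted(neighbors, key=lambda n: len(cover[n]), reverse=True):
        if cover[i] & uncovered:        # i still covers something new
            plt.append(i)
            uncovered -= cover[i]
    return plt
```

Higher-degree neighbors are tried first, mirroring the "decreasing order of their degrees" rule in Steps 2 and 3 above.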
Weight-Based Preferred Link Algorithm (WBPL)
In this algorithm, a node finds the preferred links based on stability, which is indicated by a weight, which in turn is based on its neighbors' temporal and spatial stability.
1. Let BCnti be the count of beacons received from a neighbor i and THbcon be the number of beacons generated during a time period equal to that required to cover twice the transmission range. The weight given to i based on time stability is computed from these beacon counts.
2. Estimate the distance to i from the received power of the last few packets using appropriate propagation models. The weight based on spatial stability is computed from this distance estimate.
3. The weight assigned to the link to i is the combined weight given to time stability and spatial stability.
5. If a link is overloaded, delete the associated neighbor from PLT. Execute Step 1 of NDPL and delete every i such that i ∈ PLT and i ∈ INL. Also, delete those neighbors from PLT that satisfy Step 4 of NDPL.
Consider, for example, Figure 3.29, where node 3 is the source and node 8 is the destination. S and U denote stable and unstable links. In WBPL and NDPL, the source initiates the RouteRequest, as Dest is not present in its NNT, and computes the preferred link table (PLT). Let K = 2 be the preferred list's size. In NDPL, after Step 2 the PLT becomes {5, 1}, and after Step 3 the PLT remains {5, 1}. In reduction Step 4b, neighbor 1 is deleted from PLT, and hence node 3 sends the RouteRequest only to neighbor 5. In WBPL, weights are assigned to all neighbors according to Steps 1, 2, and 3, and all neighbors are in PLT. In Step 5, neighbors 1, 4, and 2 are deleted from PLT due to Steps 4a and 4b of NDPL, and hence only node 5 remains. Now the RouteRequest can be sent as a unicast packet to avoid a broadcast. If it is broadcast, all the nodes receive the packet, but only node 5 can further forward it. As Dest 8 is present in node 5's NNT, node 5 directly sends it to node 6, which forwards it to Dest. Here only three packets are transmitted for finding the route, and the path length is 3. Now consider SSA. After broadcasts by nodes 3 and 5, the RouteRequest packet reaches node 6, where it is rejected, and hence the RouteRequest fails to find a route. After a timeout, the source sets a flag indicating that the RouteRequest may be processed by all nodes, and hence finds the same route as WBPL and NDPL.
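The weight computation of Steps 1-3 of WBPL can be sketched as follows. The exact formulas are not reproduced in the text above, so the expressions used here (fraction of expected beacons heard for temporal stability, normalized distance for spatial stability, and their mean as the combined weight) are assumptions for illustration only.

```python
def link_weight(bcnt, th_bcon, dist, tx_range):
    """Hypothetical WBPL link weight (all formulas assumed):
    bcnt: beacons actually received from neighbor i
    th_bcon: beacons expected over the reference period (THbcon)
    dist: estimated distance to i from received power
    tx_range: the node's transmission range"""
    w_time = min(bcnt / th_bcon, 1.0)            # assumed temporal-stability weight
    w_space = max(1.0 - dist / tx_range, 0.0)    # assumed spatial-stability weight
    return (w_time + w_space) / 2.0              # assumed combination: the mean
```

A neighbor heard on every expected beacon and estimated to be very close would score near 1.0; a neighbor near the edge of the range with few beacons heard would score near 0.0, marking its link as unstable.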
The MPRset need not be optimal, and during initialization of the network it may
be the same as the neighbor set. The smaller the number of nodes in the MPRset,
the higher the efficiency of the protocol compared to link state routing. Every node
periodically originates topology control (TC) packets that contain topology
information with which the routing table is updated. These TC packets contain
the MPRSelector set of every node and are flooded throughout the network
using the multipoint relaying mechanism. Every node in the network receives
several such TC packets from different nodes, and by using the information
contained in the TC packets, the topology table is built. A TC message may be
originated by a node earlier than its regular period if there is a change in the
MPRSelector set after the previous transmission and a minimal time has elapsed
after that. An entry in the topology table contains a destination node which is
the MPRSelector and a last-hop node to that destination, which is the node that
originates the TC packet. Hence, the routing table maintains routes for all other
nodes in the network.
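MPR selection itself is not fully specified above; a common greedy heuristic, choosing at each step the neighbor that covers the most still-uncovered two-hop nodes, can be sketched as follows (the greedy rule and the input representation are assumptions, not OLSR's mandated algorithm):

```python
def select_mprs(one_hop, two_hop_of):
    """Greedy sketch of multipoint relay (MPR) selection: choose a small subset
    of one-hop neighbors that covers every strict two-hop neighbor.
    one_hop: set of one-hop neighbor ids
    two_hop_of: neighbor id -> set of that neighbor's own neighbors"""
    strict_two_hop = set()
    for n in one_hop:
        strict_two_hop |= two_hop_of.get(n, set())
    strict_two_hop -= one_hop                  # keep only strict two-hop nodes
    mprs, uncovered = set(), set(strict_two_hop)
    while uncovered:
        # pick the neighbor covering the most still-uncovered two-hop nodes
        best = max(one_hop - mprs,
                   key=lambda n: len(two_hop_of.get(n, set()) & uncovered))
        gained = two_hop_of.get(best, set()) & uncovered
        if not gained:
            break                              # remaining nodes not reachable in two hops
        mprs.add(best)
        uncovered -= gained
    return mprs
```

Only the selected MPRs rebroadcast flooded packets, which is what reduces the number of broadcasts compared to classical link state flooding.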
OLSR has several advantages that make it a better choice over other table-driven
protocols. It reduces the routing overhead associated with table-driven routing,
in addition to reducing the number of broadcasts done. Hence OLSR has the
advantages of low connection setup time and reduced control overhead.
Every node maintains information about all its neighbors and the status of links
to each of them. This information is broadcast within the cluster at regular
intervals. The cluster leader exchanges the topology and link state routing
information among its peers in the neighborhood clusters, using which the
next-higher-level clustering is performed. This exchange of link state
information is done over multiple hops that consist of gateway nodes and
cluster-heads. The path between two cluster-heads which is formed by multiple
wireless links is called a virtual link. The link status for the virtual
link (otherwise called a tunnel) is obtained from the link status parameters of the
wireless links that constitute the virtual link. In Figure 3.31, the path between
first-level clusters L1 - 6 and L1 - 4 includes the wireless links 6 - 12 - 5 - 16 - 4.
The clustering is done recursively to the higher levels. At any level, the cluster
leader exchanges topology information with its peers. After obtaining
information from its peers, it floods the information to the lower levels, making
every node obtain the hierarchical topology information. This hierarchical
topology necessitates a hierarchical addressing which helps in operating with
less routing information against the full topology exchange required in the link
state routing. The hierarchical addressing defined in HSR includes the
hierarchical ID (HID) and node ID. The HID is a sequence of IDs of cluster
leaders of all levels starting from the highest level to the current node. This ID
of a node in HSR is similar to the unique MAC layer address. The hierarchical
addresses are stored in an HSR table at every node that indicates the node's own
position in the hierarchy. The HSR table is updated whenever routing update
packets are received by the node. The hierarchical address of node 11 in Figure
3.31 is < 6, 6, 6, 6, 11 >, where the last entry (11) is the node ID and the rest
consists of the node IDs of the cluster leaders that represent the location of node
11 in the hierarchy. Similarly, the HID of node 4 is < 6, 4, 4, 4 >. When node 11
needs to send a packet to node 4, the packet is forwarded to the highest node in
the hierarchy (node 6). Node 6 delivers the packet to node 4, which is at the top-
most level of the hierarchy.
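The HID-based forwarding decision can be illustrated with a small sketch: comparing two hierarchical addresses shows the level at which they diverge, i.e., the cluster leader through which the packet must travel (the tuple representation of HIDs is an assumption for illustration):

```python
def common_prefix_level(hid_a, hid_b):
    """Length of the shared prefix of two hierarchical IDs (HIDs).
    The packet must climb the hierarchy up to the first cluster leader
    at which the two HIDs diverge."""
    level = 0
    for a, b in zip(hid_a, hid_b):
        if a != b:
            break
        level += 1
    return level

# Using the HIDs from Figure 3.31: node 11 is <6, 6, 6, 6, 11> and node 4 is
# <6, 4, 4, 4>. They share only the top-level leader (node 6), so the packet
# from node 11 is forwarded up to node 6, which delivers it down toward node 4.
```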
Advantages and Disadvantages
The HSR protocol reduces the routing table size by making use of hierarchy
information. In HSR, the storage required is O(n × m), compared to the O(n^m)
required for a flat topology link state routing protocol (n is the average number
of nodes in a cluster and m is the number of levels, so the network contains
about n^m nodes in total). Though the reduction in the
amount of routing information stored at nodes is appreciable, the overhead
involved in exchanging packets containing information about the multiple levels
of hierarchy and the leader election process make the protocol unaffordable in
the ad hoc wireless networks context. Besides, the number of nodes that
participate in an ad hoc wireless network does not grow to the dimensions of the
number of nodes in the Internet where the hierarchical routing is better suited. In
the military applications of ad hoc wireless networks, the hierarchy of routing
assumes significance where devices with higher capabilities of communication
can act as the cluster leaders.
The link state information for the nodes belonging to the smallest scope is
exchanged at the highest frequency. The frequency of exchanges decreases with
an increase in scope. This keeps the immediate neighborhood topology
information maintained at a node more precise compared to the information
about nodes farther away from it. Thus the message size for a typical topology
information update packet is significantly reduced due to the removal of
topology information regarding the far-away nodes. The path information for a
distant node may be inaccurate as there can be staleness in the information. But
this is compensated by the fact that the route gets more and more accurate as the
packet nears its destination. FSR scales well for large ad hoc wireless networks
because of the reduction in routing overhead due to the use of the
above-described mechanism, where varying frequencies of updates are used.
Figure 3.33 illustrates an example depicting the network topology information
maintained at nodes in a network. The routing information for the nodes that are
one hop away from a node are exchanged more frequently than the routing
information about nodes that are more than one hop away. Information
regarding nodes that are more than one hop away from the current node are
listed below the dotted line in the topology table. Figure 3.33. An illustration
of routing tables in FSR.
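The scope-dependent update frequency can be sketched as follows. The particular schedule used here (one-hop entries every tick, farther entries every third tick) is a hypothetical example; FSR only requires that the update period grow with scope.

```python
def entries_to_send(routing_table, tick, period_for_scope):
    """Fisheye update sketch: nearer destinations are refreshed more often.
    routing_table: dest -> hop count to that dest
    period_for_scope: maps a hop count to an update period in ticks
    Returns the destinations whose entries go into this tick's update packet."""
    return [dest for dest, hops in routing_table.items()
            if tick % period_for_scope(hops) == 0]

# Hypothetical schedule: one-hop entries every tick, everything else every 3 ticks.
period = lambda hops: 1 if hops <= 1 else 3
```

At most ticks only the small, nearby portion of the table is transmitted, which is exactly why the typical update packet shrinks while distant-route accuracy degrades gracefully.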
This metric proposes to distribute the load among all nodes in the network so
that the power consumption pattern remains uniform across them. This problem
is very complex when the rate and size of data packets vary. A nearly optimal
performance can be achieved by routing packets to the least-loaded next-hop
node.
Minimum Cost per Packet
In order to maximize the life of every node in the network, this routing metric is
made as a function of the state of the node's battery. A node's cost decreases
with an increase in its battery charge and vice versa. Translation of the
remaining battery charge to a cost factor is used for routing. With the
availability of a battery discharge pattern, the cost of a node can be computed.
This metric has the advantage of ease in the calculation of the cost of a node and
at the same time congestion handling is done.
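The battery-based cost metric can be sketched as follows. The text gives no specific cost function, so the inverse-of-remaining-charge form below is an assumed example of a cost that decreases with increasing battery charge:

```python
def node_cost(remaining_charge, full_charge):
    """Assumed cost function: cost rises as the battery drains.
    Any monotonically decreasing function of remaining charge would do."""
    return full_charge / max(remaining_charge, 1e-9)

def path_cost(path, charge, full=100.0):
    """Cost per packet along a path = sum of the forwarding nodes' costs.
    charge: node -> remaining battery charge"""
    return sum(node_cost(charge[n], full) for n in path)

def pick_route(paths, charge):
    """Minimum-cost-per-packet routing: choose the path of least total cost."""
    return min(paths, key=lambda p: path_cost(p, charge))
```

A path through a nearly drained node becomes expensive and is avoided, which spreads the forwarding load and delays the first node failure.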
Minimize Maximum Node Cost
This metric minimizes the maximum cost per node for a packet after routing a
number of packets or after a specific period. This delays the failure of a node,
occurring due to higher discharge because of packet forwarding.
TRANSPORT LAYER AND SECURITY PROTOCOLS
FOR AD HOC WIRELESS NETWORKS
3.10 INTRODUCTION
The objectives of a transport layer protocol include the setting up of an
end-to-end connection, end-to-end delivery of data packets, flow control, and
congestion control. There exist simple, unreliable, and connection-less transport
layer protocols such as UDP, and reliable, byte-stream-based, and connection
oriented transport layer protocols such as TCP for wired networks. These
traditional wired transport layer protocols are not suitable for ad hoc wireless
networks due to the inherent problems associated with the latter. The first half
of this chapter discusses the issues and challenges in designing a transport layer
protocol for ad hoc wireless networks, the reasons for performance degradation
when TCP is employed in ad hoc wireless networks, and it also discusses some
of the existing TCP extensions and other transport layer protocols for ad hoc
wireless networks. The previous chapters discussed various networking
protocols for ad hoc wireless networks. However, almost none of them took
into consideration one very important aspect of communication: security.
Due to the unique characteristics of ad hoc wireless networks, which have been
mentioned in the previous chapters, such networks are highly vulnerable to
security attacks compared to wired networks or infrastructure-based wireless
networks (such as cellular networks). Therefore, security protocols being used
in the other networks (wired networks and infrastructure-based wireless
networks) cannot be directly applied to ad hoc wireless networks. The second
half of this chapter focuses on the security aspect of communication in ad hoc
wireless networks. Some of the recently proposed protocols for achieving secure
communication are discussed.
• The transport layer protocol should have mechanisms for congestion control
and flow control in the network.
• It should be able to provide both reliable and unreliable connections as per the
requirements of the application layer.
• The protocol should be able to adapt to the dynamics of the network such as
the rapid change in topology and changes in the nature of wireless links from
uni-directional to bidirectional or vice versa.
• The transport layer protocol should make use of information from the lower
layers in the protocol stack for improving the network throughput.
Figure 3.34 shows the variation of the congestion window in TCP; the slow
start phase is between points A-B. Once the congestion window reaches the
slow-start threshold (in Figure 3.34, the slow-start threshold is initially
taken as 16 for illustration), it grows linearly, adding one MSS to the
congestion window every round-trip time (RTT).
This linear growth, which continues until the congestion window reaches the
receiver window (which is advertised by the TCP receiver and carries the
information about the receiver's buffer size), is called congestion avoidance, as
it tries to avoid increasing the congestion window exponentially, which will
surely worsen the congestion in the network. TCP updates the RTO period with
the current round-trip delay calculated on the arrival of every ACK packet. If
the ACK packet does not arrive within the RTO period, then it assumes that the
packet is lost. TCP assumes that the packet loss is due to the congestion in the
network and it invokes the congestion control mechanism. The TCP sender does
the following during congestion control:
(i) reduces the slow-start threshold to half the current congestion window
or two MSSs, whichever is larger, (ii) resets the congestion window to one
MSS, and (iii) activates the slow-start algorithm.
The TCP sender also assumes a packet loss if it receives three consecutive
duplicate ACKs (DUPACKs) [repeated acknowledgments for the same TCP
segment that was successfully received in-order at the receiver]. Upon reception
of three DUPACKs, the TCP sender retransmits the oldest unacknowledged
segment. This is called the fast retransmit scheme. When the TCP receiver
receives out-of-order packets, it generates DUPACKs to indicate to the TCP
sender about the sequence number of the last in-order segment received
successfully. Among the several extensions of TCP, some of the important
schemes are discussed below. The regular TCP discussed above is also called
TCP Tahoe in most of the existing literature. TCP Reno is similar to TCP
Tahoe, with the addition of fast recovery. On a timeout or the arrival of
three DUPACKs, the TCP Reno sender enters fast recovery, during which (refer
to points C-J-K in Figure 3.34) it retransmits the lost packet, reduces the
slow-start threshold and congestion window size to half the size of the
current congestion window, and increments the congestion window linearly
(one MSS per DUPACK) with every subsequent DUPACK. On reception of a new ACK
(not a DUPACK, i.e., an ACK with a sequence number higher than the highest
sequence number seen so far), TCP Reno resets the congestion window to the
slow-start threshold and enters the congestion avoidance phase as in TCP
Tahoe (points K-L-M in Figure 3.34). J. C. Hoe proposed TCP-New Reno,
extending TCP Reno so that the sender does not exit the fast-recovery state
when a new ACK is received. Instead, it remains in the fast-recovery state
until all the packets outstanding when fast recovery began are acknowledged.
For every intermediate (partial) ACK, TCP-New Reno assumes that the next
packet after the last acknowledged one is lost, and retransmits it. TCP with
selective ACK (SACK) improves the performance of TCP by using selective ACKs
provided by the receiver. The receiver sends a SACK instead of an ACK, which
contains a set of SACK blocks. These SACK blocks carry information about the
recently received packets, which the TCP sender uses while retransmitting
the lost packets.
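The window dynamics described above can be illustrated with a simplified
per-RTT simulation. This is only a sketch (one update per RTT, integer MSS
units, invented function names), not a real TCP implementation:

```python
# Simplified sketch of TCP window growth (per RTT, in MSS units) and the
# Tahoe/Reno loss reactions described above. Not a real TCP stack.

def next_rtt_cwnd(cwnd, ssthresh):
    """One RTT of growth: exponential below ssthresh, linear above."""
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh)   # slow start (points A-B)
    return cwnd + 1                      # congestion avoidance

def tahoe_on_timeout(cwnd):
    """Tahoe: ssthresh <- max(cwnd/2, 2 MSS), cwnd restarts at 1 MSS."""
    return 1, max(cwnd // 2, 2)

def reno_on_triple_dupack(cwnd):
    """Reno fast recovery: both cwnd and ssthresh drop to half of cwnd."""
    half = max(cwnd // 2, 2)
    return half, half

cwnd, ssthresh = 1, 16                   # ssthresh = 16 as in Figure 3.34
trace = []
for _ in range(8):
    trace.append(cwnd)
    cwnd = next_rtt_cwnd(cwnd, ssthresh)
print(trace)  # [1, 2, 4, 8, 16, 17, 18, 19]
```

The trace shows the exponential phase up to the slow-start threshold of 16,
followed by linear growth of one MSS per RTT.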
3.14.2 Why Does TCP Not Perform Well in Ad Hoc Wireless Networks?
The major reasons behind the throughput degradation that TCP faces when used
in ad hoc wireless networks are the following:
• Misinterpretation of packet loss: Traditional TCP was designed for wired
networks, where packet loss is mainly attributed to network congestion and
is detected by the sender through the expiry of the RTO period. Once a
packet loss is detected, the sender node assumes congestion in the network
and invokes a congestion control algorithm. Ad hoc wireless networks
experience a
much higher packet loss due to factors such as high bit error rate (BER) in the
wireless channel, increased collisions due to the presence of hidden terminals,
presence of interference, location-dependent contention, uni-directional links,
frequent path breaks due to mobility of nodes, and the inherent fading properties
of the wireless channel.
• Effect of path length: It is found that the TCP throughput degrades
rapidly with an increase in path length in string (linear chain) topology ad
hoc wireless networks. This is shown in Figure 3.35. The possibility of a
path break increases with path length. Given that the probability of a link
break is pl, the probability of a path break (pb) for a path of length k can
be obtained as pb = 1 - (1 - pl)^k. Figure 3.36 shows the variation of pb
with path length for pl = 0.1. Hence, as the path length increases, the
probability of a path break increases, resulting in the degradation of
throughput in the network.
Figure 3.35. Variation of TCP throughput with path length.
Figure 3.36. Variation of pb with path length (pl = 0.1).
• Misinterpretation of congestion window:
• Multipath routing: There exists a set of QoS routing and best-effort routing
protocols that use multiple paths between a source-destination pair. There are
several advantages in using multipath routing. Some of these advantages include
the reduction in route computing time, the high resilience to path breaks, high
call acceptance ratio, and better security. For TCP, however, multipath
routing can contribute to throughput degradation: it can lead to a
significant number of out-of-order packets, which in turn generate duplicate
acknowledgments (DUPACKs), causing additional power consumption and
unnecessary invocation of congestion control.
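The path-break probability stated earlier, pb = 1 - (1 - pl)^k, can be
evaluated directly; a minimal sketch:

```python
# Path-break probability from the text: pb = 1 - (1 - pl)^k for a k-hop
# path whose links break independently with probability pl each.

def path_break_probability(pl, k):
    """Probability that at least one of the k links on the path breaks."""
    return 1.0 - (1.0 - pl) ** k

# With pl = 0.1 (the value used for Figure 3.36), pb grows quickly with k:
for k in (1, 2, 5, 10):
    print(k, round(path_break_probability(0.1, k), 3))
```

A ten-hop path already has roughly a 65% chance of containing at least one
broken link, which matches the rapid throughput degradation described above.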
(Certain routing protocols for ad hoc wireless networks have explicit
RouteError messages to inform the sender about path breaks so that the
sender can recompute a fresh route to the destination; this is especially
used in on-demand routing protocols such as DSR.) Once the TCP sender
receives the ELFN packet, it disables its retransmission timers and enters a
standby state. In this state, it periodically originates probe packets to
see if a new route has been reestablished. Upon reception of an ACK from the
TCP receiver for the probe packets, it leaves the standby state, restores
the retransmission timers, and continues to function as normal.
Advantages and Disadvantages
TCP-ELFN improves the TCP performance
by decoupling the path break information from the congestion information by
the use of ELFN. It is less dependent on the routing protocol and requires only
link failure notification about the path break. The disadvantages of TCP-ELFN
include the following: (i) when the network is temporarily partitioned, the path
failure may last longer and this can lead to the origination of periodic probe
packets consuming bandwidth and power and (ii) the congestion window used
after a new route is obtained may not reflect the achievable transmission rate
acceptable to the network and the TCP receiver.
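The TCP-ELFN sender behaviour just described can be sketched as a small
state machine. This is a toy illustration; all class, state, and method
names are invented:

```python
# Minimal sketch (invented names) of the TCP-ELFN sender: on an ELFN it
# freezes its retransmission timers and enters a standby state, probing
# periodically until a probe is acknowledged.

class ElfnSender:
    def __init__(self):
        self.state = "NORMAL"
        self.timers_frozen = False

    def on_elfn(self):
        # A path break was reported by an intermediate node.
        self.state = "STANDBY"
        self.timers_frozen = True

    def tick(self):
        # Called periodically; in standby, originate a probe packet.
        if self.state == "STANDBY":
            return "PROBE"
        return None

    def on_probe_ack(self):
        # A probe was acknowledged: the route is back, resume normal TCP.
        self.state = "NORMAL"
        self.timers_frozen = False

s = ElfnSender()
s.on_elfn()
print(s.tick())        # keeps probing while in standby
s.on_probe_ack()
print(s.tick(), s.timers_frozen)
```

Note that the first disadvantage listed above shows up directly here: while
the network stays partitioned, `tick()` keeps emitting probes indefinitely.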
3.14.5 TCP-BuS
TCP with buffering capability and sequence information (TCP-BuS) is similar
to TCP-F and TCP-ELFN in its use of feedback information from an
intermediate node on detection of a path break. But TCP-BuS is more
dependent on the routing protocol than TCP-F and TCP-ELFN. TCP-BuS was
proposed with the associativity-based routing (ABR) protocol as the routing
scheme. Hence, it makes use of special messages such as localized query (LQ)
and REPLY, defined as part of ABR, for finding a partial path. These
messages are modified to carry TCP connection and segment information. Upon
detection of a path break, an upstream intermediate node [called pivot node
(PN)] originates an explicit route disconnection notification (ERDN) message.
This ERDN packet is propagated to the TCP-BuS sender and, upon reception of
it, the TCP-BuS sender stops transmission and freezes all timers and windows
as in TCP-F. The packets in transit at the intermediate nodes from the
TCP-BuS sender to the PN are buffered until a new partial path from the PN
to the TCP-BuS receiver is obtained by the PN. In order to avoid unnecessary
retransmissions, the timers for the buffered packets at the TCP-BuS sender
and at the intermediate nodes up to the PN use timeout values proportional
to the round-trip time (RTT). The intermediate nodes between the TCP-BuS
sender and
the PN can request the TCP-BuS sender to selectively retransmit any of the lost
packets. Upon detection of a path break, the downstream node originates a route
notification (RN) packet to the TCP-BuS receiver, which is forwarded by all the
downstream nodes in the path. An intermediate node that receives an RN packet
discards all packets belonging to that flow. The ERDN packet is propagated to
the TCP-BuS sender in a reliable way by using an implicit acknowledgment and
retransmission mechanism. The PN includes the sequence number of the TCP
segment belonging to the flow that is currently at the head of its queue in the
ERDN packet. The PN also attempts to find a new partial route to the TCP-BuS
receiver, and the availability of such a partial path to the destination is
intimated to the TCP-BuS sender through an explicit route successful
notification (ERSN)
packet. TCP-BuS utilizes the route reconfiguration mechanism of ABR to
obtain the partial route to the destination. Due to this, other routing protocols
may require changes to support TCP-BuS. The LQ and REPLY messages are
modified to carry TCP segment information, including the last successfully
received segment at the destination. The LQ packet carries the sequence number
of the segment at the head of the queue buffered at the PN, and the REPLY
carries the sequence number of the last segment successfully received by the
TCP-BuS receiver. This enables the TCP-BuS receiver to distinguish the
packets lost in transition from those buffered at the intermediate nodes.
This is used to avoid the fast retransmission requests usually generated by
the TCP-BuS receiver when it notices out-of-order packet delivery. Upon a
successful LQ-REPLY process to obtain a new route to the TCP-BuS receiver,
the PN informs the TCP-BuS sender of
the new partial path using the ERSN packet. When the TCP-BuS sender
receives an ERSN packet, it resumes the data transmission. Since there is a
chance for ERSN packet loss due to congestion in the network, it needs to be
sent reliably. The TCP-BuS sender also periodically originates probe packets to
check the availability of a path to the destination. Figure 3.39 shows an
illustration of the propagation of ERDN and RN messages when a link between
nodes 4 and 12 fails.
When a TCP-BuS sender receives the ERSN message, it understands, from the
sequence number of the last successfully received packet at the destination and
the sequence number of the packet at the head of the queue at PN, the packets
lost in transition. The TCP-BuS receiver understands that the lost packets will
be delayed further and hence uses a selective acknowledgment strategy instead
of fast retransmission. These lost packets are retransmitted by the TCP-BuS
sender. During the retransmission of these lost packets, the network congestion
between the TCP-BuS sender and PN is handled in a way similar to that in
traditional TCP.
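The bookkeeping described above — buffering in-transit segments at the pivot
node and using the sequence numbers carried in the ERDN/LQ/REPLY messages to
identify packets lost in transition — can be sketched as follows. All data
structures and names here are invented for illustration:

```python
# Illustrative sketch of TCP-BuS bookkeeping (invented names): segments
# between the sender and the pivot node (PN) are buffered on a path break,
# and the exchanged sequence numbers let the endpoints tell packets lost
# in transition from packets merely buffered at intermediate nodes.

from collections import deque

class PivotNode:
    def __init__(self):
        self.queue = deque()            # in-transit segments buffered at PN

    def buffer_segment(self, seq):
        self.queue.append(seq)

    def make_erdn(self):
        # The ERDN carries the sequence number at the head of PN's queue.
        return {"type": "ERDN", "head_seq": self.queue[0]}

def lost_in_transition(head_seq_at_pn, last_acked_at_receiver):
    # Segments after the receiver's last in-order segment but before the
    # PN's head-of-queue were neither delivered nor buffered: lost.
    return list(range(last_acked_at_receiver + 1, head_seq_at_pn))

pn = PivotNode()
for seq in (7, 8, 9):                       # segments 7-9 buffered at PN
    pn.buffer_segment(seq)
erdn = pn.make_erdn()
print(erdn["head_seq"])                     # 7
print(lost_in_transition(erdn["head_seq"], 4))  # segments 5 and 6 lost
```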
Advantages and Disadvantages
Figure 3.40. An illustration of ATCP thin layer and ATCP state diagram.
Two major advantages of ATCP are (i) it maintains the end-to-end semantics of
TCP and (ii) it is compatible with traditional TCP. These advantages permit
ATCP to work seamlessly with the Internet. In addition, ATCP provides a
feasible and efficient solution to improve throughput of TCP in ad hoc wireless
networks. The disadvantages of ATCP include (i) the dependency on the
network layer protocol to detect the route changes and partitions, which not all
routing protocols may implement and (ii) the addition of a thin ATCP layer to
the TCP/IP protocol stack that requires changes in the interface functions
currently being used.
3.14.7 Split-TCP
One of the major issues that affects the performance of TCP over ad hoc
wireless networks is the degradation of throughput with increasing path length,
as discussed early in this chapter. The short (i.e., in terms of path length)
connections generally obtain much higher throughput than long connections.
This can also lead to unfairness among TCP sessions, where one session may
obtain much higher throughput than other sessions. This unfairness problem is
further worsened by the use of MAC protocols such as IEEE 802.11, which are
found to give a higher throughput for certain link-level sessions, leading to an
effect known as channel capture effect. This effect leads to certain flows
capturing the channel for longer time durations, thereby reducing throughput for
other flows. The channel capture effect can also lead to low overall system
throughput. The reader can refer to Chapter 6 for more details on MAC protocols
and throughput fairness. Split-TCP provides a unique solution to this problem
by splitting the transport layer objectives into congestion control and end-to-end
by splitting the transport layer objectives into congestion control and
end-to-end reliability. Congestion is mostly a local phenomenon, resulting
from high contention and high traffic load in a local region. In the ad hoc
wireless network environment, this demands local solutions. At the same time,
reliability is an end-to-end requirement and needs end-to-end acknowledgments.
In addition to splitting the congestion control and reliability objectives, split-
TCP splits a long TCP connection into a set of short concatenated TCP
connections (called segments or zones) with a number of selected intermediate
nodes (known as proxy nodes) as terminating points of these short connections.
Figure 3.41 illustrates the operation of split-TCP, where a three-segment
split-TCP connection exists between source node 1 and destination node 15. A
proxy node receives TCP packets, reads their contents, stores them in its
local buffer, and sends an acknowledgment to the source (or the previous
proxy). This acknowledgment, called a local acknowledgment (LACK), does not
guarantee end-to-end delivery. The responsibility of further delivery of
packets is assigned to
the proxy node. A proxy node clears a buffered packet once it receives LACK
from the immediate successor proxy node for that packet. Split-TCP maintains
the end-to-end acknowledgment mechanism intact, irrespective of the addition
of zone-wise LACKs. The source node clears the buffered packets only after
receiving the end-to-end acknowledgment for those packets.
Split-TCP has the following advantages: (i) improved throughput, (ii) improved
throughput fairness, and (iii) lessened impact of mobility. Throughput
improvement is due to the reduction in the effective transmission path length
(number of hops in a zone or a path segment). TCP throughput degrades with
increasing path length. Split-TCP has shorter concatenated path segments, each
operating at its own transmission rate, and hence the throughput is increased.
This also leads to improved throughput fairness in the system. Since in split-
TCP, the path segment length can be shorter than the end-to-end path length, the
effect of mobility on throughput is lessened. The disadvantages of split-TCP can
be listed as follows: (i) it requires modifications to the TCP protocol, (ii) the end-to-
end connection handling of traditional TCP is violated, and (iii) the failure of
proxy nodes can lead to throughput degradation. The traditional TCP has end-
to-end semantics, where the intermediate nodes do not process TCP packets,
whereas in split-TCP, the intermediate nodes need to process the TCP packets
and hence, in addition to the loss of end-to-end semantics, certain security
schemes that require IP payload encryption cannot be used. During frequent
path breaks or during frequent node failures, the performance of split-TCP may
be affected.
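The proxy-node behaviour described above can be sketched roughly as follows.
This is a toy illustration with invented names, not the split-TCP
implementation:

```python
# Sketch (invented names) of a split-TCP proxy node: it buffers each TCP
# packet, returns a local acknowledgment (LACK) to its upstream node, and
# clears the packet only when the next proxy LACKs it. End-to-end ACKs are
# still required before the source clears its own buffer.

class ProxyNode:
    def __init__(self, name):
        self.name = name
        self.buffer = {}                # seq -> packet payload

    def receive(self, seq, payload):
        self.buffer[seq] = payload      # take over delivery responsibility
        return ("LACK", seq)            # local ACK to the upstream node

    def on_downstream_lack(self, seq):
        # The next zone now holds the packet; it can be dropped here.
        self.buffer.pop(seq, None)

p = ProxyNode("proxy-7")
print(p.receive(1, b"data"))            # ('LACK', 1)
print(1 in p.buffer)                    # held until the next proxy LACKs it
p.on_downstream_lack(1)
print(1 in p.buffer)
```

The buffering step is also why the failure of a proxy node degrades
throughput: packets held only in its buffer must be recovered end-to-end.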
(To differentiate it from the ad hoc transport protocol, it is referred to
as ACTP in this chapter.) The key design philosophy of ACTP is to leave the
provisioning of reliability to the application layer and to provide simple
feedback about the delivery status of packets to the application layer. ACTP
supports the priority of packets
to be delivered, but it is the responsibility of the lower layers to actually provide
a differentiated service based on this priority. Figure 3.42 shows the ACTP layer
and the API functions used by the application layer to interact with the ACTP
layer. Each API function call to send a packet [SendTo()] contains the additional
information required for ACTP such as the maximum delay the packet can
tolerate (delay), the message number of the packet, and the priority of the
packet. The message number is assigned by the application layer, and it need
not be in sequence. The priority level is assigned for every packet by the
application. It can be varied across packets in the same flow with increasing
numbers referring to higher priority packets. The non-zero value in the message
number field implicitly conveys that the application layer expects a delivery
status information about the packet to be sent. This delivery status is maintained
at the ACTP layer, and is available to the application layer for verification
through another API function, IsACKed<message number>. The delivery status
returned by the IsACKed<message number> function call can reflect (i) a
successful delivery of the packet (ACK received), (ii) a possible loss of
the packet (no ACK received and the deadline has expired), (iii) the
remaining time for the packet (no ACK received but the deadline has not
expired), and (iv) that no state information exists at the ACTP layer
regarding the message under consideration. A zero in the delay field refers
to the highest priority packet,
which requires immediate transmission with minimum possible delay. Any
other value in the delay field refers to the delay that the message can experience.
On getting the information about the delivery status, the application layer can
decide on retransmission of a packet with the same old priority or with an
updated priority. Once the packet's lifetime expires, ACTP clears the
packet's state information and delivery status. The packet's lifetime is calculated
as 4 × retransmit timeout (RTO) and is set as the lifetime when the packet is
sent to the network layer. A node estimates the RTO interval by using the
round-trip time between the transmission time of a message and the time of
reception of the corresponding ACK. Hence, the RTO value may not be
available if there are no existing reliable connections to a destination. A packet
without any message number (i.e., no delivery status required) is handled
exactly the same way as in UDP without maintaining any state information.
One of the most important advantages of ACTP is that it provides the freedom
of choosing the required reliability level to the application layer. Since ACTP is
a light-weight transport layer protocol, it is scalable for large networks.
Throughput is not affected by path breaks as much as in TCP as there is no
congestion window for manipulation as part of the path break recovery. One
disadvantage of ACTP is that it is not compatible with TCP. Use of ACTP in a
very large ad hoc wireless network can lead to heavy congestion in the network
as it does not have any congestion control mechanism.
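The API just described can be sketched as follows. The function and status
names follow the text (SendTo, IsACKed, the four delivery statuses), but the
implementation details are assumptions made for illustration:

```python
# Rough sketch of the ACTP layer API described above. The bookkeeping is
# invented; only the API shape and the four statuses follow the text.

import time

class ACTPLayer:
    def __init__(self):
        self.state = {}   # message number -> {"deadline": ..., "acked": ...}

    def send_to(self, payload, delay, msg_no, priority):
        # msg_no == 0 means "no delivery status wanted": handled like UDP,
        # with no state kept. delay == 0 marks the highest-priority packet.
        if msg_no:
            self.state[msg_no] = {"deadline": time.time() + delay,
                                  "acked": False}
        # ... hand (payload, priority) down to the network layer here ...

    def on_ack(self, msg_no):
        if msg_no in self.state:
            self.state[msg_no]["acked"] = True

    def is_acked(self, msg_no):
        if msg_no not in self.state:
            return "NO_STATE"         # (iv) no state for this message
        entry = self.state[msg_no]
        if entry["acked"]:
            return "DELIVERED"        # (i) ACK received
        if time.time() > entry["deadline"]:
            return "PROBABLY_LOST"    # (ii) deadline expired, no ACK
        return "PENDING"              # (iii) deadline not yet expired

actp = ACTPLayer()
actp.send_to(b"hello", delay=5.0, msg_no=42, priority=1)
print(actp.is_acked(42))   # PENDING
actp.on_ack(42)
print(actp.is_acked(42))   # DELIVERED
print(actp.is_acked(99))   # NO_STATE
```

Based on the status returned, the application can then decide whether to
retransmit with the same or an updated priority, as described above.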
"exponentially averaged," renamed here with a more appropriate term, "weighted average." An
example for this is , where a is an appropriate weight factor and the other terms are self-
explanatory. Unlike TCP, which employs either a decrease of the congestion
window or an increase of the congestion window after a congestion, ATP has
three phases, namely, increase, decrease, and maintain. If the new transmission
rate (R) fed back from the network is beyond a threshold (γ) greater than the
current transmission rate (S) [i.e., R > S(1+γ)], then the current transmission
rate is increased by a fraction (k) of the difference between the two
transmission rates [i.e., S ← S + k(R - S)]. The fraction and threshold are
taken to avoid rapid fluctuations in the
transmission rate and induced load. The current transmission rate is updated to
the new transmission rate if the new transmission rate is lower than the current
transmission rate. In the maintain phase, if the new transmission rate is higher
than the current transmission rate, but less than the above mentioned threshold,
then the current transmission rate is maintained without any change. If an ATP
sender has not received any ACK packets for two consecutive feedback periods,
it undergoes a multiplicative decrease of the transmission rate. After a third such
period without any ACK, the connection is assumed to be lost and the ATP
sender goes to the connection initiation phase during which it periodically
generates probe packets. When a path break occurs, the network layer detects it
and originates an ELFN packet toward the ATP sender. The ATP sender freezes
the sender state and goes to the connection initiation phase. In this phase also,
the ATP sender periodically originates probe packets to know the status of the
path. With a successful probe, the sender begins data transmission again.
Advantages and Disadvantages The major advantages of ATP include improved
performance, decoupling of the congestion control and reliability mechanisms,
and avoidance of congestion window fluctuations. ATP does not maintain any
per flow state at the intermediate nodes. The congestion information is gathered
directly from the nodes that experience it. The major disadvantage of ATP is the
lack of interoperability with TCP. As TCP is a widely used transport layer
protocol, interoperability with TCP servers and clients in the Internet is
important in many applications. For large ad hoc wireless networks, the fine-
grained per-flow timer used at the ATP sender may become a scalability
bottleneck in resource-constrained mobile nodes.
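The three ATP rate-update phases above can be sketched as follows. The exact
update step and the parameter values are assumptions (the text specifies
only "a fraction k of the difference" and a threshold γ):

```python
# Sketch of the three ATP rate-update phases described above: increase,
# decrease, and maintain. The step S <- S + k*(R - S) and the parameter
# values are illustrative assumptions, not the protocol's exact constants.

def atp_update_rate(S, R, gamma=0.1, k=0.25):
    """New transmission rate given current rate S and network feedback R."""
    if R > S * (1 + gamma):
        return S + k * (R - S)   # increase: damped step toward R
    if R < S:
        return R                 # decrease: adopt the lower rate at once
    return S                     # maintain: R higher, but within threshold

print(atp_update_rate(100, 200))  # increase phase
print(atp_update_rate(100, 80))   # decrease phase
print(atp_update_rate(100, 105))  # maintain phase
```

The damped increase and the threshold γ correspond to the text's aim of
avoiding rapid fluctuations in the transmission rate and the induced load.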
• Lack of association: Since these networks are dynamic in nature, a node
can join or leave the network at any point of time. If no proper
authentication mechanism is used for associating nodes with a network, an
intruder would be able to join the network quite easily and carry out
his/her attacks.
A firewall is used to separate a local network from the outside world. It is
software that works closely with a router program and filters all packets
entering the network to determine whether or not to forward those packets
toward their intended destinations. A firewall protects the resources of a
private network from malicious intruders on foreign networks such as the
Internet. In an ad hoc wireless network, the firewall software could be
installed on each node of the network.
Figure 3.43 shows a classification of the different types of attacks possible in ad
hoc wireless networks. The following sections describe the various attacks listed
in the figure.
3.19.1 Network Layer Attacks
This section lists and gives brief descriptions of the attacks pertaining to
the network layer in the network protocol stack.
– Distributed DoS attack: A more severe form of the DoS attack is the
distributed DoS (DDoS) attack. In this attack, several adversaries that are
distributed throughout the network collude and prevent legitimate users from
accessing the services offered by the network.
• Impersonation: In impersonation attacks, an adversary assumes the identity
and privileges of an authorized node, either to make use of network resources
that may not be available to it under normal circumstances, or to disrupt the
normal functioning of the network by injecting false routing information into
the network. An adversary node could masquerade as an authorized node using
several methods. It could by chance guess the identity and authentication details
of the authorized node (target node), or it could snoop for information regarding
the identity and authentication of the target node from a previous
communication, or it could circumvent or disable the authentication mechanism
at the target node. A man-in-the-middle attack is another type of
impersonation attack. Here, the adversary reads, and possibly modifies,
messages between two end nodes without letting either of them know that they
have been attacked. Suppose two nodes X and Y are communicating with each
other; the adversary impersonates node Y with respect to node X and
impersonates node X with respect to node Y, exploiting the lack of
third-party authentication of the communication between nodes X and Y.
Device Tampering
Unlike nodes in a wired network, nodes in ad hoc wireless networks are
usually compact, soft, and hand-held in nature. They could get damaged or
stolen easily.
Key arbitration schemes use a central arbitrator to create and distribute keys
among all participants. Hence, they are a class of key transport schemes.
Networks which have a fixed infrastructure use the AP as an arbitrator, since it
does not have stringent power or computation constraints. In ad hoc wireless
networks, the problem with implementation of arbitrated protocols is that the
arbitrator has to be powered on at all times to be accessible to all nodes. This
leads to a power drain on that particular node. An alternative would be to make
the keying service distributed, but simple replication of the arbitration at
different nodes would be expensive for resource-constrained devices and would
offer many points of vulnerability to attacks. If any one of the replicated
arbitrators is attacked, the security of the whole system breaks down.
Key Agreement
Most key agreement schemes are based on asymmetric key algorithms. They are
used when two or more people want to agree upon a secret key, which will
then be used for further communication. Key agreement protocols are used to
establish a secure context over which a session can be run, starting with
many parties who wish to communicate and an insecure channel. In group key
agreement schemes, each participant contributes a part to the secret key.
These need the least amount of preconfiguration, but such schemes have high
computational complexity. The most popular key agreement schemes use the
Diffie-Hellman exchange, an asymmetric key algorithm based on discrete
logarithms.
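As an illustration of the Diffie-Hellman exchange just mentioned, here is a
toy version over a deliberately tiny prime. Real deployments use much larger
primes or elliptic-curve groups; the point is only the discrete-logarithm
mechanics:

```python
# Toy Diffie-Hellman exchange. The modulus p = 23 is far too small for any
# real use; it only demonstrates the discrete-log-based key agreement.

import secrets

p, g = 23, 5                      # public: prime modulus and generator

a = secrets.randbelow(p - 2) + 1  # A's private exponent
b = secrets.randbelow(p - 2) + 1  # B's private exponent

A = pow(g, a, p)                  # A sends B: g^a mod p
B = pow(g, b, p)                  # B sends A: g^b mod p

key_a = pow(B, a, p)              # A computes (g^b)^a mod p
key_b = pow(A, b, p)              # B computes (g^a)^b mod p

assert key_a == key_b             # both sides agree on the same secret
print(key_a)
```

An eavesdropper sees only p, g, A, and B; recovering a or b from them is the
discrete logarithm problem on which the scheme's security rests.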
Public key infrastructure (PKI) enables the easy distribution of keys and is a
scalable method. Each node has a public/private key pair, and a certifying
authority (CA) can bind the keys to the particular node. But the CA has to be
present at all times, which may not be feasible in ad hoc wireless networks. It is
also not advisable to simply replicate the CA at different nodes. A scheme
based on threshold cryptography has been proposed in which n servers exist
in the ad hoc wireless network, out of which any (t+1) servers can jointly
perform any arbitration or authorization successfully, but t servers cannot
perform the same. Hence, up to t compromised servers can be tolerated. This
is called an (n, t + 1) configuration, where n ≥ 3t + 1. To sign a
certificate, each server
generates a partial signature using its private key and submits it to a combiner.
The combiner can be any one of the servers. In order to ensure that the key is
combined correctly, t + 1 combiners can be used to account for at most t
malicious servers. Using t + 1 partial signatures (obtained from itself and t other
servers), the combiner computes a signature and verifies its validity using a
public key. If the verification fails, it means that at least one of the t + 1 keys is
not valid, so another subset of t + 1 partial signatures is tried. If the combiner
itself is malicious, it cannot get a valid key, because the partial signature of
itself is always invalid. The scheme can be applied to asynchronous networks,
with no bound on message delivery or processing times. This is one of the
strengths of the scheme, as the requirement of synchronization makes the
system vulnerable to DoS attacks. An adversary can delay a node long enough
to violate the synchrony assumption, thereby disrupting the system. Sharing a
secret in a secure manner alone does not completely fortify a system. Mobile
adversaries can move from one server to another, attack them, and get hold of
their private keys. Over a period of time, an adversary can have more than t
private keys. To counter this, share refreshing has been proposed, by which
servers create a new independent set of shares (the partial signatures which are
used by the servers) periodically. Hence, to break the system, an adversary
has to attack and capture more than t servers within the period between two
successive refreshes; otherwise, the earlier share information will no
longer be valid. This improves protection against mobile adversaries.
Self-Organized Public Key Management for Mobile Ad Hoc Networks
A completely self-organized public key system has been proposed for ad hoc
wireless networks. This system makes use of absolutely no infrastructure –
TTP, CA, or server – even during
initial configuration. The users in the ad hoc wireless network issue certificates
to each other based on personal acquaintance. A certificate is a binding between
a node and its public key. These certificates are also stored and distributed by
the users themselves. Certificates are issued only for a specified period of time
and contain their time of expiry along with them. Before it expires, the
certificate is updated by the user who had issued the certificate. Initially, each
user has a local repository consisting of the certificates issued by him and the
certificates issued by other users to him. Hence, each certificate is initially
stored twice, by the issuer and by the person for whom it is issued. Periodically,
certificates from neighbors are requested and the repository is updated by
adding any new certificates. If any of the certificates are conflicting (e.g., the
same public key to different users, or the same user having different public
keys), it is possible that a malicious node has issued a false certificate. A node
then labels such certificates as conflicting and tries to resolve the conflict.
Various methods exist to compare the confidence in one certificate over another.
For instance, another set of certificates obtained from another neighbor can be
used to take a majority decision. This can be used to evaluate the trust in other
users and detect malicious nodes. If the certificates issued by some node are
found to be wrong, then that node may be assumed to be malicious. A
certificate graph is defined as a graph whose vertices are public keys of
some nodes and whose edges are public-key certificates issued by users. When
a user
X wants to obtain the public key of another user Y, he/she finds a chain of valid
public key certificates leading to Y. The chain is such that the first hop uses an
edge from X, that is, a certificate issued by X, the last hop leads into Y (this is a
certificate issued to Y), and all intermediate nodes are trusted through the
previous certificate in the path. The protocol assumes that trust is transitive,
which may not always be valid. Having seen the various key management
techniques employed in ad hoc wireless networks, we now move on to discuss
some of the security-aware routing schemes for ad hoc wireless networks.
One of the solutions for the blackhole problem is to restrict the
intermediate nodes from originating RouteReply packets. Only the destination
node would be permitted to initiate RouteReply packets. Security is still
not completely assured, since the malicious node may lie in the path chosen
by the destination node. Also, the delay involved in the route discovery
process increases as the size of the network increases. In another solution
to this problem, as soon as the RouteReply packet is received from one of
the intermediate nodes, another RouteRequest packet is sent from the source
node to the neighbor node of the intermediate node in the path. This is to
ensure that such a path exists from the intermediate node to the destination
node. For example, let the source node send RouteRequest packets and receive
a RouteReply through the intermediate malicious node M. The RouteReply
packet of node M contains information regarding its next-hop neighbor nodes.
Let it contain information about the neighbor node E. Then, as shown in
Figure 3.48, the source node S sends FurtherRouteRequest packets to this
neighbor node E. Node E responds by sending a FurtherRouteReply packet to
source node S. Since node M is a
malicious node which is not present in the routing list of node E, the
FurtherRouteReply packet sent by node E will not contain a route to the
malicious node M. But if it contains a route to the destination node D, then the
new route to the destination through node E is selected, and the earlier selected
route through node M is rejected. This protocol completely eliminates the
blackhole attack caused by a single attacker. The major disadvantage of this
scheme is that the control overhead of the routing protocol increases
considerably. Also, if the malicious nodes work in a group, this protocol fails
miserably.
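The FurtherRouteRequest check described above can be sketched as a simple
decision rule. The data structures are invented for illustration; `e_routes`
stands for the set of nodes that neighbor E reports routes to in its
FurtherRouteReply:

```python
# Sketch of the FurtherRouteRequest verification described above (data
# structures invented). The earlier route through the suspect node M is
# always discarded; a new route via E is accepted only if E reports a
# route to the destination and, as the text argues, none to M.

def choose_route(e_routes, suspect_m, destination):
    if destination in e_routes and suspect_m not in e_routes:
        return ("route_via", "E")   # new route via E replaces the old one
    return None                     # no trustworthy route found

print(choose_route({"F", "D"}, "M", "D"))  # route via E is accepted
print(choose_route({"F"}, "M", "D"))       # no route to D: nothing accepted
```

As noted above, this rule defeats only a single blackhole attacker: colluding
malicious nodes can still vouch for each other's fabricated routes.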