
International Journal of Wireless & Mobile Networks (IJWMN) Vol. 3, No. 2, April 2011

OPTIMIZING POWER AND BUFFER CONGESTION ON WIRELESS SENSOR NODES USING CAP (COORDINATED ADAPTIVE POWER) MANAGEMENT TECHNIQUE
Gauri Joshi and Prabhat Ranjan

Dhirubhai Ambani Institute of Information and Communication Technology


Gandhinagar, India
gauri_joshi@daiict.ac.in, prabhat_ranjan@daiict.ac.in

ABSTRACT
The small size and low cost of wireless sensor nodes impose two main constraints: limited hardware capabilities and a very limited battery power supply. Power optimization is therefore required at every level in order to obtain a long-lived Wireless Sensor Network (WSN). Prolonging the lifespan of the network is the prime concern in these highly energy-constrained networks, since only a sufficient number of active nodes can ensure proper coverage of the sensing field and connectivity of the network. If a large number of nodes deplete their batteries over a short time span, the network cannot be maintained. A long-lived network therefore requires long-lived sensor nodes, and power optimization at the node level becomes as important as power optimization at the network level. In this paper we make the case for a dynamically adaptive sensor node that optimizes power at the individual node while also reducing data loss caused by buffer congestion.
We analyze a sensor node with fixed service rates (processing rate and transmission rate) and a sensor node with variable service rates, comparing their power consumption and the data loss in small buffers under varying traffic (workload) conditions. Dynamic Voltage and Frequency Scaling (DVFS) is used to vary the processing rate and Dynamic Modulation Scaling (DMS) to vary the transmission rate. Comparing the dynamically adaptive node with the fixed-rate node shows an improvement in node lifetime as well as a reduction in data loss due to buffer congestion. We further coordinate the service rates of the computation and communication units on a node, which gives rise to Coordinated Adaptive Power (CAP) management. The main objective of CAP management is to save power during normal periods and to reduce data loss due to buffer congestion (overflow) during catastrophic periods. With CAP management the power consumption of a sensor node is changed adaptively: the power consumption of the processing unit and the communication unit is coordinated and adjusted with respect to the workload. This coordination yields better energy optimization and avoids much of the data loss that would otherwise occur before transmission because of the limited buffer sizes.

KEYWORDS
Power Optimization, Buffer Overflow, Wireless Sensor Nodes, Coordination between DVFS and DMS

DOI: 10.5121/ijwmn.2011.3218



1. UNDERSTANDING THE PROBLEM AND LITERATURE REVIEW


Micro sensor nodes are small, so the battery carried on a node is also small. This battery can supply only a limited amount of energy, yet nodes are expected to work reliably over long periods that may extend to a few years. Manually recharging or replacing the batteries is not feasible because of the very large number of nodes and because nodes are often deployed at remote or physically inaccessible locations. The limited-energy constraint could be overcome if the battery were recharged using energy scavenging techniques: electrical energy can be scavenged or harvested from environmental sources such as solar radiation, wind or water flow. Continuous research [1-4] is going on to develop energy-harvesting WSNs, as this appears to overcome the stringent power constraint. However, the energy demands of WSN applications are large because of wireless communication, and the availability of environmental power cannot be guaranteed, since energy harvesting opportunities vary from place to place and from time to time. The energy consumption of the nodes also varies because of the uneven distribution of workload or network traffic. All these constraints emphasize that even sensor nodes with energy scavenging capability need power management.
Our focus is on power optimization at the sensor node level, with the ultimate aim of increasing the lifetime of the Wireless Sensor Network. In a wireless sensor node the largest share of power is consumed by wireless communication, while data processing consumes a moderate amount. We try to optimize the power consumption of the processor and the radio dynamically by adopting service rates (processing rate and transmission rate, respectively) that match the instantaneous workload. For a long-lived Wireless Sensor Network, low-power hardware is the basic requirement [5], along with energy-efficient protocols [6]. To increase the lifetime of wireless networks with battery-operated nodes, various energy management techniques have been proposed in the literature; an overview is given in [7]. Most researchers have suggested reducing the wireless communication power [8, 9]. In [10-12], shutting down the processing unit has been suggested in order to optimize topology and communication range, since shutting down saves power. Information about network traffic and routes is exploited in [13] to decide when to move a node from the ON state to the sleeping state. Although wireless communication is the major power-consuming activity, for transmission, reception and idle listening alike, the power consumption of the microcontroller or processor also has a significant share in the energy budget of wireless sensor nodes. Emerging real-time WSN applications such as body sensor networks need powerful microcontrollers that consume even more power [14, 15]. In [16] the energy consumption of the widely adopted Mica2 sensor node is studied in detail, and the processor is shown to account for 28% to 86% of the total power consumed, roughly 50% on average. It is therefore equally important to optimize the power consumption of the computing unit along with the communication unit. Dynamic Frequency Scaling (DFS) and Dynamic Voltage Scaling (DVS) techniques reduce power consumption efficiently when the processor is not fully loaded [17]. Topology control and power-aware routing protocols reduce only the transmission power of the radio, and hence are not suitable for applications with low workload or for radio platforms with high idle power consumption.
Sleep scheduling protocols reduce only the idle power consumption and hence are not effective when the network workload is high or when the idle power consumption of the radio is low. This clearly indicates the need for an adaptive sensor node architecture that can handle both low-traffic situations (normal periods) and high-traffic situations (catastrophic periods) efficiently; that is, we want no power wastage during normal periods and no data loss during catastrophic periods.
Figure 1 shows the basic block schematic of a wireless sensor node.

Figure 1. Basic architecture of a sensor node


Data arrives in the input buffer from two sources: data sensed by the node's own sensors and data received from neighboring nodes. The processor processes this data, the processed data is placed in the output buffer, and the transmitter transmits it. Sensor nodes with fixed service rates are designed to handle moderate data arrival rates; if they are instead designed for low data rates, data is lost through buffer overflow during catastrophic periods, and if they are designed for peak data rates, they remain idle for long periods and the resulting power wastage shortens the lifetime of the nodes.
Low duty cycling applied on top of low-power hardware further increases the lifetime of the sensor nodes, and long-lived sensor nodes keep the network working over a longer period. We have considered a rotational sleep schedule (time-triggered wake-up), as it provides better network coverage and connectivity until a sufficient number of nodes fail. Since sensor networks are mainly deployed to sense rare events, most of the time there is little traffic in the network and little work for the sensor nodes to do. The problem arises when a node is turned ON but has little work to do and hence remains idle for a long duration. This idle-state power consumption is pure wastage, since power is consumed for doing nothing: the longer the idle period, the greater the wastage. It is therefore important to control the power consumption during the ON state by reducing the idle time. Here we classify the time over which sensor nodes are alive into two categories: normal periods and catastrophic periods. A normal period is a time interval in which the event of interest has not occurred and everything is normal, resulting in small data arrival rates at the input buffer of the sensor nodes. A catastrophic period is a time interval in which the event occurs: a lot of information is sensed by the node and a lot of data is received from neighboring nodes, resulting in peak data arrival rates.
If the service rates of the sensor nodes are kept small in order to reduce idle power consumption during normal periods, the large amount of data arriving at the node during a catastrophic period cannot be handled. Sensor nodes, being very small devices, have very small buffers to store data, and if these are not served at an adequate rate the buffers overflow and data is lost. Hence we require wireless sensor nodes that have a long lifetime while still providing the desired Quality of Service (QoS).
Reducing the idle power consumption during normal periods and reducing buffer overflow during catastrophic periods are equally important.
A sensor node capable of variable service rates can handle both issues. Working at small service rates during normal periods consumes less power, and the reduction in idle power consumption increases the lifetime; offering higher service rates during catastrophic periods consumes more power but reduces the data loss caused by buffer overflow.
We have modeled a sensor node with fixed service rates as well as a sensor node with variable service rates and compared their performance.
Data loss in a wireless communication system arises in two ways. The first possibility is that data is transmitted successfully but is lost on the communication channel due to channel impairments, heavy traffic resulting in collisions, the absence of a connecting node, and so on. The second possibility is data loss before transmission takes place: with small hardware that has tight constraints on memory and buffer sizes, data can be lost through buffer overflow. The trade-off between packet loss due to buffer overflow and packet loss due to transmission errors has been studied in [18] with the aim of increasing the overall system throughput. Data loss after transmission can be controlled to some extent by using a proper modulation technique, channel encoding, error correcting codes and so on. Data loss before transmission has to be taken care of by the individual node. This kind of loss is very prominent in wireless networks in which each node is highly hardware constrained as well as energy constrained; a wireless sensor network is a good example. Data loss within the node can also result from network congestion: data is lost in the node itself, before being transmitted, through overflow of the output buffer (data loss due to buffer congestion). Data processed by the processor is placed in the output buffer and waits to be transmitted. If the channel condition is poor, the transmitter does not transmit, in order to avoid retransmissions. If the receiver cannot receive a packet, perhaps because of a collision, the transmitter receives no acknowledgement (ACK) and must keep the transmitted packet in the queue for retransmission. Under such conditions the probability of data loss due to output buffer congestion increases (under either a tail-drop or a head-drop policy), because the output buffer of a wireless sensor node is very small. This not only causes data loss but also wastes CPU power. It is therefore wise to reduce the processing speed, and so save processor power, when the transmission speed has been reduced and packets are waiting in the output queue. In [14] the effect of network congestion on buffer congestion is analyzed, and it is shown that reducing the speed of the microcontroller during congestion periods saves power. We have discussed how network congestion leads to buffer congestion and thereby to data and power loss. Now consider the situation in which there is no network congestion and the channel condition is good. In monitoring-type sensor network applications, most of the time no event is occurring and there is very little traffic in the network; we call this the normal period. When an event occurs and is sensed and detected by the sensor nodes, the traffic in the network increases; we call this the catastrophic period. This increased traffic must be handled with an adequate processing and transmission speed, otherwise data is lost through buffer overflow and information about the event of interest, although sensed, does not reach the sink node, which may defeat the purpose of deploying the WSN. Figure 2 depicts the scope of the CAP management technique for optimizing power and buffer overflow at the sensor node level.

2. POWER OPTIMIZATION AT SENSOR NODES


As discussed earlier, data computation and wireless data communication are the main power-consuming activities. Power can therefore be optimized by varying the computation (processing) rate and the communication (transmission) rate with respect to the instantaneous amount of data to be processed or transmitted. Figure 2 elaborates the scope of the coordinated power management technique.
2.1. Optimizing computing energy using DVFS
In most WSN applications sensor nodes have a time-varying computational load, and hence peak system performance is not always required. DVFS exploits this fact by dynamically adapting the processor's supply voltage and operating frequency to satisfy the instantaneous processing requirement. The concept of dynamic voltage scaling is elaborated by Sinha, Chandrakasan et al. [17, 20].

Figure 2. Scope of CAP Management on a Wireless Sensor Node

Here the performance of the processor is traded for energy efficiency. Performance is degraded in the sense that processing takes more time and introduces computational delay (latency), which is the price paid for saving computational energy. This means of saving computational energy can therefore be applied only within the latency constraint, so that the performance of the network is not adversely affected; the latency constraint differs from application to application. Several modern processors, such as Intel's StrongARM and Transmeta's Crusoe, support scaling of voltage and frequency.
Reducing the operating clock frequency results in linear energy savings, and additional quadratic energy savings can be obtained if the supply voltage is reduced to the minimum required for that particular frequency.
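For reference, the standard first-order CMOS relations behind this argument (a textbook approximation, not derived in this paper) are

$$ P_{dyn} \approx C_{eff}\, V_{dd}^{2}\, f, \qquad E_{task} \approx C_{eff}\, V_{dd}^{2}\, N_{cycles}, $$

where $C_{eff}$ is the effective switched capacitance and $N_{cycles}$ the number of cycles a task needs: lowering $f$ reduces power, and lowering $V_{dd}$ to the minimum that still supports the chosen $f$ gives an additional, roughly quadratic, reduction in the energy spent per task.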
2.2. Optimizing communication energy using DMS
M-ary modulation is the key to adaptive modulation: the number of bits per symbol (the constellation size) can be changed adaptively, which yields a variable data rate at a constant symbol rate. In [21] it has been shown that MQAM modulation is efficient for short-range communication. Modulation scaling is a technique for decreasing the energy consumed during data transmission, and data transmission itself constitutes a major portion of the total energy consumption in wireless communication systems. Modulation scaling trades energy consumption against transmission delay (latency).
The concept of dynamic modulation scaling (DMS) is elaborated by Schurgers et al. [22]. Reducing the transmission time reduces the energy consumption, so in general it is better to transmit as fast as possible and then turn the radio OFF. Hence it is desirable to transmit multiple bits per symbol (M-ary modulation) in order to reduce the on-time of the transmitter. Unfortunately, the start-up time of today's transceivers is long (hundreds of microseconds), and frequent start-ups increase the power consumed by the transmitter's electronics far more than the output power actually transmitted, so switching the transmitter ON and OFF frequently is not wise and may not yield significant energy savings. In M-ary modulation (M = 2^b), as the constellation size b (number of bits per symbol) increases, the power consumed by the hardware as well as the output power increases, so for a particular transmission system the value of b should be optimized for the chosen symbol rate. The energy consumption of data transmission is proportional to the transmission data rate [23, 24]: increasing the constellation size b increases the energy consumed per transmitted bit while decreasing the associated transmission delay. Dynamic modulation scaling is thus useful for achieving multiple data rates, with dynamic power scaling providing the energy savings.
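As a rough illustration of this trade-off (a first-order model in the spirit of the modulation-scaling literature, with symbols of our own choosing, not results from this paper): for a fixed symbol rate $R_s$, a packet of $L$ bits occupies the channel for

$$ T_{on} \approx \frac{L}{b\, R_s}, \qquad E_{bit} \approx \frac{P_{elec} + P_{tx}(b)}{b\, R_s}, \qquad P_{tx}(b) \propto \left(2^{b} - 1\right), $$

so increasing the constellation size $b$ shortens the on-time but raises both the electronics power and the radiated power, which is why $b$ must be optimized for a given symbol rate and latency constraint rather than simply maximized.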

3. SYSTEM MODEL
From its architecture, a sensor node can be viewed as two systems connected in tandem (the output of the first system is the input of the second). Figure 3 shows the tandem queue model of a sensor node with fixed service rates.

Figure 3. Tandem queue model of wireless sensor node (fixed service rate)
Let

A_n = number of packets arriving during the nth slot

B_1 = maximum capacity of the input buffer

B_2 = maximum capacity of the output buffer

f = number of packets served by the processor in each time slot

b = number of packets transmitted by the transmitter in each time slot

M_n = input buffer occupancy at the start of the nth slot

N_n = output buffer occupancy at the start of the nth slot

We consider a late arrival system (LAS), in which data packets enter the system just before a slot ends and are served in the next slot. The input buffer occupancy at the start of a slot depends on the occupancy at the start of the previous slot, the number of packets served, and the number of packets that arrived during the previous slot:

M_n = min{ max{M_{n-1} - f, 0} + A_{n-1}, B_1 }

Similarly, the output buffer occupancy can be written as

N_n = min{ max{N_{n-1} - b, 0} + min{M_{n-1}, f}, B_2 }
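These recurrences translate directly into a slot-by-slot simulation. The sketch below is our own minimal illustration (the function and variable names, and the Poisson arrival process used in the driver loop, are assumptions, not the paper's simulator):

```python
import numpy as np

def step(M_prev, N_prev, A_prev, f, b, B1, B2):
    """One late-arrival-system slot of the tandem queue model.

    M_prev, N_prev : input/output buffer occupancy at the start of the previous slot
    A_prev         : packets that arrived during the previous slot
    f, b           : packets served per slot by the processor / transmitter
    B1, B2         : input / output buffer capacities
    """
    served = min(M_prev, f)                        # packets handed on to the output buffer
    M_next = min(max(M_prev - f, 0) + A_prev, B1)  # input buffer recurrence
    N_next = min(max(N_prev - b, 0) + served, B2)  # output buffer recurrence
    return M_next, N_next

# Illustrative driver: Poisson arrivals, fixed service rates.
rng = np.random.default_rng(0)
M = N = 0
for _ in range(1000):
    A = rng.poisson(2.0)                           # assumed normal-period arrival rate
    M, N = step(M, N, A, f=4, b=4, B1=10, B2=10)
```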

3.1. Fixed service rates (No DVFS, No DMS)


In most WSN applications only a small workload has to be handled during normal periods. When the number of packets in a buffer is less than the number of packets that can be served in one time slot of duration ∆, the server remains idle for part of the slot.

I_{1,n} = idle period of the processor in the nth time slot = max{ (f - M_{n-1})/f, 0 } ∆

Similarly,

I_{2,n} = idle period of the transceiver in the nth time slot = max{ (b - N_{n-1})/b, 0 } ∆

For a fixed-service-rate sensor node, M_n is a function of the service rate and the arrival rate; since the service rate is fixed, M_n depends on the arrival rate only. During normal periods the arrival rate is very small, which keeps M_n small, and as a result the processor remains idle for long periods. Conversely, during catastrophic conditions the number of packets arriving per slot increases, but because the service rate is fixed and the buffer is small, data loss due to buffer congestion (buffer overflow) occurs, under either the head-drop or the tail-drop scheme.
Input buffer overflow (OV1) occurs when M_n = B_1, and output buffer overflow (OV2) occurs when N_n = B_2.
OV1_n = max{ M_{n-1} - f, 0 } + A_{n-1} - B_1

OV2_n = max{ N_{n-1} - b, 0 } + min{ M_{n-1}, f } - B_2
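A sketch of how these overflow counts can be tracked alongside the occupancy update for the fixed-rate node (our own illustration; the overflow is clipped at zero so that it counts dropped packets):

```python
def step_with_overflow(M_prev, N_prev, A_prev, f, b, B1, B2):
    """Fixed-service-rate slot update that also reports dropped packets."""
    in_backlog = max(M_prev - f, 0) + A_prev            # demand on the input buffer
    out_backlog = max(N_prev - b, 0) + min(M_prev, f)   # demand on the output buffer
    ov1 = max(in_backlog - B1, 0)                       # packets dropped at the input buffer
    ov2 = max(out_backlog - B2, 0)                      # packets dropped at the output buffer
    return min(in_backlog, B1), min(out_backlog, B2), ov1, ov2
```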

3.2. Only DVFS, No DMS (variable f and fixed b)


In this case the input buffer status depends on the arrival rate as well as the service rate f. The arrival rate cannot be controlled, but the service rate can be used as a control knob to reduce the idle period during normal times and buffer overflow during catastrophic periods. In this case

M_n = min{ max{M_{n-1} - f_{n-1}, 0} + A_{n-1}, B_1 }

This value of M_n decides the value of the service rate f_n: the smaller M_n is, the smaller the f_n that is selected. Reducing the service rate reduces the power consumption and lengthens the service time (DVFS), which helps to reduce the idle power wastage.

I_{1,n} = idle period of the processor in the nth time slot = max{ (f_n - M_{n-1})/f_max, 0 } ∆

By reducing the value of f_n the idle time is reduced and power is saved.

The input buffer overflow is given by

OV1_n = max{ M_{n-1} - f_{n-1}, 0 } + A_{n-1} - B_1

During catastrophic conditions the arrival rate increases and M_n becomes larger. Data loss due to input buffer overflow can then be reduced by increasing the value of f_n. To make the service rate buffer-adaptive we scale f_n with M_n:

f_n = (M_n · f_max) / B_1

where f_max is the maximum supported service rate.
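A minimal sketch of this buffer-adaptive rate selection, quantized to a small set of supported DVFS levels (the levels themselves are an illustrative assumption, since real processors offer only a few discrete voltage-frequency pairs):

```python
F_LEVELS = [1, 2, 4, 8]   # assumed supported processing rates, in packets per slot
F_MAX = F_LEVELS[-1]

def select_f(M_n, B1):
    """Scale the processing rate with input buffer occupancy, f_n = M_n * f_max / B1,
    rounded up to the nearest supported DVFS level."""
    target = (M_n * F_MAX) / B1
    for f in F_LEVELS:
        if f >= target:
            return f
    return F_MAX
```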

But because the second server in the tandem queue (the transmitter) works at a fixed service rate, there is still a possibility of data loss during a catastrophe and of power wastage during idle periods. The output buffer occupancy is

N_n = min{ max{N_{n-1} - b, 0} + min{M_{n-1}, f_n}, B_2 }

and the output buffer overflow can be written as

OV2_n = max{ N_{n-1} - b, 0 } + min{ M_{n-1}, f_n } - B_2

In this equation f_n varies at the first server, but there is no control knob at the second server to limit the overflow. During a catastrophe, as A_n increases, f_n at the first server increases, and N_n increases with it. Since b is constant and B_2 is fixed, the output buffer overflow grows; this not only causes data loss but, because the lost data has already been processed, the processing power spent on it is also wasted. Similarly, during normal conditions, as A_n decreases, f_n is reduced. This lowers the packet arrival rate into the output buffer, but since the second server works at a fixed rate (high enough to handle the worst case), it remains idle for long periods and more power is wasted.

I_{2,n} = idle period of the transceiver in the nth time slot = max{ (b - N_{n-1})/b, 0 } ∆

N_{n-1} becomes smaller because of the reduced f_n, while b remains constant and moderately high, so I_{2,n} increases.
Implementing only DVFS is therefore not enough: it increases the loss of processed data and of processing power during a catastrophe, and the idle power wastage during normal periods remains.

3.3. Only DMS, No DVFS (variable b and fixed f)


In this case the service rate of the processor, f, is fixed and kept fairly high so that a sufficient number of packets can be handled during catastrophic conditions. The transmission rate b is varied according to the number of packets in the output buffer. During normal conditions f is much higher than the arrival rate A_n, so the first server (the processor) remains idle for long periods, and idle power wastage is most likely there. The number of packets entering the output buffer is also small during normal conditions, so the second server (the transmitter) selects a low transmission rate b, and lowering the transmission rate reduces the power consumption (RF power, via DMS). During catastrophic conditions A_n increases, and if f is not sufficiently high, data loss may occur at the input buffer. At the output buffer, data loss due to buffer congestion is reduced by increasing the transmission rate (more bits per symbol).

OV2_n = max{ N_{n-1} - b_n, 0 } + min{ M_{n-1}, f } - B_2

Here, although b_n changes, OV2_n is limited by f, which is fixed. So implementing only DMS is not enough either.

3.4. Both DVFS and DMS integration (variable b and f)


From the discussion above, it is highly desirable to have both f and b change with the number of packets waiting for service in the buffers. Figure 4 shows the sensor node architecture with a variable processing rate and a variable transmission rate. Here a monitor checks the queue length and the probability of buffer overflow; the processing rate of the processor is varied according to the principle of dynamic voltage/frequency scaling (DVFS), and the data transmission rate is varied using dynamic modulation scaling (DMS).

Figure 4. A sensor node with variable service rates

As seen earlier, the input and output buffer occupancies can be written as functions

M_n = g(A_n, f), 0 ≤ M_n ≤ B_1

N_n = g(f, b), 0 ≤ N_n ≤ B_2

For stability of the system f ≥ A_n, so the departure rate of the first server is simply its arrival rate A_n, and we can approximate

N_n ≈ g(A_n, b), 0 ≤ N_n ≤ B_2

This shows that the occupancy of the input buffer as well as that of the output buffer is a function of the arrival rate A_n. Implementing DVFS (on the processor) and DMS (on the transmitter) together on a sensor node, so that the service rates f and b change with the input and output buffer occupancies respectively, saves power during normal periods and reduces data loss from buffer overflow during catastrophic periods. Moreover, since both buffer occupancies are (directly proportional) functions of the arrival rate A_n, there is no need to monitor the input and output buffers separately: by monitoring the input buffer alone it is possible to select the required f and b. We can now say that f and b change in coordination. This gives coordinated adaptive power (CAP) management, which extends the lifetime of the sensor nodes and thereby indirectly extends the lifetime of the WSN. It also ensures QoS by reducing buffer congestion and the data loss caused by it.
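A sketch of this coordinated selection, in which both rates are driven by the input buffer occupancy alone (the discrete levels and the rule for matching b to f are our own illustrative assumptions, not the paper's exact settings):

```python
F_LEVELS = [1, 2, 4, 8]   # assumed supported processing rates (packets/slot)
B_LEVELS = [1, 2, 4, 8]   # assumed supported transmission rates (packets/slot)

def select_rates(M_n, B1):
    """Coordinated Adaptive Power: pick f_n from the input-buffer occupancy and
    choose b_n to keep up with it, so the output buffer never becomes the bottleneck."""
    target = (M_n * F_LEVELS[-1]) / B1
    f_n = next((f for f in F_LEVELS if f >= target), F_LEVELS[-1])
    b_n = next((b for b in B_LEVELS if b >= f_n), B_LEVELS[-1])
    return f_n, b_n
```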

4. NEED FOR CAP MANAGEMENT


To reduce the overall energy consumption of a sensor node we consider implementing DVFS and DMS together on the node. The ON time is divided into time slots of fixed duration ∆. At the start of every time slot the status of the input buffer and of the output buffer are checked independently, and the voltage-frequency setting of the processor and the modulation index of the transmitter are set accordingly. When DVFS and DMS work independently on the same sensor node, some of the problems that arise are:
• Two different power managers are required, one for the processing unit and one for the communication unit.
• The input buffer and the output buffer are monitored separately, so more interfacing with the hardware is required.
• More software, more iterations and more energy are required.

Figure 5. ON period divided into time slots


• DVFS takes a predictive approach, whereas DMS checks the actual status of the output buffer, so the DMS monitor does not know what the status of the output buffer will be in the next slot.

• Most of the time a sensor node simply has to forward data; it spends almost no time on processing, so the amount of data in the output buffer can increase suddenly.

Because the DMS modulator is not aware of this in advance, monitoring the output buffer, deciding, and then adjusting the modulation level for transmission within the same time slot takes time, during which data may be lost because of the limited buffer size.

To overcome the problems mentioned above, coordination between DVFS and DMS is required. If the operating state of the communication unit is selected on the basis of the operating state of the processor, then the two units work together for power optimization and the possibility of data loss before transmission is removed.

4.1. Concept of CAP management


The idea of coordinated power management was floated by Vijay Raghunathan et al. [24]. CAP management is a technique for coordinating the active operating states of the processor and the transmitter in a particular time slot adaptively with the workload. Dynamic Voltage Scaling (DVS) [25] and Dynamic Modulation Scaling (DMS) [26] are integrated on the node and work in coordination. CAP management rests on the following assumptions:
• Due to limited energy availability, short-haul multi-hop communication is preferred.

• Besides sensing its environment, each node acts as a router and simply forwards received data to other nodes.

• The fraction of data to be forwarded is much greater than the fraction of data actually sensed.

• The predicted workload tracks the actual workload closely.

The technique uses the following observation: if the workload monitor observes a heavy workload and selects a higher supply voltage and clock frequency to process the predicted heavy load, then the packet arrival rate into the output buffer will be high, and if the packets are not transmitted quickly, some of them may be lost even before transmission because of the limited output buffer size. In this situation the modulation scaler selects a larger constellation size, i.e. more bits per symbol, and fast data transmission takes place. The transmission time T_on is reduced, which saves energy, although the electronic energy consumption for a higher b increases, depending on the transmitter hardware and design. If DVFS selects a smaller supply voltage and clock frequency, then an optimum constellation size b is selected, which results in comparatively slow data transmission but reduces the power consumed by the hardware as well as the transmitted output power. In wireless sensor networks an output power of about 0 dBm is generally considered for short-haul communication, which is much smaller than the electronic power consumption. Hence it is not always worthwhile to transmit data with a larger constellation size; rather, the constellation size should be optimized for minimum energy consumption under the maximum-latency constraint, i.e. scaled with the workload up to the permissible limit.

In the block diagram shown in Figure 6, a common workload monitor and predictor controls the voltage and frequency scaler of the processor as well as the modulation (Mod) scaler of the transmitter. The workload is predicted for the next slot, in which the data is first processed at a certain processing rate and only then becomes available in the output buffer for transmission, so a sufficient delay is introduced before the control signal is given to the modulator. The constellation size used for modulation must also be specified in the packet header so that the receiver can demodulate the packet.
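A sketch of this control flow, with the gap between setting the processor and setting the modulator modeled as a one-slot lag (the class structure, the names, and the single-slot lag are our own assumptions; the paper only states that a sufficient delay is introduced):

```python
class CapController:
    """Common workload monitor/predictor driving both the DVFS and DMS knobs.

    The predicted workload sets the processing rate immediately; the matching
    modulation level is applied one slot later, once the processed data has
    reached the output buffer.
    """
    def __init__(self, f_levels, b_levels, B1):
        self.f_levels, self.b_levels, self.B1 = f_levels, b_levels, B1
        self.pending_b = b_levels[0]   # modulation level queued for the next slot

    def on_slot(self, predicted_load):
        target = (predicted_load * self.f_levels[-1]) / self.B1
        f_n = next((f for f in self.f_levels if f >= target), self.f_levels[-1])
        b_now = self.pending_b         # apply the level decided in the previous slot
        self.pending_b = next((b for b in self.b_levels if b >= f_n), self.b_levels[-1])
        return f_n, b_now
```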

The combined use of DVFS and DMS has been explored in [27] and [28] for minimizing energy consumption. In [27], Kumar et al. address a resource allocation problem. In [28], a genetic algorithm is used to solve the convex optimization problem of energy management. In [29, 30] the authors combine DVFS and DMS to maximize the battery energy levels of individual nodes while meeting end-to-end latency requirements.

Figure 6. CAP Management Conceptual Block Diagram

5. SIMULATION RESULTS
We assume that the arrival of data packets follows a Poisson distribution: during normal conditions the arrival rate A_n is λ1, while during a catastrophe it is λ2. The two system queues have fixed lengths B_1 and B_2 respectively. When there is a sudden change in the surroundings, the data flow increases to a comparatively high rate, which is why a higher service rate is needed during a catastrophic period.
The node is designed to analyze the overflow probability of both servers every 20 time units. If, during this window, the overflow of packets in either queue has exceeded a threshold level, the corresponding service rate is increased so that the higher data traffic can be handled with less overflow.
As soon as conditions return to normal, the service rates are changed back to their initial values. By using lower service rates during normal conditions (when there is little data traffic in the system) we save battery power, since power consumption increases with the service rate; and by increasing the service rates during a catastrophe we reduce the overflow of packets, since during a catastrophe the data traffic in the system increases greatly.
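A sketch of this threshold-based adaptation loop (the 20-slot window comes from the text; the threshold value, the two rate levels, and the drop-counting details are illustrative assumptions):

```python
def simulate(arrivals, B1=10, f_low=2, f_high=8, window=20, threshold=0.05):
    """Every `window` slots, switch to the high service rate if the fraction of
    slots with input-buffer overflow exceeded `threshold`, otherwise fall back
    to the low rate. Returns the total number of dropped packets."""
    M, f, drops, overflow_slots = 0, f_low, 0, 0
    for n, A in enumerate(arrivals, start=1):
        backlog = max(M - f, 0) + A          # demand on the input buffer this slot
        if backlog > B1:
            overflow_slots += 1
            drops += backlog - B1
        M = min(backlog, B1)
        if n % window == 0:                  # periodic overflow check
            f = f_high if overflow_slots / window > threshold else f_low
            overflow_slots = 0
    return drops
```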

Figure 7 and Figure 8 show the power consumed, the queue length and the buffer overflow probability of a sensor node with a fixed service rate and of a sensor node with a variable service rate. Figure 7 compares the two types of node under normal conditions, while Figure 8 shows the comparison under catastrophic conditions.

Table 1 lists, for comparison, all the performance parameters observed for both models under normal as well as catastrophic conditions. The first row gives the parameters obtained for the fixed-service-rate model; the remaining rows give those of the varying-service-rate model. The varying-service-rate model shows better results.

Figure 7. Performance graphs of the processor and input buffer, with fixed and with variable service rates, under normal conditions


Figure 8. Performance graphs of the processor and input buffer, with fixed and with variable service rates, under catastrophic conditions


6. CONCLUSION
A wireless sensor node capable of adaptive service rates is better power-optimized than a sensor node with fixed service rates. Adaptive sensor nodes not only achieve a longer lifetime but also provide better QoS by reducing the data loss due to buffer overflow during catastrophic periods. The longer lifetime is obtained by reducing the idle time, keeping the sensor node busy at small service rates, and thereby consuming less power during normal periods of operation. Service-rate-adaptive sensor nodes are, in effect, power-adaptive sensor nodes. Such nodes also help to meet node-to-node delay constraints and reduce the number of packets dropped due to timeouts. Long-lived sensor nodes with better QoS performance will help make Wireless Sensor Networks more practical.

In this paper we have considered only two ON states of a sensor node for the purpose of analysis. A sensor node with a larger number of states can be analyzed in the same way and should perform even better, as switching between two neighboring states takes less time and consumes less switching energy. Coordinating the service rates of the processor and the transmitter removes the need to check the two buffers independently, giving a sensor node with coordinated DVFS and DMS and better power optimization.

ACKNOWLEDGEMENTS
The authors would like to thank Prof. Jaideep Mulherkar, Mr. Sudhanshu Dwivedi and Mr.
Anshul Goel for their interest and support in this work.

REFERENCES
[1] Yu Gu, Ting Zhu, and Tian He. Esc: Energy synchronized communication in sustainable sensor
networks, October, 2009.
[2] Aman Kansal, Jason Hsu, Sadaf Zahedi, and Mani B. Srivastava. Power management in energy
harvesting sensor networks. ACM Trans. Embed. Comput. Syst., 6, September 2007.
[3] Shaobo Liu, Qinru Qiu, and Qing Wu. Energy aware dynamic voltage and frequency selection
for real-time systems with energy harvesting. In Proceedings of the conference on Design,
automation and test in Europe, DATE ’08, pages 236–241, New York, NY, USA, 2008. ACM.
[4] C. Moser, D. Brunelli, L. Thiele, and L. Benini. Real-time scheduling with regenerative energy. In Proc. of the 18th Euromicro Conference on Real-Time Systems (ECRTS 06), pages 261–270. IEEE Computer Society Press, 2006.
[5] Jason Hill, Mike Horton, Ralph Kling, and Lakshman Krishnamurthy. The platforms enabling
wireless sensor networks. Commun. ACM, 47:41–46, June 2004.
[6] Ian F. Akyildiz, Weilian Su, Yogesh Sankarasubramaniam, and Erdal Cayirci. A survey on sensor networks. IEEE Communications Magazine, 40(8):102–114, August 2002.
[7] Xiaodong Zhang Xiaoyan Cui and Yongkai Shang. Energy-saving strategies of wireless sensor
networks. In Proceedings of the international Symposium on Microwave, Antenna, Propagation
and EMC Technologies for Wireless Communications, 2007.
[8] Ines Slama, Badii Jouaber, and Djamal Zeghlache. Optimal power management scheme for
heterogeneous wireless sensor networks: Lifetime maximization under qos and energy
constraints. In Proceedings of the Third International Conference on Networking and Services,
pages 69–, Washington, DC, USA, 2007. IEEE Computer Society.

[9] N. M. Moghadam A. S. Zahmati and B. Abolhassani. Epmplcs: An efficient power management
protocol with limited cluster size for wireless sensor networks. In Proceedings of the 27th
International Conference on Distributed Computing Systems Workshops, (ICDCSW’07), pages
69–72, 2007.
[10] Hung-Chin Jang and Hon-Chung Lee. Efficient energy management to prolong wireless sensor network lifetime. In Proceedings of ICI 2007, 3rd IEEE/IFIP International Conference in Central Asia on Internet, pages 1–4, Sept. 2007.
[11] C. Lin, Y.-X. He, and N. Xiong. An energy-efficient dynamic power management in wireless sensor networks. In Proceedings of The Fifth International Symposium on Parallel and Distributed Computing, ISPDC'06, 2006.
[12] Xue Wang, Junjie Ma, and Sheng Wang. Collaborative deployment optimization and dynamic
power management in wireless sensor networks. In Proceedings of the Fifth International
Conference on Grid and Cooperative Computing, GCC ’06, pages 121–128, Washington, DC,
USA, 2006. IEEE Computer Society.
[13] D. Peng H. Wang, W. Wang and H. Sharif. Optimal power management scheme for hetero-
geneous wireless sensor networks: Lifetime maximization under qos and energy constraints. In
Proceedings of the International Conference on Communication Systems, ICCS, pages 1–5,
2006.
[14] Fabrizio Mulas, Andrea Acquaviva, Salvatore Carta, Gianni Fenu, Davide Quaglia and Franco
Fummi. Network-adaptive management of computation energy in wireless sensor networks. In
Proceedings of the 2010 ACM Symposium on Applied Computing, SAC ’10, pages 756–763,
New York, NY, USA, 2010. ACM.
[15] Mark Hempstead, Nikhil Tripathi, Patrick Mauro, Gu-Yeon Wei, and David Brooks. An ultra
low power system architecture for sensor network applications. SIGARCH Comput. Archit.
News, 33:208–219, May 2005.
[16] Victor Shnayder, Mark Hempstead, Bor-rong Chen, Geoff Werner Allen, and Matt Welsh. Simulating the power consumption of large-scale sensor network applications. In Proceedings of the 2nd international conference on Embedded networked sensor systems, SenSys '04, pages 188–200, New York, NY, USA, 2004. ACM.
[17] A. Sinha and A. Chandrakasan. Dynamic power management in wireless sensor networks.
Design & Test of Computers, IEEE, 18(2):62–74, August 2002.
[18] Anh Tuan Hoang and Mehul Motani. Cross-layer adaptive transmission with incomplete
system state information. IEEE Transactions on Communications, 56(11):1961–1971, 2008.
[19] Fabrizio Mulas, Andrea Acquaviva, Salvatore Carta, Gianni Fenu, Davide Quaglia, and
Franco Fummi. Network-adaptive management of computation energy in wireless sensor
networks. In Proceedings of the 2010 ACM Symposium on Applied Computing, SAC ’10,
pages 756–763, New York, NY, USA, 2010. ACM.
[20] Rex Min, Travis Furrer, Anantha Chandrakasan, "Dynamic Voltage Scaling Techniques for
Distributed Microsensor Networks," VLSI, IEEE Computer Society Workshop on, p. 43, IEEE
Computer Society Annual Workshop on VLSI (WVLSI'00), 2000
[21] Zhang Jianhua, Zhang Ping, Shakya Mukesh, Muddassir Iqbal, and Inam-Ur-Rehman. Comparative analysis of M-ary modulation techniques for wireless ad-hoc networks. In SAS, IEEE Sensors Applications Symposium, February 2007.
[22] C. Schurgers, O. Aberthorne, and M. Srivastava, “Modulation scaling for energy aware
communication systems,” in ISLPED ’01: Proceedings of the 2001 international symposium on
Low power electronics and design. New York, NY, USA: ACM Press, 2001, pp. 96–99.

[23] W. W. D. P. H. W. H. Sharif, “Study of an energy efficient multi rate scheme for wireless sensor
network mac protocol,” Q2Winet06, 2006.
[24] V. Raghunathan, C. Schurgers, S. Park, and M. Srivastava. Energy aware wireless microsensor networks. 2002.
[25] Hakan Aydin, Rami Melhem, Daniel Mossé, and Pedro Mejía-Alvarez. Power-aware scheduling for periodic real-time tasks. IEEE Trans. Comput., 53:584–600, May 2004.
[26] Yang Yu, Bhaskar Krishnamachari, and Viktor K. Prasanna. Energy-latency trade-offs for data gathering in wireless sensor networks. In IEEE Infocom, 2004.
[27] G. Sudha Anil Kumar, Govindarasu Manimaran, and Zhengdao Wang. End-to-end energy management in networked real-time embedded systems. IEEE Trans. Parallel Distrib. Syst., 19(11):1498–1510, 2008.
[28] Z. Fan C. Yeh and R.X. Gao. Energy-aware data acquisition in wireless sensor networks. In IEEE
Instrumentation and Measurement Technology Conference, 2007.
[29] Bo Zhang, Robert Simon, and Hakan Aydin. Energy management for time-critical energy
harvesting wireless sensor networks. In SSS, pages 236–251, 2010.
[30] B. Zhang, R. Simon, and H. Aydin. Joint Voltage and Modulation Scaling for Energy Harvesting
Sensor Networks. Proceedings of the First International Workshop on Energy Aware Design
and Analysis of Cyber Physical Systems (WEA-CPS'10), Stockholm, Sweden, April 2010.

Gauri Joshi received a B.E. in Electronics from Pune University in 1993 and an M.E. in Digital Communication from J. N. Vyas University, Jodhpur, in 2005. She has rich teaching experience and is currently pursuing a Ph.D. at DAIICT, Gandhinagar.

Prabhat Ranjan has been a professor at Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India, since 2002. He received his Bachelor's and Master's degrees in Physics from IIT Kharagpur and Delhi University, Delhi (India). He obtained his PhD from the University of California, Berkeley (USA) in 1986, based on his research work at Lawrence Berkeley Laboratory. He was employed at the Saha Institute of Nuclear Physics, Kolkata, and the Institute for Plasma Research, Gandhinagar, and was Project Leader of ADITYA, the largest Indian fusion reactor, from 1996 to 2002. His current research interests include the application of Embedded Systems and Wireless Sensor Networks to wildlife tracking, planetary exploration and healthcare, among other areas.
