
Mobile Computing


Contents

Chapter 1: Mobile computing
Chapter 2: Localization
Chapter 3: Context-aware computing
Chapter 4: Sensors
Chapter 5: Mobile Application Development and Protocols
References
Chapter 1
Mobile computing

What is computing?

The utilization of computers to complete a task. It involves both hardware and software
functions performing some sort of task with a computer.

Examples of computing being used in everyday life: Swiping a debit card, sending an email, or
using a cell phone can all be considered forms of computing.

What is mobility?

The capability to change location while communicating, in order to invoke computing
services at remote computers.

What is mobile computing?

The ability to compute remotely while on the move, making it possible to access
information from anywhere and at any time.

Definition: What is mobile computing?


• Computing that is not obstructed while its location changes.
• Mobile computing is using a computer (of one kind or another) while on the move.
• Computing on the go!
• Mobile computing is a technology that allows transmission of data, voice, and video via
a computer or any other wireless-enabled device without having to be connected to a
fixed physical link.
• The process of computation on a mobile device.
• Facilitates a large number of applications on a single device.
Many other names/overlapping computing paradigms:
• Pervasive Computing
• Ubiquitous Computing
• Wireless Systems
• Internet of Things (IoT)
• Embedded Computing
• Nomadic Computing
• Wireless Sensor Networks
• (Mobile) Ad-Hoc Networks
• Mesh Networks
• Vehicular Networks

The Mobile Computing Structure:

1. Mobile communication

2. Mobile hardware

3. Mobile software

Mobile communication

Mobile communication in this case refers to the infrastructure put in place to ensure that
seamless and reliable communication goes on. This includes elements such as protocols,
services, bandwidth, and portals necessary to facilitate and support the stated services.
The data format is also defined at this stage, which ensures that there is no collision
with other existing systems offering the same service.
Since the medium is unguided/unbounded, the overlaying infrastructure is basically radio-
wave-oriented; that is, the signals are carried over the air to intended devices that are
capable of receiving and sending similar kinds of signals.
Wired Networks                       Mobile Networks
---------------------------------    ---------------------------------
high bandwidth                       low bandwidth
low bandwidth variability            high bandwidth variability
can listen on wire                   hidden terminal problem
high-power machines                  low-power machines
high-resource machines               low-resource machines
need physical access (security)      need proximity
low delay                            higher delay
connected operation                  disconnected operation

Mobile Hardware

Mobile hardware includes mobile devices or device components that receive or access the
service of mobility. They range from portable laptops, smartphones, and tablet PCs to
personal digital assistants (PDAs).

These devices have a receptor medium that is capable of sensing and receiving signals.
They are configured to operate in full-duplex mode, whereby they are capable of sending
and receiving signals at the same time; one device does not have to wait until another
has finished communicating before initiating its own transmission.

The above-mentioned devices operate on an existing and established network, which in
most cases is a wireless network.

Mobile software

Mobile software is the actual program that runs on the mobile hardware and deals with the
characteristics and requirements of mobile applications. It is the engine of the mobile
device: in other terms, the operating system of the appliance, and the essential component
that operates it.

Since portability is the main factor, this type of computing ensures that users are not tied or
pinned to a single physical location, but are able to operate from anywhere. It incorporates all
aspects of wireless communications.

Limitations of Mobile Computing:

1. Resource constraints - Battery needs and recharge requirements are the biggest constraints of
mobile computing. When a power outlet or portable generator is not available, mobile
computers must rely entirely on battery power. Combined with the compact size of many
mobile devices, this often means unusually expensive batteries must be used to
obtain the necessary battery life.

2. Interference - There may be interference in wireless signals affecting the quality of service.
Weather, terrain, and the range from the nearest signal point can all interfere with
signal reception. Reception in tunnels, some buildings, and rural areas is often poor.
3. Bandwidth - There may be bandwidth constraints due to limited spectrum availability at a given
instant, causing connection latency. Mobile Internet access is generally slower than direct cable
connections, using technologies such as GPRS and EDGE, and more recently 3G networks. These
networks are usually available within range of commercial cell phone towers. Higher speed
wireless LANs are inexpensive but have very limited range.

4. Dynamic changes in communication environment - Variations in signal power within a region
cause link delays and connection loss.

5. Network issues - Because mobile networks are often ad hoc, issues arise relating to connection
discovery, service delivery to a destination, and connection stability.

6. Interoperability - The varying protocol standards across different regions may lead
to interoperability issues.

7. Security constraints - Protocols preserving the privacy of communication may be violated.
Physical damage or loss of a mobile device is also more probable than for a static
computing system.

8. Potential health hazards - People who use mobile devices while driving are
distracted from driving and are thus assumed to be more likely to be involved in traffic
accidents. Cell phones may interfere with sensitive medical devices, and there are
allegations that cell phone signals may cause health problems.

9. Human interface with device - Screens and keyboards tend to be small, which may make them
hard to use. Alternate input methods such as speech or handwriting recognition require training.

Advantages of Mobile Computing:

1. No location constraint: Mobile computing frees the user from being tied to a location, and
increased bandwidth and transmission speed make it possible to work on the move.
2. It saves time and enhances productivity with a better return on investment (RoI).
3. It provides entertainment, news, and information on the move with streaming data, video, and
audio. Streamlining of business processes: mobility has enabled the streamlining of business
processes, reducing cumbersome emails, paper processing, and delays in communication and
transmission.
4. Newer job opportunities for IT professionals have emerged, and IT businesses now have an
added service in their portfolio, one that will only keep growing according to indicative mobile
computing trends.

Evolution (Seven waves of mobile computing)


• The history of mobile computing can be divided into a number of eras, or waves, each
characterized by a particular technological focus, interaction design trends, and by leading to
fundamental changes in the design and use of mobile devices.
• Mobile computing history has, so far, entailed seven particularly important waves.
• Although not strictly sequential, they provide a good overview of the legacy on which current
mobile computing research and design is built.
• These waves are the basis for the technology used today in mobile computing research
and design.
• These seven categories are: Portability, Miniaturization, Connectivity, Convergence,
Divergence, Apps, Digital Ecosystems

Portability

• Reducing the size of hardware to enable the creation of computers that could be physically
moved around relatively easily.
Miniaturization

• Creating new and significantly smaller mobile form factors that allowed the use of personal
mobile devices while on the move

Connectivity

• Developing devices and applications that allowed users to be online and communicate via
wireless data networks while on the move

Convergence

• Integrating emerging types of digital mobile devices, such as Personal Digital Assistants
(PDAs), mobile phones, music players, cameras, games, etc., into hybrid devices
Divergence

• The opposite approach to interaction design: promoting information appliances with
specialized functionality rather than generalized ones

Applications (Apps)

• The latest wave of applications (apps) is about developing matter and substance for use and
consumption on mobile devices, and making access to this fun or functional interactive
application content easy and enjoyable

Digital Ecosystems

• The emerging wave of digital ecosystems is about the larger wholes of pervasive and
interrelated technologies that interactive mobile systems are increasingly becoming a part of.
Chapter 2

Localization

Location awareness
Location awareness refers to the capability of a device to actively or passively
determine its location in terms of coordinates with respect to a point of reference.
Various sensors or navigational tools can be used to determine the location. Some
of the applications of location awareness are:
1. Emergency response, navigation, asset tracking, ground surveying
2. Symbolic location is a proxy for activity (e.g., being in a grocery store implies
shopping)
3. Social roles and interactions can be learned by co-location
4. Coordinate change can imply activity and mode of transportation
(i.e., running, driving).

Sensors monitor phenomena in the physical world and the spatial relationships between them
and the objects and events of the physical world are an essential component of the sensor
information. Without knowing the position of a sensor node, its information will only tell
part of the story. For example, sensors deployed in a forest to raise alarms whenever wildfires
occur gain significantly in value if they are able to report the spatial relationship between
them and the monitored event. Further, accurate location information is needed for various
tasks such as routing based on geographic information, object tracking, and location-aware
services. Localization is the task of determining the physical coordinates of a sensor node
(or a group of sensor nodes) or the spatial relationships among objects. It comprises a set
of techniques and mechanisms that allow a sensor to estimate its own location based on
information gathered from the sensor’s environment. While the Global Positioning System
(GPS) is undoubtedly the most well-known location-sensing system, it is not accessible in
all environments (e.g., indoors or under dense foliage) and may incur resource costs unacceptable
for resource-constrained wireless sensor networks (WSNs). Therefore, this chapter
discusses various techniques and case studies for localization and location services targeted
at WSNs.

Overview
Localization in wireless sensor networks (WSNs) is crucial for providing context to sensor readings
and enabling various applications like environmental monitoring, intrusion detection, and
surveillance. The location of sensor nodes can be expressed using global metrics like GPS coordinates
or relative metrics based on arbitrary coordinate systems. Localization information should ideally
possess qualities of accuracy and precision.
There are two main types of localization metrics:
1. Global Metrics: These metrics position nodes within a global reference frame such as GPS or
UTM coordinate systems. They provide absolute positions that are universally understood.
2. Relative Metrics: These metrics are based on arbitrary coordinate systems and reference
frames, such as a sensor's distance to other sensors. They do not rely on global coordinates and
may be more suitable for localized operations. Key qualities of localization information
include accuracy and precision. Accuracy refers to how close a reading is to the ground truth,
while precision measures the consistency of readings.
In addition to physical positions, symbolic locations like "office 354" or "bathroom" may suffice for
certain applications, especially indoor tracking systems.
In many cases, not all sensor nodes have knowledge of their global coordinates. Instead, subsets of
nodes, known as anchor nodes, know their global positions. Anchor-based localization techniques
utilize these anchor nodes to help other nodes determine their positions. These techniques are
commonly used in WSNs.
Many localization techniques, including anchor-based approaches, rely on range measurements, such
as received signal strengths or time differences of arrival of ultrasound pulses. These techniques are
known as range-based localization techniques and are essential for estimating distances between
sensor nodes.
Overall, localization is a fundamental aspect of WSNs, enabling various applications and services by
providing spatial context to sensor data and facilitating efficient network operations.

A. Ranging Techniques
The foundation of numerous localization techniques is the estimation of the physical distance
between two sensor nodes. Estimates are obtained through measurements of certain
characteristics of the signals exchanged between the sensors, including signal propagation
times, signal strengths, or angle of arrival.
1. Time of Arrival
The concept behind the time of arrival (ToA) method (also called time of flight method)
is that the distance between the sender and receiver of a signal can be determined using
the measured signal propagation time and the known signal velocity. For example, sound
waves travel at 343 m/s (at 20 °C); that is, a sound signal takes approximately 30 ms to travel
a distance of 10 m. In contrast, a radio signal travels at the speed of light (about 3 × 10^8 m/s),
that is, the signal requires only about 30 ns to travel 10 m. The consequence is that radio-based
distance measurements require clocks with high resolution, adding to the cost and
complexity of a sensor network. The one-way time of arrival method measures the one-way
propagation time, that is, the difference between the sending time and the signal arrival time
(Figure 1(a)), and requires highly accurate synchronization of the clocks of the sender and
receiver. Therefore, the two-way time of arrival method is preferred, where the round-trip
time of a signal is measured at the sender device (Figure 1(b)). In summary, for one-way
measurements, the distance between two nodes i and j can be determined as:
dist_ij = (t2 − t1) × v
where t1 and t2 are the sending and receive times of the signal (measured at the sender and
receiver, respectively) and v is the signal velocity. Similarly, for the two-way approach, the
distance is calculated as:
dist_ij = [(t4 − t1) − (t3 − t2)] / 2 × v
where t3 and t4 are the sending and receive times of the response signal. Note that with one-way
localization, the receiver node calculates its location, whereas in the two-way approach,
the sender node calculates the receiver’s location. Therefore a third message will be necessary
in the two-way approach to inform the receiver of its location.
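To make the arithmetic concrete, here is a minimal Python sketch of both formulas. The timestamps are invented for illustration; real systems would obtain them from synchronized (one-way) or local (two-way) clocks.

SOUND_V = 343.0      # m/s, speed of sound at 20 °C
LIGHT_V = 3.0e8      # m/s, speed of light (radio)

def one_way_toa(t1, t2, v):
    """Distance from one-way propagation time; needs synchronized clocks."""
    return (t2 - t1) * v

def two_way_toa(t1, t2, t3, t4, v):
    """Round-trip ranging measured entirely at the sender.
    t1: sender transmits, t2: receiver receives,
    t3: receiver responds, t4: sender receives the response."""
    return ((t4 - t1) - (t3 - t2)) / 2.0 * v

# A ~10 m acoustic link: both estimators recover the same distance.
print(one_way_toa(0.0, 0.02915, SOUND_V))                      # ~10.0 m
print(two_way_toa(0.0, 0.02915, 0.03915, 0.06830, SOUND_V))    # ~10.0 m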

Figure 1. Comparison of different ranging schemes (one-way ToA, two-way ToA, and TDoA).

2. Time Difference of Arrival


The time difference of arrival (TDoA) approach uses two signals that travel with different
velocities (Figure 1(c)). The receiver is then able to determine its location similarly to
the ToA approach. For example, the first signal could be a radio signal (issued at t1 and
received at t2), followed by an acoustic signal (either immediately or after a fixed time
interval twait = t3 − t1). Therefore, the receiver can determine the distance as:
dist = (v1 × v2) / (v1 − v2) × (t4 − t2 − twait)
TDoA-based approaches do not require the clocks of the sender and receiver to be synchronized
and can obtain very accurate measurements. The disadvantage of the TDoA approach
is the need for additional hardware, for example, a microphone and speaker for the above
example.
Another variant of this approach uses TDoA measurements of a single signal to estimate
the location of the sender using multiple receivers with known locations. The propagation
delay di for the signal to receiver i depends on the distance between sender and receiver i.
Correlation analysis can then provide a time delay δ = di − dj which corresponds to the
difference in path length to receivers i and j. The main disadvantage of this approach is that the clocks
of the receivers must be tightly synchronized.
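A minimal sketch of the radio-plus-acoustic TDoA variant, using the closed form dist = v1·v2/(v1 − v2) × (t4 − t2 − twait) given above; all timestamps are invented.

RADIO_V = 3.0e8   # m/s, fast signal (radio)
SOUND_V = 343.0   # m/s, slow signal (acoustic)

def tdoa_distance(t2, t4, t_wait, v1=RADIO_V, v2=SOUND_V):
    """t2/t4: arrival times of the fast/slow signals at the receiver;
    t_wait: fixed delay between the two transmissions at the sender."""
    return (v1 * v2) / (v1 - v2) * (t4 - t2 - t_wait)

# 10 m range: the radio signal arrives ~33 ns after sending, the sound
# signal ~29.15 ms after sending (both sent at the same instant here).
print(tdoa_distance(t2=3.3e-8, t4=0.029155, t_wait=0.0))   # ~10.0 m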

3. Angle of Arrival
Another technique used for localization is to determine the direction of signal propagation,
typically using an array of antennas or microphones. The angle of arrival (AoA) is then the
angle between the propagation direction and some reference direction known as orientation. For
example, for acoustic measurements, several spatially separated
microphones are used to receive a single signal and the differences in arrival time,
amplitude, or phase are used to determine an estimate of the arrival angle, which in turn
can be used to determine the position of a node. While the appropriate hardware can obtain
accuracies within a few degrees, AoA measurement hardware can add significantly to the
size and cost of sensor nodes.

4. Received Signal Strength


The concept behind the received signal strength (RSS) method is that a signal decays with the
distance traveled. A commonly found feature in wireless devices is a received signal strength
indicator (RSSI), which can be used to measure the amplitude of the incoming radio signal.
Many wireless network card drivers readily export RSSI values, but their meaning may
differ from vendor to vendor and there is no specified relationship between RSSI values and
the signal’s power levels. Typically, RSSI values are in the range of 0 . . . RSSI_Max, where
common values for RSSI_Max are 100, 128, and 256. In free space, the RSS degrades with
the square of the distance from the sender. More specifically, the Friis transmission equation
expresses the ratio of the received power Pr to the transmission power Pt as:

Pr / Pt = Gt × Gr × λ² / ((4π)² × R²)

where Gt is the antenna gain of the transmitting antenna, Gr is the antenna gain of the receiving
antenna, λ is the wavelength, and R is the distance between the antennas. In practice, the actual
attenuation depends on multipath propagation effects, reflections, noise, etc.; therefore a more
realistic model replaces R² in the equation with Rⁿ, with n typically in the range of 3 to 5.
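In practice, RSS ranging usually inverts a calibrated log-distance path-loss model rather than the ideal Friis equation. A sketch is shown below; the reference power P0 at distance d0 and the path-loss exponent n are assumed calibration values, not universal constants.

def distance_from_rss(p_rx_dbm, p0_dbm=-40.0, d0=1.0, n=3.0):
    """Invert P(d) = P0 - 10*n*log10(d/d0) to estimate distance d
    from the received power in dBm."""
    return d0 * 10 ** ((p0_dbm - p_rx_dbm) / (10.0 * n))

print(distance_from_rss(-70.0))   # -> 10.0 m with these example parameters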

B. Range-Based Localization

1. Triangulation
Triangulation uses the geometric properties of triangles to estimate sensor locations. Specifically,
triangulation relies on the gathering of angle (or bearing) measurements as described
in the previous section. A minimum of two bearing lines (and the locations of the anchor
nodes or the distance between them) are needed to determine the location of a sensor node
in two-dimensional space. Figure 2(a) illustrates the concept of triangulation using three
anchor nodes with known locations (xi, yi ) and measured angles αi (expressed relative to a
fixed baseline in the coordinate system, for example, the vertical line in the figure). If more
than two bearings are measured, the presence of noise in the measurements may prevent
them from intersecting in a single point. Therefore statistical algorithms or fixing methods
have been developed to obtain a single location.

Figure 2 Triangulation (a) and trilateration (b).

• Unknown receiver location x_r = [x_r, y_r]^T
• Bearing measurements from N anchor points: β = [β1, …, βN]^T
• Known anchor locations x_i = [x_i, y_i]^T
• Actual (unknown) bearings θ(x) = [θ1(x), …, θN(x)]^T
• Relationship between actual and measured bearings: β = θ(x_r) + δθ, with δθ = [δθ1, …, δθN]^T
being Gaussian noise with zero mean and N×N covariance matrix S = diag(σ1², …, σN²)
• Relationship between the bearings of the N anchors and their locations:

tan θ_i(x) = (y_i − y_r) / (x_i − x_r)
• The maximum likelihood (ML) estimator of the receiver location is then:

x̂_r = arg min ½ [θ(x̂_r) − β]^T S⁻¹ [θ(x̂_r) − β] = arg min ½ Σ_{i=1..N} (θ_i(x̂_r) − β_i)² / σ_i²

This nonlinear least-squares minimization can be performed using Gauss-Newton iterations,
where ∇θ(x̂_{r,i}) denotes the Jacobian of θ evaluated at the current estimate:

x̂_{r,i+1} = x̂_{r,i} + (∇θ(x̂_{r,i})^T S⁻¹ ∇θ(x̂_{r,i}))⁻¹ ∇θ(x̂_{r,i})^T S⁻¹ [β − θ(x̂_{r,i})]
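A compact NumPy sketch of this Gauss-Newton iteration is shown below; the anchor positions and (noise-free) bearings are invented for the demonstration, and the Jacobian rows follow from differentiating tan θ_i(x) = (y_i − y_r)/(x_i − x_r).

import numpy as np

def bearings(x, anchors):
    """Predicted bearings theta_i(x) from node position x to each anchor."""
    d = anchors - x
    return np.arctan2(d[:, 1], d[:, 0])

def wrap(a):
    """Wrap angle residuals into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def gauss_newton(anchors, beta, sigma, x0, iters=10):
    S_inv = np.diag(1.0 / sigma**2)
    x = x0.astype(float)
    for _ in range(iters):
        d = anchors - x
        r2 = (d**2).sum(axis=1)
        # Jacobian of theta(x): dtheta_i/dx = dy/r^2, dtheta_i/dy = -dx/r^2
        J = np.column_stack((d[:, 1] / r2, -d[:, 0] / r2))
        res = wrap(beta - bearings(x, anchors))
        x = x + np.linalg.solve(J.T @ S_inv @ J, J.T @ S_inv @ res)
    return x

anchors = np.array([[0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
true_x = np.array([3.0, 4.0])
beta = bearings(true_x, anchors)            # noise-free bearings for the demo
sigma = np.full(3, np.radians(1.0))         # assumed 1-degree bearing noise
print(gauss_newton(anchors, beta, sigma, x0=np.array([5.0, 5.0])))  # ~[3. 4.]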
2. Trilateration
Trilateration refers to the process of calculating a node’s position based on measured distances
between itself and a number of anchor points with known locations. Given the location
of an anchor and a sensor’s distance to the anchor (e.g., estimated through RSS measurements),
it is known that the sensor must be positioned somewhere along the circumference of
a circle centered at the anchor’s position with a radius equal to the sensor–anchor distance.
In two-dimensional space, distance measurements from at least three noncollinear anchors
are required to obtain a unique location (i.e., the intersection of three circles). Figure 2(b)
illustrates an example for the two-dimensional case. In three dimensions, distance measurements
to at least four noncoplanar anchors are required.
• n anchor nodes: xi=(xi,yi) (i=1..n), unknown node location x=(x,y)
• Distances between node and anchors known (ri, i=1..n)
• Relationships between anchor/node positions and distances (2D):

(x1 − x)² + (y1 − y)² = r1²
(x2 − x)² + (y2 − y)² = r2²
...
(xn − x)² + (yn − y)² = rn²

• Subtracting the last equation from each of the previous ones (to remove the squares of
the unknown sensor location (x, y)) and rearranging, the system can be represented as
Ax = b with:

A = | 2(xn − x1)       2(yn − y1)     |
    | 2(xn − x2)       2(yn − y2)     |
    | ...              ...            |
    | 2(xn − x(n−1))   2(yn − y(n−1)) |

b = | r1² − rn² − x1² − y1² + xn² + yn²             |
    | r2² − rn² − x2² − y2² + xn² + yn²             |
    | ...                                           |
    | r(n−1)² − rn² − x(n−1)² − y(n−1)² + xn² + yn² |
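The linearized system Ax = b can then be solved with ordinary least squares. A sketch with invented anchor positions and exact ranges:

import numpy as np

def trilaterate(anchors, r):
    """anchors: (n, 2) known positions; r: (n,) measured distances.
    Builds the A and b of the linearized system above and solves it."""
    xn, yn = anchors[-1]
    A = 2.0 * (anchors[-1] - anchors[:-1])        # rows: [2(xn-xi), 2(yn-yi)]
    b = (r[:-1]**2 - r[-1]**2
         - anchors[:-1, 0]**2 - anchors[:-1, 1]**2
         + xn**2 + yn**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true = np.array([4.0, 6.0])
r = np.linalg.norm(anchors - true, axis=1)        # exact ranges for the demo
print(trilaterate(anchors, r))                    # ~[4. 6.]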
3. Iterative and Collaborative Multilateration
While the lateration technique relies on the presence of at least three anchor nodes to position
a fourth unknown node, this technique can be extended to determine locations of nodes
without three neighboring anchor nodes. Once a node has identified its position using the
beacon messages from the anchor nodes, it becomes an anchor and broadcasts beacon messages
containing its estimated position to other nearby nodes. This iterative multilateration
process repeats until all nodes in a network have been localized.
Figure 3(a) visualizes this process: in the first iteration, the gray node estimates its location
with the help of the three black anchor nodes and in the second iteration, the white nodes
estimate their respective locations with the help of two original anchor nodes and the gray
node. The drawback of iterative multilateration is that the localization error accumulates
with each iteration.
In ad hoc deployments of sensor and anchor nodes, it is possible that a node will not have
three neighboring anchor nodes, therefore preventing it from determining its own location.
In this case, a node can use a process called collaborative multilateration to estimate its
position using location information obtained over multiple hops. Figure 3(b) shows a
simple example with six nodes: four anchor nodes Ai (black) and two nodes with unknown
locations Si (white). The goal of collaborative multilateration is to construct a graph of
participating nodes, that is, nodes that are anchors or have at least three participating neighbors
(e.g., all nodes in Figure 3(b) are participants). A node can then try to estimate
its position by solving the corresponding system of overconstrained quadratic equations
relating the distances among the node and its neighbors.

Figure 3 (a) Iterative multilateration and (b) collaborative multilateration.
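A hedged sketch of the iterative scheme, reusing the trilaterate() function from the previous sketch; the topology and range tables are invented inputs, and with noisy ranges the error accumulates across iterations exactly as described above.

import numpy as np
# assumes trilaterate(anchors, r) from the trilateration sketch above

def iterative_multilateration(positions, unknown, neighbors, ranges):
    """positions: dict node -> np.array([x, y]) (anchors pre-filled).
    unknown: set of not-yet-localized node ids.
    neighbors[n]: ids in radio range of n; ranges[(n, m)]: measured distance."""
    while unknown:
        progressed = False
        for n in list(unknown):
            known = [m for m in neighbors[n] if m in positions]
            if len(known) >= 3:                   # enough localized neighbors
                anchors = np.array([positions[m] for m in known])
                r = np.array([ranges[(n, m)] for m in known])
                positions[n] = trilaterate(anchors, r)
                unknown.discard(n)                # node now acts as an anchor
                progressed = True
        if not progressed:
            break                                 # remaining nodes unlocalizable
    return positions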


4. GPS-Based Localization
The Global Positioning System (GPS) is the most widely publicized location-sensing system,
providing an excellent lateration framework for determining geographic positions. GPS (formally
known as NAVSTAR – Navigation Satellite Timing and Ranging) is the most widely used global
navigation satellite system (GNSS), and it consists of at least 24 satellites orbiting the earth
at altitudes of approximately 11,000 miles. It began as a test program in 1973 and became
fully operational in 1995. In the meantime, GPS has established itself as a widely used aid
to civilian navigation, surveying, tracking and surveillance, and scientific applications. GPS
provides two levels of service:
1. The Standard Positioning Service (SPS) is a positioning service available to all GPS
users on a continuous worldwide basis without restrictions or direct charge. High-quality
GPS receivers based on SPS are able to attain accuracies of 3 m or better horizontally.
2. The Precise Positioning Service (PPS) is used by US and Allied military users and is
a more robust GPS service that includes encryption and jam resistance. For example, it
uses two signals to reduce radio transmission errors, while SPS only uses one signal.
GPS satellites are uniformly distributed in a total of six orbits (i.e., there are four satellites
per orbit) and they circle the earth twice a day at approximately 7000 miles per hour.
The number of satellites and their spatial distribution ensure that at least eight satellites
can be seen simultaneously from almost anywhere on the planet. Each satellite constantly
broadcasts coded radio waves (known as pseudorandom code) that contain information on
the identity of the particular satellite, the location of the satellite, the satellite’s status (i.e.,
whether it is working properly), and the date and time a signal has been sent. In addition to
the satellites, GPS further relies on infrastructure on the ground to monitor satellite health,
signal integrity, and orbital configuration. At least six monitor stations located around the
world constantly receive the data sent by the satellites and forward the information to a
master control station (MCS). The MCS (located near Colorado Springs, Colorado) uses
the data from the monitor stations to compute corrections to the satellites’ orbital and clock
information, which are then sent back to the appropriate satellites via ground antennas.
A GPS receiver (e.g., embedded into a mobile device) receives the information transmitted
by the satellites that are currently in view by the receiver. The basic principle of
GPS positioning is illustrated in Figure 4. Satellites and receivers use very accurate and
synchronized clocks so that they generate the same code at exactly the same time. The GPS
receiver compares its generated code with the code received from the satellite, thereby determining
the actual generation time (e.g., t0 in Figure 4) of the code at the satellite and the
time difference Δ between the code generation time and the current time. Δ
then expresses the travel time of the code from the satellite to the receiver. Note that the
received satellite data is attenuated due to the satellite–earth path even if no obstructions

Figure 4 GPS positioning principle.

occur. Radio waves travel at the speed of light (about 186,000 miles per second), so if Δ
is known, the distance from the satellite to the receiver (distance = speed × time) can be
determined. Once the distance has been determined, the receiver knows that it is located
somewhere on a sphere centered on the satellite with a radius equal to the computed distance.
Repeating this process with two more satellites, the position of the receiver can be
narrowed down to the two points where the three spheres intersect. Typically, one of the two
points can be eliminated very easily, for example, because it would position the receiver far
out in space or the receiver would travel at a virtually impossible velocity.
While three satellites appear to be sufficient for localization, a fourth satellite is needed
to obtain an accurate position. Positioning via GPS relies on correct timing to make accurate
measurements, that is, the clocks of the satellites and the receivers must be synchronized
precisely. Satellites are equipped with four atomic clocks (synchronized to each other within
a few nanoseconds), providing highly accurate time readings. However, the clocks used for
GPS receivers are not nearly as accurate as the atomic clocks onboard the satellites, introducing
measurement errors that can have a significant impact on the quality of localization.
Because radio waves travel at very high speeds (and therefore require very little time to
travel), small errors in the timing can result in large deviations in position measurements.
For example, a clock error of 1 ms would result in a position error of about 300 km. Therefore,
a fourth measurement is required, where the fourth sphere should ideally intersect the
other three spheres at the exact location of the receiver. Because of timing errors, the fourth
sphere may not intersect with all other spheres, even though we know that they are supposed
to align. If the spheres are too large, we can reduce their sizes by adjusting the clock (by
moving it forward) until the spheres are small enough to intersect in one point. Similarly, if
the spheres are too small, we adjust the clock by moving it backwards. That is, because
the timing error is the same for all measurements, a receiver can calculate the required
clock adjustment to obtain a single intersection point among all four spheres. In addition to
providing a means for clock synchronization, a fourth measurement also allows a receiver
to obtain a three-dimensional position, that is, latitude, longitude, and elevation.
While most GPS receivers available today are able to provide position measurements
with accuracies of 10m or less, advanced techniques to further increase the accuracy are
available. For example, Differential GPS (DGPS) relies on land-based
receivers with exactly known locations to receive GPS signals, compute correction
factors, and broadcast them to GPS receivers that are then able to correct their own GPS
measurements. While it is possible to build wireless sensor networks where each sensor
has its own GPS receiver, constraints such as high power consumption, cost, and the need
for line-of-sight make a fully GPS-based solution impractical for most sensor networks.
However, GPS receivers deployed on a few nodes in a WSN may be sufficient to provide
location services based on reference points as described in the following section.
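To illustrate why the fourth pseudorange resolves the receiver clock bias, the following sketch jointly solves ρi = ‖si − x‖ + b for position x and bias b (expressed in meters) via Gauss-Newton least squares. The satellite positions and the 1 ms clock error are invented, and the geometry is deliberately simplified.

import numpy as np

def solve_gps(sats, rho, iters=10):
    """sats: (n, 3) satellite positions, n >= 4; rho: pseudoranges in meters.
    Unknowns: receiver position (x, y, z) and clock bias b (meters)."""
    est = np.array([0.0, 0.0, 6.4e6, 0.0])        # start near the earth's surface
    for _ in range(iters):
        d = np.linalg.norm(sats - est[:3], axis=1)
        res = rho - (d + est[3])                  # pseudorange residuals
        # Jacobian: unit vectors toward the satellites (negated), plus bias column
        J = np.column_stack((-(sats - est[:3]) / d[:, None], np.ones(len(rho))))
        est += np.linalg.lstsq(J, res, rcond=None)[0]
    return est[:3], est[3]

sats = np.array([[15e6, 5e6, 20e6], [-12e6, 8e6, 22e6],
                 [3e6, -16e6, 19e6], [-4e6, -7e6, 25e6]])
true_pos = np.array([1.0e6, 2.0e6, 6.37e6])
bias = 3.0e5                                      # 1 ms clock error ~ 300 km of range
rho = np.linalg.norm(sats - true_pos, axis=1) + bias
pos, b = solve_gps(sats, rho)
print(np.round(pos), round(b))                    # recovers both position and bias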

Range-Free Localization
The localization approaches discussed in the previous sections are based on distance estimations
using ranging techniques (RSS, ToA, TDoA, and AoA) and belong therefore to the
class of range-based localization algorithms. In contrast, range-free techniques estimate
node locations based on connectivity information instead of distance or angle measurements.
Range-free localization techniques do not require additional hardware and are therefore a
cost-effective alternative to range-based techniques.

Chapter 3

Context-aware computing
CONTEXT-AWARE COMPUTING
Let us start by exploring what context means. Context means a user's
preferences, likings, dislikes, location, and general awareness of the surrounding
environment in which the user is operating, located, or situated.
Examples of awareness of the surrounding environment could be information
related to weather, climate, traffic, the time of the day, or physical location
of the user. It could also be information related to user’s computing device
like battery level, available network bandwidth, available Wi-Fi infrastructure,
and so on.
Now we shall expand this basic definition of context to context-aware computing.
Context-aware computing is the computing environment that is aware of the context
of the computing device, computing infrastructure, or the user. A computing
device could be any of various devices including smartphones, tablets, wearables,
or traditional devices like laptops and desktops. Computing infrastructure can
include hardware, software, applications, network bandwidth, Wi-Fi bandwidth
and protocols, and battery information.
A smartphone, e.g., is a computing device that is aware of the surrounding
context. The computing infrastructure, such as its operating system, acquires this
context, stores it, processes it, and then responds to the context by either changing
or adapting its functionality or behavior. It will also make certain context-aware

decisions. The computing infrastructure could process and respond to the context
with minimal or no inputs from its user. Some examples of how context-aware
infrastructure does and will respond are as follows:
• A smartphone could detect that it is in a crowded place like an airport, railway
station, or mall and automatically change the device behavior to implement
noise cancellation algorithms. This would enable the device to respond better
to a user’s voice commands.
• Smartphones could detect the location of the user and alter their functionality,
for example by automatically increasing or decreasing speaker volume, changing
to silent mode if the user is in a meeting, or changing ringtones based on whether
the user is at home, at the office, or traveling by car.
• Smartphones could automatically respond to certain calls with messages if the
user is in the office or driving. They could even block some calls based on the
user's location context.
• Wearables could use environmental context and automatically compensate their
calculations for calories burned.
• A smart watch could automatically adjust for daylight saving time or time zone
changes based on the location context.
• Traditional or contemporary smart devices can use location-based services to
suggest dining locations, entertainment centers in the area, or even emergency
services like hospitals and urgent care centers.
A context-aware device can acquire context data through various mechanisms
like generic or specific sensors, through the Internet, via GPS, or through a history
of logs, past decisions, locations, or actions. Today sensor types and availability
have increased and become more sophisticated. This enables a large
number of context-aware use cases on devices like tablets, wearables, smartphones,
and even on traditional laptops and desktops. Even basic gyroscopes,
accelerometers, and magnetometers can acquire direction and orientation data,
resulting in use cases like shutting down when an accidental fall is detected, or
suggesting an upcoming dining place or gas station based on the user's current location.
Thus context awareness is now becoming a necessity for various computing
devices and infrastructure, including applications, in order to make smart decisions,
predict user actions, and alter device functionality in order to reduce the
need for users to manually input context-related information (Fig.1).
LEVELS OF INTERACTIONS FOR CONTEXT-AWARE INFRASTRUCTURE
There are three levels of interactivity for context-aware computing, infrastructure,
or applications:
1. Personalization: Here users specify their own settings/environment that
controls how the context-aware infrastructure (hardware, software,
applications, and so on) should behave or respond in a given situation.

FIGURE 1 Concept of context-aware computing


2. Passive context awareness: In this case the context-aware infrastructure
provides the user with information from the sensors or changes that occurred
in the previous context; however the infrastructure does not act or change
behavior based on this. The user decides on the course of action based on the
updated context information.
3. Active context awareness: In this case the context-aware infrastructure
collects, processes, and takes all required actions based on the sensor or
context information. It offloads the work from the user by taking active
decisions.
Table 1 lists some categories of context-aware applications and services
based on the level of interaction with the user.
UBIQUITOUS COMPUTING
The word ubiquitous means omnipresent, universal, global, or ever-present.
Ubiquitous computing means a computing environment that appears to be present
everywhere, anywhere, and anytime. Unlike a traditional unconnected desktop
computer, which is stationary and can only be accessed while sitting in front of it,
the concept of ubiquitous computing points to the availability of computing power
through the use of any device or infrastructure, in any location, in any format, and at
any given time.

Table 1 Context-Based Services in Mobile Computing


A user today interacts with the computing environment through a number of
different devices like laptops, smartphones, tablets, phablets, and even connected
home appliances like microwaves or refrigerators. With the availability of wearables
like smart watches and Google Glass, the access to the underlying computing
environment has become really ubiquitous.
There are many essential components of compute infrastructure that enable
the concept of ubiquitous computing. Some of these components are: the Internet,
wireless or network protocols, operating systems supporting ubiquitous behavior,
middleware or firmware, sensors and context-aware devices, microprocessors, and
other computer hardware.
Ubiquitous computing can provide users with two crucial user-experience
enhancing qualities: “invisibility” and “proactivity.” For example, imagine a
completely context-aware shopping experience where the user does not have to wait
at traditional checkout lines; instead, the system can automatically scan the basket for all
goods, scan the user’s device/identity, and charge the relevant credit card based
on the user’s preferences. In this case the individual serial process of checkout is
completely invisible to the user and the system can be proactive in identifying the
user and payment method, thereby enhancing the user’s shopping experience in
terms of time and ease of use.
In ubiquitous computing, computers are no longer tied to physical space like
in a computer room or laboratory, but can be deployed and accessed at arbitrary
locations throughout the world. Due to this phenomenon, the following changes
have occurred in computing devices:
1. Form factor changes: These simple form factor changes in display size,
hardware sizes, and so on supported the physical movement of computers
outside of the traditional room. However, such computers lacked sensitivity to
any attributes of the surrounding environment.
2. Context-sensitive changes: Changes needed to be made to overcome the
drawback of insensitivity to the surrounding environment. Initially the
sensitivity was limited to detecting other compute devices nearby, but later it
expanded to parameters like time of day, light level, amount of nearby
cell-phone traffic at the current physical location, identity or role of the person
using the computer, roles of other persons near the computer, vibration, and so on.
In general, context-aware computing attempts to use When (time), Where
(location), Who (identity), What (activity), and Why (usage) as part of its
decision-making algorithms.
Examples: In an exercise room, a context-aware system can infer the possible
musical tastes of the user and control the sound system and the type of music
accordingly. In another use case, an algorithm can use various other
environmental parameters, like sound or light level, to infer whether a
particular message should be sent to the user or not.
CHALLENGES OF UBIQUITOUS COMPUTING
The key issue [2] with sensors and their networks, however, is that since sensors
are inaccurate, they make the computing environment uncertain and probabilistic.
The following are a few examples of uncertainties:
Where uncertainty: Location sensors report a probability distribution over the
"true" location in X, Y, and Z space.
Who uncertainty: A face recognition sensor returns the probability that it has
just seen a particular person, as a probability distribution.
What uncertainty: A camera sensor trying to recognize an object will send a
set of estimates as to the object seen (again a probability distribution).
Let us now also explore the system level challenges of ubiquitous
computing.
Power management: Ubiquitous computing requires ubiquitous power. There
are three main components or sources of power consumption in ubiquitous
computing devices: processing, storage, and communication.
Processing for ubiquitous computing platforms is highly variable since it can
vary from simple applications to computationally intensive tasks. There are many
controls available for controlling power consumption in a single processing unit,
such as power-gating certain units or blocks inside the processor when not in use
or lowering operating voltages to slow down energy consumption. The platform
can also use multiple task-specific processing units or processors to perform
specific tasks, thereby gating power to other blocks when not needed for that specific
task. For example, one could have a small processor for addressing and
handling sensor interrupts/data, while utilizing a high-performance processor for
full-function computations and a network processor for processing network data.
The software in the case of multiple processing components needs to be able to
dynamically control and gate power to certain blocks, while running required processing
components at full voltage or reduced voltage depending on the requirement
of a particular task.
Just like the processing units, the wireless interface and protocol used affect the
power requirements and policy of a ubiquitous computing platform. For example,
Wi-Fi, Bluetooth, ZigBee, and ultra-wide band are some of the standards with
varying capabilities and characteristics. Each protocol has defined power
and performance settings (like transmit strength and duty cycle) that can be used
effectively to manage power in the platform. Multiple protocols can be used in
the platform depending on the targeted use cases. For example, Wi-Fi is used for
home networks and Internet, while Bluetooth is used for hands-free or voice communication
on mobile phones. The operating software on a ubiquitous platform
needs to transparently provide the user with services that consider different
power-performance characteristics of these protocols like energy consumption per
bit, standby power, and so forth.
The third component that affects the power profile of a ubiquitous system is
the storage media, such as SRAM, DRAM, flash, and disc drives. Each of these
storage types has a different power profile (idle power, active power, leakage
power, and so on). For example, SRAM can be lower power than DRAM, while
FLASH has a better idle power profile. The software will need to deploy various
schemes to manage power consumption and access to these various storage
options on the platform.
Limitations of wireless discovery
The world has moved from the era of one person, one computer to one person,
multiple computing devices. Today an individual has multiple devices: a
desktop computer, notebook, tablets, smartphones, and other such
portable computing devices that share the same/surrounding space at home or
office. With multiple computing devices being associated with each individual,
the physical and virtual management of these devices becomes challenging. In
the future, we could also have embedded processors in numerous household
and office products trying to identify or associate with an individual. This
further complicates the management of computing devices associated with
each person in that space.
Such a collection of small devices needs to be found in the surroundings
(home/office), identified each by its type (phone, notebook, tablet, or the
like) and functionality, and then each device needs to be associated with a
particular user/system. Today there is some kind of unique name/IP/MAC
address to identify these devices, but such identification may not always be
available, as in the case of embedded systems. It is also possible that ubiquitous
and/or pervasive devices may not be plugged into a wired network. Such ubiquitous
devices that are coexisting in the same space will not only have to be identified
appropriately by the managing software but will also need to be connected
with other available devices based on their functionality, user preference, and
authenticity.
User interface adaptation
Ubiquitous computing refers to different types of devices ranging from small
sensors to tablets to notebooks and desktops to complex computing devices.
Each one of these will have varying display types and sizes. An application
that runs on a smaller display of a smartphone should work as effectively on
larger screens of desktop computers. The physical difference of displays should
not matter to the user experience across these different devices. The user
should be able to manipulate the touch controls and tiny menu on smartphones
as easily as on larger displays of notebooks or desktops. So an application
designed for a smartphone with a smaller display should be able to adapt
easily to a larger display size when that display size is available, and vice versa:
applications designed for a larger display should adapt to the
smaller smartphone display size. A pragmatic approach would be to generate
user interface components based on underlying basic user definition plus
knowledge of target display capabilities on the go. To build such applications,
four main components are needed:
1. A user interface specification language,
2. A control protocol that provides an abstracted communication channel
between the application and user interface,
3. Appliance adaptors, allowing the control protocol to be translated into the
primitives available on the target device, and
4. The graphic user interface generator.
Since a user interface is visible to the customer, it is important to maintain
its usability across multiple display targets. But user interface designers would
not know how their application would appear on various different screen sizes
used by customers. In ubiquitous computing environments, the range of target
screen sizes is much greater than what is found in traditional desktop or notebooks.
Hence, significant software engineering hurdles still remain in creating
standards and the basic mechanisms to generate and display content to the
user.
Location-aware computing
Ubiquitous computing uses the location of the device to enhance the user experience,
and its most important feature is to customize the services made available
to the user, such as automatically locating other devices nearby, remembering
them, and then offering services/data to the user after appropriate user
authentication.
The location context is not just limited to knowledge of where a user is but
also includes knowledge of who the user is and who else is near that user. Such
context can also include historical usage of the user and thereby determine applications
that the user might want to access based on history, such as a scenario
where a ubiquitous system supporting location context automatically controls
device volume or notification alerts based on whether the user is in a crowded
place like a shopping mall or in a quiet place like a library. Another scenario
would be to determine whether there are people around and whether the user is
likely to be in some meeting with them. It can also control the display brightness
based on time and location of the user.
There are many location-based services that can be offered by applications,
such as finding nearby restaurants, cheap gas stations, localizing Internet searches,
and so on.
Traditional location-context-based systems have the limitation of "uncertainty
of location estimates," where it is not possible for the system to know the
exact location of the device/user; the system can only describe a range of
possible locations. To resolve this, a fusion of several sources of location information
can be used to improve accuracy of location and allow users/applications
to understand and compensate for the error distribution of the estimated
location.
CONTEXT
Context embodies the idea of "situated cognition." For mobile computing, context
can be defined as an environment or situation surrounding a user or a device.
Context can be categorized based on the location of the user or the device, identity
of the user, activity performed by the user or the device, and time of the task,
application or the process. Context can be used to authenticate the identity of the
user or the device, to perform authorization of location, function or data and to
provide services.
COMPUTING CONTEXT
A computing context is information about a situation, environment, identity,
time, or location regarding the user, people, places, or things. This information
is then used by context-aware computing devices to anticipate user
requirements and predictably offer enriched, situation-aware and usable content,
functions, and experiences.
Fig. 2 shows examples of context environment that are applicable to
context-aware computing. These can be categorized into three main areas:
• Physical context—lighting, noise levels, traffic conditions, and temperature.
• User context—user’s profile, biometric information, location, people nearby,
current social situation.
• Time context—time of a day, week, month, and season of the year.
PASSIVE VERSUS ACTIVE CONTEXT
Active context awareness refers to processing that changes its content on its
own based on the measured sensor data. An example would be how time and
location change in smartphones based on where the user is (provided the
“auto-update” is selected for these features in user settings). Active context is
also considered proactive context.
Passive context awareness refers to processing where an updated context
(based on sensor data) is presented to the user and user then has to decide if,
how, and when the application should change. For example, when the user is in a
different time zone or location and “auto-update” is turned off in the user settings
for these features, then the smartphone will not automatically update time and

FIGURE 2 Example of context environment

Table 2 Passive and Active Context Responses to Service/Sensor Inputs


location but instead will prompt the user with all required information and let the
user decide on subsequent action.
Table 2 shows device actions to inputs based on passive and active context.

CONTEXT-AWARE APPLICATIONS
Context information can be used in software applications [3] to enhance user experience
and facilitate effective hardware and software resource usage. It can be used to
personalize user interface, add or remove drivers, applications, and software modules,
present context-based information to user queries, and perform context-driven
actions. Following are some examples of applications that use context information.
• Proximate selection: Proximate selection refers to a user interface that
highlights the objects or information in proximity to the user at a
particular instant of a query. Such a user interface can use the user's current
location as a default and can offer the user the option to connect to or use nearby
input/output devices such as printers, audio speakers, display screens, and so
on. It can also offer to connect to or share information with other users within
a preset proximity, and it can provide information about nearby
attractions and locations that the user might be interested in visiting/exploring, such
as restaurants, gas stations, sports stadiums, and so on.
• Automatic contextual reconfiguration: The process of adding or removing
software components, or changing the interaction between these components
is referred to as automatic contextual reconfiguration. For example, device
drivers can be loaded based on user profile. The context information can thus
be used to support personalized system configurations.
• Contextual information and commands: By using context information such as
location or user preferences, the software can present the user with commands
that are filtered or personalized with context (e.g., a send-file command will
send to the nearby connected device by default), or it can present the user
with certain execution options based on the current location, such as offering
to silence the mobile device while in a library.
• Context-triggered actions: The software or applications can automatically
invoke certain actions based on if-then condition-action rules. For example,
applications can offer automatic reminders to check out certain reading
materials, or they can automatically put the mobile device in silent mode when the
user is detected near a library. Such automatic actions, however, require a higher
degree of context-information accuracy, as sketched below.
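A minimal sketch of such if-then condition-action rules; the context keys and action names below are invented for illustration only.

RULES = [
    (lambda ctx: ctx.get("location") == "library", "set_ringer_silent"),
    (lambda ctx: ctx.get("activity") == "driving", "reply_with_sms"),
    (lambda ctx: ctx.get("battery", 100) < 15,     "enable_power_saver"),
]

def evaluate(context):
    """Return the actions whose conditions match the current context."""
    return [action for cond, action in RULES if cond(context)]

print(evaluate({"location": "library", "battery": 10}))
# -> ['set_ringer_silent', 'enable_power_saver']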
LOCATION AWARENESS
Location awareness refers to the capability of a device to actively or passively
determine its location in terms of coordinates with respect to a point of reference.
Various sensors or navigational tools can be used to determine the location. Some
of the applications of location awareness are:
1. Emergency response, navigation, asset tracking, ground surveying
2. Symbolic location is a proxy for activity (e.g., being in a grocery store implies
shopping)
3. Social roles and interactions can be learned by co-location
4. Coordinate change can imply activity and mode of transportation
(i.e., running, driving).
LOCATION SOURCES IN MOBILE PHONES
There are many location technologies and sensor types that can be used in the
devices with location context awareness. Some of these technologies and sensors
are listed below:
GNSS [4] (Global Navigation Satellite System)
This system is made up of a network of satellites that transmits signals used for
positioning and navigation around the globe. Examples include GPS, GLONASS,
and GALILEO systems. Each one of these systems consists of three main segments:
(1) Space segment: This segment refers to satellites or network of satellites;
(2) Control segment: This segment refers to system of tracking stations
located around the world that controls functions like satellite orbit determination,
synchronization, and so on; (3) User segment: This segment refers to satellite
receivers and users with different capabilities.
GNSS is suitable for outdoor location context and has good coverage and
accuracy across the globe (Fig. 3).

FIGURE 3 Key segments of GNSS.

Wireless Geo
Wireless Geo refers to wireless mechanisms used to identify the actual location of the
device. In this method, the actual physical location, rather than geographic coordinates,
is provided by the underlying wireless locating engines. An example would
be the Cell ID (CID), a unique number used to identify each cell of a mobile network.
The CID-based mechanism uses the serving cell tower, its CID, and the location area
code to estimate the location of the mobile phone.
Sensors
Sensors can be used to enhance the accuracy of determining the location of a
device. For example, in dead reckoning, sensors can be used to determine relative
motion from a reference point, such as detecting whether the system moves outside
of a 3-m radius, or to determine the relative positioning of devices, as in the
case of bumping two devices against each other to establish a common reference,
after which they can track their relative positions. Sensors can also be used standalone
when other methods are not available. First let us understand what dead
reckoning is. Dead reckoning (deduced reckoning) is the process of calculating
the current position by using a previously determined reference position and advancing
that position based upon known or estimated speeds over elapsed time and course.
Although it provides good information on position, this method is prone to errors
due to factors like inaccuracy in speed or direction estimations. Errors are also
cumulative, since each new estimate has its own errors and is also based on a
previous position that already had errors. Some of the sensors used are accelerometers
and gyroscopes for acceleration/velocity integration in dead reckoning,
accelerometers for bump events, pressure sensors for elevation, and so on.
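As a concrete illustration, the following minimal Java sketch advances an estimated 2D position from a reference point and shows how a small speed estimation error accumulates over successive steps; the class name and the 2% error figure are illustrative assumptions.

// Sketch of 2D dead reckoning: advance a known reference position using
// estimated speed and heading over elapsed time. Any error in speed or
// heading is carried into every subsequent estimate (cumulative error).
public class DeadReckoning {
    private double x, y; // current estimated position in meters

    public DeadReckoning(double startX, double startY) {
        this.x = startX;
        this.y = startY;
    }

    // speed in m/s, headingRad in radians, dt in seconds
    public void advance(double speed, double headingRad, double dt) {
        x += speed * Math.cos(headingRad) * dt;
        y += speed * Math.sin(headingRad) * dt;
    }

    public static void main(String[] args) {
        DeadReckoning dr = new DeadReckoning(0, 0);
        // walk "east" at 1.4 m/s for 10 one-second steps,
        // but with a 2% speed estimation error at each step
        for (int i = 0; i < 10; i++) {
            dr.advance(1.4 * 1.02, 0.0, 1.0);
        }
        // true position is x = 14 m; the estimate is already 0.28 m off,
        // and the gap keeps growing as more steps accumulate
        System.out.printf("Estimated x = %.2f m%n", dr.x);
    }
}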
Chapter 4

Sensors
TERMINOLOGY OVERVIEW
Sensors, transducers, and actuators form the base of a sensor ecosystem. This
section covers their basic definitions.
A sensor is a device that converts physical activity or changes into an electrical
signal. It is the interface between the physical or real world and the electrical systems
and components of the computing device. In its simplest form, a sensor
responds to some kind of physical change or stimulus and outputs some form of
electrical signal or data. Sensors are required to produce data that the computing
system can process. For example, opening a washing machine door stops the washing
cycle, and opening a house door results in activation of a house alarm. Without the
sensing of these physical activities there would be no change in the washing cycle
and no triggering of the house alarm.
A transducer is a device that takes one form of input (energy or signal) and
changes it into another form, as shown in Fig. 3.1. A transducer can be part of our
earlier defined sensors. The terms sensor and transducer are often used
interchangeably, but we can differentiate them by saying that a sensor measures the
change in the physical environment and produces an electrical signal using a transducer,
where the transducer takes the measured change in the physical environment and
transforms it into a different form of energy (such as an electrical signal), as
shown in Fig. 3.2.
FIGURE 3.1
Basic concept of transducers.
FIGURE 3.2 Example of sensor with transducer.
A combination transducer can both detect one form of energy and create
an energy output. For example, an antenna can receive (detect) radio signals
and also transmit (create) radio signals.
The performance of a transducer can be measured in terms of its accuracy,
sensitivity, resolution, and range.
An actuator is a transducer that takes one form of energy as input and produces
some form of motion, movement, or action. Thus it converts some form of
energy into kinetic energy. For example, an electric motor in an elevator
converts electrical energy into the vertical movement from one floor to
another floor of the building. The following are the main categories of actuators:
1. Pneumatic: These actuators convert energy from compressed air (at high
pressure) to either linear or rotary motion. Examples include valve controls of
liquid or gas pipes.
2. Electric: These actuators convert electrical energy to mechanical energy. An
example would be an electric water pump pumping water out of a well.
3. Mechanical: These actuators convert mechanical energy into some form of
motion. An example would be a simple pulley used to pull weights.
The performance of actuators can be measured in terms of force, speed, and
durability.
SENSOR ECOSYSTEM OVERVIEW
The sensor ecosystem is complex with many significant components, players, and
segments of enabling technologies (such as sensor types and wireless protocols), manufacturers,
developers, markets, and consumers. One of the components of this ecosystem
is the set of enabling technologies. Let us look at some of the sensor types
such as location based sensors, proximity sensors, touch sensors, and biosensors.
LOCATION-BASED SENSORS
Location sensors can help enable use cases such as the ones mentioned in Table 3.1.
Table 3.1 Location Sensor Use Cases

• Transducer: a device which converts one form of energy to another
• Sensor: a transducer that converts a physical phenomenon into an electric signal
• an interface between the physical world and the computing world
• Actuator: a transducer that converts an electric signal to a physical phenomenon
From Physical Process to Digital Signal: Sensor/Actuator System

Sensor-to-Signal Interface
• Action of the environment on a sensor causes it to generate an electrical signal directly
• voltage (V), current (I), or charge (Q) source
• Action of the environment on a sensor changes an electrical parameter that we can measure
• resistance changes: V = I · R (R = resistance)
• capacitance changes: C = ε · A / d (A = area, d = distance, ε = permittivity)
• inductance changes: V ~ dI/dt, I ~ ∫V dt
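As an illustration of the measured-parameter case, here is a minimal sketch that recovers a sensor's resistance from a measured voltage using V = I · R in an assumed voltage-divider arrangement; the supply voltage and fixed resistor values are made-up examples.

// Sketch: recovering a resistive sensor's value from a measured voltage.
// Assumes a voltage divider: Vsupply -- Rfixed -- (Vmeasured) -- Rsensor -- GND,
// so Vmeasured = Vsupply * Rsensor / (Rfixed + Rsensor).
public class ResistiveSensor {
    static double sensorResistance(double vSupply, double rFixed, double vMeasured) {
        // Solve the divider equation for Rsensor
        return rFixed * vMeasured / (vSupply - vMeasured);
    }

    public static void main(String[] args) {
        double vSupply = 3.3;     // volts (assumed supply)
        double rFixed = 10_000;   // ohms (assumed fixed resistor)
        double vMeasured = 1.65;  // volts read by an ADC
        System.out.printf("Rsensor = %.0f ohms%n",
                sensorResistance(vSupply, rFixed, vMeasured));
        // prints 10000: equal voltages imply equal resistances
    }
}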

Sensor Types

Sensor Types: Power Supply
• Modulating
• Also known as active sensors
• They need auxiliary power to perform their function
• Self-Generating
• Also known as passive sensors
• They derive their power from the input
Sensor Types: Operating Mode
• Deflection
• The measured quantity produces a physical effect
• Generates an opposing effect which can be measured
• Faster
• Null
• Applies a counter force
• To balance the deflection from the null point (balance condition)
• Can be more accurate but slower
Sensor Types: Physical Property
• Temperature
• Pressure
• Humidity
• Light
• Microphone (sound)
• Motion detector
• Chemical detector
• Image Sensor
• Flow and level sensor
• …
Sensor Types: HW & SW
• Hardware-based sensors
• Physical components built into a device
• They derive their data by directly measuring specific environmental properties
• Software-based sensors
• Not physical devices, although they mimic hardware-based sensors
• They derive their data from one or more hardware-based sensors
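For example, Android's orientation reading is a software-based sensor derived from two hardware sensors. The following sketch, which assumes recent accelerometer and magnetometer readings are already available in the two input arrays, uses the standard SensorManager helper methods.

// Sketch: a software-based "orientation sensor" derived from two hardware
// sensors (accelerometer + magnetic field) on Android. Assumes `gravity`
// and `geomagnetic` hold recent raw readings from those sensors.
import android.hardware.SensorManager;

public class OrientationFromRawSensors {
    public static float[] computeOrientation(float[] gravity, float[] geomagnetic) {
        float[] rotation = new float[9];
        float[] inclination = new float[9];
        float[] orientation = new float[3]; // azimuth, pitch, roll (radians)
        if (SensorManager.getRotationMatrix(rotation, inclination,
                                            gravity, geomagnetic)) {
            SensorManager.getOrientation(rotation, orientation);
        }
        return orientation;
    }
}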
Sensor Types: Function Type
• Motion sensors
• Measure acceleration forces and rotational forces along three axes, e.g., accelerometer,
gyroscope, etc.
• Position sensors
• Measure the physical position of a device, e.g., GPS, proximity sensor, etc.
• Environmental sensors
• Measure various environmental parameters, e.g., light sensor, thermometer, etc.
Sensor List

Sensor           Function Type           Software- or Hardware-based
Accelerometer    Motion sensor           Hardware-based
Gyroscope        Motion sensor           Hardware-based
Gravity          Motion sensor           Software-based
Rotation Vector  Motion sensor           Software-based
Magnetic Field   Position sensor         Hardware-based
Proximity        Position sensor         Hardware-based
GPS              Position sensor         Hardware-based
Orientation      Position sensor         Software-based
Light            Environmental sensor    Hardware-based
Thermometer      Environmental sensor    Hardware-based
Barometer        Environmental sensor    Hardware-based
Humidity         Environmental sensor    Hardware-based
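The exact set of sensors varies by device. As a short sketch, the following enumerates the sensors a particular Android device exposes using the standard SensorManager API; the class name and log tag are arbitrary.

// Sketch: enumerating all sensors available on an Android device
// and logging their names and numeric types via SensorManager.
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorManager;
import java.util.List;

public class SensorInventory {
    public static void logSensors(Context context) {
        SensorManager sm =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        List<Sensor> sensors = sm.getSensorList(Sensor.TYPE_ALL);
        for (Sensor s : sensors) {
            android.util.Log.d("SensorInventory",
                    s.getName() + " (type " + s.getType() + ")");
        }
    }
}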

Smartphone Sensing
• Light
• Proximity
• Cameras (multiple)
• Microphones (multiple)
• Touch
• Position
• GPS, Wi-Fi, cell, NFC, Bluetooth
• Accelerometer
• Gyroscope
• Magnetometer
• Pressure
• Temperature
• Humidity
• Fingerprint sensor
Sensor: GPS (Recap)
• Needs connections to at least 3 satellites for 2D positioning and 4 satellites for 3D positioning
• More visible satellites increase precision
• Based on the concept of trilateration
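A minimal 2D trilateration sketch follows: given three known anchor positions and measured distances to each, subtracting the circle equations pairwise yields a linear system for the unknown position. Real GPS solves a 3D problem that also includes receiver clock bias, so this is only an illustration of the concept; all coordinates below are made up.

// Sketch of 2D trilateration: find (x, y) from known anchor positions
// (x1,y1), (x2,y2), (x3,y3) and measured distances d1, d2, d3.
// Subtracting the circle equations pairwise gives a 2x2 linear system,
// solved here with Cramer's rule.
public class Trilateration {
    public static double[] locate(double x1, double y1, double d1,
                                  double x2, double y2, double d2,
                                  double x3, double y3, double d3) {
        double a1 = 2 * (x2 - x1), b1 = 2 * (y2 - y1);
        double c1 = d1 * d1 - d2 * d2 + x2 * x2 - x1 * x1 + y2 * y2 - y1 * y1;
        double a2 = 2 * (x3 - x1), b2 = 2 * (y3 - y1);
        double c2 = d1 * d1 - d3 * d3 + x3 * x3 - x1 * x1 + y3 * y3 - y1 * y1;
        double det = a1 * b2 - a2 * b1; // nonzero if anchors are not collinear
        return new double[] { (c1 * b2 - c2 * b1) / det,
                              (a1 * c2 - a2 * c1) / det };
    }

    public static void main(String[] args) {
        // Anchors at (0,0), (10,0), (0,10); true position is (3,4)
        double[] p = locate(0, 0, 5,
                            10, 0, Math.sqrt(65),
                            0, 10, Math.sqrt(45));
        System.out.printf("x = %.2f, y = %.2f%n", p[0], p[1]); // ~ (3.00, 4.00)
    }
}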

Location service using GPS in Android consists of five architectural components:
• GPS chip: a radio-frequency receiver that directly communicates with GPS satellites
• GPS driver: communicates with the GPS chip and provides low-level APIs to the high-level software
• GPS engine: the heart of the system; uses configuration parameters to configure the GPS, instructs the
GPS driver to detect satellites, and gets timing data from NTP servers (fast) or the Internet (slow)
• Android Location Service: consists of Android framework classes like Location Manager that
provide data/services to applications
• It also integrates location data from multiple sources (Wi-Fi, cellular, etc.)
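Below is a minimal sketch of requesting GPS updates through the Location Manager framework class mentioned above; runtime permission handling and error cases are omitted, and the class name is illustrative.

// Sketch: requesting GPS location updates through Android's LocationManager.
// Assumes ACCESS_FINE_LOCATION has already been granted at runtime.
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class GpsClient {
    public static void startUpdates(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        LocationListener listener = new LocationListener() {
            @Override
            public void onLocationChanged(Location loc) {
                android.util.Log.d("GpsClient",
                        "lat=" + loc.getLatitude() + " lon=" + loc.getLongitude());
            }
            // Empty stubs for older API levels without default implementations
            @Override public void onStatusChanged(String p, int s, Bundle e) {}
            @Override public void onProviderEnabled(String p) {}
            @Override public void onProviderDisabled(String p) {}
        };
        // provider, min interval (ms), min distance (m), callback
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000, 1, listener);
    }
}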
Sensor: Motion and Orientation
• Most sensors use the same coordinate system
• When a device’s screen is facing the user:
• The X axis is horizontal and points to the right
• The Y axis is vertical and points up
• The Z axis points out of the screen face
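A short sketch reading the accelerometer along the axes described above, using the standard Android SensorEventListener interface; the class name and log tag are illustrative.

// Sketch: reading accelerometer values along the X/Y/Z axes described above.
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class AccelReader implements SensorEventListener {
    public void start(SensorManager sm) {
        Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sm.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0]; // points right when screen faces the user
        float y = event.values[1]; // points up
        float z = event.values[2]; // points out of the screen face
        android.util.Log.d("AccelReader", "x=" + x + " y=" + y + " z=" + z);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* unused */ }
}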
Chapter 5

Mobile Application Development and Protocols


Mobile Devices as Web Clients
• Naive web access using mobile devices would be frustrating and of limited use because of:
• Slow networks, due to the wireless transmission media compared to wired networks
• Small screen sizes
• Frequent disconnections and signal fading that occur when a user moves around
Mobile Devices as Web Clients (cont.)
• The first two problems can partially offset each other: content compact enough for a
small screen also places less load on the slow network.
• The World Wide Web Consortium (W3C) is a nonprofit group that oversees the
formation of various Web standards.
• In 1998, W3C announced the creation of a special version of HTML for mobile
devices, called Compact HTML (C-HTML).
• In C-HTML, advanced web features such as fonts, frames, tables, graphics, and
dynamic content were omitted, with the intent of not only saving bandwidth but
also freeing the handheld devices of computational overload.
Wireless Application Protocol (WAP)
• WAP is not a markup language like C-HTML; it is a complete stack of protocols.
• It is more complicated and more effective.
• The WAP architecture defines an optimized protocol stack for communication over
wireless media, a content description language, and a browser.
Traditional and WAP-based web access

WAP gateway
• The WAP gateway is responsible for converting a WAP request/response to an
HTTP request/response. This conversion is very compute-intensive, so to handle
the computational workload the WAP gateway needs to be a powerful computer.
• WAP is often described as a network-centric protocol: most of the intelligence
and computation associated with the WAP protocols is embedded in the network
rather than in the phone.
WAP protocol stack
• It is designed to be compatible with the Internet.
• Pages in WAP are converted to the HTTP and TCP protocols at the gateway.
• WAP 2.0 provides support for protocols that are counterparts of IP, TCP, and HTTP.
• WAP 2.0 is also flexible and bearer independent, meaning that WAP services can
run over any wireless data bearer technology such as SMS, GSM, GPRS, 3G, etc.

WAP Protocol Stack


WAP Architecture: Wireless Application Environment (WAE)
It includes:
• a micro-browser on the device,
• WML (the Wireless Markup Language),
• WMLS (a client-side scripting language),
• telephony services,
• a set of formats for commonly used data such as images, phone books, and calendars.
WAP Architecture: Wireless Session Protocol (WSP)
• WSP helps establish a web browsing session from a mobile handset.
• It is based on the HTTP protocol and provides basic session state management.
WAP Architecture: Wireless Transaction Protocol (WTP)
• It is considered to be the equivalent of the TCP layer of the TCP/IP stack, but it
takes into account the availability of low bandwidth by providing different classes
of transaction services.
• WTP transaction services include reliable requests and responses that have been
adapted to the wireless world.
• WTP handles the problem of packet loss more effectively than TCP. Packet loss is
a fairly common phenomenon in wireless technologies due to factors such as
atmospheric noise, signal fading, and handoff. Packet losses are often misinterpreted
by TCP as network congestion, thereby drastically reducing the network throughput.
WAP Architecture: Wireless Transport Layer Security (WTLS)
• WTLS is the security layer that is used to transfer data securely between a mobile
device and a server.
• It provides support for data security and privacy, authentication, as well as
protection against denial-of-service (DoS) attacks.

WAP Architecture: Wireless Datagram Protocol (WDP)
• WDP is the bottom-most protocol in the WAP protocol suite. It functions as an
adaptation layer in a wireless communication environment that makes every data
network look like UDP to the upper layers, by providing services for transport of
data in the unreliable wireless environment.
• WDP invokes the services of one or more data bearers such as SMS, GPRS,
CDMA, UMTS, etc.
WAP Architecture: Bearer Interfaces
• A bearer is a low-level transport mechanism for network messages.
• Considering the diversity of transport technologies, WAP is designed to operate
over bearers ranging from SMS (Short Message Service) to GPRS (General Packet
Radio Service), UMTS, and IP.
• WAP supports circuit-switched bearer services such as dial-up networking using
IP and the Point-to-Point Protocol (PPP). However, packet-switched bearer services
are much better suited to mobile devices than circuit-switched ones, as they can
provide more reliable service in the unreliable wireless connection environment.
Operating Systems for Mobile Computing
Operating System Responsibilities in Mobile Devices
• Managing resources: The resources that are managed by the operating system
include the processor, memory, files, and various types of attached devices such
as the camera, speaker, keyboard, and screen.
• Providing different interfaces: A mobile OS has to manage many interfaces at the
same time: mainly the user interface, network interfaces, and interfaces to other devices.
Mobile OS: Basic Concepts
• The OS is viewed as providing a set of services to the application programs.
• The OS is usually structured into a kernel layer and a shell layer.
• The shell essentially provides facilities for user interaction with the kernel.
• The kernel executes in supervisor mode and can run privileged instructions that
cannot be run in user mode.
• The shell programs are usually not memory resident.
• The kernel of the operating system is responsible for interrupt servicing and
management of processes, memory, and files.
• Two popular OS kernel architectures are used:
• Monolithic kernel (Windows, Unix)
• Microkernel
Monolithic Kernel
• During booting, the kernel is loaded and
continues to remain in the main memory of the
device.
• This implies that in a virtual memory system,
paging does not apply to the kernel code and
kernel data.
• So, the kernel is called the memory resident
part of an operating system.
Monolithic Kernel Disadvantages
• The main problem with the monolithic kernel design is that it makes the kernel
massive, nonmodular, and hard to tailor, maintain, extend, and configure.
• A bug in any part of the kernel code can crash the system, thus crashing the
debugger too.
Monolithic kernel architecture

Microkernel
• Considering the disadvantages of the monolithic kernel design,
the microkernel design approach has been proposed.
• The microkernel design approach tries to minimize the size of
the kernel code. Only the basic hardware-dependent
functionalities and a few critical functionalities are implemented
in the kernel mode and all other functionalities are implemented
in the user mode.
• Most of the operating system services run as user-level processes. The main
advantage of this approach is that it becomes easier to port, extend, and maintain
the operating system code, since kernel code is very difficult to debug compared
to application programs.
Microkernel Architecture

Mobile Phone OSs
• Windows CE, Pocket PC, Windows Mobile,
Windows Phone 7
• Palm OS
• Symbian OS
• iOS
• Android
Android OS
• In 2005, Google acquired a small startup company
called Android, which was developing an operating
system for mobile devices based on Linux.
• Google set up the Open Handset Alliance in 2007. It
is a group of 82 technology and mobile
communication companies that are collaborating to
develop the Android operating system as an open
source software for mobile devices.
• Android allows application developers to write code
in the Java language.
Android OS
• It facilitates the development of applications
with the help of a set of core Java libraries
developed by Google
Android Software Stack
• Application layer: The Android operating system comes with a set of basic
applications such as:
• a web browser, email client, SMS program, maps, calendar, and contacts
repository management programs. All these applications are written using
the Java programming language.
• Android applications do not have control over their own priorities.
• This design is intentional and is intended to help aggressively manage
resources to ensure device responsiveness, and even to kill an application
when needed.
Android Software Stack
• Application framework: An application framework is used to implement a
standard structure for different applications.
• The application framework essentially provides a set of services that an
application programmer can make use of.
• The services include managers and content providers. Content providers
enable applications to access data from other applications. A notification
manager allows an application to display custom alerts on the status bar.
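As an illustration of content providers, the following sketch queries the contacts provider through a ContentResolver; it assumes the READ_CONTACTS permission has been granted at runtime, and the class name is illustrative.

// Sketch: reading data exposed by another application through a content
// provider. Queries the contacts provider; requires READ_CONTACTS permission.
import android.content.Context;
import android.database.Cursor;
import android.provider.ContactsContract;

public class ContactReader {
    public static void logContactNames(Context context) {
        Cursor cursor = context.getContentResolver().query(
                ContactsContract.Contacts.CONTENT_URI,
                new String[] { ContactsContract.Contacts.DISPLAY_NAME },
                null, null, null);
        if (cursor != null) {
            try {
                while (cursor.moveToNext()) {
                    android.util.Log.d("ContactReader", cursor.getString(0));
                }
            } finally {
                cursor.close(); // always release the cursor's resources
            }
        }
    }
}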
References
1. Waltenegus Dargie and Christian Poellabauer, "Fundamentals of Wireless Sensor Networks:
Theory and Practice", Wiley, 1st edition, 2010.
2. Manish J. Gajjar, "Mobile Sensors and Context-Aware Computing", Morgan Kaufmann
(an imprint of Elsevier), 1st edition, 2017.
