Mobile Computing
Contents
Chapter 1: Mobile Computing
Chapter 2: Localization
Chapter 3: Context-Aware Computing
Chapter 4: Sensors
Chapter 5
References
Chapter 1
Mobile computing
What is computing?
Computing is the use of computers to complete a task. It involves both hardware and
software performing some sort of task with a computer.
Examples of computing being used in everyday life: Swiping a debit card, sending an email, or
using a cell phone can all be considered forms of computing.
What is mobile computing?
Mobile computing is the capability to change location while communicating, invoking
computing services at remote computers. It is the ability to compute remotely while on
the move, making it possible to access information from anywhere and at any time.
Mobile computing involves three major concepts:
1. Mobile communication
2. Mobile hardware
3. Mobile software
Mobile communication
Mobile communication refers to the infrastructure put in place to ensure that seamless
and reliable communication is possible. This includes the protocols, services, bandwidth,
and portals necessary to facilitate and support mobile services.
Mobile hardware
Mobile hardware includes mobile devices or device components that receive or access the
service of mobility; they range from portable laptops and smartphones to tablet PCs and
personal digital assistants (PDAs). These devices use an existing and established network
to operate on; in most cases, it is a wireless network.
Mobile software
Mobile software is the actual program that runs on the mobile hardware. It deals with the
characteristics and requirements of mobile applications. This is the engine of the mobile device.
In other words, it is the operating system of the device, the essential component that
operates it.
Since portability is the main factor, this type of computing ensures that users are not tied or
pinned to a single physical location, but are able to operate from anywhere. It incorporates all
aspects of wireless communications.
Limitations of mobile computing
1. Resource constraints: Battery needs and recharge requirements are the biggest constraints of
mobile computing. When a power outlet or portable generator is not available, mobile
computers must rely entirely on battery power. Combined with the compact size of many
mobile devices, this often means unusually expensive batteries must be used to
obtain the necessary battery life.
2. Interference: There may be interference in wireless signals affecting the quality of service.
Weather, terrain, and the range from the nearest signal point can all interfere with
signal reception. Reception in tunnels, some buildings, and rural areas is often poor.
3. Bandwidth: There may be bandwidth constraints due to limited spectrum availability at a given
instant, causing connection latency. Mobile Internet access is generally slower than direct cable
connections, using technologies such as GPRS and EDGE, and more recently 3G networks. These
networks are usually available within range of commercial cell phone towers. Higher-speed
wireless LANs are inexpensive but have very limited range.
4. Dynamic changes in the communication environment: Variations in signal power within a
region can cause link delays and connection loss.
5. Network issues: Because ad hoc networks are used, issues arise relating to connection
discovery, service to destination, and connection stability.
6. Interoperability: The varying protocol standards available in different regions may lead
to interoperability issues.
7. Potential health hazards: People who use mobile devices while driving are often
distracted from driving and are thus assumed to be more likely to be involved in traffic
accidents. Cell phones may interfere with sensitive medical devices. There are allegations
that cell phone signals may cause health problems.
8. Human interface with device: Screens and keyboards tend to be small, which may make them
hard to use. Alternate input methods such as speech or handwriting recognition require training.
Advantages of mobile computing
1. No location constraint: Mobile computing frees the user from being tied to a single location;
increased bandwidth and transmission speed make it possible to work on the move.
2. It saves time and enhances productivity, with a better return on investment (RoI).
3. It provides entertainment, news, and information on the move with streaming data, video, and
audio.
4. Streamlining of business processes: Mobility has enabled the streamlining of business processes,
reducing cumbersome emails, paper processing, and delays in communication and transmission.
5. Newer job opportunities for IT professionals have emerged, and IT businesses now have an added
service in their portfolio which will only keep growing, as indicated by mobile computing trends.
Mobile computing has developed through a series of waves:
Portability
• Reducing the size of hardware to enable the creation of computers that could be physically
moved around relatively easily.
Miniaturization
• Creating new and significantly smaller mobile form factors that allowed the use of personal
mobile devices while on the move
Connectivity
• Developing devices and applications that allowed users to be online and communicate via
wireless data networks while on the move
Convergence
• Integrating emerging types of digital mobile devices, such as Personal Digital Assistants
(PDAs), mobile phones, music players, cameras, games, etc., into hybrid devices
Divergence
• Taking an opposite approach to convergence by promoting specialized, single-purpose
devices (information appliances), each designed for one particular task
Applications (Apps)
• The latest wave of applications (apps) is about developing matter and substance for use and
consumption on mobile devices, and making access to this fun or functional interactive
application content easy and enjoyable
Digital Ecosystems
• The emerging wave of digital ecosystems is about the larger wholes of pervasive and
interrelated technologies that interactive mobile systems are increasingly becoming a part of
Chapter 2
Localization
Sensors monitor phenomena in the physical world, and the spatial relationships between the
sensors and the objects and events of the physical world are an essential component of the sensor
information. Without knowing the position of a sensor node, its information will only tell
part of the story. For example, sensors deployed in a forest to raise alarms whenever wildfires
occur gain significantly in value if they are able to report the spatial relationship between
them and the monitored event. Further, accurate location information is needed for various
tasks such as routing based on geographic information, object tracking, and location-aware
services. Localization is the task of determining the physical coordinates of a sensor node
(or a group of sensor nodes) or the spatial relationships among objects. It comprises a set
of techniques and mechanisms that allow a sensor to estimate its own location based on
information gathered from the sensor’s environment. While the Global Positioning System
(GPS) is undoubtedly the most well-known location-sensing system, it is not accessible in
all environments (e.g., indoors or under dense foliage) and may incur resource costs unacceptable
for resource-constrained wireless sensor networks (WSNs). Therefore, this chapter
discusses various techniques and case studies for localization and location services targeted
at WSNs.
Overview
Localization in wireless sensor networks (WSNs) is crucial for providing context to sensor readings
and enabling various applications like environmental monitoring, intrusion detection, and
surveillance. The location of sensor nodes can be expressed using global metrics like GPS coordinates
or relative metrics based on arbitrary coordinate systems. Localization information should ideally
possess qualities of accuracy and precision.
There are two main types of localization metrics:
1. Global Metrics: These metrics position nodes within a global reference frame such as GPS or
UTM coordinate systems. They provide absolute positions that are universally understood.
2. Relative Metrics: These metrics are based on arbitrary coordinate systems and reference
frames, such as a sensor's distance to other sensors. They do not rely on global coordinates and
may be more suitable for localized operations. Key qualities of localization information
include accuracy and precision. Accuracy refers to how close a reading is to the ground truth,
while precision measures the consistency of readings.
In addition to physical positions, symbolic locations like "office 354" or "bathroom" may suffice for
certain applications, especially indoor tracking systems.
In many cases, not all sensor nodes have knowledge of their global coordinates. Instead, subsets of
nodes, known as anchor nodes, know their global positions. Anchor-based localization techniques
utilize these anchor nodes to help other nodes determine their positions. These techniques are
commonly used in WSNs.
Many localization techniques, including anchor-based approaches, rely on range measurements, such
as received signal strengths or time differences of arrival of ultrasound pulses. These techniques are
known as range-based localization techniques and are essential for estimating distances between
sensor nodes.
Overall, localization is a fundamental aspect of WSNs, enabling various applications and services by
providing spatial context to sensor data and facilitating efficient network operations.
A. Ranging Techniques
The foundation of numerous localization techniques is the estimation of the physical distance
between two sensor nodes. Estimates are obtained through measurements of certain
characteristics of the signals exchanged between the sensors, including signal propagation
times, signal strengths, or angle of arrival.
1. Time of Arrival
The concept behind the time of arrival (ToA) method (also called time of flight method)
is that the distance between the sender and receiver of a signal can be determined using
the measured signal propagation time and the known signal velocity. For example, sound
waves travel at 343 m/s (at 20 °C); that is, a sound signal takes approximately 30 ms to travel
a distance of 10 m. In contrast, a radio signal travels at the speed of light (about 300,000 km/s);
that is, the signal requires only about 30 ns to travel 10 m. The consequence is that radio-based
distance measurements require clocks with high resolution, adding to the cost and
complexity of a sensor network. The one-way time of arrival method measures the one-way
propagation time, that is, the difference between the sending time and the signal arrival time
(Figure 1(a)), and requires highly accurate synchronization of the clocks of the sender and
receiver. Therefore, the two-way time of arrival method is preferred, where the round-trip
time of a signal is measured at the sender device (Figure 1(b)). In summary, for one-way
measurements, the distance between two nodes i and j can be determined as:
distij = (t2 − t1) × v
where t1 and t2 are the sending and receive times of the signal (measured at the sender and
receiver, respectively) and v is the signal velocity. Similarly, for the two-way approach, the
distance is calculated as:
distij = [(t4 − t1) − (t3 − t2)] / 2 × v
where t3 and t4 are the sending and receive times of the response signal. Note that with one-way
localization, the receiver node calculates its location, whereas in the two-way approach,
the sender node calculates the receiver’s location. Therefore a third message will be necessary
in the two-way approach to inform the receiver of its location.
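To make these formulas concrete, here is a minimal Python sketch; the timestamps and the
sound velocity below are made-up example values, not values from the source text.

def one_way_toa_distance(t1, t2, v):
    # One-way ToA: signal sent at t1 (sender clock), received at t2 (receiver
    # clock); requires the two clocks to be tightly synchronized.
    return (t2 - t1) * v

def two_way_toa_distance(t1, t2, t3, t4, v):
    # Two-way ToA: request sent at t1, received at t2; response sent at t3,
    # received back at t4. Only the sender's clock measures t1 and t4, so no
    # clock synchronization between the nodes is needed.
    return ((t4 - t1) - (t3 - t2)) / 2 * v

v_sound = 343.0  # m/s at 20 °C
print(one_way_toa_distance(0.0, 0.02915, v_sound))                   # ~10 m
print(two_way_toa_distance(0.0, 0.02915, 0.03915, 0.0683, v_sound))  # ~10 m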
Figure 1. Comparison of different ranging schemes (one-way ToA, two-way ToA, and TDoA).
2. Time Difference of Arrival
The time difference of arrival (TDoA) method (also compared in Figure 1) avoids the need for
synchronized clocks by using two signals that travel at different velocities, for example, a
radio signal followed by an acoustic signal. The receiver measures the arrival times of both
signals and uses the known velocities v1 and v2 and the known delay twait between the two
transmissions to estimate the distance:
distij = (v1 × v2) / (v1 − v2) × (t4 − t2 − twait)
where t2 and t4 are the arrival times of the faster and slower signal, respectively. The
drawback is that each node needs additional hardware (e.g., an ultrasound transducer) to
produce the second signal.
3. Angle of Arrival
Another technique used for localization is to determine the direction of signal propagation,
typically using an array of antennas or microphones. The angle of arrival (AoA) is then the
angle between the propagation direction and some reference direction known as orientation. For
example, for acoustic measurements, several spatially separated
microphones are used to receive a single signal and the differences in arrival time,
amplitude, or phase are used to determine an estimate of the arrival angle, which in turn
can be used to determine the position of a node. While the appropriate hardware can obtain
accuracies within a few degrees, AoA measurement hardware can add significantly to the
size and cost of sensor nodes.
4. Received Signal Strength
A fourth ranging option is to exploit the received signal strength (RSS): since a radio signal
attenuates with distance in a predictable way, the measured receive power can be mapped to a
distance estimate. In free space, the received power Pr relates to the transmitted power Pt as:

Pr / Pt = Gt × Gr × λ² / ((4π)² × R²)

where Gt is the antenna gain of the transmitting antenna, Gr is the antenna gain of the receiving
antenna, λ is the wavelength, and R is the distance between transmitter and receiver. In practice,
the actual attenuation depends on multipath propagation effects, reflections, noise, etc.;
therefore a more realistic model replaces R² in the equation with Rⁿ, with n typically in the
range of 3 to 5.
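To illustrate how this model yields a range estimate, here is a small Python sketch that
inverts it for a given path-loss exponent n; all parameter values below are illustrative
assumptions, not values from the source.

import math

def rss_distance(pt, pr, gt, gr, wavelength, n=3.0):
    # Solve pr/pt = gt*gr*wavelength**2 / ((4*pi)**2 * R**n) for R.
    return (gt * gr * wavelength**2 * pt / ((4 * math.pi) ** 2 * pr)) ** (1.0 / n)

# Example: 2.4 GHz radio (wavelength ~0.125 m), unity antenna gains,
# 1 mW transmitted, 1 nW received, path-loss exponent n = 3.
print(rss_distance(pt=1e-3, pr=1e-9, gt=1.0, gr=1.0, wavelength=0.125))  # ~4.6 m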
B. Range-Based Localization
1. Triangulation
Triangulation uses the geometric properties of triangles to estimate sensor locations. Specifically,
triangulation relies on the gathering of angle (or bearing) measurements as described
in the previous section. A minimum of two bearing lines (and the locations of the anchor
nodes or the distance between them) are needed to determine the location of a sensor node
in two-dimensional space. Figure 2(a) illustrates the concept of triangulation using three
anchor nodes with known locations (xi, yi ) and measured angles αi (expressed relative to a
fixed baseline in the coordinate system, for example, the vertical line in the figure). If more
than two bearings are measured, the presence of noise in the measurements may prevent
them from intersecting in a single point. Therefore statistical algorithms or fixing methods
have been developed to obtain a single location estimate. For example, the maximum likelihood
estimate minimizes the difference between the measured bearings β and the bearings θ(x̂r)
predicted at the estimated position x̂r:

x̂r = arg min (1/2) [θ(x̂r) − β]ᵀ S⁻¹ [θ(x̂r) − β] = arg min (1/2) Σ(i=1..N) (θi(x̂r) − βi)² / σi²

where S = diag(σ1², ..., σN²) is the covariance matrix of the bearing measurement noise.
This non-linear least squares minimization can be performed using Gauss-Newton iterations:

x̂r,i+1 = x̂r,i + (∇θ(x̂r,i)ᵀ S⁻¹ ∇θ(x̂r,i))⁻¹ ∇θ(x̂r,i)ᵀ S⁻¹ [β − θ(x̂r,i)]

where ∇θ is the Jacobian of the predicted bearings with respect to the position estimate.
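The following Python sketch runs this Gauss-Newton iteration for bearing-only measurements.
It is a numerical illustration only: the anchor positions, noise level, and initial guess are
made-up assumptions, the bearings θ are computed with atan2, and the residual is wrapped to
avoid angle discontinuities.

import numpy as np

np.random.seed(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known positions
true_pos = np.array([3.0, 4.0])                              # unknown node

def bearings(x):
    # theta_i(x): bearing from position x to anchor i, relative to the x axis
    return np.arctan2(anchors[:, 1] - x[1], anchors[:, 0] - x[0])

beta = bearings(true_pos) + np.random.normal(0.0, 0.01, 3)  # noisy measurements
S_inv = np.eye(3) / 0.01**2                                  # inverse covariance

x = np.array([5.0, 5.0])  # initial guess
for _ in range(10):
    d2 = (anchors[:, 0] - x[0])**2 + (anchors[:, 1] - x[1])**2
    # Jacobian of theta(x) with respect to (x, y)
    J = np.column_stack([(anchors[:, 1] - x[1]) / d2,
                         -(anchors[:, 0] - x[0]) / d2])
    # residual beta - theta(x), wrapped into (-pi, pi]
    r = np.arctan2(np.sin(beta - bearings(x)), np.cos(beta - bearings(x)))
    x = x + np.linalg.solve(J.T @ S_inv @ J, J.T @ S_inv @ r)

print(x)  # converges to approximately (3, 4)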
2. Trilateration
Trilateration refers to the process of calculating a node’s position based on measured distances
between itself and a number of anchor points with known locations. Given the location
of an anchor and a sensor’s distance to the anchor (e.g., estimated through RSS measurements),
it is known that the sensor must be positioned somewhere along the circumference of
a circle centered at the anchor’s position with a radius equal to the sensor–anchor distance.
In two-dimensional space, distance measurements from at least three noncollinear anchors
are required to obtain a unique location (i.e., the intersection of three circles). Figure 2(b)
illustrates an example for the two-dimensional case. In three dimensions, distance measurements
to at least four noncoplanar anchors are required.
• n anchor nodes: xi=(xi,yi) (i=1..n), unknown node location x=(x,y)
• Distances between node and anchors known (ri, i=1..n)
• Relationships between anchor/node positions and distances (2D):

(x1 − x)² + (y1 − y)² = r1²
(x2 − x)² + (y2 − y)² = r2²
...
(xn − x)² + (yn − y)² = rn²

• After some rearrangement, and subtracting the nth equation from each of the previous ones
(to remove the squares of the unknown sensor location (x, y)), this can be represented as
Ax = b with:

A = [ 2(xn − x1)     2(yn − y1)
      2(xn − x2)     2(yn − y2)
      ...
      2(xn − xn−1)   2(yn − yn−1) ]

b = [ r1² − rn² − x1² + xn² − y1² + yn²
      r2² − rn² − x2² + xn² − y2² + yn²
      ...
      rn−1² − rn² − xn−1² + xn² − yn−1² + yn² ]

The node position is then obtained as the least squares solution x = (AᵀA)⁻¹ Aᵀ b.
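As a concrete check of this derivation, the following Python sketch builds A and b exactly as
above and solves for the node position by least squares; the anchor coordinates and node
position are made-up example values.

import numpy as np

def trilaterate(anchors, ranges):
    # anchors: (n, 2) array of known anchor positions; ranges: (n,) distances.
    x_n, y_n = anchors[-1]
    r_n = ranges[-1]
    A = 2 * (anchors[-1] - anchors[:-1])   # rows: [2(xn - xi), 2(yn - yi)]
    b = (ranges[:-1]**2 - r_n**2
         - anchors[:-1, 0]**2 + x_n**2
         - anchors[:-1, 1]**2 + y_n**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
node = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - node, axis=1)  # perfect range measurements
print(trilaterate(anchors, ranges))              # -> approximately [3, 4]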
GPS-Based Localization
The Global Positioning System (GPS) determines position from measured signal travel times.
Radio waves travel at the speed of light (about 186,000 miles per second), so if the signal's travel time
is known, the distance from the satellite to the receiver (distance = speed × time) can be
determined. Once the distance has been determined, the receiver knows that it is located
somewhere on a sphere centered on the satellite with a radius equal to the computed distance.
Repeating this process with two more satellites, the position of the receiver can be
narrowed down to the two points where the three spheres intersect. Typically, one of the two
points can be eliminated very easily, for example, because it would position the receiver far
out in space or the receiver would travel at a virtually impossible velocity.
While three satellites appear to be sufficient for localization, a fourth satellite is needed
to obtain an accurate position. Positioning via GPS relies on correct timing to make accurate
measurements, that is, the clocks of the satellites and the receivers must be synchronized
precisely. Satellites are equipped with four atomic clocks (synchronized to each other within
a few nanoseconds), providing highly accurate time readings. However, the clocks used for
GPS receivers are not nearly as accurate as the atomic clocks onboard the satellites, introducing
measurement errors that can have a significant impact on the quality of localization.
Because radio waves travel at very high speeds (and therefore require very little time to
travel), small errors in the timing can result in large deviations in position measurements.
For example, a clock error of 1 ms would result in a position error of about 300 km. Therefore,
a fourth measurement is required, where the fourth sphere should ideally intersect the
other three spheres at the exact location of the receiver. Because of timing errors, the fourth
sphere may not intersect with all other spheres, even though we know that they are supposed
to align. If the spheres are too large, we can reduce their sizes by adjusting the clock (by
moving it forward) until the spheres are small enough to intersect in one point. Similarly, if
the spheres are too small, we adjust the clock by moving it backwards. That is, because
the timing error is the same for all measurements, a receiver can calculate the required
clock adjustment to obtain a single intersection point among all four spheres. In addition to
providing a means for clock synchronization, a fourth measurement also allows a receiver
to obtain a three-dimensional position, that is, latitude, longitude, and elevation.
While most GPS receivers available today are able to provide position measurements
with accuracies of 10 m or less, advanced techniques to further increase the accuracy are
available. For example, Differential GPS (DGPS) relies on land-based
receivers with exactly known locations to receive GPS signals, compute correction
factors, and broadcast them to GPS receivers that are then able to correct their own GPS
measurements. While it is possible to build wireless sensor networks where each sensor
has its own GPS receiver, constraints such as high power consumption, cost, and the need
for line-of-sight make a fully GPS-based solution impractical for most sensor networks.
However, GPS receivers deployed on a few nodes in a WSN may be sufficient to provide
location services based on reference points as described in the following section.
Range-Free Localization
The localization approaches discussed in the previous sections are based on distance estimations
using ranging techniques (RSS, ToA, TDoA, and AoA) and belong therefore to the
class of range-based localization algorithms. In contrast, range-free techniques estimate
node locations based on connectivity information instead of distance or angle measurements.
Range-free localization techniques do not require additional hardware and are therefore a
cost-effective alternative to range-based techniques.
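The text above does not single out one algorithm, so as an illustration here is a Python
sketch of one classic range-free scheme, centroid localization, in which a node estimates its
position simply as the centroid of all anchors whose beacons it can hear. The anchor positions
and connectivity below are made-up example values.

import numpy as np

def centroid_estimate(anchor_positions, heard):
    # anchor_positions: (n, 2) anchor coordinates;
    # heard: boolean mask marking anchors whose beacons were received.
    return anchor_positions[heard].mean(axis=0)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
heard = np.array([True, True, False, True])  # connectivity only, no distances
print(centroid_estimate(anchors, heard))     # -> [6.67, 3.33]

The accuracy is coarse, but no ranging hardware is required, which is exactly the trade-off
described above.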
Chapter 3
Context-Aware Computing
Let us start by exploring what context means. Context means a user's
preferences, likes, dislikes, location, and general awareness of the surrounding
environment in which the user is operating, located, or situated.
Examples of awareness of the surrounding environment could be information
related to weather, climate, traffic, the time of the day, or physical location
of the user. It could also be information related to the user's computing device
like battery level, available network bandwidth, available Wi-Fi infrastructure,
and so on.
Now we shall expand this basic definition of context to context-aware computing.
Context-aware computing is the computing environment that is aware of the context
of the computing device, computing infrastructure, or the user. A computing
device could be any of various devices including smartphones, tablets, wearables,
or traditional devices like laptops and desktops. Computing infrastructure can
include hardware, software, applications, network bandwidth, Wi-Fi bandwidth
and protocols, and battery information.
A smartphone, for example, is a computing device that is aware of the surrounding
context. The computing infrastructure, such as its operating system, acquires this
context, stores it, processes it, and then responds to the context by either changing
or adapting its functionality or behavior. It will also make certain context-aware
decisions. The computing infrastructure could process and respond to the context
with minimal or no input from its user. Some examples of how context-aware
infrastructure does and will respond are as follows:
• A smartphone could detect that it is in a crowded place like an airport, railway
station, or mall and automatically change the device behavior to implement
noise cancellation algorithms. This would enable the device to respond better
to a user’s voice commands.
• Smartphones could detect the location of the user and alter their functionality
accordingly, for example, by automatically increasing or decreasing speaker volume,
changing to silent mode if the user is in a meeting, or changing ringtones based
on whether the user is at home, at the office, or traveling by car.
• Smartphones could automatically respond to certain calls with messages if the
user is in the office or driving. They could even block some calls based on the
user's location context.
• Wearables could use environmental context and automatically compensate their
calculations for calories burned.
• A smart watch could automatically adjust for daylight saving time or time zone
changes based on the location context.
• Traditional or contemporary smart devices can use location-based services to
suggest dining locations, entertainment centers in the area, or even emergency
services like hospitals and urgent care centers.
A context-aware device can acquire context data through various mechanisms
like generic or specific sensors, through the Internet, via GPS, or through a history
of logs, past decisions, locations, or actions. Today sensor types and availability
have increased and become more sophisticated. This enables a large
number of context-aware use cases on devices like tablets, wearables, smartphones,
and even on traditional laptops and desktops. Even basic gyroscopes,
accelerometers, and magnetometers can acquire direction and orientation data,
resulting in use cases like shutting down when an accidental fall is detected, or
suggesting an upcoming dining place or gas station based on the current user location.
Thus context awareness is now becoming a necessity for various computing
devices and infrastructure, including applications, in order to make smart decisions,
predict user actions, and alter device functionality in order to reduce the
need for users to manually input context-related information (Fig.1).
LEVELS OF INTERACTIONS FOR CONTEXT-AWARE INFRASTRUCTURE
There are three levels of interactivity for context-aware computing, infrastructure,
or applications:
1. Personalization: Here users specify their own settings/environment that
control how the context-aware infrastructure (hardware, software,
applications, and so on) should behave or respond in a given situation.
2. Passive context awareness: Here the infrastructure presents updated context or
sensor information to the user, but the user decides whether and how to act on it.
3. Active context awareness: Here the infrastructure autonomously changes its
behavior or performs actions based on the sensed context, with little or no user
intervention.
CONTEXT-AWARE APPLICATIONS
Context information can be used in software applications [3] to enhance user experience
and facilitate effective hardware and software resource usage. It can be used to
personalize the user interface, add or remove drivers, applications, and software modules,
present context-based information to user queries, and perform context-driven
actions. Following are some examples of applications that use context information.
• Proximate selection: Proximate selection refers to a user interface that
highlights the objects or information in the proximity of the user at the
particular instance of a query. Such a user interface can use the user's current
location as a default and can offer the user options to connect to or use nearby
input/output devices such as printers, audio speakers, display screens, and so
on. It can also offer to connect to or share information with other users within
a preset proximity, and it can provide information about nearby
attractions and locations that the user might be interested in visiting or exploring,
such as restaurants, gas stations, sports stadiums, and so on.
• Automatic contextual reconfiguration: The process of adding or removing
software components, or changing the interaction between these components
is referred to as automatic contextual reconfiguration. For example, device
drivers can be loaded based on user profile. The context information can thus
be used to support personalized system configurations.
• Contextual information and commands: By using context information such as
location or user preferences, the software can present the user with commands
that are filtered or personalized with context (e.g., a send-file command will
send to the nearby connected device by default), or it can present the
user with certain execution options based on the current location, such as
offering to silence the mobile device while in a library.
• Context-triggered actions: The software or applications can automatically
invoke certain actions based on if-then condition-action rules, as sketched
below. For example, applications can offer automatic reminders to check out
certain reading materials, or automatically put the mobile device in silent mode
when the user is detected in a library. Such automatic actions, however, require
a higher degree of context information accuracy.
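Such condition-action rules can be expressed very directly in code; the context keys,
conditions, and actions in this Python example are invented purely for illustration.

rules = [
    (lambda ctx: ctx.get("location") == "library", "set ringer to silent"),
    (lambda ctx: ctx.get("location") == "car" and ctx.get("speed", 0) > 10,
     "auto-reply to incoming calls"),
    (lambda ctx: ctx.get("battery", 100) < 15, "enable power-saving mode"),
]

def apply_rules(context):
    # Fire every rule whose 'if' condition matches the current context.
    return [action for condition, action in rules if condition(context)]

print(apply_rules({"location": "library", "battery": 80}))
# -> ['set ringer to silent']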
LOCATION AWARENESS
Location awareness refers to the capability of a device to actively or passively
determine its location in terms of coordinates with respect to a point of reference.
Various sensors or navigational tools can be used to determine the location. Some
applications of location awareness are:
1. Emergency response, navigation, asset tracking, ground surveying
2. Symbolic location is a proxy for activity (e.g., being in grocery store implies
shopping)
3. Social roles and interactions can be learned by co-location
4. Coordinate change can imply activity and mode of transportation
(i.e., running, driving).
LOCATION SOURCES IN MOBILE PHONES
There are many location technologies and sensor types that can be used in the
devices with location context awareness. Some of the technologies and sensors
are listed below:
GNSS [4] (Global Navigation Satellite System)
This system is made up of a network of satellites that transmit signals used for
positioning and navigation around the globe. Examples include GPS, GLONASS,
and GALILEO systems. Each one of these systems consists of three main segments:
(1) Space segment: This segment refers to satellites or network of satellites;
(2) Control segment: This segment refers to system of tracking stations
located around the world that controls functions like satellite orbit determination,
synchronization, and so on; (3) User segment: This segment refers to satellite
receivers and users with different capabilities.
GNSS is suitable for outdoor location context, has good coverage and
accuracy across the globe (Fig. 2.3).
FIGURE 2.3 Key segments of GNSS.
Wireless Geo
Wireless Geo refers to wireless mechanisms used to identify the actual location of the
device. In this method, the actual physical location, rather than geographic coordinates,
is provided by the underlying wireless locating engines. An example would
be Cell ID (CID), which is a unique number used to identify each mobile/smartphone.
The CID-based mechanism uses the cell tower, CID, and location area code to
identify the mobile phone.
Sensors
Sensors can be used to enhance the accuracy of determining the location of a
device. For example, in dead reckoning, sensors can be used to determine relative
motion from a reference point, such as to detect whether the system moves outside
of a 3-m radius, or to determine relative positioning of devices, as in the case of
bumping two devices against each other to establish a common reference, after which
they can track their relative positions. Sensors can also be used standalone
when other methods are not available. First let us understand what dead
reckoning is. Dead reckoning (deduced reckoning) is the process of calculating
current position by using previously determined reference position and advancing
that position based upon known or estimated speeds over elapsed time and course.
Although it provides good information on position, this method is prone to errors
due to factors like inaccuracy in speed or direction estimation. Errors are also
cumulative: each new estimate carries its own error and builds on a previous
position that was itself in error. Some of the sensors used are accelerometers and gyroscopes for acceleration/
velocity integration for dead reckoning, accelerometers for bump events,
pressure for elevation, and so on.
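A minimal Python sketch of dead reckoning (the speeds, headings, and time steps are made-up
values) makes the cumulative-error behavior easy to see:

import math

def dead_reckon(start, legs):
    # start: (x, y) position in meters; legs: list of (speed m/s, heading rad,
    # duration s). Each leg advances the previous estimate, so any per-leg
    # error is carried forward into all later estimates.
    x, y = start
    for speed, heading, dt in legs:
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
    return x, y

legs = [(1.5, 0.0, 10), (1.5, math.pi / 2, 10)]  # 15 m east, then 15 m north
print(dead_reckon((0.0, 0.0), legs))             # -> approximately (15, 15)

# The same walk with a constant 2-degree heading bias drifts noticeably:
bias = math.radians(2)
print(dead_reckon((0.0, 0.0), [(v, h + bias, dt) for v, h, dt in legs]))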
Chapter 4
Sensors
TERMINOLOGY OVERVIEW
Sensors, transducers, and actuators form the basis of a sensor ecosystem. This
section covers their basic definitions.
A sensor is a device that converts physical activity or changes into an electrical
signal. It is the interface between the physical or real world and electrical system
and components of the computing device. In the simplest form a sensor
responds to some kind of physical change or stimulus and outputs some form of
electrical signal or data. Sensors are required to produce data that the computing
system can process. For example, opening a washing machine door stops the wash
cycle, and opening a house door activates the house alarm. Without the sensing of
these physical activities, there would be no change in the wash cycle and no
triggering of the house alarm.
A transducer is the device that takes one form of input (energy or signal) and
changes into another form, as shown in Fig. 3.1. A transducer can be part of our
earlier defined sensors. Many times the terms sensor and transducer are used
interchangeably, but we can differentiate them by saying that sensors measure the
change in physical environment and produce electrical signals using a transducer,
where the transducer takes the measured change in the physical environment and
transforms it into a different form of energy (such as an electrical signal) as
shown in Fig. 3.2.
FIGURE 3.1
Basic concept of transducers.
FIGURE 3.2 Example of sensor with transducer.
A combination transducer can both detect one energy form and create an energy
output. For example, an antenna can receive (detect) radio signals and also
transmit (create) radio signals.
The performance of a transducer can be measured in terms of its accuracy,
sensitivity, resolution, and range.
An actuator is a transducer that takes one form of energy as input and produces
some form of motion, movement, or action. Thus it converts some form of
energy into kinetic energy. For example, an electrical motor in an elevator
converts electrical energy into the vertical movement of going from one floor to
another floor of the building. The following are the main categories of actuators:
1. Pneumatic: These actuators convert energy from compressed air (at high
pressure) to either linear or rotary motion. Examples include valve controls of
liquid or gas pipes.
2. Electric: These actuators convert electrical energy to mechanical energy. An
example would be an electric water pump pumping water out of a well.
3. Mechanical: These actuators convert mechanical energy into some form of
motion. An example would be a simple pulley used to pull weights.
The performance of actuators can be measured in terms of force, speed, and
durability.
SENSOR ECOSYSTEM OVERVIEW
The sensor ecosystem is complex with many significant components, players, and
segments of enabling technologies (such as sensor types and wireless protocols), manufacturers,
developers, markets, and consumers. One of the components of this ecosystem
is the set of enabling technologies. Let us look at some of the sensor types
such as location-based sensors, proximity sensors, touch sensors, and biosensors.
LOCATION-BASED SENSORS
Location sensors can help enable use cases such as ones mentioned in Table 3.1.
Table 3.1 Location Sensor Use Cases
Sensor-to-Signal Interface
• Action of environment on a sensor causes it to generate an electrical signal directly
• voltage source (V), current (I), or charge (Q) source
• Action of environment on sensor changes an electrical parameter that we can measure
• resistance changes: V = I * R (R = resistance)
• capacitance changes: C = ε * A / d (A = area, d = distance, ε = permittivity)
• inductance changes: V ~ dI/dt, I ~ ∫V dt
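As an illustration of the "resistance changes" case, a resistive sensor (e.g., a thermistor)
is commonly read through a voltage divider and an ADC; the supply voltage, ADC resolution, and
fixed resistor value in this Python sketch are assumptions for the example, not from the source.

V_SUPPLY = 3.3      # volts
ADC_MAX = 4095      # 12-bit ADC full-scale reading
R_FIXED = 10_000.0  # ohms, fixed leg of the voltage divider

def sensor_resistance(adc_reading):
    # Divider: V = V_SUPPLY * R_s / (R_s + R_FIXED); solve for R_s.
    v = V_SUPPLY * adc_reading / ADC_MAX
    return R_FIXED * v / (V_SUPPLY - v)

print(sensor_resistance(2048))  # ~10 kOhm at mid-scale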
Sensor Types
• Pressure
• Humidity
• Light
• Microphone (sound)
• Motion detector
• Chemical detector
• Image Sensor
• Flow and level sensor
• …
Sensor Types: HW & SW
• Hardware-based sensors
• Physical components built into a device
• They derive their data by directly measuring specific environmental properties
• Software-based sensors
• Not physical devices, although they mimic hardware-based sensors
• They derive their data from one or more hardware-based sensors
Sensor Types: Function Type
• Motion sensors
• Measure acceleration forces and rotational forces along three axes, e.g., accelerometer,
gyroscope, etc.
• Position sensors
• Measure the physical position of a device, e.g., GPS, proximity sensor, etc.
• Environmental sensors
• Measure various environmental parameters, e.g., light sensor, thermometer, etc.
Sensor List
Smartphone Sensing
• Light
• Proximity
• Cameras (multiple)
• Microphones (multiple)
• Touch
• Position
• GPS, Wi-Fi, cell, NFC, Bluetooth
• Accelerometer
• Gyroscope
• Magnetometer
• Pressure
• Temperature
• Humidity
• Fingerprint sensor
Sensor: GPS (Recap)
• Needs to connect to 3 satellites for 2D positioning and 4 satellites for 3D positioning
• More visible satellites increase precision
• Based on concept of trilateration
Chapter 5
WAP gateway
• The WAP gateway is responsible for converting a WAP
request/response to an HTTP request/response.
This conversion is very compute-intensive. To
handle this computational workload, the WAP
gateway needs to be a powerful computer.
• WAP is often described as a network-centric
protocol—most of the intelligence and
computations associated with the WAP protocols
are embedded in the network rather than the phone
WAP protocol stack
It is designed to be compatible with the Internet.
• Pages in WAP are converted to the HTTP and TCP
protocols at the gateway
• WAP 2.0 provides support for the protocols that are
counterparts of IP, TCP, and HTTP
• WAP 2.0 is also flexible and bearer independent,
meaning that WAP services can run over any
wireless data bearer technology, such as SMS,
GSM, GPRS, 3G, etc.
WAP Architecture
Wireless Transport Layer Security (WTLS)
• WTLS is the security layer that is used to transfer
data securely between a mobile device and a server.
• It provides support for data security and privacy,
authentication, as well as protection against
denial-of-service (DoS) attacks.
WAP Architecture
Wireless Datagram Protocol (WDP)
• WDP is the bottom-most protocol in the WAP
protocol suite. It functions as an adaptation layer
in a wireless communication environment that
makes every data network look like UDP to the
upper layers by providing services for transport of
data in the unreliable wireless environment. WDP
invokes services of one or more data bearers such
as SMS, GPRS, CDMA, UMTS, etc.
WAP Architecture
•A bearer is a low-level transport mechanism for network messages.
• Considering the diversity of transport technologies, WAP is
designed to operate with bearers ranging from SMS (Short Message
Service) to GPRS (General Packet Radio Service), UMTS, and IP.
•WAP supports circuit-switched bearer services such as dial-up
networking using IP and Point-to-Point Protocol (PPP). However,
packet-switched bearer services are much better suited than
circuit-switched bearer services for mobile devices as they can
provide more reliable services in the unreliable wireless
connection environment.
Operating Systems for Mobile Computing
Operating System Responsibilities in Mobile Devices
• Managing Resources: The resources that are
managed by the operating system include
processor, memory, files, and various types of
attached devices such as camera, speaker,
keyboard, and screen.
• Providing Different Interfaces: A mobile OS has
to manage many interfaces at the same time,
mainly the user interface, network interfaces,
and other devices.
Mobile O/S— Basic Concepts
• OS is viewed as providing a set of services to the application programs.
• OS is usually structured into a kernel layer and a shell layer.
• The shell essentially provides facilities for user interaction with the kernel.
• The kernel executes in the supervisor mode and can run privileged
instructions that could not be run in the user mode.
• The shell programs are usually not memory resident.
• The kernel of the operating system is responsible for interrupt servicing
and management of processes, memory, and files.
• Two popular OS kernel architectures are used:
• Monolithic kernel (e.g., Windows, Unix)
• Microkernel
Monolithic Kernel
• During booting, the kernel is loaded and
continues to remain in the main memory of the
device.
• This implies that in a virtual memory system,
paging does not apply to the kernel code and
kernel data.
• So, the kernel is called the memory resident
part of an operating system.
Monolithic Kernel Disadvantages
The main problem with the monolithic kernel
design is that it makes the kernel massive,
non-modular, and hard to tailor, maintain,
extend, and configure.
• A bug in the kernel code can crash the system,
thus crashing the debugger too.
Monolithic kernel architecture
Microkernel
• Considering the disadvantages of the monolithic kernel design,
the microkernel design approach has been proposed.
• The microkernel design approach tries to minimize the size of
the kernel code. Only the basic hardware-dependent
functionalities and a few critical functionalities are implemented
in the kernel mode and all other functionalities are implemented
in the user mode.
• Most of the operating system services run as user level
processes. The main advantage of this approach is that it
becomes easier to port, extend, and maintain the operating
system code. The kernel code is very difficult to debug
compared to application programs.
Microkernel Architecture
Mobile Phones OS
• Windows CE, Pocket PC, Windows Mobile,
Windows Phone 7
• Palm OS
• Symbian OS
• iOS
• Android
Android OS
• In 2005, Google acquired a small startup company
called Android, which was developing an operating
system for mobile devices based on Linux.
• Google set up the Open Handset Alliance in 2007. It
is a group of 82 technology and mobile
communication companies that are collaborating to
develop the Android operating system as an open
source software for mobile devices.
• Android allows application developers to write code
in the Java language.
Android OS
• It facilitates the development of applications
with the help of a set of core Java libraries
developed by Google
Android Software Stack
• Application Layer: The Android operating system comes with
a set of basic applications such as
• web browser, email client, SMS program, maps, calendar,
and contacts repository management programs. All
these applications are written using the Java
programming language J2ME.
• Android applications do not have control over their own
priorities.
• This design is intentional: it helps the system
aggressively manage resources to ensure device
responsiveness, even killing an application when needed.
Android Software Stack
• Application framework: An application framework is used
to implement a standard structure for different
applications.
• The application framework essentially provides a set of
services that an application programmer can make use
of.
• The services include managers and content providers.
Content providers enable applications to access data
from other applications. A notification manager allows
an application to display custom alerts on the status bar.
References
1. Waltenegus Dargie and Christian Poellabauer, Fundamentals of Wireless Sensor Networks:
Theory and Practice, Wiley, 1st edition, 2010.
2. Manish J. Gajjar, Mobile Sensors and Context-Aware Computing, Morgan Kaufmann (an imprint
of Elsevier), 1st edition, 2017.