Smart Computing Notes - UNIT 3 to 6
NOTES
UNIT 3
Ubiquitous System:
A ubiquitous system links everyday objects to information processing by embedding processors in them and connecting them to electronic devices. These objects are connected and available at all times, integrating computers into day-to-day life. This increases machine-to-machine (M2M) connectivity and reduces human intervention: computing becomes everywhere, pervasive and ambient-intelligent.
Pervasive computing, also called ubiquitous computing, is the growing trend of embedding computational
capability (generally in the form of microprocessors) into everyday objects to make them effectively
communicate and perform useful tasks in a way that minimizes the end user's need to interact with computers.
The goal of pervasive computing is to make devices "smart," thus creating a sensor network capable of
collecting, processing and sending data, and, ultimately, communicating as a means to adapt to the data's
context and activity; in essence, a network that can understand its surroundings and improve the human
experience and quality of life.
Pervasive computing devices are network-connected and constantly available.
Features of Ubiquitous Computing:
M2M connectivity
Memory and storage requirements are reduced by the use of inexpensive processors.
Real-time attributes are captured.
Available 24/7 with the highest accuracy.
Many-to-many relationships.
Dependent on the internet, wireless technology and electronics.
Highly reliable.
Ubiquitous computing can make many daily activities faster and more cost-efficient. It removes much of the complexity of computing and increases efficiency when computing is used for different daily activities. It gives users the ability to access services and resources at all times, irrespective of their location, and makes our lives simpler through tools that allow us to manage information easily, any time and from anywhere.
Five main properties for Ubicomp Systems were proposed by Weiser (1991)
1. Computers need to be networked, distributed and transparently accessible
– In 1991 there was little wireless computing and the Internet was far less pervasive
2. Computer Interaction with Humans needs to be more hidden
– Because much HCI is overly intrusive
3. Computers need to be aware of environment context
– In order to optimise their operation in their physical & human environment.
4. Computers can operate autonomously, without human intervention, be self-governed
5. Computers can handle a multiplicity of dynamic actions and interactions, governed by intelligent
decision-making and intelligent organisational interaction. This entails some form of artificial
intelligence.
(Figure: the five UbiCom system properties – distributed, implicit HCI (iHCI), context-aware, autonomous, intelligent.)
It is possible for UbiCom systems to be context-aware, to be autonomous and for systems to adapt their
behavior in dynamic environments in significant ways, without using any artificial intelligence in the system.
Systems could simply use a directory service and simple event condition action rules to identify available
resources and to select from them. There are several ways to characterize intelligent systems. Intelligence can
enable systems to act more proactively and dynamically in order to support the following behaviors in
UbiCom systems
1. Modeling of its physical environment
2. Modeling and mimicking its human environment
3. Handling incompleteness
4. Handling non-deterministic behavior
5. Semantic and knowledge-based behavior
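The directory-plus-rules approach mentioned above (adapting without any AI) can be sketched as a tiny event-condition-action (ECA) engine. This is an illustrative sketch only; the class and rule names are invented, not from any real framework.

```python
# Minimal event-condition-action (ECA) rule engine: a non-AI way for a
# UbiCom system to adapt its behavior to events, as described above.

class Rule:
    def __init__(self, condition, action):
        self.condition = condition  # predicate over an event dict
        self.action = action        # callable run when the condition holds

class EcaEngine:
    def __init__(self):
        self.rules = []

    def add_rule(self, condition, action):
        self.rules.append(Rule(condition, action))

    def handle(self, event):
        """Fire the action of every rule whose condition matches the event."""
        return [rule.action(event) for rule in self.rules
                if rule.condition(event)]

engine = EcaEngine()
# Rule: when a room's light level drops below a threshold, turn lights on.
engine.add_rule(
    condition=lambda e: e["type"] == "light" and e["lux"] < 100,
    action=lambda e: f"lights_on:{e['room']}",
)
print(engine.handle({"type": "light", "lux": 40, "room": "lab"}))
# -> ['lights_on:lab']
```

A system built this way selects available resources with simple lookups and rules; intelligence is only needed for the richer behaviors listed above.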
(Figure: an autonomous, intelligent UbiCom system combining distributed ICT, context-awareness and implicit HCI, interacting with virtual environments via CCI and ICTI.)
Intelligent UbiCom systems (IS) can:
• Act more proactively, dynamically & humanely through:
• Model how their environment changes when deciding how it acts.
• Goal-based / planning
• Reasoning for re-planning
• Handle uncertainty.
• semantic based interaction etc
The computer poses binary questions and limits the types and frequency of input it will accept, so people have to develop skills that exist solely for using computers. With implicit HCI, the computer allows the human and the human’s environment to shape its actions instead of the other way around. The computer is tasked with adapting a hierarchy of data and actions solely for the human or for the human’s environment.
The action of a user is always performed in a certain environment. Implicit interaction is based on the
assumption that the computer has a certain understanding of our behavior in the given situation. This
knowledge is then considered as an additional input to the computer while it performs a task. With current computer technology, interaction is explicit – the user tells the computer at a certain level of abstraction (e.g. by command line, direct manipulation using a GUI, gesture, or speech input) what she expects the computer to do. This is considered explicit interaction.
(Figure: interaction modes – H2C/eHCI (explicit), C2H/iHCI (implicit, with minimal explicit input), and C2C.)
The explicit interaction encouraged users to be active, exploratory, and creative. The implicit interaction let
users embrace and exploit dynamic qualities of the surroundings, contributing to making the systems fun,
exciting, magical, 'live', and real.
Implicit Versus Explicit Human–Computer Interaction: The original UbiCom vision focused on making
computation and digital information access more seamless and less obtrusive. To achieve this requires in part
that systems do not need users to explicitly specify each detail of an interaction to complete a task. For
example, using many electronic devices for the first time requires users to explicitly configure some
proprietary controls of a timer interface. It should be implicit that if devices use absolute times for scheduling
actions, then the first time the device is used, the time should be set. This type of implied computer interaction
is referred to as implicit human computer interaction (iHCI). Schmidt defines iHCI as ‘an action, performed
by the user that is not primarily aimed to interact with a computerized system but which such a system
understands as input’. Reducing the degree of explicit interaction with computers requires striking a careful
balance between several factors. It requires users to become comfortable with giving up increasing control to
automated systems that further intrude into their lives, perhaps without the user being aware of it. It requires
systems to be able to reliably and accurately detect the user and usage context and to be able to adapt their
operation accordingly.
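The first-use example above (a device that should set its own clock rather than demand explicit configuration) can be sketched as follows. The class and method names are illustrative, not from any real device API.

```python
# Sketch of implicit vs. explicit interaction (iHCI), following Schmidt's
# definition above: an action not primarily aimed at the system is still
# understood by the system as input.

import datetime

class SmartRecorder:
    def __init__(self):
        self.clock = None

    def set_clock(self, when):
        """Explicit interaction: the user configures the clock by hand."""
        self.clock = when

    def on_first_power_up(self, network_time):
        """Implicit interaction: powering the device up for the first time
        is understood as 'a valid clock is needed now', so the device sets
        itself from a network time source without user involvement."""
        if self.clock is None:
            self.clock = network_time

recorder = SmartRecorder()
recorder.on_first_power_up(datetime.datetime(2024, 1, 1, 12, 0))
print(recorder.clock)  # clock set without any explicit user step
```

Note the balance discussed above: the device must reliably detect the usage context (first power-up, trusted time source) before acting on the user's behalf.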
Smart DEI also refers to hybrid models that combine the designs of smart device, smart environments and
smart interaction
(Figure: Smart DEI model – smart devices (VM, MTOS, ASOS, RTOS, mobile), smart environments and smart interaction, linked to knowledge-based intelligent organisation systems.)
Basic Interaction: Basic interaction typically involves two dependent parties: a sender and a receiver. The
sender knows the address of the receiver in advance; the structure and meaning of the messages exchanged are
agreed in advance, the control of flow, i.e., the sequencing of the individual messages, is known in advance.
However, the content, the instances of the message that adhere to the accepted structure and meaning, can
vary. There are two main types of basic interaction, synchronous versus asynchronous.
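The synchronous/asynchronous distinction above can be sketched in a few lines. In the synchronous case the sender blocks until the receiver replies; in the asynchronous case it posts the message and continues. The functions are illustrative stand-ins for real networked parties.

```python
# Sketch of the two basic interaction styles: synchronous vs. asynchronous.
# Structure and meaning of messages are agreed in advance; only content varies.

import queue

def receiver(message):
    """The receiving party, with an agreed message format."""
    return f"ack:{message}"

# Synchronous: the sender waits for the reply before doing anything else.
reply = receiver("hello")

# Asynchronous: the sender posts the message into a mailbox and moves on;
# the receiver drains the mailbox whenever it is ready.
mailbox = queue.Queue()
mailbox.put("hello")
# ... the sender continues with other work here ...
pending = mailbox.get()        # the receiver picks the message up later
late_reply = receiver(pending)
print(reply, late_reply)
```

Both styles deliver the same content; they differ only in whether the sender's control flow is coupled to the receiver's.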
In the smart interaction design model, system components dynamically organize and interact to achieve shared
goals. This organization may occur internally without any external influence, a self-organizing system, or this
may be driven in part by external events. Components interact to achieve goals jointly because they are
deliberately not designed to execute and complete sets of tasks to achieve goals all by themselves they are not
monolithic system components. There are several benefits to designs based upon sets of interacting
components.
Asynchronous and synchronous interaction is considered part of the distributed system communication
functions. In contrast, interactions that are coordinated, conventions based, semantics and linguistic based and
whose interactions are driven by dynamic organizations are considered to be smart interaction.
Additional type of design is needed to knit together many individual system components and interactions.
Smart interaction promotes unified & continuous interaction model between Ubicomp applications & their
Ubicomp infrastructure, physical world & human environments. Internal self-organising system vs. externally
driven system. Components can interact cooperatively versus competitively. Several benefits to designs based
upon set of interacting components: A range of levels of interaction between Ubicomp System components.
Basic Interaction: Typically involves two interlinked parties, a sender and a receiver; the sender knows the receiver's address, the message structure and the flow control in advance. The two main types of basic interaction are synchronous versus asynchronous.
Smart Interaction: Smart Interaction extends basic interactions as follows.
• Coordinated interactions
• Cooperative versus competitive interaction
• Policy and Convention based Interaction
• Dynamic Organisational Interaction
• Semantic and Linguistic Interaction
Smart Surfaces, Skin, Paint, Matter and Dust: MEMS can be permanently attached to some fixed substrate
forming smart surfaces or be more free standing, forming smart structures that can reorganise. An example of
a smart surface is a paint that is able to sense vibrations because it is loaded with a fine powder. MEMS could
be mixed with a range of bulk materials, such as paints, gels, and spread on surfaces or embedded into
surfaces or scattered into and carried as part of other media such as air and water. For example, coating
bridges and buildings with smart paint could sense and report traffic, wind loads and monitor structural
integrity. A smart paint coating on a wall could sense vibrations, monitor the premises for intruders, and
cancel noise.
UNIT 4
Smart Devices: Smart devices are characterized by the ability to execute multiple, possibly concurrent,
applications, supporting different degrees of mobility and customization and by supporting intermittent remote
service access and operating according to local resource constraints. A smart device, as the name suggests, is
an electronic gadget that is able to connect, share and interact with its user and other smart devices. Although
usually small in size, smart devices typically carry a few gigabytes of memory and storage. Smart devices are devices with a CPU and OS, connected to other devices or networks via different wireless protocols, e.g. smart phones, tablets, smart watches, smart glasses and other personal electronics.
• System architectures focus on the idea of reducing complexity through a separation of concerns, using modularisation and transparency.
– Two common criteria for modules: high cohesion and loose coupling.
• Meyer (1998) uses five criteria for modularisation:
– Decomposability
– Composability
– Understandability
– Continuity
– Protection
Service Provision Life-Cycle in ubiquitous computing is divided in to four phases namely Service creation,
Service operation, Service maintenance and Service dissolution. The provision of application services for
smart devices entails the management of distributed services throughout the whole of their life cycle and not
just in specific phases such as service discovery. There are two separate aspects to this, first, defining a
generic life cycle model for service provision and, second, to manage this life cycle. In a simple service
provision lifecycle model, only two of the five service model components are active, the processing services
or service provision and service access or clients, the other three components, communication, stored
information and information sources, are treated as passive components.
In the service creation phase, service processes register themselves in service directories. Service requesters in
access nodes search for services (information processes and repositories). Services get selected, configured,
and multiple services need to be composed, e.g., multiple services need to be composed to capture, annotate,
transmit, edit, store, configure and print images. In the operational or service execution phase, services are
invoked and multiple interlinked services may need to be coordinated. In the service maintenance phase,
service processes, access configurations and service compositions can be updated. In the service dissolution
phase, services may be put off line or terminated temporarily by the processes themselves or by requesters.
Services may also be terminated permanently and removed.
The design for the service lifecycle depends on application requirements such as the type of mobility needed.
For example, a static device such as a set top audio video receiver can support both dynamic service initiation
and execution. This enables the device to be preconfigured using default factory settings and then shipped to
be used in different regions in which it must detect and tune itself to the variable regional RF broadcast signal
sources. This can also enable a static smart device to switch to an alternative service provider when a fault
occurs, providing the user has permission to access it, possibly via another service contract.
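The four lifecycle phases above (creation, operation, maintenance, dissolution) can be sketched with a toy in-memory service directory. All class and service names are illustrative, not a real service framework.

```python
# Sketch of the service-provision lifecycle: services register in a
# directory (creation), are looked up and invoked (operation), can be
# updated in place (maintenance), and are removed (dissolution).

class ServiceDirectory:
    def __init__(self):
        self.services = {}

    def register(self, name, handler):
        """Creation: a service process registers itself."""
        self.services[name] = handler

    def lookup(self, name):
        """Creation: a requester discovers and selects a service."""
        return self.services.get(name)

    def update(self, name, handler):
        """Maintenance: the service's configuration is updated in place."""
        if name in self.services:
            self.services[name] = handler

    def deregister(self, name):
        """Dissolution: the service is taken off line and removed."""
        self.services.pop(name, None)

directory = ServiceDirectory()
directory.register("print", lambda doc: f"printed:{doc}")
service = directory.lookup("print")
print(service("report.pdf"))        # operation: the service is invoked
directory.deregister("print")
print(directory.lookup("print"))    # None: the service has been dissolved
```

Composing several such services (capture, annotate, edit, print) is then a matter of looking up each one and coordinating the invocations.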
Virtualization
A virtual machine is a computer file, typically called an image, which behaves like an actual computer. In
other words, creating a computer within a computer. It runs in a window, much like any other program, giving
the end user the same experience on a virtual machine as they would have on the host operating system itself.
A Virtual Machine Monitor (VMM) is a software program that enables the creation, management and
governance of virtual machines (VM) and manages the operation of a virtualized environment on top of a
physical host machine. VMM is also known as Virtual Machine Manager and Hypervisor.
A Virtual Machine (VM) supports large scale multi user concurrent server execution and it enables cross
platform interoperability across a diverse set of hardware resources at multiple levels of abstractions. To
understand the concept of a VM, the concept of a computer or machine needs first to be considered from two
different viewpoints: from the process and from the operating system viewpoint. From an application
processing viewpoint, a computer consists of the use of processes that are held in memory in bounded address
spaces. Processes consist of a list of instructions defined in a high level interface, the Application
Programmer’s Interface (API), that are converted into binary digital instructions at a lower level Application
Binary Interface (ABI), to be executed. The underlying hardware such as the CPU and I/O devices are hidden by the virtualizing software or VM Monitor (VMM), the API and the ABI. The operating system viewpoint
considers the details of how multiple processes can be executed simultaneously on the hardware and when
there are more processes than hardware resources available as opposed to dedicated task (embedded) systems.
Several different standards exist for service-oriented computing (SOC): (XML-based) Web Services, Computer Grids (OGSI), OASIS SOA RM, the Open Group SOA Working Group and Semantic Web Services.
The notion of a service is characterised by: descriptions, outcomes, offers, competency, execution, composition, and constraints or policies.
Mobile Code:
Mobile code is any program, application, or content capable of movement while embedded in an email, document or website. Mobile code uses the network or storage media, such as a Universal Serial Bus (USB) flash drive, to move from one computer system to another and execute locally to do the work assigned to it autonomously. It is software transferred between systems and executed on a local system without explicit installation by the recipient. It can be executed on one or several hosts and can transfer from host to host and execute easily. It includes scripts such as JavaScript and VBScript, Java applets, Office macros, DLLs, ActiveX controls, etc. A mobile code is associated with at least two parties: its producer and its consumer – the consumer being the host that runs the code. Mobile code systems range from simple applets to intelligent software agents.
2. Code Signing: Code signing is the process by which a code is digitally signed by the code producer in
order to assure strong authentication and integrity of the code to the code consumer. Upon receipt of an applet
with a valid signature, the code consumer’s Java virtual machine executes the applet like a trusted piece of
code. There is more to securing a host from a malicious mobile code than just making sure that this program
has been correctly signed by someone on the Internet.
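The sign-then-verify flow above can be sketched in miniature. Real code signing uses public-key signatures and certificates; here a shared-secret HMAC stands in for the signature so the example stays self-contained, and all names are illustrative.

```python
# Toy illustration of code signing: the producer attaches an authentication
# tag to the mobile code; the consumer verifies it before execution.
# (A stand-in for real public-key code signing, for illustration only.)

import hashlib
import hmac

SECRET = b"producer-and-consumer-shared-key"  # illustrative key material

def sign_code(code: bytes) -> str:
    """Producer: compute an integrity/authentication tag over the code."""
    return hmac.new(SECRET, code, hashlib.sha256).hexdigest()

def verify_code(code: bytes, signature: str) -> bool:
    """Consumer: recompute the tag and compare before running the code."""
    expected = hmac.new(SECRET, code, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

applet = b"print('hello from a mobile applet')"
tag = sign_code(applet)
assert verify_code(applet, tag)             # untampered code: accepted
assert not verify_code(applet + b"!", tag)  # modified in transit: rejected
```

As the text notes, a valid signature only proves who signed the code and that it was not altered; it says nothing about whether the code is actually benign.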
3. Firewalling: Selectively choose whether or not to run a program at the very point where it enters the client domain. For example, if an organization is running a firewall or web proxy, it can identify Java applets, examine them, and decide whether or not to serve them to the client.
4. Proof-carrying code: Enables a host to determine that program code provided by another system is safe to install and execute. The basic idea of PCC is that the code producer is required to provide an encoding of a proof that his/her code adheres to the security policy specified by the code consumer. The proof is encoded in a form that can be transmitted digitally, so the code consumer can quickly validate the code using a simple, automatic, and reliable proof-checking process.
Protecting Mobile Code from the Execution Environment: As a mobile agent moves around the network, its code as well as its data is vulnerable to various security threats. There are two known types of attack: passive attacks and active attacks.
1. Active and Passive attacks
a. Passive Attacks: An adversary attempts to extract some information from messages exchanged between two agents without modifying the contents of the messages. Cryptographic mechanisms, such as the RSA and ElGamal cryptosystems, are usually used to protect against this kind of attack.
b. Active Attacks: Attacker in this case is able to modify the data or the code of a mobile agent to benefit
from them or impersonate a legitimate principal in the system and intercept messages intended for that
principal. Data integrity mechanisms can be used to protect against tampering. Authentication mechanisms
can be used to protect against impersonation.
Smart Card –
Smart card technology can provide a means of secure communications between the card/device and readers.
Similar in concept to security protocols used in many networks, this feature allows smart cards and devices to
send and receive data in a secure and private manner. Smart cards are available in two forms: memory cards
and microprocessor cards. Memory cards are a relatively inexpensive way to improve PC and network
security because the user must present a card, a username, and a password to gain access. Microprocessor cards are capable of performing complex operations and computations.
Contact Smart Card: A contact smart card must be inserted into a smart card reader that makes a direct physical connection to a conductive contact pad on the surface of the smart card; the surface is generally gold plated. Commands, data, and card status pass over these physical contact points. These are the most common type of smart card. Electrical contacts located on the outside of the card connect to a card reader when the card is inserted. This connector is bonded to the encapsulated chip in the card.
Increased levels of processing power, flexibility and memory will add cost. Single function cards are usually
the most cost-effective solution. Choose the right type of smart card for your application by determining your
required level of security and evaluating cost versus functionality in relation to the cost of the other hardware
elements found in a typical workflow.
Contactless Smart Card: A contactless smart card, as the name suggests, only needs close proximity to the card reader. Both the card reader and the card have antennae, and the two devices communicate over this contactless link using RF (radio frequency). Some contactless cards also generate power for the embedded chip from the electromagnetic field produced by the reader. The range of the signals is generally one half inch to a maximum of 3 inches for non-battery-powered smart cards, which is ideal for applications such as payment and building entry that require a very fast card interface. These cards function with very limited memory and communicate at 125 kHz. Another type of limited card is the Gen 2 UHF card, which operates at 860 MHz to 960 MHz.
Smart Card Device Applications: Some of the most common smart card applications are:
Credit cards
Satellite TV
Computer security systems
Electronic cash
Wireless communication
Government identification
Banking
Grid Computing - Grid computing refers to distributed systems that enable the large-scale coordinated use and sharing of geographically distributed resources, based on persistent, standards-based service infrastructures, often with a high-performance orientation. (Requests to use and share resources such as computing capacity and storage are referred to as jobs.) Grid computing specifies standards for a high-performance computing infrastructure rather than support for fault tolerance and highly dynamic ad hoc interaction, which is more the focus of P2P systems. Three main types of grid system occur in practice: (1) computational grids that have higher aggregate computational capacity available for single applications than the capacity of any constituent machine in the system; (2) data grids that provide an infrastructure for synthesizing new information from data repositories such as digital libraries or data warehouses that are distributed in a wide area network; and (3) service grids that provide services that are not provided by any single machine.
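A computational grid's defining property (more aggregate capacity for one application than any single machine) can be sketched in miniature with a worker pool. Here threads stand in for grid nodes and the jobs are trivial; this is illustrative only.

```python
# Sketch of a computational grid in miniature: a single application submits
# many jobs, and whichever "node" (worker) is free executes each one.

from concurrent.futures import ThreadPoolExecutor

def job(n):
    """A unit of work submitted to the grid (a 'job' in grid terminology)."""
    return n * n

# Four workers stand in for four grid nodes sharing the workload.
with ThreadPoolExecutor(max_workers=4) as grid:
    results = list(grid.map(job, range(8)))

print(results)  # all eight jobs complete, spread across the pool
```

A data grid would instead pool storage and a service grid would pool service endpoints, but the job-submission pattern is the same.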
Client-Server Model:
Asymmetric distributed computing model with respect to where resources reside and the direction of the
interaction. Client-server interaction is also asymmetric:
(Figure: partitioning of application components (A, I, P) between clients (C) and servers – monolithic, thin-client server and fat-client server configurations.)
System configuration (partitioning and distribution) depends upon: network links; local resources, remote
service availability; type of application, service maintenance model.
Access devices (clients) have different degrees of resources. Resource-poor devices (thin-client server model) rely on external servers, with the network supporting remote service access on demand. Processing is needed to adapt content to different types of terminal. The thin-client server model is often considered easier to maintain, but thin clients offer a very limited application platform.
Mobile OS
A mobile operating system (OS) is software that allows Smartphones, tablets and other devices to run
applications and programs. A mobile OS provides an interface between the device's hardware components and
its software functions. It typically starts when a device powers on, presenting a screen with icons or tiles that
show information and provide application access. Mobile operating systems also manage cellular and wireless
network connectivity and phone access.
Scheduling can be static (all scheduling decisions are determined before execution) or dynamic (run-time decisions are used).
Memory Management: The kernel should be small. Good resource/memory management is needed; system resources should be released as soon as they are no longer needed.
Mobile OS Design: In the past, phone devices retained information in memory only as long as the battery held a charge. Now they use permanent storage in the form of Flash ROM. Mobile devices boot from ROM and load data more slowly; on the other hand, ROM memory uses less power.
UNIT 5
Wireless sensor networks (WSNs) are usually composed of a large number of sensors, which are densely and
randomly deployed over inaccessible terrains and are utilized in applications such as environment surveillance
and security monitoring. A MANET is an autonomous system of mobile nodes. The system may operate in
isolation, or may have gateways to and interface with a fixed network. A Wireless Sensor Network (WSN)
consists of base stations and a number of wireless sensors (nodes).
Sensor
o A transducer
o converts physical phenomenon e.g. heat, light, motion, vibration, and sound into electrical signals
Sensor node
o basic unit in sensor network
o contains on-board sensors, processor, memory, transceiver, and power supply
Sensor network
o consists of a large number of sensor nodes
o nodes deployed either inside or very close to the sensed phenomenon
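The sensor-node anatomy listed above (on-board sensor, processor, memory, transceiver, power supply) can be sketched as a small class. The names and energy numbers are illustrative, not from any real sensor platform.

```python
# Sketch of a sensor node: sample readings into local memory, then pay a
# larger energy cost to transmit them over the radio (the transceiver is
# typically the most power-hungry component).

class SensorNode:
    def __init__(self, node_id, battery_mj=1000):
        self.node_id = node_id
        self.battery_mj = battery_mj  # power supply (toy energy units)
        self.memory = []              # on-board memory for readings

    def sense(self, phenomenon_value):
        """Transducer + processor: sample and store one reading."""
        self.battery_mj -= 1          # sensing costs a little energy
        self.memory.append(phenomenon_value)

    def transmit(self):
        """Transceiver: send buffered readings; radio drains more energy."""
        self.battery_mj -= 10
        readings, self.memory = self.memory, []
        return (self.node_id, readings)

node = SensorNode("n1")
node.sense(21.5)
node.sense(21.7)
print(node.transmit())  # -> ('n1', [21.5, 21.7])
```

Buffering several readings before one transmission, as here, is one of the energy-conserving strategies the following sections describe.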
Overview of Sensor Net Components and Processes: The main components of a typical sensor network
system, are sensors connected in a network that is serviced by a sensor access node. A slightly different but
compatible view of a sensor network is to view sensors as being of three types of node: (1) common nodes
mainly responsible for collecting sensor data; (2) sink nodes that are responsible for receiving, storing,
processing, aggregating data from common nodes; and (3) gateway nodes that connect sink nodes to external
entities. Common nodes are equivalent to sensors, and access nodes combine the functionality of sink and
gateway nodes. Some sensors in the network can act as sink nodes within the network in addition to the access
node.
Sensors have a low power, short range wireless interface that enables them to communicate with other sensors
within their range and with data receivers or readers, also called sensor nodes. Sensors could collaborate so
that only a single sensor source and sensors along a single path forward the data, in order to conserve energy.
Sensor access nodes multiplex data from multiple sensors, often also supporting a local controller with a
microprocessor and signal processor. Sensor nodes may also support local data logging and storage. Sensors
range in size from micro to macro. Sensor nodes range in size from shoe-box size to pencil-case size to match-box size. The sensor access node acts as a 'base station' that routes queries to the appropriate sensor nodes in a sensor network. In the figure, three sensors are in range of an event, two sensors are damaged by the event, and two sensors are in range of the access node. As the input-event sensors are not in range of
the access node, they must route their data through other sensors to get to the access node in order for the
events to be accessible over the Internet. Sensor nets could contain large numbers of sensors and nodes.
Sensor nets can be heterogeneous in terms of the types of sensor nodes and types of sensor; this makes
interoperability a key concern. Managing all of these constraints and creating a system that functions properly
for the application domain while remaining understandable and manageable by human operators and users
who may be casual passersby, is a big challenge.
(Figure: sensor block diagram – transducer, analogue filter and amplifier, ADC and DSP feeding processing and storage; transceiver with modulator/demodulator, transmitter, receiver and antenna; power switch, power management and battery.)
A block diagram of a circuit for a sensor is split into four main functions: sensing, processing, transceiving and power. The signal from the sensor is filtered and amplified, converted into a digital signal by the analogue-to-digital converter (ADC), and some simple digital signal processing (DSP) is performed at the sensor before the signal is modulated for transmission. This particular sensor design also supports input configuration for the DSP. The MEMS design of the sensor is able to decrease the size and power consumption of the sensor by aggregating multiple separate electronic components into a single chip. Sensors often need to operate unattended as long-lived, low-duty-cycle systems.
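The amplify-quantize-process chain above can be sketched numerically. The gain, reference voltage and window size are illustrative parameters, not taken from any particular sensor.

```python
# Sketch of the sensor signal chain: amplify the filtered transducer output,
# quantize it with an n-bit ADC, then run a trivial on-sensor DSP step.

def amplify(samples, gain=2.0):
    """Amplifier: scale the filtered analogue samples."""
    return [s * gain for s in samples]

def adc(samples, vmax=5.0, bits=8):
    """ADC: quantize an analogue voltage (0..vmax) to an n-bit code."""
    levels = 2 ** bits - 1
    return [round(min(max(s, 0.0), vmax) / vmax * levels) for s in samples]

def dsp_average(codes, window=2):
    """Simple on-sensor DSP: smooth the codes with a moving average."""
    return [sum(codes[i:i + window]) // window
            for i in range(len(codes) - window + 1)]

raw = [0.5, 0.6, 0.55]      # filtered transducer output (volts)
codes = adc(amplify(raw))   # digital codes, ready for the modulator
print(codes, dsp_average(codes))
```

Doing the smoothing on the sensor before transmission reduces the amount of data the power-hungry radio has to send.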
Some applications of sensors include: cars, computers, retail, logistics, household tasks, buildings, environment monitoring, and industrial sensing and diagnostics.
Micro-Electro-Mechanical Systems-
MEMS stands for Micro-Electro-Mechanical Systems. In its broadest sense, MEMS refers to a technology
that is made up of microfabrication-based miniaturised mechanical and electro-mechanical elements, such as
devices and structures. On the lower end of the size spectrum, the crucial physical dimensions of MEMS
devices can range from much below one micron to several millimeters. Similarly, there are many different
kinds of MEMS devices, ranging from very basic structures with no moving parts to highly intricate
electromechanical systems with several moving parts managed by integrated microelectronics. The primary
need for MEMS is that at least some of the components, whether or not they are movable, have some kind of
mechanical functioning. In different parts of the world, several terms are used to define MEMS. Most people
refer to them as MEMS in the United States, although other people refer to them as "micromachined devices"
or "microsystems technology" in other areas of the world.
Miniaturised structures, sensors, actuators, and microelectronics are the functional components of MEMS;
however, the microsensors and microactuators are the most remarkable—and possibly most fascinating—
elements. It is fair to classify microsensors and microactuators as "transducers," which are defined as devices
that change the shape of energy. Microsensors usually work by converting a mechanical signal that has been
measured into an electrical signal.
Fabrication: MEMS design differs from that of the equivalent macro devices which comprise mechanical and
discrete electronic component design because these micro components are based upon silicon based Integrated
Circuit (IC), also called chip, design. Analogue devices may also be replaced by IC versions, e.g., whereas a
traditional thermometer is based upon a liquid, such as mercury, expanding along a tube referenced to a
calibrated scale, an electronic thermometer can be built out of a thermocouple and IC amplifier. An optical
micro fabrication approach, photolithography, is then used to fabricate the circuit. This first covers a layer
with a photo resistant chemical. Then the circuit pattern to be fabricated is drawn onto a photo mask. The
photolithography system shines UV light through the photo mask, projecting a shadow onto the layer, which
then reacts with the photo resistant chemical and hardens, allowing the selective removal of parts of the
substrate to be chemical etched away. Engineers thus design a new circuit by designing the pattern of
interconnections among millions of relatively simple and identical components.
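The mask-and-etch steps above can be illustrated on a tiny grid. This is a hypothetical sketch, not a process simulator: in the substrate, 1 means material present and 0 means etched away; in the mask, 1 means transparent (UV passes, resist hardens, material kept) and 0 means opaque (shadowed, resist removed, material etched).

```python
# Minimal sketch of negative-resist photolithography on a 2-D grid
# (illustrative only; values and names are assumptions for this example).

def apply_photolithography(substrate, mask):
    """Etch every substrate cell that the mask leaves shadowed."""
    return [
        [cell if transparent else 0
         for cell, transparent in zip(sub_row, mask_row)]
        for sub_row, mask_row in zip(substrate, mask)
    ]

substrate = [[1, 1, 1],
             [1, 1, 1],
             [1, 1, 1]]
# Mask pattern: keep a vertical line of material in the middle column.
mask = [[0, 1, 0],
        [0, 1, 0],
        [0, 1, 0]]

print(apply_photolithography(substrate, mask))
# -> [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

The point of the sketch is that the mask pattern, not the substrate, carries the design: changing the mask changes the fabricated structure.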
Micro-Actuators: The mechanisms involved in micro-actuation, while conceptually similar to the equivalent macro mechanisms, may function fundamentally differently. They are engineered in a different way, using integrated circuit design and nanotechnology. MEMS actuator applications include:
• Micro mirror array based projectors (micro projectors) can be used to generate large screen display content from smaller devices. This has applications in navigation systems (Heads-Up Displays, or HUDs).
• Inkjet printer heads: MEMS can be used to control ink deposits onto paper.
• Optical switches: optical cross connect switches (OXC) are devices used by telecommunications carriers to
switch high speed optical signals in a fibre optic network.
• Micro fluid pumps: The essential components include a fluid actuator, a fluidic control device, and micro
plumbing, e.g., for use in delivering medicine.
• Miniature RF transceivers: can replace passive low-Q components (where the Q factor indicates the rate of energy dissipation relative to the oscillation frequency) in communication devices, such as vibrating resonators, switches, capacitors and inductors, and put them on a single high-Q MEMS RF transceiver chip.
• Miniature storage devices: can support gigabytes of non-volatile data storage in a single IC chip, with low power consumption and low data latency.
Embedded System
Embedded systems are used mainly online for task enactment in the physical world, in contrast to general
purpose computers which are often used offline for information access and sharing. Thus embedded computer
systems differ from general purpose (multi-tasking operating system, or MTOS) systems in three main ways. First, embedded systems focus more on single task enactment. Second, safety criticality may be important because actions affect the physical world. Third, tasks often need to be scheduled with respect to real time constraints. An embedded system is a component in
a larger system that performs a single dedicated task. This can use a far simpler and cheaper operating system
and hardware because there is only one process. This simplifies memory management and process control and
omits inter-process communication, which typically needs to be supported in an MTOS. An embedded system may or may not be visible as a computer to a user of that system. An embedded system is programmable and contains one or more microprocessors or microcontrollers dedicated to its task.
A real-time system is one in which tasks have deadlines: each task must be completed strictly before its deadline in order to avoid a loss. Because real-time systems process data in defined, predictable time frames, execution of tasks or workloads is practically guaranteed, thus improving the reliability of critical systems for business.
Hard real-time systems have strict time limits, or deadlines. It is essential to meet those deadlines; otherwise, the system is considered to have failed. The response time is in milliseconds, the databases involved are small, and safety is essential.
In a soft real-time system, there is no mandatory requirement to meet the deadline for every task. It is desirable that each process completes within its given timing requirement, but if it does not, the result merely degrades rather than failing. The response time is higher, the databases involved are large, and safety is not essential.
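The hard versus soft distinction above can be sketched as a toy classification. This is illustrative only, with hypothetical names; it is not a real scheduler.

```python
# Toy illustration: a missed hard deadline is a system failure, while a
# missed soft deadline only degrades the result.

def evaluate_task(kind, deadline_ms, finished_ms):
    """Classify a task outcome from its deadline and actual finish time."""
    if finished_ms <= deadline_ms:
        return "ok"
    return "failure" if kind == "hard" else "degraded"

print(evaluate_task("hard", 10, 12))  # -> failure
print(evaluate_task("soft", 10, 12))  # -> degraded
print(evaluate_task("hard", 10, 8))   # -> ok
```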
Real-Time Operating Systems for Embedded Systems: Real time embedded systems applications are a
subset of embedded system applications which perform safety critical tasks with respect to time constraints
because if these constraints are violated the system may become unsafe. Examples include the operation of cars, ships, airplanes, medical instrumentation monitoring and control, multimedia streaming, factory automation, financial transaction processing and video game machines. A Real-Time Operating System (RTOS) can be considered a resource-constrained system in which the primary resources, such as data transfer, are constrained in time. This in turn constrains the number of available CPU cycles, but other resources, such as memory, may also be constrained. An RTOS reacts to external events that interrupt it. RTOS design focuses on scheduling so that processes can meet real-time constraints, and on optimising memory allocation, process context-switching time and interrupt latency.
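One common policy for scheduling tasks against real-time constraints is Earliest Deadline First (EDF). The notes do not name a specific policy, so this EDF sketch is an illustrative assumption, with hypothetical task names.

```python
# Earliest-Deadline-First (EDF) selection: among the ready tasks, always
# run the one whose absolute deadline is closest.

def pick_next(ready_tasks):
    """Return the ready task with the earliest deadline."""
    return min(ready_tasks, key=lambda t: t["deadline_ms"])

ready = [
    {"name": "log_sensor", "deadline_ms": 50},
    {"name": "update_actuator", "deadline_ms": 10},
    {"name": "refresh_display", "deadline_ms": 100},
]
print(pick_next(ready)["name"])  # -> update_actuator
```

A real RTOS would re-run this selection on every scheduling event (task release, completion, or interrupt), preempting the running task if a more urgent one becomes ready.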
Disadvantages of Embedded Systems: Embedded systems have a few limitations, as follows.
After an embedded system has been built, it is difficult to make modifications, improvements or upgrades.
Hard to maintain.
Hard to take a backup of embedded files.
All settings may need to be reset if any issue occurs in the system.
Debugging is harder.
Harder to move data from one system to another.
Hardware is constrained, because it is built for a specific task.
Limited power supply durability.
Limited memory resources.
Control Systems:
The simplest type of control is activated only when defined thresholds are crossed. Feedback control uses
continuous monitoring of some output using sensors and reacts to their changes in order to regulate the output.
There are two basic kinds of feedback. Negative feedback seeks to reduce some change in a system output or
state whereas positive feedback acts to amplify a system state or output. There are several basic designs for
feedback control.
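The simplest threshold-triggered negative feedback described above can be sketched as a bang-bang thermostat: the controller only acts when the measured output crosses defined thresholds. The threshold values and names here are hypothetical.

```python
# Bang-bang (on/off) negative feedback: turn a heater on below a low
# threshold and off above a high one; inside the band, do nothing.

def thermostat_step(temperature, heater_on, low=19.0, high=21.0):
    """Return the new heater state given the measured temperature."""
    if temperature < low:
        return True          # too cold: switch heater on
    if temperature > high:
        return False         # too warm: switch heater off
    return heater_on         # inside the band: leave state unchanged

print(thermostat_step(18.0, False))  # -> True
print(thermostat_step(22.0, True))   # -> False
print(thermostat_step(20.0, True))   # -> True
```

The gap between the two thresholds is deliberate: it prevents the heater from switching on and off rapidly around a single set point.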
Programmable Controllers: Programmable controllers have been developed to support much more
configurable and flexible control, e.g., microcontrollers. The hardware architecture of microcontrollers is
much simpler than that of the more general purpose processor motherboards in PCs. Microcontrollers do not need an external address bus or data bus, because they integrate all the RAM and non-volatile memory on the same chip as the CPU. The CPU itself can be simpler, e.g., a 4-bit processor as opposed to a 64-bit processor. I/O control can also be simpler, as there may not be any video screen output or keyboard input. Programs are often developed in an emulator on a (PC) development platform and then downloaded to the target device for execution, maintenance, debugging and validation.
Simple PID-Type Controllers: There are certain circumstances in which the simple proportional or P type
controller output is not regulated correctly, e.g., when there is a lot of delay in the plant between changing the
input and seeing a change in response of the output. In this case a parameter can be regulated below its
reference value. To solve this problem, either integral or differential control or both can be added to the
control. A PID controller is so called because it combines Proportional, Integral and Derivative type control.
A proportional controller is just the error signal multiplied by a constant and fed out to a hardware drive. An
integral controller deals with the past behaviour of the control. It is almost always used in conjunction with proportional control. It sums all the preceding errors and is used to add longer-term precision to the control loop by making use of the past history.
• With proportional control action, there is a continuous linear relationship between the output of the controller M (the manipulated variable) and the actuating error signal e(t) (the deviation).
• With integral control action, the output of the controller changes at a rate which is proportional to the actuating error signal e(t).
• With derivative control action, the output of the controller depends on the rate of change of e(t).
• Control Systems – Proportional (P) Feedback: the action taken to negatively feed back a signal to the plant is in proportion to the degree the system diverges from the reference value.
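The P, I and D terms described above can be combined in a short discrete-time controller sketch. The gains and the time step are hypothetical values chosen for illustration, not tuned for any real plant.

```python
# Discrete-time PID controller: output = P + I + D contributions.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0      # accumulates past error (I term)
        self.prev_error = 0.0    # remembers last error (for the D term)

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt                    # sum of past errors
        derivative = (error - self.prev_error) / dt    # rate of change
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=2.0, ki=0.5, kd=0.1)
out = pid.update(setpoint=10.0, measured=8.0, dt=0.1)
# error = 2.0, so P = 4.0, I = 0.5 * 0.2 = 0.1, D = 0.1 * 20.0 = 2.0,
# giving an output of approximately 6.1.
print(out)
```

In a real loop, update() would be called at a fixed interval with fresh sensor readings, and its output fed to the actuator.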
Micro-sensors: Sensors are a type of transducer. Microsensors can work quite differently from equivalent macro sensors. Sensors enable adaptation and are often embedded into a system as part of a control loop.
Microsensors are two- and three-dimensional micromachined structures that have smaller size, improved
performance, better reliability, and lower production costs than many alternative forms of sensor. They are
part of the wider class of micro-electro-mechanical-systems (MEMS) devices that also includes microactuators.
Typical sizes of microsensors range from 10 μm (0.01 mm, or 10^-5 m) up to 5 mm.
Nano-Computing: Nano computing describes computing that uses extremely small, or nanoscale, devices. These state-of-the-art electronic devices could be as small as about 100 nm, which is about the same size as a virus. Nanocomputer is the logical name for a computer smaller than the microcomputer. Nanocomputers process and perform computations in a similar way to standard computers, but at a much smaller scale. With fast-moving nanotechnology, nanocomputers are expected to eventually scale down to the atomic level and be measured in nanometres.
SMART COMPUTING
NOTES
UNIT 6
What is “IoT”
“from a network of interconnected computers to a network of interconnected objects”, European Commission
“The basic idea of the IOT is that virtually every physical thing in this world can also become a computer that
is connected to the Internet”, AutoID Labs
“IoT is simply the point in time when more “things or objects” were connected to the Internet than people”,
Cisco Systems
“A world-wide network of interconnected objects uniquely addressable, based on standard communication protocol.”, Tata Consultancy Services
IoT is a service-oriented network with resource constraints and is a mandatory subset of the future Internet. IoT is the convergence of sensor nodes, RFID objects and smart devices. IoT connects the objects around us (electronic, electrical and non-electrical) to provide seamless communication and contextual services. It is a proposed development of the Internet in which everyday objects have network connectivity, allowing them to send and receive data. All objects, animals or people are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems and the Internet.
IoT
Internet of Things refers to the network of physical devices, vehicles, home appliances, and other items
embedded with electronics, software, sensors, and network connectivity, allowing them to collect and
exchange data. The IoT enables these devices to interact with each other and with the environment and
enables the creation of smart systems and services.
The Internet of Things (IoT) is characterized by the following key features that are mentioned below.
1. Connectivity
Connectivity is an important requirement of the IoT infrastructure. The things of IoT should be connected to the IoT infrastructure, and it should be guaranteed that anyone can connect anywhere at any time. Examples include the connection between people through Internet devices such as mobile phones and other gadgets, as well as connections between Internet devices such as routers, gateways, and sensors.
3. Scalability
The number of elements connected to the IoT zone is increasing day by day. Hence, an IoT setup should be
capable of handling the massive expansion. The data generated as an outcome is enormous, and it should be
handled appropriately.
5. Architecture
IoT architecture cannot be homogeneous in nature. It should be hybrid, supporting products from different manufacturers functioning in the same IoT network. IoT is not owned by any one engineering branch; it becomes a reality when multiple domains come together.
6. Safety
There is a danger of users' sensitive personal details being compromised when all of their devices are connected to the internet, which can cause them losses. Hence, data security is a major challenge. Besides, the amount of equipment involved is huge, and IoT networks themselves may be at risk; therefore, equipment safety is also critical.
7. Self-Configuring
This is one of the most important characteristics of IoT. IoT devices are able to upgrade their software in
accordance with requirements with a minimum of user participation. Additionally, they can set up the
network, allowing for the addition of new devices to an already-existing network.
8. Interoperability
IoT devices use standardized protocols and technologies to ensure they can communicate with each other and
other systems. Interoperability is one of the key characteristics of the Internet of Things (IoT). It refers to the
ability of different IoT devices and systems to communicate and exchange data with each other, regardless of
the underlying technology or manufacturer.
Architecture of IoT
The IoT needs an open architecture to maximise interoperability among heterogeneous systems and distributed resources, including providers and consumers of information and services, whether they be human beings, software, smart objects or devices.
IoT nodes may need to form peer networks with other nodes dynamically and autonomously, locally or remotely; this should be done through a decentralized, distributed approach to the architecture, with support for semantic search, discovery and peer networking. Key architectural requirements include:
• Distributed open architecture with end-to-end characteristics, interoperability of heterogeneous systems, neutral access, clear layering and resilience to physical network disruption.
• Decentralized autonomic architectures based on peering of nodes.
• Architectures moving intelligence to the very edge of the networks, up to users' terminals and things.
• Cloud computing technology, event-driven architectures, disconnected operations and synchronization.
• Use of market mechanisms for increased competition and participation.
IoT protocol stack
Internet of Things (IoT) protocols define how data is formatted and transmitted between devices across the internet. The protocol stack is layered, covering physical and link-level connectivity, network addressing and routing, transport, and application-level messaging; many of these protocols also specify how the data being exchanged between connected IoT devices is secured.
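As one concrete example of an application-level IoT messaging convention, the sketch below implements topic filter matching in the style of MQTT, a widely used publish/subscribe protocol (MQTT is not named in the notes above, so treat this as an illustrative assumption): `+` matches exactly one topic level and `#` matches all remaining levels.

```python
# MQTT-style topic filter matching ('+' = one level, '#' = rest of topic).
# Simplified sketch: it does not enforce every placement rule in the spec.

def topic_matches(filter_str, topic):
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                     # multi-level wildcard: matches rest
            return True
        if i >= len(t_parts):            # topic ran out of levels
            return False
        if f != "+" and f != t_parts[i]:  # literal level must match exactly
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("home/+/temperature", "home/kitchen/temperature"))  # True
print(topic_matches("home/#", "home/kitchen/humidity"))                 # True
print(topic_matches("home/+/temperature", "home/kitchen/humidity"))     # False
```

This kind of hierarchical topic matching is what lets a single subscriber receive data from many sensors without knowing each one in advance.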
IoT Components
2. Gateways: IoT Gateway manages the bidirectional data traffic between different networks and protocols.
Another function of the gateway is to translate between different network protocols and to ensure interoperability of the connected devices and sensors. Gateways can be configured to pre-process the data collected from thousands of sensors locally before transmitting it to the next stage. In some scenarios this is necessary for compatibility with the TCP/IP protocol.
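The local pre-processing mentioned above can be sketched as a gateway that reduces a batch of raw sensor readings to one compact summary before sending it upstream. The field names and values here are hypothetical.

```python
# Gateway-side aggregation: forward a summary instead of every raw reading.

def summarize_batch(sensor_id, readings):
    """Reduce a batch of readings to one summary record for the cloud."""
    return {
        "sensor": sensor_id,
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
    }

batch = [21.0, 21.5, 22.0, 21.5]
print(summarize_batch("temp-01", batch))
# -> {'sensor': 'temp-01', 'count': 4, 'min': 21.0, 'max': 22.0, 'avg': 21.5}
```

Sending one summary per batch instead of thousands of raw readings is what makes constrained uplinks to the cloud practical.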
3. Cloud: The Internet of Things creates massive data from devices, applications, and users, which has to be
managed in an efficient way. IoT cloud offers tools to collect, process, manage and store huge amounts of data
in real time. Industries and services can easily access these data remotely and make critical decisions when
necessary.
4. Analytics: the process of converting raw data from billions of smart devices and sensors into useful insights which can be interpreted and used for detailed analysis. Smart analytics solutions are essential for IoT systems in order to manage and improve the entire system.
5. User interfaces: the visible, tangible part of the IoT system which users can access. Designers have to ensure a well-designed user interface that minimises effort for users and encourages more interaction.
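The analytics step described above, turning raw readings into insight, can be sketched with one simple and common technique: flagging readings that sit unusually far from the mean. The data and the 2.0 standard-deviation threshold are hypothetical choices for illustration.

```python
# Simple anomaly detection: flag readings more than `threshold` standard
# deviations away from the mean of the batch.
from statistics import mean, pstdev

def find_anomalies(readings, threshold=2.0):
    mu, sigma = mean(readings), pstdev(readings)
    if sigma == 0:
        return []                      # all readings identical: no outliers
    return [x for x in readings if abs(x - mu) / sigma > threshold]

data = [20.1, 20.3, 19.9, 20.2, 20.0, 35.0]  # one spiked reading
print(find_anomalies(data))  # -> [35.0]
```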
Smart Home:
A smart home's devices are connected with each other and can be accessed through one central point—a
smartphone, tablet, laptop, or game console. Door locks, televisions, thermostats, home monitors, cameras,
lights, and even appliances such as the refrigerator can be controlled through one home automation system.
For communication between smart devices, smart home technology relies on different types of connectivity. In most cases, smart homes use Wi-Fi, Bluetooth, Z-Wave, and Zigbee: wireless technologies that allow devices to communicate with a central hub.
In home automation scenarios, the system detects the presence or absence of people in the home and carries out the configured instructions accordingly: for example, turning lights and other devices on or off automatically as people enter or leave, all from a single set of instructions.
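The presence-based behaviour described above can be sketched as a rule that makes the lights follow the occupancy reported by a motion sensor. Device and function names are hypothetical.

```python
# Presence-based automation: lights track room occupancy.

def update_lights(occupied, lights_on):
    """Return the new light state for a room given its occupancy."""
    if occupied and not lights_on:
        return True    # someone entered: turn the lights on
    if not occupied and lights_on:
        return False   # room emptied: turn the lights off
    return lights_on   # no change needed

print(update_lights(occupied=True, lights_on=False))   # -> True
print(update_lights(occupied=False, lights_on=True))   # -> False
```

A real smart home hub would run rules like this for every room, triggered by sensor events rather than polling.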
Smart agriculture:
The term smart agriculture refers to the usage of technologies like Internet of Things, sensors, location
systems, robots and artificial intelligence on your farm. The ultimate goal is increasing the quality and
quantity of the crops while optimizing the human labor used.
Integrated Pest Management or Control (IPM/C) – Farmers' hard work is often destroyed by pests, causing huge monetary losses. To prevent such situations, agricultural IoT systems monitor and scan environmental parameters and plant growth; this data is then used by pest-control sensors capable of predicting pest behaviour. Farmers can use this information to reduce pest damage on a large scale.
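The prediction idea above can be sketched as a simple rule-based risk score built from environmental readings. The weights, thresholds, and parameter names are hypothetical assumptions for illustration, not an agronomic model.

```python
# Toy pest-risk scoring from environmental sensor readings.

def pest_risk(humidity_pct, temperature_c, leaf_wetness_hours):
    """Return a 0-10 risk score; higher means conditions favour pests."""
    score = 0
    if humidity_pct > 80:
        score += 4   # high humidity favours fungal pests
    if 20 <= temperature_c <= 30:
        score += 3   # warm band favours insect activity
    if leaf_wetness_hours > 6:
        score += 3   # prolonged leaf wetness aids infestation
    return score

def should_alert(score, threshold=6):
    """Alert the farmer once the risk score crosses the threshold."""
    return score >= threshold

risk = pest_risk(humidity_pct=85, temperature_c=25, leaf_wetness_hours=8)
print(risk, should_alert(risk))  # -> 10 True
```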
Smart City: