
INDUSTRIAL INTERNET OF THINGS – SECA4005

UNIT – II TECHNICAL AND BUSINESS INNOVATORS OF INDUSTRIAL INTERNET

TECHNICAL AND BUSINESS INNOVATORS OF INDUSTRIAL INTERNET
Miniaturization – Cyber Physical Systems – Wireless technology – IP Mobility – Network
Functionality Virtualization – Cloud and Fog - Big Data and Analytics – M2M Learning and
Artificial Intelligence.

MINIATURIZATION
In the world of Internet of Things (IoT), miniaturization is enabling new applications in the form of
wearables, vehicles and transportation, disposable tracking tech for pharmaceuticals and produce,
and more uses than we can count for smart city and smart home use.
In this digital era, as we wirelessly connect more and more devices to the Internet, researchers and engineers face several challenges: how to fit a radio transmitter into the existing device real estate, how to make increasingly smaller devices, and how to reduce the board area needed for mounting chips. They are also striving to meet consumer demand for Internet of Things (IoT) products that are ergonomically easy to use.
Ideally, engineers would prefer IoT components that are smaller, have better RF performance, and are reasonably priced. However, these characteristics do not usually converge in a single IoT component offering, and that presents a challenge for solution providers.
Fortunately, the size of a silicon die has been getting smaller and smaller over the years as the
industry adopts new silicon manufacturing processes. The industry has been solving the space issue
for IoT implementations by combining the MCU and RF frontend into system-on-chip (SoC)
configurations.
The demand for embedded SIM (eSIM) is steadily rising among smartphone manufacturers, laptop manufacturers, and energy and utility companies. OEMs across the globe are focusing on the development and integration of eSIM in numerous applications.
The increasing demand for miniaturization of IoT components across various industries is also
boosting the demand for eSIM globally.
In 2018, researchers from the Green IC group at the National University of Singapore (NUS), in collaboration with associate professor Paolo Crovetti from the Polytechnic University of Turin in Italy, created a wake-up timer that triggers sensors to perform their tasks only when required. The timer is believed to be so power-efficient that it runs from an on-chip solar cell with a diameter close to that of a human hair, a major step in low-power IoT miniaturization.
The wake-up timer can continue operating even when a battery is not available and with very little ambient power, as demonstrated by a miniaturized on-chip solar cell exposed to moonlight. An on-chip capacitor used for slow and infrequent wake-up also helps reduce the device's silicon manufacturing cost thanks to its small surface area of 49 microns on each side.

IoT sensor nodes are individual miniaturized systems containing one or more sensors, as well as circuits for data processing, wireless communication, and power management. To keep power consumption low, they are kept in sleep mode most of the time, and wake-up timers are used to trigger the sensors to carry out a task. Because the wake-up timers remain on even while the rest of the node sleeps, they set the minimum power consumption of IoT sensor nodes. They also play a fundamental role in reducing the average power consumption of systems-on-chip.
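As a rough illustration of this duty-cycling idea, the sketch below shows a node that wakes briefly, samples, transmits, and sleeps again; the sensor, radio, and sleep functions are hypothetical stand-ins rather than any particular vendor SDK.

import time

WAKE_INTERVAL_S = 300        # wake-up timer period: one reading every 5 minutes
ACTIVE_WINDOW_S = 0.05       # the node is awake for only ~50 ms per cycle

def read_sensor():
    # stand-in for an ADC or I2C sensor read
    return 23.5

def transmit(sample):
    # stand-in for a short radio burst (e.g. a LoRa uplink or BLE advertisement)
    print("tx:", sample)

def deep_sleep(seconds):
    # on real hardware this would power down everything except the wake-up timer
    time.sleep(seconds)

while True:
    sample = read_sensor()                            # brief active window
    transmit(sample)
    deep_sleep(WAKE_INTERVAL_S - ACTIVE_WINDOW_S)     # the timer dominates the power budget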
When designing a hardware module, one of the pressing questions concerns the antenna. Developers must work around the space reserved for the antenna and choose the type of antenna they will integrate with a corresponding module. PCB trace antennas are the general preference because of their low bill-of-materials (BoM) cost, but they require significant area, which can make devices large and difficult to work with.
The smaller the size we try to achieve, the lower the RF efficiency we can expect. Chip antennas are popular across applications because they simplify design effort and minimize the space consumed.
According to statistics from Bluegiga, only about 10 percent of evaluated designs deploy an external antenna, while 90 percent of customers choose modules with a built-in chip antenna. Hence, it becomes necessary to continuously evaluate the possibility of space reduction on the chipboard, something Cloud of Things has achieved with its latest DeviceTone Genie product line, working with partners including Nordic Semiconductor and AES with their minIot devices.

Importance of Miniaturization
Miniaturization has produced sleeker computers and phones that take up less space and produce less waste in the manufacturing and assembly processes, and smaller technology is also more stylish.
Miniaturization in form factor chipsets and modules has contributed to cost-effective, faster-running,
and more powerful computer components.

Miniaturization in MEMS Sensors

Fig.2.1 Miniaturization
Micromachining has become a key technology for the miniaturization of sensors. Being able to
reduce the size of the sensing element by using standard semiconductor manufacturing technology
allows a dramatic reduction in size. Integrating the signal processing alongside the sensing element
further enhances the opportunities to reduce the size of the system, eliminating the need for extra
pins to link to external devices.
The choice of micromachining process technology can also determine the limits of miniaturization,
but this is often determined by the sensor type. Piezoelectric micromachined elements for pressure sensing have less opportunity to scale than a diaphragm built from CMOS silicon on the surface of a substrate, for example, but can deliver higher performance.

Uses of Miniaturized Technology


The applications for IoT extend from personal use, with wrist wear, footwear, eyewear, body wear, and neckwear for training and fitness, to more practical applications in sports, infotainment, healthcare, defense, enterprise, and industry.
The industrial applications of wearable technology will see major benefits in the healthcare segment
in which connected devices improve efficiency and reduce operational costs.
By creating more powerful devices with smaller footprints – particularly through the use of
improved edge processing – providers and facilities will gain the ability to keep track of patients
through real-time monitoring of vital signs and health stats. From wristbands to implants, data is
transmitted through the cloud and analyzed to produce more accurate outcomes and treatment
options.
In the military industry, wearable technology “can help soldiers in the field by tracking them more
accurately, giving central command more precision in coordinating operations,” according to Emily
Rector with MarketScale. Wireless, hands-free communications and more efficient battery life could
contribute to timesaving and lifesaving operations.

Limits Of Miniaturization
Miniaturized equipment is frequently not as easy to maintain and therefore typically does not receive the same routine maintenance and care that larger equipment receives.
This can lead to increased overall costs as a result of disposal and the overheads required to keep
additional equipment on hand.

CYBER PHYSICAL SYSTEMS


Cyber – computation, communication, and control that are discrete, logical, and switched
Physical – natural and human-made systems governed by the laws of physics and operating in
continuous time
Cyber-Physical Systems – systems in which the cyber and physical systems are tightly integrated at
all scales and levels
A cyber-physical system (CPS) or intelligent system is a computer system in which a mechanism is
controlled or monitored by computer-based algorithms. In cyber-physical systems, physical and
software components are deeply intertwined, able to operate on different spatial and temporal scales,
exhibit multiple and distinct behavioral modalities, and interact with each other in ways that change
with context. CPS involves transdisciplinary approaches, merging theory
of cybernetics, mechatronics, design and process science. The process control is often referred to
as embedded systems. In embedded systems, the emphasis tends to be more on the computational
elements, and less on an intense link between the computational and physical elements. CPS is also
similar to the Internet of Things (IoT), sharing the same basic architecture; nevertheless, CPS
presents a higher combination and coordination between physical and computational elements.
Examples of CPS include smart grid, autonomous automobile systems, medical
monitoring, industrial control systems, robotics systems, and automatic pilot avionics. Precursors of
cyber-physical systems can be found in areas as diverse as aerospace, automotive, chemical
processes, civil infrastructure, energy, healthcare, manufacturing, transportation, entertainment,
and consumer appliances.

CPS Characteristics
• CPS are physical and engineered systems whose operations are monitored, coordinated, controlled,
and integrated.
• This intimate coupling between the cyber and physical is what differentiates CPS from other
fields.
Some hallmark characteristics:
• Cyber capability in every physical component
• Networked at multiple and extreme scales
• Complex at multiple temporal and spatial scales
• Constituent elements are coupled logically and physically
• Dynamically reorganizing/reconfiguring open system.
• High degrees of automation, control loops closed at many scales (see the sketch after this list)
• Unconventional computational & physical substrates (such as bio, nano, chem, ...)
• Operation must be dependable, certified in some cases.
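To make the "control loops closed at many scales" hallmark concrete, the following is a minimal sketch of one such loop in Python: sense, compute, actuate, repeat. The thermostat-style setpoint, gain, and the sensor and actuator functions are illustrative assumptions, not part of any particular CPS.

import time

SETPOINT_C = 22.0     # desired temperature
KP = 0.8              # proportional gain

def read_temperature():
    # stand-in for a physical temperature sensor
    return 20.0

def set_heater_power(level):
    # stand-in for an actuator command, clamped to the range 0..1
    level = max(0.0, min(1.0, level))
    print(f"heater power: {level:.2f}")

while True:
    error = SETPOINT_C - read_temperature()   # cyber: compute on the sensed state
    set_heater_power(KP * error)              # physical: drive the plant
    time.sleep(1.0)                           # the loop is closed once per second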

Mobile Cyber-physical systems


Mobile cyber-physical systems, in which the physical system under study has inherent mobility, are
a prominent subcategory of cyber-physical systems. Examples of mobile physical systems include
mobile robotics and electronics transported by humans or animals. The rise in popularity of smart
phones has increased interest in the area of mobile cyber-physical systems. Smartphone platforms
make ideal mobile cyber-physical systems for a number of reasons, including:
 Significant computational resources, such as processing capability, local storage
 Multiple sensory input/output devices, such as touch screens, cameras, GPS chips, speakers,
microphone, light sensors, proximity sensors
 Multiple communication mechanisms, such as WiFi, 4G, EDGE, Bluetooth for
interconnecting devices to either the Internet, or to other devices
 High-level programming languages that enable rapid development of mobile CPS node
software, such as Java, C#, or JavaScript
 Readily available application distribution mechanisms, such as Google Play Store and Apple
App Store
 End-user maintenance and upkeep, including frequent re-charging of the battery

Examples of Cyber Physical System
Common applications of CPS typically fall under sensor-based communication-enabled autonomous
systems. For example, many wireless sensor networks monitor some aspect of the environment and
relay the processed information to a central node. Other types of CPS include smart
grid, autonomous automotive systems, medical monitoring, process control systems, distributed
robotics, and automatic pilot avionics.
A real-world example of such a system is the Distributed Robot Garden at MIT in which a team of
robots tend a garden of tomato plants. This system combines distributed sensing (each plant is
equipped with a sensor node monitoring its status), navigation, manipulation and wireless
networking.
A focus on the control system aspects of CPS that pervade critical infrastructure can be found in the
efforts of the Idaho National Laboratory and collaborators researching resilient control systems. This
effort takes a holistic approach to next generation design, and considers the resilience aspects that
are not well quantified, such as cyber security, human interaction, and complex interdependencies.
Another example is MIT's ongoing CarTel project where a fleet of taxis work by collecting real-time
traffic information in the Boston area. Together with historical data, this information is then used for
calculating fastest routes for a given time of the day.
CPS are also used in electric grids to perform advanced control, especially in the smart grids context
to enhance the integration of distributed renewable generation. Special remedial action schemes are needed to limit the current flows in the grid when wind farm generation is too high. Distributed CPS are a key solution for this type of issue.
In the industrial domain, cyber-physical systems empowered by cloud technologies have led to novel approaches that paved the path to Industry 4.0, as the European Commission IMC-AESOP project with partners such as Schneider Electric, SAP, Honeywell, and Microsoft demonstrated.

WIRELESS TECHNOLOGY

The Internet of Things (IoT) starts with connectivity, but since IoT is a widely diverse and multifaceted realm, there is no one-size-fits-all communication solution. This section walks through the six most common types of IoT wireless technologies. Each solution has its strengths and weaknesses in various network criteria and is therefore best suited for different IoT use cases.

Fig. 2.2 : Wireless technologies


1. LPWANs
Low Power Wide Area Networks (LPWANs) are the new phenomenon in IoT. By providing long-range communication on small, inexpensive batteries that last for years, this family of technologies is purpose-built to support large-scale IoT networks sprawling over vast industrial and commercial campuses.
LPWANs can literally connect all types of IoT sensors – facilitating numerous applications
from asset tracking, environmental monitoring and facility management to occupancy
detection and consumables monitoring. Nevertheless, LPWANs can only send small blocks of
data at a low rate, and therefore are better suited for use cases that don’t require high
bandwidth and are not time-sensitive.
Also, not all LPWANs are created equal. Today, there exist technologies operating in both the
licensed (NB-IoT, LTE-M) and unlicensed (e.g. MYTHINGS, LoRa, Sigfox etc.) spectrum with
varying degrees of performance in key network factors. For example, while power consumption is a major issue for cellular-based, licensed LPWANs, Quality-of-Service and scalability are
main considerations when adopting unlicensed technologies. Standardization is another
important factor to think of if you want to ensure reliability, security, and interoperability in the
long run.

2. Cellular (3G/4G/5G)
Well-established in the consumer mobile market, cellular networks offer reliable broadband
communication supporting various voice calls and video streaming applications. On the
downside, they impose very high operational costs and power requirements.
While cellular networks are not viable for the majority of IoT applications powered by battery-
operated sensor networks, they fit well in specific use cases such as connected cars or fleet
management in transportation and logistics. For example, in-car infotainment, traffic
routing, advanced driver assistance systems (ADAS) alongside fleet telematics and tracking
services can all rely on the ubiquitous and high bandwidth cellular connectivity.
Cellular next-gen 5G with high-speed mobility support and ultra-low latency is positioned to be
the future of autonomous vehicles and augmented reality. 5G is also expected to enable real-
time video surveillance for public safety, real-time mobile delivery of medical data sets
for connected health, and several time-sensitive industrial automation applications in the
future.

3. Zigbee and Other Mesh Protocols


Zigbee is a short-range, low-power, wireless standard (IEEE 802.15.4), commonly deployed in
mesh topology to extend coverage by relaying sensor data over multiple sensor nodes.
Compared to LPWAN, Zigbee provides higher data rates but much lower power efficiency because of the mesh configuration.
Because of their physical short-range (< 100m), Zigbee and similar mesh protocols (e.g. Z-
Wave, Thread etc.) are best-suited for medium-range IoT applications with an even distribution
of nodes in close proximity. Typically, Zigbee is a perfect complement to Wi-Fi for
various home automation use cases like smart lighting, HVAC controls, security and energy
management, etc. – leveraging home sensor networks.
Until the emergence of LPWAN, mesh networks had also been implemented in industrial contexts, supporting several remote monitoring solutions. Nevertheless, they are far from ideal for many industrial facilities that are geographically dispersed, and their theoretical scalability is often inhibited by increasingly complex network setup and management.

4. Bluetooth and BLE
Defined in the category of Wireless Personal Area Networks, Bluetooth is a short-range communication technology well-positioned in the consumer marketplace. Bluetooth Classic was originally intended for point-to-point or point-to-multipoint (up to seven slave nodes) data exchange among consumer devices. Optimized for power consumption, Bluetooth Low Energy (BLE) was later introduced to address small-scale consumer IoT applications.
BLE-enabled devices are mostly used in conjunction with electronic devices, typically
smartphones that serve as a hub for transferring data to the cloud. Nowadays, BLE is widely
integrated into fitness and medical wearables (e.g. smartwatches, glucose meters, pulse
oximeters, etc.) as well as Smart Home devices (e.g. door locks) – whereby data is
conveniently communicated to and visualized on smartphones.
The release of the Bluetooth Mesh specification in 2017 aims to enable more scalable deployment of BLE devices, particularly in retail contexts. Providing versatile indoor localization features, BLE beacon networks have been used to unlock new service innovations like in-store navigation, personalized promotions, and content delivery.
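Indoor localization with BLE beacons usually starts from a distance estimate derived from received signal strength. The sketch below shows the standard log-distance path-loss estimate; the 1 m reference power and the path-loss exponent are assumptions that must be calibrated for each deployment.

def estimate_distance(rssi_dbm, tx_power_at_1m=-59, path_loss_exponent=2.0):
    # Rough distance in metres from a BLE beacon given its RSSI.
    # tx_power_at_1m is the calibrated RSSI measured 1 m from the beacon;
    # path_loss_exponent is about 2 in free space and higher indoors.
    return 10 ** ((tx_power_at_1m - rssi_dbm) / (10 * path_loss_exponent))

# Example: a beacon heard at -75 dBm is estimated to be roughly 6.3 m away
print(round(estimate_distance(-75), 1))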

5. Wi-Fi
There is virtually no need to explain Wi-Fi, given its critical role in providing high-throughput
data transfer for both enterprise and home environments. However, in the IoT space, its major
limitations in coverage, scalability and power consumption make the technology much less
prevalent.
Imposing high energy requirements, Wi-Fi is often not a feasible solution for large networks of battery-operated IoT sensors, especially in industrial IoT and smart building scenarios. Instead, it is better suited to devices that can be conveniently connected to a power outlet, like smart home gadgets and appliances, digital signage, or security cameras.
Wi-Fi 6 – the newest Wi-Fi generation – brings greatly enhanced network bandwidth (up to 9.6 Gbps) to improve data throughput per user in congested environments. With this, the
standard is poised to level up public Wi-Fi infrastructure and transform customer experience
with new digital mobile services in retail and mass entertainment sectors. Also, in-car networks
for infotainment and on-board diagnostics are expected to be the most game-changing use case
for Wi-Fi 6. Yet, the development will likely take some more time.

6. RFID
Radio Frequency Identification (RFID) uses radio waves to transmit small amounts of data
from an RFID tag to a reader within a very short distance. Till now, the technology has
facilitated a major revolution in retail and logistics.
By attaching an RFID tag to all sorts of products and equipment, businesses can track their
inventory and assets in real-time – allowing for better stock and production planning as well as
optimized supply chain management. Alongside increasing IoT adoption, RFID continues to be
entrenched in the retail sector, enabling new IoT applications like smart shelves, self-checkout,
and smart mirrors.

IP MOBILITY
The increasing use of virtualization in the data center has enabled an unprecedented degree of
flexibility in managing servers and workloads. One important aspect of this newfound flexibility is
mobility. As workloads are hosted on virtual servers, they are decoupled from the physical
infrastructure and become mobile by definition. As end-points become detached from the physical
infrastructure and are mobile, the routing infrastructure is challenged to evolve from a topology
centric addressing model to a more flexible architecture. This new architecture is capable of
allowing IP addresses to freely and efficiently move across the infrastructure. There are several
ways of adding mobility to the IP infrastructure, and each of them addresses the problem with
different degrees of effectiveness. LISP Host Mobility is poised to provide a solution for workload
mobility with optimal effectiveness. This document describes the LISP Host Mobility solution,
contrasts it with other IP mobility options, and provides specific guidance for deploying and
configuring the LISP Host mobility solution.

IP Mobility Requirements
The requirements for an IP mobility solution can be generalized to a few key aspects. To make a fair comparison of existing solutions and clearly understand the added benefit of the LISP Host Mobility solution, the different functional aspects that must be addressed in an IP mobility solution are listed below.
• Redirection
The ultimate goal of IP mobility is to steer traffic to the valid location of the end-point. This aspect
is generally addressed by providing some sort of re-direction mechanism to enhance the traffic
steering already provided by basic routing. Redirection can be achieved by replacing the destination
address with a surrogate address that is representative of the new location of the end-point. Different
techniques will allow the redirection of traffic either by replacing the destination's address altogether
or by leveraging a level of indirection in the addressing such as that achieved with tunnels and
encapsulations. The different approaches impact applications to different degrees. The ultimate goal
of IP mobility is to provide a solution that is totally transparent to the applications and allows for the
preservation of established sessions, as end-points move around the IP infrastructure.
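The redirection idea can be pictured as a mapping from an end-point's identity to its current locator that is updated whenever the end-point moves. The toy sketch below illustrates only that mapping-and-encapsulation concept; the addresses are invented and this is not the LISP protocol itself.

# Toy identity-to-location map: traffic addressed to an end-point identifier
# is redirected to whatever locator the end-point currently sits behind.
mapping = {
    "10.1.1.10": "203.0.113.1",   # workload initially hosted at data center A
}

def move_endpoint(eid, new_locator):
    mapping[eid] = new_locator                   # update the map when the workload moves

def forward(dst_eid, payload):
    locator = mapping[dst_eid]                   # lookup: identity -> location
    print(f"encapsulate toward {locator}: {payload}")   # tunnel to the current location

forward("10.1.1.10", "hello")                    # delivered to data center A
move_endpoint("10.1.1.10", "198.51.100.7")       # the workload migrates
forward("10.1.1.10", "hello again")              # now delivered to data center B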
• Scalability
Most techniques create a significant amount of granular state to re-direct traffic effectively. The
state is necessary to correlate destination IP addresses to specific locations, either by means of
mapping or translation. This additional state must be handled in a very efficient manner to attain a
solution that can support a deployable scale at a reasonable cost in terms of memory and processing.
• Optimized Routing
As end-points move around, it is key that traffic is routed to these end-points following the best
possible path. Since mobility is based largely on re-direction of traffic, the ability to provide an
optimal path is largely a function of the location of the re-directing element. Depending on the
architecture, the solution may generate sub-optimal traffic patterns often referred to as traffic
triangulation or hair-pinning in an attempt to describe the unnecessary detour traffic needs to take
when the destination is mobile. A good mobility solution is one that can provide optimized paths
regardless of the location of the end-point.

• Client Independent Solution


It is important that the mobility solution does not depend on agents installed on the mobile end-
points or on the clients communicating with these end-points. A network based solution is highly
desirable and is key to the effective deployment of a mobility solution given the precedent of the
large installed base of end-points that cannot be changed or managed at will to install client
software.
• Address Family Agnostic Solution
The solution provided must work independently of IPv4 or IPv6 end-points and networks. Since
mobility relies on the manipulation of the mapping of identity to location, address families with
lengthier addresses tend to provide alternatives not available with smaller address spaces. These
address dependent solutions have limited application as they usually call for an end to end
deployment of IPv6. To cover the broad installed base of IPv4 networking and end-points, the ideal
solution should work for IPv4 or IPv6 independently.

Existing IP Mobility Solutions


The following IP Mobility technology solutions are available and described below:
• Route Health Injection (RHI) and Host Routing
• Mobile IPv4
• Mobile IPv6
• DNS Based Redirection: Global Site Selector (GSS)
Route Health Injection (RHI) and Host Routing
One simple way to redirect traffic to a new location when a server (or group of servers) moves is to
inject a more specific route to the moved end-point(s) into the routing protocol when the moves are
detected. In the extreme case, this means injecting a host route from the "landing" location every
time a host moves. Load balancers with the Route Health Injection (RHI) functionality implemented
can provide an automated mechanism to detect server moves and inject the necessary host routes
when the servers move.
This approach, although simple, pollutes the routing tables considerably and causes a large amount of churn in the routing protocol. Forcing churn in the routing protocol is a risky proposition as it
could lead to instabilities and overall loss of connectivity, together with adding delays to roaming
handoffs.
Mobile IPv4
Mobile IP is defined for IPv4 in IETF RFC 3344. Basically mobile IPv4 provides a mechanism to
redirect traffic to a mobile node whenever this node moves from its "Home Network" to a "Foreign
Network." Every host will have a "Home Address" within a "Home Network" which is front-ended
by a router that acts as a "Home Agent" and that advertises the "Home Network" into the routing
protocol. Traffic destined to the "Home Address" will always be routed to the "Home Agent." If the
mobile node is in its "Home Network" traffic will be forwarded directly in the data plane to the host
as per regular routing. If the host has moved to a "Foreign Network", traffic will be IP tunneled by
the "Home Agent" to a "Care-of-Address" which is the address of the gateway router for the
"Foreign Network." With Mobile IPv4 there is always a triangular traffic pattern. Also, Mobile IPv4
does not offer a solution for multicast. Since the mobile node is usually sourcing traffic, if the
Foreign Agent is not directly connected, there is the need for host route injection at the foreign site
to get RPF to work. In addition, multicast traffic from the mobile node has to always hairpin through
the home agent since the distribution tree is built and rooted at the “Home Agent.”
Mobile IPv6
IETF RFC 3775 defines mobility support in IPv6. IPv6 takes a step beyond IPv4 mobility and
provides optimal data paths between server and client. The process in IPv6 is similar to that of IPv4
with a few additions. Rather than having the Home Agent always redirect the traffic to the Care-of-
Address (CoA) for the server that has moved, the Home Agent is taken out of the data path by
distributing the CoA to Home Address Binding information to the client itself. Once the client has
the CoA information for a particular server, it can send traffic directly to the CoA rather than
triangulating it through the Home Address. This provides a direct path from client to server.
Although Mobile IPv6 provides direct path routing for mobile nodes, it is limited to IPv6 enabled
end-points, it requires that the entire data path be IPv6 enabled, and it also requires that the end-
points have IPv6 mobility agents installed on them.

DNS Based Redirection: Global Site Selector (GSS)


It may be possible to direct traffic to a moving server by updating the DNS entries for the moving
server as the server moves locations. This scheme assumes that every time a server moves it is
assigned a new IP address within the server's "landing" subnet. When the server moves, its DNS
entry is updated to reflect the new IP address. Any new connections to the server will use the new IP
address that is learnt via DNS resolution. Thus traffic is redirected by updating the mapping of the
DNS name (identity) to the new IP address (location). The new IP address assigned after the move
may be assigned directly to the server or may be a new Virtual IP (VIP) on a load balancer front-
ending the server at the new location. When using load balancers at each location, the load balancers
can be leveraged to determine the location of a host by checking the servers' health with probes.
When a change of location is detected, the integration of workflow in vCenter (VMware) updates
the Global Site Selector (GSS) of the new VIP for the server and the GSS will in turn proceed to
update the DNS system with the new VIP to server-name mapping. Established connections will continue to try to reach the original VIP; it is up to the load balancers to redirect those connections to the new host location, creating a hair-pinned traffic pattern for the previously
established connections. New connections will be directed to the new VIP (provided the DNS cache
has been updated on the client) and will follow a direct path to this new VIP. Eventually all old
connections are completed and there are no hair-pinned flows.
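A toy sketch of the mechanism is shown below (the hostname and addresses are invented): only the DNS name-to-VIP mapping changes when the server moves, so new resolutions reach the new site, while clients holding a stale cached answer keep hitting the old VIP until their cache expires.

# DNS-based redirection in miniature: the name-to-address mapping is the only
# thing that changes when the server moves.
dns = {"app.example.com": "192.0.2.10"}          # VIP at the original site

def resolve(name, cached=None):
    return cached if cached else dns[name]       # a stale cache causes hair-pinning

def move_server(name, new_vip):
    dns[name] = new_vip                          # GSS-style update when the move is detected

print(resolve("app.example.com"))                          # 192.0.2.10
move_server("app.example.com", "198.51.100.20")
print(resolve("app.example.com"))                          # new VIP for new connections
print(resolve("app.example.com", cached="192.0.2.10"))     # stale client still uses the old VIP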

The main caveats with this approach include:
• Rate of refresh for the DNS cache may impact either the convergence time for the move or the
scalability of the DNS system if the rate is too high.
• Works only for name-based connections while many applications are moving to an IP based
connection model.
• Previously established connections are hair-pinned. This implies that there is a period of time
where there are active connections to the old address and some new connections to the new address
in the second data center. During this state the network administrator may not be able to ascertain
that these two addresses are the same system (from the point of view of the application).

NETWORK FUNCTIONALITY VIRTUALIZATION


Network Function Virtualization, or NFV, is a way to reduce cost and accelerate service deployment
for network operators by decoupling functions like a firewall or encryption from dedicated hardware
and moving them to virtual servers.
Instead of installing expensive proprietary hardware, service providers can purchase inexpensive
switches, storage and servers to run virtual machines that perform network functions. This collapses
multiple functions into a single physical server, reducing costs and minimizing truck rolls.

If a customer wants to add a new network function, the service provider can simply spin up a new
virtual machine to perform that function.
For example, instead of deploying a new hardware appliance across the network to enable network
encryption, encryption software can be deployed on a standardized server or switch already in the
network.
This virtualization of network functions reduces dependency on dedicated hardware appliances for
network operators, and allows for improved scalability and customization across the entire network.
Different from a virtualized network, NFV seeks to offload network functions only, rather than the
entire network.

Fig.2.3 : Network Function Virtualization architecture
NFV architecture
The NFV architecture proposed by the European Telecommunications Standards Institute (ETSI) is
helping to define standards for NFV implementation. Each component of the architecture is based
on these standards to promote better stability and interoperability.
NFV architecture consists of:
 Virtualized network functions (VNFs) are software applications that deliver network
functions such as file sharing, directory services, and IP configuration.
 Network functions virtualization infrastructure (NFVi) consists of the infrastructure
components—compute, storage, networking—on a platform to support software, such as
a hypervisor like KVM or a container management platform, needed to run network apps.
 Management, automation and network orchestration (MANO) provides the framework for managing NFV infrastructure and provisioning new VNFs (sketched below).
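The sketch below illustrates, in deliberately simplified Python, what "spinning up a new VNF" amounts to in software; the classes and image names are invented stand-ins for the MANO and NFVi roles, not the ETSI interfaces themselves.

# Toy orchestrator: network functions become software instances launched on
# shared infrastructure instead of dedicated appliances.
class VNF:
    def __init__(self, name, image):
        self.name, self.image = name, image

    def start(self):
        print(f"starting {self.name} from image {self.image}")

class Orchestrator:              # plays the MANO role in this sketch
    def __init__(self):
        self.running = []

    def instantiate(self, name, image):
        vnf = VNF(name, image)
        vnf.start()              # in reality: schedule a VM or container on the NFVi
        self.running.append(vnf)
        return vnf

mano = Orchestrator()
mano.instantiate("firewall", "vendor/vfirewall:1.0")        # new function, no new hardware
mano.instantiate("encryption", "vendor/vpn-gateway:2.3")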

Software-defined networking (SDN) and NFV
NFV and SDN are not dependent on each other, but they do have similarities. Both rely on
virtualization and use network abstraction, but how they separate functions and abstract resources is
different.
SDN separates network forwarding functions from network control functions with the goal of
creating a network that is centrally manageable and programmable. NFV abstracts network
functions from hardware. NFV supports SDN by providing the infrastructure on which SDN
software can run.
NFV and SDN can be used together, depending on what you want to accomplish, and both use
commodity hardware. With NFV and SDN, you can create a network architecture that is more
flexible, programmable, and uses resources efficiently.

The benefits of using NFV


There are plenty of reasons for organizations to use NFV, including the following benefits:
 Better communication
 Reduced costs
 Improved flexibility and accelerated time to market for new products and updates
 Improved scalability and resource management
 Reduced vendor lock-in

Better communication and information accessibility


In addition to managing networks, NFV improves network function delivery by transforming how network architects create network services: different network nodes are linked together to produce a communication channel that can provide freely accessible information to users.
Reduced costs
Often used to great effect for decoupling network services, NFV can also be used as an alternative to dedicated routers, firewalls and load balancers. One of the appeals of NFV over such appliances is that it does not require network operators to purchase dedicated hardware devices to perform their work or build service chains. This helps reduce operating expenses and allows work to be performed with fewer potential operating issues.
Improved scalability
Because VMs have virtualized services, they can receive portions of the virtual resources on x86
servers, allowing multiple VMs to run from a single server and better scale, based on the remaining
resources. This advantage helps direct unused resources to where they’re needed and boosts
efficiency for data centers with virtualized infrastructures.
NFV gives networks the ability to quickly and easily scale their resources based on incoming traffic and resource requirements, and software-defined networking (SDN) software lets VMs automatically scale up or down.
Better resource management
Once a data center or similar infrastructure is virtualized, it can do more with fewer resources
because a single server can run different VNFs simultaneously to produce the same amount of work.
It allows for an increased workload capacity while reducing the data center footprint, power
consumption and cooling needs.
Flexibility and accelerated time to market
NFV helps organizations update their infrastructure software when network demands change,
starkly reducing the need for physical updates. As business requirements change and new market
opportunities open, NFV helps organizations quickly adapt. Because a network’s infrastructure can
be altered to better support a new product, the time-to-market period can be shortened.
Reduced vendor lock-in
The largest benefit of running VNFs on COTS hardware is that organizations aren’t chained to
proprietary, fixed-function boxes that take truck rolls and lots of time and labor for deployment and
configuration.

CLOUD AND FOG

Fog Computing
Fog computing, also called fog networking or fogging, describes a decentralized computing
structure located between the cloud and devices that produce data. This flexible structure enables
users to place resources, including applications and the data they produce, in logical locations to
enhance performance.

Working of Fog computing

Fig.2.4 : Fog Computing


Fog computing works by deploying fog nodes throughout your network. Devices such as controllers, switches, routers, and video cameras can act as fog nodes. These fog nodes can then be deployed in target areas such as your office floor or within a vehicle. When an IoT device generates data, this data can then be analyzed via one of these nodes without having to be sent all the way back to the cloud. The main difference between cloud computing and fog computing is that the former provides centralized access to resources whereas the latter provides decentralized local access.
Transporting data through fog computing has the following steps:
 Signals from IoT devices are wired to an automation controller which then executes a control
system program to automate the devices.
 The control system program sends data through to an OPC server or protocol gateway.
 The data is then converted into a protocol that can be more easily understood by internet-
based services (Typically this is a protocol like HTTP or MQTT).

 Finally, the data is sent to a fog node or IoT gateway, which collects the data for further analysis. The node filters the data and in some cases saves it to hand over to the cloud later (a minimal sketch of this step follows the list).
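A minimal sketch of this last step, assuming the third-party paho-mqtt package, a reachable broker, and an illustrative topic and threshold: the fog node filters local readings and forwards only the interesting ones to the cloud over MQTT.

import json
import paho.mqtt.client as mqtt   # third-party package: pip install paho-mqtt

BROKER = "cloud-broker.example.com"   # illustrative hostname, not a real endpoint
TOPIC = "plant1/temperature/alerts"
THRESHOLD_C = 25.0                    # only out-of-range readings leave the site

client = mqtt.Client()                # paho-mqtt 1.x style; 2.x needs a CallbackAPIVersion argument
client.connect(BROKER, 1883)

def on_local_reading(sensor_id, value_c):
    # called by the fog node for every local sensor reading
    if value_c > THRESHOLD_C:                        # filter at the edge
        payload = json.dumps({"sensor": sensor_id, "temp_c": value_c})
        client.publish(TOPIC, payload)               # hand the data over to the cloud
    # readings below the threshold stay local, saving bandwidth

on_local_reading("vat-3", 27.4)   # forwarded to the cloud
on_local_reading("vat-3", 21.0)   # kept local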


Applications of fog computing


 Linked vehicles: Self-driving and connected vehicles are now available on the market, producing a significant volume of data. The information has to be interpreted and processed quickly, based on factors such as traffic, driving conditions, and environment. All this information is processed close to the vehicle with the aid of fog computing.
 Smart Grids and Smart Cities: Energy networks use real-time data for the efficient management of systems. It is necessary to process the data near the location where it is produced, and data from many sensors may be generated at once. Fog computing is constructed in such a manner that these requirements can be met.

 Real-time analytics: Fog computing deployments can transfer data from the location where it is produced to where it is needed. Fog computing is used for real-time analytics, for example passing data from production networks to financial institutions that rely on real-time data.

Characteristics of Fog
Cognition:
Cognition is responsiveness to client-centric objectives. Fog-based data access and analytics give better awareness of customer requirements and of the best places to transmit, store, and control functions across the cloud-to-IoT continuum. Because applications run in close proximity to end devices, they can be more aware of, and responsive to, customer requirements.
Heterogeneity:
Fog computing is a virtualized structure, so it offers computational, storage, and networking services between the main cloud and devices at the edge. Its servers are heterogeneous, consisting of hierarchical building blocks at distributed locations.
Geographical Environment Distribution:
The fog computing environment is widely distributed in order to provide QoS for both mobile and stationary end devices. A fog network distributes its nodes and sensors geographically across different environments, for example temperature monitoring at a chemical vat, weather monitoring sensors, STLS sensors, and healthcare monitoring systems.
Edge Location with Low Latency:
Emerging smart application services are inadequate without QoS support in the proximity of devices at the edge of the core network; examples include video streaming to small TV-class devices, monitoring sensors, and live gaming applications.
Real-Time Interaction:
Real-time interaction is a common requirement of fog applications, such as monitoring a critical process at an oil rig with fog edge devices or sensors, real-time transmission for traffic monitoring systems, and electricity distribution monitoring applications. Fog applications rely on real-time processing for QoS rather than batch processing.
Large Scale Sensor Network:
Fog is applicable to large-scale sensor networks, such as environment monitoring and emerging smart grid applications, which inherently extend their monitoring systems and therefore have hierarchical computing and storage resource requirements.

Widespread Wireless Access:
In this scenario, wireless access points (WAP) and cellular mobile gateways are classic examples of fog nodes located in proximity to the end users.
Interoperable Technology:
Fog components must be able to work in an interoperating environment to guarantee support for a wide range of services, such as data streaming and real-time processing for data analysis and predictive decisions.

Benefits or Advantages of Fog computing

➨It offers better security. Fog nodes can be protected using the same procedures followed in an IT environment.
➨It processes selected data locally instead of sending it to the cloud for processing. Hence it can save network bandwidth, which leads to lower operational costs.
➨It reduces latency, and hence quick decisions can be made. This helps in avoiding accidents.
➨It offers better privacy for users' data, as the data is analyzed locally instead of being sent to the cloud. Moreover, the IT team can manage and control the devices.
➨It is easy to develop fog applications using the right tools, which can drive machines as per customer needs.
➨Fog nodes are mobile in nature. Hence they can join and leave the network at any time.
➨Fog nodes can withstand harsh environmental conditions in places such as tracks, vehicles, under sea, factory floors, etc. Moreover, they can be installed in remote locations.
➨Fog computing offers a reduction in latency as data is analyzed locally, due to shorter round-trip time and lower data bandwidth requirements.

Disadvantages of Fog computing

➨Encryption algorithms and security policies make it more difficult for arbitrary devices to exchange data. Any mistakes in the security algorithms lead to exposure of data to hackers. Other security issues are IP address spoofing, man-in-the-middle attacks, wireless network security, etc.
➨Achieving high data consistency in fog computing is challenging and requires more effort.
➨Fog computing aims to realize a global storage concept with the size of cloud storage and the speed of local storage, but data management is a challenge.
➨Trust and authentication are major concerns.
➨Scheduling is complex as tasks can be moved between client devices, fog nodes and back end
cloud servers.
➨Power consumption is higher in fog nodes compared to a centralized cloud architecture.

Cloud Computing
Cloud computing is the delivery of computing services—including servers, storage, databases,
networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster
innovation, flexible resources, and economies of scale. You typically pay only for cloud services
you use, helping lower your operating costs, run your infrastructure more efficiently and scale as
your business needs change.

Benefits of Cloud Computing


Flexibility
Users can scale services to fit their needs, customize applications and access cloud services from
anywhere with an internet connection.
Efficiency
Enterprise users can get applications to market quickly, without worrying about underlying
infrastructure costs or maintenance.
Strategic value
Cloud services give enterprises a competitive advantage by providing the most innovative
technology available.
Flexibility
 Scalability: Cloud infrastructure scales on demand to support fluctuating workloads.
 Storage options: Users can choose public, private, or hybrid storage offerings, depending
on security needs and other considerations.
 Control choices: Organizations can determine their level of control with as-a-service
options. These include software as a service (SaaS), platform as a service (PaaS), and
infrastructure as a service (IaaS).
 Tool selection: Users can select from a menu of prebuilt tools and features to build a
solution that fits their specific needs.
 Security features: Virtual private cloud, encryption, and API keys help keep data secure.

Efficiency
 Accessibility: Cloud-based applications and data are accessible from virtually any internet-
connected device.
 Speed to market: Developing in the cloud enables users to get their applications to market
quickly.
 Data security: Hardware failures do not result in data loss because of networked backups.
 Savings on equipment: Cloud computing uses remote resources, saving organizations the
cost of servers and other equipment.
 Pay structure: A “utility” pay structure means users only pay for the resources they use.
Strategic value
 Streamlined work: Cloud service providers (CSPs) manage underlying infrastructure,
enabling organizations to focus on application development and other priorities.
 Regular updates: Service providers regularly update offerings to give users the most up-to-
date technology.
 Collaboration: Worldwide access means teams can collaborate from widespread locations.
 Competitive edge: Organizations can move more nimbly than competitors who must
devote IT resources to managing infrastructure.

Types of cloud computing


 Not all clouds are the same and not one type of cloud computing is right for everyone.
Several different models, types and services have evolved to help offer the right solution for
your needs.
 First, you need to determine the type of cloud deployment, or cloud computing architecture, that your cloud services will be implemented on. There are three different ways to deploy
cloud services: on a public cloud, private cloud or hybrid cloud.

Hybrid cloud
 A hybrid cloud is a type of cloud computing that combines on-premises infrastructure—or a
private cloud—with a public cloud. Hybrid clouds allow data and apps to move between the
two environments.
 Many organisations choose a hybrid cloud approach due to business imperatives such as
meeting regulatory and data sovereignty requirements, taking full advantage of on-premises
technology investment or addressing low latency issues.

 The hybrid cloud is evolving to include edge workloads as well. Edge computing brings the
computing power of the cloud to IoT devices—closer to where the data resides. By moving
workloads to the edge, devices spend less time communicating with the cloud, reducing
latency, and they are even able to operate reliably during extended offline periods.
Advantages of the hybrid cloud
 Control—your organisation can maintain a private infrastructure for sensitive assets or
workloads that require low latency.
 Flexibility—you can take advantage of additional resources in the public cloud when you
need them.
 Cost-effectiveness—with the ability to scale to the public cloud, you pay for extra computing
power only when needed.
 Ease—transitioning to the cloud does not have to be overwhelming because you can migrate
gradually—phasing in workloads over time.
Public cloud
 Public clouds are the most common type of cloud computing deployment. The cloud
resources (like servers and storage) are owned and operated by a third-party cloud service
provider and delivered over the internet. With a public cloud, all hardware, software and
other supporting infrastructure are owned and managed by the cloud provider. Microsoft
Azure is an example of a public cloud.
 In a public cloud, you share the same hardware, storage and network devices with other
organisations or cloud “tenants,” and you access services and manage your account using a
web browser. Public cloud deployments are frequently used to provide web-based email,
online office applications, storage and testing and development environments.
Advantages of public cloud
 Lower costs—no need to purchase hardware or software and you pay only for the service
you use.
 No maintenance—your service provider provides the maintenance.
 Near-unlimited scalability—on-demand resources are available to meet your business needs.
 High reliability—a vast network of servers ensures against failure.
Private cloud
 A private cloud consists of cloud computing resources used exclusively by one business or
organisation. The private cloud can be physically located at your organisation’s on-site
datacenter or it can be hosted by a third-party service provider. But in a private cloud, the

services and infrastructure are always maintained on a private network and the hardware and
software are dedicated solely to your organisation.
 In this way, a private cloud can make it easier for an organisation to customise its resources
to meet specific IT requirements. Private clouds are often used by government agencies, financial institutions, and other mid- to large-size organisations with business-critical operations seeking enhanced control over their environment.
Advantages of a private cloud
 More flexibility—your organisation can customise its cloud environment to meet specific
business needs.
 More control—resources are not shared with others, so higher levels of control and privacy
are possible.
 More scalability—private clouds often offer more scalability compared to on-premises
infrastructure.

BIG DATA AND ANALYTICS


DATA
The quantities, characters, or symbols on which operations are performed by a computer, which may
be stored and transmitted in the form of electrical signals and recorded on magnetic, optical, or
mechanical recording media.
BIG DATA
Big data is a combination of structured, semistructured and unstructured data collected by
organizations that can be mined for information and used in machine learning projects, predictive
modeling and other advanced analytics applications.

Importance of Big data


Companies use big data in their systems to improve operations, provide better customer service,
create personalized marketing campaigns and take other actions that, ultimately, can increase
revenue and profits. Businesses that use it effectively hold a potential competitive advantage over
those that don't because they're able to make faster and more informed business decisions.
For example, big data provides valuable insights into customers that companies can use to refine
their marketing, advertising and promotions in order to increase customer engagement and
conversion rates. Both historical and real-time data can be analyzed to assess the evolving
preferences of consumers or corporate buyers, enabling businesses to become more responsive to
customer wants and needs.

Types of Big Data
Following are the types of Big Data:
1. Structured
2. Unstructured
3. Semi-structured

Structured
Any data that can be stored, accessed, and processed in a fixed format is termed 'structured' data. Over time, computer science has achieved great success in developing techniques for working with this kind of data (where the format is well known in advance) and in deriving value from it. However, we now foresee issues when the size of such data grows to a huge extent, with typical sizes in the range of multiple zettabytes.
Examples of Structured Data
An 'Employee' table in a database is an example of Structured Data

Employee_ID   Employee_Name     Gender   Department   Salary_In_lacs
2365          Rajesh Kulkarni   Male     Finance      650000
3398          Pratibha Joshi    Female   Admin        650000
7465          Shushil Roy       Male     Admin        500000
7500          Shubhojit Das     Male     Finance      500000
7699          Priya Sane        Female   Finance      550000

Unstructured
Any data with unknown form or structure is classified as unstructured data. In addition to its huge size, unstructured data poses multiple challenges in terms of processing it to derive value. A typical example of unstructured data is a heterogeneous data source containing a combination of simple text files, images, videos, etc. Nowadays, organizations have a wealth of data available to them, but unfortunately they don't know how to derive value from it since this data is in its raw, unstructured form.

Fig.2.5 : Unstructured data types
Semi-structured
Semi-structured data can contain both forms of data. It appears structured in form, but it is not actually defined with, for example, a table definition in a relational DBMS. An example of semi-structured data is data represented in an XML file.
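As a small illustration, the snippet below builds and reads an XML-encoded employee record with Python's standard library; the tags give the record structure even though no relational table schema defines it (the element names are made up for the example).

import xml.etree.ElementTree as ET

# A semi-structured record: tags carry structure, but no table schema defines it
xml_record = """
<employee id="2365">
    <name>Rajesh Kulkarni</name>
    <department>Finance</department>
    <salary currency="INR">650000</salary>
</employee>
"""

emp = ET.fromstring(xml_record)
print(emp.get("id"), emp.findtext("name"), emp.findtext("department"))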

Characteristics of Big Data


Big data can be described by the following characteristics:
 Volume
 Variety
 Velocity
 Variability
(i) Volume – The name Big Data itself is related to a size which is enormous. Size of data plays a
very crucial role in determining value out of data. Also, whether a particular data can actually be
considered as a Big Data or not, is dependent upon the volume of data. Hence, 'Volume' is one
characteristic which needs to be considered while dealing with Big Data solutions.
(ii) Variety – The next aspect of Big Data is its variety. Variety refers to heterogeneous sources and
the nature of data, both structured and unstructured. During earlier days, spreadsheets and databases
were the only sources of data considered by most of the applications. Nowadays, data in the form of
emails, photos, videos, monitoring devices, PDFs, audio, etc. are also being considered in the
analysis applications. This variety of unstructured data poses certain issues for storage, mining and
analyzing data.

(iii) Velocity – The term 'velocity' refers to the speed of generation of data. How fast the data is
generated and processed to meet the demands, determines real potential in the data.
Big Data Velocity deals with the speed at which data flows in from sources like business processes,
application logs, networks, and social media sites, sensors, Mobile devices, etc. The flow of data is
massive and continuous.
(iv) Variability – This refers to the inconsistency which can be shown by the data at times, thus
hampering the process of being able to handle and manage the data effectively.

Types of data that come under big data


 Black box data: The black boxes of aeroplanes, jets, and helicopters store microphone voices, performance information, etc.
 Social media data: Different social media websites hold information about various users.
 Stock exchange data: It holds information about the buying and selling of shares.
 Transport data: Transport data holds information about the model, capacity, distance, and many other attributes of vehicles.
 Search engine data: Different search engines retrieve data from different databases.

Advantages of Big Data Processing


The ability to process Big Data brings in multiple benefits, such as:
 Businesses can utilize outside intelligence while making decisions
Access to social data from search engines and sites like Facebook and Twitter is enabling organizations to fine-tune their business strategies.
 Improved customer service
Traditional customer feedback systems are getting replaced by new systems designed with Big Data
technologies. In these new systems, Big Data and natural language processing technologies are
being used to read and evaluate consumer responses.
 Early identification of risk to the product/services, if any
 Better operational efficiency
Big Data technologies can be used for creating a staging area or landing zone for new data before
identifying what data should be moved to the data warehouse. In addition, such integration of Big
Data technologies and data warehouse helps an organization to offload infrequently accessed data.

Different Types of Big Data Analytics
Here are the four types of Big Data analytics:
1. Descriptive Analytics
This summarizes past data into a form that people can easily read. This helps in creating reports, like
a company’s revenue, profit, sales, and so on. Also, it helps in the tabulation of social media metrics.
Use Case: The Dow Chemical Company analyzed its past data to increase facility utilization across
its office and lab space. Using descriptive analytics, Dow was able to identify underutilized space.
This space consolidation helped the company save nearly US $4 million annually.
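In practice, descriptive analytics often amounts to aggregating and summarizing historical records. A minimal sketch with the pandas library is shown below; the columns and figures are invented for the example.

import pandas as pd   # third-party package

# Invented monthly sales records
sales = pd.DataFrame({
    "month":   ["Jan", "Jan", "Feb", "Feb"],
    "region":  ["North", "South", "North", "South"],
    "revenue": [120000, 95000, 134000, 101000],
})

# Summarize the past: total and average revenue per region
summary = sales.groupby("region")["revenue"].agg(["sum", "mean"])
print(summary)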
2. Diagnostic Analytics
This is done to understand what caused a problem in the first place. Techniques like drill-down, data
mining, and data recovery are all examples. Organizations use diagnostic analytics because they
provide an in-depth insight into a particular problem.
Use Case: An e-commerce company’s report shows that their sales have gone down, although
customers are adding products to their carts. This can be due to various reasons like the form didn’t
load correctly, the shipping fee is too high, or there are not enough payment options available. This
is where you can use diagnostic analytics to find the reason.
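A hedged sketch of the drill-down idea for the cart-abandonment example above; the event names and amounts are made up, and pandas is again assumed:

import pandas as pd

# Hypothetical abandoned-cart events with the reason recorded at checkout
events = pd.DataFrame({
    "abandon_reason": ["form_error", "high_shipping_fee", "payment_option_missing",
                       "high_shipping_fee", "form_error", "high_shipping_fee"],
    "cart_value":     [45.0, 120.0, 60.0, 210.0, 30.0, 95.0],
})

# Drill down: how often does each reason occur, and how much revenue is at stake?
diagnosis = events.groupby("abandon_reason").agg(
    occurrences=("abandon_reason", "size"),
    revenue_at_risk=("cart_value", "sum"),
).sort_values("revenue_at_risk", ascending=False)
print(diagnosis)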
3. Predictive Analytics
This type of analytics looks into the historical and present data to make predictions of the future.
Predictive analytics uses data mining, AI, and machine learning to analyze current data and make
predictions about the future. It works on predicting customer trends, market trends, and so on.
Use Case: PayPal determines what kind of precautions they have to take to protect their clients
against fraudulent transactions. Using predictive analytics, the company uses all the historical
payment data and user behavior data and builds an algorithm that predicts fraudulent activities.
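The following is only a toy sketch of the predictive idea, not PayPal's actual system; the features, labels and model choice (scikit-learn's random forest) are assumptions made purely for illustration:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic historical payments: [amount, hour_of_day, is_new_device]
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.exponential(50, 1000),   # transaction amount
    rng.integers(0, 24, 1000),   # hour of day
    rng.integers(0, 2, 1000),    # new-device flag
])
# Toy labelling rule: large amounts on new devices are treated as fraudulent
y = ((X[:, 0] > 150) & (X[:, 2] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

A real fraud model would of course be trained on genuine payment and behaviour data and evaluated with far more care than a single accuracy score.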
4. Prescriptive Analytics
This type of analytics prescribes the solution to a particular problem. Prescriptive analytics works
with both descriptive and predictive analytics. Most of the time, it relies on AI and machine
learning.
Use Case: Prescriptive analytics can be used to maximize an airline’s profit. This type of analytics is
used to build an algorithm that will automatically adjust the flight fares based on numerous factors,
including customer demand, weather, destination, holiday seasons, and oil prices.
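A rule-based sketch of the airline-fare idea above; real prescriptive systems combine demand forecasts with optimization, and every number here is invented:

def recommend_fare(base_fare, predicted_demand, seats_left, days_to_departure):
    """Prescribe a fare from a predicted demand level (toy heuristic)."""
    fare = base_fare
    if predicted_demand > 0.8:      # strong forecast demand -> raise the price
        fare *= 1.25
    elif predicted_demand < 0.4:    # weak demand -> stimulate sales
        fare *= 0.9
    if seats_left < 10:             # scarcity premium
        fare *= 1.15
    if days_to_departure < 3:       # last-minute pricing
        fare *= 1.10
    return round(fare, 2)

print(recommend_fare(base_fare=200.0, predicted_demand=0.85,
                     seats_left=8, days_to_departure=2))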

Big Data Analytics Tools


Here are some of the key big data analytics tools :
 Hadoop - helps in storing and analyzing data
 MongoDB - used on datasets that change frequently
 Talend - used for data integration and management
 Cassandra - a distributed database used to handle chunks of data
 Spark - used for real-time processing and analyzing large amounts of data (see the sketch after this list)
 STORM - an open-source real-time computational system
 Kafka - a distributed streaming platform that is used for fault-tolerant storage
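As a small example of the Spark entry above (assuming PySpark is installed; the file path and column names are placeholders, not a prescribed schema):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("SalesRollup").getOrCreate()

# Placeholder path and schema; any large CSV of (region, amount) rows would do
sales = spark.read.csv("sales.csv", header=True, inferSchema=True)

# Distributed aggregation: total and average sale amount per region
rollup = (sales.groupBy("region")
               .agg(F.sum("amount").alias("total"),
                    F.avg("amount").alias("average")))
rollup.show()
spark.stop()

The same groupBy/agg pipeline scales from a laptop to a cluster, which is one reason Spark appears so often in Big Data toolchains.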

Big Data Industry Applications


Here are some of the sectors where Big Data is actively used:
 Ecommerce - Predicting customer trends and optimizing prices are a few of the ways e-
commerce uses Big Data analytics
 Marketing - Big Data analytics helps to drive high ROI marketing campaigns, which result in
improved sales
 Education - Used to develop new and improve existing courses based on market
requirements
 Healthcare - With the help of a patient’s medical history, Big Data analytics is used to
predict how likely they are to have health issues
 Media and entertainment - Used to understand the demand of shows, movies, songs, and
more to deliver a personalized recommendation list to its users
 Banking - Customer income and spending patterns help to predict the likelihood of choosing
various banking offers, like loans and credit cards
 Telecommunications - Used to forecast network capacity and improve customer experience
 Government - Big Data analytics helps governments in law enforcement, among other things

Challenges In Big Data Analytics


 Uncertainty of Data Management Landscape: Because big data is continuously expanding,
new companies and technologies are being developed every day. A big
challenge for companies is to find out which technology works best for them without
introducing new risks and problems.
 The Big Data Talent Gap: While Big Data is a growing field, there are very few experts
available in it. This is because Big Data is a complex field, and people who understand
its complexity and intricate nature are few and far between. This talent gap is therefore
another major challenge in the industry.
 Getting data into the big data platform: Data is increasing every single day. This means that
companies have to tackle a limitless amount of data on a regular basis. The scale and variety
of data that is available today can overwhelm any data practitioner and that is why it is
important to make data accessibility simple and convenient for brand managers and owners.
 Need for synchronization across data sources: As data sets become more diverse, there is a
need to incorporate them into an analytical platform. If this is ignored, it can create gaps and
lead to wrong insights and messages.
 Getting important insights through the use of Big Data analytics: Companies must gain proper
insights from Big Data analytics, and the correct department must have access to this
information. A major challenge in Big Data analytics is bridging this gap in an effective
fashion.

M2M LEARNING AND ARTIFICIAL INTELLIGENCE


M2M Learning
Machine-to-machine, or M2M, is a broad label that can be used to describe any technology that
enables networked devices to exchange information and perform actions without the manual
assistance of humans. Artificial intelligence (AI) and machine learning (ML) facilitate the
communication between systems, allowing them to make their own autonomous choices. M2M
technology was first adopted in manufacturing and industrial settings, where other technologies,
such as SCADA and remote monitoring, helped remotely manage and control data from equipment.
M2M has since found applications in other sectors, such as healthcare, business and insurance.
M2M is also the foundation for the internet of things (IoT).

Working of M2M
The main purpose of machine-to-machine technology is to tap into sensor data and transmit it to a
network. Unlike SCADA or other remote monitoring tools, M2M systems often use public networks
and access methods -- for example, cellular or Ethernet -- to make it more cost-effective.
The main components of an M2M system include sensors, RFID, a Wi-Fi or cellular
communications link, and autonomic computing software programmed to help a network device
interpret data and make decisions. These M2M applications translate the data, which can trigger
preprogrammed, automated actions.
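A minimal sketch of that sensor-to-action flow is given below; the sensor read and the uplink are stubbed out with plain Python, and no particular M2M protocol or vendor API is assumed:

import random
import time

TEMP_LIMIT_C = 85.0  # hypothetical threshold for a monitored machine

def read_sensor():
    """Stub for a field sensor; a real node would sample hardware or an RFID reader."""
    return 70.0 + random.random() * 25.0

def send_to_network(reading):
    """Stub for the cellular/Ethernet uplink to the M2M application."""
    print(f"uplink: temperature={reading:.1f} C")

def trigger_action(reading):
    """Preprogrammed, automated action taken without human intervention."""
    print(f"ALERT: {reading:.1f} C exceeds {TEMP_LIMIT_C} C - throttling machine")

for _ in range(5):
    value = read_sensor()
    send_to_network(value)
    if value > TEMP_LIMIT_C:
        trigger_action(value)
    time.sleep(1)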
One of the most well-known types of machine-to-machine communication is telemetry, which has
been used since the early part of the last century to transmit operational data. Pioneers in telemetry
first used telephone lines, and later radio waves, to transmit performance measurements gathered
from monitoring instruments in remote locations.

The Internet and improved standards for wireless technology have expanded the role of telemetry
from pure science, engineering and manufacturing to everyday use in products such as heating units,
electric meters and internet-connected devices, such as appliances.
Beyond being able to remotely monitor equipment and systems, the top benefits of M2M include:
 reduced costs by minimizing equipment maintenance and downtime;
 boosted revenue by revealing new business opportunities for servicing products in the field;
and
 improved customer service by proactively monitoring and servicing equipment before it fails
or only when it is needed.

M2M applications and examples


Machine-to-machine communication is often used for remote monitoring. In product restocking, for
example, a vending machine can message the distributor's network, or machine, when a particular
item is running low so that a refill can be sent. As an enabler of asset tracking and monitoring, M2M is vital in
warehouse management systems (WMS) and supply chain management (SCM).
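A hedged sketch of the vending-machine case, with the restock request represented as a plain JSON message; the machine ID, item name and reorder level are invented:

import json

REORDER_LEVEL = 3  # hypothetical threshold below which a refill is requested

def build_restock_message(machine_id, item, quantity_left):
    """Build the message a vending machine could send to the distributor's system."""
    if quantity_left >= REORDER_LEVEL:
        return None  # stock is fine, nothing to send
    return json.dumps({
        "machine_id": machine_id,
        "item": item,
        "quantity_left": quantity_left,
        "action": "refill_requested",
    })

msg = build_restock_message("VM-042", "sparkling water", 1)
if msg:
    print(msg)  # in a real deployment this would travel over a cellular or Ethernet link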
Utility companies often rely on M2M devices and applications not only to harvest energy, such as
oil and gas, but also to bill customers -- through the use of smart meters -- and to detect worksite
factors, such as pressure, temperature and equipment status.

Fig. 2.6: M2M applications

In telemedicine, M2M devices can enable the real-time monitoring of patients' vital statistics,
dispensing medicine when required or tracking healthcare assets.
The combination of the IoT, AI and ML is transforming and improving mobile payment processes
and creating new opportunities for different purchasing behaviors. Digital wallets, such as Google
Wallet and Apple Pay, will most likely contribute to the widespread adoption of M2M financial
activities.
Smart home systems have also incorporated M2M technology. The use of M2M in these embedded
systems enables home appliances and other technologies to exercise real-time control of operations as
well as the ability to communicate remotely.
M2M is also an important aspect of remote-control software, robotics, traffic control, security,
logistics and fleet management, and the automotive industry.
Key features of M2M
Key features of M2M technology include:
 Low power consumption, in an effort to improve the system's ability to effectively service
M2M applications.
 A network operator that provides packet-switched service.
 Monitoring abilities that provide functionality to detect events.
 Time tolerance, meaning data transfers can be delayed.
 Time control, meaning data can only be sent or received at specific predetermined periods
(illustrated in the sketch after this list).
 Location-specific triggers that alert or wake up devices when they enter particular areas.
 The ability to continually send and receive small amounts of data.
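A minimal sketch of two of these features, time-controlled transmission windows and a location-specific wake-up trigger; the window hours and the geofence coordinates are invented for illustration:

from datetime import datetime
from math import hypot

SEND_WINDOW_HOURS = range(1, 5)     # device may transmit only between 01:00 and 04:59
GEOFENCE_CENTER = (12.99, 80.24)    # hypothetical latitude/longitude of a depot
GEOFENCE_RADIUS_DEG = 0.01          # crude radius in degrees, illustration only

def in_send_window(now=None):
    """Time control: data can only be sent during predetermined periods."""
    hour = (now or datetime.now()).hour
    return hour in SEND_WINDOW_HOURS

def inside_geofence(lat, lon):
    """Location-specific trigger: wake the device when it enters the area."""
    distance = hypot(lat - GEOFENCE_CENTER[0], lon - GEOFENCE_CENTER[1])
    return distance <= GEOFENCE_RADIUS_DEG

if in_send_window():
    print("transmit queued readings")
if inside_geofence(12.991, 80.241):
    print("wake-up: device entered the depot geofence")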

M2M requirements
According to the European Telecommunications Standards Institute (ETSI), requirements of an
M2M system include:
 Scalability - The M2M system should be able to continue to function efficiently as more
connected objects are added.
 Anonymity - The M2M system must be able to hide the identity of an M2M device when
requested, subject to regulatory requirements.
 Logging - M2M systems must support the recording of important events, such as failed
installation attempts, service not operating or the occurrence of faulty information.
The logs should be available by request.

 M2M application communication principles - M2M systems should enable communication
between M2M applications in the network and the M2M device or gateway using
communication techniques such as short message service (SMS) and IP. Connected devices
should also be able to communicate with each other in a peer-to-peer (P2P) manner.
 Delivery methods - The M2M system should support unicast,
anycast, multicast and broadcast communication modes, with broadcast being replaced by
multicast or anycast whenever possible to minimize the load on the communication network.
 Message transmission scheduling - M2M systems must be able to control network access and
messaging schedules and should be conscious of M2M applications' scheduling delay
tolerance.
 Message communication path selection - Optimization of the message communication paths
within an M2M system must be possible and based on policies such as transmission failures,
delays when other paths exist, and network costs (a toy path-selection sketch follows this list).
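A toy sketch of policy-based path selection as described above; the paths, costs and failure counts are invented, and a production system would weigh many more policies:

# Candidate communication paths with illustrative policy attributes
paths = [
    {"name": "ethernet",  "available": False, "recent_failures": 0, "cost": 1.0,  "delay_ms": 10},
    {"name": "cellular",  "available": True,  "recent_failures": 0, "cost": 5.0,  "delay_ms": 120},
    {"name": "satellite", "available": True,  "recent_failures": 1, "cost": 20.0, "delay_ms": 600},
]

def select_path(candidates, max_failures=2):
    """Pick the cheapest available path that has not failed too often recently."""
    usable = [p for p in candidates
              if p["available"] and p["recent_failures"] <= max_failures]
    if not usable:
        return None
    return min(usable, key=lambda p: (p["cost"], p["delay_ms"]))

best = select_path(paths)
print("selected path:", best["name"] if best else "none available")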

Artificial Intelligence
The intelligence demonstrated by machines is known as Artificial Intelligence. Artificial
Intelligence has grown to be very popular in today's world. It is the simulation of natural
intelligence in machines that are programmed to learn and mimic the actions of humans. These
machines are able to learn with experience and perform human-like tasks. As technologies such
as AI continue to grow, they will have a great impact on our quality of life. It is only natural that
everyone today wants to engage with AI technology in some way, whether as an end user or by
pursuing a career in Artificial Intelligence.

Working of Artificial Intelligence (AI)


Building an AI system is a careful process of reverse-engineering human traits and capabilities in a
machine, and using its computational prowess to surpass what we are capable of.
To understand how Artificial Intelligence actually works, one needs to dive into the various
sub-domains of Artificial Intelligence and understand how those domains can be applied to the
various fields of the industry.
 Machine Learning: ML teaches a machine how to make inferences and decisions based on
past experience. It identifies patterns and analyses past data to infer the meaning of these data
points and reach a possible conclusion without involving human experience. This
automation of reaching conclusions by evaluating data saves businesses time and
helps them make better decisions (a minimal sketch appears after this list).
 Deep Learning: Deep Learning is an ML technique. It teaches a machine to process inputs
through layers in order to classify, infer and predict the outcome.
 Neural Networks: Neural Networks work on principles similar to those of human neural
cells. They are a series of algorithms that capture the relationships between various
underlying variables and process the data as a human brain does.
 Natural Language Processing: NLP is the science of reading, understanding and interpreting a
language by a machine. Once a machine understands what the user intends to communicate,
it responds accordingly.
 Computer Vision: Computer vision algorithms try to understand an image by breaking
it down and studying different parts of the objects. This helps the machine classify
and learn from a set of images, so as to make a better output decision based on previous
observations.
 Cognitive Computing: Cognitive computing algorithms try to mimic a human brain by
analyzing text, speech, images and objects in the manner a human does, and attempt to give the
desired output.
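As a small, hedged illustration of the Machine Learning point above, the sketch below learns a decision rule from past examples and then makes decisions on unseen data; the data set (scikit-learn's bundled iris measurements) and the model choice are assumptions made only for illustration:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# "Past experience": labelled measurements bundled with scikit-learn
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The model infers a decision rule from patterns in the training data
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# It can now decide on samples it has never seen, without human involvement
print("accuracy on unseen samples:", round(model.score(X_test, y_test), 3))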

Advantages of Artificial Intelligence


There is no doubt that technology has made our lives better. From music
recommendations, map directions and mobile banking to fraud prevention, AI and other
technologies have taken over. There is a fine line between advancement and destruction, and there
are always two sides to a coin; that is the case with AI as well. Let us take a look at some
advantages of Artificial Intelligence:
 Reduction in human error
 Available 24×7
 Helps in repetitive work
 Digital assistance
 Faster decisions
 Rational Decision Maker
 Medical applications
 Improves Security
 Efficient Communication

Top Used Applications in Artificial Intelligence
1. Google’s AI-powered predictions (E.g.: Google Maps)
2. Ride-sharing applications (E.g.: Uber, Lyft)
3. AI Autopilot in Commercial Flights
4. Spam filters on E-mails
5. Plagiarism checkers and tools
6. Facial Recognition
7. Search recommendations
8. Voice-to-text features
9. Smart personal assistants (E.g.: Siri, Alexa)
10. Fraud protection and prevention.

Text/Reference Books

1. S. Misra, A. Mukherjee, and A. Roy, Introduction to IoT, Cambridge University Press, 2020.
2. S. Misra, C. Roy, and A. Mukherjee, Introduction to Industrial Internet of Things and Industry 4.0,
CRC Press, 2020.
3. Dr. Guillaume Girardin, Antoine Bonnabel, and Dr. Eric Mounier, 'Technologies & Sensors for the
Internet of Things: Businesses & Market Trends 2014-2024', Yole Development, 2014.
4. Peter Waher, 'Learning Internet of Things', Packt Publishing, 2015.

Question Bank

PART-A
1. Define Big Data.
2. Distinguish between Structured and unstructured data.
3. Latency is low in fog computing, analyze the reasons.
4. Identify the domains where fog computing is used.
5. Produce an example of Moore’s Law.
6. Summarize how MEMS sensors are manufactured.
7. Distinguish between Artificial Intelligence and Machine Learning.

PART-B
1. Explain in detail how a wireless router works, with a neat architectural sketch.
2. Design a Virtual Machine to manage the control room of COVID disaster management with your
own specifications.
3. Describe in detail any 3 applications of AI in Industry 4.0 with their advantages and
disadvantages.
4. Design an IoT system to save energy and visualize data using a machine, and implement algorithms
to tackle problems in the industry.
5. Demonstrate and explain how Chennai can be converted into a smart city with the applications of
IoT in smart cities.
6. Discuss in detail the working of Mobile IP.

