TECHNICAL AND BUSINESS INNOVATORS OF INDUSTRIAL INTERNET
Miniaturization – Cyber Physical Systems – Wireless technology – IP Mobility – Network
Functionality Virtualization – Cloud and Fog - Big Data and Analytics – M2M Learning and
Artificial Intelligence.
MINIATURIZATION
In the world of Internet of Things (IoT), miniaturization is enabling new applications in the form of
wearables, vehicles and transportation, disposable tracking tech for pharmaceuticals and produce,
and more uses than we can count for smart city and smart home use.
In this digital era, as we wirelessly connect more and more devices to the Internet, researchers and
engineers face several challenges: how to package a radio transmitter into existing device real
estate, how to make increasingly smaller devices, and how to reduce the board area needed for
mounting chips. They are also striving to meet consumer demand for Internet of Things (IoT)
products that are ergonomically easy to use.
Ideally, engineers would prefer IoT components that are smaller, have better RF performance, and
are reasonably priced. However, these characteristics do not usually converge in IoT component
offerings, and that presents a challenge for solution providers.
Fortunately, the size of a silicon die has been getting smaller and smaller over the years as the
industry adopts new silicon manufacturing processes. The industry has been solving the space issue
for IoT implementations by combining the MCU and RF frontend into system-on-chip (SoC)
configurations.
The demand for embedded SIM (eSIM) is steadily rising among smartphone manufacturers, laptop
manufacturers, and energy and utility companies. OEMs across the globe are focusing on the
development and integration of eSIM in numerous applications.
The increasing demand for miniaturization of IoT components across various industries is also
boosting the demand for eSIM globally.
In 2018, researchers from the Green IC group at the National University of Singapore (NUS), in
collaboration with associate professor Paolo Crovetti from the Polytechnic University of Turin in
Italy, created a wake-up timer that triggers sensors to perform their tasks only when required. The
timer is believed to be so efficient that it runs on an on-chip solar cell with a diameter close to that
of a human hair, a major step in low-power IoT miniaturization.
The wake-up timer can continue operations even when a battery is not available and with very little
ambient power, as demonstrated by a miniaturized on-chip solar cell exposed to moonlight. An on-
chip capacitor used for slow and infrequent wake-up also helps reduce the device’s silicon
manufacturing cost, thanks to its small footprint of 49 microns on each side.
IoT sensor nodes are individual miniaturized systems containing one or more sensors, as well as
circuits for data processing, wireless communication, and power management. To keep power
consumption low, they are kept in sleep mode most of the time, and wake-up timers are used to
trigger the sensors to carry out a task. Because the wake-up timers themselves remain on at all
times, they set the minimum power consumption of IoT sensor nodes. They also play a fundamental
role in reducing the average power consumption of systems-on-chip.
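To make the duty-cycling idea concrete, the toy calculation below estimates the average current draw of a node that sleeps most of the time and is woken periodically by a timer. It is a minimal sketch in Python; the current and timing figures are illustrative assumptions, not datasheet values.

    # Hypothetical duty-cycling estimate; all numbers are assumptions.
    def average_current_ua(sleep_ua, active_ua, active_ms, period_ms):
        """Average current (uA) for a node active for active_ms out of
        every period_ms and asleep (timer still running) the rest."""
        duty = active_ms / period_ms
        return duty * active_ua + (1 - duty) * sleep_ua

    # Example: wake for 5 ms every 10 s; 8 mA when active, 1.5 uA asleep.
    avg = average_current_ua(sleep_ua=1.5, active_ua=8000, active_ms=5, period_ms=10_000)
    print(f"Average draw: {avg:.2f} uA")  # ~5.5 uA, dominated by sleep/timer current

As the numbers suggest, once the active burst is short enough, the always-on wake-up timer and sleep current set the floor on total consumption, which is exactly why ultra-low-power timers matter.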
When designing a hardware module, one of the pressing questions concerns the antenna. Developers
must work out the space reserved for the antenna and the type of antenna they will use to integrate
with a corresponding module. PCB trace antennas are generally preferred because of their low bill
of materials (BoM) cost, but they require significant board area, which can make devices large and
difficult to work with.
The smaller the size we try to achieve, the lower the RF efficiency we can expect. Chip antennas
are popular across various applications because they simplify design effort and minimize space
consumption.
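The size pressure is easy to quantify: a quarter-wave antenna's length is fixed by the operating frequency. The short Python sketch below computes free-space quarter-wavelengths (real PCB traces are somewhat shorter because of substrate effects); the chosen frequencies are just common ISM-band examples.

    SPEED_OF_LIGHT = 3.0e8  # m/s

    def quarter_wave_mm(freq_hz):
        # quarter of the free-space wavelength, in millimetres
        return SPEED_OF_LIGHT / freq_hz / 4 * 1000

    for f in (915e6, 2.4e9):  # sub-GHz and 2.4 GHz ISM bands
        print(f"{f/1e9:.3f} GHz -> ~{quarter_wave_mm(f):.1f} mm")
    # 0.915 GHz -> ~82 mm; 2.4 GHz -> ~31 mm: a large share of a small IoT board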
According to statistics from Bluegiga, only about 10 percent of evaluated designs deploy an
external antenna, while 90 percent of customers choose modules with a built-in chip antenna.
Hence, it becomes necessary to continuously evaluate the possibility of space reduction on the
chipboard, something Cloud of Things has successfully achieved with its DeviceTone Genie
product line, working with partners including Nordic Semiconductor and AES with their minIot
devices.
Importance of Miniaturization
Miniaturization has produced sleeker computers and phones that take up less space and generate
less waste in the manufacturing and assembly processes; smaller technology is also more stylish.
Miniaturization in form factor chipsets and modules has contributed to cost-effective, faster-running,
and more powerful computer components.
Miniaturization in MEMS Sensors
Fig.2.1 Miniaturization
Micromachining has become a key technology for the miniaturization of sensors. Being able to
reduce the size of the sensing element by using standard semiconductor manufacturing technology
allows a dramatic reduction in size. Integrating the signal processing alongside the sensing element
further enhances the opportunities to reduce the size of the system, eliminating the need for extra
pins to link to external devices.
The choice of micromachining process technology can also determine the limits of miniaturization,
though this is often dictated by the sensor type. Piezoelectric micromachined elements for pressure
sensing, for example, have less opportunity to scale than a diaphragm built from CMOS silicon on
the surface of a substrate, but can deliver higher performance.
Limits Of Miniaturization
Miniaturized equipment is frequently not as easy to maintain and therefore typically
does not receive the same routine maintenance and care that larger equipment receives.
This can lead to increased overall costs as a result of disposal and the overheads required to keep
additional equipment on hand.
CYBER PHYSICAL SYSTEMS
CPS Characteristics
• CPS are physical and engineered systems whose operations are monitored, coordinated, controlled,
and integrated.
• This intimate coupling between the cyber and physical is what differentiates CPS from other
fields.
Some hallmark characteristics:
• Cyber capability in every physical component
• Networked at multiple and extreme scales
• Complex at multiple temporal and spatial scales
• Constituent elements are coupled logically and physically
• Dynamically reorganizing/reconfiguring open system.
• High degrees of automation, control loops closed at many scales
• Unconventional computational & physical substrates (such as bio, nano, chem, ...)
• Operation must be dependable, certified in some cases.
Examples of Cyber Physical System
Common applications of CPS typically fall under sensor-based communication-enabled autonomous
systems. For example, many wireless sensor networks monitor some aspect of the environment and
relay the processed information to a central node. Other types of CPS include smart
grid, autonomous automotive systems, medical monitoring, process control systems, distributed
robotics, and automatic pilot avionics.
A real-world example of such a system is the Distributed Robot Garden at MIT in which a team of
robots tend a garden of tomato plants. This system combines distributed sensing (each plant is
equipped with a sensor node monitoring its status), navigation, manipulation and wireless
networking.
A focus on the control system aspects of CPS that pervade critical infrastructure can be found in the
efforts of the Idaho National Laboratory and collaborators researching resilient control systems. This
effort takes a holistic approach to next generation design, and considers the resilience aspects that
are not well quantified, such as cyber security,[18] human interaction and complex interdependencies.
Another example is MIT's ongoing CarTel project, where a fleet of taxis works by collecting real-time
traffic information in the Boston area. Together with historical data, this information is then used for
calculating fastest routes for a given time of the day.
CPS are also used in electric grids to perform advanced control, especially in the smart grid context
to enhance the integration of distributed renewable generation. Special remedial action schemes are
needed to limit current flows in the grid when wind farm generation is too high. Distributed CPS
are a key solution for this type of issue.
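As a rough illustration of such a remedial action scheme, the sketch below curtails wind output when the monitored line current exceeds its limit. The threshold, the linearity assumption, and the numbers are all hypothetical; real schemes coordinate many distributed controllers under strict timing guarantees.

    LINE_LIMIT_A = 400.0  # assumed thermal limit of the monitored line (amperes)

    def remedial_action(line_current_a, wind_output_mw):
        """Return the wind output allowed so the line limit is respected."""
        if line_current_a <= LINE_LIMIT_A:
            return wind_output_mw  # no action needed
        # assume line current scales roughly linearly with wind injection
        return wind_output_mw * (LINE_LIMIT_A / line_current_a)

    print(remedial_action(line_current_a=480.0, wind_output_mw=120.0))  # -> 100.0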
In the industrial domain, cyber-physical systems empowered by cloud technologies have led to novel
approaches that paved the path to Industry 4.0, as the European Commission's IMC-AESOP project,
with partners such as Schneider Electric, SAP, Honeywell and Microsoft, demonstrated.
WIRELESS TECHNOLOGY
The Internet of Things (IoT) starts with connectivity, but since IoT is a widely diverse and
multifaceted realm, you certainly cannot find a one-size-fits-all communication solution.
Continuing the discussion on mesh and star topologies, this section walks through the six most
common types of IoT wireless technologies.
Each solution has its strengths and weaknesses in various network criteria and is therefore
best-suited for different IoT use cases.
2. Cellular (3G/4G/5G)
Well-established in the consumer mobile market, cellular networks offer reliable broadband
communication supporting various voice calls and video streaming applications. On the
downside, they impose very high operational costs and power requirements.
While cellular networks are not viable for the majority of IoT applications powered by battery-
operated sensor networks, they fit well in specific use cases such as connected cars or fleet
management in transportation and logistics. For example, in-car infotainment, traffic
routing, advanced driver assistance systems (ADAS) alongside fleet telematics and tracking
services can all rely on the ubiquitous and high bandwidth cellular connectivity.
Cellular next-gen 5G with high-speed mobility support and ultra-low latency is positioned to be
the future of autonomous vehicles and augmented reality. 5G is also expected to enable real-
time video surveillance for public safety, real-time mobile delivery of medical data sets
for connected health, and several time-sensitive industrial automation applications in the
future.
4. Bluetooth and BLE
Defined in the category of Wireless Personal Area Networks, Bluetooth is a short-range
communication technology well-positioned in the consumer marketplace. Bluetooth Classic
was originally intended for point-to-point or point-to-multipoint (up to seven slave nodes) data
exchange among consumer devices. Optimized for power consumption, Bluetooth Low-Energy
was later introduced to address small-scale Consumer IoT applications.
BLE-enabled devices are mostly used in conjunction with electronic devices, typically
smartphones that serve as a hub for transferring data to the cloud. Nowadays, BLE is widely
integrated into fitness and medical wearables (e.g. smartwatches, glucose meters, pulse
oximeters, etc.) as well as Smart Home devices (e.g. door locks) – whereby data is
conveniently communicated to and visualized on smartphones.
The release of the Bluetooth Mesh specification in 2017 aims to enable a more scalable deployment
of BLE devices, particularly in retail contexts. Providing versatile indoor localization features,
BLE beacon networks have been used to unlock new service innovations like in-store
navigation, personalized promotions, and content delivery.
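A common building block behind such BLE localization features is estimating distance from received signal strength. The sketch below uses the standard log-distance path-loss model; the calibrated 1-metre RSSI (tx_power_dbm) and the environment exponent n are assumptions that must be tuned per deployment.

    def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, n=2.0):
        """Distance estimate from RSSI; tx_power_dbm is the RSSI measured at 1 m."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

    print(f"{estimate_distance_m(-59):.1f} m")  # ~1.0 m at the calibration point
    print(f"{estimate_distance_m(-75):.1f} m")  # ~6.3 m; noisier in real buildings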
5. Wi-Fi
There is virtually no need to explain Wi-Fi, given its critical role in providing high-throughput
data transfer for both enterprise and home environments. However, in the IoT space, its major
limitations in coverage, scalability and power consumption make the technology much less
prevalent.
Imposing high energy requirements, Wi-Fi is often not a feasible solution for large networks of
battery-operated IoT sensors, especially in industrial IoT and smart building scenarios. Instead,
it is better suited to connecting devices that can be conveniently plugged into a power outlet,
like smart home gadgets and appliances, digital signage or security cameras.
Wi-Fi 6 – the newest Wi-Fi generation – brings in greatly enhanced network bandwidth (up to
9.6 Gbps) to improve data throughput per user in congested environments. With this, the
standard is poised to level up public Wi-Fi infrastructure and transform customer experience
with new digital mobile services in retail and mass entertainment sectors. Also, in-car networks
for infotainment and on-board diagnostics are expected to be the most game-changing use case
for Wi-Fi 6. Yet, the development will likely take some more time.
6. RFID
Radio Frequency Identification (RFID) uses radio waves to transmit small amounts of data
from an RFID tag to a reader within a very short distance. To date, the technology has
facilitated a major revolution in retail and logistics.
By attaching an RFID tag to all sorts of products and equipment, businesses can track their
inventory and assets in real-time – allowing for better stock and production planning as well as
optimized supply chain management. Alongside increasing IoT adoption, RFID continues to be
entrenched in the retail sector, enabling new IoT applications like smart shelves, self-checkout,
and smart mirrors.
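To illustrate the inventory use case, the sketch below turns raw RFID read events into per-location stock counts. The tag IDs and reader locations are invented for the example; a tag read several times at one reader is counted once.

    from collections import Counter

    def inventory_by_location(read_events):
        """read_events: iterable of (tag_id, reader_location) pairs."""
        seen = {(tag, loc) for tag, loc in read_events}  # de-duplicate repeat reads
        return Counter(loc for _, loc in seen)

    reads = [("TAG001", "shelf-A"), ("TAG001", "shelf-A"),  # duplicate read
             ("TAG002", "shelf-A"), ("TAG003", "checkout")]
    print(inventory_by_location(reads))  # Counter({'shelf-A': 2, 'checkout': 1})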
IP MOBILITY
The increasing use of virtualization in the data center has enabled an unprecedented degree of
flexibility in managing servers and workloads. One important aspect of this newfound flexibility is
mobility. As workloads are hosted on virtual servers, they are decoupled from the physical
infrastructure and become mobile by definition. As end-points become detached from the physical
infrastructure and are mobile, the routing infrastructure is challenged to evolve from a topology
centric addressing model to a more flexible architecture. This new architecture is capable of
allowing IP addresses to freely and efficiently move across the infrastructure. There are several
ways of adding mobility to the IP infrastructure, and each of them addresses the problem with
different degrees of effectiveness. LISP Host Mobility is poised to provide a solution for workload
mobility with optimal effectiveness. This document describes the LISP Host Mobility solution,
contrasts it with other IP mobility options, and provides specific guidance for deploying and
configuring the LISP Host mobility solution.
IP Mobility Requirements
The requirements for an IP mobility solution can be generalized to a few key aspects. To make a fair
comparison of existing solutions and to clearly understand the added benefit of the LISP Host
Mobility solution, consider the different functional aspects that must be addressed in an IP mobility
solution:
• Redirection
The ultimate goal of IP mobility is to steer traffic to the valid location of the end-point. This aspect
is generally addressed by providing some sort of re-direction mechanism to enhance the traffic
steering already provided by basic routing. Redirection can be achieved by replacing the destination
address with a surrogate address that is representative of the new location of the end-point. Different
techniques will allow the redirection of traffic either by replacing the destination's address altogether
or by leveraging a level of indirection in the addressing such as that achieved with tunnels and
encapsulations. The different approaches impact applications to different degrees. The ultimate goal
of IP mobility is to provide a solution that is totally transparent to the applications and allows for the
preservation of established sessions, as end-points move around the IP infrastructure.
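The level-of-indirection idea can be shown with a toy map-and-encapsulate sketch: a mapping system resolves a stable endpoint identifier (EID) to its current locator (RLOC), and moving a workload only updates the mapping. The addresses and names below are illustrative, not a real LISP implementation.

    mapping_system = {"10.1.1.5": "dc1-border.example.net"}  # EID -> RLOC

    def send_packet(eid, payload):
        rloc = mapping_system[eid]  # look up the current location
        return {"outer_dst": rloc, "inner_dst": eid, "data": payload}  # encapsulate

    print(send_packet("10.1.1.5", "hello"))
    mapping_system["10.1.1.5"] = "dc2-border.example.net"  # workload moved
    print(send_packet("10.1.1.5", "hello"))  # same EID, new location, session intact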
• Scalability
Most techniques create a significant amount of granular state to re-direct traffic effectively. The
state is necessary to correlate destination IP addresses to specific locations, either by means of
mapping or translation. This additional state must be handled in a very efficient manner to attain a
solution that can support a deployable scale at a reasonable cost in terms of memory and processing.
• Optimized Routing
As end-points move around, it is key that traffic is routed to these end-points following the best
possible path. Since mobility is based largely on re-direction of traffic, the ability to provide an
optimal path is largely a function of the location of the re-directing element. Depending on the
architecture, the solution may generate sub-optimal traffic patterns often referred to as traffic
triangulation or hair-pinning in an attempt to describe the unnecessary detour traffic needs to take
when the destination is mobile. A good mobility solution is one that can provide optimized paths
regardless of the location of the end-point.
The main caveats with this approach include:
• Rate of refresh for the DNS cache may impact either the convergence time for the move or the
scalability of the DNS system if the rate is too high.
• Works only for name-based connections, while many applications are moving to an IP-based
connection model.
• Previously established connections are hair-pinned. This implies that there is a period of time
where there are active connections to the old address and some new connections to the new address
in the second data center. During this state the network administrator may not be able to ascertain
that these two addresses are the same system (from the point of view of the application).
NETWORK FUNCTION VIRTUALIZATION
If a customer wants to add a new network function, the service provider can simply spin up a new
virtual machine to perform that function.
For example, instead of deploying a new hardware appliance across the network to enable network
encryption, encryption software can be deployed on a standardized server or switch already in the
network.
This virtualization of network functions reduces dependency on dedicated hardware appliances for
network operators, and allows for improved scalability and customization across the entire network.
Different from a virtualized network, NFV seeks to offload network functions only, rather than the
entire network.
Fig.2.3 : Network Function Virtualization architecture
NFV architecture
The NFV architecture proposed by the European Telecommunications Standards Institute (ETSI) is
helping to define standards for NFV implementation. Each component of the architecture is based
on these standards to promote better stability and interoperability.
NFV architecture consists of:
Virtualized network functions (VNFs) are software applications that deliver network
functions such as file sharing, directory services, and IP configuration.
Network functions virtualization infrastructure (NFVi) consists of the infrastructure
components—compute, storage, networking—on a platform to support software, such as
a hypervisor like KVM or a container management platform, needed to run network apps.
Management, automation and network orchestration (MANO) provides the framework for
managing NFV infrastructure and provisioning new VNFs, as sketched below.
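The division of labour among these components can be conveyed in a few lines of Python. The class and method names below are illustrative stand-ins, not part of any ETSI-defined API: MANO requests resources from the NFVi and instantiates VNFs on them.

    class NFVi:
        """Infrastructure pool: compute/storage/network on commodity hardware."""
        def allocate(self, vcpus, ram_gb):
            return {"vcpus": vcpus, "ram_gb": ram_gb}

    class MANO:
        """Management and orchestration: provisions VNFs onto the NFVi."""
        def __init__(self, nfvi):
            self.nfvi = nfvi
            self.vnfs = []

        def instantiate_vnf(self, name, vcpus=2, ram_gb=4):
            resources = self.nfvi.allocate(vcpus, ram_gb)
            self.vnfs.append({"vnf": name, "resources": resources})

    mano = MANO(NFVi())
    mano.instantiate_vnf("virtual-firewall")     # new function, no new appliance
    mano.instantiate_vnf("encryption-service")
    print(mano.vnfs)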
Software-defined networking (SDN) and NFV
NFV and SDN are not dependent on each other, but they do have similarities. Both rely on
virtualization and use network abstraction, but how they separate functions and abstract resources is
different.
SDN separates network forwarding functions from network control functions with the goal of
creating a network that is centrally manageable and programmable. NFV abstracts network
functions from hardware. NFV supports SDN by providing the infrastructure on which SDN
software can run.
NFV and SDN can be used together, depending on what you want to accomplish, and both use
commodity hardware. With NFV and SDN, you can create a network architecture that is more
flexible, more programmable, and more efficient in its use of resources.
CLOUD AND FOG
Fog Computing
Fog computing, also called fog networking or fogging, describes a decentralized computing
structure located between the cloud and devices that produce data. This flexible structure enables
users to place resources, including applications and the data they produce, in logical locations to
enhance performance.
Finally, the data is sent to a fog node or IoT gateway, which collects the data for further
analysis. This will filter the data and in some cases save it to hand over to the cloud later.
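A minimal sketch of that fog-node role follows: collect readings, filter and aggregate locally, and hand only a summary to the cloud. The threshold and the upload function are placeholders, not a real cloud API.

    def fog_node_process(readings, threshold=75.0):
        """Keep anomalies for local action; forward only an aggregate."""
        anomalies = [r for r in readings if r > threshold]
        return {"count": len(readings),
                "mean": sum(readings) / len(readings),
                "anomalies": anomalies}

    def upload_to_cloud(summary):  # stand-in for a real uplink
        print("uploading:", summary)

    batch = [68.2, 70.1, 91.4, 69.8]  # e.g. temperature samples
    upload_to_cloud(fog_node_process(batch))  # one summary instead of four raw points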
Real-time analytics: Fog computing deployments can transfer data from the location where it is
produced to wherever it is needed. Fog computing supports real-time analytics, for example
passing live data from production networks to financial institutions that rely on it.
Characteristics of Fog
Cognition:
Cognition is responsiveness to client-centric objectives. Fog-based data access and analytics give
better awareness of customer requirements and of where best to place transmission, storage, and
control functions along the cloud-to-IoT continuum. Because applications run in close proximity
to end devices, they can be more aware of, and more responsive to, customer requirements.
Heterogeneity:
Fog computing is a virtualized platform, so it offers computation, storage, and networking services
between the main cloud and devices at the edge. Its heterogeneous servers consist of hierarchical
building blocks at distributed locations.
Geographical Environment Distribution:
The fog computing environment is widely distributed in order to provide QoS for both mobile and
stationary end devices. A fog network distributes its nodes and sensors geographically across
diverse environments, for example temperature monitoring at a chemical vat, weather monitoring
sensors, STLS sensors, and healthcare monitoring systems.
Edge Location with Low Latency:
Emerging smart application services are held back by the lack of QoS support in the proximity of
devices at the edge of the core network. Examples include video streaming to small TV devices,
monitoring sensors, and live gaming applications.
Real-Time Interaction:
Real-time interaction is a defining requirement of fog applications, such as monitoring a critical
process at an oil rig with fog edge devices or sensors, real-time transmission for traffic monitoring
systems, electricity distribution monitoring applications, and so on. Fog applications favor
real-time processing for QoS rather than batch processing.
Large-Scale Sensor Networks:
Fog is applicable when an environment monitoring system, as in smart grid applications,
inherently extends its monitoring reach and therefore requires hierarchical computing and storage
resources.
Widespread Wireless Access:
Wireless access points (WAPs) and cellular mobile gateways are classic examples of fog nodes
placed in proximity to end users.
Interoperable Technology:
Fog components must be able to interoperate to guarantee support for a wide range of services,
such as data streaming and real-time processing for the best data analyses and predictive
decisions.
Advantages of Fog Computing
➨It offers better security. Fog nodes can be protected using the same procedures followed in an IT
environment.
➨It processes selected data locally instead of sending it to the cloud for processing. Hence it
can save network bandwidth. This leads to lower operational costs.
➨It reduces latency requirements and hence quick decisions can be made. This helps in avoiding
accidents.
➨It offers better privacy for users' data, as the data is analyzed locally instead of being sent to the
cloud. Moreover, the IT team can manage and control the devices.
➨It is easy to develop fog applications using the right tools, which can drive machines according
to customers' needs.
➨Fog nodes are mobile in nature. Hence they can join and leave the network at any time.
➨Fog nodes can withstand harsh environmental conditions in places such as tracks, vehicles, under
sea, factory floors etc. Moreover it can be installed in remote locations.
➨Fog computing offers reduction in latency as data are analyzed locally. This is due to less round
trip time and less amount of data bandwidth.
Drawbacks of Fog Computing
➨Encryption algorithms and security policies make it more difficult for arbitrary devices to
exchange data. Any mistake in the security algorithms can expose data to hackers. Other security
issues include IP address spoofing, man-in-the-middle attacks, wireless network security, etc.
➨Achieving high data consistency in fog computing is challenging and requires more effort.
➨Fog computing aims to realize a global storage concept with virtually unlimited size and the
speed of local storage, but data management is a challenge.
➨Trust and authentication are major concerns.
➨Scheduling is complex as tasks can be moved between client devices, fog nodes and back end
cloud servers.
➨Power consumption is high in fog nodes compared to a centralized cloud architecture.
Cloud Computing
Cloud computing is the delivery of computing services—including servers, storage, databases,
networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster
innovation, flexible resources, and economies of scale. You typically pay only for cloud services
you use, helping lower your operating costs, run your infrastructure more efficiently and scale as
your business needs change.
Efficiency
Accessibility: Cloud-based applications and data are accessible from virtually any internet-
connected device.
Speed to market: Developing in the cloud enables users to get their applications to market
quickly.
Data security: Hardware failures do not result in data loss because of networked backups.
Savings on equipment: Cloud computing uses remote resources, saving organizations the
cost of servers and other equipment.
Pay structure: A “utility” pay structure means users only pay for the resources they use.
Strategic value
Streamlined work: Cloud service providers (CSPs) manage underlying infrastructure,
enabling organizations to focus on application development and other priorities.
Regular updates: Service providers regularly update offerings to give users the most up-to-
date technology.
Collaboration: Worldwide access means teams can collaborate from widespread locations.
Competitive edge: Organizations can move more nimbly than competitors who must
devote IT resources to managing infrastructure.
Hybrid cloud
A hybrid cloud is a type of cloud computing that combines on-premises infrastructure—or a
private cloud—with a public cloud. Hybrid clouds allow data and apps to move between the
two environments.
Many organisations choose a hybrid cloud approach due to business imperatives such as
meeting regulatory and data sovereignty requirements, taking full advantage of on-premises
technology investment or addressing low latency issues.
The hybrid cloud is evolving to include edge workloads as well. Edge computing brings the
computing power of the cloud to IoT devices—closer to where the data resides. By moving
workloads to the edge, devices spend less time communicating with the cloud, reducing latency,
and they can even operate reliably during extended offline periods.
Advantages of the hybrid cloud
Control—your organisation can maintain a private infrastructure for sensitive assets or
workloads that require low latency.
Flexibility—you can take advantage of additional resources in the public cloud when you
need them.
Cost-effectiveness—with the ability to scale to the public cloud, you pay for extra computing
power only when needed.
Ease—transitioning to the cloud does not have to be overwhelming because you can migrate
gradually—phasing in workloads over time.
Public cloud
Public clouds are the most common type of cloud computing deployment. The cloud
resources (like servers and storage) are owned and operated by a third-party cloud service
provider and delivered over the internet. With a public cloud, all hardware, software and
other supporting infrastructure are owned and managed by the cloud provider. Microsoft
Azure is an example of a public cloud.
In a public cloud, you share the same hardware, storage and network devices with other
organisations or cloud “tenants,” and you access services and manage your account using a
web browser. Public cloud deployments are frequently used to provide web-based email,
online office applications, storage and testing and development environments.
Advantages of public cloud
Lower costs—no need to purchase hardware or software and you pay only for the service
you use.
No maintenance—your service provider provides the maintenance.
Near-unlimited scalability—on-demand resources are available to meet your business needs.
High reliability—a vast network of servers ensures against failure.
Private cloud
A private cloud consists of cloud computing resources used exclusively by one business or
organisation. The private cloud can be physically located at your organisation’s on-site
datacenter or it can be hosted by a third-party service provider. But in a private cloud, the
services and infrastructure are always maintained on a private network and the hardware and
software are dedicated solely to your organisation.
In this way, a private cloud can make it easier for an organisation to customise its resources
to meet specific IT requirements. Private clouds are often used by government agencies,
financial institutions, and other mid- to large-size organisations with business-critical
operations seeking enhanced control over their environment.
Advantages of a private cloud
More flexibility—your organisation can customise its cloud environment to meet specific
business needs.
More control—resources are not shared with others, so higher levels of control and privacy
are possible.
More scalability—private clouds often offer more scalability compared to on-premises
infrastructure.
Types of Big Data
Following are the types of Big Data:
1. Structured
2. Unstructured
3. Semi-structured
Structured
Any data that can be stored, accessed, and processed in a fixed format is termed 'structured' data.
Over time, computer science has achieved great success in developing techniques for working with
such data (where the format is well known in advance) and deriving value from it. However,
nowadays we foresee issues as such data grows to a huge extent, with typical sizes in the range of
multiple zettabytes.
Examples of Structured Data
An 'Employee' table in a database is an example of Structured Data
Unstructured
Any data with unknown form or structure is classified as unstructured data. In addition to its huge
size, unstructured data poses multiple challenges in terms of processing to derive value from it. A
typical example of unstructured data is a heterogeneous data source containing a combination of
simple text files, images, videos, etc. Nowadays organizations have a wealth of data available to
them, but unfortunately they do not know how to derive value from it, since the data is in its raw
form or unstructured format.
Fig.2.5 : Unstructured data types
Semi-structured
Semi-structured data can contain both forms of data. It appears structured in form, but it is not
actually defined by, for example, a table definition in a relational DBMS. A typical example of
semi-structured data is data represented in an XML file, as shown below.
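A short Python illustration makes the point: the XML record below has a recognizable shape, but no fixed schema is enforced the way a relational table definition would enforce one. The employee fields are invented for the example.

    import xml.etree.ElementTree as ET

    record = """
    <employee>
      <name>Asha</name>
      <role>Engineer</role>
      <skills><skill>IoT</skill><skill>Python</skill></skills>
    </employee>
    """

    root = ET.fromstring(record)
    print(root.findtext("name"))                 # Asha
    print([s.text for s in root.iter("skill")])  # ['IoT', 'Python']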
(iii) Velocity – The term 'velocity' refers to the speed of generation of data. How fast the data is
generated and processed to meet the demands, determines real potential in the data.
Big Data Velocity deals with the speed at which data flows in from sources like business processes,
application logs, networks, and social media sites, sensors, Mobile devices, etc. The flow of data is
massive and continuous.
(iv) Variability – This refers to the inconsistency which can be shown by the data at times, thus
hampering the process of being able to handle and manage the data effectively.
Different Types of Big Data Analytics
Here are the four types of Big Data analytics:
1. Descriptive Analytics
This summarizes past data into a form that people can easily read. This helps in creating reports, like
a company’s revenue, profit, sales, and so on. Also, it helps in the tabulation of social media metrics.
Use Case: The Dow Chemical Company analyzed its past data to increase facility utilization across
its office and lab space. Using descriptive analytics, Dow was able to identify underutilized space.
This space consolidation helped the company save nearly US $4 million annually.
2. Diagnostic Analytics
This is done to understand what caused a problem in the first place. Techniques like drill-down, data
mining, and data recovery are all examples. Organizations use diagnostic analytics because they
provide an in-depth insight into a particular problem.
Use Case: An e-commerce company’s report shows that their sales have gone down, although
customers are adding products to their carts. This can be due to various reasons like the form didn’t
load correctly, the shipping fee is too high, or there are not enough payment options available. This
is where you can use diagnostic analytics to find the reason.
3. Predictive Analytics
This type of analytics looks into the historical and present data to make predictions of the future.
Predictive analytics uses data mining, AI, and machine learning to analyze current data and make
predictions about the future. It works on predicting customer trends, market trends, and so on.
Use Case: PayPal determines what kind of precautions they have to take to protect their clients
against fraudulent transactions. Using predictive analytics, the company uses all the historical
payment data and user behavior data and builds an algorithm that predicts fraudulent activities.
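The flavor of such predictive scoring can be conveyed with a toy z-score rule: learn what is "normal" for an account from its history, then flag payments that deviate sharply. This is an illustrative sketch, not PayPal's actual algorithm, and the review threshold is an assumption.

    from statistics import mean, stdev

    def fraud_score(history, new_amount):
        """Standard deviations by which new_amount exceeds the user's history."""
        mu, sigma = mean(history), stdev(history)
        return (new_amount - mu) / sigma if sigma else 0.0

    past_payments = [20.0, 35.0, 25.0, 30.0, 28.0]
    score = fraud_score(past_payments, new_amount=950.0)
    print(f"score={score:.1f}", "-> review" if score > 3 else "-> allow")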
4. Prescriptive Analytics
This type of analytics prescribes the solution to a particular problem. Prescriptive analytics works
with both descriptive and predictive analytics. Most of the time, it relies on AI and machine
learning.
Use Case: Prescriptive analytics can be used to maximize an airline’s profit. This type of analytics is
used to build an algorithm that will automatically adjust the flight fares based on numerous factors,
including customer demand, weather, destination, holiday seasons, and oil prices.
Working of M2M
The main purpose of machine-to-machine technology is to tap into sensor data and transmit it to a
network. Unlike SCADA or other remote monitoring tools, M2M systems often use public networks
and access methods -- for example, cellular or Ethernet -- to make it more cost-effective.
The main components of an M2M system include sensors, RFID, a Wi-Fi or cellular
communications link, and autonomic computing software programmed to help a network device
interpret data and make decisions. These M2M applications translate the data, which can trigger
preprogrammed, automated actions.
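The "interpret data and trigger preprogrammed actions" step can be sketched as a simple rule table. The sensor names, thresholds, and actions below are invented for the example; production systems would add validation, retries, and audit logging.

    RULES = [
        ("vibration_mm_s", lambda v: v > 7.1, "schedule maintenance"),
        ("temperature_c",  lambda v: v > 90,  "shut down pump"),
    ]

    def interpret(reading):
        """reading: dict like {'sensor': 'temperature_c', 'value': 93.5}"""
        actions = [act for name, cond, act in RULES
                   if name == reading["sensor"] and cond(reading["value"])]
        return actions or ["log only"]

    print(interpret({"sensor": "temperature_c", "value": 93.5}))   # ['shut down pump']
    print(interpret({"sensor": "vibration_mm_s", "value": 3.0}))   # ['log only']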
One of the most well-known types of machine-to-machine communication is telemetry, which has
been used since the early part of the last century to transmit operational data. Pioneers in telemetry
first used telephone lines, and later radio waves, to transmit performance measurements gathered
from monitoring instruments in remote locations.
The Internet and improved standards for wireless technology have expanded the role of telemetry
from pure science, engineering and manufacturing to everyday use in products such as heating units,
electric meters and internet-connected devices, such as appliances.
Beyond being able to remotely monitor equipment and systems, the top benefits of M2M include:
reduced costs by minimizing equipment maintenance and downtime;
boosted revenue by revealing new business opportunities for servicing products in the field;
and
improved customer service by proactively monitoring and servicing equipment before it fails
or only when it is needed.
In telemedicine, M2M devices can enable the real time monitoring of patients' vital statistics,
dispensing medicine when required or tracking healthcare assets.
The combination of the IoT, AI and ML is transforming and improving mobile payment processes
and creating new opportunities for different purchasing behaviors. Digital wallets, such as Google
Wallet and Apple Pay, will most likely contribute to the widespread adoption of M2M financial
activities.
Smart home systems have also incorporated M2M technology. The use of M2M in this embedded
system enables home appliances and other technologies to have real time control of operations as
well as the ability to remotely communicate.
M2M is also an important aspect of remote-control software, robotics, traffic control, security,
logistics and fleet management and automotive.
Key features of M2M
Key features of M2M technology include:
Low power consumption, in an effort to improve the system's ability to effectively service
M2M applications.
A network operator that provides packet-switched service.
Monitoring abilities that provide functionality to detect events.
Time tolerance, meaning data transfers can be delayed.
Time control, meaning data can only be sent or received at specific predetermined periods.
Location-specific triggers that alert or wake up devices when they enter particular areas.
The ability to continually send and receive small amounts of data.
M2M requirements
According to the European Telecommunications Standards Institute (ETSI), requirements of an
M2M system include:
Scalability - The M2M system should be able to continue to function efficiently as more
connected objects are added.
Anonymity - The M2M system must be able to hide the identity of an M2M device when
requested, subject to regulatory requirements.
Logging - M2M systems must support the recording of important events, such as failed
installation attempts, service not operating or the occurrence of faulty information.
The logs should be available by request.
M2M application communication principles - M2M systems should enable communication
between M2M applications in the network and the M2M device or gateway using
communication techniques, such as short message service (SMS) and IP. Connected devices
should also be able to communicate with each other in a peer-to-peer (P2P) manner.
Delivery methods - The M2M system should support unicast,
anycast, multicast and broadcast communication modes, with broadcast being replaced by
multicast or anycast whenever possible to minimize the load on the communication network.
Message transmission scheduling - M2M systems must be able to control network access and
messaging schedules and should be conscious of M2M applications' scheduling delay
tolerance.
Message communication path selection - Optimization of the message communication paths
within an M2M system must be possible and based on policies like transmission failures,
delays when other paths exist and network costs.
Artificial Intelligence
The intelligence demonstrated by machines is known as Artificial Intelligence. Artificial
Intelligence has grown to be very popular in today’s world. It is the simulation of natural
intelligence in machines that are programmed to learn and mimic the actions of humans. These
machines are able to learn with experience and perform human-like tasks. As technologies such
as AI continue to grow, they will have a great impact on our quality of life. It is natural that
everyone today wants to engage with AI technology somehow, whether as an end user or by
pursuing a career in Artificial Intelligence.
Top Used Applications in Artificial Intelligence
1. Google’s AI-powered predictions (E.g.: Google Maps)
2. Ride-sharing applications (E.g.: Uber, Lyft)
3. AI Autopilot in Commercial Flights
4. Spam filters on E-mails
5. Plagiarism checkers and tools
6. Facial Recognition
7. Search recommendations
8. Voice-to-text features
9. Smart personal assistants (E.g.: Siri, Alexa)
10. Fraud protection and prevention.
Text/Reference Books
1. S. Misra, A. Mukherjee, and A. Roy, Introduction to IoT. Cambridge University Press, 2020.
2. S. Misra, C. Roy, and A. Mukherjee, Introduction to Industrial Internet of Things and Industry 4.0.
CRC Press, 2020.
3. G. Girardin, A. Bonnabel, and E. Mounier, 'Technologies & Sensors for the Internet of Things:
Businesses & Market Trends 2014-2024', Yole Développement, 2014.
4. P. Waher, 'Learning Internet of Things', Packt Publishing, 2015.
Question Bank
PART-A
1. Define Big Data.
2. Distinguish between Structured and unstructured data.
3. Latency is low in fog computing, analyze the reasons.
4. Identify the domains where fog computing is used.
5. Produce an example of Moore’s Law.
6. Summarize how MEMS sensors are manufactured.
7. Distinguish between Artificial Intelligence and Machine Learning.
PART-B
1. Explain in detail how a wireless router works, with a neat architectural sketch.
2. Design a Virtual Machine to manage the control room of COVID DISASTER MANAGEMENT
with your own specifications
3. Describe in detail any 3 applications of AI in INDUSTRY 4.0, with their advantages and
disadvantages.
4. Design an IoT system to save energy and visualize data using a machine, and implement
algorithms to tackle problems in the industry.
5. Demonstrate and explain how Chennai can be converted into a smart city with the applications of
IoT in smart cities.
6. Discuss in detail the working of Mobile IP.