Fog computing or fog networking, also known as fogging, is an architecture that uses edge devices to carry out a substantial amount of computation, storage, and communication locally, with the results routed over the Internet backbone.
1) Fog computing is an extension of cloud computing that processes data closer to the edge of the network, such as at factory equipment, power poles, or vehicles. It aims to improve efficiency and reduce data transportation costs compared to cloud computing alone.
2) Fog computing involves fog nodes that are located between end devices and the cloud. Fog nodes can perform tasks like data analysis, storage, and sharing results with the cloud and other nodes. This helps process time-sensitive data locally for applications involving the internet of things.
3) Fog computing provides advantages over cloud computing such as lower latency, better support for mobility and real-time interactions, local data processing for privacy and efficiency, and the ability to handle a very large number of nodes.
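The placement logic described in these points can be sketched as a tiny dispatcher. The latency figures, tier names, and function name below are illustrative assumptions, not part of any cited system:

```python
# Sketch of device/fog/cloud task placement, assuming hypothetical
# round-trip latencies: time-sensitive tasks stay near the edge,
# loose-budget tasks go to the central cloud.

FOG_LATENCY_MS = 10      # assumed round trip to a nearby fog node
CLOUD_LATENCY_MS = 120   # assumed round trip to a distant data center

def place_task(latency_budget_ms: int) -> str:
    """Return where a task should run given its latency budget."""
    if latency_budget_ms >= CLOUD_LATENCY_MS:
        return "cloud"   # budget is loose enough for the cloud round trip
    if latency_budget_ms >= FOG_LATENCY_MS:
        return "fog"     # cloud would blow the budget; a fog node fits
    return "device"      # even the fog hop is too slow; handle on-device

print(place_task(5000))  # overnight batch analytics -> cloud
print(place_task(20))    # time-sensitive sensor reading -> fog
print(place_task(2))     # hard real-time control loop -> device
```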
Finding your Way in the Fog: Towards a Comprehensive Definition of Fog Computing
The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as “the fog”. However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.
This document provides a seminar report on cloud computing submitted by Vanama Vamsi Krishna in partial fulfillment of the requirements for a Bachelor of Technology degree. The 3-page report includes an abstract, table of contents, introduction on cloud computing concepts, a brief history of cloud computing, key characteristics of cloud computing including cost, scalability and reliability, components and architecture of cloud computing, types and roles in cloud computing, merits and demerits, and a conclusion. The report provides a high-level overview of cloud computing fundamentals.
A Review: The Internet of Things Using Fog Computing
Fog computing is a new computing paradigm that processes data and analytics at the edge of the network, rather than sending all data to a centralized cloud. This helps address issues with the cloud-based Internet of Things (IoT) model, such as high latency, bandwidth constraints, location awareness, and mobility. Fog computing brings computing resources closer to IoT devices and end users by using edge devices like routers, switches, and access points as "fog nodes" that can perform analytics and decision making. This allows time-sensitive IoT applications to function more efficiently. Fog computing also helps optimize resource usage by balancing processing between the edge and cloud.
Emerging cloud computing paradigm vision, research challenges and development...
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Internet of Things (IoT) represents a remarkable transformation of the way in which our world will soon interact. Much like the World Wide Web connected computers to networks, and the next evolution connected people to the Internet and other people, IoT looks poised to interconnect devices, people, environments, virtual objects and machines in ways that only science fiction writers could have imagined.
A Study on Cloud and Fog Computing Security Issues and Solutions (AM Publications)
Cloud computing is a significant part of the data world, yet its security level remains poorly defined. Fog computing is a newer term, coined by Cisco, motivated by the need for security and for bringing data closer to the end user. Fog computing is not going to replace cloud computing; rather, it acts as an intermediate layer for securing the data stored inside the cloud. The principal idea of this paper is to provide data safety measures for cloud storage through fog computing. Fog computing will play a vital role in future technology: the Internet of Things (IoT) will use fog computing to implement the smart-world concept, so in the future we will have to handle huge amounts of data and provide security for that data. This study presents the security solutions available for the different issues.
A Review: Fog Computing and Its Role in the Internet of Things (IJERA Editor)
Fog computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes; e) Predominant role of wireless access; f) Strong presence of streaming and real-time applications; g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Abstract: Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end users. It is a model in which data, processing and applications are concentrated in devices at the network edge rather than existing almost entirely in the cloud. This document describes the various features of Fog Computing, along with a case study and an actual implementation of fog computing in traffic analysis, to show how fog computing is applied to the edge environment. This document also covers the differences between fog computing and cloud computing. Keywords— Fog Computing, Characteristics of Fog Computing, Applications of Fog Computing, Difference between Cloud Computing and Fog Computing.
The document discusses the integration of fog computing with Internet of Things (IoT) applications. It introduces fog computing and how it extends cloud computing by providing data processing and storage locally at IoT devices to address challenges of latency and mobility. Benefits of fog computing include low latency, scalability, and flexibility to support various IoT applications like smart homes, healthcare, traffic lights, and connected cars. Challenges of integrating fog computing with IoT include security, privacy, resource estimation, and ensuring communication between fog servers and the cloud. The document reviews open issues and concludes by discussing future research directions for fog computing and IoT integration.
Fog Computing Reality Check: Real World Applications and Architectures (Biren Gandhi)
Is Fog Computing just a buzz or a real business?
The IoT is flooded with a variety of platforms and solutions. Fog Computing has been notably appearing as an evolving term in the context of IoT software. There is skepticism that Fog Computing is just another buzzword destined to disappear in the dust of time. Get insight from concrete business cases in a variety of IoT verticals – Agriculture, Industrial Manufacturing, Transportation, Smart & Connected Communities etc. and learn how Fog Computing can play a substantial role in each one of these verticals. Develop a judicious point of view with respect to the future of Fog Computing through market research, technology disruption vectors and ROI use cases presented in this session.
This document presents a seminar on fog computing given by Ajay Dhanraj Sirsat. It discusses the existing cloud computing system and its problems, proposes fog computing as an alternative system, and describes fog computing architecture and its advantages over cloud. Fog computing extends cloud services to the edge of the network to provide low latency and location awareness. It is well-suited for applications such as the Internet of Things, connected cars, smart grids, and smart buildings.
Topic: Moving from Cloud Computing to Fog Computing: How the “Internet of Things will Change the Way We Live and Work
Speaker: Jeff Hagins, Co-founder & CTO, SmartThings
This document discusses cloud computing. It begins with an introduction defining cloud computing as allowing users to access virtually unlimited computing resources over the internet. It then discusses the architecture of cloud computing including front-end and back-end components. The main components of a cloud are infrastructure, storage, platform, applications, services, and clients. There are different types of clouds including public clouds, private clouds, and hybrid clouds that use a mix of internal and external providers. Cloud services are divided into infrastructure as a service, platform as a service, and software as a service. The document concludes with some key characteristics of cloud computing such as its cost effectiveness and features like platform and location independence.
This document discusses the core concepts of cloud computing. It begins by explaining how cloud computing evolved from earlier technologies like mainframe computing, client-server systems, virtualization, distributed computing, and internet technologies. It then defines the key aspects of cloud computing models, including service models (IaaS, PaaS, SaaS) and deployment models (private, public, hybrid cloud). The document also outlines some of the core desired features of cloud computing like self-service, elasticity, metering and billing, and customization. Finally, it discusses some challenges and risks of cloud computing including security, privacy, trust issues as well as dependency on the cloud infrastructure.
This document provides an overview of cloud computing. It begins with an abstract that discusses how cloud computing is a recent buzzword that represents the future of computing both technically and socially. It then covers various topics related to cloud computing including the basics, types of clouds, stakeholders, advantages, motivations for growth, architecture, comparisons to grid computing and utility computing, popular cloud applications and potential applications in India.
Cloud computing provides many benefits but also poses security risks due to data being stored remotely. This document discusses several key security threats in cloud computing like data leakage, attacks against the cloud infrastructure, and issues regarding access control and data segregation. It proposes some solutions to address these risks, such as access control management, incident response processes, data partitioning, and migration capabilities to improve security in cloud environments.
This document discusses cloud computing. It begins with an introduction that defines cloud computing and outlines some of its key attributes and capabilities. It then discusses several aspects of cloud computing including cloud storage services, frameworks, architectures, and layers. The document also covers advantages and disadvantages of cloud computing, as well as threats and opportunities presented by the cloud. It concludes with a comparison table of several representative cloud platforms.
This document provides an introduction to fog computing. Fog computing is a model where data processing and applications occur at the edge of networks rather than solely in the cloud. This helps address limitations of cloud computing like high latency and bandwidth usage. Key characteristics of fog computing include low latency, geographical distribution, mobility support, and real-time interactions. Potential applications discussed are connected cars, smart grids, and smart traffic lights, which can benefit from fog computing's low latency and location awareness.
Imagine a world in which the computer users of today's internet no longer have to run, install or store their applications or data on their own machines; a world where every piece of your information or data resides on the Cloud (the Internet).
This document discusses fog computing as an extension of cloud computing that moves some computing and storage to the edge of the network. It begins with an abstract that outlines fog computing and its advantages over cloud, such as lower latency. The introduction discusses Cisco's vision for fog computing and bringing applications to billions of connected devices at the network edge. It then discusses how fog computing addresses the issues of slow response times and scalability that cloud computing faces for machine-to-machine communication. The document provides examples of how fog computing could be applied in smart traffic lights, wireless sensor networks, and the internet of things.
This document discusses security and privacy issues of fog computing based on a survey of existing work. It begins with an overview of fog computing, defining it as an extension of cloud computing to the edge of networks. It then identifies several key security and privacy challenges of fog computing, including issues of trust and authentication, network security, secure data storage, and secure and private data computation. Several potential solutions are also briefly discussed, such as reputation-based trust models, biometric authentication, software-defined networking for security, and techniques like homomorphic encryption to enable verifiable and private computation on outsourced data.
Security and Privacy Issues of Fog Computing: A Survey (HarshitParkar6677)
Abstract. Fog computing is a promising computing paradigm that extends cloud computing to the edge of networks. Similar to cloud computing but with distinct characteristics, fog computing faces new security and privacy challenges besides those inherited from cloud computing. In this paper, we have surveyed these challenges and corresponding solutions in a brief manner.
This document discusses fog computing and its role in supporting Internet of Things applications. It defines fog computing as extending cloud computing to the edge of the network to enable applications requiring low latency, mobility support, and location awareness. Key characteristics of fog include its geographical distribution, support for real-time interactions, and role in streaming and sensor applications. The document argues fog is well-suited as a platform for connected vehicles, smart grids, smart cities, and wireless sensor networks due to its ability to meet latency and mobility requirements. It also describes the interplay between fog and cloud for data analytics, with fog handling real-time analytics near data sources and cloud providing long-term global analytics.
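The fog/cloud analytics split described here, with real-time aggregation near the data source and only compact summaries forwarded for long-term global analytics, can be sketched in a few lines. The class name, window size, and averaging step are illustrative assumptions:

```python
# Sketch: a fog node aggregates sensor readings locally and forwards
# only per-window summaries to the cloud, instead of every raw reading.

class FogAggregator:
    def __init__(self, window: int = 4):
        self.window = window    # readings per summary (assumed)
        self.buffer = []        # raw readings held at the edge
        self.uploaded = []      # stands in for the link to the cloud

    def ingest(self, reading: float) -> None:
        self.buffer.append(reading)
        if len(self.buffer) >= self.window:
            # Local real-time analytics: here, a simple average.
            summary = sum(self.buffer) / len(self.buffer)
            self.uploaded.append(summary)  # only the summary leaves the edge
            self.buffer.clear()

agg = FogAggregator(window=4)
for r in [10.0, 12.0, 11.0, 13.0]:
    agg.ingest(r)
print(agg.uploaded)  # [11.5] -- one value crosses the backbone, not four
```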
Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end-users. The motivation of Fog computing lies in a series of real scenarios, such as Smart Grid, smart traffic lights in vehicular networks and software defined networks.
This document discusses fog computing, which extends cloud computing to the edge of the network. It describes the existing cloud computing model and proposes fog computing as an alternative to address issues like latency. Key topics covered include security issues, privacy issues, potential scenarios and applications of fog computing, and ideas for future enhancement.
Fog computing is a distributed computing paradigm that processes data closer to IoT devices rather than sending all data to centralized cloud servers. This helps address issues like high latency, bandwidth constraints, and scalability challenges. Fog computing deploys compute and storage resources between end devices and cloud data centers. It can perform tasks like data aggregation, analytics, and decision making near devices to enable low-latency applications. Coordinating fog and cloud resources requires addressing challenges regarding resource management, load balancing, APIs, security, and fault tolerance.
Adaptive Multi-Criteria-Based Load Balancing Technique for Resource Allocatio... (IJCNCJournal)
Recently, to deliver services directly to the network edge, fog computing, an emerging and developing technology, acts as a layer between the cloud and the IoT worlds. The cloud or fog computing nodes could be selected by IoTs applications to meet their resource needs. Due to the scarce resources of fog devices that are available, as well as the need to meet user demands for low latency and quick reaction times, resource allocation in the fog-cloud environment becomes a difficult problem. In this problem, the load balancing between several fog devices is the most important element in achieving resource efficiency and preventing overload on fog devices. In this paper, a new adaptive resource allocation technique for load balancing in a fog-cloud environment is proposed. The proposed technique ranks each fog device using hybrid multi-criteria decision- making approaches Fuzzy Analytic Hierarchy Process (FAHP) and Fuzzy Technique for Order Performance by Similarity to Ideal Solution (FTOPSIS), then selects the most effective fog device based on the resulting ranking set. The simulation results show that the proposed technique outperforms existing techniques in terms of load balancing, response time, resource utilization, and energy consumption. The proposed technique decreases the number of fog nodes by 11%, load balancing variance by 69% and increases resource utilization to 90% which is comparatively higher than the comparable methods.
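The ranking step can be illustrated with a crisp (non-fuzzy) TOPSIS sketch. The paper itself uses the fuzzy variants (FAHP and FTOPSIS); the criteria, weights, and device values below are invented purely for illustration:

```python
import math

# Crisp TOPSIS sketch for ranking fog devices by suitability.
# Criteria: free CPU (benefit), free RAM in GB (benefit), latency in ms (cost).
devices = {"fog-A": [0.6, 2.0, 15.0],
           "fog-B": [0.9, 1.0, 30.0],
           "fog-C": [0.3, 4.0, 10.0]}
weights = [0.4, 0.3, 0.3]          # assumed criterion weights
benefit = [True, True, False]      # latency is a cost criterion

names = list(devices)
cols = list(zip(*devices.values()))
norms = [math.sqrt(sum(v * v for v in col)) for col in cols]

def weighted(row):
    # Vector-normalize each value, then apply the criterion weight.
    return [w * v / n for v, w, n in zip(row, weights, norms)]

V = {name: weighted(row) for name, row in devices.items()}
ideal = [max(col) if b else min(col)
         for col, b in zip(zip(*V.values()), benefit)]
worst = [min(col) if b else max(col)
         for col, b in zip(zip(*V.values()), benefit)]

def closeness(row):
    # Relative closeness to the ideal solution, in [0, 1].
    d_best, d_worst = math.dist(row, ideal), math.dist(row, worst)
    return d_worst / (d_best + d_worst)

ranking = sorted(names, key=lambda n: closeness(V[n]), reverse=True)
print(ranking)  # most suitable fog device first
```

The fuzzy variants replace the crisp values with triangular fuzzy numbers and derive the weights via FAHP, but the distance-to-ideal ranking idea is the same.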
Fog computing is defined as a decentralized infrastructure that places storage and processing components at the edge of the cloud, where data sources such as application users and sensors exist. It is an architecture that uses edge devices to carry out a substantial amount of computation (edge computing), storage, and communication locally, routed over the Internet backbone. To achieve real-time automation, data capture and analysis have to be done in real time, without the high-latency and low-bandwidth issues that arise when processing data over the network. In 2012, Cisco introduced the term fog computing for dispersed cloud infrastructures. In 2015, Cisco partnered with Microsoft, Dell, Intel, Arm and Princeton University to form the OpenFog Consortium, whose primary goals were to promote and standardize fog computing. These concepts brought computing resources closer to data sources. Fog computing also differentiates between relevant and irrelevant data: relevant data is sent to the cloud for storage, while irrelevant data is either deleted or transmitted to the appropriate local platform. As such, edge computing and fog computing work in unison to minimize latency and maximize the efficiency of cloud-enabled enterprise systems. Fog computing consists of various components, such as fog nodes: independent devices that pick up the generated information. Fog nodes fall into three categories: fog devices, fog servers, and gateways. Fog devices store necessary data, while fog servers also compute on this data to decide a course of action; fog devices are usually linked to fog servers. Fog gateways redirect information between the various fog devices and servers. With fog computing, local data storage and scrutiny of time-sensitive data become easier.
This reduces both the amount of data passed to the cloud and the distance it travels, thereby reducing the security challenges. Fog computing enables data processing based on application demands and the available networking and computing resources, which reduces the amount of data that must be transferred to the cloud and ultimately saves network bandwidth. Fog computing can run independently and ensure uninterrupted service even with fluctuating network connectivity to the cloud, and it performs all time-sensitive actions close to end users, meeting the latency constraints of IoT applications.
IoT applications where data is generated in terabytes or more, where quick processing of large amounts of data is required, and where sending data back and forth to the cloud is not feasible are good candidates for fog computing. Fog computing provides real-time processing and event responses, which are critical in healthcare; it also addresses issues of network connectivity and the traffic required for remote storage, processing, and medical-record retrieval from the cloud.
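The relevant/irrelevant triage described above can be sketched as a simple rule running on a fog node. The anomaly bounds and reading format are invented assumptions, not part of any cited system:

```python
# Sketch of a fog node triaging sensor readings: values outside an
# assumed normal band count as "relevant" and are forwarded to the
# cloud; the rest are kept (or dropped) locally, saving bandwidth.

NORMAL_RANGE = (15.0, 35.0)   # assumed normal temperature band, Celsius

def triage(readings):
    to_cloud, local = [], []
    lo, hi = NORMAL_RANGE
    for r in readings:
        if r < lo or r > hi:      # anomalous -> relevant to persist
            to_cloud.append(r)
        else:                     # routine -> handle at the edge
            local.append(r)
    return to_cloud, local

cloud, edge = triage([22.0, 41.5, 30.1, 12.3])
print(cloud)  # [41.5, 12.3] -- only anomalies cross the backbone
print(edge)   # [22.0, 30.1]
```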
Clarifying fog computing and networking: 10 questions and answers (Rezgar Mohammad)
Fog computing is an architecture that distributes computing, storage, control and networking functions closer to users along the cloud-to-thing continuum compared to traditional cloud computing architectures. It aims to provide a seamless continuum of services from the cloud to end devices. Key differences between fog and edge computing are that fog is more inclusive, seeks to realize a seamless continuum rather than isolated platforms, and envisions a horizontal platform to support multiple industries. Fog computing is expected to enable new commercial opportunities and business models by providing integrated end-to-end services and applications through the convergence of cloud and fog platforms.
Extends cloud computing services to the edge of the network.
Similar to cloud, Fog provides:
Data
Computation
Storage
Application Services to end users.
Motivations for Fog Computing:
Smart Grid, Smart Traffic Lights in vehicular networks and Software Defined Networks.
This document provides an overview of fog computing, including its characteristics, architecture, applications, examples, advantages, and disadvantages. Fog computing extends cloud computing by performing computing tasks closer to end users at the edge of the network to reduce latency. It has a dense geographical distribution and supports mobility and real-time interactions better than cloud computing. The document outlines the key components of fog architecture and discusses scenarios where fog computing can be applied, such as smart grids, smart buildings, and connected vehicles.
Efficient ECC-Based Authentication Scheme for Fog-Based IoT Environment (IJCNCJournal)
The rapid growth of cloud computing and Internet of Things (IoT) applications faces several threats, such as latency, security, network failure, and performance. These issues are addressed by the development of fog computing, which brings storage and computation closer to IoT devices. However, securing this environment poses several challenges for security designers, engineers, and researchers. To ensure the confidentiality of data passing between connected devices, digital signature protocols have been applied to the authentication of identities and messages. In the traditional method, however, a user's private key is stored directly on the IoT device, so the private key may be disclosed under various malicious attacks; furthermore, these methods require a lot of energy, which drains the resources of IoT devices. A signature scheme based on the elliptic curve digital signature algorithm (ECDSA) is proposed in this paper to improve the security of the private key and the time taken for key-pair generation. ECDSA security is based on the intractability of the Elliptic Curve Discrete Logarithm Problem (ECDLP), which allows one to use much smaller groups. Smaller group sizes directly translate into shorter signatures, a crucial feature in settings where communication bandwidth is limited or data transfer consumes a large amount of energy. In this paper, we have chosen safe curve types of elliptic-curve cryptography (ECC) such as M221, SECP256r1, Curve25519, Brainpool P256t1, and M-551; these are among the most secure ECC curves, as their security rests on the complexity of the curve's ECDLP. A valid signature can be generated without re-establishing the whole private key. ECDSA ensures data security and successfully reduces intermediate attacks.
The efficiency and effectiveness of ECDSA in the IoT environment are validated by experimental evaluation and comparison analysis. The results indicate that, in comparison to the two-party ECDSA and RSA, the proposed ECDSA decreases computation time by 65% and 87%, respectively. Additionally, as compared to two-party ECDSA and RSA, respectively, it reduces energy consumption by 77% and 82%.
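The signing and verification mechanics behind such a scheme can be sketched with a toy ECDSA implementation. This sketch uses the well-known secp256k1 curve for brevity rather than the curves evaluated in the paper, and hand-rolled crypto like this is for illustration only, never production use:

```python
import hashlib, secrets

# Toy ECDSA over secp256k1 (y^2 = x^3 + 7 mod P). Illustration only:
# no constant-time arithmetic, no side-channel protection.
P = 2**256 - 2**32 - 977                      # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p1, p2):
    # Elliptic-curve point addition; None is the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1) * pow(2 * y1, -1, P) % P   # tangent slope (a = 0)
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, point):
    # Double-and-add scalar multiplication.
    result = None
    while k:
        if k & 1:
            result = add(result, point)
        point = add(point, point)
        k >>= 1
    return result

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(priv: int, msg: bytes):
    while True:
        k = secrets.randbelow(N - 1) + 1          # fresh nonce per signature
        r = mul(k, G)[0] % N
        if r == 0:
            continue
        s = pow(k, -1, N) * (h(msg) + r * priv) % N
        if s:
            return (r, s)

def verify(pub, msg: bytes, sig) -> bool:
    r, s = sig
    w = pow(s, -1, N)
    u1, u2 = h(msg) * w % N, r * w % N
    x = add(mul(u1, G), mul(u2, pub))[0]
    return x % N == r

priv = secrets.randbelow(N - 1) + 1
pub = mul(priv, G)
sig = sign(priv, b"fog node telemetry")
print(verify(pub, b"fog node telemetry", sig))   # True
print(verify(pub, b"tampered payload", sig))     # False
```

The paper's scheme additionally avoids storing the whole private key on the device; this sketch shows only the standard sign/verify flow that such schemes build on.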
The Future of Fog Computing and IoT: Revolutionizing Data ProcessingFredReynolds2
Sending a business e-mail, watching a YouTube video, making an online video call meeting, or playing a video game online requires considerable data flow. It necessitates such massive data flow in the direction of servers in data centers. Cloud computing prefers remote data processing and substantial storage systems to develop online apps we use daily. But we must know that other decentralized cloud computing systems exist. Fog computing technology is growing wildly in popularity. As per fog technology experts, the global fog technology market will reach nearly $2.3 billion at the end of 2032. The market for fog technology was $196.7 million at the end of 2022.
This document discusses security issues related to data management in wireless communication and sensor networks over cloud environments. It begins by describing wireless sensor networks and cloud computing individually, noting key characteristics like location independence and on-demand access. It then discusses how wireless sensor networks and cloud computing can be integrated using technologies like PHP and MySQL. The main body of the document focuses on security challenges in cloud computing environments, including issues related to virtualization, networking, and browser-based attacks that can carry over risks from traditional systems. It concludes that secure data transmission to and from the cloud is an important issue that requires mitigation techniques like encryption algorithms.
Chances are you have a Wi-Fi network at home, or live close to one (or more) that tantalizingly pops up in a list whenever you boot up the laptop.
The problem is, if there's a lock next to the network name (AKA the SSID, or service set identifier), that indicates security is activated. Without the password or passphrase, you're not going to get access to that network, or the sweet, sweet internet that goes with it.
A distributed denial-of-service (DDoS) attack is a malicious attempt to disrupt normal traffic of a targeted server, service or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic. DDoS attacks achieve effectiveness by utilizing multiple compromised computer systems as sources of attack traffic. Exploited machines can include computers and other networked resources such as IoT devices. From a high level, a DDoS attack is like a traffic jam clogging up with highway, preventing regular traffic from arriving at its desired destination.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together.
There are two methods for interfacing memory and I/O devices with a microprocessor: I/O mapped I/O and memory mapped I/O. I/O mapped I/O treats I/O devices and memory separately, while memory mapped I/O treats I/O devices as memory. I/O mapped I/O can use either 8 or 16 address lines, allowing connection of up to 256 fixed I/O devices or 65,536 variable I/O devices. Specific instructions like IN, OUT, and MOV are used to access I/O ports depending on whether it is fixed or variable addressing.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together.
This document discusses procedures, macros, and stack operations in assembly language. It explains that procedures allow repetitive code to be written once and called multiple times to save memory. Procedures use stack operations to push return addresses and data onto the stack. Macros simplify programming by reducing repetitive code. Procedures are called at runtime, while macro calls are replaced with their body at assembly time.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together.
Procedures and macros allow code to be reused in assembly language programs. Procedures are subroutines that are called using CALL and RET instructions. Macros allow short, repetitive code sequences to be defined once and reused by replacing the macro call with its body code. Some key differences are that procedures occupy less memory than macros since macro code is generated each time, while procedures' code is only stored once. Procedures are accessed using CALL while macros are accessed by name.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together. Some microprocessors in the 20th century required several chips. Microprocessors help to do everything from controlling elevators to searching the Web. Everything a computer does is described by instructions of computer programs, and microprocessors carry out these instructions many millions of times a second. [1]
Microprocessors were invented in the 1970s for use in embedded systems. The majority are still used that way, in such things as mobile phones, cars, military weapons, and home appliances. Some microprocessors are microcontrollers, so small and inexpensive that they are used to control very simple products like flashlights and greeting cards that play music when you open them. A few especially powerful microprocessors are used in personal computers.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together. Some microprocessors in the 20th century required several chips. Microprocessors help to do everything from controlling elevators to searching the Web. Everything a computer does is described by instructions of computer programs, and microprocessors carry out these instructions many millions of times a second. [1]
Microprocessors were invented in the 1970s for use in embedded systems. The majority are still used that way, in such things as mobile phones, cars, military weapons, and home appliances. Some microprocessors are microcontrollers, so small and inexpensive that they are used to control very simple products like flashlights and greeting cards that play music when you open them. A few especially powerful microprocessors are used in personal computers.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together. Some microprocessors in the 20th century required several chips. Microprocessors help to do everything from controlling elevators to searching the Web. Everything a computer does is described by instructions of computer programs, and microprocessors carry out these instructions many millions of times a second. [1]
Microprocessors were invented in the 1970s for use in embedded systems. The majority are still used that way, in such things as mobile phones, cars, military weapons, and home appliances. Some microprocessors are microcontrollers, so small and inexpensive that they are used to control very simple products like flashlights and greeting cards that play music when you open them. A few especially powerful microprocessors are used in personal computers.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together. Some microprocessors in the 20th century required several chips. Microprocessors help to do everything from controlling elevators to searching the Web. Everything a computer does is described by instructions of computer programs, and microprocessors carry out these instructions many millions of times a second. [1]
Microprocessors were invented in the 1970s for use in embedded systems. The majority are still used that way, in such things as mobile phones, cars, military weapons, and home appliances. Some microprocessors are microcontrollers, so small and inexpensive that they are used to control very simple products like flashlights and greeting cards that play music when you open them. A few especially powerful microprocessors are used in personal computers.
This document describes the Jcc family of conditional jump instructions in x86 assembly language. It provides the instruction name, description of the condition tested, and any alternative mnemonics or opposite instructions. The instructions test various CPU flags like carry, zero, sign, overflow, parity, and compare values based on signed or unsigned arithmetic.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together. Some microprocessors in the 20th century required several chips. Microprocessors help to do everything from controlling elevators to searching the Web. Everything a computer does is described by instructions of computer programs, and microprocessors carry out these instructions many millions of times a second. [1]
Microprocessors were invented in the 1970s for use in embedded systems. The majority are still used that way, in such things as mobile phones, cars, military weapons, and home appliances. Some microprocessors are microcontrollers, so small and inexpensive that they are used to control very simple products like flashlights and greeting cards that play music when you open them. A few especially powerful microprocessors are used in personal computers.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together. Some microprocessors in the 20th century required several chips. Microprocessors help to do everything from controlling elevators to searching the Web. Everything a computer does is described by instructions of computer programs, and microprocessors carry out these instructions many millions of times a second. [1]
Microprocessors were invented in the 1970s for use in embedded systems. The majority are still used that way, in such things as mobile phones, cars, military weapons, and home appliances. Some microprocessors are microcontrollers, so small and inexpensive that they are used to control very simple products like flashlights and greeting cards that play music when you open them. A few especially powerful microprocessors are used in personal computers.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together. Some microprocessors in the 20th century required several chips. Microprocessors help to do everything from controlling elevators to searching the Web. Everything a computer does is described by instructions of computer programs, and microprocessors carry out these instructions many millions of times a second. [1]
Microprocessors were invented in the 1970s for use in embedded systems. The majority are still used that way, in such things as mobile phones, cars, military weapons, and home appliances. Some microprocessors are microcontrollers, so small and inexpensive that they are used to control very simple products like flashlights and greeting cards that play music when you open them. A few especially powerful microprocessors are used in personal computers.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together. Some microprocessors in the 20th century required several chips. Microprocessors help to do everything from controlling elevators to searching the Web. Everything a computer does is described by instructions of computer programs, and microprocessors carry out these instructions many millions of times a second. [1]
Microprocessors were invented in the 1970s for use in embedded systems. The majority are still used that way, in such things as mobile phones, cars, military weapons, and home appliances. Some microprocessors are microcontrollers, so small and inexpensive that they are used to control very simple products like flashlights and greeting cards that play music when you open them. A few especially powerful microprocessors are used in personal computers.
A brief introduction to quadcopter (drone) working. It provides an overview of flight stability, dynamics, general control system block diagram, and the electronic hardware.
Software Engineering and Project Management - Introduction to Project ManagementPrakhyath Rai
Introduction to Project Management: Introduction, Project and Importance of Project Management, Contract Management, Activities Covered by Software Project Management, Plans, Methods and Methodologies, some ways of categorizing Software Projects, Stakeholders, Setting Objectives, Business Case, Project Success and Failure, Management and Management Control, Project Management life cycle, Traditional versus Modern Project Management Practices.
A vernier caliper is a precision instrument used to measure dimensions with high accuracy. It can measure internal and external dimensions, as well as depths.
Here is a detailed description of its parts and how to use it.
How to Manage Internal Notes in Odoo 17 POSCeline George
In this slide, we'll explore how to leverage internal notes within Odoo 17 POS to enhance communication and streamline operations. Internal notes provide a platform for staff to exchange crucial information regarding orders, customers, or specific tasks, all while remaining invisible to the customer. This fosters improved collaboration and ensures everyone on the team is on the same page.
Literature Reivew of Student Center DesignPriyankaKarn3
It was back in 2020, during the COVID-19 lockdown Period when we were introduced to an Online learning system and had to carry out our Design studio work. The students of the Institute of Engineering, Purwanchal Campus, Dharan did the literature study and research. The team was of Prakash Roka Magar, Priyanka Karn (me), Riwaz Upreti, Sandip Seth, and Ujjwal Dev from the Department of Architecture. It was just a scratch draft made out of the initial phase of study just after the topic was introduced. It was one of the best teams I had worked with, shared lots of memories, and learned a lot.
Response & Safe AI at Summer School of AI at IIITHIIIT Hyderabad
Talk covering Guardrails , Jailbreak, What is an alignment problem? RLHF, EU AI Act, Machine & Graph unlearning, Bias, Inconsistency, Probing, Interpretability, Bias
20CDE09- INFORMATION DESIGN
UNIT I INCEPTION OF INFORMATION DESIGN
Introduction and Definition
History of Information Design
Need of Information Design
Types of Information Design
Identifying audience
Defining the audience and their needs
Inclusivity and Visual impairment
Case study.
The Fog Computing Paradigm: Scenarios and Security Issues

Ivan Stojmenovic
SIT, Deakin University, Burwood, Australia
and SEECS, University of Ottawa, Canada
Email: stojmenovic@gmail.com

Sheng Wen
School of Information Technology, Deakin University,
220 Burwood Highway, Burwood, VIC, 3125, Australia
Email: wesheng@deakin.edu.au
Abstract—Fog computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to the Cloud, Fog provides data, compute, storage, and application services to end-users. In this article, we elaborate on the motivation and advantages of Fog computing and analyse its applications in a series of real scenarios, such as the Smart Grid, smart traffic lights in vehicular networks, and software defined networks. We discuss the state of the art of Fog computing and similar work under the same umbrella. Security and privacy issues in the current Fog computing paradigm are further disclosed. As an example, we study a typical attack, the man-in-the-middle attack, in our discussion of security in Fog computing. We investigate the stealthy features of this attack by examining its CPU and memory consumption on a Fog device.

Index Terms—Fog Computing, Cloud Computing, Internet of Things, Software Defined Networks.
I. INTRODUCTION

CISCO recently delivered the vision of fog computing to enable applications on billions of connected devices, already connected in the Internet of Things (IoT), to run directly at the network edge [1]. Customers can develop, manage and run software applications on the Cisco IOx framework of networked devices, including hardened routers, switches and IP video cameras. Cisco IOx brings the open-source Linux and Cisco IOS network operating systems together in a single networked device (initially in routers). The open application environment encourages more developers to bring their own applications and connectivity interfaces to the edge of the network. Regardless of Cisco's practices, we first answer the questions of what Fog computing is and what the differences between Fog and Cloud are.
In Fog computing, services can be hosted at end devices such as set-top boxes or access points. The infrastructure of this new distributed computing model allows applications to run as close as possible to the sensed, actionable and massive data coming out of people, processes and things. This Fog computing concept, in effect Cloud computing close to the 'ground', creates automated responses that drive value.

Both Cloud and Fog provide data, computation, storage and application services to end-users. However, Fog can be distinguished from Cloud by its proximity to end-users, its dense geographical distribution and its support for mobility [2]. We adopt a simple three-level hierarchy as in Figure 1.
Fig. 1. Fog between edge and cloud.
In this framework, each smart thing is attached to one of the Fog devices. Fog devices can be interconnected, and each of them is linked to the Cloud.

In this article, we take a close look at the Fog computing paradigm. The goal of this research is to investigate the advantages of Fog computing for services in several domains, such as the Smart Grid, wireless sensor networks, the Internet of Things (IoT) and software defined networks (SDNs). We examine the state of the art and disclose some general issues in Fog computing, including security, privacy, trust, and service migration among Fog devices and between Fog and Cloud. We conclude the article with a discussion of future work.
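The three-level hierarchy described above can be sketched as a tiny data model. This is a minimal illustration only; the class and device names are hypothetical and not from the paper:

```python
# Minimal sketch of the three-level Cloud-Fog-Edge hierarchy of Figure 1.
# Each smart thing attaches to one Fog device; Fog devices interconnect
# with peers and each links to the Cloud.

class Cloud:
    def __init__(self):
        self.fog_devices = []

class FogDevice:
    def __init__(self, name, cloud):
        self.name = name
        self.things = []                 # smart things attached here
        self.peers = []                  # interconnected Fog devices
        cloud.fog_devices.append(self)   # each Fog device links to the Cloud

    def attach(self, thing):
        self.things.append(thing)

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

cloud = Cloud()
fog_a = FogDevice("edge-router-A", cloud)
fog_b = FogDevice("edge-router-B", cloud)
fog_a.connect(fog_b)
fog_a.attach("smart-meter-1")
print(len(cloud.fog_devices))  # 2
```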
II. WHY DO WE NEED FOG?

In the past few years, Cloud computing has provided many opportunities for enterprises by offering their customers a range of computing services. The current "pay-as-you-go" Cloud computing model has become an efficient alternative to owning and managing private data centres for customers facing Web applications and batch processing [3]. Cloud computing frees enterprises and their end users from specifying many details, such as storage resources, computation limitations and network communication costs. However, this bliss becomes a problem for latency-sensitive applications, which require nodes in the vicinity to meet their delay requirements [2]. As the techniques and devices of IoT become more involved in people's lives, the current Cloud computing paradigm can hardly satisfy their requirements for mobility support, location awareness and low latency.

Proceedings of the 2014 Federated Conference on Computer Science and Information Systems, pp. 1–8, DOI: 10.15439/2014F503, ACSIS, Vol. 2. 978-83-60810-58-3/$25.00 © 2014, IEEE.

Fig. 2. Fog computing in smart grid.
Fog computing is proposed to address the above problem [1]. As Fog computing is implemented at the edge of the network, it provides low latency and location awareness, and it improves quality of service (QoS) for streaming and real-time applications. Typical examples include industrial automation, transportation, and networks of sensors and actuators. Moreover, this new infrastructure supports heterogeneity, as Fog devices include end-user devices, access points, edge routers and switches. The Fog paradigm is well positioned for real-time big data analytics, supports densely distributed data collection points, and provides advantages in entertainment, advertising, personal computing and other applications.
III. WHAT CAN WE DO WITH FOG?

We elaborate on the role of Fog computing in the following six motivating scenarios. The advantages of Fog computing satisfy the requirements of applications in these scenarios.
Smart Grid: Energy load balancing applications may run on network edge devices, such as smart meters and micro-grids [4]. Based on energy demand, availability and the lowest price, these devices automatically switch to alternative energies like solar and wind. As shown in Figure 2, Fog collectors at the edge process the data generated by grid sensors and devices, and issue control commands to the actuators [2]. They also filter the data to be consumed locally and send the rest to the higher tiers for visualization, real-time reports and transactional analytics. Fog supports storage ranging from ephemeral at the lowest tier to semi-permanent at the highest tier. Global coverage is provided by the Cloud, with business intelligence analytics.
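The local-versus-forwarded split performed by a Fog collector can be sketched as follows. This is an illustrative toy, not the paper's method; the frequency band used to define "routine" readings is an assumption:

```python
# Sketch of a Fog collector for the smart grid scenario: readings inside a
# normal operating band are consumed locally; the rest are forwarded to
# higher tiers for visualisation and analytics. The band is an assumption.

def split_readings(readings, local_band=(49.8, 50.2)):
    """Split grid frequency readings (Hz) into locally consumed vs forwarded."""
    lo, hi = local_band
    local = [r for r in readings if lo <= r <= hi]
    forwarded = [r for r in readings if r < lo or r > hi]
    return local, forwarded

local, forwarded = split_readings([50.0, 50.1, 47.5, 50.05, 52.0])
print(len(local), len(forwarded))  # 3 2
```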
Fig. 3. Fog computing in smart traffic lights and connected vehicles.
Smart Traffic Lights and Connected Vehicles: A video camera that senses an ambulance's flashing lights can automatically change street lights to open lanes for the vehicle to pass through traffic. Smart street lights interact locally with sensors to detect the presence of pedestrians and bikers, and measure the distance and speed of approaching vehicles. As shown in Figure 3, intelligent lighting turns on once a sensor identifies movement and switches off as traffic passes. Neighbouring smart lights serving as Fog devices coordinate to create a green traffic wave and send warning signals to approaching vehicles [2]. Wireless access points like WiFi, 3G, road-side units and smart traffic lights are deployed along the roads. Vehicle-to-vehicle, vehicle-to-access-point, and access-point-to-access-point interactions enrich the applications of this scenario.
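The local decision rule of such a smart light can be sketched as a few lines of logic. The rule set below is purely illustrative, not the paper's control algorithm:

```python
# Toy sketch of a smart street light acting as a Fog device: it opens a lane
# when an ambulance is sensed, turns on when movement is detected, and
# switches off as traffic passes. The priority ordering is an assumption.

def light_state(movement_detected, ambulance_detected):
    if ambulance_detected:
        return "open-lane"   # clear a lane for the emergency vehicle
    return "on" if movement_detected else "off"

print(light_state(True, False))   # on
print(light_state(False, True))   # open-lane
```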
Wireless Sensor and Actuator Networks: Traditional wireless sensor networks fall short in applications that go beyond sensing and tracking and require actuators to exert physical actions like opening, closing or even carrying sensors [2]. In this scenario, actuators serving as Fog devices can control the measurement process itself, as well as its stability and oscillatory behaviour, by creating a closed-loop system. For example, in the scenario of self-maintaining trains, sensors monitoring a train's ball bearings can detect heat levels, allowing applications to send an automatic alert to the train operator to stop the train at the next station for emergency maintenance and avoid a potential derailment. In the lifesaving air vents scenario, sensors on vents monitor air conditions flowing in and out of mines and automatically change the air flow if conditions become dangerous to miners.
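The closed-loop check in the self-maintaining train example can be sketched as a simple threshold rule. The temperature threshold is an invented value for illustration:

```python
# Sketch of the ball-bearing monitoring loop: if any bearing exceeds a
# temperature threshold, an alert is raised to the train operator.
# The 90 C threshold is an assumption, not a figure from the paper.

BEARING_ALERT_C = 90.0

def check_bearing(temps_c, threshold=BEARING_ALERT_C):
    """Return an alert if any bearing temperature exceeds the threshold."""
    hot = [t for t in temps_c if t > threshold]
    if hot:
        return f"ALERT: stop at next station, {len(hot)} bearing(s) overheating"
    return "OK"

print(check_bearing([62.0, 71.5, 95.2]))
```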
Fig. 4. Fog computing in SDN in vehicular networks [6].

Decentralized Smart Building Control: The applications of this scenario are facilitated by wireless sensors deployed to measure temperature, humidity, or the levels of various gases in the building atmosphere. In this case, information can be exchanged among all the sensors on a floor, and their readings can be combined to form reliable measurements. The sensors use distributed decision making and activation at Fog devices to react to the data. The system components may then work together to lower the temperature, inject fresh air or open windows. Air conditioners can remove moisture from the air or increase the humidity. Sensors can also trace and react to movements (e.g., by turning lights on or off). Fog devices could be assigned to each floor and could collaborate at a higher level of actuation. With Fog computing applied in this scenario, smart buildings can maintain their fabric and their external and internal environments to conserve energy, water and other resources.
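Combining a floor's sensor readings into one reliable measurement can be sketched, for instance, as a trimmed mean that discards the most extreme reading on each side. The trimming rule is an assumption, not the paper's fusion method:

```python
# Sketch of fusing the readings of all sensors on a floor into a single
# reliable measurement via a trimmed mean: drop the lowest and highest
# reading (e.g., a faulty sensor) and average the rest.

def reliable_measurement(readings):
    if len(readings) <= 2:
        return sum(readings) / len(readings)
    trimmed = sorted(readings)[1:-1]
    return sum(trimmed) / len(trimmed)

# One sensor reports 35.0 C on a 21 C floor; trimming removes the outlier.
print(round(reliable_measurement([21.0, 21.2, 35.0, 20.9, 21.1]), 1))  # 21.1
```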
IoT and Cyber-Physical Systems (CPSs): Fog computing based systems are becoming an important class of IoT and CPSs. Based on traditional information carriers, including the Internet and telecommunication networks, IoT is a network that can interconnect ordinary physical objects with identified addresses [5]. CPSs feature a tight combination of a system's computational and physical elements. CPSs also coordinate the integration of computer- and information-centric physical and engineered systems. IoT and CPSs promise to transform our world with new relationships between computer-based control and communication systems, engineered systems and physical reality. Fog computing in this scenario builds on the concepts of embedded systems, in which software programs and computers are embedded in devices for reasons other than computation alone. Examples of such devices include toys, cars, medical devices and machinery. The goal is to integrate the abstractions and precision of software and networking with the dynamics, uncertainty and noise of the physical environment. Using the emerging knowledge, principles and methods of CPSs, we will be able to develop new generations of intelligent medical devices and systems, 'smart' highways, buildings, factories, and agricultural and robotic systems.
Software Defined Networks (SDN): As shown in Figure 4, the Fog computing framework can be applied to implement the SDN concept for vehicular networks. SDN is an emergent computing and networking paradigm and has become one of the most popular topics in the IT industry [7]. It separates the control and data communication layers. Control is done at a centralized server, and nodes follow the communication paths decided by the server. The centralized server may need a distributed implementation. The SDN concept has been studied in WLANs, wireless sensor networks and mesh networks, but those settings do not involve multi-hop wireless communication or multi-hop routing, and there is no communication between peers. The SDN concept together with Fog computing will resolve the main issues in vehicular networks, namely intermittent connectivity, collisions and high packet loss rates, by augmenting vehicle-to-vehicle with vehicle-to-infrastructure communications and centralized control. The SDN concept for vehicular networks was first proposed in [6].
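The split between centralized control and simple forwarding can be sketched as a controller that computes a path which the nodes then follow. The graph, node names, and shortest-hop routing below are illustrative assumptions, not the scheme of [6]:

```python
# Toy sketch of SDN-style control in a vehicular network: a centralized
# controller computes the communication path (shortest hop count via BFS)
# and the vehicles/road-side units simply follow it.

from collections import deque

def controller_path(links, src, dst):
    """Centralized controller: shortest-hop path over the known links."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route known to the controller

links = [("car1", "rsu1"), ("rsu1", "rsu2"), ("rsu2", "car2"), ("car1", "car3")]
print(controller_path(links, "car1", "car2"))  # ['car1', 'rsu1', 'rsu2', 'car2']
```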
IV. STATE-OF-THE-ART

A total of eight articles were identified on the concept of Fog computing [1], [2], [8], [9], [10], [11], [12], [13], [14]. Some other concepts, although not declared as Fog computing, fall under the same umbrella. We also discuss these works in the subsection on similar work.
A. Related Work

K. Hong et al. proposed Mobile Fog in [11]. This is a high-level programming model for geo-spatially distributed, large-scale and latency-sensitive future Internet applications. Following the logical structure shown in Figure 1, low-latency processing occurs near the edge, while latency-tolerant, large-scope aggregation is performed on powerful resources in the core of the network (normally the Cloud). Mobile Fog consists of a set of event handlers and functions that an application can call. The Mobile Fog model is not presented as a generic model but is built for a particular application, leaving out functions that deal with the technical challenges of the image processing primitives involved. This Fog computing approach reduces latency and network traffic.
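A programming model built around "event handlers and functions an application can call" can be sketched as a small handler registry. All names here are hypothetical; the actual Mobile Fog API in [11] differs:

```python
# Sketch of an event-handler-based edge programming model in the spirit of
# Mobile Fog: the application registers handlers, and the runtime dispatches
# events to them near the edge. Class and method names are invented.

class EdgeApp:
    def __init__(self):
        self.handlers = {}

    def on(self, event, fn):
        """Register a handler for an event type."""
        self.handlers[event] = fn

    def dispatch(self, event, payload):
        """Runtime side: invoke the matching handler, if any."""
        handler = self.handlers.get(event)
        return handler(payload) if handler else None

app = EdgeApp()
app.on("sensor_reading", lambda v: f"processed {v} near the edge")
print(app.dispatch("sensor_reading", 42))  # processed 42 near the edge
```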
B. Ottenwalder et al. presented a placement and migration method for Cloud and Fog resource providers [13]. It ensures application-defined end-to-end latency restrictions and reduces network utilization by planning the migration ahead of time. They also show how the application knowledge of a complex event processing system can be used to reduce the bandwidth required by virtual machines during their migration. Network-intensive operators are placed on distributed Fog devices, while computationally intensive operators remain in the Cloud. Migration costs are amortized by selecting migration targets that ensure a low expected network utilization for a sufficiently long time. This work does not optimize workload mobility, although Fog devices are also able to carry computationally intensive tasks. It also does not optimize the size of control information or the mobility overhead, and it does not describe network control policies for finding optimal paths for different applications.
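The target-selection idea, picking a migration target whose expected network utilization stays low for long enough to amortize the migration cost, can be sketched as follows. The scoring rule and all numbers are assumptions, not the optimization in [13]:

```python
# Sketch of amortized migration-target selection: a candidate is viable only
# if its expected period of low utilization outweighs the migration cost;
# among viable candidates, pick the lowest expected utilization.

def pick_migration_target(candidates, migration_cost):
    """candidates: list of (name, expected_utilization, stable_seconds)."""
    viable = [c for c in candidates if c[2] * (1.0 - c[1]) > migration_cost]
    if not viable:
        return None  # no target amortizes the migration cost
    return min(viable, key=lambda c: c[1])[0]

candidates = [("fog-A", 0.9, 100), ("fog-B", 0.3, 300), ("fog-C", 0.4, 50)]
print(pick_migration_target(candidates, migration_cost=60))  # fog-B
```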
In [11], K. Hong et al. proposed an opportunistic spatio-temporal event processing system that uses prediction-based continuous query handling. Their system predicts future query regions for moving consumers and starts processing events early, so that live situational information is available when the consumer reaches the future location. Historical events for a location are processed before the mobile user arrives at that location, and live event processing begins the moment the user arrives. To cope with fast-moving users, the authors propose using parallel resources to enable pipelined processing of future locations several time steps ahead. Further, they propose taking several predictions for each time step and opportunistically computing the events for all of those locations. When the user arrives at that time, the prediction closest to the truth is selected and its events are returned.
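The opportunistic precomputation step can be sketched in a few lines: events are computed for every predicted location, and on arrival the results of the prediction closest to the actual position are returned. The one-dimensional positions and distance metric are illustrative assumptions:

```python
# Sketch of prediction-based continuous query handling: precompute events
# for several candidate future locations, then serve the cache entry whose
# prediction is closest to where the user actually arrived.

def precompute(predictions, events_at):
    """Opportunistically compute events for all predicted locations."""
    return {loc: events_at(loc) for loc in predictions}

def on_arrival(actual, cache):
    """Select the prediction closest to the truth and return its events."""
    closest = min(cache, key=lambda loc: abs(loc - actual))
    return cache[closest]

cache = precompute([100, 110, 120], events_at=lambda loc: f"events@{loc}")
print(on_arrival(112, cache))  # events@110
```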
J. Zhu et al. applied existing methods for web optimization in a novel manner [14]. Within the Fog computing context, these methods can be combined with unique knowledge that is only available at the Fog devices. More dynamic adaptation to the user's conditions can also be accomplished with knowledge specific to the network edge. As a result, a user's Web page rendering performance is improved beyond what is achieved by simply applying those methods at the Web server.
In the mobile Cloud concept [12], pervasive mobile devices
share their heterogeneous resources and support services.
Neighbouring nodes in a local network form a group called a
local Cloud. Nodes share their resources with other nodes in
the same local Cloud. A local resource coordinator, serving as
a Fog device, is elected from the nodes in each local Cloud. The
work [12] proposed an architecture and mathematical frame-
work for heterogeneous resource sharing based on the key idea
of service-oriented utility functions. Normally heterogeneous
resources are quantified in disparate scales, such as power,
bandwidth and latency. However, the authors of [12] present a
unified framework where all these quantities are equivalently
mapped to “time” resources. They formulate optimization
problems for maximizing the sum and product of the utility
functions, and solve them via convex optimization approaches.
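As a minimal sketch of this idea, suppose each service's heterogeneous needs have already been mapped to a time valuation w_i and utilities are logarithmic (assumptions for illustration, not the model of [12]); maximizing the sum of utilities under a shared time budget then has a proportional closed form:

```python
def allocate_time(weights, total_time):
    """Maximize sum_i w_i * log(t_i) subject to sum_i t_i = total_time.
    With concave log utilities the KKT conditions give the closed form
    t_i = (w_i / sum(w)) * total_time (proportional-fair allocation)."""
    s = sum(weights)
    return [w / s * total_time for w in weights]

# three services whose power/bandwidth/latency needs were mapped to "time"
print(allocate_time([1.0, 2.0, 1.0], total_time=8.0))  # [2.0, 4.0, 2.0]
```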
The work [10] first reviews the reliability requirements of
Smart Grid, Cloud, and sensors and actuators. This work then
combines them towards reliable Fog computing. However, it
only concludes that building Fog computing based projects
is challenging and does not offer any novel concept for
the reliability of the network of smart devices in the Fog
computing paradigm.
B. Similar Work
BETaaS [15] proposed replacing the Cloud as the host
for machine-to-machine applications with a ‘local Cloud’ of
gateways. The ‘local Cloud’ is composed of devices that
provide smart things with connectivity to the Internet, such as
smart phones, home routers and road-side units. This supports
applications that are limited in time and space and require
simple, repetitive interactions, and enables them to respond
in a consistent manner.
Demand Response Management (DRM) is a key component
in the smart grid to effectively reduce power generation costs
and user bills. The work [16] addressed the DRM problem
in a network of multiple utility companies and consumers
where every entity is concerned about maximizing its own
benefit. In their model, utility companies communicate with
each other, while users receive price information from utility
companies and transmit their demand to them. They propose
a Stackelberg game [17] between utility companies and end-
users to maximize the revenue of each utility company and
the payoff of each user. The Stackelberg equilibrium of the
game is unique. They develop a distributed algorithm
which converges to the equilibrium with only local information
available for both utility companies and end-users. Utility
companies play a non-cooperative game. They inform users
whenever they change price, and users then update their
demand vectors and inform utility companies. This iterates
until convergence. The main drawback of this algorithm is a
significant communication overhead between users and utility
companies. Though DRM helps to facilitate the reliability of
power supply, the smart grid can be susceptible to privacy and
security issues because of communication links between the
utility companies and the consumers. They study the impact of
an attacker who can manipulate the price information from the
utility companies, and propose a scheme based on the concept
of shared reserve power to improve the grid reliability and
ensure its dependability.
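The price-demand iteration can be illustrated with a toy single-utility model, assuming logarithmic user utilities and a simple gradient-style price update (these assumptions are ours, not the game of [16]):

```python
def demand(valuation, price):
    """User best response to the announced price: maximizing
    v*log(d) - price*d gives d = v / price."""
    return valuation / price

def stackelberg(valuations, capacity, step=0.1, tol=1e-9, max_iter=100_000):
    """Leader announces a price; followers reply with their demands;
    the leader nudges the price until total demand meets capacity."""
    price = 1.0
    for _ in range(max_iter):
        total = sum(demand(v, price) for v in valuations)
        if abs(total - capacity) < tol:
            break
        price += step * (total - capacity)  # raise price when over-subscribed
    return price, [demand(v, price) for v in valuations]

price, demands = stackelberg([1.0, 2.0, 3.0], capacity=3.0)
print(round(price, 4))  # equilibrium price sum(v)/capacity = 2.0
```

Note that each iteration exchanges a price and a demand vector, which hints at the communication overhead mentioned above.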
The work [18] investigated how energy consumption may
be optimized by taking into consideration the interaction
between both parties. The energy price model is a function
of total energy consumption. The objective function optimizes
the difference between the value and cost of energy. The
power supplier polls consumers in a round-robin fashion,
and provides them with the energy price parameter and current
consumption summary vector. Each user then optimizes his
own schedule and reports it to the supplier, which in turn
updates its energy price parameter before polling the next
consumers. This interaction between the power company and
its consumers is modelled through a two-step centralized
game, based on which the work [18] proposed the Game-
Theoretic Energy Schedule (GTES) method. The objective of
the GTES method is to reduce the peak-to-average power ratio
by optimizing the users’ energy schedules.
The closest work on SDN in vehicular networks comprises
several implementations in wireless sensor and mesh
networks [19], [20]. Moreover, B. Zhou et al. studied adaptive
traffic light control for smoothing vehicles’ travel and
maximizing traffic throughput for both single and multiple lanes
[21], [22]. In addition, the work [23] proposed a three-tier
structure for traffic light control. First, an electronic toll collec-
tion (ETC) system is employed for collecting road traffic flow
data and calculating the recommended speed. Second, radio
antennas are installed near the traffic lights. Third, road traffic
flow information can be obtained by wireless communication
between the antennas and ETC devices. A branch-and-bound-
based real-time traffic light control algorithm is designed to
smooth vehicles’ travels.
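A generic branch-and-bound over green-phase plans, pruning on accumulated waiting, might look like the following sketch (toy arrival data and a simplified delay model; not the algorithm of [23]):

```python
# Branch-and-bound sketch: pick the green phase per time slot to minimize
# total waiting; prune branches whose accumulated waiting already exceeds
# the best complete plan found so far (waiting only ever accumulates).
ARRIVALS = [(3, 1), (2, 2), (0, 4)]    # (north-south, east-west) cars per slot
SERVE = 4                              # cars released per green slot

def search(slot, ns, ew, cost, plan, best):
    if cost >= best[0]:                # bound: cannot beat the incumbent
        return
    if slot == len(ARRIVALS):
        best[0], best[1] = cost, plan  # new incumbent plan
        return
    ns, ew = ns + ARRIVALS[slot][0], ew + ARRIVALS[slot][1]
    for green in ("NS", "EW"):         # branch on which approach gets green
        q_ns = max(ns - SERVE, 0) if green == "NS" else ns
        q_ew = max(ew - SERVE, 0) if green == "EW" else ew
        search(slot + 1, q_ns, q_ew, cost + q_ns + q_ew, plan + [green], best)

best = [float("inf"), None]
search(0, 0, 0, 0, [], best)
print(best)  # [5, ['NS', 'EW', 'EW']]
```

The bound is valid because per-slot waiting is non-negative, so any partial plan already costlier than the incumbent can be discarded.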
V. SECURITY AND PRIVACY IN FOG COMPUTING
Security and privacy issues have not yet been studied in the
context of Fog computing. They have been studied in the context
of smart grids [24] and machine-to-machine communications [25].
4 PROCEEDINGS OF THE FEDCSIS. WARSAW, 2014
Fig. 5. A scenario for a man-in-the-middle attack towards Fog: a 3G (TD-SCDMA/WCDMA/CDMA2000) phone and WLAN (802.11 b/g) devices (PDA, PC) connect to the Cloud through a gateway acting as the Fog device.
There are security solutions for Cloud computing. However,
they may not suit Fog computing, because Fog devices work at
the edge of networks, where they face many threats that do
not exist in a well-managed Cloud. In this section, we discuss
the security and privacy issues in Fog computing.
A. Security Issues
The main security issues are authentication at different
levels of gateways as well as (in case of smart grids) at the
smart meters installed in the consumer’s home. Each smart
meter and smart appliance has an IP address. A malicious
user can tamper with their own smart meter, report false
readings, or spoof IP addresses. There are some solutions
for the authentication problem. The work [26] elaborated
public key infrastructure (PKI) based solutions which involve
multicast authentication. Some authentication techniques using
Diffie-Hellman key exchange have been discussed in [27].
Smart meters encrypt the data and send it to the Fog device,
such as a home-area network (HAN) gateway. The HAN gateway then
decrypts the data, aggregates the results and passes them forward.
Intrusion detection techniques can also be applied in Fog
computing [28]. Intrusion in smart grids can be detected
using either a signature-based method, in which patterns of
behaviour are checked against an existing database of known
misbehaviours, or an anomaly-based method, in which an
observed behaviour is compared with the expected behaviour
to detect deviations. The work [29] develops
an algorithm that monitors power flow results and detects
anomalies in the input values that could have been modified
by attacks. The algorithm detects intrusion by using principal
component analysis to separate power flow variability into
regular and irregular subspaces.
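The subspace idea can be sketched in a few lines of NumPy; the synthetic "power flow" data and the tamper below are illustrative, not the detector of [29]:

```python
import numpy as np

def fit_pca_detector(normal, k):
    """Learn the k-dimensional 'regular' subspace from normal measurements."""
    mean = normal.mean(axis=0)
    _, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
    return mean, vt[:k]          # rows span the regular (principal) subspace

def irregular_energy(sample, mean, basis):
    """Norm of the sample's component outside the regular subspace."""
    x = sample - mean
    return np.linalg.norm(x - basis.T @ (basis @ x))

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 1)) @ np.ones((1, 5))   # correlated power flows
mean, basis = fit_pca_detector(normal, k=1)
clean = irregular_energy(normal[0], mean, basis)
attacked = irregular_energy(normal[0] + np.array([0, 0, 5, 0, 0]), mean, basis)
print(clean < 1e-6 < attacked)  # tampering one reading leaves the regular subspace
```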
B. An Example: Man-in-the-Middle Attack
The man-in-the-middle attack has the potential to become a
typical attack in Fog computing. In this subsection, we take the
man-in-the-middle attack as an example to expose the security
problems in Fog computing. In this attack, gateways serving
Fig. 6. A system design of the man-in-the-middle attack in Fog: a hook process inserted into the Linux IP stack between IP_INPUT and IP_OUTPUT, together with AMR/H.263/H.324M modules, redirects traffic between the victims and the attacker.
as Fog devices may be compromised or replaced by fake ones
[30]. Examples are KFC or Star Bar customers connecting to
malicious access points that advertise a deceptive SSID
mimicking a legitimate public one. Victims’ private communication
will be hijacked once attackers take control of the gateways.
1) Environment Settings of Stealth Test: The man-in-the-middle
attack can be very stealthy in the Fog computing paradigm. This
type of attack consumes only a small amount of resources on
Fog devices, such as negligible CPU utilization and memory
consumption. Therefore, traditional anomaly detection methods
can hardly expose the attack without noticeable features
collected from the Fog. In
order to examine how stealthy the man-in-the-middle attack
can be, we implement an attack environment shown in Figure
5. In this scenario, a 3G user sends a video call to a WLAN
user. Since the man-in-the-middle attack requires control of
the communication between the 3G user and the WLAN user,
the key step of this attack is to compromise the gateway that
serves as the Fog device.
Two steps are needed to realize the man-in-the-middle attack
for the stealth test. First, we need to compromise the gateway,
and second, we insert malicious code into the compromised
system. For susceptible gateways, we can either reflash the
ROM of a normal gateway or place a fake access point in
Fig. 7. The hijacked communication in Fog (e.g. from phone to PC): the compromised gateway relays traffic between the 3G (TD-SCDMA/WCDMA/CDMA2000) phone and the WLAN (802.11 b/g) PC through the attacker in four steps.
Fig. 8. Memory consumption of the man-in-the-middle attack in Fog.
Fig. 9. CPU utilization of the man-in-the-middle attack in Fog.
the environment. Both methods can be easily implemented in
the real world, such as in the KFC or Star Bar environments.
In our experiment, we choose the former and use Broadcom
BCM5354 as the gateway [31]. This device has a high-
performance MIPS32 processor, IEEE 802.11 b/g MAC/PHY
and USB 2.0 controller. Video communication is set up on the
BCM5354 between a 3G mobile phone and a laptop connected
over Wi-Fi. We reflash the ROM of the BCM5354 and update
its system to the open-source Linux kernel 2.4.
In order to hijack and replay victims’ video communication,
we insert a hook program into the TCP/IP stack of the
compromised system. Hooking is a technique of inserting code
into a system call in order to alter it [32]. A typical hook
replaces the function pointer of the call with its own; once
its own processing is done, it calls the original function
pointer. The system structure is illustrated in Figure 6.
We further employ the relevant APIs and data
structures in the system to control the gateway device, such as
boot strap, diagnostics and initialization code. The IP packets
from WLAN will be transferred to and processed in 3G related
modules. We plug a 3G USB modem on BCM5354 device,
on which we implement H.324M for video and audio tunnel
with 3G CS. H.263 and AMR functions are also implemented
as the video and audio codec modules in the system.
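The function-pointer hooking pattern itself is language-agnostic; the Python sketch below mimics it by saving the original routine, installing a wrapper, and calling through (the names are hypothetical, not the kernel-module code of our experiment):

```python
def ip_input(packet):              # stand-in for the gateway's IP_INPUT routine
    return "delivered:" + packet

intercepted = []                   # attacker-visible copy of the traffic

original = ip_input                # save the original function pointer

def hooked_ip_input(packet):
    intercepted.append(packet)     # attacker-side processing (redirect/replay)
    return original(packet)        # then call the original routine

ip_input = hooked_ip_input         # overwrite the "pointer" with the hook

print(ip_input("video-frame"))     # the call still succeeds transparently
print(intercepted)                 # ...but the attacker saw the packet
```

Because the wrapper always falls through to the original routine, communication continues to work and the interception is invisible to the endpoints.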
2) Work Flow of Man-in-the-Middle Attack: The communi-
cation between 3G and WLAN needs a gateway to translate the
data of different protocols into the suitable formats. Therefore,
all the communication data first arrives at the gateway and
is then forwarded to the receivers.
In our experiment, the man-in-the-middle attack is divided
into four steps. We illustrate the hijacked communication from
3G to WLAN in Figure 7. In the first two steps, the embedded
hook process of the gateway redirects the data received from
the 3G user to the attacker. The attacker replays or modifies
the data of the communication on his or her own computer,
and then sends the data back to the gateway. In the final step,
the gateway forwards the data from the attacker to the WLAN
user. In fact, the communication from the WLAN user will also
be redirected to the attacker at first, and then be forwarded by
the hook in the gateway to the 3G user. We can see clearly
from Figure 7 that the attacker can monitor and modify the
data sent from the 3G user to the WLAN user in the ‘middle’
of the communication.
3) Results of Stealth Test: Traditional anomaly detection
techniques rely on the deviation of current communication
from the features of normal communication. These features
include memory consumption, CPU utilization, bandwidth
usage, etc. Therefore, to study the stealth of the man-in-the-
middle attack, we examine the memory consumption and the
CPU utilization of the gateway during the attack. If the attack
does not greatly change these features of the communication,
it can be considered stealthy. For simplicity, we assume the
attacker only replays the data on his or her own computer
and does not modify it.
Firstly, we compare the memory utilization of the gateway
before and after a video call tunnel is built in our experiment.
The results are shown in Figure 8, and the red line in the
plots indicates the average memory consumption. We can see
clearly that the man-in-the-middle attack does not noticeably
affect the video communication. In Figure 8(A), the average
value is 15232 KB, while after we build the video tunnel on
the gateway, the memory consumption reaches 15324.8 KB in
Figure 8(B). Secondly, we show the CPU utilization of the
gateway in Figure 9. Based on the results in Figure 9, we
can again see that the man-in-the-middle attack does not
noticeably affect the video communication. In Figure 9(A),
the average value is 16.6704%, while after the video tunnel
is built, the CPU utilization reaches 17.9260%. We therefore
conclude that the man-in-the-middle attack can be very
stealthy in Fog computing, given the negligible increases in
both memory consumption and CPU utilization in our
experiments.
The man-in-the-middle attack is simple to launch but difficult
to address. In the real world, it is hard to protect Fog
devices from compromise, as the places where Fog devices are
deployed are normally outside rigorous surveillance.
Encrypted communication techniques may also not protect
users from this attack, since attackers can set up a legitimate
terminal and replay the communication without decryption.
Moreover, complex encryption and decryption techniques may
not be suitable for some scenarios; for example, they consume
a lot of battery power on 3G mobile phones. In fact, this
attack is not limited to the scenario of our experiment. Many
applications running in Fog computing are susceptible to the
man-in-the-middle attack. For example, many Internet users
communicate with each other using MSN (Windows Live
Messenger), whose communication data is normally not
encrypted and can be modified in the ‘middle’. Future work
is needed to address the man-in-the-middle attack in Fog
computing.
C. Privacy Issues
In smart grids, privacy issues deal with hiding details, such
as what appliance was used at what time, while allowing
correct summary information for accurate charging. R. Lu et
al. described an efficient and privacy-preserving aggregation
scheme for smart grid communications [33]. It uses a super-
increasing sequence to structure multi-dimensional data and
encrypts the structured data with a homomorphic encryption
technique. A homomorphic function takes as input the
encrypted data from the smart meters and produces an
encryption of the aggregated result. The Fog device can
neither decrypt the readings from the smart meters nor
tamper with them. This
ensures the privacy of the data collected by smart meters,
but does not guarantee that the Fog device transmits the
correct report to the other gateways. For data communications
from user to smart grid operation center, data aggregation is
performed directly on cipher-text at local gateways without
decryption, and the aggregation result of the original data can
be obtained at the operation center [33]. Authentication cost
is reduced by a batch verification technique.
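The additive homomorphism that makes such in-network aggregation possible can be demonstrated with a toy Paillier cryptosystem (tiny parameters for illustration only, not secure, and not the full EPPA scheme of [33]):

```python
import math, random

# Toy Paillier cryptosystem with tiny primes (illustration only, not secure).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)    # decryption constant

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The Fog gateway multiplies ciphertexts; decrypting the product yields the
# plaintext sum, so readings are aggregated without the gateway ever seeing them.
readings = [12, 7, 30]                 # smart-meter readings (sum must be < n)
aggregate = math.prod(encrypt(m) for m in readings) % n2
print(decrypt(aggregate))  # 49
```

The gateway only ever handles ciphertexts, which captures the privacy property described above: correct summary information is preserved while individual readings stay hidden.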
VI. CONCLUSIONS AND FUTURE WORK
We investigate the advantages of Fog computing for services
in several domains, and analyse the state of the art and
security issues in the current paradigm. This work may inspire
future innovations in compute and storage to handle
data-intensive services based on the interplay between Fog
and Cloud.
Future work will expand on the Fog computing paradigm in
Smart Grid. In this scenario, two models for Fog devices can
be developed. Independent Fog devices consult directly with
the Cloud for periodic updates on price and demands, while
interconnected Fog devices may consult each other, and create
coalitions for further enhancements.
Next, Fog computing based SDN in vehicular networks will
receive due attention. For instance, optimal scheduling in
one communication period, extended toward all communication
periods, has been elaborated in [6]. Traffic light control
can also be assisted by the Fog computing concept. Finally,
mobility between Fog nodes, and between Fog and Cloud, can
be investigated. Unlike traditional data centres, Fog devices
are geographically distributed over heterogeneous platforms.
Service mobility across platforms needs to be optimized.
REFERENCES
[1] F. Bonomi, “Connected vehicles, the internet of things, and fog com-
puting,” in The Eighth ACM International Workshop on Vehicular Inter-
Networking (VANET), Las Vegas, USA, 2011.
[2] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and
its role in the internet of things,” in Proceedings of the First Edition of
the MCC Workshop on Mobile Cloud Computing, ser. MCC’12. ACM,
2012, pp. 13–16.
[3] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski,
G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A view of
cloud computing,” Commun. ACM, vol. 53, no. 4, pp. 50–58, Apr 2010.
[4] C. Wei, Z. Fadlullah, N. Kato, and I. Stojmenovic, “On optimally
reducing power loss in micro-grids with power storage devices,” IEEE
Journal on Selected Areas in Communications, to appear, 2014.
[5] L. Atzori, A. Iera, and G. Morabito, “The internet of things: A survey,”
Comput. Netw., vol. 54, no. 15, pp. 2787–2805, Oct. 2010.
[6] K. Liu, J. Ng, V. Lee, S. Son, and I. Stojmenovic, “Cooperative data
dissemination in hybrid vehicular networks: Vanet as a software defined
network,” Submitted for publication, 2014.
[7] K. Kirkpatrick, “Software-defined networking,” Commun. ACM, vol. 56,
no. 9, pp. 16–19, Sep. 2013.
[8] Cisco, “Cisco delivers vision of fog computing to accelerate value from
billions of connected devices,” Cisco, Tech. Rep., Jan. 2014.
[9] K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwälder, and B. Kold-
ehofe, “Opportunistic spatio-temporal event processing for mobile situ-
ation awareness,” in Proceedings of the 7th ACM International Confer-
ence on Distributed Event-based Systems, ser. DEBS’13. ACM, 2013,
pp. 195–206.
[10] H. Madsen, G. Albeanu, B. Burtschy, and F. Popentiu-Vladicescu,
“Reliability in the utility computing era: Towards reliable fog comput-
ing,” in Systems, Signals and Image Processing (IWSSIP), 2013 20th
International Conference on, July 2013, pp. 43–46.
[11] K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwälder, and B. Kold-
ehofe, “Mobile fog: A programming model for large-scale applications
on the internet of things,” in Proceedings of the Second ACM SIGCOMM
Workshop on Mobile Cloud Computing, ser. MCC’13. ACM, 2013, pp.
15–20.
[12] T. Nishio, R. Shinkuma, T. Takahashi, and N. B. Mandayam, “Service-
oriented heterogeneous resource sharing for optimizing service latency
in mobile cloud,” in Proceedings of the First International Workshop on
Mobile Cloud Computing and Networking, ser. MobileCloud’13. ACM,
2013, pp. 19–26.
[13] B. Ottenwalder, B. Koldehofe, K. Rothermel, and U. Ramachandran,
“Migcep: Operator migration for mobility driven distributed complex
event processing,” in Proceedings of the 7th ACM International Confer-
ence on Distributed Event-based Systems, ser. DEBS’13. ACM, 2013,
pp. 183–194.
[14] J. Zhu, D. Chan, M. Prabhu, P. Natarajan, H. Hu, and F. Bonomi,
“Improving web sites performance using edge servers in fog computing
architecture,” in Service Oriented System Engineering (SOSE), 2013
IEEE 7th International Symposium on, March 2013, pp. 320–323.
[15] BETaaS, “Building the environment for the things as a service,” BETaaS,
Tech. Rep., Nov. 2012.
[16] S. Maharjan, Q. Zhu, Y. Zhang, S. Gjessing, and T. Basar, “Dependable
demand response management in the smart grid: A stackelberg game
approach,” Smart Grid, IEEE Transactions on, vol. 4, no. 1, pp. 120–
132, March 2013.
[17] D. Korzhyk, V. Conitzer, and R. Parr, “Solving stackelberg games
with uncertain observability,” in The 10th International Conference on
Autonomous Agents and Multiagent Systems - Volume 3, ser. AAMAS
’11, 2011, pp. 1013–1020.
[18] Z. Fadlullah, D. Quan, N. Kato, and I. Stojmenovic, “Gtes: An optimized
game-theoretic demand-side management scheme for smart grid,” Sys-
tems Journal, IEEE, vol. 8, no. 2, pp. 588–597, June 2014.
[19] T. Luo, H.-P. Tan, and T. Quek, “Sensor openflow: Enabling software-
defined wireless sensor networks,” Communications Letters, IEEE,
vol. 16, no. 11, pp. 1896–1899, Nov. 2012.
[20] Y. Daraghmi, C.-W. Yi, and I. Stojmenovic, “Forwarding methods in
data dissemination and routing protocols for vehicular ad hoc networks,”
Network, IEEE, vol. 27, no. 6, pp. 74–79, November 2013.
[21] B. Zhou, J. Cao, X. Zeng, and H. Wu, “Adaptive traffic light control
in wireless sensor network-based intelligent transportation system,” in
Vehicular Technology Conference Fall (VTC 2010-Fall), 2010 IEEE
72nd, Sept 2010, pp. 1–5.
[22] B. Zhou, J. Cao, and H. Wu, “Adaptive traffic light control of multiple
intersections in wsn-based its,” in Vehicular Technology Conference
(VTC Spring), 2011 IEEE 73rd, May 2011, pp. 1–5.
[23] C. Li and S. Shimamoto, “An open traffic light control model for
reducing vehicles co2 emissions based on etc vehicles,” Vehicular
Technology, IEEE Transactions on, vol. 61, no. 1, pp. 97–110, Jan 2012.
[24] W. Wang and Z. Lu, “Survey cyber security in the smart grid: Survey
and challenges,” Comput. Netw., vol. 57, no. 5, pp. 1344–1371, Apr.
2013.
[25] R. Lu, X. Li, X. Liang, X. Shen, and X. Lin, “Grs: The green, relia-
bility, and security of emerging machine to machine communications,”
Communications Magazine, IEEE, vol. 49, no. 4, pp. 28–35, April 2011.
[26] Y. W. Law, M. Palaniswami, G. Kounga, and A. Lo, “Wake: Key
management scheme for wide-area measurement systems in smart grid,”
Communications Magazine, IEEE, vol. 51, no. 1, pp. 34–41, January
2013.
[27] Z. Fadlullah, M. Fouda, N. Kato, A. Takeuchi, N. Iwasaki, and Y. Noza-
ki, “Toward intelligent machine-to-machine communications in smart
grid,” Communications Magazine, IEEE, vol. 49, no. 4, pp. 60–65, April
2011.
[28] C. Modi, D. Patel, B. Borisaniya, H. Patel, A. Patel, and M. Rajarajan, “A
survey of intrusion detection techniques in cloud,” Journal of Network
and Computer Applications, vol. 36, no. 1, pp. 42–57, 2013.
[29] J. Valenzuela, J. Wang, and N. Bissinger, “Real-time intrusion detection
in power system operations,” Power Systems, IEEE Transactions on,
vol. 28, no. 2, pp. 1052–1062, May 2013.
[30] L. Zhang, W. Jia, S. Wen, and D. Yao, “A man-in-the-middle attack
on 3g-wlan interworking,” in Communications and Mobile Computing
(CMC), International Conference on, vol. 1, April 2010, pp. 121–125.
[31] Broadcom bcm 5354. [Online]. Available: http://www.broadcom.com/
products/Wireless-LAN/802.11-Wireless-LAN-Solutions/BCM5354
[32] Wikipedia. (2014) Hooking, what is hooking? [Online]. Available:
http://en.wikipedia.org/wiki/Hooking
[33] R. Lu, X. Liang, X. Li, X. Lin, and X. Shen, “Eppa: An efficient
and privacy-preserving aggregation scheme for secure smart grid com-
munications,” Parallel and Distributed Systems, IEEE Transactions on,
vol. 23, no. 9, pp. 1621–1631, Sept 2012.