Named Data Networking (NDN) has recently been extended to enable the discovery and provisioning by name of in-network cloud-like services such as processing and data storage. Such a feature is particularly helpful in distributed edge environments, where mobile end devices can be involved in service offering. In such a context, a challenging issue is to motivate a mobile device to behave as a provider and share its resources (e.g., CPU, memory) in order to assist other end devices (consumers), in wireless proximity, asking for a given service. In this paper, we propose a solution revolving around two concepts: enhanced NDN primitives and a social-driven stimulus for mobile devices to volunteer as service providers. A bio-inspired response function is used by a potential provider's device to rate both its available resources and the social ties with the current consumer's device, established according to the Social Internet of Things (SIoT) paradigm. An early evaluation showcases the extent to which the conceived solution allows a consumer to find a nearby provider available to offer its services, under different social neighbourhood settings.
According to the fog computing paradigm, a plethora of Internet of Things (IoT) applications will benefit from the distributed execution of services and applications, and from processing and storage in proximity to data sources. However, new networking primitives and service orchestration mechanisms need to be designed to cope with the dynamic, distributed, and heterogeneous nature of fog resources. A solution to such a challenging deployment would be the coupling of two revolutionary future Internet paradigms, Software-Defined Networking (SDN) and Named Data Networking (NDN). Indeed, on the one side, SDN can support wise orchestration decisions by leveraging a centralized intelligence and a global view of resources. On the other side, NDN well matches the service-centric nature of fog computing by letting (named) services be discovered regardless of the identity of the specific executor. In the paper, we dissect the strengths of this integrated solution, with a special focus on evaluating the benefits of the NDN stateful data plane coupled with the centralized SDN control plane, compared to a traditional IP-based host-centric software-defined approach.
By enabling name-based routing and ubiquitous in-network caching, Named Data Networking (NDN) is a promising network architecture for sixth generation (6G) edge network infrastructures. However, the performance of content retrieval largely depends on the selected caching strategy, which is implemented in a distributed fashion by each NDN node. Previous research showed the effectiveness of caching decisions based on content popularity and network topology information. This paper presents a new distributed caching strategy for NDN edge networks based on a metric called popularity-aware closeness (PaC), which measures the proximity of the potential cacher to the majority of requesters of a certain content. After identifying the most popular contents, the strategy caches them in the available edge nodes that guarantee the highest PaC. Achieved simulation results show that the proposed strategy outperforms other benchmark schemes in terms of reduced content retrieval delay and exchanged ...
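The abstract above does not report the exact PaC formula, so the following is a hypothetical sketch of how such a metric could be computed: a candidate cacher scores a content by weighting each requester's share of the demand by its inverse hop distance, so the node close to the heaviest requesters wins. The function names and the multiplicative form are assumptions for illustration.

```python
# Hypothetical PaC-style score (the paper's exact formula may differ):
# each requester's fraction of the total demand for a content is weighted
# by the inverse of its hop distance from the candidate cacher.

def pac_score(request_counts, hop_distance):
    """request_counts: {requester_id: number of requests for the content}
    hop_distance: {requester_id: hops from the candidate cacher}"""
    total = sum(request_counts.values())
    if total == 0:
        return 0.0
    return sum(
        (count / total) / (1 + hop_distance[r])
        for r, count in request_counts.items()
    )

def best_cacher(candidates, request_counts, distances):
    """distances: {candidate_id: {requester_id: hops}}.
    Returns the candidate edge node with the highest PaC-style score."""
    return max(candidates, key=lambda c: pac_score(request_counts, distances[c]))
```

For example, a node one hop away from a requester issuing 90% of the requests outranks a node one hop away from a requester issuing only 10% of them.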
ICC 2022 - IEEE International Conference on Communications
Federated learning (FL) is gaining momentum as a prominent solution to perform training procedure... more Federated learning (FL) is gaining momentum as a prominent solution to perform training procedures without the need to move sensitive end-user data to a centralized third party server. In FL, models are locally trained at distributed enddevices, acting as clients, and only model updates are transferred from the clients to the aggregator, which is in charge of global model aggregation. Although FL can ensure better privacy preservation than centralized machine learning (ML), it exhibits still some concerns. First, clients need to be properly discovered and selected to ensure that highly accurate models are built. Second, huge models may still require to be exchanged from the aggregator to all the selected clients, incurring a not negligible network footprint. To tackle such issues, in this paper, we propose a framework built upon in-network caching, multicast and namebased data delivery, natively provided by the Named Data Networking (NDN) paradigm, in order to support client discovery and aggregator-clients data exchange. Benefits of the proposal are showcased when compared to a conventional application-layer solution.
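The global aggregation step mentioned above is commonly realized as a sample-weighted average of the clients' model updates (FedAvg-style); the sketch below shows that step only, independently of the NDN transport the paper proposes. The flat-list representation of model weights is an assumption for brevity.

```python
# Minimal FedAvg-style aggregation sketch: the aggregator averages client
# model weights, weighting each client by the number of local samples it
# trained on. Models are represented as flat lists of floats for simplicity.

def federated_average(updates):
    """updates: list of (num_samples, weights) tuples, one per client.
    Returns the sample-weighted average of the clients' weight vectors."""
    total = sum(n for n, _ in updates)
    dim = len(updates[0][1])
    return [sum(n * w[i] for n, w in updates) / total for i in range(dim)]
```

A client holding three times the data of another pulls the global model three times as strongly toward its own update.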
Thanks to recent advancements in edge computing, the traditional centralized cloud-based approach to deploy Artificial Intelligence (AI) techniques will soon be replaced or complemented by the so-called edge AI approach. By pushing AI to the network edge, close to the large amount of raw input data, the traffic traversing the core network as well as the inference latency can be reduced. Despite such neat benefits, the actual deployment of edge AI across distributed nodes raises novel challenges to be addressed, such as the need to enforce proper addressing and discovery procedures, to identify AI components, and to chain them in an interoperable manner. Named Data Networking (NDN) has recently been advocated as one of the main enablers of network and computing convergence, which edge AI should build upon. However, the peculiarities of such a new paradigm entail going a step further. In this paper, we disclose the potential of NDN to support the orchestration of edge AI. Several motivations are discussed, as well as the challenges which serve as guidelines for progress beyond the state of the art in this topic.
ICC 2020 - 2020 IEEE International Conference on Communications (ICC), 2020
Software Defined Networking (SDN) and Named Data Networking (NDN) have recently been advocated as complementary paradigms to improve content distribution in the next-generation Internet. On the one hand, SDN offers a centralized control plane that can optimize routing decisions; on the other, the distinctive features of the NDN data plane, such as name-based delivery, in-network caching, and stateful forwarding, simplify data dissemination. In the integrated design, when a request cannot be handled locally at the NDN data plane in the Forwarding Information Base (FIB), the SDN Controller is contacted to inject the forwarding rule. Decisions such as which rules need to be stored in the node and for how long deeply affect the packet forwarding performance. This paper discusses the issues related to forwarding rules in the FIBs of SDN-controlled NDN nodes, specifically accounting for their name-based nature, which represents a key novelty compared to legacy SDN implementations. Quantitative results are reported to showcase the impact of crucial parameters, such as content popularity, content request rate, and table size, on the FIB performance in terms of valuable metrics (e.g., hit ratio, rejected requests, incurred signaling with the Controller).
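The interplay between table size, hit ratio, and controller signaling described above can be sketched as a bounded name-based FIB with a miss-driven controller query. The LRU eviction policy and the class interface are illustrative assumptions, not the paper's specific rule-management scheme.

```python
from collections import OrderedDict

class NameFib:
    """Sketch of a bounded name-based FIB in an SDN-controlled NDN node:
    on a miss, the controller is queried and the returned rule is installed,
    evicting the least-recently-used entry when the table is full (one
    plausible policy among those such a design could adopt)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()        # name prefix -> next hop
        self.hits = 0
        self.misses = 0                   # each miss = one controller signal

    def lookup(self, name, controller):
        if name in self.table:
            self.hits += 1
            self.table.move_to_end(name)  # refresh LRU position
            return self.table[name]
        self.misses += 1
        next_hop = controller(name)       # signaling toward the controller
        if len(self.table) >= self.capacity:
            self.table.popitem(last=False)  # evict least-recently-used rule
        self.table[name] = next_hop
        return next_hop
```

With a capacity of 2 and the request stream /a, /b, /a, /c, /b, only the repeated /a hits; the small table forces four controller contacts, illustrating how table size drives signaling overhead.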
Task offloading to the edge is gaining momentum to support compute-intensive interactive applications which, on the one hand, can hardly run on resource-limited consumer devices and, on the other hand, may suffer from running in the cloud due to strict delay constraints. The availability of network nodes with heterogeneous capabilities in the distributed edge infrastructure paves the way to groundbreaking in-network computing scenarios, but makes the computing task allocation decision more challenging. The straightforward approach of offloading the computation task to the edge node that is the nearest to the data source may lead to performance inefficiencies. Indeed, such an edge node may easily get overloaded, thus failing to ensure low-latency task execution. A more judicious strategy is required, which accounts for the edge nodes' processing capabilities, for the queuing delay accumulated when tasks wait before being executed, and for their connectivity. In this paper, we propose a novel optimal computing task allocation strategy aimed at minimizing the network resource usage, while bounding the execution latency at the edge node acting as the task executor. We formulate the optimal task allocation through an integer linear programming problem, assuming an edge infrastructure managed through software-defined networking. Achieved results show that the proposal meets the targeted objectives under all the considered simulation settings and significantly outperforms other benchmark solutions.
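The objective described above (minimize network usage subject to a latency bound) can be illustrated for a single task with a greedy sketch: filter out nodes whose estimated execution latency exceeds the bound, then pick the feasible node fewest hops from the data source. This is a one-task stand-in for the paper's ILP formulation; the node attributes and the M/M/1-style latency estimate are assumptions.

```python
def allocate(task_load, nodes, latency_bound):
    """Greedy single-task sketch of latency-bounded allocation.
    task_load: work units of the task.
    nodes: list of dicts with 'id', 'rate' (service rate), 'queue'
           (work units already queued), and 'hops' (distance from source).
    Returns the id of the nearest node whose estimated latency
    (queued work + task) / rate stays within latency_bound, else None."""
    feasible = [
        n for n in nodes
        if (n["queue"] + task_load) / n["rate"] <= latency_bound
    ]
    if not feasible:
        return None
    # Minimizing hops stands in for minimizing network resource usage.
    return min(feasible, key=lambda n: n["hops"])["id"]
```

Note how the nearest node is skipped when its queue makes the latency bound unattainable, which is exactly the inefficiency of nearest-node offloading the abstract points out.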
IEEE Transactions on Green Communications and Networking, 2021
In-network caching is one of the main pillars of the Named Data Networking (NDN) paradigm, where every Internet router, in the path between data sources and consumers, can cache incoming content packets. Multiple strategies have been designed for caching Internet of Things (IoT) data streamed by resource-constrained devices in edge domains and wireless sensor networks, while the benefits of IoT data caching at Internet scale, including both edge and core network segments, have not been fully disclosed. In this work, we propose and analyse a novel probabilistic Internet-scale caching design for IoT data, which jointly accounts for the content popularity and lifetime. In the considered scenario, IoT contents are requested by remote consumers and delivered by crossing multiple edge and core network segments of the NDN-based future Internet. The proposal is composed of two distinct reactive caching strategies, a coordinated and an autonomous one, to be implemented in the edge and core domain, respectively. Achieved results show that the proposal outperforms state-of-the-art solutions by providing, among others, the highest cache hit ratio and the shortest number of hops. Such performance testifies to a lower pressure on energy-constrained devices and on the network infrastructure, overall contributing to the sustainability of the IoT ecosystem.
Among the distinctive features of the Named Data Networking (NDN) paradigm, in-network caching plays a crucial role in improving data delivery performance. At the network edge, the benefits of in-network caching can be remarkable in terms of reduced network traffic and user-perceived access latency. Multiple strategies have been designed for caching static Internet contents in NDN routers, while less attention has been devoted to transient Internet of Things (IoT) data, which expire after a certain amount of time. In this work, we introduce a novel distributed and autonomous caching strategy where NDN nodes take decisions by considering the popularity of IoT data and their lifetime. The target is to cache the most popular data with the highest residual lifetime in order to maximize the cache hit ratio at the edge. Performance evaluation carried out with the ndnSIM network simulator shows that the proposed solution outperforms existing schemes available in the literature by guaranteeing, among others, the highest cache hit ratio and the shortest content retrieval time.
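A decision rule combining popularity and residual lifetime, as targeted above, can be sketched by multiplying a normalized popularity term with the fraction of lifetime a content still has left, and caching when the product exceeds a threshold. The multiplicative combination and the threshold value are illustrative assumptions, not the paper's actual strategy.

```python
# Illustrative joint popularity/lifetime caching score: popular content
# whose residual lifetime is a large fraction of its FreshnessPeriod
# scores high; stale or unpopular content scores low.

def caching_score(request_count, max_count, residual_lifetime, freshness_period):
    """Normalized popularity times residual-lifetime fraction, in [0, 1]."""
    pop = request_count / max_count if max_count else 0.0
    fresh = residual_lifetime / freshness_period if freshness_period else 0.0
    return pop * fresh

def should_cache(score, threshold=0.5):
    """Cache the incoming Data packet when the score clears the threshold
    (threshold value is an assumption for illustration)."""
    return score >= threshold
```

A content seen in 8 of the last 10 requests with 90% of its lifetime left scores 0.72 and is cached; the same freshness with 2 of 10 requests scores 0.18 and is skipped.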
IEEE Transactions on Network and Service Management, 2021
With more than 75 billion objects connected by 2025, the Internet of Things (IoT) is the catalyst for the digital revolution, contributing to the generation of large amounts of (transient) data, which calls into question the storage and processing performance of the conventional cloud. Moving storage resources to the edge can reduce the data retrieval latency and save core network resources, albeit the actual performance depends on the selected caching policy. Existing edge caching strategies mainly account for the content popularity as the crucial decision metric and do not consider the transient nature of IoT data. In this article, we design a caching orchestration mechanism, deployed as a network application on top of a software-defined networking Controller in charge of the edge infrastructure, which accounts for the nodes' storage capabilities, the network links' available bandwidth, and the IoT data lifetime and popularity. The policy decides which IoT contents have to be cached and in which node of a distributed edge deployment with limited storage resources, with the ultimate aim of minimizing the data retrieval latency. We formulate the optimal content placement through an Integer Linear Programming (ILP) problem and propose a heuristic algorithm to solve it. Results show that the proposal outperforms the considered benchmark solutions in terms of latency and cache hit probability, under all the considered simulation settings.
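The capacity-constrained placement decision described above can be illustrated with a simple greedy heuristic: rank contents by a score blending popularity and lifetime, then place each on the node with the most free storage that can still fit it. This is a toy stand-in for the paper's ILP and heuristic, which also account for link bandwidth; every detail below is an assumption for illustration.

```python
def place_contents(contents, free_capacity):
    """Greedy placement sketch under storage limits only.
    contents: list of (name, size, score) where score blends popularity
              and residual lifetime (higher = more worth caching).
    free_capacity: {node_id: free storage units}; mutated in place.
    Returns {content_name: node_id} for the contents that fit."""
    placement = {}
    # Best-scored contents are placed first, so storage runs out for the
    # least valuable ones, mirroring the orchestrator's priority.
    for name, size, score in sorted(contents, key=lambda c: -c[2]):
        node = max(free_capacity, key=free_capacity.get)  # most free space
        if free_capacity[node] >= size:
            free_capacity[node] -= size
            placement[name] = node
    return placement
```

With nodes offering 4 and 3 units and three contents of sizes 2, 3, and 2, the lowest-scored 3-unit content is the one left uncached once the better-scored contents have consumed the space.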
By leveraging the global interconnection of billions of tiny smart objects, the Internet of Things (IoT) paradigm is the main enabler of smart environments, ranging from smart cities to building automation, smart transportation, smart grids, and healthcare [...]
Named Data Networking (NDN) is a promising communication paradigm for the challenging vehicular ad hoc environment. In particular, the built-in pervasive caching capability was shown to be essential for effective data delivery in the presence of short-lived and intermittent connectivity. Existing studies have however not considered the fact that multiple vehicular contents can be transient, i.e., they expire after a certain time period since they were generated, the so-called FreshnessPeriod in NDN. In this paper, we study the effects of caching transient contents in Vehicular NDN and present a simple yet effective freshness-driven caching decision strategy that vehicles can implement autonomously. Performance evaluation in ndnSIM shows that the FreshnessPeriod is a crucial parameter that deeply influences the cache hit ratio and, consequently, the data dissemination performance.
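One way a vehicle could take such an autonomous freshness-driven decision is probabilistically: cache a passing Data packet with probability equal to the fraction of its FreshnessPeriod still remaining, so fresher content is more likely to be cached. This is an illustrative variant, not necessarily the paper's exact rule; the function name and interface are assumptions.

```python
import random

def freshness_cache_decision(residual_freshness, freshness_period, rng=random):
    """Cache a transient content with probability equal to its residual
    FreshnessPeriod fraction (illustrative probabilistic rule).
    residual_freshness / freshness_period: seconds remaining vs. total.
    rng: any object with a random() method, injectable for testing."""
    if freshness_period <= 0:
        return False
    p = max(0.0, min(1.0, residual_freshness / freshness_period))
    return rng.random() < p
```

A content halfway through its FreshnessPeriod is cached half the time; an almost-expired one is almost never cached, keeping the vehicle's small cache populated with data that will still be valid when re-requested.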
By offering low-latency and context-aware services, fog computing will play a pivotal role in the deployment of Internet of Things (IoT) applications for smart environments. Unlike the conventional remote cloud, for which consolidated architectures and deployment options exist, many design and implementation aspects remain open when considering the latest fog computing paradigm. In this paper, we focus on the problems of dynamically discovering the processing and storage resources distributed among fog nodes and, accordingly, orchestrating them for the provisioning of IoT services for smart environments. In particular, we show how these functionalities can be effectively supported by the revolutionary Named Data Networking (NDN) paradigm. Originally conceived to support named content delivery, NDN can be extended to request and provide named computation services, with NDN nodes acting as both content routers and in-network service executors. To substantiate our analysis, we present...
Papers by Giuseppe Ruggeri