I am an assistant professor in the Faculty of Computer Science and Engineering at Shahid Beheshti University (SBU), Tehran, and a resident researcher at the Institute for Research in Fundamental Sciences (IPM). My research interests are: 1) complex and social networks; 2) robustness and resilience of networks; 3) quantum complex networks.
As the number of elements in large-scale massively parallel computers, multiprocessor systems-on-chip (MP-SoCs), and peer-to-peer communication networks increases, the probability of component failure becomes significant. Consequently, fault tolerance becomes a key issue in the design of such systems. Adaptive routing algorithms have frequently been suggested as a means of improving communication performance in these systems. Unlike deterministic routing, such algorithms can use network state information to exploit the presence of multiple paths. Before these schemes can be successfully incorporated into networks, a clear understanding of the factors that affect their performance potential is necessary. This paper investigates the performance of nine prominent adaptive fault-tolerant routing algorithms in wormhole-switched 2-D tori with a routing scheme suggested by Chalasani and Boppana [1], as an instance of a fault-tolerant method. The suggested scheme is wi...
Computation of the second-order delay in RC-tree-based circuits is important during the design process of modern VLSI systems, which commonly have tree-structured circuits. Calculation of the second- and higher-order moments is possible in tree-based networks. Because of its closed-form solution, computation speed, and ease of use in VLSI performance-optimization methods such as floorplanning, placement, and routing, the Elmore delay metric was widely used for past-generation circuits. However, physical and logical synthesis optimizations require fast and accurate analysis techniques for RC networks. Elmore first proposed matching circuit moments to a probability density function (PDF), which led to its widespread adoption in many networks. However, the accuracy of the Elmore metric is sometimes unacceptable for the RC interconnect problems of today's CMOS technologies. The main idea behind our approach is based on the moment matching technique with the p...
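For context, the first moment referred to above is the classic Elmore delay: for each node, the sum over all capacitors of the resistance shared between the root-to-node and root-to-capacitor paths, weighted by that capacitance. A minimal illustrative sketch (not the paper's higher-order method; the tree, resistance, and capacitance values are hypothetical):

```python
def elmore_delay(parent, R, C, node):
    """Elmore delay (first moment) at `node` in an RC tree.

    parent[k] is k's parent (None for the root); R[k] is the resistance of
    the edge into k; C[k] is the capacitance at node k.
    """
    def path(n):                      # nodes on the root->n path, excluding the root
        p = []
        while parent[n] is not None:
            p.append(n)
            n = parent[n]
        return set(p)

    target = path(node)
    delay = 0.0
    for k in C:                       # each capacitor contributes through the
        shared = target & path(k)     # resistance it shares with `node`'s path
        delay += C[k] * sum(R[e] for e in shared)
    return delay

# Hypothetical 3-node chain: root -> a -> b, 1 kOhm per segment, 1 pF per node.
parent = {"root": None, "a": "root", "b": "a"}
R = {"a": 1e3, "b": 1e3}
C = {"root": 0.0, "a": 1e-12, "b": 1e-12}
# Delay at b: 1e3*1e-12 (from a) + (1e3+1e3)*1e-12 (from b) = 3 ns
print(elmore_delay(parent, R, C, "b"))
```

The paper's contribution concerns moments beyond this first one; the sketch only fixes the baseline metric being improved upon.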
Autism is one of the most important neurological disorders, leading to problems in a person's social interactions. Improvements in brain imaging technologies and techniques help us build brain structural and functional networks. Finding network topology patterns in each group (autism and healthy control) can help us develop an autism screening model. In the present study, we used a genetic algorithm to extract a discriminative sub-network that better represents the differences between the two groups. In the fitness evaluation phase, for each sub-network, a machine learning model was trained using various entropy features of the sub-network, and its performance was measured; good model performance implies a good discriminative sub-network. Network entropies can thus be used as network topological descriptors. The evaluation results indicate the acceptable performance of the proposed screening method based on extracted discriminative sub-networks ...
Memory access, a primary performance bottleneck of any processing unit, also plays a significant role in GPU performance. In addition to the challenging parts of the GPU's memory access path, low locality among requests considerably increases memory access delay. Despite their immense processing power, GPUs cannot reach their maximum throughput because of memory access bottlenecks: memory divergence and poor locality among L1-missed requests significantly increase last-level-cache contention and main-memory row-switching overheads. Moreover, the interconnection network routes request packets regardless of their locality properties, which considerably disrupts locality among the requests. In this paper, we propose Locality-Aware Resource Allocation (LARA) to reduce streaming-multiprocessor stall time by arbitrating among memory request packets in favor of locality maintenance in the GPU's interconnection network. In addition, before memory requests are injected into the interconnection network, they are reordered in the injection-port buffer based on thread-block equality. We thus take a comprehensive approach to improving GPU performance by decreasing the average memory access delay, focusing on request locality to reduce last-level-cache contention and the main-memory row-switching rate. As a result, speed-ups of 33% maximum and 17% on average across the benchmarks used are reported, without significant effects on system area or power consumption.
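The injection-port reordering step described above can be sketched as a stable grouping of waiting requests by thread-block id, so that requests from the same block (which tend to touch the same cache lines and DRAM rows) leave back-to-back. This is an illustrative sketch only; the field names and queue structure are assumptions, not the paper's data structures:

```python
from collections import OrderedDict

def reorder_by_thread_block(buffer):
    """Stable grouping: preserve first-seen thread-block order, but emit all
    requests of a block contiguously to keep their locality intact."""
    groups = OrderedDict()
    for req in buffer:
        groups.setdefault(req["tb"], []).append(req)
    return [r for reqs in groups.values() for r in reqs]

# Hypothetical interleaved buffer: two thread blocks, addresses per block
# fall in the same region.
buf = [{"tb": 0, "addr": 0x100}, {"tb": 1, "addr": 0x900},
       {"tb": 0, "addr": 0x140}, {"tb": 1, "addr": 0x940}]
print([r["tb"] for r in reorder_by_thread_block(buf)])  # [0, 0, 1, 1]
```

After reordering, same-block requests reach the last-level cache and DRAM consecutively, which is the locality property the paper exploits.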
Recovery of complex networks is an important issue that arises in various fields. Much work has been done to measure and improve the stability of complex networks during attacks, and recently many studies have focused on network recovery strategies after an attack. In many real cases, link retrieval and recovery of critical infrastructures, such as transmission networks and telecommunications infrastructure, are of particular importance and should be prioritized. For example, when a flood disrupts optical-fibre communications in a transmission network and paralyzes it, link retrieval corresponds to recovering the fibre links so that the network's communication capacity can be restored as early as possible. Thus, predicting the appropriate reserved links so that the network can be recovered at the lowest cost and in the shortest time after attacks or interruptions is critical in a disaster. In this article, different kinds of att...
International Journal of Computer Mathematics: Computer Systems Theory
Recursive decomposition of networks is a widely used approach in network analysis for factorizing network structure into small subgraph patterns with few nodes. These patterns are called graphlets (motifs), and their analysis is a common approach in bioinformatics. This paper focuses on evaluating the importance of graphlets in networks and proposes a new analytical model for ranking graphlet importance based on their contribution to the graph energy spectrum. In addition, a general formula is provided to calculate a graphlet's contribution to the total energy of a graph; the energy value of the graph is then estimated from its graphlets. The results of the empirical analysis of synthetic and real networks are consistent with the theoretical results and suggest that the proposed analytical model can accurately estimate the structural features of a given graph based on its graphlets.
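The graph energy referred to above is, in the standard definition, the sum of the absolute values of the adjacency matrix's eigenvalues. A minimal computation of the total energy (independent of the paper's graphlet decomposition, which estimates this quantity from subgraphs):

```python
import numpy as np

def graph_energy(adj):
    """Graph energy: sum of |eigenvalues| of the (symmetric) adjacency matrix."""
    eigs = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.sum(np.abs(eigs)))

# Triangle K3: eigenvalues are 2, -1, -1, so the energy is 4.
K3 = [[0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
print(graph_energy(K3))  # ~= 4.0
```

The paper's model approximates this total from graphlet contributions rather than from a full eigendecomposition, which matters for large networks.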
The significance of existing analysis methods for complex networks, together with easy access to an ever-increasing volume of information, has prompted new methods in various fields based on complex-system ideas. However, these systems usually face various random failures and intelligent attacks. Given the nature of the components' behaviors, failures and faults in their operation and the alteration of their topologies are the most important problems. Since complex systems are usually used as the infrastructure of other networks, their robustness against failures, and the adoption of suitable precautions, are necessary. Moreover, the small-world effect is one of the crucial structural features of most complex systems. The authors found that the relation between these two properties is not well understood and that they may even conflict in some networks. The main goal of this paper is to achieve an optimal topology by utilizing a robustness-oriented multi-objective trade-off optimization model (edge rewiring) to reconcile the two requirements. By proposing a rewiring method with the small-world effect, called the core-periphery Windmill property, the authors demonstrate that the generated networks exhibit appropriate robustness even under intelligent attacks. The results show that Windmill graphs provide very good approximations of the small-world effect; these graphs are used as the initial core in the construction of the optimized network topologies.
Chaos: An Interdisciplinary Journal of Nonlinear Science
This paper contributes to detecting chaotic behaviors in dynamic complex social networks using a new feature-diffusion-aware model, from the two perspectives of abnormal links and abnormal nodes. The proposed approach constructs a probabilistic model of dynamic complex social networks and subsequently applies it to detect chaotic behaviors by measuring deviations from the model. The predictive model considers the main processes of feature dynamics: the evolution of nodes' features, feature diffusion, and link generation in dynamic complex social networks. The feature diffusion process is the process in which each node's former features influence the future features of its neighbors. The proposed approach is validated by experiments on two real dynamic complex social network datasets, Google+ and Twitter. The approach uses Markov Chain Monte Carlo sampling methods, such as the Metropolis-Hastings algorithm and slice sampling, to extract the model parameters given these real datasets. Experimental results indicate improved performance of the proposed approach in comparison with baseline approaches in terms of accuracy, F1-score, Matthews correlation coefficient, recall, precision, area under the ROC curve, and log-likelihood.
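The Metropolis-Hastings step mentioned above has a simple generic form: propose a move, then accept it with probability min(1, p(x')/p(x)). A toy sketch sampling a 1-D standard normal (not the paper's network model; the target, proposal scale, and burn-in are illustrative choices):

```python
import math
import random

def metropolis_hastings(log_p, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, scale)
        # Accept with probability min(1, p(x_new)/p(x)), done in log space.
        if math.log(rng.random()) < log_p(x_new) - log_p(x):
            x = x_new
        samples.append(x)  # rejected proposals repeat the current state
    return samples

# Target: standard normal (log density up to a constant). Started far from
# the mode, the chain should drift in and average near 0 after burn-in.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=5.0, steps=20000)
mean = sum(samples[5000:]) / len(samples[5000:])
```

In the paper, the target density is the posterior over model parameters given the observed network snapshots; only the accept/reject skeleton carries over.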
The study of complex networks describes real-world systems in disparate realms, ranging from biological networks to technological systems, and has over the past years become one of the most important and fascinating fields of interdisciplinary research. These complex networks share many topological features, such as small-worldness, scale-freeness, the existence of motifs and graphlets, and self-similarity. In most cases, complex real-world networks are very large, and their explicit description and analysis is often difficult. We address these difficulties by examining successful models of communication networks with respect to particular aspects, including important factors such as cost, security, integrity, scalability, and fault tolerance. The last factor is distinctly important for every communication network. Recently, several methods and mechanisms have been proposed to improve the robustness of a network by modifying its topology. Rewiring is one such defensive strategy for increasing the resilience of attacked networks, in which affected nodes are disconnected from faulty nodes and, possibly, connected to another profitable node with a specific probability. In this paper, a rewiring mechanism based on the Shannon entropy concept is proposed to streamline complex-network configurations in order to improve their resiliency. Network entropy is a quantitative criterion for describing robustness and is acknowledged as one of the topological characteristic criteria. In practice, this quantity is related to the capacity of the network to tolerate changes in its configuration under various environmental constraints.
We evaluate network robustness based on the spectrum of the degree distribution, heterogeneity, and the average size of the largest connected cluster while removing nodes under a sequence of systematic attacks based on degree, betweenness, and Dangalchev's closeness centralities. The proposed rewiring strategy is applied to six synthetic networks and six real datasets, and we verified that by swapping approximately 30% of the links, the desired overall robustness of the networks can be reached.
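The network entropy used above is commonly computed from the empirical degree distribution, H = -Σ_k p_k log p_k, where p_k is the fraction of nodes with degree k. A minimal sketch of this quantity (the example graphs are illustrative, not the paper's datasets):

```python
import math
from collections import Counter

def degree_entropy(edges, n):
    """Shannon entropy of the empirical degree distribution of an n-node graph."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg[i] for i in range(n))  # how many nodes have each degree
    return sum(-(c / n) * math.log(c / n) for c in counts.values())

# A 4-cycle is degree-regular (every degree is 2), so its entropy is 0;
# heterogeneous degree sequences, like a star, give larger values.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
star = [(0, 1), (0, 2), (0, 3)]
print(degree_entropy(ring, 4))  # 0.0
print(degree_entropy(star, 4) > degree_entropy(ring, 4))  # True
```

A rewiring step can then be scored by how it changes this entropy, which is the spirit of the strategy described above.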
A variety of proposals have been introduced to dynamically grow scale-free (SF) networks, which are used to generate Barabási-Albert (BA) graph models. The analytical method for analyzing real networks, based on the uniform distributions proposed in the BA model, is not entirely satisfactory. Consequently, some scholars have refined and extended the BA model; among these, non-linear preferential attachment (NLPA), dynamical edge rewiring, fitness models, and other growth models are presented in the literature. In a network, features such as how individuals enter the network, the probability distribution of edge growth, and the probability distribution of nodes' ages are of considerable significance. A key question neglected in the BA model is: 'Can the LPA phenomenon generate SF graphs irrespective of the arrival process of nodes and the probability distribution of nodes' ages?' In this paper, we demonstrate that the resulting graphs belong to the class of small-world or random networks despite the use of LPA in the proposed model. Hence, what is neglected in the BA model prevents us from attaining a clear interpretation of all the properties of real-world networks. This study attempts to uncover these features by proposing an extended BA model.
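For reference, the linear preferential attachment the abstract interrogates works as follows in the standard BA construction: each new node attaches to m existing nodes chosen with probability proportional to their current degree. A minimal sketch (parameters are illustrative; this is the textbook BA process, not the paper's extended model):

```python
import random

def ba_graph(n, m, seed=0):
    """Grow an n-node graph by linear preferential attachment, m edges per new node."""
    rng = random.Random(seed)
    # Seed with a small clique on m+1 nodes so every node has nonzero degree.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # `targets` lists each node once per unit of degree, so uniform sampling
    # from it is exactly degree-proportional sampling.
    targets = [x for e in edges for x in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges

g = ba_graph(200, 2)
print(len(g))  # 3 seed edges + 197 arrivals * 2 edges each = 397
```

The paper's point is that varying the arrival process and node-age distribution around this same attachment rule can push the result away from a scale-free degree distribution.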
Information and communication technology today depends more than ever on the Internet and cloud computing, so that data centers (DCs) have become a constitutive unit of cloud computing. A DC is composed of two primary parts: servers and data center networks (DCNs). Robustness and scalability are two major challenges of DCNs, which are expanded using two strategies: scale-out and scale-up. This paper differs from related studies in two respects. The first is its simultaneous focus on both the scalability and robustness challenges of DCNs; for this purpose, we concentrate on comparing robustness across the scalable models of these networks. The second is that, unlike previous work that evaluated DCN robustness only under topological changes, we evaluate robustness and fault tolerance against three types of unexpected changes: in topology, traffic, and community of interest (COI). Hence, we choose network criticality (NC) as a graph-theoretic metric for analyzing DCN robustness. We then compare several structural and spectral graph metrics with NC across some well-known DCNs and their scale-out and scale-up variants. Our results are useful for selecting the appropriate scaling strategy that maximizes the robustness of existing DCNs, and they provide a guideline for designing new robust and scalable DCNs.
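One common formulation of network criticality (an assumption here; formulations vary across the literature) is τ = 2n · trace(L⁺), where L⁺ is the Moore-Penrose pseudoinverse of the graph Laplacian; lower τ indicates a better-connected, more robust graph. A minimal sketch:

```python
import numpy as np

def network_criticality(adj):
    """tau = 2 * n * trace(pinv(L)) for the Laplacian L of an undirected graph."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    return float(2.0 * A.shape[0] * np.trace(np.linalg.pinv(L)))

# The complete graph K4 is more robust than the 4-node path P4, so its
# criticality is lower (K4: Laplacian eigenvalues 0,4,4,4 give tau = 6).
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
P4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(network_criticality(K4) < network_criticality(P4))  # True
```

Comparing τ across a DCN and its scale-out/scale-up variants is the kind of analysis the paper performs, alongside other structural and spectral metrics.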
The combination of traditional wired links for regular transmissions and express wireless paths for long-distance communications is a promising solution for preventing multi-hop network delays. In wireless network-on-chip technology, wireless-equipped routers are more error-prone than conventional ones, not only because of their implementation complexity but also due to their relatively high utilization. In this paper, a new topology is presented to enhance network reliability, and a novel routing algorithm is then proposed to tolerate both intermittent and permanent faults in wireless hubs. In the proposed approach, once a wireless hub becomes faulty, the best alternative hub is identified, and all packets with high average hop counts are routed through it. In comparison with state-of-the-art works, the proposed approach shows significant improvements in terms of robustness, congestion management, and resilience.
International Journal of Computer Applications in Technology
In this paper, an efficient immunisation strategy is devised for different types of networks, ranging from peer-to-peer computer networks to scale-free and small-world social networks. This strategy, named I-ring (I-chain), immunises random acquaintances of independent nodes. As long as the data flow is not halted by failures, messages are routed minimally through the network; however, if the information flow is blocked by failures, the routing restrictions may be relaxed by rerouting the message flow so that it bypasses the failed nodes and the faulty region. The proposed strategy requires no knowledge of node degrees or other general information about the network and its topology. Most importantly, a novel performance measure is developed to assess the reliability and robustness of networks: the probability of messages encountering the I-ring (I-chain). Simulation results confirm the accuracy and practicality of the proposed measure.
International Journal of Mathematical Modelling and Numerical Optimisation, 2016
Previous work on the resilience of random graphs, P2P networks, and real-life networks has been restricted to approximating the disconnection probability, or it has required the computation of NP-complete metrics, with no closed-form solution available even for basic networks. Our objective in this paper is to characterize the disconnection probability that can arise in such networks. We study the resilience of exponential and real-life networks to the random removal of individual nodes, using an accurate analytical model that calculates the exact probability of network disconnection under malfunctions or local failures that destroy the global information-carrying ability of these networks. This model is generic and provides a more complete characterisation of network robustness. To validate our model and method, we performed extensive Monte-Carlo simulations and analysed the effects of the disconnection phenomenon on overall network resilience.
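The Monte-Carlo baseline used for validation can be sketched simply: remove each node independently with probability p, then check whether the survivors still form a single connected component. This is an illustrative estimator under stated conventions (an empty survivor set counts as disconnected), not the paper's analytical model:

```python
import random
from collections import deque

def connected(nodes, adj):
    """BFS connectivity check over the surviving node set."""
    if not nodes:
        return False                 # convention: no survivors = disconnected
    start = next(iter(nodes))
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in nodes and v not in seen:
                seen.add(v)
                q.append(v)
    return len(seen) == len(nodes)

def disconnection_probability(adj, p, trials=2000, seed=0):
    """Fraction of trials in which random node removal disconnects the graph."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        alive = {i for i in range(len(adj)) if rng.random() > p}
        hits += not connected(alive, adj)
    return hits / trials

# A 5-node ring disconnects whenever two or more non-adjacent nodes fail.
ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
est = disconnection_probability(ring, p=0.3)
```

An analytical model like the paper's replaces this sampling loop with an exact probability computation, which is the claimed advantage.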
With the possibility of integrating multiple cores into a single chip, research on networks-on-chip (NoCs) as a kind of interconnection network has assumed great significance. In such networks, the aim is to provide a broadband, extendable infrastructure for multi-core architectures. Communication between processors in a NoC is established using routing algorithms. Meanwhile, NoCs, like any other system, are prone to failure, and as the number of network components on a chip increases, so does the probability of failure. A fault-tolerant mechanism in NoCs is therefore a necessity. The main challenge of this work is combining performance and fault tolerance while reducing power, complexity, and cost. In this paper, a fault-tolerant routing algorithm for tolerating static and dynamic faults in 2-D mesh NoCs under a node-failure model is presented. Notably, unlike many other routing algorithms, the proposed method uses only one virtual channel. Results show that this method has lower latency and power consumption than the SAVA and segment-based (SB) routing algorithms: it showed 2.91% and 12.74% less power consumption than SAVA and SB, respectively, under SPLASH-2 traffic in an 8×8 mesh network with 8 faulty nodes. Its average latency, under Uniform, Transpose, and SPLASH-2 traffic in a 4×4 mesh with 4 faulty nodes and an 8×8 mesh with 8 faulty nodes, was reduced by 4.39% and 14.08% compared to SAVA and SB, respectively.
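To make the routing setting concrete, here is a toy fault-avoiding step for a 2-D mesh: plain XY routing moves along the x dimension first, then y; when the preferred next hop is faulty, the router falls back to the other dimension if that still makes progress. This is an illustrative sketch, not the paper's algorithm, and it does not address deadlock or all fault patterns:

```python
def next_hop(cur, dst, faulty):
    """Choose a productive, non-faulty neighbor; prefer x, fall back to y."""
    x, y = cur
    dx = (1 if dst[0] > x else -1) if dst[0] != x else 0
    dy = (1 if dst[1] > y else -1) if dst[1] != y else 0
    for cand in ((x + dx, y), (x, y + dy)):
        if cand != cur and cand not in faulty:
            return cand
    return None  # blocked: no productive fault-free hop

def route(src, dst, faulty):
    """Follow next_hop until the destination (or a dead end) is reached."""
    path, cur = [src], src
    while cur != dst:
        cur = next_hop(cur, dst, faulty)
        if cur is None:
            return None
        path.append(cur)
    return path

# Hypothetical 4x4 mesh with one faulty node on the default XY path.
print(route((0, 0), (3, 3), faulty={(1, 0)}))
```

A practical algorithm such as the one in the paper must also guarantee deadlock freedom (here, with a single virtual channel) and handle faults appearing at run time, which this sketch ignores.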
As the number of elements in large-scale massively parallel computers, Multiprocessors System-on-... more As the number of elements in large-scale massively parallel computers, Multiprocessors System-on-Chip (MP-SoCs), and peer-to-peer communication networks increases, the probability of component failure becomes significant. Consequently, fault-tolerance turns out to be a key issue in the design of such systems. Adaptive routing algorithms have been frequently suggested as a means of improving communication performance in such systems. These algorithms, unlike deterministic routing, can utilize network state information to exploit the presence of multiple paths. Before such schemes can be successfully incorporated in networks, it is necessary to have a clear understanding of the factors which affect their performance potential. This paper investigates the performance of nine prominent adaptive fault-tolerant routing algorithms in wormhole-switched 2-D tori with a routing scheme suggested by Chalasani and Boppana [1], as an instance of a fault-tolerant method. The suggested scheme is wi...
Computation of the second order delay in RC-tree based circuits is important during the design pr... more Computation of the second order delay in RC-tree based circuits is important during the design process of modern VLSI systems with respect to having tree structure circuits. Calculation of the second and higher order moments is possible in tree based networks. Because of the closed form solution, computation speed and the ease of using the performance optimization in VLSI design methods such as floor planning, placement and routing, the Elmore delay metric is widely implemented for past generation circuits. However, physical and logical synthesis optimizations require fast and accurate analysis techniques of the RC networks. Elmore first proposed matching circuit moments to a probability density function (PDF), which led to the widespread implementation of it in many networks. But the accuracy of Elmore metric is sometimes unacceptable for the RC interconnect problems in today’s CMOS technologies. The main idea behind our approach is based on the moment matching technique with the p...
Autism is one of the most important neurological disorders which leads to problems in a person... more Autism is one of the most important neurological disorders which leads to problems in a person's social interactions. Improvement of brain imaging technologies and techniques help us to build brain structural and functional networks. Finding networks topology pattern in each of the groups (autism and healthy control) can aid us to achieve an autism disorder screening model. In the present study, we have utilized the genetic algorithm to extract a discriminative sub-network that represents differences between two groups better. In the fitness evaluation phase, for each sub-network, a machine learning model was trained using various entropy features of the sub-network and its performance was measured. Proper model performance implies extracting a good discriminative sub-network. Network entropies can be used as network topological descriptors. The evaluation results indicate the acceptable performance of the proposed screening method based on extracted discriminative sub-networks ...
Memory access as a primary performance bottleneck of each processing unit also plays a significan... more Memory access as a primary performance bottleneck of each processing unit also plays a significant role in GPU performance. In addition to high challenging parts of GPU’s memory access path, the low locality property among the requests considerably increases the memory access delay. Despite the GPU’s immense processing power, they cannot reach their maximum throughput values because of the memory access bottlenecks. Memory divergence and miss locality among the L1 missed requests significantly impose the Last-Level-Cache contention and main memory row switching overheads. In addition, interconnection network routes the request packets regardless of locality properties, such routing algorithm considerably disrupts the locality among the requests. In this paper, we proposed Locality-Aware Resource Allocation (LARA) to reduce the Streaming-Multiprocessors stall time with arbitrage among the memory request packets in favor of locality maintenance at the interconnection network of GPU. In addition, before injecting the memory requests to the interconnection network, they will be reordered at the injection-port buffer based on their thread block equality. Memory-divergence and miss-locality among the requests are two main factors that increase the rates of Last-Level-Cache contention and main memory row switching. We proposed a comprehensive approach to improving the GPU performance by decreasing the average memory access delay. We focused on the request locality property to decrease the Last-Level-Cache contention overheads and main memory row switching rate. As a result, 33% maximum and 17% average speed-up improvements among the used benchmarks, without significant effect on system areas and power consumptions, are reported.
Recovery of complex networks is an important issue that has been extensively used in various fiel... more Recovery of complex networks is an important issue that has been extensively used in various fields. Much work has been done to measure and improve the stability of complex networks during attacks. Recently, many studies have focused on the network recovery strategies after attack. In many real cases, link retrieval and recovery of critical infrastructures such as transmission network and telecommunications infrastructures are of particular importance and should be prioritized. For example, when a flood disrupts optical fibre communications in transmission networks and paralyzes the network, link retrieval corresponds to the recovery of fibre communications, so that the transmission network communication capacity can be restored at the earliest possible time. So, predicting the appropriate reserved links in a way that the network can be recovered at the lowest cost and fastest time after attacks or interruptions will be critical in a disaster. In this article, different kinds of att...
International Journal of Computer Mathematics: Computer Systems Theory
ABSTRACT Recursive decomposition of networks is a widely used approach in network analysis for fa... more ABSTRACT Recursive decomposition of networks is a widely used approach in network analysis for factorization of network structure into small subgraph patterns with few nodes. These patterns are called graphlets (motifs), and their analysis is considered as a common approach in bioinformatics. This paper focuses on evaluating the importance of graphlets in networks and proposes a new analytical model for ranking the graphlets importance based on their contribution to the graph energy spectrum. Besides, a general formula is provided to calculate the graphlet energy contribution to the total energy of a graph; then the energy value of the graph is estimated based on its graphlets. The results of the empirical analysis of synthetic and real networks are consistent with the theoretical results and suggest that the proposed analytical model can accurately estimate the structural features of a given graph based on its graphlets.
The significance of the existing analysis methods in complex networks and easy access to the ever... more The significance of the existing analysis methods in complex networks and easy access to the ever-increasing volume of information present the emergence of proposing new methods in various fields based on complex system ideas. However, these systems are usually faced with various random failures and intelligent attacks. Due to the nature of the components’ behaviors, the occurrence of the failures and faults in their operations and the alteration of their topologies are the most important problems. Since the complex systems are usually used as the infrastructures of other networks, their robustness against failures and the adoption of suitable precautions are necessary. Moreover, the small-world effect in most complex systems is one of the crucial structural features. The authors found that the relation between these two is not well-known and may even be in conflict in some networks. The main goal in this paper is to achieve an optimal topology by utilizing a robustness-oriented multi-objective trade-off optimization model (edge rewiring) to establish a peaceful relationship between the two requirements. By offering a proposed rewiring method with the small-world effect, which is called core-periphery Windmill property, the authors demonstrated that the generated networks are able to exhibit appropriate robustness even during intelligent attacks. The results obtained in terms of Windmill graphs are presented very good approximations to demonstrate the small-world effect. These graphs are used as the initial core in the construction of the optimized networks’ topologies.
Chaos: An Interdisciplinary Journal of Nonlinear Science
This paper contributes in detecting chaotic behaviors in dynamic complex social networks using a ... more This paper contributes in detecting chaotic behaviors in dynamic complex social networks using a new feature diffusion-aware model from two perspectives of abnormal links as well as abnormal nodes. The proposed approach constructs a probabilistic model of dynamic complex social networks and subsequently, applies it to detect chaotic behaviors by measuring deviations from the model. The predictive model considers the main processes of features' dynamics, evolution of nodes' features, feature diffusion, and link generation processes in dynamic complex social networks. The feature diffusion process indicates the process in which each node former features influence the future features of its neighbors. The proposed approach is validated by experiments on two real dynamic complex social network datasets of Google+ and Twitter. The approach uses some Markov Chain Monte Carlo sampling methods like Metropolis-Hastings algorithm and Slice sampling strategy to extract the model parameters, given these real datasets. Experimental results indicate the improved performance characteristics of the proposed approach in comparison with baseline approaches in terms of the performance measures of accuracy, F1-score, Matthews Correlation Coefficient, recall, precision, area under ROC curve, and log-likelihood.
Abstract Study on complex networks illustrates systems of real-world in disparate realms that inc... more Abstract Study on complex networks illustrates systems of real-world in disparate realms that incorporates a range of biological networks to technological systems and has, over the past years, become one of the most important and fascinating fields of the interdisciplinary research center. These complex networks share many topological features such as the small-worldness, scale-freeness, the existence of motifs and graphlets and self-similarity. In most cases, complex and real-networks are very large, and the description and analysis of them in explicit form is often faced with difficulty. We manage to head off aforementioned troubles by examining successful models amongst communication networks in some particular aspects, including important factors such as cost, security, integrity, scalability, and fault tolerant. The last factor is distinctly important for each communication network. Recently, some methods and mechanisms have been proposed to increase and improve the robustness of network by modifying its topology. The rewiring is the mechanism amongst the defensive strategies to increase the resilience of attacked networks in which the affected nodes are disconnected from faulty nodes and, possibly, connect to another profitable node with a specific probability. In this paper, a rewiring mechanism based on Shannon entropy concept is proposed to streamline the complex networks configuration in order to improve their resiliency. Network entropy is a quantitative criterion for describing its robustness and is acknowledged as one of the topological characteristic criteria. In practice, this quantity is related to the capacity of the network to tolerate changes in its configuration under various environmental constraints. 
We evaluate network robustness based on the spectrum of the degree distribution, heterogeneity, and the average size of the largest connected cluster while removing nodes under a sequence of systematic attacks based on degree, betweenness, and Dangalchev's closeness centralities. The proposed rewiring strategy is applied to six synthetic networks and six real datasets, and we verified that by swapping approximately 30% of the links, the overall robustness of the networks can be improved.
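The network entropy that such a rewiring strategy optimises can be illustrated with the Shannon entropy of the degree distribution. The sketch below assumes that specific formulation, one common choice in the robustness literature, and does not reproduce the paper's rewiring rule itself:

```python
import math
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy of a network's degree distribution:
    H = -sum_k p(k) * log2 p(k), where p(k) is the fraction of
    nodes with degree k. A higher H is commonly read as a more
    heterogeneous degree configuration."""
    n = len(degrees)
    counts = Counter(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A star graph (one hub, four leaves) vs. a 5-cycle:
star = [4, 1, 1, 1, 1]   # hub has degree 4, leaves have degree 1
cycle = [2, 2, 2, 2, 2]  # every node has degree 2
```

The cycle, with all degrees equal, has zero entropy, while the star's two-valued degree distribution yields a positive entropy; a rewiring step can then be accepted or rejected according to how it changes this quantity.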
A variety of proposals have been introduced to dynamically complete scale-free (SF) networks, which are utilized to generate Barabási-Albert (BA) graph models. The analytical method for the analysis of real networks, based on the uniform distributions proposed in the BA model, is not perfect. Consequently, some scholars refined and extended the BA model, including non-linear preferential attachment (NLPA), dynamical edge rewiring, fitness models, and other growth models presented in the literature. In a network, features such as how individuals enter the network, the probability distribution of edge growth, and the probability distribution of nodes' age have considerable significance. A key question neglected in the BA model is: 'Can the linear preferential attachment (LPA) phenomenon generate SF graphs independently of the arrival process of nodes and the probability distribution of nodes' age?'. In this paper, we demonstrate that the resulting graphs belong to the class of small-world or random networks despite the use of LPA in the proposed model. Hence, what is neglected in the BA model prevents us from attaining a clear interpretation of all properties of real-world networks. This study attempts to uncover these features by proposing an extended BA model.
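The linear preferential attachment rule at the heart of the BA model can be sketched as follows. This is the standard textbook construction with a fixed one-node-at-a-time arrival process, not the extended model the paper proposes:

```python
import random
from collections import Counter

def barabasi_albert(n, m, seed=0):
    """Grow a graph by linear preferential attachment (BA model):
    start from a clique of m+1 nodes; each new node attaches m edges
    to existing nodes chosen with probability proportional to their
    current degree. Returns the edge list."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # 'stubs' repeats each node once per incident edge, so uniform
    # sampling from it IS degree-proportional sampling.
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs.extend((new, t))
    return edges

g = barabasi_albert(n=200, m=2)
degrees = Counter(v for e in g for v in e)
```

The stub-list trick makes the degree-proportional choice an O(1) operation; note that the node arrival process and node ages are fixed by construction here, which is exactly the assumption the paper's question targets.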
Information and communication technology depends more than ever on the Internet and cloud computing, so that data centers (DCs) have become a constitutive unit of cloud computing. A DC is composed of two primary parts: servers and Data Center Networks (DCNs). Robustness and scalability are two major challenges of DCNs, which are expanded based on two strategies: scale-out and scale-up. This paper differs from related studies in two aspects. The first is the simultaneous focus on both the scalability and the robustness challenges of DCNs. For this purpose, we concentrate on the comparison of robustness among the scalable models of these networks. The second is that, unlike previous work that evaluated DCN robustness only under topological changes, we evaluate robustness and fault tolerance against three types of unexpected changes: in topology, traffic, and community of interest (COI). Hence, we have chosen network criticality (NC) as a graph-theoretic metric for analyzing DCN robustness. Afterward, we compare several structural and spectral graph metrics with NC among some well-known DCNs and their scale-out and scale-up variants. Our results are useful for selecting the appropriate scaling strategy with the goal of maximizing the robustness of existing DCNs and provide a guideline for designing new robust and scalable DCNs.
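Network criticality can be computed from the Moore-Penrose pseudoinverse of the graph Laplacian. The sketch below assumes the common formulation tau = 2n * trace(L+), which may differ in normalisation from the definition used in the paper:

```python
import numpy as np

def network_criticality(adj):
    """Network criticality of an undirected graph, in one common
    formulation: tau = 2 * n * trace(L+), where L+ is the Moore-
    Penrose pseudoinverse of the graph Laplacian L = D - A.
    A smaller tau indicates a more robust topology."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return 2.0 * n * np.trace(np.linalg.pinv(laplacian))

# Complete graph K4 vs. a 4-node path: the path should score as
# less robust (larger tau).
k4 = np.ones((4, 4)) - np.eye(4)
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]])
```

Because trace(L+) is the sum of reciprocal non-zero Laplacian eigenvalues, tau aggregates effective resistances over all node pairs, which is what makes it sensitive to traffic and COI shifts as well as to topology.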
The combination of traditional wired links for regular transmissions and express wireless paths for long-distance communications is a promising solution to prevent multi-hop network delays. In wireless network-on-chip technology, wireless-equipped routers are more error-prone than conventional ones, not only because of their implementation complexities but also due to their relatively high utilization. In this paper, a new topology is presented to enhance network reliability, and a novel routing algorithm is then proposed to tolerate both intermittent and permanent faults on wireless hubs. In the proposed approach, once a wireless hub becomes faulty, the best alternative hub is selected and all the packets that have a high average hop-count are routed through this alternative hub. In comparison with state-of-the-art works, the proposed approach shows significant improvements in terms of robustness, congestion management, and resilience.
International Journal of Computer Applications in Technology
In this paper, an efficient immunisation strategy is devised for different types of networks, ranging from peer-to-peer computer networks to scale-free and small-world social networks. This strategy, named I-ring (I-chain), is proposed to immunise random acquaintances of independent nodes. As long as the data flow is not halted by failures, messages are routed minimally through the network. However, if the information flow is blocked by failures, the routing restrictions may be relaxed by rerouting the message flow so that it bypasses the failed nodes and the faulty region. The proposed strategy requires no knowledge of node degrees or general information about the network and its topology. Most importantly, a novel performance measure is developed to assess the reliability and robustness of networks: the probability of messages encountering the I-ring (I-chain). Simulation results confirm the accuracy and practicability of the proposed measure.
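The I-ring strategy itself is not specified in this abstract, but the classic acquaintance-immunisation idea it relates to (immunising a random neighbour of a randomly chosen node, with no knowledge of degrees) can be sketched as follows; the function and its parameters are illustrative:

```python
import random

def acquaintance_immunisation(adjacency, fraction, seed=0):
    """Classic acquaintance immunisation (a related degree-agnostic
    scheme, not the paper's I-ring itself): repeatedly pick a random
    node and immunise one of its randomly chosen neighbours, until
    the requested fraction of nodes is immunised. Needs no global
    topology knowledge, yet preferentially hits high-degree hubs,
    because hubs are acquaintances of many nodes."""
    rng = random.Random(seed)
    nodes = list(adjacency)
    target = int(fraction * len(nodes))
    immunised = set()
    while len(immunised) < target:
        v = rng.choice(nodes)
        if adjacency[v]:                    # skip isolated nodes
            immunised.add(rng.choice(adjacency[v]))
    return immunised

# Star graph: the hub (node 0) is an acquaintance of every leaf,
# so it is the most likely node to be immunised.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
chosen = acquaintance_immunisation(adj, fraction=0.2)
```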
International Journal of Mathematical Modelling and Numerical Optimisation, 2016
Previous studies on the resilience of random graphs, P2P, and real-life networks have been restricted to approximating the disconnection probability, or have required the computation of NP-complete metrics, with no closed-form solution available even for basic networks. Our objective in this paper is to characterise the disconnection probability that can arise in such networks. We study the resilience of exponential and real-life networks to the random removal of individual nodes, using an accurate analytical model that calculates the exact probability of network disconnection in the face of malfunctions or local failures that destroy the global information-carrying ability of these networks. This model is generic and can provide a more complete characterisation of network robustness. To validate our model and method, we performed extensive Monte-Carlo simulations and presented an analysis of the effects of the disconnection phenomenon on the overall resilience of the network.
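The Monte-Carlo validation of such a model can be sketched as follows: estimate the disconnection probability under independent random node failures by repeated sampling and a BFS connectivity check. The graph, failure probability, and trial count below are illustrative, and an empty surviving set is counted as disconnected by convention:

```python
import random
from collections import deque

def is_connected(adjacency, alive):
    """BFS connectivity check restricted to the surviving node set."""
    alive = set(alive)
    if not alive:
        return False
    start = next(iter(alive))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v in alive and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == alive

def disconnection_probability(adjacency, p_fail, trials=2000, seed=0):
    """Monte-Carlo estimate of the probability that independent
    random node failures (each node fails with p_fail) disconnect
    the surviving subgraph."""
    rng = random.Random(seed)
    nodes = list(adjacency)
    hits = 0
    for _ in range(trials):
        alive = [v for v in nodes if rng.random() >= p_fail]
        if not is_connected(adjacency, alive):
            hits += 1
    return hits / trials

# 6-node ring: survivors stay connected only if the failed nodes
# form a single contiguous arc.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
est = disconnection_probability(ring, p_fail=0.2)
```

For this ring the exact disconnection probability is tractable by enumerating failure arcs, which is the kind of closed-form target an analytical model is validated against.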
With the possibility of integrating multiple cores into a single chip, research on networks-on-chip (NoCs) as a kind of interconnection network has assumed great significance. In such networks, the effort is to provide broadband and extendable infrastructure for multi-core architectures. Communication between processors in an NoC is established using routing algorithms. Meanwhile, NoCs, like any other system, are prone to failure. With the increase in the number of network components on a chip, the probability of failure increases, too. Therefore, considering a fault-tolerant mechanism in NoCs is a necessity. The main challenge of this work is combining performance and fault tolerance while reducing power, complexity, and cost. In this paper, a fault-tolerant routing algorithm for tolerating static and dynamic faults in 2D Mesh NoCs under a node failure model is presented. It should be noted that, unlike many other routing algorithms, the proposed method uses only one virtual channel. Results show that this method has lower latency and power consumption than the SAVA and segment-based (SB) routing algorithms. It showed 2.91% and 12.74% less power consumption than SAVA and SB, respectively, under SPLASH-2 traffic in an 8 × 8 Mesh network with 8 faulty nodes. Its average latency, under Uniform, Transpose, and SPLASH-2 traffic in a 4 × 4 Mesh with 4 faulty nodes and an 8 × 8 Mesh network with 8 faulty nodes, was reduced by 4.39% and 14.08% compared to SAVA and SB, respectively.
Papers by Farshad Safaei