In cloud computing services, high availability is one of the quality-of-service requirements necessary to maintain customer confidence. High-availability systems can be built from redundant nodes and multiple clusters to cope with software and hardware failures. Because of the complexity of cloud computing, dependability analysis of the cloud may require combining state-based and non-state-based modeling techniques. This article proposes a hierarchical model that combines reliability block diagrams and continuous-time Markov chains to evaluate the availability of OpenStack private clouds under different scenarios. Steady-state availability, downtime, and cost are used as measures to compare the scenarios studied in the article. Heterogeneous workloads are considered in the proposed models by varying the number of CPUs requested by each customer. Both hardware and software failure rates of the OpenStack components used in the model are collected via setti...
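As a rough, worked illustration of the kind of hierarchical availability computation the abstract describes (not the paper's actual model or parameters), the sketch below derives component availabilities from two-state up/down Markov chains and combines them through series and parallel reliability-block-diagram (RBD) structures; all components and rates are hypothetical placeholders.

```python
# A minimal sketch: component availabilities come from two-state CTMCs
# (up/down), and the RBD combines them in series/parallel blocks.
# All failure/repair times below are illustrative placeholders.

def ctmc_availability(mttf_hours, mttr_hours):
    """Steady-state availability of a two-state (up/down) CTMC."""
    return mttf_hours / (mttf_hours + mttr_hours)

def series(availabilities):
    """RBD series block: every component must be up."""
    a = 1.0
    for x in availabilities:
        a *= x
    return a

def parallel(availabilities):
    """RBD parallel (redundant) block: at least one component up."""
    u = 1.0
    for x in availabilities:
        u *= (1.0 - x)
    return 1.0 - u

# Hypothetical OpenStack-like components (placeholder rates).
controller = ctmc_availability(mttf_hours=4000, mttr_hours=2)
compute    = ctmc_availability(mttf_hours=3000, mttr_hours=4)
storage    = ctmc_availability(mttf_hours=5000, mttr_hours=3)

# Scenario: redundant controllers in parallel, in series with compute and storage.
system_availability = series([parallel([controller, controller]), compute, storage])
downtime_hours_per_year = (1.0 - system_availability) * 8760

print(f"A = {system_availability:.6f}, downtime = {downtime_hours_per_year:.2f} h/year")
```

Changing the placeholder rates or the block structure reproduces the kind of scenario comparison (availability, downtime, cost) the abstract mentions.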
Abstract—Code division multiple access (CDMA) systems have been very successful in extending the multiuser capability of communication systems with fixed resources through a set of pseudorandom (PN) codes. Spectral efficiency of CDMA channels can be increased by using an M-ary ...
A wireless sensor network (WSN) includes a large number of nodes distributed over a geographical area. A key constraint of WSNs is that node energy is limited, so optimized routing solutions are crucial to use the energy of all nodes evenly. In this paper we propose a method that performs routing in WSNs using a greedy approach; it chooses the optimal route based on energy level and distance. Since our method tries to use the energy of different nodes evenly, it ultimately increases network lifetime. In addition to improving energy consumption, simulation results show that the proposed algorithm achieves a considerable reduction in end-to-end delay and an increase in packet delivery rate.
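A minimal sketch of one way such a greedy, energy- and distance-aware next-hop choice could look; the scoring function, the weight alpha, and the data layout are illustrative assumptions rather than the paper's algorithm.

```python
# Greedy next-hop selection: prefer neighbors that are both energy-rich and
# make progress toward the sink, spreading energy use across nodes.
import math

def distance(a, b):
    return math.dist(a, b)

def next_hop(current, neighbors, sink, alpha=0.5):
    """
    neighbors: list of dicts {"id": ..., "pos": (x, y), "energy": joules}
    Returns the neighbor maximizing a weighted score of residual energy and
    progress toward the sink; None means no forward neighbor (routing hole).
    """
    best, best_score = None, -math.inf
    d_cur = distance(current["pos"], sink)
    for n in neighbors:
        progress = d_cur - distance(n["pos"], sink)   # positive if closer to sink
        if progress <= 0:
            continue                                  # only consider forward neighbors
        score = alpha * n["energy"] + (1 - alpha) * progress
        if score > best_score:
            best, best_score = n, score
    return best
```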
Abstract—The degree distribution is an important characteristic in complex networks. In many applications, quantification of degree distribution in the form of a fixed-length feature vector is a necessary step. Additionally, we often need to compare the degree distribution of two given networks and extract the amount of similarity between the two distributions. In this paper, we propose a novel method for quantification of the degree distributions in complex networks. Based on this quantification method, a new distance function is also proposed for degree distributions, which captures the differences in the overall structure of the two given distributions. The proposed method is able to effectively compare networks even with different scales, and outperforms state-of-the-art methods considerably, with respect to the accuracy of the distance function. The datasets and more detailed evaluations are available upon request.
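To make the idea of a fixed-length quantification concrete, here is a hedged sketch using log-binned degree histograms and an L1 distance; the paper's actual quantification and distance function are not specified here, so the binning scheme and metric below are assumptions.

```python
# Summarize a network's degree distribution as a fixed-length vector via
# logarithmic binning, then compare two networks with an L1 distance.
import numpy as np
import networkx as nx

def degree_feature_vector(graph, n_bins=16, max_degree=10_000):
    degrees = np.array([d for _, d in graph.degree()], dtype=float)
    degrees = degrees[degrees > 0]
    # Fixed log-spaced bins so vectors from different networks stay comparable.
    edges = np.logspace(0, np.log10(max_degree), n_bins + 1)
    hist, _ = np.histogram(degrees, bins=edges)
    return hist / hist.sum()

def degree_distribution_distance(g1, g2, n_bins=16):
    v1 = degree_feature_vector(g1, n_bins)
    v2 = degree_feature_vector(g2, n_bins)
    return np.abs(v1 - v2).sum()

# Example: compare a scale-free and a random network of the same size.
g_ba = nx.barabasi_albert_graph(2000, 3, seed=1)
g_er = nx.gnm_random_graph(2000, 6000, seed=1)
print(degree_distribution_distance(g_ba, g_er))
```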
Current applied intelligent systems have crucial shortcomings either in reasoning over the gathered knowledge or in representing comprehensive, integrated information. To address these limitations, we develop a formal transition system that is applied to common artificial intelligence (AI) systems to reason about their findings. The model is created by combining Public Announcement Logic (PAL) and Linear Temporal Logic (LTL) in order to analyze both single-framed data and the subsequent time-series data. To do this, the knowledge obtained by an AI-based system (i.e., a classifier) for an individual time frame is first taken and then modeled in PAL. This leads to a unified representation of knowledge and smooth integration of the gathered and external experiences. Therefore, the model can receive the classifier's predefined, or any external, knowledge and assemble it in a unified manner. Alon...
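The following toy sketch only illustrates the temporal side of the idea: it checks simple LTL-style properties (always, eventually, until) over a trace of per-frame classifier announcements. It does not implement PAL or the paper's transition system, and the frame format is hypothetical.

```python
# Treat each time-framed classifier output as an "announced" proposition and
# check simple LTL-style properties over the resulting trace.

def always(trace, prop):
    """G prop: prop holds in every frame of the trace."""
    return all(prop(frame) for frame in trace)

def eventually(trace, prop):
    """F prop: prop holds in at least one frame of the trace."""
    return any(prop(frame) for frame in trace)

def until(trace, p, q):
    """p U q: p holds in every frame until some frame where q holds."""
    for frame in trace:
        if q(frame):
            return True
        if not p(frame):
            return False
    return False

# Hypothetical per-frame classifier announcements.
trace = [{"label": "normal", "conf": 0.93},
         {"label": "normal", "conf": 0.88},
         {"label": "anomaly", "conf": 0.97}]

is_normal  = lambda f: f["label"] == "normal"
is_anomaly = lambda f: f["label"] == "anomaly"

print(eventually(trace, is_anomaly))        # True
print(until(trace, is_normal, is_anomaly))  # True
```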
To study the prevalence and factors associated with opioid use in pain, 480 consecutive patients with a chief complaint of pain were interviewed at 10 clinics in Zahedan. The data were analysed in relation to 18 possible associated factors. The prevalence of opioid use was 28.5% in patients presenting with pain. There was no significant relation between opioid use and chronic pain [≥ 6 months], but there was a relationship with the following 5 factors: previous opioid use by friends [72.9% versus 20.4% without friends using], occupation [58.5% private sector employees/self-employed versus 17.4% housewives], cigarette smoking [60.8% versus 21.8% not smoking], consultation for a psychological problem [38.3% versus 23.3% without], and death of a spouse [60.0% versus 26.1% without].
Long and continuous running of software can cause software aging-induced errors and failures. Cloud data centers suffer from these kinds of failures when Virtual Machine Monitors (VMMs), which control the execution of Virtual Machines (VMs), age. Software rejuvenation is a proactive fault management technique that can prevent the occurrence of future failures by terminating VMMs, cleaning up their internal states, and restarting them. However, the appropriate time and type of VMM rejuvenation can affect performance, availability, and power consumption of a system. In this paper, an analytical model is proposed based on Stochastic Activity Networks for performance evaluation of Infrastructure-as-a-Service cloud systems. Using the proposed model, a two-threshold power-aware software rejuvenation scheme is presented. Many details of real cloud systems, such as VM multiplexing, migration of VMs between VMMs, VM heterogeneity, failure of VMMs, failure of VM migration, and different proba...
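As a hedged illustration of the modeling style (not the paper's Stochastic Activity Network), the sketch below solves a classic four-state software-aging/rejuvenation continuous-time Markov chain for its steady-state probabilities; all states and rates are placeholders.

```python
# Four-state aging/rejuvenation CTMC solved for steady state: pi Q = 0, sum(pi) = 1.
import numpy as np

# States: 0 = robust, 1 = aging (degraded), 2 = failed, 3 = rejuvenating
aging_rate    = 1 / 240.0   # robust -> aging
failure_rate  = 1 / 72.0    # aging  -> failed
repair_rate   = 1 / 2.0     # failed -> robust
rejuv_trigger = 1 / 48.0    # aging  -> rejuvenating (proactive)
rejuv_rate    = 1 / 0.5     # rejuvenating -> robust

Q = np.zeros((4, 4))
Q[0, 1] = aging_rate
Q[1, 2] = failure_rate
Q[1, 3] = rejuv_trigger
Q[2, 0] = repair_rate
Q[3, 0] = rejuv_rate
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi Q = 0 together with the normalization constraint.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0, 0, 0, 0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]   # assume service continues in robust and aging states
print(f"steady-state probabilities: {pi}")
print(f"availability = {availability:.6f}")
```

Raising the rejuvenation trigger rate trades a little planned downtime for less time spent in the failed state, which is the kind of trade-off a rejuvenation policy evaluates.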
Peer-to-Peer (P2P) cloud systems are becoming more popular due to their high computational capability, scalability, reliability, and efficient data sharing. However, sending and receiving massive amounts of data causes heavy network traffic, leading to significant communication delays. In P2P systems, a considerable share of this traffic and delay is due to the mismatch between the physical layer and the overlay layer, which is referred to as the locality problem. To achieve higher performance and, consequently, resilience to failures, each peer has to connect to geographically closer peers. To the best of our knowledge, the locality problem is not considered in any well-known P2P cloud system, yet addressing it could enhance overall network performance by shortening response times and decreasing overall network traffic. In this paper, we propose a novel, efficient, and general solution to the locality problem in P2P cloud systems, considering the roun...
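A minimal sketch of locality-aware peer selection under the assumption that round-trip time (RTT) is used as the proximity signal: probe candidate peers and keep the k closest. The probing method below is illustrative, not the paper's mechanism.

```python
# Pick overlay neighbors with the lowest measured RTT so overlay links
# roughly follow the physical topology.
import time
import socket

def measure_rtt(host, port=80, timeout=1.0):
    """Rough RTT estimate via a TCP connect; returns seconds or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def closest_peers(candidates, k=3):
    """candidates: list of (host, port); returns the k lowest-RTT peers."""
    measured = []
    for host, port in candidates:
        rtt = measure_rtt(host, port)
        if rtt is not None:
            measured.append((rtt, host, port))
    measured.sort()
    return [(host, port) for _, host, port in measured[:k]]
```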
The minimum-cost subgraph for joint distributed source and network coding has been studied using a linear programming approach. This problem considers statistically correlated information sources in a network, where sending over each link has a defined cost. The desired solution is a set of coding rates over the graph links that minimizes the total communication cost.
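To show the linear-programming flavor on a toy instance (not the paper's full joint source/network-coding formulation), the sketch below minimizes total link cost subject to simple flow-conservation constraints for a single unit demand; the network, costs, and constraints are illustrative.

```python
# Minimize sum(cost_e * x_e) over per-link rates x, subject to flow conservation.
import numpy as np
from scipy.optimize import linprog

# Directed links of a small network: s=0, a=1, b=2, t=3
links = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
cost  = np.array([1.0, 2.0, 1.0, 3.0, 1.0])   # cost per unit rate on each link
demand = 1.0                                   # rate to deliver from s to t

# Flow conservation: for each node, outflow - inflow = supply.
n_nodes = 4
A_eq = np.zeros((n_nodes, len(links)))
for j, (u, v) in enumerate(links):
    A_eq[u, j] += 1.0   # leaves u
    A_eq[v, j] -= 1.0   # enters v
b_eq = np.zeros(n_nodes)
b_eq[0] = demand        # source injects the demand
b_eq[3] = -demand       # sink absorbs it

res = linprog(c=cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(links))
print("optimal link rates:", res.x)
print("minimum total cost:", res.fun)
```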
[Citation fragment] (... Doroud branch); Ali Movaghar, Department of Computer Engineering, Sharif University of Technology, Tehran, Iran; Mehran Mahramian, Informatics Services ... Cited: Younis, O., Krunz, M., Ramasubramanian, S.: Node Clustering in Wireless Sensor Networks: Recent Developments ...
ABSTRACT Network Coding (NC) is an emerging technique that helps wireless networks achieve higher throughput and better energy efficiency. In this paper, a novel network coding-aware routing scheme (S-NC), which uses NC to improve reliability and robustness in unreliable WSNs, is proposed. It is based on a structure-free routing algorithm as the underlying routing engine, which employs neighbor queue length to increase coding opportunities while exploiting a spatial coding method to control the amount of redundancy. To the best of our knowledge, this is the first study of network coding-aware routing in WSNs. Simulation results in TOSSIM demonstrate that S-NC significantly improves performance in terms of reliability, end-to-end delay, and energy consumption in the network.
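As a hedged illustration of the XOR-based coding opportunity that coding-aware routing exploits (not the S-NC protocol itself), a relay can XOR two packets into one transmission, and each receiver recovers the missing packet from the one it already holds:

```python
# XOR network coding at a relay: one broadcast serves two receivers.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

packet_from_A = b"hello from A...."
packet_from_B = b"hello from B...."          # same length for simplicity

coded = xor_bytes(packet_from_A, packet_from_B)   # relay broadcasts one coded packet

# Node B already has its own packet, so it can recover A's packet (and vice versa).
recovered_at_B = xor_bytes(coded, packet_from_B)
recovered_at_A = xor_bytes(coded, packet_from_A)

assert recovered_at_B == packet_from_A
assert recovered_at_A == packet_from_B
```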
Abstract—Energy constraint is the most critical problem in wireless sensor networks (WSNs). To address this issue, clustering has been introduced as an efficient approach to routing. However, the available clustering algorithms do not efficiently consider the geographical ...
Simulation is one of the most important challenges in mobile ad hoc networks. Simulating these networks with the available packet-based simulation techniques is a demanding and time-consuming task. Moreover, as network complexity and the number of nodes increase, simulation is likely to take a long time. This is
In this paper, we present a new Start-time Fair Queuing (SFQ) algorithm called Weighted Start-time Fair Queuing (WSFQ), which is more efficient and achieves better fairness than SFQ in the presence of small and large elastic traffic flows. The WSFQ scheduler, like SFQ, uses a start-time eligibility criterion to select packets; when the start times of two packets in two flows are equal, it acts like Weighted Fair Queuing (WFQ) and selects the packet with the smallest virtual finish time first. We then compare the performance of our model with that of common scheduling algorithms such as First-In-First-Out (FIFO), SFQ, and WFQ in terms of end-to-end delay and throughput in small and large-scale networks. Our analysis demonstrates that the proposed model is suitable for elastic-service networks, since it achieves low end-to-end delay for elastic applications while providing fairness, which is desirable for elastic traffic, regardless of variation in one misbehaving flow.
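A minimal sketch of the selection rule described above, with simplified virtual-time bookkeeping; the tag computation and weights are illustrative assumptions rather than the paper's exact WSFQ definition.

```python
# Packets carry virtual start and finish tags; the scheduler serves the
# smallest start tag first and breaks ties by the smallest finish tag (WFQ-style).
import heapq

class WSFQScheduler:
    def __init__(self):
        self.virtual_time = 0.0
        self.finish = {}          # last virtual finish time per flow
        self.queue = []           # heap of (start, finish, seq, flow, length)
        self.seq = 0              # stable tie-breaker for the heap

    def enqueue(self, flow, packet_len, weight):
        start = max(self.virtual_time, self.finish.get(flow, 0.0))
        finish = start + packet_len / weight
        self.finish[flow] = finish
        heapq.heappush(self.queue, (start, finish, self.seq, flow, packet_len))
        self.seq += 1

    def dequeue(self):
        if not self.queue:
            return None
        start, _finish, _, flow, packet_len = heapq.heappop(self.queue)
        self.virtual_time = start          # SFQ-style virtual time update
        return flow, packet_len

# Usage: the heavier-weighted flow's packet gets the earlier finish tag.
sched = WSFQScheduler()
sched.enqueue("small_flow", packet_len=100,  weight=2.0)
sched.enqueue("large_flow", packet_len=1500, weight=1.0)
print(sched.dequeue(), sched.dequeue())
```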

And 335 more