Systems using capabilities to provide preferential service to selected flows have been proposed as a defense against large-scale network denial-of-service attacks. While these systems offer strong protection for established network flows, the Denial-of-Capability (DoC) attack, which prevents new capability-setup packets from reaching the destination, limits the value of these systems. Portcullis mitigates DoC attacks by allocating scarce link bandwidth for connection establishment packets based on per-computation fairness. We prove that a legitimate sender can establish a capability with high probability regardless of an attacker's resources or strategy and that no system can improve on our guarantee. We simulate full and partial deployments of Portcullis on an Internet-scale topology to confirm our theoretical results and demonstrate the substantial benefits of using per-computation fairness.
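The per-computation fairness idea can be illustrated with a toy proof-of-work sketch (this is not Portcullis's actual wire protocol; the seed, hash choice, and scheduling interface here are assumptions for illustration): a sender attaches a puzzle solution to each capability-request packet, and a router forwards pending requests in order of the computational effort they demonstrate.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a hash digest."""
    value = int.from_bytes(digest, "big")
    return 8 * len(digest) - value.bit_length()

def solve_puzzle(seed: bytes, difficulty: int) -> bytes:
    """Brute-force a nonce so that SHA-256(seed || nonce) has at least
    `difficulty` leading zero bits -- more work buys a harder puzzle."""
    nonce = 0
    while True:
        candidate = nonce.to_bytes(8, "big")
        if leading_zero_bits(hashlib.sha256(seed + candidate).digest()) >= difficulty:
            return candidate
        nonce += 1

def schedule(requests):
    """Router side: forward capability requests highest-effort first, so an
    attacker must out-compute legitimate senders rather than out-flood them."""
    return sorted(
        requests,
        key=lambda r: leading_zero_bits(
            hashlib.sha256(r["seed"] + r["nonce"]).digest()),
        reverse=True,
    )
```

Because the expected solving cost doubles with each extra zero bit, per-computation fairness bounds how many high-priority requests an attacker can emit per unit of CPU time.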
Knowledge of the largest traffic flows in a network is important for many network management applications. The problem of finding these flows is known as the heavy-hitter problem and has been the subject of many studies in the past years. One of the most efficient and well-known algorithms for finding heavy hitters is lossy counting [29]. In this work we introduce probabilistic lossy counting (PLC), which enhances lossy counting for computing network traffic heavy hitters. PLC uses a tighter error bound on the estimated sizes of traffic flows and provides probabilistic rather than deterministic guarantees on its accuracy. The probabilistic error bound substantially reduces the memory consumption of the algorithm. In addition, PLC reduces the rate of false positives of lossy counting and achieves a low estimation error, although slightly higher than that of lossy counting. We compare PLC with state-of-the-art algorithms for finding heavy hitters. Our experiments using real tr...
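As background for what PLC refines, a minimal sketch of classic lossy counting is shown below (the class interface and parameter names are assumptions for illustration; PLC itself differs by replacing the deterministic per-entry error term with a tighter probabilistic bound, which is not reproduced here):

```python
import math

class LossyCounter:
    """Classic lossy counting over a stream: each tracked item carries an
    observed frequency and a deterministic error term `delta`; low-frequency
    entries are pruned at every window boundary, bounding memory use."""

    def __init__(self, epsilon: float):
        self.width = math.ceil(1.0 / epsilon)  # window width w = ceil(1/eps)
        self.bucket = 1                        # current window id
        self.count_in_bucket = 0
        self.table = {}                        # item -> (freq, delta)

    def add(self, item):
        freq, delta = self.table.get(item, (0, self.bucket - 1))
        self.table[item] = (freq + 1, delta)
        self.count_in_bucket += 1
        if self.count_in_bucket == self.width:
            # prune entries whose frequency plus error cannot reach the bucket id
            self.table = {k: (f, d) for k, (f, d) in self.table.items()
                          if f + d > self.bucket}
            self.bucket += 1
            self.count_in_bucket = 0

    def heavy_hitters(self, support: float, n: int):
        """Items whose estimated frequency exceeds (support - epsilon) * n."""
        threshold = (support - 1.0 / self.width) * n
        return {k: f for k, (f, d) in self.table.items() if f >= threshold}
```

The deterministic `delta` is what inflates memory in the classic algorithm; PLC's observation is that for heavy-tailed traffic a much smaller probabilistic error term suffices with high probability.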
Economic principles are increasingly being suggested for addressing some complex issues related to distributed resource allocation for QoS (Quality of Service) enhancement. Many proposals have been put forth, including various strategies from Pricing Theory and ...
Complex applications are described using workflows. Executing these workflows in Grid environments requires an optimized assignment of tasks to the available resources, subject to different constraints. This paper presents a decentralized scheduling algorithm based on genetic algorithms for the problem of DAG scheduling. The genetic algorithm provides a powerful method for optimization and can consider multiple criteria in the optimization process. We also describe the integration platform for the proposed algorithm in Grid systems. We make a comparative evaluation with other existing DAG scheduling solutions: Cluster-ready Children First, Earliest Time First, Highest Level First with Estimated Times, Improved Critical Path with Descendant Prediction, and Hybrid Remapper. We carry out our experiments using a simulation tool with various scheduling scenarios and with heterogeneous input tasks and computation resources. We present several experimental results that offer support for near-optimal algorithm selection.
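The flavor of GA-based DAG scheduling can be sketched as follows (the toy workload, operators, and parameters are assumptions for illustration, not the paper's algorithm): a chromosome assigns each task to a processor, the fitness is the resulting makespan under precedence constraints, and selection, crossover, and mutation search the assignment space.

```python
import random

# Hypothetical toy workload: task -> (duration, dependencies)
TASKS = {
    "t0": (2, []), "t1": (3, ["t0"]), "t2": (4, ["t0"]),
    "t3": (2, ["t1", "t2"]), "t4": (3, ["t2"]), "t5": (1, ["t3", "t4"]),
}
ORDER = ["t0", "t1", "t2", "t3", "t4", "t5"]  # a topological order
N_PROCS = 2

def makespan(assign):
    """Schedule tasks in topological order; each task starts when its
    processor is free and all of its dependencies have finished."""
    proc_free = [0.0] * N_PROCS
    finish = {}
    for t in ORDER:
        dur, deps = TASKS[t]
        p = assign[t]
        start = max([proc_free[p]] + [finish[d] for d in deps])
        finish[t] = start + dur
        proc_free[p] = finish[t]
    return max(finish.values())

def evolve(pop_size=30, generations=60, seed=0):
    """Minimal GA: keep the fitter half, refill by uniform crossover
    plus occasional single-task mutation."""
    rng = random.Random(seed)
    pop = [{t: rng.randrange(N_PROCS) for t in ORDER} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)          # lower makespan is fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = {t: (a[t] if rng.random() < 0.5 else b[t]) for t in ORDER}
            if rng.random() < 0.3:       # mutation: reassign one task
                child[rng.choice(ORDER)] = rng.randrange(N_PROCS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
```

A multi-criteria variant would replace the scalar `makespan` fitness with a weighted combination of objectives (e.g. makespan plus load imbalance), which is the kind of flexibility the abstract attributes to the genetic approach.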
Traffic engineering plays a critical role in determining the performance and reliability of a network. A major challenge in traffic engineering is how to cope with dynamic and unpredictable changes in traffic demand. In this paper, we propose COPE, a class of traffic engineering algorithms that optimize for the expected scenarios while providing a worst-case guarantee for unexpected scenarios. Using extensive evaluations based on real topologies and traffic traces, we show that COPE can achieve efficient resource utilization and avoid network congestion in a wide variety of scenarios.
Research shows that the major issue in the development of quality software is precise estimation. This estimation in turn depends upon the degree of intricacy inherent in the software, i.e., its complexity. This paper attempts to empirically demonstrate the proposed complexity measure, which is based on the IEEE Requirement Engineering document. A high-quality SRS is a prerequisite for high-quality software. The Requirement Engineering document (SRS) is a specification for a particular software product, program, or set of programs that performs certain functions in a specific environment. The various complexity measures given so far are based on code and cognitive metric values of the software, i.e., they are code based, so these metrics provide no leverage to the developer before the code is written. Considering the shortcomings of code-based approaches, the proposed approach identifies the complexity of software immediately after the requirements are frozen in the SDLC process. The proposed complexity measure compares well with established complexity measures, and its trend can be validated against their results. Ultimately, the requirement-based complexity measure can be used to understand the complexity of proposed software well before the actual implementation of the design, thus saving cost and manpower.
Bulk MOSFET is reaching its physical limit with the advancement of technology. The key factor that influences the performance of bulk MOSFETs in the nano regime is the gate oxide thickness. In this work an attempt has been made to analyze the underlap FinFET structure using 2D simulation. The ITRS 2009 high-performance (HP) updates for the year 2015 are used in this work. A study of the n-type underlap FinFET structure is carried out to analyze the effects of a metal gate with a high-k dielectric. Using high-k dielectrics with a metal gate at a given EOT can reduce the gate leakage current without harming device performance. The underlap structure provides an improvement in the off-state leakage current over the overlap structure. The effect of gate workfunction variation on the performance of the underlap FinFET structure is also studied in this paper.
To be agile and cost effective, data centers should allow dynamic resource allocation across large server pools. In particular, the data center network should enable any server to be assigned to any service. To meet these goals, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the ...