2010
Abstract In this paper, we propose virtual data center (VDC) as the unit of resource allocation for multiple tenants in the cloud. VDCs are more desirable than physical data centers because the resources allocated to VDCs can be rapidly adjusted as tenants' needs change. To enable the VDC abstraction, we design a data center network virtualization architecture called SecondNet. SecondNet achieves scalability by distributing all the virtual-to-physical mapping, routing, and bandwidth reservation state in server hypervisors.
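The distributed state the abstract describes can be illustrated with a minimal sketch: each server hypervisor keeps only the virtual-to-physical mapping, routing, and bandwidth-reservation entries for the VMs it hosts, so no central controller holds global state. The class and method names below (`VdcMapping`, `map_vm`, `reserve_path`) are hypothetical, chosen for illustration; they are not from the SecondNet paper.

```python
# Hypothetical sketch of per-hypervisor state in a SecondNet-style design:
# mapping, routing, and bandwidth reservations are held locally, not centrally.
from dataclasses import dataclass, field

@dataclass
class VdcMapping:
    vm_to_server: dict = field(default_factory=dict)  # virtual VM id -> physical server
    routes: dict = field(default_factory=dict)        # (src_vm, dst_vm) -> list of hops
    reserved_bw: dict = field(default_factory=dict)   # (src_vm, dst_vm) -> Mbps reserved

    def map_vm(self, vm, server):
        """Record where a tenant's VM is placed."""
        self.vm_to_server[vm] = server

    def reserve_path(self, src, dst, hops, mbps):
        """Pin a route and a bandwidth reservation for a VM pair."""
        self.routes[(src, dst)] = hops
        self.reserved_bw[(src, dst)] = mbps

# Example: one hypervisor's local view of a two-VM virtual data center.
m = VdcMapping()
m.map_vm("vm1", "srv-a")
m.reserve_path("vm1", "vm2", ["srv-a", "tor-1", "srv-b"], 100)
```

Because each hypervisor updates only its own table, adjusting a tenant's allocation touches a handful of servers rather than a shared database, which is what makes the mapping state scalable.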
With the growth of data volumes and the variety of Internet applications, data centers (DCs) have become an efficient and promising infrastructure for supporting data storage and providing a platform for the deployment of diversified network services and applications (e.g., video streaming, cloud computing). These applications and services often impose multifarious resource demands (storage, compute power, bandwidth, latency) on the underlying infrastructure. Existing data center architectures lack the flexibility to effectively support these applications, resulting in poor support for QoS, deployability, manageability, and defence against security attacks. Data center network virtualization is a promising solution to these problems. Virtualized data centers are envisioned to provide better management flexibility, lower cost, scalability, better resource utilization, and energy efficiency. In this paper, we present a survey of the current state of the art in data center network virtualization and provide a detailed comparison of the surveyed proposals. We discuss the key challenges for future research and point out potential directions for tackling the problems related to data center design.
2014
Multi-tenant datacenters represent an extremely challenging networking environment. Tenants want the ability to migrate unmodified workloads from their enterprise networks to service provider datacenters, retaining the same networking configurations of their home network. The service providers must meet these needs without operator intervention while preserving their own operational flexibility and efficiency. Traditional networking approaches have failed to meet these tenant and provider requirements. Responding to this need, we present the design and implementation of a network virtualization solution for multi-tenant datacenters.
Virtualization is an essential step before a bare-metal data center is ready for commercial use, because it bridges the foreground interface for cloud tenants and the background resource management on the underlying infrastructure. A concept at the heart of the foreground is multi-tenancy, which deals with logical isolation of shared virtual computing, storage, and network resources and provides adaptive capability for the heterogeneous demands of various tenants. A crucial problem in the background is load balancing, which affects multiple issues including cost, flexibility, and availability. In this work, we propose a virtualization framework that considers these two problems simultaneously. Our framework takes advantage of the flourishing application of the distributed virtual switch (DVS) and leverages the growing adoption of OpenFlow protocols. First, the framework accommodates heterogeneous network communication patterns by supporting arbitrary traffic matrices among virtual machines (VMs) in virtual private clouds (VPCs); the only constraint on the network flows is the bandwidth of a server's network interface. Second, our framework achieves load balancing using an elaborately designed link establishment algorithm. The algorithm takes the configuration of the bare-metal data center and the dynamic network environment as inputs, and adaptively applies a globally bounded oversubscription on every link. Our framework focuses on the fat-tree architecture, which is widely used in today's data centers.
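The idea of a globally bounded oversubscription can be sketched as a small admission check: a new flow is placed on the least-loaded candidate link, and only admitted if the resulting demand-to-capacity ratio stays within the bound. This is an illustrative toy, not the paper's actual link establishment algorithm; the function name and tuple layout are assumptions.

```python
# Illustrative sketch (not the paper's algorithm): admit a new flow onto the
# least-oversubscribed candidate link, enforcing a global oversubscription bound.

def establish_link(candidates, demand, bound):
    """candidates: list of (link_id, capacity_mbps, current_demand_mbps).
    Returns the chosen link id, or None if the bound would be violated."""
    # Pick the link with the lowest demand/capacity ratio (least oversubscribed).
    best = min(candidates, key=lambda c: c[2] / c[1])
    link_id, capacity, current = best
    if (current + demand) / capacity <= bound:
        return link_id
    return None  # reject: even the best link would exceed the global bound

links = [("uplink1", 10_000, 12_000), ("uplink2", 10_000, 8_000)]
establish_link(links, 1_000, 1.2)  # chooses uplink2: (8000+1000)/10000 = 0.9 <= 1.2
```

Because the bound applies to every link, rejecting a flow when even the least-loaded link would exceed it keeps the oversubscription globally bounded, at the cost of occasionally refusing admissible-looking traffic.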
2014 IEEE Network Operations and Management Symposium (NOMS), 2014
Proceedings of the 2014 ACM conference on SIGCOMM, 2014
Highlights
• We propose a total bandwidth allocation solution with a two-layer framework for cloud datacenters.
• We design a fine-grained network abstraction model (the FGVC model) that meets a wider range of requirements.
• We implement a VM placement algorithm that uses two-phase optimization.
• We propose the E-F runtime mechanism, which fairly shares unused bandwidth resources between tenants.
• Comprehensive simulations and experiments show that our solution outperforms classical ones.

Abstract
In today's production-grade cloud datacenters, cloud service providers do not offer any bandwidth guarantee between VMs, which results in unpredictable performance of tenants' applications. The research community has recognized this problem; however, existing bandwidth allocation solutions fail to consider tenants' bandwidth requests and the actual bandwidth usage of applications simultaneously, which leads to wasted bandwidth or unpredictable performance. To address these issues, we present SpongeNet, a bandwidth allocation solution that consists of three components spanning two layers: static bandwidth guarantees at the tenant layer and dynamic rate allocation at the application layer to realize predictable performance. The first component, the FGVC model, is a network abstraction model that provides a simple, accurate, and flexible way for tenants to specify network requirements, and achieves high utilization through bandwidth saving. The second component is a two-phase VM placement algorithm that provides optimal combinations of ordering policies and dispatching policies to meet multiple goals. The third component, the E-F runtime mechanism, achieves fairness between guaranteed and unguaranteed tenants in utilizing unused bandwidth resources.
Extensive simulations based on real application traces and a 3-level tree topology show that SpongeNet improves bandwidth saving compared to state-of-the-art solutions (e.g., the Oktopus system), and significantly improves throughput by 18% and response time by 92%. With a small prototype implementation on a 7-server testbed, we demonstrate that SpongeNet provides fair, work-conserving bandwidth guarantees among all tenants, even in extreme cases.
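The two-phase placement idea (an ordering policy over tenant requests, then a dispatching policy over servers) can be sketched as below. The concrete policies here, largest-demand-first ordering and best-fit dispatching, are assumptions chosen for illustration, not necessarily the combinations SpongeNet selects.

```python
# Hedged sketch of two-phase VM placement: phase 1 orders the requests,
# phase 2 dispatches each VM to a server. Policies here are illustrative.

def place(requests, servers):
    """requests: {vm: bandwidth_demand}; servers: {server: spare_bandwidth}.
    Returns {vm: server}, or None if some request cannot be admitted."""
    placement = {}
    # Phase 1: ordering policy -- largest bandwidth demand first.
    for vm, demand in sorted(requests.items(), key=lambda kv: -kv[1]):
        # Phase 2: dispatching policy -- best fit (smallest sufficient spare).
        fits = [(spare, s) for s, spare in servers.items() if spare >= demand]
        if not fits:
            return None  # admission fails: no server can host this VM
        spare, srv = min(fits)
        servers[srv] -= demand
        placement[vm] = srv
    return placement

place({"vm1": 300, "vm2": 500}, {"s1": 600, "s2": 450})
# places vm2 (500) on s1, then vm1 (300) on s2 under best fit
```

Separating ordering from dispatching is what makes the "optimal combinations of policies" framing possible: each phase can be swapped independently to target different goals (utilization, fairness, fragmentation).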