Research article · DOI: 10.1145/2576195.2576198

Tesseract: reconciling guest I/O and hypervisor swapping in a VM

Published: 01 March 2014

Abstract

Double-paging is an often-cited, if unsubstantiated, problem in multi-level scheduling of memory between virtual machines (VMs) and the hypervisor. This problem occurs when both a virtualized guest and the hypervisor overcommit their respective physical address-spaces. When the guest pages out memory previously swapped out by the hypervisor, it initiates an expensive sequence of steps causing the contents to be read in from the hypervisor swapfile only to be written out again, significantly lengthening the time to complete the guest I/O request. As a result, performance rapidly drops.
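
To make the cost concrete, the sketch below (illustrative Python, not the paper's implementation) replays the sequence described above: a page already swapped out by the hypervisor is later chosen as a guest victim, forcing a hypervisor swap-in whose only purpose is to feed the guest's own write-out.

    # Illustrative sketch (not Tesseract's implementation) of the double-paging
    # sequence: the hypervisor has already swapped out a guest page; when the
    # guest later evicts that same page, the hypervisor must read it back in
    # only so the guest can write the identical contents out again.

    io_log = []                                          # physical I/Os performed

    hypervisor_swapfile = {0x42: b"cold page contents"}  # page swapped by hypervisor
    guest_ram = {}                                       # machine pages backing the guest

    def guest_pageout(page, guest_swap_disk):
        """Guest evicts `page` to its own virtual swap disk."""
        if page not in guest_ram:
            # Hypervisor swap-in: needed only because the guest touches the page.
            io_log.append(("read", "hypervisor swapfile", page))
            guest_ram[page] = hypervisor_swapfile.pop(page)
        # Guest write-out of the identical contents.
        io_log.append(("write", "guest swap disk", page))
        guest_swap_disk[page] = guest_ram.pop(page)

    guest_swap_disk = {}
    guest_pageout(0x42, guest_swap_disk)
    print(io_log)
    # [('read', 'hypervisor swapfile', 66), ('write', 'guest swap disk', 66)]
    # Two I/Os move the same bytes; the read exists only to feed the write.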
We present Tesseract, a system that directly and transparently addresses the double-paging problem. Tesseract tracks when guest and hypervisor I/O operations are redundant and modifies these I/Os to create indirections to existing disk blocks containing the page contents. Although our focus is on reconciling I/Os between the guest disks and hypervisor swap, our technique is general and can reconcile, or deduplicate, I/Os for guest pages read or written by the VM.
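
The sketch below illustrates the indirection idea in hedged form: when a guest write is recognized as redundant with a block the hypervisor already holds on disk, the virtual-disk layer records a reference to that block instead of moving data. The class and field names are hypothetical, not Tesseract's actual data structures.

    # Hypothetical sketch of I/O indirection: a redundant guest write is
    # satisfied by recording a pointer to the block that already holds the
    # contents, rather than by copying the data.

    class IndirectedDisk:
        def __init__(self):
            self.blocks = {}     # guest block number -> bytes stored locally
            self.indirect = {}   # guest block number -> (backing store name, block number)

        def write_indirect(self, guest_block, store_name, backing_block):
            """Record an indirection instead of performing a data-moving write."""
            self.indirect[guest_block] = (store_name, backing_block)

        def read(self, guest_block, stores):
            """Follow the indirection, if any, on a later guest read."""
            if guest_block in self.indirect:
                store_name, backing_block = self.indirect[guest_block]
                return stores[store_name][backing_block]
            return self.blocks[guest_block]

    hypervisor_swap = {7: b"page contents already on disk"}
    disk = IndirectedDisk()
    # Guest pages out a page the hypervisor already swapped: no data is moved.
    disk.write_indirect(guest_block=100, store_name="hv_swap", backing_block=7)
    print(disk.read(100, {"hv_swap": hypervisor_swap}))  # b'page contents already on disk'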
Deduplication of disk blocks for file contents accessed in a common manner is well-understood. One challenge that our approach faces is that the locality of guest I/Os (reflecting the guest's notion of disk layout) often differs from that of the blocks in the hypervisor swap. This loss of locality through indirection results in significant performance loss on subsequent guest reads. We propose two alternatives to recovering this lost locality, each based on the idea of asynchronously reorganizing the indirected blocks in persistent storage.
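
As a rough illustration of the reorganization idea (one generic scheme, not either of the paper's two specific alternatives), the sketch below copies indirected blocks back to their natural locations on the guest disk in the background, so that a later sequential guest read no longer chases pointers into the hypervisor swapfile. All names are illustrative.

    # Hedged sketch of asynchronous reorganization: indirected guest blocks
    # currently live wherever the hypervisor swapfile placed them, so a
    # background pass copies them to their home locations on the guest disk
    # and drops the indirection, restoring read locality.

    hypervisor_swap = {903: b"A", 17: b"B", 512: b"C"}   # scattered backing blocks
    guest_disk = {}                                      # guest block number -> bytes
    indirections = {100: 903, 101: 17, 102: 512}         # guest block -> swap block

    def reorganize():
        # Visit guest blocks in order so contiguous guest reads become
        # contiguous disk reads afterwards.
        for guest_block in sorted(indirections):
            swap_block = indirections.pop(guest_block)
            guest_disk[guest_block] = hypervisor_swap[swap_block]  # copy contents home

    reorganize()
    print(guest_disk)      # {100: b'A', 101: b'B', 102: b'C'} -- locality restored
    print(indirections)    # {} -- later reads are served directly from the guest disk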
We evaluate our system and show that it can significantly reduce the costs of double-paging, focusing our experiments on a synthetic benchmark designed to highlight its effects. We observe that Tesseract can improve the benchmark's throughput by as much as 200% when using traditional disks and by as much as 30% when using SSDs. At the same time, worst-case application responsiveness can be improved by a factor of 5.



Published In

VEE '14: Proceedings of the 10th ACM SIGPLAN/SIGOPS international conference on Virtual execution environments
March 2014
236 pages
ISBN:9781450327640
DOI:10.1145/2576195

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. hypervisor
  2. memory overcommitment
  3. paging
  4. swapping
  5. virtual machines
  6. virtualization

Acceptance Rates

VEE '14 paper acceptance rate: 18 of 56 submissions (32%)
Overall acceptance rate: 80 of 235 submissions (34%)


