DOI: 10.1145/3631882.3631883
Research article (Open access)

An Empirical Analysis on Memcached's Replacement Policies

Published: 08 April 2024

Abstract

The performance of large-scale web services relies heavily on the hit ratio of key-value caches. A core component of a high-performance key-value cache is its replacement policy: the right policy can improve the hit ratio at no extra space cost, thereby improving the system’s throughput and end-to-end latency. Memcached and Redis are two in-memory key-value caching systems widely used in production. Both are simple to use and capable of meeting the end-to-end latency requirements of latency-critical services, but they use different replacement policies: Memcached uses LRU or a variant of segmented LRU (SegLRU), while Redis uses KLRU, a random-sampling-based LRU policy that evicts the least recently used object among K randomly selected samples. This naturally raises the question: how do these policies compare in actual production usage? To answer it, we implement the KLRU policy in Memcached and evaluate the effectiveness of all three policies on both synthetic and actual production workloads. Our empirical analysis shows that both SegLRU and KLRU outperform LRU in scalability on write-intensive workloads. However, although SegLRU and KLRU differ considerably in heuristic and implementation, they yield very similar cache hit ratios, throughput, and scalability, with the random-sampling-based LRU slightly ahead on write-heavy workloads. KLRU also has the advantages of simpler data structures and the flexibility to adjust the sampling size K to adapt to different workloads.
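To make the KLRU heuristic described above concrete, here is a minimal Python sketch of a random-sampling LRU cache: on eviction it samples K resident keys and evicts the one with the oldest last-access time, so no global LRU list needs to be maintained. The class name, structure, and timestamp bookkeeping are illustrative assumptions, not Redis's or Memcached's actual implementation.

```python
import random
import time


class KLRUCache:
    """Sketch of random-sampling LRU (KLRU): evict the least recently
    used object among K randomly selected resident keys."""

    def __init__(self, capacity, k=5):
        self.capacity = capacity   # maximum number of cached objects
        self.k = k                 # sample size per eviction
        self.store = {}            # key -> (value, last_access_time)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None                                  # cache miss
        value, _ = entry
        self.store[key] = (value, time.monotonic())      # refresh access time
        return value

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            self._evict()
        self.store[key] = (value, time.monotonic())

    def _evict(self):
        # Sample up to K resident keys; evict the one accessed longest ago.
        sample = random.sample(list(self.store), min(self.k, len(self.store)))
        victim = min(sample, key=lambda s: self.store[s][1])
        del self.store[victim]
```

Note that when K equals the number of resident objects, the sample covers the whole cache and the policy degenerates to exact LRU; smaller K trades hit-ratio fidelity for cheaper, lock-free-friendly evictions, which is the flexibility the abstract refers to.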



Published In

MEMSYS '23: Proceedings of the International Symposium on Memory Systems
October 2023, 231 pages
ISBN: 9798400716447
DOI: 10.1145/3631882

This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. LRU
  2. Memcached
  3. Random Replacement
  4. Replacement Policy

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • National Science Foundation

Conference

MEMSYS '23: The International Symposium on Memory Systems
October 2-5, 2023
Alexandria, VA, USA
