
Our troubles with Linux Kernel upgrades and why you should care

Published: 23 July 2013

Abstract

Linux and other open-source Unix variants (and their distributors) provide researchers with full-fledged operating systems that are widely used. However, due to their complexity and rapid development, care should be exercised when using these operating systems for performance experiments, especially in systems research. In particular, the size and continual evolution of the Linux code-base make it difficult to understand and, as a result, to decipher and explain the reasons for performance improvements. In addition, the rapid kernel development cycle means that experimental results can be viewed as out of date, or meaningless, very quickly. We demonstrate that this viewpoint is incorrect because kernel changes can introduce, and have introduced, both bugs and performance degradations.
This paper describes some of our experiences using Linux and FreeBSD as platforms for conducting performance evaluations, along with some performance regressions we have found. Our results show that these performance regressions can be serious (e.g., repeating identical experiments yields large variability in results) and long-lived despite having a large negative effect on performance (one problem was present for more than three years). Based on these experiences, we argue that it is sometimes reasonable to use an older kernel version, that experimental results need careful analysis to explain why a performance effect occurs, and that publishing papers validating prior research is essential.
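The variability claim above can be made concrete: one common way to check whether repeated, identical runs of an experiment agree is to compute the spread (e.g., the coefficient of variation) across runs. The sketch below is illustrative only and is not taken from the paper; the `variability` helper and the throughput numbers are hypothetical.

```python
# A minimal sketch (not from the paper) of quantifying run-to-run variability
# across repeated, identical benchmark runs, e.g. web-server throughput.
import statistics

def variability(throughputs):
    """Return mean, sample standard deviation, and coefficient of
    variation (as a percentage) for per-run throughput measurements."""
    mean = statistics.mean(throughputs)
    stdev = statistics.stdev(throughputs)  # sample standard deviation
    cv = 100.0 * stdev / mean
    return mean, stdev, cv

# Hypothetical measurements (requests/sec) from five identical runs.
runs = [20150, 18400, 21300, 17900, 20800]
mean, stdev, cv = variability(runs)
print(f"mean={mean:.0f} req/s  stdev={stdev:.0f}  CV={cv:.1f}%")
```

A large coefficient of variation across runs that differ only in when they were executed is the kind of symptom the paper argues requires careful analysis rather than being reported as a single averaged number.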


Cited By

  • Build and Execution Environment (BEE): an Encapsulated Environment Enabling HPC Applications Running Everywhere. 2018 IEEE International Conference on Big Data (Big Data), pages 1737-1746, December 2018. DOI: 10.1109/BigData.2018.8622572
  • An introduction to Docker for reproducible research. ACM SIGOPS Operating Systems Review, 49(1):71-79, January 2015. DOI: 10.1145/2723872.2723882
  • From Repeatability to Reproducibility and Corroboration. ACM SIGOPS Operating Systems Review, 49(1):3-11, January 2015. DOI: 10.1145/2723872.2723875


Information

Published In

ACM SIGOPS Operating Systems Review, Volume 47, Issue 2
July 2013, 69 pages
ISSN: 0163-5980
DOI: 10.1145/2506164

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 23 July 2013
Published in SIGOPS Volume 47, Issue 2


Qualifiers

  • Research-article

Article Metrics

  • Downloads (last 12 months): 10
  • Downloads (last 6 weeks): 2

Reflects downloads up to 09 Nov 2024

