DOI: 10.1145/3357384.3358143
Short paper · Open access

On Heavy-user Bias in A/B Testing

Published: 03 November 2019

Abstract

Online experimentation (also known as A/B testing) has become an integral part of software development. To incorporate user feedback promptly and improve products continuously, many software companies have adopted a culture of agile deployment, requiring online experiments to be conducted and concluded on limited sets of users over a short period. While conceptually efficient, the results observed during the experiment can deviate from what is seen after the feature is deployed, making the A/B test result biased. In this paper, we provide a theoretical analysis showing that heavy users can contribute significantly to this bias, and we propose a resampling estimator for bias adjustment.
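The kind of bias the abstract describes can be illustrated with a toy simulation. Everything below — the population split, activity rates, and effect sizes — is a hypothetical sketch for intuition, not the paper's model or its proposed estimator:

```python
import random

random.seed(0)

# Hypothetical population: 10% "heavy" users generate far more events
# during a short experiment window than the 90% "light" users, and
# (by assumption here) respond less strongly to the treatment.
N = 10_000
users = []
for _ in range(N):
    heavy = random.random() < 0.1
    rate = 20 if heavy else 1       # events per user in the window
    effect = 0.1 if heavy else 1.0  # per-event treatment lift
    users.append((rate, effect))

# Event-weighted estimate: what a short experiment that averages over
# events observes. Heavy users contribute most of the events in a short
# window, so they dominate the average.
total_events = sum(rate for rate, _ in users)
event_weighted = sum(rate * effect for rate, effect in users) / total_events

# User-weighted estimate: closer to the per-user effect seen after full
# deployment, once light users have accumulated comparable exposure.
user_weighted = sum(effect for _, effect in users) / N

print(f"event-weighted estimate: {event_weighted:.3f}")
print(f"user-weighted estimate:  {user_weighted:.3f}")
```

In this sketch the event-weighted average is pulled toward the heavy users' smaller effect, which is the heavy-user bias in miniature. A user-level resampling scheme (in the spirit of the jackknife and block-bootstrap tags attached to this paper) would resample whole users rather than individual events, keeping each user's events together so the weighting can be corrected; the paper's actual estimator should be taken from the full text.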




Published In

CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management
November 2019, 3373 pages
ISBN: 9781450369763
DOI: 10.1145/3357384

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. block bootstrap
  2. causal inference
  3. external validity
  4. jackknife

Qualifiers

  • Short-paper

Conference

CIKM '19

Acceptance Rates

CIKM '19 Paper Acceptance Rate 202 of 1,031 submissions, 20%;
Overall Acceptance Rate 1,861 of 8,427 submissions, 22%


Article Metrics

  • Downloads (Last 12 months)203
  • Downloads (Last 6 weeks)18
Reflects downloads up to 31 Jan 2025


Cited By

  • (2024) User Interface Evaluation Through Implicit-Association Tests. Proceedings of the ACM on Human-Computer Interaction, Vol. 8, EICS, 1-23. DOI: 10.1145/3664636. Online publication date: 17-Jun-2024
  • (2024) Automating Pipelines of A/B Tests with Population Split Using Self-Adaptation and Machine Learning. Proceedings of the 19th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 84-97. DOI: 10.1145/3643915.3644087. Online publication date: 15-Apr-2024
  • (2024) Mission Reproducibility: An Investigation on Reproducibility Issues in Machine Learning and Information Retrieval Research. 2024 IEEE 20th International Conference on e-Science (e-Science), 1-9. DOI: 10.1109/e-Science62913.2024.10678657. Online publication date: 16-Sep-2024
  • (2024) A/B testing. Journal of Systems and Software, Vol. 211, C. DOI: 10.1016/j.jss.2024.112011. Online publication date: 2-Jul-2024
  • (2024) Navigating the Evaluation Funnel to Optimize Iteration Speed for Recommender Systems. Proceedings of the Future Technologies Conference (FTC) 2024, Volume 1, 138-157. DOI: 10.1007/978-3-031-73110-5_11. Online publication date: 5-Nov-2024
  • (2023) All about Sample-Size Calculations for A/B Testing: Novel Extensions & Practical Guide. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 3574-3583. DOI: 10.1145/3583780.3614779. Online publication date: 21-Oct-2023
  • (2023) Statistical Challenges in Online Controlled Experiments: A Review of A/B Testing Methodology. The American Statistician, Vol. 78, 2, 135-149. DOI: 10.1080/00031305.2023.2257237. Online publication date: 18-Oct-2023
  • (2023) Is designed data collection still relevant in the big data era? - A discussion. Quality and Reliability Engineering International, Vol. 39, 4, 1107-1109. DOI: 10.1002/qre.3331. Online publication date: 29-Mar-2023
  • (2022) Novelty and Primacy: A Long-Term Estimator for Online Experiments. Technometrics, Vol. 64, 4, 524-534. DOI: 10.1080/00401706.2022.2124309. Online publication date: 8-Nov-2022
