DOI: 10.1145/2559206.2578861

Building castles in quicksand: blueprint for a crowdsourced study

Published: 26 April 2014

Abstract

Finding participants for experiments has always been a challenge. As technology advanced, running experiments online became a viable way to carry out research that required nothing more than a personal computer. Crowdsourcing emerged as the natural next step in this progression. We report on our experience of joining this new wave of practice and the difficulties we encountered when crowdsourcing a study, which led us to re-evaluate the validity of crowdsourced research. We report our findings and conclude with guidelines for crowdsourced experiments.


Cited By

  • (2016) Eye tracking scanpath analysis on web pages. Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, 103-110. https://doi.org/10.1145/2857491.2857519. Online publication date: 14 March 2016.


    Published In

    CHI EA '14: CHI '14 Extended Abstracts on Human Factors in Computing Systems
    April 2014
    2620 pages
    ISBN: 9781450324748
    DOI: 10.1145/2559206

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. crowdsourcing
    2. participants
    3. study

    Qualifiers

    • Research-article

    Conference

    CHI '14: CHI Conference on Human Factors in Computing Systems
    April 26 - May 1, 2014
    Toronto, Ontario, Canada

    Acceptance Rates

    CHI EA '14 Paper Acceptance Rate 1,000 of 3,200 submissions, 31%;
    Overall Acceptance Rate 6,164 of 23,696 submissions, 26%
