The Expertise Involved in Deciding which HITs are Worth Doing on Amazon Mechanical Turk

Published: 22 April 2021

Abstract

Crowdworkers depend on Amazon Mechanical Turk (AMT) as an important source of income, and it is left to workers to determine which tasks on AMT are fair and worth completing. While existing tools assist workers in making these decisions, workers still spend significant amounts of time finding fair labor. Difficulties in this process may be a contributing factor in the imbalance between workers' median hourly earnings ($2.00/hour) and what the average requester pays ($11.00/hour). In this paper, we study how novices and experts select which tasks are worth doing, and we argue that differences between the two populations likely contribute to this wage imbalance. For this purpose, we first examine workers' comments on TurkOpticon (a tool where workers share their experiences with requesters on AMT). We use this study to begin to unravel what fair labor means for workers. In particular, we identify the characteristics of labor that workers consider to be of "good quality" and of "poor quality" (e.g., work that pays too little). Armed with this knowledge, we then conduct an experiment to study how experts and novices rate tasks of both good and poor quality. We find that experts and novices treat good-quality labor in the same way; however, there are significant differences in how experts and novices rate poor-quality labor and in whether they believe it is worth doing. This points to several future directions, including machine learning models that support workers in detecting poor-quality labor and paths for educating novice workers on how to make better labor decisions on AMT.

Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 5, Issue CSCW1
April 2021
5016 pages
EISSN: 2573-0142
DOI: 10.1145/3460939

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 22 April 2021
Published in PACMHCI Volume 5, Issue CSCW1

Author Tags

  1. amazon mechanical turk
  2. human-intelligence tasks

Qualifiers

  • Research-article
