DOI: 10.1145/3479986.3479987
Research article · Open access

The platform belongs to those who work on it! Co-designing worker-centric task distribution models

Published: 15 October 2021

Abstract

Today, digital platforms are increasingly mediating our day-to-day work and crowdsourced forms of labour are progressively gaining importance (e.g. Amazon Mechanical Turk, Universal Human Relevance System, TaskRabbit). In many popular cases of crowdsourcing, a volatile, diverse, and globally distributed crowd of workers compete among themselves to find their next paid task. The logic behind the allocation of these tasks typically operates on a “First-Come, First-Served” basis. This logic generates a competitive dynamic in which workers are constantly forced to check for new tasks.
This article draws on findings from ongoing collaborative research in which we co-design, with crowdsourcing workers, three alternative models of task allocation beyond “First-Come, First-Served”, namely (1) round-robin, (2) reputation-based, and (3) content-based. We argue that these models could create fairer and more collaborative forms of crowd labour.
We draw on Amara on Demand, a remuneration-based crowdsourcing platform for video subtitling and translation, as the case study for this research. Using a multi-modal qualitative approach that combines data from 10 months of participant observation, 25 semi-structured interviews, two focus groups, and documentary analysis, we observed and co-designed alternative forms of task allocation in Amara on Demand. The identified models help envision more worker-centric crowdsourcing platforms, grounded in the understanding that platforms depend on their workers and that workers should therefore ultimately hold power within them.
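To make the contrast between these allocation logics concrete, here is a minimal Python sketch of how the three co-designed models could sit behind a common assignment interface. All names and data structures (Worker, the skills field, the task dictionary) are hypothetical illustrations introduced for this sketch; the article describes the models qualitatively, not as code.

```python
import itertools

# Hypothetical worker profile; field names are illustrative, not from the paper.
class Worker:
    def __init__(self, worker_id, reputation=0.0, skills=None):
        self.id = worker_id
        self.reputation = reputation      # e.g. an average review score
        self.skills = set(skills or [])   # e.g. {"es->en", "subtitling"}

def round_robin(workers):
    """(1) Round-robin: cycle through the worker pool so tasks are spread evenly."""
    rotation = itertools.cycle(workers)
    def assign(task):
        return next(rotation)
    return assign

def reputation_based(workers):
    """(2) Reputation-based: offer each task to the highest-reputation worker."""
    def assign(task):
        return max(workers, key=lambda w: w.reputation)
    return assign

def content_based(workers):
    """(3) Content-based: match a task's required skills against worker profiles,
    in the spirit of a recommender system."""
    def assign(task):
        required = set(task.get("skills", []))
        return max(workers, key=lambda w: len(w.skills & required))
    return assign

# Usage: assigning a Japanese-to-English subtitling task under each model.
workers = [Worker("ana", reputation=4.8, skills=["es->en"]),
           Worker("bo", reputation=4.2, skills=["ja->en"])]
task = {"skills": ["ja->en"]}
print(round_robin(workers)(task).id)       # "ana" (next worker in the rotation)
print(reputation_based(workers)(task).id)  # "ana" (highest reputation)
print(content_based(workers)(task).id)     # "bo"  (best skill match)
```

Note how each model pushes the assignment decision to the platform rather than to a race among workers, which is the key difference from "First-Come, First-Served".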



Published In

OpenSym '21: Proceedings of the 17th International Symposium on Open Collaboration
September 2021, 136 pages
ISBN: 9781450385008
DOI: 10.1145/3479986
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. crowdsourcing
  2. digital labour
  3. distribution of value
  4. future of work
  5. human computation
  6. platform economy
  7. task allocation
  8. worker-centric platforms

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

OpenSym 2021

Acceptance Rates

Overall Acceptance Rate 108 of 195 submissions, 55%

Article Metrics

  • Total Citations: 0
  • Total Downloads: 1,110
  • Downloads (last 12 months): 591
  • Downloads (last 6 weeks): 41

Reflects downloads up to 14 Jan 2025
