DOI: 10.1145/2736277.2741681

Research article · Open access

Getting More for Less: Optimized Crowdsourcing with Dynamic Tasks and Goals

Published: 18 May 2015

Abstract

In crowdsourcing systems, the interests of contributing participants and system stakeholders are often not fully aligned. Participants seek to learn, be entertained, and perform easy tasks, which offer them instant gratification; system stakeholders want users to complete more difficult tasks, which bring higher value to the crowdsourced application. We directly address this problem by presenting techniques that optimize the crowdsourcing process by jointly maximizing user longevity in the system and the true value the system derives from user participation.
We first present models that predict the "survival probability" of a user at any given moment, that is, the probability that a user will proceed to the next task offered by the system. We then leverage this survival model to dynamically decide what task to assign and what motivating goals to present to the user. This allows us to jointly optimize for the short term (getting difficult tasks done) and for the long term (keeping users engaged for longer periods of time).
We show that dynamically assigning tasks significantly increases the value of a crowdsourcing system. In an extensive empirical evaluation, we observed that our task allocation strategy increases the amount of information collected by up to 117.8%. We also explore the utility of motivating users with goals. We demonstrate that setting specific, static goals can be highly detrimental to long-term user participation, as the completion of a goal (e.g., earning a badge) is also a common drop-off point for many users. We show that setting goals dynamically, in conjunction with judicious allocation of tasks, increases the amount of information collected by the crowdsourcing system by up to 249%, compared to existing baselines that use fixed objectives.
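The trade-off the abstract describes can be sketched in a few lines: a task's expected long-term value is its immediate information value plus the value of future participation, weighted by the predicted survival probability after that task. The `Task` class, the fixed `survival_prob` values, and `pick_task` below are illustrative assumptions, not the authors' actual models, which learn survival probabilities per user and per moment.

```python
# Hypothetical sketch (not the paper's implementation): greedy dynamic task
# allocation trading off a task's immediate information value against the
# predicted probability that the user continues after completing it.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    value: float          # information value to the system (assumed known)
    survival_prob: float  # P(user proceeds to the next task), from a survival model


def expected_total_value(task: Task, future_value: float) -> float:
    """Immediate value plus survival-weighted value of future participation."""
    return task.value + task.survival_prob * future_value


def pick_task(tasks: list[Task], future_value: float) -> Task:
    """Assign the task maximizing expected long-term value."""
    return max(tasks, key=lambda t: expected_total_value(t, future_value))


tasks = [
    Task("easy", value=1.0, survival_prob=0.95),
    Task("hard", value=3.0, survival_prob=0.60),
]
# With little future value at stake, the hard task wins outright; with a long
# expected horizon, keeping the user engaged (easy task) wins instead.
print(pick_task(tasks, future_value=2.0).name)   # hard: 3 + 0.6*2  = 4.2 > 2.9
print(pick_task(tasks, future_value=20.0).name)  # easy: 1 + 0.95*20 = 20.0 > 15.0
```

This captures why static goals can backfire: once the remaining `future_value` a user perceives drops to zero (goal reached), nothing offsets the drop-off risk of hard tasks, whereas a dynamically extended goal keeps `future_value` positive.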



Published In

WWW '15: Proceedings of the 24th International Conference on World Wide Web
May 2015
1460 pages
ISBN: 978-1-4503-3469-3

Sponsors

  • IW3C2: International World Wide Web Conference Committee

Publisher

International World Wide Web Conferences Steering Committee

Republic and Canton of Geneva, Switzerland


Author Tags

  1. adaptive systems
  2. user modeling



Acceptance Rates

  • WWW '15: 131 of 929 submissions accepted (14%)
  • Overall: 1,899 of 8,196 submissions accepted (23%)

Article Metrics

  • Downloads (last 12 months): 103
  • Downloads (last 6 weeks): 17
Reflects downloads up to 30 Aug 2024

Cited By
  • (2024) "Are we all in the same boat?" Customizable and Evolving Avatars to Improve Worker Engagement and Foster a Sense of Community in Online Crowd Work. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-26. DOI: 10.1145/3613904.3642429. 11 May 2024.
  • (2023) Qrowdsmith: Enhancing Paid Microtask Crowdsourcing with Gamification and Furtherance Incentives. ACM Transactions on Intelligent Systems and Technology, 14(5), 1-26. DOI: 10.1145/3604940. 21 Jun 2023.
  • (2022) A Survey on Task Assignment in Crowdsourcing. ACM Computing Surveys, 55(3), 1-35. DOI: 10.1145/3494522. 3 Feb 2022.
  • (2022) Human as a Service: Towards Resilient Parking Search System With Sensorless Sensing. IEEE Transactions on Intelligent Transportation Systems, 23(8), 13863-13877. DOI: 10.1109/TITS.2021.3133713. Aug 2022.
  • (2022) Maximum Profit Routing for Mobile Crowdsensing. 2022 21st ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), 441-450. DOI: 10.1109/IPSN54338.2022.00042. May 2022.
  • (2022) Cost-effective crowdsourced join queries for entity resolution without prior knowledge. Future Generation Computer Systems, 127(C), 240-251. DOI: 10.1016/j.future.2021.09.008. 1 Feb 2022.
  • (2022) Privacy-Preserving Content-Based Task Allocation. In Privacy-Preserving in Mobile Crowdsensing, 33-61. DOI: 10.1007/978-981-19-8315-3_3. 21 Dec 2022.
  • (2021) The Differential Effect of Privacy-Related Trust on Groupware Application Adoption and Use during the COVID-19 Pandemic. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-34. DOI: 10.1145/3479549. 18 Oct 2021.
  • (2021) Examining Collaborative Support for Privacy and Security in the Broader Context of Tech Caregiving. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-23. DOI: 10.1145/3479540. 18 Oct 2021.
  • (2021) On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-34. DOI: 10.1145/3479531. 18 Oct 2021.
