DOI: 10.1145/3297001.3297050

Efficient Budget Allocation and Task Assignment in Crowdsourcing

Published: 03 January 2019

Abstract

Requesters in crowdsourcing marketplaces would like to allocate a fixed budget efficiently among a set of tasks of varying difficulty. The uncertainty in the arrival and departure of workers and the diversity in their skill levels add to the challenge, since minimizing the overall completion time is also an important concern. The current literature either focuses on sequential allocation of tasks, i.e., assigning tasks to one worker at a time, or assumes the task difficulties are known in advance. In this paper, we study the problem of efficient budget allocation under a dynamic worker pool in crowdsourcing. Specifically, we consider binary labeling tasks, for which the budget allocation problem can be cast as finding the optimal policy for a Markov decision process. We present a mathematical framework for modeling the problem and propose a class of algorithms for solving it. Experiments on simulated as well as real data show that these algorithms achieve performance very close to sequential allocation in much less time, and that they outperform naive allocation strategies.
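To make the MDP framing concrete, here is a minimal sketch of one common way such a problem is set up: each binary-labeling task's unknown positive-label probability gets a Beta posterior, the MDP state is the vector of Beta counts, and labels are assigned greedily by one-step lookahead on expected accuracy gain. This is an illustrative myopic approximation under assumed function names (`label_accuracy`, `allocate_budget`), not the paper's actual algorithm.

```python
import numpy as np

def label_accuracy(a, b):
    # Expected accuracy of the majority-vote label for a task whose
    # positive-label probability has posterior Beta(a, b): we predict
    # the more likely label, which is correct with probability
    # max(a, b) / (a + b) under the posterior mean.
    return max(a, b) / (a + b)

def allocate_budget(prior_counts, budget, rng=None):
    """Myopic one-step-lookahead policy: each unit of budget buys one
    label for the task whose expected accuracy gain is largest.

    prior_counts: list of (a, b) Beta parameters, one pair per task.
    budget: total number of labels that can be purchased.
    Returns the updated counts after spending the budget.
    """
    rng = rng or np.random.default_rng()
    counts = [list(c) for c in prior_counts]
    for _ in range(budget):
        gains = []
        for a, b in counts:
            p = a / (a + b)  # posterior-predictive prob. of a positive label
            # Expected accuracy after one more label, minus current accuracy.
            gain = (p * label_accuracy(a + 1, b)
                    + (1 - p) * label_accuracy(a, b + 1)
                    - label_accuracy(a, b))
            gains.append(gain)
        i = int(np.argmax(gains))
        a, b = counts[i]
        # Simulate the worker's answer from the posterior predictive.
        if rng.random() < a / (a + b):
            counts[i][0] += 1
        else:
            counts[i][1] += 1
    return counts
```

A sequential (one-worker-at-a-time) policy would run this same lookahead but wait for each answer before choosing the next task; the appeal of batch-style allocation is avoiding that serialization at a small cost in per-label informativeness.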


Published In

CODS-COMAD '19: Proceedings of the ACM India Joint International Conference on Data Science and Management of Data
January 2019
380 pages
ISBN:9781450362078
DOI:10.1145/3297001

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. budget allocation
  2. crowdsourcing
  3. reinforcement learning

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Conference

CoDS-COMAD '19: 6th ACM IKDD CoDS and 24th COMAD
January 3 - 5, 2019
Kolkata, India

Acceptance Rates

CoDS-COMAD '19 Paper Acceptance Rate: 62 of 198 submissions, 31%
Overall Acceptance Rate: 197 of 680 submissions, 29%

Article Metrics

  • Total Citations: 0
  • Total Downloads: 195
  • Downloads (last 12 months): 4
  • Downloads (last 6 weeks): 2
Reflects downloads up to 28 Dec 2024
