
Incentives for Effort in Crowdsourcing Using the Peer Truth Serum

Published: 31 March 2016

Abstract

Crowdsourcing is widely proposed as a method for solving a large variety of judgment tasks, such as classifying website content, peer grading in online courses, or collecting real-world data. As the data reported by workers cannot be verified, workers have an incentive to report random data without actually solving the task. This can be countered by making the reward for an answer depend on its consistency with the answers given by other workers, an approach called peer consistency. However, such schemes have an obvious weakness: the best strategy is for all workers to report the same answer without solving the task.
Dasgupta and Ghosh [2013] show that, in some cases, exerting high effort can be encouraged in the highest-paying equilibrium. In this article, we present a general mechanism that implements this idea and is applicable to most crowdsourcing settings. Furthermore, we experimentally test the novel mechanism and validate its theoretical properties.
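
To make the inverse-prior intuition behind such reward schemes concrete, here is a minimal Python sketch of a peer-consistency payment in the spirit of the Peer Truth Serum. This is an illustration under stated assumptions, not the article's exact formulation: the function name `pts_payment`, the public prior `prior`, and the scaling constant `alpha` are hypothetical, and the rule simply pays agreement with a randomly matched peer on answer x in proportion to 1/R(x), so that herding on the a-priori most likely answer earns the smallest bonus.

```python
def pts_payment(report, peer_report, prior, alpha=1.0):
    """Illustrative inverse-prior peer-consistency payment (an assumption,
    not the article's exact rule).

    Agreement on an answer x pays alpha * (1 / prior[x] - 1);
    disagreement pays -alpha. Scaling the agreement bonus inversely
    to the prior frequency of x removes the incentive to herd on
    the a-priori most common answer.
    """
    if report == peer_report:
        return alpha * (1.0 / prior[report] - 1.0)
    return -alpha


# Toy binary task where "A" is the a-priori common answer.
prior = {"A": 0.8, "B": 0.2}

print(pts_payment("A", "A", prior))  # colluding on "A" pays only 0.25
print(pts_payment("B", "B", prior))  # agreeing on the rare "B" pays 4.0
print(pts_payment("A", "B", prior))  # disagreement pays -1.0
```

Under this scaling, the uninformative strategy of always reporting the popular answer earns only the smallest agreement bonus, while reports that agree with peers on rarer answers pay much more; this is the sense in which, following Dasgupta and Ghosh [2013], truthful high-effort reporting can be made the highest-paying equilibrium.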

Supplementary Material

a48-radanovic-apndx.pdf (radanovic.zip)
Supplemental movie, appendix, image, and software files for "Incentives for Effort in Crowdsourcing Using the Peer Truth Serum".

References

[1]
Luis von Ahn and Laura Dabbish. 2004. Labeling images with a computer game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
[2]
R. N. Colvile, N. K. Woodfield, D. J. Carruthers, B. E. A. Fisher, A. Rickard, S. Neville, and A. Hughes. 2002. Uncertainty in dispersion modeling and urban air quality mapping. Environmental Science and Policy 5, 207--220.
[3]
Anirban Dasgupta and Arpita Ghosh. 2013. Crowdsourced judgement elicitation with endogenous proficiency. In Proceedings of the 22nd ACM International World Wide Web Conference (WWW’13).
[4]
Boi Faltings, Jason J. Li, and Radu Jurca. 2014a. Incentive mechanisms for community sensing. IEEE Transactions on Computers 63, 115--128.
[5]
Boi Faltings, Pearl Pu, Bao Duy Tran, and Radu Jurca. 2014b. Incentives to counter bias in human computation. In Proceedings of the 2nd AAAI Conference on Human Computation and Crowdsourcing (HCOMP’14).
[6]
Xi Alice Gao, Andrew Mao, and Yiling Chen. 2013. Trick or treat: Putting peer prediction to the test. In Proceedings of the Workshop on Crowdsourcing and Online Behavioral Experiments, in conjunction with ACM EC’13.
[7]
Florent Garcin and Boi Faltings. 2014. Swissnoise: Online polls with game-theoretic incentives. In Proceedings of the 26th Conference on Innovative Applications of AI, 2972--2977.
[8]
Sharad Goel, Daniel M. Reeves, and David M. Pennock. 2009. Collective revelation: A mechanism for self-verified, weighted, and truthful predictions. In Proceedings of the 10th ACM Conference on Electronic Commerce (EC’09).
[9]
Christopher Harris. 2011. You’re hired! An examination of crowdsourcing incentive models in human resource tasks. In Proceedings of the Workshop on Crowdsourcing for Search and Data Mining (CSDM) at the 4th ACM International Conference on Web Search and Data Mining (WSDM’11).
[10]
Shih-Wen Huang and Wai-Tat Fu. 2013a. Don’t hide in the crowd!: Increasing social transparency between peer workers improves crowdsourcing outcomes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
[11]
Shih-Wen Huang and Wai-Tat Fu. 2013b. Enhancing reliability using peer consistency evaluation in human computation. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work.
[12]
Radu Jurca and Boi Faltings. 2009. Mechanisms for making crowds truthful. Journal of Artificial Intelligence Research (JAIR) 34, 209--253.
[13]
Radu Jurca and Boi Faltings. 2011. Incentives for answering hypothetical questions. In Workshop on Social Computing and User Generated Content (EC’11).
[14]
Ece Kamar and Eric Horvitz. 2012. Incentives for truthful reporting in crowdsourcing. In Proceedings of the 2012 International Conference on Autonomous Agents and Multiagent Systems (AAMAS’12).
[15]
Nicolas Lambert and Yoav Shoham. 2008. Truthful surveys. In Proceedings of the 3rd International Workshop on Internet and Network Economics (WINE’08).
[16]
Nolan Miller, Paul Resnick, and Richard Zeckhauser. 2005. Eliciting informative feedback: The peer-prediction method. Management Science 51, 1359--1373.
[17]
Athanasios Papakonstantinou, Alex Rogers, Enrico Gerding, and Nicholas Jennings. 2011. Mechanism design for the truthful elicitation of costly probabilistic estimates in distributed information systems. Artificial Intelligence 175, 648--672.
[18]
Drazen Prelec. 2004. A Bayesian truth serum for subjective data. Science 306, 5695, 462--466.
[19]
Drazen Prelec and Sebastian Seung. 2006. An algorithm that finds truth even if most people are wrong. Working paper.
[20]
Goran Radanovic and Boi Faltings. 2013. A robust Bayesian truth serum for non-binary signals. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI’13).
[21]
Goran Radanovic and Boi Faltings. 2014. Incentives for truthful information elicitation of continuous signals. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI’14).
[22]
Goran Radanovic and Boi Faltings. 2015a. Incentive schemes for participatory sensing. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS’15).
[23]
Goran Radanovic and Boi Faltings. 2015b. Incentives for subjective evaluations with private beliefs. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI’15).
[24]
Aaron Shaw, Daniel L. Chen, and John Horton. 2011. Designing incentives for inexpert human raters. In Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW’11).
[25]
Adish Singla and Andreas Krause. 2013. Truthful incentives in crowdsourcing tasks using regret minimization mechanisms. In Proceedings of the 22nd International Conference on World Wide Web.
[26]
Hossein Azari Soufiani, David C. Parkes, and Lirong Xia. 2012. Random utility theory for social choice. In Proceedings of Neural Information Processing Systems.
[27]
Bo Waggoner and Yiling Chen. 2013. Information elicitation sans verification. In Proceedings of the 3rd Workshop on Social Computing and User Generated Content (SC’13).
[28]
Bo Waggoner and Yiling Chen. 2014. Output agreement mechanisms and common knowledge. In Proceedings of the 2nd AAAI Conference on Human Computation and Crowdsourcing (HCOMP’14).
[29]
Jens Witkowski, Yoram Bachrach, Peter Key, and David C. Parkes. 2013. Dwelling on the negative: Incentivizing effort in peer prediction. In Proceedings of the 1st AAAI Conference on Human Computation and Crowdsourcing.
[30]
Jens Witkowski and David C. Parkes. 2012a. Peer prediction without a common prior. In Proceedings of the 13th ACM Conference on Electronic Commerce (EC’12), 964--981.
[31]
Jens Witkowski and David C. Parkes. 2012b. A robust Bayesian truth serum for small populations. In Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI’12).
[32]
Jens Witkowski and David C. Parkes. 2013. Learning the prior in minimal peer prediction. In Proceedings of the 3rd Workshop on Social Computing and User Generated Content (SC’13).
[33]
Peter Zhang and Yiling Chen. 2014. Elicitability and knowledge-free elicitation with peer prediction. In Proceedings of the 2014 International Conference on Autonomous Agents and Multiagent Systems (AAMAS’14).



    Published In

ACM Transactions on Intelligent Systems and Technology, Volume 7, Issue 4
    Special Issue on Crowd in Intelligent Systems, Research Note/Short Paper and Regular Papers
    July 2016
    498 pages
    ISSN:2157-6904
    EISSN:2157-6912
    DOI:10.1145/2906145
Editor: Yu Zheng

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 31 March 2016
    Accepted: 01 November 2015
    Revised: 01 August 2015
    Received: 01 January 2015
    Published in TIST Volume 7, Issue 4


    Author Tags

    1. Crowdsourcing
    2. mechanism design
    3. peer prediction

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Funding Sources

    • Nano-Tera.ch as part of the Opensense2 project

    Article Metrics

• Downloads (last 12 months): 56
• Downloads (last 6 weeks): 5
Reflects downloads up to 17 Oct 2024.

    Cited By

• (2024) Dominantly Truthful Peer Prediction Mechanisms with a Finite Number of Tasks. Journal of the ACM 71, 2, 1--49. DOI: 10.1145/3638239. Online publication date: 10-Apr-2024.
• (2024) Spot Check Equivalence: An Interpretable Metric for Information Elicitation Mechanisms. In Proceedings of the ACM Web Conference 2024, 276--287. DOI: 10.1145/3589334.3645679. Online publication date: 13-May-2024.
• (2024) Decentralized and Incentivized Federated Learning: A Blockchain-Enabled Framework Utilising Compressed Soft-Labels and Peer Consistency. IEEE Transactions on Services Computing 17, 4, 1449--1464. DOI: 10.1109/TSC.2023.3336980. Online publication date: Jul-2024.
• (2024) When Crowdsourcing Meets Data Markets: A Fair Data Value Metric for Data Trading. Journal of Computer Science and Technology 39, 3, 671--690. DOI: 10.1007/s11390-023-2519-0. Online publication date: 1-May-2024.
• (2023) A fair incentive scheme for community health workers. In Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI’23), 14127--14135. DOI: 10.1609/aaai.v37i12.26653. Online publication date: 7-Feb-2023.
• (2023) The Square Root Agreement Rule for Incentivizing Truthful Feedback on Online Platforms. Management Science 69, 1, 377--403. DOI: 10.1287/mnsc.2022.4375. Online publication date: 1-Jan-2023.
• (2023) Auditing for Federated Learning: A Model Elicitation Approach. In Proceedings of the Fifth International Conference on Distributed Artificial Intelligence, 1--9. DOI: 10.1145/3627676.3627683. Online publication date: 30-Nov-2023.
• (2023) Incentive Mechanism Design for Responsible Data Governance: A Large-scale Field Experiment. Journal of Data and Information Quality 15, 2, 1--18. DOI: 10.1145/3592617. Online publication date: 19-Apr-2023.
• (2023) Measurement Integrity in Peer Prediction: A Peer Assessment Case Study. In Proceedings of the 24th ACM Conference on Economics and Computation, 369--389. DOI: 10.1145/3580507.3597744. Online publication date: 9-Jul-2023.
• (2023) Surrogate Scoring Rules. ACM Transactions on Economics and Computation 10, 3, 1--36. DOI: 10.1145/3565559. Online publication date: 15-Feb-2023.
