DOI: 10.1145/3524458.3547243

A new Workflow for Human-AI Collaboration in Citizen Science

Published: 07 September 2022

Abstract

The unprecedented growth of online citizen science projects provides expanding opportunities for the public to participate in scientific discovery. Nevertheless, volunteers typically make only a few contributions before leaving the system. A significant challenge for such systems is therefore to increase the capacity and efficiency of volunteers without hindering their motivation and engagement. To address this challenge, we study the role of incorporating collaborative agents into the existing workflow of a citizen science project, with the aim of increasing the capacity and efficiency of the system while maintaining participants' motivation. Our enhanced workflow combines human-machine collaboration in two ways: humans can aid the machine by solving more difficult tasks with high information value, while the machine can facilitate human engagement by generating motivational messages that emphasize different aspects of human-machine collaboration. We implemented this workflow in a study comprising thousands of volunteers in Galaxy Zoo, one of the largest citizen science projects on the web. Volunteers could choose between the enhanced workflow and the existing workflow, in which users did not receive motivational messages and tasks were allocated sequentially, without regard to information value. We found that volunteers working in the enhanced workflow were more productive than those working in the existing workflow, without any loss in the quality of their contributions. Additionally, in the enhanced workflow, the type of message used had a profound effect on volunteer performance. Our work demonstrates the importance of varying human-machine collaboration models in citizen science.
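To make the workflow contrast concrete, the sketch below shows one plausible way to allocate tasks by information value rather than sequentially. It is a minimal illustration only: the abstract does not specify how information value is computed, so the entropy-based score, the Task structure, and the selection functions are assumptions introduced here, not the authors' implementation.

import math
from dataclasses import dataclass, field

@dataclass
class Task:
    """A classification task (e.g., a galaxy image) with the volunteer votes collected so far."""
    task_id: str
    votes: dict = field(default_factory=dict)  # label -> vote count

def vote_entropy(task: Task) -> float:
    """Shannon entropy of the current vote distribution (assumed proxy for information value).
    Higher entropy means more disagreement, so one more human answer is assumed to be more informative."""
    total = sum(task.votes.values())
    if total == 0:
        return float("inf")  # tasks with no votes yet are treated as maximally informative
    probs = [c / total for c in task.votes.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def next_task_sequential(tasks: list) -> Task:
    """Existing workflow: serve tasks in arrival order, regardless of information value."""
    return tasks[0]

def next_task_by_information_value(tasks: list) -> Task:
    """Enhanced workflow (sketch): serve the task whose next answer is expected to be most informative."""
    return max(tasks, key=vote_entropy)

# Example: one nearly settled task and one contested task.
tasks = [
    Task("galaxy-001", {"spiral": 9, "elliptical": 1}),   # entropy ~0.47 bits
    Task("galaxy-002", {"spiral": 5, "elliptical": 5}),   # entropy 1.0 bit
]
print(next_task_sequential(tasks).task_id)             # galaxy-001
print(next_task_by_information_value(tasks).task_id)   # galaxy-002

Under this assumption, the contested image is routed to a human volunteer while near-consensus images can be left to the machine classifier; the motivational-message component described in the abstract is independent of this allocation step.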

Cited By

  • (2024) Collaborating with Bots and Automation on OpenStreetMap. ACM Transactions on Computer-Human Interaction 31, 3, 1–30. DOI: 10.1145/3665326. Online publication date: 17-May-2024.
  • (2023) Public Health Calls for/with AI: An Ethnographic Perspective. Proceedings of the ACM on Human-Computer Interaction 7, CSCW2, 1–26. DOI: 10.1145/3610203. Online publication date: 4-Oct-2023.


Information & Contributors

Information

Published In

GoodIT '22: Proceedings of the 2022 ACM Conference on Information Technology for Social Good
September 2022
436 pages
ISBN:9781450392846
DOI:10.1145/3524458
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 07 September 2022

Permissions

Request permissions for this article.

Author Tags

  1. citizen science
  2. human computer workflow

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

GoodIT 2022

Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 62
  • Downloads (last 6 weeks): 5
Reflects downloads up to 17 Jan 2025

