
A Research Agenda for Hybrid Intelligence: Augmenting Human Intellect With Collaborative, Adaptive, Responsible, and Explainable Artificial Intelligence

Published: 01 August 2020

Abstract

We define hybrid intelligence (HI) as the combination of human and machine intelligence, augmenting human intellect and capabilities rather than replacing them, in order to achieve goals that neither humans nor machines could reach alone. HI is an important new research focus for artificial intelligence, and we set a research agenda for HI by formulating four challenges.

Published In

Computer, Volume 53, Issue 8, August 2020, 102 pages
Publisher: IEEE Computer Society Press, Washington, DC, United States
