Abstract
Because ever more powerful intelligent agents will interact with people in increasingly sophisticated and important ways, greater attention must be given to the technical and social aspects of how to make agents acceptable to people [4, p. 51]. The technical challenge is to devise a computational structure that guarantees that, from the technical standpoint, all is under control. We want to be able to help ensure the protection of agent state, the viability of agent communities, and the reliability of the resources on which they depend. To accomplish this, we must guarantee, insofar as is possible, that the autonomy of agents can always be bounded by explicit enforceable policy that can be continually adjusted to maximize the agents' effectiveness and safety in both human and computational environments. The social challenge is to ensure that agents and people interact gracefully and to provide reassurance to people that all is working according to plan. We want agents to be designed to fit well with how people actually work together. Explicit policies governing human-agent interaction, based on careful observation of work practice and an understanding of current social science research, can help assure that effective and natural coordination, appropriate levels and modalities of feedback, and adequate predictability and responsiveness to human control are maintained. These factors are key to providing the reassurance and trust that are the prerequisites to the widespread acceptance of agent technology for non-trivial applications.
References
Asimov, I. (1942/1968). Runaround. In I. Asimov (Ed.), I, Robot (pp. 33–51). London, England: Grafton Books. Originally published in Astounding Science Fiction, 1942, pp. 94–103.
Bradshaw, J. M., Beautement, P., Breedy, M. R., Bunch, L., Drakunov, S. V., Feltovich, P., Hoffman, R. R., Jeffers, R., Johnson, M., Kulkarni, S., Lott, J., Raj, A. K., Suri, N., & Uszok, A. (2003). Making agents acceptable to people. In N. Zhong & J. Liu (Eds.), Handbook of Intelligent Information Technology. Amsterdam, The Netherlands: IOS Press.
Clarke, R. (1993–1994). Asimov's laws of robotics: Implications for information technology, Parts 1 and 2. IEEE Computer, December/January, 53–61/57–66.
Norman, D. A. (1997). How might people interact with agents? In J. M. Bradshaw (Ed.), Software Agents (pp. 49–55). Cambridge, MA: The AAAI Press/The MIT Press.
Pynadath, D., & Tambe, M. (2001). Revisiting Asimov's first law: A response to the call to arms. Proceedings of ATAL 01.
Shoham, Y., & Tennenholtz, M. (1992). On the synthesis of useful social laws for artificial agent societies. Proceedings of the Tenth National Conference on Artificial Intelligence, (pp. 276–281). San Jose, CA.
Weld, D., & Etzioni, O. (1994). The first law of robotics: A call to arms. Proceedings of the National Conference on Artificial Intelligence (AAAI 94), (pp. 1042–1047).
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Bradshaw, J.M. (2003). Making Agents Acceptable to People. In: Mařík, V., Pěchouček, M., Müller, J. (eds) Multi-Agent Systems and Applications III. CEEMAS 2003. Lecture Notes in Computer Science, vol 2691. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45023-8_1
Print ISBN: 978-3-540-40450-7
Online ISBN: 978-3-540-45023-8