
Society-in-the-loop: programming the algorithmic social contract

Published: 01 March 2018

Abstract

Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug, and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, 'SITL = HITL + Social Contract'.

Published In

Ethics and Information Technology, Volume 20, Issue 1
March 2018
68 pages

Publisher

Kluwer Academic Publishers

United States

Author Tags

  1. Artificial intelligence
  2. Ethics
  3. Governance
  4. Regulation
  5. Society

Qualifiers

  • Article

Cited By

  • (2025) Procedural fairness in algorithmic decision-making: the role of public engagement. Ethics and Information Technology, 27(1). https://doi.org/10.1007/s10676-024-09811-4. Online publication date: 1 Mar 2025.
  • (2024) Measurable Trust: The Key to Unlocking User Confidence in Black-Box AI. Proceedings of the Second International Symposium on Trustworthy Autonomous Systems, pp. 1-7. https://doi.org/10.1145/3686038.3686058. Online publication date: 16 Sep 2024.
  • (2024) Beyond principles: Embedding ethical AI risks in public sector risk management practice. Proceedings of the 25th Annual International Conference on Digital Government Research, pp. 70-80. https://doi.org/10.1145/3657054.3657063. Online publication date: 11 Jun 2024.
  • (2024) My Future with My Chatbot: A Scenario-Driven, User-Centric Approach to Anticipating AI Impacts. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 2071-2085. https://doi.org/10.1145/3630106.3659026. Online publication date: 3 Jun 2024.
  • (2024) Regulating AI-Based Remote Biometric Identification: Investigating the Public Demand for Bans, Audits, and Public Database Registrations. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 173-185. https://doi.org/10.1145/3630106.3658548. Online publication date: 3 Jun 2024.
  • (2024) Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-19. https://doi.org/10.1145/3613904.3642116. Online publication date: 11 May 2024.
  • (2024) Finding middle grounds for incoherent horn expressions: the moral machine case. Autonomous Agents and Multi-Agent Systems, 38(2). https://doi.org/10.1007/s10458-024-09681-6. Online publication date: 16 Oct 2024.
  • (2024) Machine agency and representation. AI & Society, 39(1), 345-352. https://doi.org/10.1007/s00146-022-01446-7. Online publication date: 1 Feb 2024.
  • (2024) What about investors? ESG analyses as tools for ethics-based AI auditing. AI & Society, 39(1), 329-343. https://doi.org/10.1007/s00146-022-01415-0. Online publication date: 1 Feb 2024.
  • (2024) Enhancing Ethical Governance of Artificial Intelligence Through Dynamic Feedback Mechanism. In Wisdom, Well-Being, Win-Win, pp. 105-121. https://doi.org/10.1007/978-3-031-57867-0_8. Online publication date: 15 Apr 2024.
