Abstract
Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals. While regulatory frameworks are being developed, there remains a lack of consensus on the methods necessary to deliver safe AI. The potential for explainable AI (XAI) to contribute to the effectiveness of AI regulation is being increasingly examined. Regulation must include methods to ensure compliance on an ongoing basis, yet there is an absence of practical proposals on how to achieve this. For XAI to be successfully incorporated into a regulatory system, the individuals who interpret/explain the model to stakeholders should be sufficiently qualified for the role. Statutory professions are prevalent in domains in which harm can be done to the health, safety and rights of individuals; the most obvious examples are doctors, engineers and lawyers. Those professionals are required to exercise skill and judgement and to defend their decision-making process in the event of harm occurring. We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework for compliance and monitoring purposes. We will refer to this new statutory professional as an AI Architect (AIA). The AIA would be responsible for ensuring that the risk of harm is minimised and would be accountable in the event that harm occurs. The AIA would also be relied on to provide appropriate interpretations/explanations of XAI models to stakeholders. Further, in order to satisfy themselves that models have been developed in a satisfactory manner, the AIA would require models to have appropriate transparency. Therefore, the introduction of an AIA system would likely lead to an increase in the use of XAI, enabling AIAs to discharge their professional obligations.
This paper emanated from research funded by Science Foundation Ireland to the Insight Centre for Data Analytics (12/RC/2289_P2) and SFI Centre for Research Training in Machine Learning (18/CRT/6183). For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
NiFhaolain, L., Hines, A., Nallur, V. (2023). Statutory Professions in AI Governance and Their Consequences for Explainable AI. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1901. Springer, Cham. https://doi.org/10.1007/978-3-031-44064-9_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44063-2
Online ISBN: 978-3-031-44064-9