research-article
Open access

To Be High-Risk, or Not To Be—Semantic Specifications and Implications of the AI Act’s High-Risk AI Applications and Harmonised Standards

Published: 12 June 2023

Abstract

The EU’s proposed AI Act sets out a risk-based regulatory framework to govern the potential harms arising from the use of AI systems. Within the AI Act’s hierarchy of risks, AI systems likely to pose a “high risk” to health, safety, and fundamental rights are subject to the majority of the Act’s provisions. To capture uses of AI where fundamental rights are at stake, Annex III of the Act lists applications and describes the conditions under which they constitute high-risk AI. For high-risk AI systems, the AI Act places obligations on providers and users regarding the use of AI systems and the keeping of appropriate documentation, through the use of harmonised standards. In this paper, we analyse the clauses defining the criteria for high-risk AI in Annex III and simplify the identification of potential high-risk uses of AI by making explicit the “core concepts” whose combination makes them high-risk. We use these core concepts to develop an open vocabulary for AI risks (VAIR) to represent and assist with AI risk assessments in a form that supports automation and integration. VAIR is intended to assist with the identification and documentation of risks by providing a common vocabulary that facilitates knowledge sharing and interoperability between actors in the AI value chain. Given that the AI Act relies on harmonised standards for much of its compliance and enforcement regarding high-risk AI systems, we explore the implications of current international standardisation activities undertaken by ISO, and emphasise the necessity of better risk and impact knowledge bases, such as VAIR, that can be integrated with audits and investigations to simplify the application of the AI Act.
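The decomposition the abstract describes can be illustrated with a minimal sketch. This is not the paper’s actual VAIR vocabulary or the Act’s legal text: the concept names (`domain`, `purpose`, `subject`) and the rule table below are hypothetical, chosen only to show how a combination of explicit core concepts, rather than a monolithic clause, can flag a use of AI as potentially high-risk.

```python
# Illustrative sketch: an Annex III-style clause decomposed into "core
# concepts" whose combination marks a use of AI as high-risk. All names
# and rule entries here are hypothetical examples, not the Act's text.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIUse:
    domain: str    # e.g. "employment", "education"
    purpose: str   # e.g. "recruitment-filtering", "exam-scoring"
    subject: str   # e.g. "job-applicant", "student"

# Hypothetical rule table mirroring the structure of Annex III clauses:
# each entry is a combination of core concepts that constitutes high risk.
HIGH_RISK_COMBINATIONS = {
    ("employment", "recruitment-filtering", "job-applicant"),
    ("education", "exam-scoring", "student"),
}

def is_high_risk(use: AIUse) -> bool:
    """Return True if the use matches a high-risk concept combination."""
    return (use.domain, use.purpose, use.subject) in HIGH_RISK_COMBINATIONS

cv_screening = AIUse("employment", "recruitment-filtering", "job-applicant")
spam_filter = AIUse("communication", "spam-detection", "email-user")

print(is_high_risk(cv_screening))  # True: matches a listed combination
print(is_high_risk(spam_filter))   # False: no matching combination
```

Making the concepts explicit in this way is what enables the automation and interoperability the abstract mentions: two actors in the AI value chain can exchange and compare risk determinations as structured data rather than as free-text legal interpretation.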



Published In

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
June 2023
1929 pages
ISBN:9798400701924
DOI:10.1145/3593013
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. AI Act
  2. harmonised standards
  3. high-risk AI
  4. semantic web
  5. taxonomy

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Conference

FAccT '23


Article Metrics

  • Downloads (Last 12 months)1,788
  • Downloads (Last 6 weeks)123
Reflects downloads up to 26 Dec 2024


Cited By

  • (2024) Exploring the Association Between Artificial Intelligence Management and Green Innovation: Expanding the Research Field for Sustainable Outcomes. Sustainability 16:21 (9315). DOI: 10.3390/su16219315. Online publication date: 26-Oct-2024
  • (2024) "You Can either Blame Technology or Blame a Person..." --- A Conceptual Model of Users' AI-Risk Perception as a Tool for HCI. Proceedings of the ACM on Human-Computer Interaction 8:CSCW2 (1-25). DOI: 10.1145/3686996. Online publication date: 8-Nov-2024
  • (2024) Good Intentions, Risky Inventions: A Method for Assessing the Risks and Benefits of AI in Mobile and Wearable Uses. Proceedings of the ACM on Human-Computer Interaction 8:MHCI (1-28). DOI: 10.1145/3676507. Online publication date: 24-Sep-2024
  • (2024) The Atlas of AI Incidents in Mobile Computing: Visualizing the Risks and Benefits of AI Gone Mobile. Adjunct Proceedings of the 26th International Conference on Mobile Human-Computer Interaction (1-6). DOI: 10.1145/3640471.3680447. Online publication date: 21-Sep-2024
  • (2024) Decoding Real-World Artificial Intelligence Incidents. Computer 57:11 (71-81). DOI: 10.1109/MC.2024.3432492. Online publication date: 1-Nov-2024
  • (2024) A comprehensive survey and classification of evaluation criteria for trustworthy artificial intelligence. AI and Ethics. DOI: 10.1007/s43681-024-00590-8. Online publication date: 21-Oct-2024
  • (2024) Data Privacy Vocabulary (DPV) – Version 2.0. The Semantic Web – ISWC 2024 (171-193). DOI: 10.1007/978-3-031-77847-6_10. Online publication date: 27-Nov-2024
  • (2024) AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act. Privacy Technologies and Policy (48-72). DOI: 10.1007/978-3-031-68024-3_3. Online publication date: 1-Aug-2024
