Jerome De Cooman
Jerome De Cooman is a Ph.D. candidate under the supervision of Prof. Dr. Nicolas Petit (European University Institute, Italy) and Prof. Dr. Pieter Van Cleynenbreugel (University of Liege, Belgium). His research focuses on whether, how, and when to regulate technologies (artificial intelligence in particular). As part of his Ph.D. research, Jerome pays particular attention to the modernisation of competition law proceedings and the algorithmic shift in the fight against cartelisation. He is also interested in intellectual property and personal data protection.
Jerome has also been a teaching assistant at the University of Liege since 2018, in EU Competition Law and in EU Law, (Big) Data and AI Applications courses (taught to both law and applied science students). Since 2021, he has been Administrative Manager at the Brussels School of Competition, Junior Editor of the Yearbook of Antitrust and Regulatory Studies (YARS) at the Centre for Antitrust and Regulatory Studies (CARS, University of Warsaw, Poland), and Junior Member of the Academic Society for Competition Law (ASCOLA).
Jerome holds a Master of Laws from the University of Liege (2017), a Master of Management Sciences from HEC-Liege (2018), and the Law, Cognitive & Artificial Intelligence Technology Programme Certification from the Brussels School of Competition (2019).
Papers by Jerome De Cooman
AI systems are allegedly fairer than public servants: impartial and bias-free. It has been amply demonstrated that this is not the case. First, an AI system is only as good as its training set. If the latter is biased, so will be the recommendation. Second, it has been hypothesised that an algorithmic recommendation will be followed more often than rejected because the public officer develops an overreliance on the AI system, which is, statistically, more often right than wrong (automation bias). This leads to complacency and rubberstamping on the part of public officials. An algorithmic recommendation is, after all, merely the first plausible explanation, and one that comes from a somehow superior authority, the AI system being allegedly more reliable than human officers (hindsight bias). This first plausible explanation will tempt public servants to cease scrutiny (search satisfaction). Even if further investigation were conducted, the recommendation would serve as an anchor, as any new information gathered would be interpreted as strengthening the preconceived opinion (anchoring and confirmation biases).
Against that background, this paper argues that the combination of investigation and prosecution powers and algorithmic recommendations raise the same issue, namely, biased decision-making. Therefore, this paper calls for a solution for algorithmic recommendations similar to the one developed to mitigate administrative bias. If the distinction between investigation and decision-making within an administration mitigates the confirmation and commitment biases, an independent team should scrutinise the algorithmic recommendation and its use during the investigation phase. This should mitigate the automation bias encountered at the information-gathering phase by assessing the algorithmic recommendation with a fresh set of eyes. To build the argument, the paper discusses the bicephalic organisation of the French and Belgian competition law authorities. However, the solution proposed, a four-eyes principle, is transposable to other law enforcement activities.
[fr] Artificial intelligence (AI) is a technology that raises significant challenges, notably ethical ones. In response, the European Commission appointed an independent group of experts (High-Level Experts Group). A draft was published on 18 December 2018 and opened to public consultation (Draft Ethics Guidelines for Trustworthy AI). The final report was published on 8 April 2019 (Ethics Guidelines for Trustworthy AI). This article takes stock of that document and its genesis. Its objective is threefold: to present the European efforts, to analyse the pitfalls of an ethical approach (Ethics Lobbying, Ethics Shopping, Ethics Bluewashing and Ethics Dumping), and to explain why such an approach remains valuable despite its intrinsic difficulties, insisting on the symbiotic relationship between ethics and legal science.
[en] Big ideas swirl around Big Data. In this article, we analyze three potentially fallacious claims: data is the new black gold (1); since data has a market value, individuals should be able to sell it (2); and regulations protecting personal data (GDPR) favor monopolies (3).
Drafts by Jerome De Cooman
The colloquium is open to papers and articles written by one or more authors, whether or not from the same field, discussing the concept of the "borne" (boundary/limit) from one or more disciplinary angles. Particular attention will be paid to proposals for interdisciplinary collaborations bringing together legal scholars, political scientists and criminologists.