I am a Professor of Computer Science at Teesside University (UK). Before joining Teesside, I was a postdoctoral research fellow at the AI Lab of the Vrije Universiteit Brussel (VUB), funded by the FWO foundation. I obtained my PhD at the AI Center (CENTRIA) of the New University of Lisbon (UNL), and my Master's degree through the European Erasmus Mundus programme in Computational Logic at UNL and the Technical University of Dresden.
My research interests include the evolution of cooperation, evolutionary game theory, intention and plan recognition, the evolution of cognition, knowledge representation and reasoning, and logic programming.

Address: School of Computing, Teesside University
This original and timely monograph describes a unique self-contained excursion that reveals to the readers the roles of two basic cognitive abilities, i.e. intention recognition and arranging commitments, in the evolution of cooperative behavior. This book analyses intention recognition, an important ability that helps agents predict others’ behavior, in its artificial intelligence and evolutionary computational modeling aspects, and proposes a novel intention recognition method. Furthermore, the book presents a new framework for intention-based decision making and illustrates several ways in which an ability to recognize intentions of others can enhance a decision making process. By employing the new intention recognition method and the tools of evolutionary game theory, this book introduces computational models demonstrating that intention recognition promotes the emergence of cooperation within populations of self-regarding agents. Finally, the book describes how commitment provides a pathway to the evolution of cooperative behavior, and how it further empowers intention recognition, thereby leading to a combined improved strategy.
Auditors can play a vital role in ensuring that tech companies develop and deploy AI systems safely, taking into account not just immediate, but also systemic harms that may arise from the use of future AI capabilities. However, to support auditors in evaluating the capabilities and consequences of cutting-edge AI systems, governments may need to encourage a range of potential auditors to invest in new auditing tools and approaches. We use evolutionary game theory to model scenarios where the government wishes to incentivise auditing but cannot discriminate between high and low-quality auditing. We warn that it is alarmingly easy to stumble on 'Adversarial Incentives', which prevent a sustainable market for auditing AI systems from forming. Adversarial Incentives mainly reward auditors for catching unsafe behaviour. If AI companies learn to tailor their behaviour to the quality of audits, the lack of opportunities to catch unsafe behaviour will discourage auditors from innovating. Instead, we recommend that governments always reward auditors, except when they find evidence that those auditors failed to detect unsafe behaviour they should have. These 'Vigilant Incentives' could encourage auditors to find innovative ways to evaluate cutting-edge AI systems. Overall, our analysis provides useful insights for the design and implementation of efficient incentive strategies for encouraging a robust auditing ecosystem.
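To make the contrast between the two incentive schemes concrete, here is a minimal Python sketch of the payment rules as described in the abstract. It is an illustration under stated assumptions, not the paper's actual model; the parameter names (subsidy, bonus) are hypothetical.

```python
def auditor_payment(scheme: str, caught_unsafe: bool, missed_unsafe: bool,
                    subsidy: float = 1.0, bonus: float = 1.0) -> float:
    """Government payment to an auditor for one audited AI system (sketch).

    caught_unsafe  -- the auditor detected unsafe behaviour
    missed_unsafe  -- unsafe behaviour occurred but went undetected
    """
    if scheme == "adversarial":
        # Reward only for catching unsafe behaviour: if companies behave
        # safely around good auditors, payments (and innovation) dry up.
        return bonus if caught_unsafe else 0.0
    if scheme == "vigilant":
        # Pay by default; withhold payment only on a demonstrated miss.
        return 0.0 if missed_unsafe else subsidy
    raise ValueError(f"unknown scheme: {scheme}")

# A high-quality auditor facing a company that behaves safely earns nothing
# under adversarial incentives, but keeps the subsidy under vigilant ones:
print(auditor_payment("adversarial", caught_unsafe=False, missed_unsafe=False))  # 0.0
print(auditor_payment("vigilant", caught_unsafe=False, missed_unsafe=False))     # 1.0
```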
In this paper, we study the problem of cost optimisation of individual-based institutional incentives (reward, punishment, and hybrid) for guaranteeing a certain minimal level of cooperative behaviour in a well-mixed, finite population. In this scheme, the individuals in the population interact via cooperation dilemmas (Donation Game or Public Goods Game) in which institutional reward is carried out only if cooperation is not abundant enough (i.e., the number of cooperators is below a threshold 1 ≤ t ≤ N − 1, where N is the population size); and similarly, institutional punishment is carried out only when defection is too abundant. We study analytically the cases t = 1 for the reward incentive under the small mutation limit assumption and two different initial states, showing that the cost function is always non-decreasing. We derive the neutral drift and strong selection limits when the intensity of selection tends to zero and infinity, respectively. We numerically investigate the problem for other values of t and for population dynamics with arbitrary mutation rates.
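A minimal sketch of the threshold rule described above, assuming a per-capita incentive theta and reading "defection too abundant" as the mirror of "cooperation below t"; the names and the exact trigger are illustrative assumptions, not the paper's notation.

```python
def incentive_cost(j: int, N: int, t: int, theta: float, scheme: str) -> float:
    """Institutional cost in a population state with j cooperators out of N.

    Incentives are applied only when cooperation falls short of the
    threshold t (equivalently, when defection is too abundant).
    """
    if j >= t:
        return 0.0                 # cooperation abundant: no intervention
    if scheme == "reward":
        return theta * j           # pay theta to each cooperator
    if scheme == "punishment":
        return theta * (N - j)     # pay theta to sanction each defector
    raise ValueError(f"unknown scheme: {scheme}")
```

The quantity being optimised is then the expected total cost, i.e. this per-state cost averaged over the stationary distribution of the underlying population dynamics.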
Navigating the intricacies of digital environments demands effective strategies for fostering cooperation and upholding norms. Banning and moderating the content of influential users on social media platforms is often met with shock and awe, yet clearly has profound implications for online behaviour and digital governance. Drawing inspiration from these actions, we model broadcasting retributive measures. Leveraging analytical modeling and extensive agent-based simulations, we investigate how signaling punitive actions can deter antisocial behaviour and promote cooperation in online and multi-agent systems. Our findings underscore the transformative potential of threat signaling in cultivating a culture of compliance and bolstering social welfare, even in challenging scenarios with high costs or complex networks of interaction. This research offers valuable insights into the mechanisms of promoting pro-social behaviour and ensuring behavioural compliance across diverse digital ecosystems.
As artificial intelligence (AI) systems are increasingly embedded in our lives, their presence le... more As artificial intelligence (AI) systems are increasingly embedded in our lives, their presence leads to interactions that shape our behaviour, decision-making and social interactions. Existing theoretical research on the emergence and stability of cooperation, particularly in the context of social dilemmas, has primarily focused on human-to-human interactions, overlooking the unique dynamics triggered by the presence of AI. Resorting to methods from evolutionary game theory, we study how different forms of AI can influence cooperation in a population of humanlike agents playing the one-shot Prisoner's dilemma game. We found that Samaritan AI agents who help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AI that only helps those considered worthy/cooperative, especially in slow-moving societies where change based on payoff difference is moderate (small intensities of selection). Only in fast-moving societies (high intensities of selection), Discriminatory AIs promote higher levels of cooperation than Samaritan AIs. Furthermore, when it is possible to identify whether a co-player is a human or an AI, we found that cooperation is enhanced when human-like agents disregard AI performance. Our findings provide novel insights into the design and implementation of context-dependent AI systems for addressing social dilemmas.
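The "slow-moving" versus "fast-moving" distinction is governed by the intensity of selection in the social learning rule. Below is a minimal sketch of the standard Fermi (pairwise comparison) update commonly used in such evolutionary game theory models; it is an illustration, not necessarily the paper's exact update rule.

```python
import math

def imitation_probability(payoff_self: float, payoff_other: float,
                          beta: float) -> float:
    """Fermi rule: probability that an agent copies a co-player's strategy.

    beta is the intensity of selection: beta -> 0 gives near-random
    imitation (a 'slow-moving' society); large beta makes any payoff
    advantage almost decisive (a 'fast-moving' society).
    """
    return 1.0 / (1.0 + math.exp(-beta * (payoff_other - payoff_self)))

# The same small payoff gap matters little or a lot depending on beta:
print(imitation_probability(1.0, 1.2, beta=0.1))   # ~0.505, nearly random
print(imitation_probability(1.0, 1.2, beta=10.0))  # ~0.88, strongly biased
```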
Understanding the emergence of prosocial behaviours among self-interested individuals is an important problem in many scientific disciplines. Various mechanisms have been proposed to explain the evolution of such behaviours, primarily seeking the conditions under which a given mechanism can induce highest levels of cooperation. As these mechanisms usually involve costs that alter individual pay-offs, it is, however, possible that aiming for highest levels of cooperation might be detrimental for social welfare—the latter broadly defined as the total population pay-off, taking into account all costs involved for inducing increased prosocial behaviours. Herein, by comparing stochastic evolutionary models of two well-established mechanisms of prosocial behaviour—namely, peer and institutional incentives—we demonstrate that the objectives of maximizing cooperation and of maximizing social welfare are often misaligned. First, while peer punishment is often more effective than peer reward in promoting cooperation—especially with a higher impact-to-cost ratio—the opposite is true for social welfare. In fact, welfare typically decreases (increases) with this ratio for punishment (reward). Second, for institutional incentives, while maintaining similar levels of cooperation, rewards result in positive social welfare across a much broader range of parameters. Furthermore, both types of incentives often achieve optimal social welfare when their impact is moderate rather than maximal, indicating that careful planning is essential for costly institutional mechanisms to optimize social outcomes. These findings are consistent across varying mutation rates, selection intensities and game configurations. Overall, we argue for the need of adopting social welfare as the main optimization objective when designing and implementing evolutionary mechanisms for social and collective goods.
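The key distinction here is between the cooperation level and social welfare net of all incentive costs. A minimal sketch for a Donation Game, where each act of cooperation costs the giver c and creates a benefit b; the parameter names and numbers are illustrative assumptions.

```python
def social_welfare(j: int, b: float, c: float, incentive_spent: float) -> float:
    """Total population payoff net of incentive costs, for one round in which
    each of the j cooperators performs a donation (pays c, creates b)."""
    return j * (b - c) - incentive_spent

# Two states with the same cooperation level can differ sharply in welfare
# if sustaining that cooperation requires very different incentive budgets:
print(social_welfare(80, b=2.0, c=1.0, incentive_spent=10.0))  # 70.0
print(social_welfare(80, b=2.0, c=1.0, incentive_spent=60.0))  # 20.0
```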
The adoption of new technologies by firms is a fundamental driver of technological change, enhancing competitiveness across various industries. Recent advancements in information technologies have amplified the strategic significance of technology in the competitive landscape, reshaping global markets and the workplace. Technological innovation continues at a swift pace, but its success hinges on effective adoption. Embracing new technologies sets businesses apart, fostering innovation, and attracting customers and investors. However, the decision to adopt technology poses challenges, especially regarding which technologies to choose in a dynamic market. Firms often invest in technology to gain a competitive edge, potentially neglecting broader social benefits in the process. This chapter summarises the authors' research on the evolutionary dynamics of decision making regarding technology adoption. They employ methods from Evolutionary Game Theory (EGT), exploring scenarios with well-mixed populations and distributed networked environments.
In this paper, we consider the replicator-mutator dynamics for pairwise social dilemmas where the payoff entries are random variables. The randomness is incorporated to take into account the uncertainty that is inevitable in practical applications and may arise from different sources, such as a lack of data for measuring the outcomes, noisy and rapidly changing environments, and unavoidable human estimation errors. We analytically and numerically compute the probability that the replicator-mutator dynamics has a given number of equilibria for four classes of pairwise social dilemmas (Prisoner's Dilemma, Snowdrift Game, Stag-Hunt Game and Harmony Game). As a result, we characterise the qualitative behaviour of such probabilities as a function of the mutation rate. Our results clearly show the influence of the mutation rate and of the uncertainty in the payoff matrix on the number of equilibria in these games. Overall, our analysis provides novel theoretical contributions to the understanding of the impact of uncertainty on behavioural diversity in a complex dynamical system.
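As a concrete illustration of the kind of computation involved, the sketch below samples random payoff entries for a generic two-strategy game and counts the equilibria of the replicator-mutator dynamics, written in its standard selection-mutation form with symmetric mutation rate q. Drawing the payoffs from a standard Gaussian is an illustrative assumption, as are the names.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def num_equilibria(a: float, b: float, c: float, d: float, q: float,
                   tol: float = 1e-9) -> int:
    """Count equilibria in [0, 1] of the replicator-mutator dynamics
    x' = (1-q) x fA + q (1-x) fB - x phi, with fitnesses
    fA = a x + b (1-x), fB = c x + d (1-x) and phi = x fA + (1-x) fB.
    Coefficient arrays are in ascending powers of x."""
    fA = np.array([b, a - b])
    fB = np.array([d, c - d])
    x = np.array([0.0, 1.0])
    xc = np.array([1.0, -1.0])          # the polynomial 1 - x
    phi = P.polyadd(P.polymul(x, fA), P.polymul(xc, fB))
    g = P.polyadd(P.polyadd(P.polymul((1 - q) * x, fA),
                            P.polymul(q * xc, fB)),
                  -P.polymul(x, phi))   # g is a cubic in x
    roots = np.roots(g[::-1])           # np.roots expects descending order
    real = roots[np.abs(roots.imag) < tol].real
    return int(np.sum((real > -tol) & (real < 1 + tol)))

# Empirical distribution of the number of equilibria under Gaussian payoffs:
rng = np.random.default_rng(0)
counts = np.bincount([num_equilibria(*rng.standard_normal(4), q=0.05)
                      for _ in range(10_000)], minlength=4)
print(counts / counts.sum())
```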
Studying social dilemmas prompts the question of how cooperation can emerge in situations where individuals are expected to act selfishly. Here, in the framework of the one-shot Public Goods Game (PGG), we introduce the concept that individuals can adjust their behaviour based on the cooperative commitments made by other players in the group prior to the actual PGG interaction. To this end, we establish a commitment threshold that group members must meet for a commitment to be formed. We explore the effects of punishing commitment non-compliant players (those who commit and defect if the commitment is formed) and rewarding commitment-compliant players (those who commit and cooperate if the commitment is formed). In the presence of commitment and absence of an incentive mechanism, we observe that conditional behaviour based on commitment alone can enhance cooperation, especially when considering a specific commitment threshold value. In the presence of punishment, our results suggest that the survival of cooperation is most likely at intermediate commitment thresholds. Notably, cooperation is maximised at high commitment thresholds, when punishment occurs more frequently. Moreover, even when cooperation rarely survives, a cyclic behaviour emerges, facilitating the persistence of cooperation. For the reward case, we found that cooperation is highly frequent regardless of the commitment threshold adopted.
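A minimal sketch of the group mechanics described above, assuming a group of size n, contribution cost c, multiplication factor r and a commitment threshold m; all names and numbers are illustrative, not the paper's notation.

```python
def commitment_formed(n_committers: int, m: int) -> bool:
    """A commitment is formed only if at least m group members commit."""
    return n_committers >= m

def pgg_payoffs(contributions: list[bool], c: float, r: float) -> list[float]:
    """Standard one-shot PGG: contributions are multiplied by r and shared
    equally among the group; contributors additionally pay the cost c."""
    n = len(contributions)
    pot = r * c * sum(contributions)
    share = pot / n
    return [share - (c if coop else 0.0) for coop in contributions]

# Example: in a group of 4 with threshold m = 3, only 2 commit, so no
# commitment forms and conditional players fall back to defection.
print(commitment_formed(2, 3))                            # False
print(pgg_payoffs([True, True, False, False], c=1.0, r=3.0))
# [0.5, 0.5, 1.5, 1.5] -- defectors free-ride on the two contributors
```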
Spending by the UK's National Health Service (NHS) on independent healthcare treatment has increased in recent years and is predicted to sustain its upward trend with the forecast population growth. Some have viewed this increase as an attempt not to expand patients' choices but to privatise public healthcare. This debate poses a social dilemma: should the NHS stop cooperating with private providers? This paper contributes to healthcare economic modelling by investigating the evolution of cooperation among three proposed populations: Public Healthcare Providers, Private Healthcare Providers and Patients. The Patient population is included as a main player in the decision-making process by expanding patients' choices of treatment. We develop a generic basic model that measures the cost of healthcare provision based on given parameters, such as the NHS and private healthcare providers' costs of investment in both sectors, the cost of treatments and the gained benefits. Patients' costly punishment is introduced as a mechanism to enhance cooperation among the three populations. Our findings show that cooperation can be improved with the introduction of punishment by patients against defecting providers. Although punishment increases cooperation, it is very costly considering the small improvement in cooperation in comparison to the basic model.
We argue that the emotion of guilt, in the sense of actual harm done to others from inappropriate action or inaction, is worthwhile to incorporate in evolutionary game models, as it can lead to increased cooperation, whether by promoting apology or by inhibiting defection. The study thereof can then transpire to abstract and concrete populations of non-human agents.
Our research is concerned with studying behavioural changes within a dynamic system, i.e. health care, and their effects on the decision-making process. Evolutionary Game Theory is applied to investigate the most probable strategies adopted by individuals in a finite population, based on the interactions among them, with an eye to modelling behaviour using the following metrics: cost of investment, cost of management, cost of treatment, reputation benefit for the provider(s), and the gained health benefit for the patient.
Upon starting a collective endeavour, it is important to understand your partners’ preferences and how strongly they commit to a common goal. Establishing a prior commitment or agreement in terms of posterior benefits and consequences from those engaging in it provides an important mechanism for securing cooperation. Resorting to methods from Evolutionary Game Theory (EGT), here we analyse how prior commitments can also be adopted as a tool for enhancing coordination when its outcomes exhibit an asymmetric payoff structure, in both pairwise and multi-party interactions. Arguably, coordination is more complex to achieve than cooperation since there might be several desirable collective outcomes in a coordination problem (compared to mutual cooperation, the only desirable collective outcome in cooperation dilemmas). Our analysis, both analytically and via numerical simulations, shows that whether prior commitment would be a viable evolutionary mechanism for enhancing coordination and ...
When making a mistake, individuals can apologize to secure further cooperation, even if the apology is costly. Similarly, individuals arrange commitments to guarantee that an action such as a cooperative one is in the others' best interest, and thus will be carried out to avoid eventual penalties for commitment failure. Hence, both apology and commitment should go side by side in behavioral evolution. Here we provide a computational model showing that apologizing acts are rare in non-committed interactions, especially whenever cooperation is very costly, and that arranging prior commitments can considerably increase the frequency of such behavior. In addition, we show that in both cases, with or without commitments, apology works only if it is sincere, i.e. costly enough. Most interestingly, our model predicts that individuals tend to use much costlier apology in committed relationships than otherwise, because it helps better identify free-riders such as fake committers: `commit...
When starting a new collaborative endeavor, it pays to establish upfront how strongly your partners commit to the common goal and what compensation can be expected in case the collaboration is violated. Diverse examples in biological and social contexts have demonstrated the pervasiveness of making prior agreements on posterior compensations, suggesting that this behavior could have been shaped by natural selection (Nesse, 2001; Han, 2013). We discuss here our work in (Han et al., 2013), wherein we analyze the evolutionary relevance of such a commitment strategy in the context of the pairwise one-shot Prisoner’s Dilemma (PD). The commitment strategy proposes, prior to any interaction, that its co-player commit to cooperate in the PD, paying a cost to render the commitment deal reliable (e.g. the cost of hiring a lawyer to draw up a legal contract). Those players that commit and then default (i.e., defect) have to compensate their non-defaulting co-player. Resorting to methods of Evolutionary Game Theory (Sigmund, 2010), we analyze, both mathematically and using numerical simulations, the viability of such a commitment strategy in the co-presence of different free-riding strategies, including the one that commits but then defaults on the commitment, and the one that commits and cooperates only if someone else pays the cost of arranging the commitment (that is, this strategy defects if there is no commitment in place). Our results show that when the cost of arranging a commitment deal is justified with respect to the benefit of cooperation, substantial levels of cooperation can be achieved, even without repeated interactions. On the one hand, commitment proposers can get rid of those individuals that agree to cooperate yet act differently, and, on the other hand, they can maintain a sufficient advantage over those that cooperate only if the commitment is set up by someone else, because a commitment proposer will cooperate with players like herself, while the latter defect among themselves.
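To illustrate why sufficiently high compensation deters fake committers, here is a minimal sketch of the focal pairwise encounter, with illustrative names: standard PD payoffs R, S, T, P, an arrangement cost eps paid by the proposer, and a compensation delta owed by a player who commits and then defects. The numbers are assumptions for illustration, not the paper's parameterisation.

```python
R, S, T, P = 3.0, 0.0, 5.0, 1.0   # standard PD ordering: T > R > P > S

def proposer_vs_fake(eps: float, delta: float) -> tuple[float, float]:
    """A committed proposer (who cooperates) meets a fake committer
    (who accepts the deal, then defects and owes compensation)."""
    proposer = S - eps + delta    # suckered, paid setup cost, compensated
    fake = T - delta              # temptation payoff minus compensation
    return proposer, fake

# If delta is large enough, faking the commitment no longer pays:
print(proposer_vs_fake(eps=0.25, delta=4.0))  # (3.75, 1.0)
```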
Presentation at the 18th International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR’2012), Mérida, Venezuela, 10-15 March 2012
Presentation at the AAAI Fall Symposium on Proactive Assistive Agents, Virginia, USA, 11-13 November 2010
Presentation at Artificial Intelligence Techniques for Ambient Intelligence, Kuala Lumpur, 18 July 2010
Presentation at the ICAPS "Goal, Activity and Plan Recognition" workshop, Freiburg, Germany, 12 June 2011
Presentation at the Portuguese Conference on Artificial Intelligence (EPIA’2011), Lisbon, Portugal, 10-13 October 2011
Presentation at the Intl. Symp. on Computational Intelligence for Engineering Systems, ISEP, Porto, Portugal, November 2009
Presentation at the Portuguese Conference on Artificial Intelligence (EPIA’2013), Angra do Heroísmo, September 2013
Innovation, creativity, and competition are some of the fundamental underlying forces driving the advances in Artificial Intelligence (AI). This race for technological supremacy creates a complex ecology of choices that may lead to negative consequences, in particular when ethical and safety procedures are underestimated or even ignored. Here we resort to a novel game-theoretical framework to describe the ongoing AI bidding war, also allowing for the identification of procedures on how to influence this race to achieve desirable outcomes. By exploring the similarities between the ongoing competition in AI and evolutionary systems, we show that the timelines in which AI supremacy can be achieved play a crucial role in the evolution of safety-prone behaviour and in whether influencing procedures are required. When this supremacy can be achieved in the short term (near AI), the significant advantage gained from winning the race leads to the dominance of those who completely ignore safety precautions to gain extra speed, rendering the presence of reciprocal behaviour irrelevant. On the other hand, when such supremacy is a distant future, reciprocating on others’ safety behaviour provides in itself an efficient solution, even when monitoring of unsafe development is hard. Our results suggest under what conditions AI safety behaviour requires additional supporting procedures and provide a basic framework to model them.
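The role of the timescale can be illustrated with a toy calculation (not the paper's actual model): suppose skipping safety doubles development speed, but each unsafe round independently risks a disaster that voids the prize. The risk then compounds with the length of the race, which is why near and distant AI regimes differ. All parameters below are illustrative assumptions.

```python
def unsafe_expected_prize(W: int, B: float, s: float, p_risk: float) -> float:
    """Expected prize for a team that skips safety: it needs W / s rounds
    of progress, and each round risks a disaster with probability p_risk."""
    rounds = W / s
    return B * (1.0 - p_risk) ** rounds   # prize voided if disaster occurs

# Near AI (small W): compounded risk is negligible, racing unsafely pays.
print(unsafe_expected_prize(W=2, B=10.0, s=2.0, p_risk=0.1))   # 9.0
# Distant AI (large W): risk compounds and the expected prize collapses.
print(unsafe_expected_prize(W=60, B=10.0, s=2.0, p_risk=0.1))  # ~0.42
```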
Evolutionary Game Theory, an application of game theory to evolving populations in biology introduced by Maynard Smith and Price in 1973, provides a mathematical framework describing the dynamics of competition and strategic behaviour according to the Darwinian principle of natural selection. Over the last 50 years, the theory has helped us understand a wide range of biological and social phenomena, such as the evolution of cooperation and eusociality and the dynamics of collective action. In complex biological and social systems, individuals interact with innumerable others simultaneously and might have different preferences or interests, leading to the adoption of distinct strategies. Moreover, individuals' interactions are often affected by constantly changing environments, making it difficult to assign deterministic payoffs that correctly characterise these interactions. These complex, dynamical systems are often modeled and studied via random multi-player multi-strategy games, in which the payoff entries are random variables.

Fig. 1. (Upper row) Impact of the mutation rate on the probability of having a certain number of internal equilibrium points for three popular social dilemmas (two-player two-strategy games). (Lower row) Probability of having a certain number of internal equilibrium points for two-strategy games with three, four and five players, without mutation.
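For the multi-player games in the lower row of Fig. 1, interior equilibria are the roots in (0,1) of a Bernstein-form polynomial in the payoff differences between the two strategies, following the standard formulation for d-player two-strategy games. The sketch below estimates the distribution of the number of such equilibria by sampling Gaussian payoff differences; the names and the Gaussian choice are illustrative assumptions.

```python
import numpy as np
from math import comb

def internal_equilibria(beta: np.ndarray, tol: float = 1e-9) -> int:
    """Count internal equilibria of a d-player two-strategy game.

    An interior equilibrium x in (0,1) solves
    sum_k C(d-1, k) x^k (1-x)^(d-1-k) beta_k = 0, where beta_k is the
    payoff difference between the two strategies when facing k co-players
    of the first type (len(beta) = d).
    """
    n = len(beta) - 1                      # n = d - 1 co-players
    coeffs = np.zeros(n + 1)               # power-basis coefficients
    for k, bk in enumerate(beta):
        # Expand C(n,k) x^k (1-x)^(n-k) via the binomial theorem.
        for j in range(n - k + 1):
            coeffs[k + j] += comb(n, k) * comb(n - k, j) * ((-1) ** j) * bk
    roots = np.roots(coeffs[::-1])         # descending order for np.roots
    real = roots[np.abs(roots.imag) < tol].real
    return int(np.sum((real > tol) & (real < 1 - tol)))

# Monte Carlo estimate of the equilibrium-count distribution for a
# five-player game with standard Gaussian payoff differences:
rng = np.random.default_rng(1)
counts = np.bincount([internal_equilibria(rng.standard_normal(5))
                      for _ in range(10_000)], minlength=5)
print(counts / counts.sum())
```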
Rapid technological advancements in Artificial Intelligence (AI), together with the growing deployment of AI in new application domains such as robotics, face recognition, self-driving cars and genetics, are generating an anxiety which makes companies, nations and regions think they should respond competitively. AI appears, for instance, to have instigated a race among chip builders, simply because of the requisites it imposes on that technology. Governments are furthermore stimulating economic investments in AI research and development as they fear missing out, resulting in a racing narrative that further increases the anxiety among stakeholders.
Rapid technological advancements in Artificial Intelligence (AI), as well as the growing deployment of intelligent technologies in new application domains, have generated serious anxiety and a fear of missing out among different stakeholders, fostering a racing narrative. Whether real or not, the belief in such a race for domain supremacy through AI can make it real simply from its consequences. These consequences may be negative, as racing for technological supremacy creates a complex ecology of choices that could push stakeholders to underestimate or even ignore ethical and safety procedures. Given the breadth and depth of AI and its advances, it is difficult to assess which technology needs regulation and when. As there is no easy access to data describing this alleged AI race, theoretical models are necessary to understand its potential dynamics, allowing for the identification of when procedures need to be put in place to favour outcomes beneficial for all. We show in [Han et al. 2020] that, next to the risks of setbacks and of being reprimanded for unsafe behaviour, the timescale in which domain supremacy can be achieved plays a crucial role. When this can be achieved in the short term, those who completely ignore the safety precautions are bound to win the race, but at a cost to society, apparently requiring regulatory actions [Han et al. 2021]. For a long-term situation, conditions can be identified that require the promotion of risk-taking, as opposed to compliance with safety regulations, in order to improve social welfare. These results remain robust both when two or several actors are involved in the development process and when the negative outcomes affect either the unsafe actor or the entire group. Thus, when defining codes of conduct and regulatory policies for AI applications, a clear understanding of the timescale of the race is required, as this may induce important non-trivial effects. The current work is based on the publication [Han et al. 2020], which has not been presented at a major AI conference with archival proceedings before.