Abstract
Norms can be used in multi-agent systems for defining patterns of behaviour in terms of permissions, prohibitions and obligations that are addressed to agents playing a specific role. Agents may play different roles during their execution and they may even play different roles simultaneously. As a consequence, agents may be affected by inconsistent norms; e.g., an agent may be simultaneously obliged and forbidden to reach a given state of affairs. Dealing with this type of inconsistency is one of the main challenges of normative reasoning. Existing approaches tackle this problem by using a static and predefined order that determines which norm should prevail in the case where two norms are inconsistent. One main drawback of these proposals is that they allow only pairwise comparison of norms; it is not clear how agents may use the predefined order to select a subset of norms to abide by from a set of norms containing multiple inconsistencies. Furthermore, in dynamic and non-deterministic environments it can be difficult or even impossible to specify an order that resolves inconsistencies satisfactorily in all potential situations. In response to these two problems, we propose a mechanism with which an agent can dynamically compute a preference order over subsets of its competing norms by considering the coherence of its cognitive and normative elements. Our approach allows flexible resolution of normative inconsistencies, tailored to the current circumstances of the agent. Moreover, our solution can be used to determine norm prevalence among a set of norms containing multiple inconsistencies.
Notes
In this paper we will use the terms inconsistency and normative inconsistency as synonyms.
The existence of normative agents that mediate between external agents or humans and multi-agent systems is not new. For example, in Electronic Institutions [19] there are governor agents that guarantee that external agents comply with the norms of the institution.
In this paper, we regard norms as conditional expressions that specify under which general circumstances they must be instantiated.
As defined in [6], the desirability of a formula \(\gamma \) represents to what extent an agent wants to achieve a situation in which \(\gamma \) holds.
According to [6] intentions are not considered as a basic attitude. Thus, the intentions of n-BDI agents are generated on-line from the agents’ beliefs and desires. The intentionality degree of a formula \(\gamma \) is the consequence of finding a best feasible plan that achieves a state of the world in which \(\gamma \) holds.
See [10] for the pseudocode of the algorithm executed by n-BDI agents.
In this case the static order will determine that just the instance created from the most salient norm should prevail. In Sect. 5 we demonstrate that approaches relying solely on salience to resolve normative inconsistencies can lead to undesired results, even when only two instances are considered.
Recall that deductive coherence is a symmetric relationship and, as a consequence, constraints in the coherence graph are defined over the set of all subsets of two elements of \(\mathcal {V}\).
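This constraint-satisfaction view of coherence (in the style of Thagard and Verbeurgt, cited below) can be sketched as a brute-force search over bipartitions of \(\mathcal {V}\) into accepted and rejected elements. The function name and data layout are illustrative assumptions, not the algorithm of the paper; symmetry is captured by keying constraints on unordered two-element subsets.

```python
from itertools import combinations

def max_coherence_partition(vertices, positive, negative):
    """Brute-force search for a maximally coherent partition.

    `positive` and `negative` map unordered pairs (frozensets of two
    vertices) to constraint weights, mirroring a symmetric coherence
    relation. A positive constraint is satisfied when both vertices
    are accepted together or rejected together; a negative constraint
    is satisfied when the pair is split.
    """
    best, best_score = set(), float("-inf")
    for r in range(len(vertices) + 1):
        for accepted in combinations(vertices, r):
            acc = set(accepted)
            score = 0.0
            for pair, w in positive.items():
                u, v = tuple(pair)
                if (u in acc) == (v in acc):
                    score += w
            for pair, w in negative.items():
                u, v = tuple(pair)
                if (u in acc) != (v in acc):
                    score += w
            if score > best_score:
                best, best_score = acc, score
    return best, best_score
```

Exhaustive enumeration is exponential in \(|\mathcal {V}|\), so this sketch is only practical for small graphs; coherence maximisation is NP-hard in general.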
Recall that the expressions in N contain a norm and the salience of this norm.
Recall that n-BDI agents translate instances into desires that will be considered for deriving intentions. Thus, intentions are not a basic attitude and there is no direct link between instances and intentions. As a consequence, the set of intentions is not considered for resolving inconsistencies.
Notice that we assume that the agent performs a reasoning process, such as the one described in [6], for inferring mental formulas (e.g., \( belief (a\wedge b, min\{\rho _{a},\rho _{b}\})\)) that are a conjunction of separate mental formulas (e.g., \( belief (a, \rho _{a})\) and \( belief (b, \rho _{b})\)).
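As a minimal illustration of this combination rule, taking the minimum of the certainty degrees (the Gödel t-norm); the function name and dictionary representation of graded beliefs are assumptions for the sketch:

```python
def conjoin_beliefs(beliefs):
    """Certainty of a conjunction of graded beliefs.

    `beliefs` maps formula labels to certainty degrees in [0, 1]; the
    conjunction takes the minimum of the degrees, as in
    belief(a and b, min{rho_a, rho_b}).
    """
    formula = " & ".join(sorted(beliefs))
    return formula, min(beliefs.values())
```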
Note that agents remain under the influence of an instance even if they stop enacting its target role. Because of this, we have not defined an incoherence relationship between instances and beliefs representing the fact that the agent is no longer playing the target role of an instance.
In particular, the conditional order prefers the prohibition instance to the permission instance iff
$$\begin{aligned} \frac{\rho _{s}^{F}+\rho _{c}^{F}}{2} > \frac{\rho _{s}^{P}+\rho _{c}^{P}}{2} \end{aligned}$$
otherwise the permission instance prevails.
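This comparison, and the variants used later (which add or subtract further graded terms), amounts to comparing the average of the degrees backing each instance. A minimal sketch, where the function name and list representation are illustrative assumptions:

```python
def prohibition_prevails(prohibition_terms, permission_terms):
    """Conditional order over two competing instances (illustrative).

    Each argument lists the graded terms backing one instance, e.g. the
    salience rho_s and certainty rho_c of the prohibition. The
    prohibition prevails iff the average of its terms strictly exceeds
    the average of the permission's terms; on a tie or below, the
    permission prevails.
    """
    return (sum(prohibition_terms) / len(prohibition_terms)
            > sum(permission_terms) / len(permission_terms))
```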
In each run we generate a random number for each instance. We define that the instance can be fulfilled when this number is greater than 1 minus its ease of compliance.
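The sampling rule above can be sketched as follows; the function name and the injectable random source (used here to make the check deterministic) are assumptions for illustration:

```python
import random

def can_fulfil(ease_of_compliance, rng=random.random):
    """Sample whether an instance can be fulfilled in one run.

    Draw a number in [0, 1); the instance can be fulfilled when the
    draw is greater than 1 minus its ease of compliance, so an ease of
    0.9 leaves only a 0.1 chance of being unfulfillable.
    """
    return rng() > 1.0 - ease_of_compliance
```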
Note that we only consider the runs in which coherence selects one instance (i.e., the inconsistency does not remain unresolved) and this instance cannot be fulfilled.
Note that we only consider the runs in which both instances can be fulfilled and coherence selects the least salient instance (i.e., the inconsistency does not remain unresolved).
It may be the case that the two norms were instantiated at two points in the past, when the webManager knew that its user was an academic and a university member. However, the webManager cannot determine in the current situation whether its user is still the target of the two instances.
In particular, the conditional order prefers the prohibition instance to the permission instance iff
$$\begin{aligned} \frac{\rho _{s}^{F}+\rho _{c}^{F}+\rho _{\textit{universityMember}}}{3} > \frac{\rho _{s}^{P}+\rho _{c}^{P}+\rho _{ academicStaff }}{3} \end{aligned}$$
otherwise the permission instance prevails.
Notice that the two addressing beliefs have a certainty of 1.
The conditional order prefers the prohibition instance to the permission instance iff
$$\begin{aligned} \frac{\rho _{s}^{F}+\rho _{c}^{F}}{2} > \frac{\rho _{s}^{P}+\rho _{c}^{P}+\rho _{ highTraffic (\textit{slow})}-\rho _{ lowTraffic (\textit{slow})}}{4} \end{aligned}$$
otherwise the permission instance prevails.
In particular, the conditional order prefers the prohibition instance to the permission instance iff
$$\begin{aligned} \frac{\rho _{s}^{F}+\rho _{c}^{F}-\rho _{\textit{use}( fast )}}{3} > \frac{\rho _{s}^{P}+\rho _{c}^{P}}{2} \end{aligned}$$
otherwise the permission instance prevails.
We have considered alternative methods for calculating the conditional order (e.g., in the previous experiment on activation and expiration beliefs we also tried calculating the conditional order as \(\frac{\rho _{s}^{F}+\rho _{c}^{F}+\rho _{ highTraffic (w)}-(1-\rho _{ lowTraffic (w)})}{4}\)), but these methods also produced undesirable results.
Note that since we are assuming that the norm, activation and expiration conditions are literals, the substitution for creating an instance from a norm is empty.
References
Alchourrón, C. E., & Bulygin, E. (1971). Normative systems. Wien: Springer.
Aphale, M., Norman, T., & Sensoy, M. (2013). Goal-directed policy conflict detection and prioritisation. In H. Aldewereld & J. S. Sichman (Eds.), Coordination, organizations, institutions, and norms in agent systems VIII (pp. 87–104). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.
Bourdieu, P. (1998). Practical reason: On the theory of action. Stanford: Stanford University Press.
Broersen, J., Dastani, M., Hulstijn, J., Huang, Z., & van der Torre, L. (2001). The BOID architecture: Conflicts between beliefs, obligations, intentions and desires. In Proceedings of the international conference on autonomous agents (pp. 9–16). New York: ACM Press.
Campenní, M., Andrighetto, G., Cecconi, F., & Conte, R. (2009). Normal = normative? The role of intelligent agents in norm innovation. Mind & Society, 8(2), 153–172.
Casali, A., Godo, L., & Sierra, C. (2011). A graded BDI agent model to represent and reason about preferences. Artificial Intelligence, 175(7–8), 1468–1478.
Conte, R., & Dignum, F. (2001). From social monitoring to normative influence. Journal of Artificial Societies and Social Simulation, 4(2), 7.
Criado, N., Argente, E., Noriega, P., & Botti, V. (2013). Human-inspired model for norm compliance decision making. Information Sciences, 245, 218–239.
Criado, N., Argente, E., Noriega, P., & Botti, V. (2013). Manea: A distributed architecture for enforcing norms in open mas. Engineering Applications of Artificial Intelligence, 26(1), 76–95.
Criado, N., Argente, E., Noriega, P., & Botti, V. (2013). Reasoning about constitutive norms in BDI agents. Logic Journal of IGPL, 22(1), 66–93.
Criado, N., Argente, E., Noriega, P., & Botti, V. (2014). Reasoning about norms under uncertainty in dynamic environments. International Journal of Approximate Reasoning, 55(9), 2049–2070.
Dignum, F., Kinny, D., & Sonenberg, L. (2002). From desires, obligations and norms to goals. Cognitive Science Quarterly, 2(3–4), 407–430.
Dignum, F., Morley, D., Sonenberg, E. A., & Cavedon, L. (2000). Towards socially sophisticated bdi agents. In Proceedings of the fourth international conference on multiagent systems (pp. 111–118). New York: IEEE Press.
Dignum, F. P. M. (1999). Autonomous agents with norms. Journal of Artificial Intelligence and Law, 7(1), 69–79.
Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2), 321–357.
Dung, P. M., & Sartor, G. (2011). The modular logic of private international law. Artificial Intelligence and Law, 19(2–3), 233–261.
Epstein, J. M. (2001). Learning to be thoughtless: Social norms and individual computation. Computational Economics, 18(1), 9–24.
Esteva, F., & Godo, L. (2001). Monoidal t-norm based logic: Towards a logic for left-continuous t-norms. Fuzzy Sets and Systems, 124(3), 271–288.
Esteva, M., Rodríguez-Aguilar, J. A., Sierra, C., Garcia, P., & Arcos, J. L. (2001). On the formal specification of electronic institutions. In F. Dignum & C. Sierra (Eds.), Agent mediated electronic commerce (Vol. 1991, pp. 126–147). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.
Fitting, M. (1996). First-order logic and automated theorem proving. New York: Springer.
Gaertner, D. (2009). Argumentation and normative reasoning. Ph.D. thesis, Imperial College.
Gottwald, S. (2014). Many-valued logic. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Stanford: Metaphysics Research Lab, Stanford University.
Joseph, S., Sierra, C., Schorlemmer, M., & Dellunde, P. (2010). Deductive coherence and norm adoption. Logic Journal of the IGPL, 18, 118–156.
King, T. C., Dignum, V., & van Riemsdijk, M. B. (2014). Re-checking normative system coherence. In T. Balke, F. Dignum, M. B. van Riemsdijk, & A. K. Chopra (Eds.), Coordination, organizations, institutions, and norms in agent systems IX (pp. 275–290). Lecture Notes in Computer Science. New York: Springer.
Kollingbaum, M. J. (2005). Norm-governed practical reasoning agents. Ph.D. thesis, University of Aberdeen.
Kollingbaum, M. J., & Norman, T. J. (2004). Strategies for resolving norm conflict in practical reasoning. In Proceedings of the ECAI workshop on coordination in emergent agent societies (pp. 1–10).
Kollingbaum, M. J., Norman, T. J., Preece, A., & Sleeman, D. (2007). Norm conflicts and inconsistencies in virtual organisations. In P. Noriega, J. Vázquez-Salceda, G. Boella, O. Boissier, V. Dignum, N. Fornara, & E. Matson (Eds.), Coordination, organizations, institutions, and norms in agent systems II (Vol. 4386, pp. 245–258). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.
Leite, J., Alferes, J., & Pereira, L. (2001). Multi-dimensional dynamic knowledge representation. In T. Eiter, W. Faber, & M. Truszczyński (Eds.), Logic programming and nonmonotonic reasoning (Vol. 2173, pp. 365–378). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.
Li, T., Balke, T., De Vos, M., Satoh, K., & Padget, J. (2013). Detecting conflicts in legal systems. In Y. Motomura, A. Butler, & D. Bekki (Eds.), New frontiers in artificial intelligence (Vol. 7856, pp. 174–189). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.
López y López, F., Luck, M., & d’Inverno, M. (2006). A normative framework for agent-based systems. Computational & Mathematical Organization Theory, 12(2), 227–250.
Meneguzzi, F., & Luck, M. (2009). Norm-based behaviour modification in BDI agents. In Proceedings of the international conference on autonomous agents and multiagent systems (pp. 177–184).
Modgil, S., & Luck, M. (2009). Argumentation based resolution of conflicts between desires and normative goals. In I. Rahwan & P. Moraitis (Eds.), Argumentation in multi-agent systems (Vol. 5384, pp. 19–36). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.
Moses, Y., & Tennenholtz, M. (1995). Artificial social systems. Computers and Artificial Intelligence, 14(6), 533–562.
Oren, N., Luck, M., Miles, S., & Norman, T. J. (2008). An argumentation inspired heuristic for resolving normative conflict. In Proceedings of the workshop on coordination, organizations, institutions, and norms in agent systems (pp. 41–56).
Oren, N., Panagiotidi, S., Vázquez-Salceda, J., Modgil, S., Luck, M., & Miles, S. (2009). Towards a formalisation of electronic contracting environments. In Proceedings of coordination, organizations, institutions and norms in agent systems IV (pp. 156–171).
Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge: Cambridge University Press.
Simon, H. A. (1982). Models of bounded rationality: Empirically grounded economic reason (Vol. 3). Cambridge: MIT Press.
Singh, M. P. (1999). An ontology for commitments in multiagent systems. Journal of Artificial Intelligence and Law, 7(1), 97–113.
Thagard, P. (2002). Coherence in thought and action. Cambridge: The MIT Press.
Thagard, P., & Verbeurgt, K. (1998). Coherence as constraint satisfaction. Cognitive Science, 22(1), 1–24.
Vasconcelos, W. W., Kollingbaum, M. J., & Norman, T. J. (2009). Normative conflict resolution in multi-agent systems. Autonomous Agents and Multi-Agent Systems, 19(2), 124–152.
Villatoro, D., Andrighetto, G., Sabater-Mir, J., & Conte, R. (2011). Dynamic sanctioning for robust and cost-efficient norm compliance. In Proceedings of the IJCAI (Vol. 11, pp. 414–419).
von Wright, G. H. (1963). Norm and action: A logical enquiry. London: Routledge & Kegan Paul.
Cite this article
Criado, N., Black, E. & Luck, M. A coherence maximisation process for solving normative inconsistencies. Auton Agent Multi-Agent Syst 30, 640–680 (2016). https://doi.org/10.1007/s10458-015-9300-x