
A coherence maximisation process for solving normative inconsistencies

Autonomous Agents and Multi-Agent Systems

Abstract

Norms can be used in multi-agent systems for defining patterns of behaviour in terms of permissions, prohibitions and obligations that are addressed to agents playing a specific role. Agents may play different roles during their execution and they may even play different roles simultaneously. As a consequence, agents may be affected by inconsistent norms; e.g., an agent may be simultaneously obliged and forbidden to reach a given state of affairs. Dealing with this type of inconsistency is one of the main challenges of normative reasoning. Existing approaches tackle this problem by using a static and predefined order that determines which norm should prevail in the case where two norms are inconsistent. One main drawback of these proposals is that they allow only pairwise comparison of norms; it is not clear how agents may use the predefined order to select a subset of norms to abide by from a set of norms containing multiple inconsistencies. Furthermore, in dynamic and non-deterministic environments it can be difficult or even impossible to specify an order that resolves inconsistencies satisfactorily in all potential situations. In response to these two problems, we propose a mechanism with which an agent can dynamically compute a preference order over subsets of its competing norms by considering the coherence of its cognitive and normative elements. Our approach allows flexible resolution of normative inconsistencies, tailored to the current circumstances of the agent. Moreover, our solution can be used to determine norm prevalence among a set of norms containing multiple inconsistencies.



Notes

  1. In this paper we will use the terms inconsistency and normative inconsistency as synonyms.

  2. The existence of normative agents that mediate between external agents or humans and multi-agent systems is not new. For example, in Electronic Institutions [19] there are governor agents that guarantee that external agents comply with the norms of the institution.

  3. In this paper, we regard norms as conditional expressions that specify under which general circumstances they must be instantiated.

  4. As defined in [6], the desirability of a formula \(\gamma \) represents to what extent an agent wants to achieve a situation in which \(\gamma \) holds.

  5. According to [6] intentions are not considered as a basic attitude. Thus, the intentions of n-BDI agents are generated on-line from the agents’ beliefs and desires. The intentionality degree of a formula \(\gamma \) is the consequence of finding a best feasible plan that permits a state of the world where \(\gamma \) holds to be achieved.

  6. See [10] for the pseudocode of the algorithm executed by n-BDI agents.

  7. In this case the static order will determine that just the instance created out of the most salient norm should prevail. In Sect. 5 we demonstrate that approaches that only rely on salience to solve normative inconsistencies can lead to undesired results, even if only two instances are considered.

    Fig. 1: Instances and norms affecting the webManager agent. Ellipses represent the normative elements. Coherence relationships among these elements are represented by continuous lines, whereas dashed lines represent incoherence relationships.

  8. Recall that deductive coherence is a symmetric relationship and, as a consequence, constraints in the coherence graph are defined over the set of all two-element subsets of \(\mathcal {V}\).
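Such symmetric pairwise constraints echo the "coherence as constraint satisfaction" formulation of Thagard and Verbeurgt [40]: partition the elements into accepted and rejected so as to maximise the total weight of satisfied constraints. The following brute-force sketch is a hypothetical toy encoding of that idea, not the paper's algorithm; keying constraints by unordered pairs (frozensets) makes the symmetry explicit.

```python
from itertools import combinations

def coherence(vertices, constraints):
    """Brute-force coherence maximisation (toy sketch).

    `constraints` maps each unordered pair frozenset({u, v}) to a
    (kind, weight) tuple, where kind is '+' (coherence) or '-'
    (incoherence). Returns an accepted subset maximising the total
    weight of satisfied constraints, together with that weight.
    """
    best, best_w = None, float('-inf')
    for r in range(len(vertices) + 1):
        for accepted in combinations(vertices, r):
            acc = set(accepted)
            w = 0.0
            for pair, (kind, weight) in constraints.items():
                u, v = tuple(pair)
                if kind == '+' and ((u in acc) == (v in acc)):
                    w += weight  # positive constraint: both on the same side
                elif kind == '-' and ((u in acc) != (v in acc)):
                    w += weight  # negative constraint: on opposite sides
            if w > best_w:
                best, best_w = acc, w
    return best, best_w
```

Enumerating all subsets is exponential, so this is only viable for the handful of normative elements involved in a local inconsistency; coherence maximisation is NP-hard in general [40].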

  9. Recall that the expressions in N contain a norm and the salience of this norm.

  10. Recall that n-BDI agents translate instances into desires that will be considered for deriving intentions. Thus, intentions are not a basic attitude and there is not a direct link between instances and intentions. As a consequence, the set of intentions is not considered for resolving inconsistencies.

  11. Notice that we assume that the agent performs a reasoning process, such as the one described in [6], for inferring mental formulas (e.g., \( belief (a\wedge b, min\{\rho _{a},\rho _{b}\})\)) that are a conjunction of separate mental formulas (e.g., \( belief (a, \rho _{a})\) and \( belief (b, \rho _{b})\)).
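Under the min-based conjunction assumed in this note, the inference can be sketched as follows. This is a toy illustration only: `conjoin_beliefs`, its dictionary representation of graded beliefs, and the string encoding of formulas are assumptions, not the agent reasoning process of [6].

```python
def conjoin_beliefs(beliefs, *props):
    """Derive a conjunctive graded belief from separate graded beliefs,
    taking the certainty of the conjunction to be the minimum of the
    conjuncts' certainties (as assumed in note 11).

    `beliefs` maps each proposition to its certainty degree in [0, 1].
    """
    missing = [p for p in props if p not in beliefs]
    if missing:
        raise KeyError(f"no graded belief for {missing}")
    return " & ".join(props), min(beliefs[p] for p in props)

beliefs = {"a": 0.8, "b": 0.6}
formula, degree = conjoin_beliefs(beliefs, "a", "b")  # ("a & b", 0.6)
```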

  12. Note that agents are still under the influence of any instance even if they stop enacting the target role of this instance. Because of this, we have not defined an incoherence relationship between instances and beliefs that represent the fact that the agent is no longer playing the target role of instances.

  13. In particular, the conditional order prefers the prohibition instance to the permission instance iff

    $$\begin{aligned} \frac{\rho _{s}^{F}+\rho _{c}^{F}}{2} > \frac{\rho _{s}^{P}+\rho _{c}^{P}}{2} \end{aligned}$$

    otherwise the permission instance prevails.
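The comparison above can be transcribed directly as a sketch. The parameter roles (salience \(\rho_s\) and ease of compliance \(\rho_c\) for each instance) are taken from the formula; the function and parameter names are illustrative.

```python
def prefers_prohibition(sal_f, ease_f, sal_p, ease_p):
    """Conditional order of note 13: the prohibition instance prevails
    iff the mean of its salience and ease of compliance exceeds that of
    the permission instance; otherwise the permission instance prevails.
    """
    return (sal_f + ease_f) / 2 > (sal_p + ease_p) / 2

prefers_prohibition(0.9, 0.4, 0.6, 0.5)  # True: 0.65 > 0.55
```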

  14. In each run we generate a random number for each instance; the instance can be fulfilled when this number is greater than 1 minus its ease of compliance.
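This per-run sampling rule can be sketched as below. It is an illustrative reconstruction: the uniform draw on [0, 1) and the function name are assumptions, though they match the stated rule (so an instance with ease of compliance 0.8 is fulfillable in roughly 80% of runs).

```python
import random

def can_fulfil(ease_of_compliance, rng=random):
    """Per-run fulfilment test from note 14: draw a uniform random
    number and declare the instance fulfillable when the draw exceeds
    1 minus its ease of compliance."""
    return rng.random() > 1 - ease_of_compliance
```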

  15. Note that we only consider the runs in which coherence selects one instance (i.e., the inconsistency does not remain unresolved) and this instance cannot be fulfilled.

  16. Note that we only consider the runs in which both instances can be fulfilled and coherence selects the least salient instance (i.e., the inconsistency does not remain unresolved).

  17. It may be the case that the two norms were instantiated at two points in the past, when the webManager knew that its user was an academic and a university member. However, the webManager cannot determine in the current situation whether its user is still the target of the two instances.

  18. In particular, the conditional order prefers the prohibition instance to the permission instance iff

    $$\begin{aligned} \frac{\rho _{s}^{F}+\rho _{c}^{F}+\rho _{\textit{universityMember}}}{3} > \frac{\rho _{s}^{P}+\rho _{c}^{P}+\rho _{ academicStaff }}{3} \end{aligned}$$

    otherwise the permission instance prevails.

  19. Notice that the two addressing beliefs have a certainty of 1.

  20. Such semantics have been widely used in previous research on agents and norms, such as [31] and [27].

  21. The conditional order prefers the prohibition instance to the permission instance iff

    $$\begin{aligned} \frac{\rho _{s}^{F}+\rho _{c}^{F}}{2}> \frac{\rho _{s}^{P}+\rho _{c}^{P}+\rho _{ highTraffic (\textit{slow})}-\rho _{ lowTraffic (\textit{slow})}}{4} \end{aligned}$$

    otherwise the permission instance prevails.

  22. In particular, the conditional order prefers the prohibition instance to the permission instance iff

    $$\begin{aligned} \frac{\rho _{s}^{F}+\rho _{c}^{F}-\rho _{\textit{use}( fast )}}{3}> \frac{\rho _{s}^{P}+\rho _{c}^{P}}{2} \end{aligned}$$

    otherwise the permission instance prevails.

  23. We have considered alternative methods for calculating the conditional order (e.g., in the previous experiment about the activation and expiration beliefs we have also tried to calculate the conditional order as \(\frac{\rho _{s}^{F}+\rho _{c}^{F}+\rho _{ highTraffic (w)}-(1-\rho _{ lowTraffic (w)})}{4}\)), but these methods have also produced undesirable results.

  24. Note that since we are assuming that the norm, activation and expiration conditions are literals, the substitution for creating an instance from a norm is empty.

References

  1. Alchourrón, C. E., & Bulygin, E. (1971). Normative systems. Wien: Springer.


  2. Aphale, M., Norman, T., & Sensoy, M. (2013). Goal-directed policy conflict detection and prioritisation. In H. Aldewereld & J. S. Sichman (Eds.), Coordination, organizations, institutions, and norms in agent systems VIII (pp. 87–104). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.


  3. Bourdieu, P. (1998). Practical reason: On the theory of action. Stanford: Stanford University Press.


  4. Broersen, J., Dastani, M., Hulstijn, J., Huang, Z., & van der Torre, L. (2001). The BOID architecture: Conflicts between beliefs, obligations, intentions and desires. In Proceedings of the international conference on autonomous agents (pp. 9–16). New York: ACM Press.

  5. Campenní, M., Andrighetto, G., Cecconi, F., & Conte, R. (2009). Normal = normative? The role of intelligent agents in norm innovation. Mind & Society, 8(2), 153–172.


  6. Casali, A., Godo, L., & Sierra, C. (2011). A graded BDI agent model to represent and reason about preferences. Artificial Intelligence, 175(7–8), 1468–1478.


  7. Conte, R., & Dignum, F. (2001). From social monitoring to normative influence. Journal of Artificial Societies and Social Simulation, 4(2), 7.


  8. Criado, N., Argente, E., Noriega, P., & Botti, V. (2013). Human-inspired model for norm compliance decision making. Information Sciences, 245, 218–239.


  9. Criado, N., Argente, E., Noriega, P., & Botti, V. (2013). Manea: A distributed architecture for enforcing norms in open mas. Engineering Applications of Artificial Intelligence, 26(1), 76–95.


  10. Criado, N., Argente, E., Noriega, P., & Botti, V. (2013). Reasoning about constitutive norms in BDI agents. Logic Journal of IGPL, 22(1), 66–93.


  11. Criado, N., Argente, E., Noriega, P., & Botti, V. (2014). Reasoning about norms under uncertainty in dynamic environments. International Journal of Approximate Reasoning, 55(9), 2049–2070.


  12. Dignum, F., Kinny, D., & Sonenberg, L. (2002). From desires, obligations and norms to goals. Cognitive Science Quarterly, 2(3–4), 407–430.


  13. Dignum, F., Morley, D., Sonenberg, E. A., & Cavedon, L. (2000). Towards socially sophisticated BDI agents. In Proceedings of the fourth international conference on multiagent systems (pp. 111–118). New York: IEEE Press.

  14. Dignum, F. P. M. (1999). Autonomous agents with norms. Journal of Artificial Intelligence and Law, 7(1), 69–79.


  15. Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2), 321–357.


  16. Dung, P. M., & Sartor, G. (2011). The modular logic of private international law. Artificial Intelligence and Law, 19(2–3), 233–261.


  17. Epstein, J. M. (2001). Learning to be thoughtless: Social norms and individual computation. Computational Economics, 18(1), 9–24.


  18. Esteva, F., & Godo, L. (2001). Monoidal t-norm based logic: Towards a logic for left-continuous t-norms. Fuzzy Sets and Systems, 124(3), 271–288.


  19. Esteva, M., Rodríguez-Aguilar, J. A., Sierra, C., Garcia, P., & Arcos, J. L. (2001). On the formal specification of electronic institutions. In F. Dignum & C. Sierra (Eds.), Agent mediated electronic commerce (Vol. 1991, pp. 126–147). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.


  20. Fitting, M. (1996). First-order logic and automated theorem proving. New York: Springer.


  21. Gaertner, D. (2009). Argumentation and normative reasoning. Ph.D. thesis, Imperial College.

  22. Gottwald, S. (2014). Many-valued logic. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Stanford: Metaphysics Research Lab, Stanford University.


  23. Joseph, S., Sierra, C., Schorlemmer, M., & Dellunde, P. (2010). Deductive coherence and norm adoption. Logic Journal of the IGPL, 18, 118–156.


  24. King, T. C., Dignum, V., & van Riemsdijk, M. B. (2014). Re-checking normative system coherence. In T. Balke, F. Dignum, M. B. van Riemsdijk, & A. K. Chopra (Eds.), Coordination, organizations, institutions, and norms in agent systems IX (pp. 275–290). Lecture Notes in Computer Science. New York: Springer.


  25. Kollingbaum, M. J. (2005). Norm-governed practical reasoning agents. Ph.D. thesis, University of Aberdeen.

  26. Kollingbaum, M. J., & Norman, T. J. (2004). Strategies for resolving norm conflict in practical reasoning. In Proceedings of the ECAI workshop coordination in emergent agent societies (pp. 1–10).

  27. Kollingbaum, M. J., Norman, T. J., Preece, A., & Sleeman, D. (2007). Norm conflicts and inconsistencies in virtual organisations. In P. Noriega, J. Vázquez-Salceda, G. Boella, O. Boissier, V. Dignum, N. Fornara, & E. Matson (Eds.), Coordination, organizations, institutions, and norms in agent systems II (Vol. 4386, pp. 245–258). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.


  28. Leite, J., Alferes, J., & Pereira, L. (2001). Multi-dimensional dynamic knowledge representation. In T. Eiter, W. Faber, & M. Truszczyński (Eds.), Logic programming and nonmonotonic reasoning (Vol. 2173, pp. 365–378). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.


  29. Li, T., Balke, T., De Vos, M., Satoh, K., & Padget, J. (2013). Detecting conflicts in legal systems. In Y. Motomura, A. Butler, & D. Bekki (Eds.), New frontiers in artificial intelligence (Vol. 7856, pp. 174–189). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.


  30. López y López, F., Luck, M., & d’Inverno, M. (2006). A normative framework for agent-based systems. Computational & Mathematical Organization Theory, 12(2), 227–250.


  31. Meneguzzi, F., & Luck, M. (2009). Norm-based behaviour modification in BDI agents. In Proceedings of the international conference on autonomous agents and multiagent systems (pp. 177–184).

  32. Modgil, S., & Luck, M. (2009). Argumentation based resolution of conflicts between desires and normative goals. In I. Rahwan & P. Moraitis (Eds.), Argumentation in multi-agent systems (Vol. 5384, pp. 19–36). Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.


  33. Moses, Y., & Tennenholtz, M. (1995). Artificial social systems. Computers and Artificial Intelligence, 14(6), 533–562.


  34. Oren, N., Luck, M., Miles, S., & Norman, T. J. (2008). An argumentation inspired heuristic for resolving normative conflict. In Proceedings of the workshop on coordination, organizations, institutions, and norms in agent systems (pp. 41–56).

  35. Oren, N., Panagiotidi, S., Vázquez-Salceda, J., Modgil, S., Luck, M., & Miles, S. (2009). Towards a formalisation of electronic contracting environments. In Proceedings of the coordination, organizations, institutions and norms in agent systems IV (pp. 156–171).

  36. Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge: Cambridge University Press.


  37. Simon, H. A. (1982). Models of bounded rationality: Empirically grounded economic reason (Vol. 3). Cambridge: MIT Press.


  38. Singh, M. P. (1999). An ontology for commitments in multiagent systems. Journal of Artificial Intelligence and Law, 7(1), 97–113.


  39. Thagard, P. (2002). Coherence in thought and action. Cambridge: The MIT Press.


  40. Thagard, P., & Verbeurgt, K. (1998). Coherence as constraint satisfaction. Cognitive Science, 22(1), 1–24.


  41. Vasconcelos, W. W., Kollingbaum, M. J., & Norman, T. J. (2009). Normative conflict resolution in multi-agent systems. Autonomous Agents and Multi-Agent Systems, 19(2), 124–152.


  42. Villatoro, D., Andrighetto, G., Sabater-Mir, J., & Conte, R. (2011). Dynamic sanctioning for robust and cost-efficient norm compliance. In Proceedings of the IJCAI (Vol. 11, pp. 414–419).

  43. von Wright, G. H. (1963). Norm and action: A logical enquiry. London: Routledge & Kegan Paul.



Author information


Corresponding author

Correspondence to Natalia Criado.


Cite this article

Criado, N., Black, E. & Luck, M. A coherence maximisation process for solving normative inconsistencies. Auton Agent Multi-Agent Syst 30, 640–680 (2016). https://doi.org/10.1007/s10458-015-9300-x
