A General Framework for Reasoning On Inconsistency
SpringerBriefs in Computer Science
Maria Vanina Martinez
Department of Computer Science, University of Oxford, Oxford, UK

Cristian Molinaro
Dipartimento di Elettronica, Università della Calabria, Rende, Italy
Some of the authors of this monograph may have been funded in part by AFOSR grant FA95500610405, ARO grants W911NF0910206, W911NF1160215, W911NF1110344, an ARO/Penn State MURI award, and ONR grant N000140910685. Maria Vanina Martinez is currently partially supported by the Engineering and Physical Sciences Research Council of the United Kingdom (EPSRC) grant EP/J008346/1 (“PrOQAW: Probabilistic Ontological Query Answering on the Web”) and by a Google Research Award. Finally, the authors would also like to thank the anonymous reviewers who provided valuable feedback on earlier versions of this work; their comments and suggestions helped improve this manuscript.
Chapter 1
Introduction and Preliminary Concepts
1.1 Motivation
Inconsistency management has been intensely studied in various parts of AI, often
in slightly disguised form (Gärdenfors 1988; Pinkas and Loui 1992; Poole 1985;
Rescher and Manor 1970). For example, default logics (Reiter 1980) use syntax to
distinguish between strict facts and default rules, and identify different extensions
of the default logic as potential ways of “making sense” of seemingly conflicting
information. Likewise, inheritance networks (Touretzky 1984) define extensions
based on analyzing paths in the network and using notions of specificity to resolve
conflicts. Argumentation frameworks (Dung 1995) study different ways in which
an argument for or against a proposition can be made, and then determine which
arguments defeat which other arguments in an effort to decide what can be rea-
sonably concluded. All these excellent works provide an a priori conflict resolution
mechanism. A user who uses a system based on these papers is forced to use the semantics implemented in the system, and has little say in the matter (besides, most users querying KBs are unlikely to be experts even in classical logic, let alone default logics and argumentation methods).
Clearly, the theory is inconsistent. Nevertheless, there might be several options that
a user might consider to handle it: for instance, one might replace the first rule with
1.2 Tarski's Abstract Logic
Alfred Tarski (1956) defines an abstract logic as a pair (L, CN) where the members of L are called well-formed formulas, and CN is a consequence operator. CN is any function from 2^L (the powerset of L) to 2^L that satisfies the following axioms (here X is a subset of L):

1. X ⊆ CN(X) (Expansion)
2. CN(CN(X)) = CN(X) (Idempotence)
3. CN(X) = ⋃_{Y ⊆_f X} CN(Y) (Finiteness)
4. CN({x}) = L for some x ∈ L (Absurdity)

Notation: Y ⊆_f X means that Y is a finite subset of X.
Intuitively, CN(X) returns the set of formulas that are logical consequences of X
according to the logic in question. It can be easily shown from the above axioms that
CN is a closure operator, that is, for any X, X′, X″ ⊆ L, CN enjoys the following properties:

5. X ⊆ X′ ⇒ CN(X) ⊆ CN(X′) (Monotonicity)
6. CN(X) ∪ CN(X′) ⊆ CN(X ∪ X′)
7. CN(X) = CN(X′) ⇒ CN(X ∪ X″) = CN(X′ ∪ X″)
Most well-known monotonic logics (such as first order logic (Shoenfield 1967),
propositional logic (Shoenfield 1967), modal logic, temporal logic, fuzzy logic,
probabilistic logic (Bacchus 1990), etc.) are special cases of Tarski’s notion of an
abstract logic. AI introduced non-monotonic logics (Bobrow 1980) which do not
satisfy the monotonicity property.
Once (L , CN) is fixed, a notion of consistency arises as follows.
Definition 1.1 (Consistency). Let X ⊆ L. X is consistent w.r.t. the logic (L, CN) iff CN(X) ≠ L. It is inconsistent otherwise.
The previous definition says that X is consistent iff its set of consequences is not the
set of all formulas. For any abstract logic (L , CN), we also require the following
axiom to be satisfied:
8. CN(∅) ≠ L (Coherence)

The coherence requirement (absent from Tarski's original proposal, but added here to avoid considering trivial systems) forces the empty set ∅ to always be consistent; this makes sense for any reasonable logic, as the empty set of formulas should intuitively be consistent.
It can be easily verified that if a set X ⊆ L is consistent, then its closure under CN is consistent, as is any subset of X. Moreover, if X is inconsistent, then every superset of X is inconsistent.
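To ground these abstract definitions, the sketch below instantiates (L, CN) for propositional Horn clauses; the clause encoding and function names are ours, purely for illustration. CN is realized (restricted to atomic consequences) by forward chaining, and consistency is the test that no constraint body becomes derivable.

```python
# A minimal instance of Tarski's (L, CN) for propositional Horn clauses
# (encoding and names ours, for illustration). A rule is a pair
# (head, body): head is an atom, or None for a constraint "false <- body".

def cn_atoms(rules, facts=frozenset()):
    """Atomic part of CN(X): forward chaining to a fixpoint."""
    derived, changed = set(facts), True
    while changed:
        changed = False
        for head, body in rules:
            if head is not None and head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

def consistent(rules, facts=frozenset()):
    """CN(X) != L iff no constraint's body is derivable (Horn case)."""
    derived = cn_atoms(rules, facts)
    return all(not (body <= derived) for head, body in rules if head is None)

# {a, b <- a, false <- b} derives {a, b} and violates the constraint:
kb = [("a", frozenset()), ("b", frozenset({"a"})), (None, frozenset({"b"}))]
assert cn_atoms(kb) == {"a", "b"} and not consistent(kb)
```

It is easy to check that this closure satisfies Expansion, Idempotence, and Monotonicity on sets of facts, which is what makes it a Tarskian consequence operator.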
Chapter 2
A General Framework for Handling Inconsistency
This chapter proposes a general framework for handling inconsistency under any monotonic logic. Basically, our approach to reasoning with an inconsistent knowledge base (KB) proceeds in three steps:
1. Determining consistent “subbases”;
2. Selecting among all the subbases the ones that are preferred;
3. Applying entailment on the preferred subbases.
Throughout the rest of this chapter, we assume that we have an arbitrary, but fixed
monotonic logic (L , CN).
The basic idea behind our framework is to construct what we call options, and
then to define a preference relation on these options. The preferred options are in-
tended to support the conclusions to be drawn from the inconsistent knowledge
base. Intuitively, an option is a set of formulas that is both consistent and closed
w.r.t. consequence in logic (L , CN).
Definition 2.1 (Options). An option is any set O of elements of L such that:
• O is consistent.
• O is closed, i.e., O = CN(O).
We use Opt(L ) to denote the set of all options that can be built from (L , CN).
Note that the empty set is not necessarily an option. This depends on the value of CN(∅) in the considered logic (L, CN). For instance, in propositional logic, it is clear that ∅ is not an option, since all the tautologies will be inferred from it. Indeed, it is easy to see that ∅ is an option iff CN(∅) = ∅.
Clearly, for each consistent subset X of L, it holds that CN(X) is an option (as CN(X) is consistent, and the Idempotence axiom entails that CN(X) is closed). Since we are considering a generic logic, we can show that options do not always exist.
Proposition 2.1. Opt(L) = ∅ iff (i) for every ψ ∈ L, CN({ψ}) is inconsistent, and (ii) CN(∅) ≠ ∅.

Proof. (⇒) Let us assume that Opt(L) = ∅. Let us also assume by contradiction that ∃ψ ∈ L such that CN({ψ}) is consistent. Since CN({ψ}) is closed by the Idempotence axiom, CN({ψ}) is an option, which is a contradiction. Assume now that CN(∅) = ∅. This means that ∅ is an option, since it is closed and consistent (CN(∅) ≠ L), which is a contradiction.
(⇐) Let us assume that (i) ∀ψ ∈ L, CN({ψ}) is inconsistent, and (ii) CN(∅) ≠ ∅. Assume also by contradiction that Opt(L) ≠ ∅ and let O ∈ Opt(L). There are two cases:
Case 1: O = ∅. Consequently, CN(∅) = ∅, which contradicts assumption (ii).
Case 2: O ≠ ∅. Since O is consistent, ∃ψ ∈ O s.t. {ψ} is consistent and thus CN({ψ}) is consistent. This contradicts assumption (i).
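As a small illustration of the two option conditions, the sketch below reuses cn_atoms and consistent from the earlier Horn sketch, with a rule base of our own choosing standing in for the logic.

```python
# Checking Definition 2.1 with the Horn sketch above: fix a rule base R
# (our stand-in for the logic) and let CN_R(X) be the atoms derivable
# from R and X. An option is a consistent, CN_R-closed set of atoms.

RULES = [("b", frozenset({"a"}))]            # R contains just: b <- a

def is_option(atoms):
    O = frozenset(atoms)
    closed = cn_atoms(RULES, O) == set(O)    # O = CN(O)?
    return closed and consistent(RULES, O)   # and O consistent?

assert not is_option({"a"})                  # not closed: CN adds b
assert is_option({"a", "b"})                 # closed and consistent
```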
So far, we have defined the concept of option for any logic (L , CN) in a way that
is independent of a knowledge base. We now show how to associate a set of options
with an inconsistent knowledge base.
In most approaches for handling inconsistency, the maximal consistent subsets
of a given inconsistent knowledge base have an important role. This may induce
one to think of determining the options of a knowledge base as the closure of its
maximal consistent subsets. However, this approach has the side effect of dropping
entire formulas, whereas more fine-grained approaches could be adopted in order
to preserve more information of the original knowledge base. This is shown in the
following example.
Example 2.1. Consider the propositional knowledge base K = {(a ∧ b); ¬b}. There
are two maximal consistent subsets, namely MCS1 = {a ∧ b} and MCS2 = {¬b}.
However, one could argue that MCS2 is too weak, since we could have included a
by “weakening” the formula (a ∧ b) instead of dropping it altogether.
The “maximal consistent subset” approach, as well as the one suggested in the
previous example, can be seen as a particular case of a more general approach,
where one considers consistent “relaxations” (or weakenings) of a given inconsis-
tent knowledge base. The ways in which such weakenings are determined might be
different, as the following examples show.
Example 2.2. Consider again the temporal knowledge base of Example 1.2. An intuitive way to “weaken” the knowledge base might consist of replacing the ◯ (next moment in time) connective with the ♦ (sometime in the future) connective. So, for instance, ◯processed ← received might be replaced by ♦processed ← received, thus saying that if received is true at time t, then processed is true at some subsequent time t′ ≥ t (not necessarily at time t + 1). This would lead to a consistent knowledge base, whose closure is clearly an option. Likewise, we might weaken only (1.6), obtaining another consistent knowledge base whose closure is an option.
Example 2.3. Consider the probabilistic knowledge base of Example 1.3. A reasonable way to make a probabilistic formula φ : [ℓ, u] weaker might be to replace it with another formula φ : [ℓ′, u′] where [ℓ, u] ⊆ [ℓ′, u′].
The preceding examples suggest that a flexible way to determine the options of
a given knowledge base should be provided, since what is considered reasonable to
be an option might depend on the logic and the application domain at hand, and,
more importantly, it should depend on the user’s preferences. The basic idea is to
consider weakenings of a given knowledge base K whose closures yield options.
For instance, as said before, weakenings might be subsets of the knowledge base.
Although such a weakening mechanism is general enough to be applicable to many
logics, more tailored mechanisms could be defined for specific logics. For instance,
the two reasonable approaches illustrated in Examples 2.2 and 2.3 above cannot be
captured by considering subsets of the original knowledge bases; as another exam-
ple, let us reconsider Example 2.1: by looking at subsets of the knowledge base, it
is not possible to get an option containing both a and ¬b. We formally introduce the
notion of weakening as follows.
Example 2.4. Consider again the knowledge base of Example 2.1 and let Wall be the
adopted weakening mechanism. Our framework is flexible enough to allow the set
CN({a, ¬b}) to be an option for K . This weakening mechanism preserves more in-
formation from the original knowledge base than the classical “maximal consistent
subsets” approach.
In Chap. 4 we will consider specific monotonic logics and show more tailored
weakening mechanisms.
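The pool-of-weakenings idea behind Example 2.4 can be sketched concretely; the consequence sets and the conflict list below are hand-coded for this tiny knowledge base and are our illustration, not the book's formal definition.

```python
from itertools import combinations

# Hand-listed single-formula consequences (illustration only):
WEAKENINGS = {"a&b": {"a&b", "a", "b"}, "~b": {"~b"}}
# Minimal inconsistent combinations among pool members:
CONFLICTS = [{"b", "~b"}, {"a&b", "~b"}]

def is_consistent(S):
    return not any(c <= S for c in CONFLICTS)

pool = set().union(*WEAKENINGS.values())       # the pool of weakenings
cands = [set(c) for r in range(len(pool), 0, -1)
         for c in combinations(sorted(pool), r) if is_consistent(set(c))]
maximal = [S for S in cands if not any(S < T for T in cands)]
# maximal contains {"a", "b", "a&b"} and {"a", "~b"}: the second keeps a
# even though a&b is dropped, which is exactly the gain of Example 2.4.
```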
The framework for reasoning about inconsistency has three components: the set
of all options for a given knowledge base, a preference relation between options,
and an inference mechanism.
Definition 2.6 (General framework). A general framework for reasoning about inconsistency in a knowledge base K is a triple ⟨Opt(K, W), ⪰, |∼⟩ such that:
• Opt(K, W) is the set of options for K w.r.t. the weakening mechanism W.
• ⪰ ⊆ Opt(K, W) × Opt(K, W). ⪰ is a partial (or total) preorder (i.e., it is reflexive and transitive).
• |∼ : 2^Opt(K,W) → Opt(L).
The second important concept of the general framework above is the preference relation ⪰ among options. Indeed, O1 ⪰ O2 means that the option O1 is at least as preferred as O2. This relation captures the idea that some options are better than others because, for instance, the user has decided that this is the case, or because those preferred options satisfy the requirements imposed by the developer of a conflict management system. For instance, in Example 1.1, the user chooses certain options (e.g., the options where the salary is minimal or where the salary is maximal, based on his needs). From the partial preorder ⪰ we can derive the strict partial order ≻ (i.e., it is irreflexive and transitive) over Opt(K, W) as follows: for any O1, O2 ∈ Opt(K, W) we say O1 ≻ O2 iff O1 ⪰ O2 and O2 ⋡ O1. Intuitively, O1 ≻ O2 means that O1 is strictly preferable to O2. The set of preferred options in Opt(K, W) determined by ⪰ is Opt⪰(K, W) = {O | O ∈ Opt(K, W) ∧ ∄O′ ∈ Opt(K, W) with O′ ≻ O}. Whenever W is clear from the context, we simply write Opt⪰(K) instead of Opt⪰(K, W).
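The preferred-option construction is mechanical once ⪰ is given as a black box; the following sketch (names ours) derives ≻ and Opt⪰ exactly as in the definitions above.

```python
# Deriving the strict order and the preferred options from a preorder,
# given as a function geq(o1, o2) meaning "o1 is at least as preferred
# as o2" (names are ours).

def strictly_better(geq, o1, o2):
    """o1 is strictly preferable: o1 >= o2 holds but o2 >= o1 does not."""
    return geq(o1, o2) and not geq(o2, o1)

def preferred(options, geq):
    """The preferred options: those with no strictly better competitor."""
    return [o for o in options
            if not any(strictly_better(geq, o2, o) for o2 in options)]
```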
In the following three examples, we come back to the example theories of Chap. 1
to show how our framework can handle them.
Example 2.5. Let us consider again the knowledge base S of Example 1.1. Consider
the options O1 = CN({(1.1), (1.3)}), O2 = CN({(1.1), (1.2)}), O3 = CN({(1.2),
(1.3)}), and let us say that these three options are strictly preferable to all other
options for S; then, we have to determine the preferred options among these three.
Different criteria might be used to determine the preferred options:
• Suppose the score sc(Oi ) of option Oi is the sum of the elements in the multiset
{S | salary(John, S) ∈ Oi }. In this case, the score of O1 is 50K, that of O2 is 110K,
and that of O3 is 60K. We could now say that Oi ⪰ Oj iff sc(Oi) ≤ sc(Oj). In this
case, the only preferred option is O1 , which corresponds to the bank manager’s
viewpoint.
• On the other hand, suppose we say that Oi ⪰ Oj iff sc(Oi) ≥ sc(Oj). In this case,
the only preferred option is O2 ; this corresponds to the view that the rule saying
everyone has only one salary is wrong (perhaps the database has John being paid
out of two projects simultaneously and 50K of his salary is charged to one project
and 60K to another).
• Now consider the case where we change our scoring method and say that
sc(Oi ) = min{S | salary(John, S) ∈ Oi }. In this case, sc(O1 ) = 50K, sc(O2 ) =
50K, sc(O3) = 60K. Let us suppose that the preference relation says that Oi ⪰ Oj iff sc(Oi) ≥ sc(Oj). Then, the only preferred option is O3, which corresponds exactly to the tax agency's viewpoint.
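For concreteness, the three criteria can be run through the preferred function from the previous sketch; the salary multisets below are read off O1, O2, O3 as described in the example.

```python
# Example 2.5's three criteria, run with preferred() from the sketch
# above; the salary multisets are read off O1, O2, O3.

salaries = {"O1": [50_000], "O2": [50_000, 60_000], "O3": [60_000]}
opts = list(salaries)

crit_sum_low  = lambda o1, o2: sum(salaries[o1]) <= sum(salaries[o2])
crit_sum_high = lambda o1, o2: sum(salaries[o1]) >= sum(salaries[o2])
crit_min_high = lambda o1, o2: min(salaries[o1]) >= min(salaries[o2])

assert preferred(opts, crit_sum_low)  == ["O1"]  # bank manager's view
assert preferred(opts, crit_sum_high) == ["O2"]  # two concurrent salaries
assert preferred(opts, crit_min_high) == ["O3"]  # tax agency's view
```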
Example 2.6. Let us consider the temporal logic theory T of Example 1.2. We
may choose to consider just three options for determining the preferred ones:
O1 = CN({(1.4), (1.5)}), O2 = CN({(1.4), (1.6)}), O3 = CN({(1.5), (1.6)}). Sup-
pose now that we can associate a numeric score with each formula in T , describing
the reliability of the source that provided the formula. Let us say these scores are 3,
1, and 2 for formulas (1.4), (1.5) and (1.6), respectively, and the weight of an option
Oi is the sum of the scores of the formulas in T ∩ Oi. We might say Oi ⪰ Oj iff the
score of Oi is greater than or equal to the score of O j . In this case, the only preferred
option is O2 .
Example 2.7. Consider the probabilistic logic theory P of Example 1.3. Suppose that in order to determine the preferred options, we consider only options that assign a single non-empty probability interval to p, namely options of the form CN({p : [ℓ, u]}). For two atoms A1 = p : [ℓ1, u1] and A2 = p : [ℓ2, u2], let diff(A1, A2) = abs(ℓ1 − ℓ2) + abs(u1 − u2). Let us say that the score of an option O = CN({A}), denoted by score(O), is given by ∑_{A′∈P} diff(A, A′). Suppose we say that Oi ⪰ Oj iff score(Oi) ≤ score(Oj). Intuitively, this means that we are preferring options that change the lower and upper bounds in P as little as possible. In this case, CN({p : [0.41, 0.43]}) is a preferred option.
Thus, we see that our general framework for managing inconsistency is very
powerful – it can be used to handle inconsistencies in different ways based upon
how the preference relation between options is defined. In Chap. 4, we will consider
more logics and illustrate more examples showing how the proposed framework is
suitable for handling inconsistency in a flexible way.
The following definition introduces a preference criterion where an option is preferable to another if and only if the latter is a weakening of the former.

The following corollary states that ⪰W is indeed a preorder (in particular, a partial order).

Corollary 2.1. Consider a knowledge base K and a weakening mechanism W. ⪰W is a partial order over Opt(K, W).

If the user's preferences are expressed according to ⪰W, then the preferred options are the least weak or, in other words, in view of Proposition 2.2, they are the maximal ones under set inclusion.
The third component of the framework is a mechanism for selecting the infer-
ences to be drawn from the knowledge base. In our framework, the set of inferences
is itself an option. Thus, it should be consistent. This requirement is of great im-
portance, since it ensures that the framework delivers safe conclusions. Note that
this inference mechanism returns an option of the language from the set of options
for a given knowledge base. The set of inferences is generally computed from the
preferred options. Different mechanisms can be defined for selecting the inferences
to be drawn. Here is an example of such a mechanism.
We can show that the set of inferences made using the universal criterion is itself
an option of K , and thus the universal criterion is a valid mechanism of inference.
Moreover, it is included in every preferred option.
is not a valid inference mechanism, since the set of consequences returned by it may be inconsistent and thus is not an option.
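A minimal sketch of the universal criterion discussed above (encoding ours): infer exactly what lies in every preferred option. As stated above, in a monotonic logic the intersection of closed, consistent sets is again closed and consistent, so the result is itself an option.

```python
# Universal criterion (encoding ours): the formulas common to all
# preferred options. Intersecting closed, consistent sets yields a
# closed, consistent set, i.e., an option.

def universal_consequences(preferred_options):
    opts = [set(o) for o in preferred_options]
    return set.intersection(*opts) if opts else set()
```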
Chapter 3
Algorithms
In this chapter, we present general algorithms for computing the preferred options for a given knowledge base. Throughout this chapter, we assume that CN(K) is finite for any knowledge base K. The preferred options could be naively computed as follows.
procedure CPO-Naive(K, W, ⪰)
1. Let X = {CN(K′) | K′ ∈ W(K) ∧ K′ is consistent}
2. Return any O ∈ X s.t. there is no O′ ∈ X s.t. O′ ≻ O
Clearly, X is the set of options for K. Among them, the algorithm chooses the preferred ones according to ⪰. Note that CPO-Naive, as well as the other algorithms
we present in the following, relies on the CN operator, which makes the algorithm
independent of the underlying logic; in order to apply the algorithm to a specific
logic it suffices to provide the definition of CN for that logic. One reason for the
inefficiency of CPO-Naive is that it makes no assumptions about the weakening
mechanism and the preference relation.
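CPO-Naive transcribes directly; in the sketch below (encoding ours), the logic-specific pieces (the weakening enumerator W, the closure cn, and the consistency test) are passed in as functions, mirroring the logic-independence noted above.

```python
# CPO-Naive, transcribed directly (encoding ours). Supplying W, cn,
# is_consistent, and the preorder geq per logic is what keeps the
# procedure logic-independent.

def cpo_naive(K, W, cn, is_consistent, geq):
    X = [frozenset(cn(Kp)) for Kp in W(K) if is_consistent(Kp)]   # step 1
    strictly = lambda a, b: geq(a, b) and not geq(b, a)
    return [O for O in X if not any(strictly(Op, O) for Op in X)]  # step 2
```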
The next theorem identifies the set of preferred options for a given knowledge base when Wall and ⪰W are the weakening mechanism and the preference relation, respectively.
Theorem 3.1. Consider a knowledge base K. Let Wall and ⪰W be the weakening mechanism and preference relation, respectively, that are used. Let Φ = ⋃_{ψ∈K} weakening(ψ). Then, the set of preferred options for K is equal to PO = {CN(S) | S is a maximal (under ⊆) consistent subset of Φ}.
Example 3.1. Consider again the knowledge base K = {(a ∧ b); ¬b} of
Example 2.1. We have that Φ = CN({a ∧ b}) ∪ CN({¬b}). Thus, it is easy to
see that a preferred option for K is CN({a, ¬b}) (note that a ∈ Φ since a ∈
CN({a ∧ b})).
1 Recall that a Horn clause is a disjunction of literals containing at most one positive literal.
Note that Theorem 3.1 entails that if both computing CN and consistency
checking can be done in polynomial time, then one preferred option can be computed
in polynomial time. For instance, this is the case for propositional Horn knowledge
bases (see Chap. 4). Furthermore, observe that Theorem 3.1 also holds when ⊇ is the preference relation, simply because ⊇ coincides with ⪰W (see Proposition 2.2).
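The polynomial-time claim can be made concrete with a greedy sketch (ours): build a maximal consistent subset of Φ with one consistency check per formula, then close it, in the spirit of Corollary 3.1 discussed below.

```python
# One preferred option in polynomial time (sketch, ours): greedily grow
# a maximal consistent subset of Φ, then close it. If cn and
# is_consistent are polynomial (as for propositional Horn knowledge
# bases), so is the whole procedure.

def one_preferred_option(phi, cn, is_consistent):
    S = set()
    for psi in phi:                  # any fixed enumeration of Φ
        if is_consistent(S | {psi}):
            S.add(psi)               # keep psi only if it preserves consistency
    return cn(S)                     # S is now maximal consistent in Φ
```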
Let us now consider the case where W⊆ and ⊇ are the adopted weakening
mechanism and preference relation, respectively.
The following corollary identifies the set of preferred options for a knowledge
base when the weakening mechanism and the preference relation are W⊆ and ⊇,
respectively.
The preceding corollary provides a way to compute the preferred options: first
the maximal consistent subsets of K are computed, then CN is applied to them.
Clearly, such an algorithm avoids the computation of every option. Note that this
corollary entails that if both computing CN and consistency checking can be done
in polynomial time, then one preferred option can be computed in polynomial time.
Moreover, observe that both the corollary above and Theorem 3.2 also hold in the case where the adopted preference criterion is ⪰W, because ⊇ coincides with ⪰W (see Proposition 2.2).
We now consider the case where different assumptions on the preference rela-
tion are made. The algorithms below are independent of the weakening mechanism
that we choose to use. For the sake of simplicity, we will use Opt(K ) instead of
Opt(K , W ) to denote the set of options for a knowledge base K .
Definition 3.1. A preference relation ⪰ is said to be monotonic iff for any X, Y ⊆ L, if X ⊆ Y, then Y ⪰ X. ⪰ is said to be anti-monotonic iff for any X, Y ⊆ L, if X ⊆ Y, then X ⪰ Y.
Definition 3.2. Let K be a knowledge base and O an option for K . We define the
set of minimal expansions of O as follows:
Clearly, the way exp(O) is computed depends on the adopted weakening mechanism. In the following algorithm, the preference relation ⪰ is assumed to be anti-monotonic.
procedure CPO-Anti(K, ⪰)
1. S0 = {O | O is a minimal (under ⊆) option for K}
2. Construct a maximal sequence S1, . . . , Sn s.t. Si ≠ ∅, where
   Si = {O | O ∈ exp(Si−1) ∧ ∄O′ ∈ S0 (O′ ⊂ O ∧ O ⋡ O′)}, 1 ≤ i ≤ n
3. S = ⋃_{i=0}^{n} Si
4. Return the ⪰-preferred options in S
Formulas ψ1 and ψ2 state that employees Mark and Claude checked in for work at 9 AM and 8 AM, respectively. However, formula ψ3 records that employee Mark checked in for work at 10 AM that day. Furthermore, as it is not possible for a person to check in for work at different times on the same day, we also have formula ψ4, which is the instantiation of that constraint for employee Mark.
Assume that each formula ψi has an associated non-negative weight wi ∈ [0, 1]
corresponding to the likelihood of the formula being wrong, and suppose those
weights are w1 = 0.2, w2 = 0, w3 = 0.1, and w4 = 0. Suppose that the weight of
an option O is w(O) = ∑_{ψi∈K∩O} wi. Let W⊆ be the weakening mechanism used, and consider the preference relation ⪰ defined as follows: Oi ⪰ Oj iff w(Oi) ≤ w(Oj).
Clearly, the preference relation is anti-monotonic. Algorithm CPO-Anti first computes S0 = {O0 = CN(∅)}. It then looks for the minimal expansions of O0 which are preferable to O0. In this case, we have O1 = CN({ψ2}) and O2 = CN({ψ4}); hence, S1 = {O1, O2}. Note that neither CN({ψ1}) nor CN({ψ3}) is preferable to O0, and thus they can be discarded because O0 turns out to be strictly preferable to them. The algorithm then looks for the minimal expansions of some option in S1 which are preferable to O0; the only one is O3 = CN({ψ2, ψ4}), so S2 = {O3}. It is easy to see that S3 is empty and thus the algorithm returns the preferred options from those in S0 ∪ S1 ∪ S2, which are O0, O1, O2, and O3. Note that the algorithm avoided the computation of every option for K.
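The loop structure of CPO-Anti can be sketched as follows (encoding ours; options are finite frozensets, expansions supplies the weakening-specific minimal expansions of a layer, and geq is the anti-monotonic preorder).

```python
# Sketch of the CPO-Anti loop (encoding ours).

def cpo_anti(minimal_options, expansions, geq):
    strictly = lambda a, b: geq(a, b) and not geq(b, a)
    s0 = list(minimal_options)
    layers, frontier = [s0], s0
    while frontier:
        # Keep an expansion O unless some minimal option strictly below
        # it (w.r.t. set inclusion) is not dominated by O; by
        # anti-monotonicity that option is then strictly preferred to O.
        frontier = [O for O in expansions(frontier)
                    if not any(Op < O and not geq(O, Op) for Op in s0)]
        layers.append(frontier)
    S = [O for layer in layers for O in layer]
    return [O for O in S if not any(strictly(Op, O) for Op in S)]
```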
We now show the correctness of the algorithm.
Theorem 3.3. Let K be a knowledge base and an anti-monotonic preference
relation. Then,
procedure CPO-Monotonic(K, ⪰)
1. S0 = {O | O is a maximal (under ⊆) option for K};
2. Construct a maximal sequence S1, . . . , Sn s.t. Si ≠ ∅, where
   Si = {O | O ∈ contr(Si−1) ∧ ∄O′ ∈ S0 (O ⊂ O′ ∧ O ⋡ O′)}, 1 ≤ i ≤ n
3. S = ⋃_{i=0}^{n} Si
4. Return the ⪰-preferred options in S.
Clearly, the algorithm always terminates, since each option in Si is a proper subset of some option in Si−1. The algorithm exploits the monotonicity of ⪰ to reduce the set of options from which the preferred ones are determined. The algorithm first computes the maximal (under ⊆) options for K. It then computes smaller and smaller options, and the monotonicity of ⪰ is used to discard those options that are not preferred for sure: when Si is computed, we consider every minimal contraction O of some option in Si−1; if O is a proper subset of an option O′ ∈ S0 and O ⋡ O′, then O can be discarded, since O′ ⪰ O by the monotonicity of ⪰ and therefore O′ ≻ O. Note that any option that is a subset of O will be discarded as well.
Observe that in the worst case the algorithm has to compute every option for K (e.g., when O1 ⪰ O2 for any O1, O2 ∈ Opt(K), as in this case every option is preferred).
It is worth noting that when the adopted weakening mechanism is Wall, the first step of the algorithm can be implemented by applying Theorem 3.1, since it identifies the options which are maximal under set inclusion (recall that ⪰W coincides with ⊇, see Proposition 2.2). Likewise, when the weakening mechanism is W⊆, the first step of the algorithm can be accomplished by applying Corollary 3.1.
Example 3.4. Consider again the knowledge base of Example 3.3. Suppose now that each formula ψi has an associated non-negative weight wi ∈ [0, 1] corresponding to the reliability of the formula, and let those weights be w1 = 0.1, w2 = 1, w3 = 0.2, and w4 = 1. Once again, the weight of an option O is w(O) = ∑_{ψi∈K∩O} wi. Let W⊆ be the weakening mechanism, and consider the preference relation ⪰ defined as follows: Oi ⪰ Oj iff w(Oi) ≥ w(Oj). Clearly, the preference relation is monotonic. Algorithm CPO-Monotonic first computes the maximal options, i.e., S0 = {O1 = CN({ψ2, ψ3, ψ4}), O2 = CN({ψ1, ψ2, ψ4}), O3 = CN({ψ1, ψ2, ψ3})}. After that, the algorithm looks for a minimal contraction O of some option in S0 s.t. there is no superset O′ ∈ S0 of O s.t. O ⋡ O′. It is easy to see that in this case there is no option that satisfies this property, i.e., S1 = ∅. Thus, the algorithm returns the preferred options in S0, namely O1. Note that the algorithm avoided the computation of every option for K.
Proof. Let S be the set of options for K computed by the algorithm. First of all, we show that for any option O ∈ Opt(K) − S, there exists an option O′ ∈ S s.t. O′ ≻ O. Suppose by contradiction that there is an option O ∈ Opt(K) − S s.t. there does not exist an option O′ ∈ S s.t. O′ ≻ O. Since O ∉ S0, O is not a maximal option for K. Hence, there exist an option O0 ∈ S0 and n ≥ 0 options
Proof. Corollary 3.1 entails that a preferred option can be computed by finding
a maximal consistent subset K of K and then computing CN(K ). Since both
checking consistency and computing consequences can be accomplished in polyno-
mial time (Papadimitriou 1994), the overall computation is in polynomial time.
1 Note that a definite clause is a Horn clause where exactly one Li is positive. It is well known that any set of definite clauses is always consistent.
whose cardinality is 2^n (W⊆ and ⊇ are, respectively, the weakening mechanism and preference relation used).
Proposition 4.2. Let K and ψ be a propositional Horn knowledge base and clause, respectively. Let W⊆ and ⊇ be the weakening mechanism and preference relation, respectively. The problem of deciding whether ψ is a universal consequence of K is coNP-complete.
Proof. It follows directly from Corollary 3.1 and the result that can be found in
Cayrol and Lagasquie-Schiex (1994) stating that the problem of deciding whether a
propositional Horn formula is a consequence of every maximal consistent subset of
a Horn knowledge base is coNP-complete.
Note that when the weakening mechanism and the preference relation are Wall and ⪰W, respectively, both the set of options and the set of preferred options do not differ from those obtained when W⊆ and ⊇ are considered. In fact, since weakening(ψ) = {ψ} for any propositional Horn formula ψ, W⊆ and Wall are the same. Proposition 2.2 states that ⊇ and ⪰W coincide. Thus, the previous results trivially extend to the case where Wall and ⪰W are considered.
Corollary 4.1. Consider a propositional Horn knowledge base K. Let Wall and ⪰W be the weakening mechanism and preference relation, respectively, that are used. A preferred option for K can be computed in polynomial time.

Corollary 4.2. Let K and ψ be a propositional Horn knowledge base and clause, respectively. Let Wall and ⪰W be the weakening mechanism and preference relation, respectively, that are used. The problem of deciding whether ψ is a universal consequence of K is coNP-complete.
The first formula in K says that John’s position is X with a probability between 0.6
and 0.7. The second formula states that John is located either in position X or in
position Y with a probability between 0.3 and 0.5. The knowledge base above is
inconsistent: since every world in which the first formula is true satisfies the second
formula as well, the probability of the latter has to be greater than or equal to the
probability of the former.
Example 4.3. Consider again the probabilistic knowledge base K of Example 4.2. The weakenings of K determined by WP are of the form:

where [0.6, 0.7] ⊆ [ℓ1, u1] and [0.3, 0.5] ⊆ [ℓ2, u2]. The options for K (w.r.t. WP) are the closure of those weakenings s.t. [ℓ1, u1] ∩ [ℓ2, u2] ≠ ∅ (this condition ensures consistency).
Suppose that the preferred options are those that modify the probability intervals as little as possible: Oi ⪰P Oj iff sc(Oi) ≤ sc(Oj) for any options Oi, Oj for K, where sc(CN({ψ′1, ψ′2})) = diff(ψ1, ψ′1) + diff(ψ2, ψ′2) and diff(φ : [ℓ1, u1], φ : [ℓ2, u2]) = ℓ1 − ℓ2 + u2 − u1. The preferred options are the closure of:
The weakenings (under WP ) whose closure yields the preferred options (w.r.t. P )
can be found by solving a linear program derived from the original knowledge base.
We now show how to derive such a linear program.
In the following definition we use W to denote the set of possible worlds for a knowledge base K, that is, W = 2^Σ, Σ being the set of propositional symbols appearing in K.
Clearly, in the definition above, the ℓi's, ui's, and pw's are variables (pw denotes the probability of world w). We denote by Sol(LP(K)) the set of solutions of LP(K). We also associate a knowledge base KS with every solution S as follows: KS = {φi : [S(ℓi), S(ui)] | 1 ≤ i ≤ n}, where S(x) is the value assigned to variable x by solution S. Intuitively, the knowledge base KS is the knowledge base obtained by setting the bounds of each formula in K to the values assigned by solution S.
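For concreteness, the sketch below builds LP(K) for the two formulas of Example 4.2, assuming SciPy is available; the variable layout, the world encoding, and the use of atoms a, b for John's two candidate positions are our own illustrative choices, not the book's notation.

```python
# Sketch of LP(K) for Example 4.2 using SciPy. Variables:
# x = [l1, u1, l2, u2, p0, p1, p2, p3], with worlds over atoms {a, b}:
# w0 = {}, w1 = {a}, w2 = {b}, w3 = {a, b}. Formula 1 (a) holds in
# w1, w3; formula 2 (a or b) holds in w1, w2, w3.
from scipy.optimize import linprog

c = [-1, 1, -1, 1, 0, 0, 0, 0]          # minimize total interval widening
A_ub = [
    [1, 0, 0, 0,  0, -1, 0, -1],        # l1 <= p1 + p3
    [0, -1, 0, 0, 0, 1, 0, 1],          # p1 + p3 <= u1
    [0, 0, 1, 0,  0, -1, -1, -1],       # l2 <= p1 + p2 + p3
    [0, 0, 0, -1, 0, 1, 1, 1],          # p1 + p2 + p3 <= u2
]
b_ub = [0, 0, 0, 0]
A_eq = [[0, 0, 0, 0, 1, 1, 1, 1]]       # probabilities sum to 1
b_eq = [1]
bounds = [(0, 0.6), (0.7, 1),           # [0.6, 0.7] within [l1, u1]
          (0, 0.3), (0.5, 1),           # [0.3, 0.5] within [l2, u2]
          (0, 1), (0, 1), (0, 1), (0, 1)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
l1, u1, l2, u2 = res.x[:4]
# Total widening (0.6 - l1) + (u1 - 0.7) + (0.3 - l2) + (u2 - 0.5) = 0.1:
# the cheapest repairs stretch one bound by 0.1, e.g. u2 from 0.5 to 0.6.
```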
The following theorem states that the solutions of the linear program LP(K) derived from a knowledge base K “correspond to” the preferred options of K when the weakening mechanism is WP and the preference relation is ⪰P.
Proof. Let LP′ be the linear program obtained from LP(K) by discarding the objective function.
Theorem 4.2. Let K and ψ be a Horn probabilistic knowledge base and formula,
respectively. Suppose that the weakening mechanism returns subsets of the given
knowledge base and the preference relation is ⊇. The problem of deciding whether
ψ is a universal consequence of K is coNP-hard.
and

K2 = {u ← xT ∧ xF : [1, 1] | x ∈ X}

Given a variable x ∈ X, let

Kx = { xT : [1, 1],
       xF : [1, 1],
       ← xT ∧ xF : [1, 1]}

Finally,

K∗ = K1 ∪ K2 ∪ ⋃_{x∈X} Kx

The derived instance of our problem is (K∗, u : [1, 1]). First of all, note that K∗ is inconsistent, since Kx is inconsistent for any x ∈ X. The set of maximal consistent subsets of K∗ is:

M = {K1 ∪ K2 ∪ ⋃_{x∈X} K′x | K′x is a maximal consistent subset of Kx}

S = K1 ∪ K2 ∪ ⋃_{x∈True} {xT : [1, 1], ← xT ∧ xF : [1, 1]} ∪ ⋃_{x∈False} {xF : [1, 1], ← xT ∧ xF : [1, 1]}
Temporal logic has been extensively used for reasoning about programs and their
executions. It has achieved a significant role in the formal specification and verifica-
tion of concurrent and distributed systems (Pnueli 1977). In particular, a number of
useful concepts such as safety, liveness and fairness can be formally and concisely
specified using temporal logics (Manna and Pnueli 1992; Emerson 1990).
In this section, we consider Propositional Linear Temporal Logic (PLTL)
(Gabbay et al. 1980) – a logic used in verification of systems and reactive systems.
Basically, this logic extends classical propositional logic with a set of temporal con-
nectives. The particular variety of temporal logic we consider is based on a linear,
discrete model of time isomorphic to the natural numbers. Thus, the temporal con-
nectives operate over a sequence of distinct “moments” in time. The connectives
that we consider are ♦ (sometime in the future), □ (always in the future), and ◯ (at the next point in time).
Assuming a countable set Σ of propositional symbols, every p ∈ Σ is a PLTL
formula. If φ and ψ are PLTL formulas, then the following are PLTL formulas as
well: φ ∨ ψ, φ ∧ ψ, ¬φ, φ ← ψ, ◯φ, □φ, ♦φ.
The notion of a timeline can be formalized with a function I : N → 2Σ that
maps each natural number (representing a moment in time) to a set of propositional
symbols (intuitively, this is the set of propositional symbols which are true at that
moment). We say that
• (I, i) |= p iff p ∈ I(i), where p ∈ Σ;
• (I, i) |= ◯φ iff (I, i + 1) |= φ;
• (I, i) |= ♦φ iff ∃j. j ≥ i ∧ (I, j) |= φ;
• (I, i) |= □φ iff ∀j. j ≥ i implies (I, j) |= φ.
The semantics for the standard connectives is as expected. I is a model of a PLTL
formula φ iff (I, 0) |= φ . Consistency and entailment are defined in the standard way.
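As an illustration of these clauses, the evaluator below (ours) decides PLTL formulas over eventually-constant timelines, where the last listed state repeats forever; this assumption is ours and makes the unbounded quantifiers in the ♦ and □ clauses decidable by scanning one step into the constant tail.

```python
# Sketch: evaluating PLTL on an eventually-constant timeline. A formula
# is a nested tuple; I is a list of sets of atoms whose last state
# repeats forever (our simplifying assumption).

def holds(I, i, f):
    state = lambda j: I[min(j, len(I) - 1)]
    op = f[0]
    if op == "atom": return f[1] in state(i)
    if op == "not":  return not holds(I, i, f[1])
    if op == "or":   return holds(I, i, f[1]) or holds(I, i, f[2])
    if op == "and":  return holds(I, i, f[1]) and holds(I, i, f[2])
    if op == "next": return holds(I, i + 1, f[1])
    horizon = range(i, max(i, len(I)) + 1)   # reaches the constant tail
    if op == "eventually": return any(holds(I, j, f[1]) for j in horizon)
    if op == "always":     return all(holds(I, j, f[1]) for j in horizon)
    raise ValueError(op)

timeline = [{"requested"}, {"received"}, {"processed"}, set()]
assert holds(timeline, 0, ("eventually", ("atom", "processed")))
assert not holds(timeline, 0, ("always", ("atom", "requested")))
```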
Example 4.4. Consider the PLTL knowledge base reported below (Artale 2008)
which specifies the behavior of a computational system.
ψ1 □(♦received ← requested)
ψ2 □(◯processed ← received)
The first statement says that it is always the case that if a request is issued, then
it will be received at some future time point. The second statement says that it is
always the case that if a request is received, then it will be processed at the next time
point. The statements above correspond to the definition of the system, i.e., how the
system is supposed to behave. Suppose now that there is a monitoring system which
reports data regarding the system’s behavior and, for instance, the following formula
is added to the knowledge base:
The inclusion of ψ3 makes the knowledge base inconsistent, since the monitoring
system is reporting that a request was received and was not processed at the next
moment in time, but two moments afterwards.
Consider a weakening function that replaces the ◯ operator with the ♦ operator
in a formula ψ , provided that the formula thus obtained is a consequence of ψ .
Suppose that preferred options are those that keep as many monitoring system for-
mulas unchanged (i.e. unweakened) as possible. In this case, the only preferred op-
tion is CN({ψ1, □(♦processed ← received), ψ3}), where formula ψ2, stating that if
a request is received then it will be processed at the next moment in time, has been
weakened into a new one stating that the request will be processed at a future point
in time.
Theorem 4.3. Let K and ψ be a temporal knowledge base and formula, respec-
tively. Suppose the weakening mechanism returns subsets of the given knowledge
base and the preference relation is ⊇. The problem of deciding whether ψ is a uni-
versal consequence of K is coNP-hard.
Proof. A reduction from 3-DNF VALIDITY to our problem can be carried out in
a similar way to the proof of Theorem 4.2. Let φ = C1 ∨ . . . ∨ Cn be an instance of
3-DNF VALIDITY, where the Ci ’s are conjunctions containing exactly three literals,
and X the set of propositional variables appearing in φ . We derive from φ a temporal
knowledge base K∗ as follows. Given a literal ℓ of the form x (resp. ¬x), with x ∈ X, we denote with p(ℓ) the propositional variable xT (resp. xF). Let

K1 = {□(u ← p(ℓ1) ∧ p(ℓ2) ∧ p(ℓ3)) | ℓ1 ∧ ℓ2 ∧ ℓ3 is a conjunction of φ}

and

K2 = {□(u ← xT ∧ xF) | x ∈ X}

Given a variable x ∈ X, let

Kx = { □xT,
       □xF,
       □(← xT ∧ xF)}

Finally,

K∗ = K1 ∪ K2 ∪ ⋃_{x∈X} Kx

The derived instance of our problem is (K∗, u). The claim can be proved in a similar way to the proof of Theorem 4.2.
In this section we consider fuzzy logic. Formulas are of the form φ : v, where φ is
a propositional formula built from a set Σ of propositional symbols and the logical
connectives ¬, ∧, ∨, and v ∈ [0, 1] (we call v the degree of truth). An interpretation
ψ1 : a : 0.7
ψ2 : b : 0.6
ψ3 : ¬(a ∧ b) : 0.5

Suppose that the weakening mechanism is defined as follows: for any formula φ : v, we define W(φ : v) = {φ : v′ | v′ ∈ [0, 1] ∧ v′ ≤ v}; then, W(K) = {{ψ′1, ψ′2, ψ′3} | ψ′i ∈ W(ψi), 1 ≤ i ≤ 3}. Thus, options are the closures of consistent sets of the form

ψ′1 : a : v1
ψ′2 : b : v2
ψ′3 : ¬(a ∧ b) : v3

where v1 ≤ 0.7, v2 ≤ 0.6, and v3 ≤ 0.5. Finally, suppose that the preference relation is expressed as before but, in addition, we would like to change the degrees of truth as little as possible. In this case, the preferred options are:

CN({b : 0.5, ψ1, ψ3})
CN({¬(a ∧ b) : 0.4, ψ1, ψ2})
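The arithmetic behind this example can be checked mechanically. The sketch below assumes the standard min/max/complement semantics for degrees of truth (the formal definition is abridged above, so this reading is our assumption): an interpretation satisfies φ : v iff it assigns φ a degree of at least v.

```python
# Checking the example's weakenings under min/max/complement semantics
# (our assumed reading of the abridged definition above).

def degree_not_and(da, db):
    """Degree of not(a and b) given the degrees of a and b."""
    return 1 - min(da, db)

def satisfiable(va, vb, vnab):
    # Degrees of a and b may only rise above their thresholds, which can
    # only lower deg(not(a and b)); the best case is da = va, db = vb.
    return degree_not_and(va, vb) >= vnab

assert not satisfiable(0.7, 0.6, 0.5)  # the original K is inconsistent
assert satisfiable(0.7, 0.5, 0.5)      # weaken b from 0.6 to 0.5
assert satisfiable(0.7, 0.6, 0.4)      # weaken not(a and b) from 0.5 to 0.4
```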
We refer to fuzzy knowledge bases whose formulas are built from propositional
Horn formulas as Horn fuzzy knowledge bases. The following theorem states that
even for this restricted subset of fuzzy logic, the problem of deciding whether a
formula is a universal consequence of a knowledge base is coNP-hard.
Theorem 4.4. Let K and ψ be a Horn fuzzy knowledge base and formula, respec-
tively. Let W⊆ and ⊇ be the adopted weakening mechanism and preference relation,
respectively. The problem of deciding whether ψ is a universal consequence of K
is coNP-hard.
Proof. A reduction from 3-DNF VALIDITY to our problem can be carried out in a way similar to the proof of Theorem 4.2. Let φ = C1 ∨ . . . ∨ Cn be an instance of 3-DNF VALIDITY, where the Ci's are conjunctions containing exactly three literals, and X is the set of propositional variables appearing in φ. We derive from φ a Horn fuzzy knowledge base K∗ as follows. Given a literal ℓ of the form x (resp. ¬x), with x ∈ X, we denote with p(ℓ) the propositional variable xT (resp. xF). Let

K1 = {u ∨ ¬p(ℓ1) ∨ ¬p(ℓ2) ∨ ¬p(ℓ3) : 1 | ℓ1 ∧ ℓ2 ∧ ℓ3 is a conjunction of φ}

and

K2 = {u ∨ ¬xT ∨ ¬xF : 1 | x ∈ X}

Given a variable x ∈ X, let

Kx = { xT : 1,
       xF : 1,
       ¬xT ∨ ¬xF : 1}

Finally,

K∗ = K1 ∪ K2 ∪ ⋃_{x∈X} Kx

The derived instance of our problem is (K∗, u : 1). The claim can be proved in a way similar to the proof of Theorem 4.2.
In this section we focus on the belief logic presented in Levesque (1984). Formulas
are formed from a set Σ of primitive propositions, the standard connectives ∨, ∧,
and ¬, and two unary connectives B and L. Neither B nor L appear within the scope
of the other. Connective B is used to express what is explicitly believed by an agent
(a sentence that is actively held to be true by the agent), whereas L is used to ex-
press what is implicitly believed by the agent (i.e., the consequences of his explicit
beliefs).
Semantics of sentences is given in terms of a model structure ⟨S, B, T, F⟩, where S is a set of situations, B is a subset of S (the situations that could be the actual ones according to what is believed), and T and F are functions from Σ to subsets of S. Intuitively, T(p) are the situations that support the truth of p and F(p) are the situations that support the falsity of p. A primitive proposition may be true, false, both, or neither in a situation. A complete situation (or possible world) is one that supports either the truth or the falsity (not both) of every primitive proposition. A complete situation s′ is compatible with a situation s if s′ and s agree whenever s is defined, i.e., if s ∈ T(p) then s′ ∈ T(p), and if s ∈ F(p) then s′ ∈ F(p), for each primitive proposition p. Let W(B) consist of all complete situations in S compatible with some situation in B.
Two support relations |=T and |=F between situations and formulas are defined in the following way:
• s |=T p iff s ∈ T(p), where p is a primitive proposition;
• s |=F p iff s ∈ F(p), where p is a primitive proposition;
• s |=T (α ∨ β) iff s |=T α or s |=T β;
• s |=F (α ∨ β) iff s |=F α and s |=F β;
• s |=T (α ∧ β) iff s |=T α and s |=T β;
• s |=F (α ∧ β) iff s |=F α or s |=F β;
• s |=T ¬α iff s |=F α;
• s |=F ¬α iff s |=T α;
• s |=T Bα iff for every s′ ∈ B, s′ |=T α;
• s |=F Bα iff s ⊭T Bα;
• s |=T Lα iff for every s′ ∈ W(B), s′ |=T α;
• s |=F Lα iff s ⊭T Lα.
Given a complete situation s in S, if s |=T α, then α is true at s; otherwise, α is said to be false at s. Finally, α is said to be valid (|= α) iff for any model structure ⟨S, B, T, F⟩ and any complete situation s in S, α is true at s. The satisfiability of a sentence is defined analogously; entailment is defined in the expected way.
Note that belief logic allows an agent to believe contradictory sentences, e.g.,
{Bp, B¬p} is a consistent knowledge base. However, {Bp, ¬Bp} is inconsistent.
Example 4.6. Consider the following inconsistent knowledge base K that repre-
sents the knowledge of an agent regarding a city’s subway system:
ψ1 : goingNorthTrain1
ψ2 : B goingNorthTrain1
ψ3 : goingNorthTrain1 → canGetUpTownFromStationA
ψ4 : B(goingNorthTrain1 → canGetUpTownFromStationA)
ψ5 : ¬(canGetUpTownFromStationA)
Using a train schedule associated with train station A, we might be able to express
formulas ψ1 and ψ3 . ψ1 states that Train 1 goes north, whereas ψ3 states that if Train
1 goes north, then the agent can get uptown from station A. Formulas ψ2 and ψ4 state
that the agent explicitly believes in the information that he got from the schedule.
However, this knowledge base is inconsistent because of the presence of formula
ψ5 , which states that it is not possible to get uptown from station A, for instance,
because that route is closed for repairs.
Suppose that each formula ψi is associated with a time stamp t(ψi ) that represents
the moment in time in which the agent acquired that piece of information. In this
case, we consider the subsets of K as its weakenings, and the preference rela-
tion is defined in such a way that maximal (under ⊆) options are preferable to the
others, and among these we say that Oi O j iff sc(Oi ) ≥ sc(O j ) where sc(O) =
∑ψ ∈O∩K t(ψ ), i.e., we would like to preserve as many formulas as possible and
more up to date information. If in our example we have t(ψ1 ) = t(ψ2 ) = 1, t(ψ3 ) =
t(ψ4 ) = 3, and t(ψ5 ) = 5, then the only preferred option is CN({ψ2 , ψ3 , ψ4 , ψ5 }).
Theorem 4.5. Let K and ψ be a belief knowledge base and formula, respectively.
Suppose that the weakening mechanism returns subsets of the given knowledge base
and the preference relation is ⊇. The problem of deciding whether ψ is a universal
consequence of K is coNP-hard.
Proof. A reduction from 3-DNF VALIDITY to our problem can be carried out in
a similar way to the proof of Theorem 4.2 by using only propositional formulas.
Let φ = C1 ∨ . . . ∨Cn be an instance of 3-DNF VALIDITY, where the Ci ’s are con-
junctions containing exactly three literals, and X is the set of propositional variables
appearing in φ. We derive from φ a belief knowledge base K∗ as follows. Given a literal ℓ of the form x (resp. ¬x), with x ∈ X, we denote by p(ℓ) the propositional variable xT (resp. xF). Let

K1 = {u ∨ ¬p(ℓ1) ∨ ¬p(ℓ2) ∨ ¬p(ℓ3) | ℓ1 ∧ ℓ2 ∧ ℓ3 is a conjunction of φ}

and

K2 = {u ∨ ¬xT ∨ ¬xF | x ∈ X}

Given a variable x ∈ X, let

Kx = { xT,
       xF,
       ¬xT ∨ ¬xF}

Finally,

K∗ = K1 ∪ K2 ∪ ⋃_{x∈X} Kx

The derived instance of our problem is (K∗, u). The claim can be proved in a similar way to the proof of Theorem 4.2.
relation is not known, we can also use the union of different base relations, e.g., a {NTPP, NTPP−1} b means that either a is a non-tangential proper part of b or vice versa.
ψ1 a {NTPP} b
ψ2 b {EC} c
ψ3 a {EC} c
The knowledge base is inconsistent since the first two formulas imply that a and c are disconnected, whereas the last one states that they are externally connected. Suppose the weakening mechanism used is Wall. In this case, the knowledge base is weakened by making its formulas more undefined. For instance, some options for K are

O1 = CN({a {NTPP, TPP} b, ψ2, ψ3})
O2 = CN({b {EC, PO, NTPP−1} c, ψ1, ψ3})
O3 = CN({b {EC, NTPP−1} c, ψ1, ψ3})
O4 = CN({a {NTPP, TPP} b, b {EC, DC} c, ψ3})

Suppose the preference relation chooses those options that weaken a minimum number of formulas as the preferred options. In this case, O1, O2, O3 are preferred options, whereas O4 is not.
Suppose the preference relation is ⪰W. Then O1 and O3 are preferred options, whereas O2 and O4 are not. In fact, it is easy to see that O3 ⪰W O2 but not vice versa, and O1 ⪰W O4 but not vice versa.
Finally, suppose that options that only weaken formulas of the form x {NTPP} y or x {TPP} y into x {NTPP, TPP} y are preferable to the others (e.g., because we are not sure if a region is a tangential or non-tangential proper part of another, but the information about the other topological relations is reliable and we would prefer not to change it). In this case, O1 is a preferred option whereas the others are not.
Chapter 5
Link with Existing Approaches
S0 = ∅
Si = Si−1 ∪ {ψi} if Si−1 ∪ {ψi} is consistent; Si−1 otherwise (1 ≤ i ≤ n)
Proof. Straightforward.
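The construction above is a one-pass greedy scan, sketched below with a consistency oracle supplied by the logic at hand (encoding ours).

```python
# Greedy construction of a preferred subbase (sketch, ours): scan the
# formulas in priority order, keeping each one that does not break
# consistency of what has been kept so far.

def preferred_subbase(ordered_formulas, is_consistent):
    S = []
    for psi in ordered_formulas:      # psi_1, ..., psi_n by priority
        if is_consistent(S + [psi]):
            S.append(psi)
    return S
```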
Brewka (1989) provides a weak and strong notion of provability for both the
generalizations described above. A formula ψ is weakly provable from a knowledge
base K iff there is a preferred subbase S of K s.t. ψ ∈ CN(S); ψ is strongly prov-
able from K iff for every preferred subbase S of K we have ψ ∈ CN(S). Clearly,
the latter notion of provability corresponds to our notion of universal consequence
(Definition 2.8), whereas the former is not a valid inference mechanism, since the set
of weakly provable formulas might be inconsistent. Observe that Brewka’s approach
is committed to a specific logic, weakening mechanism and preference criterion,
whereas our framework is applicable to different logics and gives the flexibility
to choose the weakening mechanism and the preference relation that the end-user believes most suitable for his purposes.
Proof. Straightforward.
As already said before, once a criterion for determining preferred subbases has been fixed, a formula is a consequence of K if it can be classically inferred from every preferred subbase, which corresponds to our universal inference mechanism (Definition 2.8).
In Cayrol and Lagasquie-Schiex (1995), the same criteria for selecting preferred
consistent subbases are considered, and three entailment principles are presented.
The UNI principle corresponds to our universal inference mechanism and it is the
same as in Benferhat et al. (1993). According to the EXI principle, a formula ψ
is inferred from a knowledge base K if ψ is classically inferred from at least one
preferred subbase of K . According to the ARG principle, a formula ψ is inferred
from a knowledge base K if ψ is classically inferred from at least one preferred
subbase and no preferred subbase classically entails ¬ψ . The last two entailment
principles are not valid inference mechanisms in our framework, since the set of
EXI (resp. ARG) consequences might be inconsistent.
The second approach to handling inconsistency does not aim to restore consistency; instead, the main objective is to reason despite the inconsistency, treating inconsistent information as informative. Argumentation and paraconsistent logic are examples of this approach (we refer the reader to Hunter (1998) for
a survey on paraconsistent logics). Argumentation is based on the justification of
plausible conclusions by arguments (Amgoud and Cayrol 2002; Dung 1995). Due
to inconsistency, arguments may be attacked by counterarguments. The problem is
thus to select the most acceptable arguments. Several semantics were proposed in
the literature for that purpose (see Baroni et al. 2011 for a survey).
In Amgoud and Besnard (2010), an argumentation system was proposed for rea-
soning about inconsistent premises. The system is grounded on Tarski’s logics. It
builds arguments from a knowledge base as follows:
Definition 5.9 (Argument). Let K be a knowledge base. An argument is a pair (X, x) s.t. X ⊆ K, X is consistent, x ∈ CN(X), and there is no X′ ⊂ X such that x ∈ CN(X′). X is called the support of the argument and x its conclusion.
Notations: For K ⊆ L , Arg(K ) is the set of all arguments that may be built
from K using Definition 5.9. For an argument a = (X, x), Conc(a) = x and
Supp(a) = X.
An argument a attacks an argument b if it undermines one of its premises.
Definition 5.10 (Attack). Let a, b be two arguments. a attacks b, denoted aRu b, iff
∃y ∈ Supp(b) such that the set {Conc(a), y} is inconsistent.
An argumentation system over a given knowledge base is thus the pair consisting of the set of arguments and the attack relation among them.
It is worth mentioning that the set Arg(K ) may be infinite even when the base K
is finite. This would mean that the argumentation system may be infinite.1
Arguments are evaluated using the stable semantics proposed in Dung (1995).
The conclusions that may be drawn from K by an argumentation system are the
formulas that are conclusions of arguments in each stable extension.
In Amgoud (2012), it was shown that this argumentation system captures the ap-
proach of Rescher and Manor recalled earlier in this chapter. Indeed, each stable
extension returns a maximal (under set inclusion) consistent subbase of the orig-
inal knowledge base K . Moreover, the plausible conclusions drawn from K are
those inferred from all the maximal subbases of K. In light of Proposition 5.1, it is easy to show that the abstract framework proposed in this book also captures the argumentation system proposed in Amgoud and Besnard (2010).
Proof. Straightforward.
1 An AS is finite iff each argument is attacked by a finite number of arguments. It is infinite other-
wise.
Chapter 6
Conclusions
and needs. (ii) A general notion of preference relation between options. We show that our framework not only captures maximal consistent subsets, but also many other criteria that a user may use to select among options. We have also shown that by defining an appropriate preference relation over options, we can capture several existing works, such as the subbases defined in Rescher and Manor (1970) and Brewka's subtheories. (iii) The last component of the framework consists of an inference mechanism that allows the selection of the inferences to be drawn from the knowledge base. This mechanism should return an option, which forces the system to make safe inferences.
We have also shown through examples how this abstract framework can be used
in different logics, provided new results on the complexity of reasoning about in-
consistency in such logics, and proposed general algorithms for computing preferred
options.
In short, our framework empowers end-users to make decisions about what they mean by an option and which options they prefer to others, and it prevents them from being bound by systemic assumptions made by a researcher who might never have seen their application, or who does not understand the data or the risks posed to the user when decisions are made on the basis of some a priori definition of what data should be discarded when an inconsistency arises.