First-Order Logic - Syntax, Semantics, Resolution: Ruzica Piskac
Ruzica Piskac
Yale University
ruzica.piskac@yale.edu
Acknowledgments
1.1 Syntax
Signature
Σ = (Ω, Π),
where
• Ω a set of function symbols f with arity n ≥ 0, written f /n,
• Π a set of predicate symbols p with arity m ≥ 0, written p/m.
If n = 0 then f is also called a constant (symbol). If m = 0 then p is also
called a propositional variable. We use letters P, Q, R, S to denote
propositional variables.
Refined concept for practical applications: many-sorted signatures
(corresponds to simple type systems in programming languages); not
so interesting from a logical point of view
Variables
Terms
Atoms
Literals
Clauses
Notational Conventions
Example
∀ y (∀ x p(x) =⇒ q(x, y))

The scope of ∀y is the whole body (∀x p(x) =⇒ q(x, y)); the scope of ∀x is only p(x).
The occurrence of y is bound, as is the first occurrence of x. The
second occurrence of x is a free occurrence.
Substitutions
x[s/x] = s
x′ [s/x] = x′ ; if x′ ≠ x
f (s1 , . . . , sn )[s/x] = f (s1 [s/x], . . . , sn [s/x])
⊥[s/x] = ⊥
⊤[s/x] = ⊤
p(s1 , . . . , sn )[s/x] = p(s1 [s/x], . . . , sn [s/x])
(u ≈ v)[s/x] = (u[s/x] ≈ v[s/x])
¬F[s/x] = ¬(F[s/x])
(FρG)[s/x] = (F[s/x]ρG[s/x]) ; for each binary connective ρ
(QyF)[s/x] = Qz((F[z/y])[s/x]) ; with z a “fresh” variable
We need to make sure that the (free) variables in s are not captured
upon placing s into the scope of a quantifier, hence the renaming of the
bound variable y into a “fresh”, that is, previously unused, variable z.
Why this definition of substitution is well-defined will be discussed
below.
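The renaming clause can be watched in action. Below is a minimal sketch, not from the slides, of capture-avoiding substitution over an assumed tuple encoding of terms and formulas; like the clause (QyF)[s/x] = Qz((F[z/y])[s/x]) above, it renames every bound variable to a fresh one.

```python
# Assumed encoding (illustration only): terms are ('var', name) or
# ('fn', name, args); formulas are ('pred', p, args), ('not', F),
# ('and'/'or'/'imp', F, G), and ('forall'/'exists', y, F).
import itertools

_fresh = itertools.count()

def fresh_var():
    return ('var', f'z{next(_fresh)}')          # previously unused variable

def subst_term(t, s, x):
    if t[0] == 'var':
        return s if t == x else t               # x[s/x] = s, x'[s/x] = x'
    return ('fn', t[1], [subst_term(a, s, x) for a in t[2]])

def subst(F, s, x):
    kind = F[0]
    if kind == 'pred':
        return ('pred', F[1], [subst_term(a, s, x) for a in F[2]])
    if kind == 'not':
        return ('not', subst(F[1], s, x))
    if kind in ('and', 'or', 'imp'):
        return (kind, subst(F[1], s, x), subst(F[2], s, x))
    if kind in ('forall', 'exists'):
        y, G = F[1], F[2]
        z = fresh_var()                         # rename bound y to fresh z,
        return (kind, z, subst(subst(G, z, y), s, x))  # then substitute
    raise ValueError(kind)

# (forall y p(x, y))[y/x] must NOT capture the free y being substituted in:
F = ('forall', ('var', 'y'), ('pred', 'p', [('var', 'x'), ('var', 'y')]))
G = subst(F, ('var', 'y'), ('var', 'x'))
```

After the call, the bound y has been renamed to a fresh z, while the y substituted for x stays free.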
General Substitutions
In general, substitutions are mappings
σ : X → TΣ (X)
such that the domain of σ, that is, the set
dom(σ) = {x ∈ X | σ(x) ≠ x},
is finite. The set of variables introduced by σ, that is, the set of
variables occurring in one of the terms σ(x), with x ∈ dom(σ), is
denoted by codom(σ).
Substitutions are often written as [s1 /x1 , . . . , sn /xn ], with xi pairwise
distinct, and then denote the mapping
[s1 /x1 , . . . , sn /xn ](y) = si , if y = xi ; and y, otherwise.
Modifying a Substitution
Application of a Substitution
“Homomorphic” extension of σ to terms and formulas:
f (s1 , . . . , sn )σ = f (s1 σ, . . . , sn σ)
⊥σ = ⊥
⊤σ = ⊤
p(s1 , . . . , sn )σ = p(s1 σ, . . . , sn σ)
(u ≈ v)σ = (uσ ≈ vσ)
¬Fσ = ¬(Fσ)
(FρG)σ = (Fσ ρ Gσ) ; for each binary connective ρ
(Qx F)σ = Qz (F σ[x ↦ z]) ; with z a fresh variable
Exercise: Convince yourself that for the special case σ = [t/x] the new
definition coincides with our previous definition (modulo the choice of
fresh names for the bound variables).
Structural Induction
Theorem 1
Let G = (N, T, P, S) be a context-free grammar and let q be a property
of T ∗ (the words over the alphabet T of terminal symbols of G).
q holds for all words w ∈ L(G), whenever one can prove these 2
properties:
1 (base cases)
q(w′ ) holds for each w′ ∈ T ∗ such that X ::= w′ is a rule in P.
2 (step cases)
If X ::= w0 X0 w1 . . . wn Xn wn+1 is in P with Xi ∈ N, wi ∈ T ∗ , n ≥ 0,
then for all w′i ∈ L(G, Xi ), whenever q(w′i ) holds for 0 ≤ i ≤ n, then
also q(w0 w′0 w1 . . . wn w′n wn+1 ) holds.
Here L(G, Xi ) ⊆ T ∗ denotes the language generated by the grammar G
from the nonterminal Xi .
(Infinite grammars are also admitted.)
Structural Recursion
Theorem 2
Let G = (N, T, P, S) be an unambiguous context-free grammar. A
function f is well-defined on L(G) (that is, unambiguously defined)
whenever these 2 properties are satisfied:
1 (base cases)
f is well-defined on the words w′ ∈ T ∗ for each rule X ::= w′ in P.
2 (step cases)
If X ::= w0 X0 w1 . . . wn Xn wn+1 is a rule in P then
f (w0 w′0 w1 . . . wn w′n wn+1 ) is well-defined, assuming that each of the
f (w′i ) is well-defined.
Substitution Revisited
Q: Does Theorem 2 justify that our homomorphic extension of
substitutions to terms and formulas is well-defined?
1.2. Semantics
Structures
Assignments
A(β) : TΣ (X) → A
as follows:
UN = {0, 1, 2, . . .}
0N = 0
sN : n ↦ n + 1
+N : (n, m) ↦ n + m
∗N : (n, m) ↦ n ∗ m
≤N = {(n, m) | n less than or equal to m}
<N = {(n, m) | n less than m}
N(β)(s(x) + s(0)) = 3
N(β)(x + y ≈ s(y)) = 1
N(β)(∀x, y(x + y ≈ y + x)) = 1
N(β)(∀z z ≤ y) = 0
N(β)(∀x∃y x < y) = 1
A, β |= F :⇔ A(β)(F) = 1
A |= F :⇔ A, β |= F, for all β ∈ X → UA
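The sample values above can be recomputed. Below is a minimal sketch, not from the slides, of the evaluation N(β)(·) on terms, using an assumed tuple encoding and assuming β(x) = β(y) = 1 (the first example forces β(x) = 1).

```python
# Interpretations of the function symbols of the structure N:
FNS = {'0': lambda: 0, 's': lambda n: n + 1,
       '+': lambda n, m: n + m, '*': lambda n, m: n * m}

def eval_term(t, beta):
    """N(β)(t) for terms ('var', x) or ('fn', name, args)."""
    if t[0] == 'var':
        return beta[t[1]]
    return FNS[t[1]](*[eval_term(a, beta) for a in t[2]])

def eval_eq(u, v, beta):
    """Truth value of the atom u ≈ v, as 0/1 like the slides."""
    return 1 if eval_term(u, beta) == eval_term(v, beta) else 0

def s(t): return ('fn', 's', [t])
x, y, zero = ('var', 'x'), ('var', 'y'), ('fn', '0', [])

beta = {'x': 1, 'y': 1}
val = eval_term(('fn', '+', [s(x), s(zero)]), beta)   # N(β)(s(x) + s(0))
eq = eval_eq(('fn', '+', [x, y]), s(y), beta)         # N(β)(x + y ≈ s(y))
```

With β(x) = 1 this reproduces N(β)(s(x) + s(0)) = 3 and N(β)(x + y ≈ s(y)) = 1.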
Substitution Lemma
The following theorems, to be proved by structural induction, hold for
all Σ-algebras A, assignments β, and substitutions σ.
Theorem 3
For any Σ-term t
A(β)(tσ) = A(β ◦ σ)(t),
where β ◦ σ : X → A is the assignment β ◦ σ(x) = A(β)(xσ).
Theorem 4
For any Σ-formula F, A(β)(Fσ) = A(β ◦ σ)(F).
Corollary 5
A, β |= Fσ ⇔ A, β ◦ σ |= F
These theorems basically express that the syntactic concept of
substitution corresponds to the semantic concept of an assignment.
Models, Validity, and Satisfiability Properties
Theorem 7
F and G are equivalent iff (F ≡ G) is valid.
Extension to sets of formulas N in the “natural way”, e.g., N |= F
:⇔ for all A ∈ Σ-Alg and β ∈ X → UA :
if A, β |= G, for all G ∈ N, then A, β |= F.
Validity and unsatisfiability are just two sides of the same coin, as
explained by the following proposition.
Theorem 8
F valid ⇔ ¬F unsatisfiable
Hence in order to design a theorem prover (validity checker) it is
sufficient to design a checker for unsatisfiability.
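For the propositional special case the reduction is easy to check by brute force. A sketch, with formulas as Python predicates over truth assignments (an assumed encoding, not from the text):

```python
from itertools import product

def satisfiable(f, vs):
    """Does some assignment to the variables vs make f true?"""
    return any(f(dict(zip(vs, vals)))
               for vals in product([False, True], repeat=len(vs)))

def valid(f, vs):
    """F valid ⇔ ¬F unsatisfiable (Theorem 8)."""
    return not satisfiable(lambda a: not f(a), vs)

taut = lambda a: a['P'] or not a['P']     # the tautology P ∨ ¬P
```

So a satisfiability checker doubles as a validity checker by negating the input.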
Q: In a similar way, entailment N |= F can be reduced to unsatisfiability.
How?
Theory of a Structure
Problem of axiomatizability:
For which structures A can one axiomatize Th(A), that is, can one
write down a formula F (or a recursively enumerable set F of formulas)
such that
Th(A) = {G | F |= G}?
Analogously for sets of structures.
Q1 x1 . . . Qn xn F,
(F ≡ G) ⇒P (F =⇒ G) ∧ (G =⇒ F)
¬QxF ⇒P Q̄x ¬F (¬Q), where Q̄ denotes the dual quantifier (∀̄ = ∃, ∃̄ = ∀)
(QxF ρ G) ⇒P Qy(F[y/x] ρ G), y fresh, ρ ∈ {∧, ∨}
(QxF =⇒ G) ⇒P Qy(F[y/x] =⇒ G), y fresh
(F ρ QxG) ⇒P Qy(F ρ G[y/x]), y fresh, ρ ∈ {∧, ∨, =⇒ }
Skolemization
Intuition: replacement of ∃y by a concrete choice function computing y
from all the arguments y depends on.
Transformation ⇒S (to be applied outermost, not in subformulas):
Theorem 9
Let F, G, and H be as defined above and closed. Then
(i) F and G are equivalent.
(ii) H |= G but the converse is not true in general.
(iii) G satisfiable (wrt. Σ-Alg) ⇔ H satisfiable (wrt. Σ′ -Alg)
where Σ′ = (Ω ∪ SKF, Π), if Σ = (Ω, Π).
Theorem 11
Let F be closed. F satisfiable iff F ′ satisfiable iff N satisfiable
Optimization
In a Herbrand interpretation every function symbol is interpreted by the
corresponding term constructor: fA (t1 , . . . , tn ) = f (t1 , . . . , tn ).
In other words, values are fixed to be ground terms and functions are
fixed to be the term constructors. Only predicate symbols p/m ∈ Π
may be freely interpreted as relations pA ⊆ TΣm .
(s1 , . . . , sn ) ∈ pA :⇔ p(s1 , . . . , sn ) ∈ I
Thus we shall identify Herbrand interpretations (over Σ) with sets of
Σ-ground atoms.
Example: ΣPres = ({0/0, s/1, +/2}, {< /2, ≤ /2})
N as Herbrand interpretation over ΣPres :
I={ 0 ≤ 0, 0 ≤ s(0), 0 ≤ s(s(0)), . . . ,
0 + 0 ≤ 0, 0 + 0 ≤ s(0), . . . ,
. . . , (s(0) + 0) + s(0) ≤ s(0) + (s(0) + s(0))
...
s(0) + 0 < s(0) + 0 + 0 + s(0)
. . .}
Example of a GΣ
C = (x < y) ∨ (y ≤ s(x))
Soundness, Completeness
Provability ⊢Γ of F from N in Γ:
N ⊢Γ F :⇔ there exists a proof Γ of F from N.
Γ is called sound :⇔ whenever (F1 . . . Fn / F) is an inference in Γ,
then F1 , . . . , Fn |= F.
Γ is called complete :⇔ N |= F ⇒ N ⊢Γ F.
Γ is called refutationally complete :⇔ N |= ⊥ ⇒ N ⊢Γ ⊥.
Proofs as Trees
markings = formulas
leaves = assumptions and axioms
other nodes = inferences: conclusion = ancestor, premises = direct descendants
P(f (a)) ∨ Q(b) ¬P(f (a)) ∨ ¬P(f (a)) ∨ Q(b)
¬P(f (a)) ∨ Q(b) ∨ Q(b)
P(f (a)) ∨ Q(b) ¬P(f (a)) ∨ Q(b)
Q(b) ∨ Q(b)
Q(b) ¬P(f (a)) ∨ ¬Q(b)
P(g(a, b)) ¬P(g(a, b))
⊥
Theorem 14
(i) Let Γ be sound. Then N ⊢Γ F ⇒ N |= F
(ii) N ⊢Γ F ⇒ there exist F1 , . . . , Fn ∈ N s.t. F1 , . . . , Fn ⊢Γ F
(resembles compactness).
Definition 15
• Resolution inference rule
C∨A ¬A ∨ D
C∨D
• (positive) factorisation
C∨A∨A
C∨A
These are schematic inference rules; for each substitution of the
schematic variables C, D, and A, respectively, by ground clauses and
ground atoms we obtain an inference rule.
As “∨” is considered associative and commutative, we assume that A
and ¬A can occur anywhere in their respective clauses.
Sample Refutation
Example 16
1. ¬P(f (a)) ∨ ¬P(f (a)) ∨ Q(b) (given)
2. P(f (a)) ∨ Q(b) (given)
3. ¬P(g(b, a)) ∨ ¬Q(b) (given)
4. P(g(b, a)) (given)
5. ¬P(f (a)) ∨ Q(b) ∨ Q(b) (Res. 2. into 1.)
6. ¬P(f (a)) ∨ Q(b) (Fact. 5.)
7. Q(b) ∨ Q(b) (Res. 2. into 6.)
8. Q(b) (Fact. 7.)
9. ¬P(g(b, a)) (Res. 8. into 3.)
10. ⊥ (Res. 4. into 9.)
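Refutations like the one above can be searched for mechanically. Below is a naive sketch, with an assumed encoding of ground clauses as frozensets of (sign, atom) literals; factorisation is implicit, since duplicate literals collapse in a set, matching the remark that “∨” is associative and commutative.

```python
def resolvents(c, d):
    """All ground resolvents of clauses c and d (sets of (sign, atom))."""
    return {frozenset((c - {(sg, at)}) | (d - {(not sg, at)}))
            for (sg, at) in c if (not sg, at) in d}

def saturate(clauses):
    """Saturate under resolution; True iff ⊥ (the empty clause) is derived."""
    n = set(clauses)
    while True:
        cs = list(n)
        derived = {r for i, c in enumerate(cs)
                     for d in cs[i:] for r in resolvents(c, d)}
        if frozenset() in derived:
            return True                  # ⊥ derived: input unsatisfiable
        if derived <= n:
            return False                 # saturated without deriving ⊥
        n |= derived

P, Q, Pg = 'P(f(a))', 'Q(b)', 'P(g(b,a))'
N = [frozenset({(False, P), (True, Q)}),    # ¬P(f(a)) ∨ ¬P(f(a)) ∨ Q(b)
     frozenset({(True, P), (True, Q)}),     # P(f(a)) ∨ Q(b)
     frozenset({(False, Pg), (False, Q)}),  # ¬P(g(b,a)) ∨ ¬Q(b)
     frozenset({(True, Pg)})]               # P(g(b,a))
```

On the four given clauses of Example 16 the saturation finds the empty clause, as the hand refutation does.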
C ∨ A ∨ ... ∨ A ¬A ∨ D
C∨D
Example 17
1. ¬P(f (a)) ∨ ¬P(f (a)) ∨ Q(b) (given)
2. P(f (a)) ∨ Q(b) (given)
3. ¬P(g(b, a)) ∨ ¬Q(b) (given)
4. P(g(b, a)) (given)
5. ¬P(f (a)) ∨ Q(b) ∨ Q(b) (Res. 2. into 1.)
6. Q(b) ∨ Q(b) ∨ Q(b) (Res. 2. into 5.)
7. ¬P(g(b, a)) (Res. 6. into 3.)
8. ⊥ (Res. 4. into 7.)
Soundness of Resolution
Theorem 18
Propositional resolution is sound.
Proof. Let I ∈ Σ-Alg. To be shown:
(i) for resolution: I |= C ∨ A, I |= D ∨ ¬A ⇒ I |= C ∨ D
(ii) for factorization: I |= C ∨ A ∨ A ⇒ I |= C ∨ A
ad (i): Assume premises are valid in I. Two cases need to be
considered:
(a) A is valid, or (b) ¬A is valid.
a) I |= A ⇒ I |= D ⇒ I |= C ∨ D
b) I |= ¬A ⇒ I |= C ⇒ I |= C ∨ D
ad (ii): even simpler.
NB: In propositional logic (ground clauses) we have:
1. I |= L1 ∨ . . . ∨ Ln ⇔ there exists i: I |= Li .
2. I |= A or I |= ¬A.
Examples
Natural numbers. (N, >)
Lexicographic orderings. Let (M1 , ≻1 ), (M2 , ≻2 ) be well-founded
orderings. Then let their lexicographic combination
≻ = (≻1 , ≻2 )lex
on M1 × M2 be defined as
(a1 , a2 ) ≻ (b1 , b2 ) :⇔ a1 ≻1 b1 , or a1 = b1 and a2 ≻2 b2 .
Lemma 20
(Mi , ≻i ) well-founded , i = 1, 2 ⇔ (M1 × M2 , (≻1 , ≻2 )lex ) well-founded.
Proof of (i). “⇒”: Suppose (M1 × M2 , ≻), with ≻ = (≻1 , ≻2 )lex , is not
well-founded. Then there is an infinite sequence
Noetherian Induction
Let (M, ≻) be a well-founded ordering.
Theorem 21 (Noetherian Induction)
A property Q(m) holds for all m ∈ M, whenever for all m ∈ M this
implication is satisfied:
if Q(m′ ) holds for all m′ ∈ M such that m ≻ m′ (induction hypothesis),
then Q(m) holds (induction step).
Multi-Sets
Let M be a set. A multi-set S over M is a mapping S : M → N. Hereby
S(m) specifies the number of occurrences of elements m of the base
set M within the multi-set S.
m is called an element of S, if S(m) > 0. We use set notation (∈, ⊂, ⊆,
∪, ∩, etc.) with analogous meaning also for multi-sets, e.g.,
(S1 ∪ S2 )(m) = S1 (m) + S2 (m), for each m in M.
From now on we only consider finite multi-sets.
Example. S = {a, a, a, b, b} is a multi-set over {a, b, c}, where S(a) = 3,
S(b) = 2, S(c) = 0.
Multi-Set Orderings
Definition 22 (≻mul )
S1 ≻mul S2 :⇔ S1 ≠ S2
and ∀m ∈ M : [S2 (m) > S1 (m)
⇒ ∃m′ ∈ M : (m′ ≻ m and S1 (m′ ) > S2 (m′ ))]
Theorem 23
a) ≻mul is a partial ordering.
b) ≻ well-founded ⇒ ≻mul well-founded
c) ≻ total ⇒ ≻mul total
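Definition 22 translates almost literally into code. A sketch, not from the slides, with multi-sets represented as collections.Counter (an assumed encoding):

```python
from collections import Counter

def mul_ext(gt, s1, s2):
    """S1 ≻mul S2 per Definition 22, for the base ordering gt(m1, m2)."""
    if s1 == s2:
        return False
    # Every element that S2 has more copies of must be dominated by some
    # larger element that S1 has more copies of.
    return all(any(gt(mp, m) and s1[mp] > s2[mp] for mp in s1)
               for m in s2 if s2[m] > s1[m])

gt = lambda a, b: a > b            # (N, >) as the base ordering
# e.g. {5, 1} ≻mul {4, 4, 4, 1}: the extra copies of 4 are dominated by 5
```

Removing one big element in exchange for arbitrarily many smaller ones makes the multi-set smaller, which is what makes ≻mul useful for clause orderings.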
Clause Orderings
Example
Example 24
Suppose A5 ≻ A4 ≻ A3 ≻ A2 ≻ A1 ≻ A0 .
Order the following clauses:
¬A1 ∨ ¬A4 ∨ A3
¬A1 ∨ A2
¬A1 ∨ A4 ∨ A3
A0 ∨ A1
¬A5 ∨ A5
A1 ∨ A2
Example
Example 24
Suppose A5 ≻ A4 ≻ A3 ≻ A2 ≻ A1 ≻ A0 .
Then:
A0 ∨ A1
≺ A1 ∨ A2
≺ ¬A1 ∨ A2
≺ ¬A1 ∨ A4 ∨ A3
≺ ¬A1 ∨ ¬A4 ∨ A3
≺ ¬A5 ∨ A5
Theorem 25
1 The orderings on literals and clauses are total and well-founded.
2 Let C and D be clauses with A = max(C), B = max(D), where
max(C) denotes the maximal atom in C.
(i) If A ≻ B then C ≻ D.
(ii) If A = B, A occurs negatively in C but only positively in D, then
C ≻ D.
[Diagram: clauses grouped by their maximal atom. The group of all D with
max(D) = B — clauses of the form . . . ∨ B, . . . ∨ B ∨ B, ¬B ∨ . . . — is
≺-smaller than the group of all C with max(C) = A, which has the same
shape with A in place of B.]
Definition 26
Res(N) = {C | C is concl. of a rule in Res w/ premises in N}
Res0 (N) = N
Resn+1 (N) = Res(Resn (N)) ∪ Resn (N), for n ≥ 0
Res∗ (N) = ⋃n≥0 Resn (N)
Theorem 27
(i) Res∗ (N) is saturated.
(ii) Res is refutationally complete, iff for each set N of ground clauses:
N |= ⊥ ⇔ ⊥ ∈ Res∗ (N)
Construction of Interpretations
Given:
set N of ground clauses, atom ordering ≻.
Wanted:
Herbrand interpretation I such that
• “many” clauses from N are valid in I;
• I |= N, if N is saturated and ⊥ ∉ N.
Construction of Interpretations
Example 28
Let A5 ≻ A4 ≻ A3 ≻ A2 ≻ A1 ≻ A0 (maximal literals were marked in red in the original slides)
clauses C IC ∆C Remarks
1 ¬A0 ∅ ∅ true in IC
2 A0 ∨ A1 ∅ {A1 } A1 maximal
3 A1 ∨ A2 {A1 } ∅ true in IC
4 ¬A1 ∨ A2 {A1 } {A2 } A2 maximal
5 ¬A1 ∨ A4 ∨ A3 ∨ A0 {A1 , A2 } {A4 } A4 maximal
6 ¬A1 ∨ ¬A4 ∨ A3 {A1 , A2 , A4 } ∅ A3 not maximal;
min. counter-ex.
7 ¬A1 ∨ A5 {A1 , A2 , A4 } {A5 }
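The table can be recomputed with a short sketch of the construction (an assumed encoding, not from the slides: atoms are integers ordered by <, a clause is a list of (sign, atom) literals, and clauses are listed in ascending clause order):

```python
def true_in(clause, I):
    """Is the clause true in the interpretation I (a set of atoms)?"""
    return any((at in I) == sg for (sg, at) in clause)

def construct(clauses):
    """Candidate-model construction: IC grows by ∆C for productive clauses."""
    I, deltas = set(), []
    for c in clauses:
        m = max(at for (_, at) in c)             # maximal atom of c
        pos_max = (True, m) in c                 # does it occur positively?
        delta = {m} if (not true_in(c, I)) and pos_max else set()
        deltas.append(delta)
        I |= delta
    return I, deltas

N = [[(False, 0)],                                  # ¬A0
     [(True, 0), (True, 1)],                        # A0 ∨ A1
     [(True, 1), (True, 2)],                        # A1 ∨ A2
     [(False, 1), (True, 2)],                       # ¬A1 ∨ A2
     [(False, 1), (True, 4), (True, 3), (True, 0)], # ¬A1 ∨ A4 ∨ A3 ∨ A0
     [(False, 1), (False, 4), (True, 3)],           # ¬A1 ∨ ¬A4 ∨ A3
     [(False, 1), (True, 5)]]                       # ¬A1 ∨ A5
I, deltas = construct(N)
```

Clause 6 is false in its IC but its maximal atom A4 occurs only negatively, so it produces nothing: the minimal counterexample of the table.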
Structure of N, ≻
Let A ≻ B; producing a new atom does not affect smaller clauses.
[Diagram: as above, clauses are stratified by their maximal atom; the
clauses containing their maximal atom positively are possibly productive.
All D with max(D) = B are ≺-smaller than all C with max(C) = A.]
(i) C = ¬A ∨ C′ ⇒ no D ⪰ C produces A.
(ii) C productive ⇒ IC ∪ ∆C |= C.
(iii) Let D′ ≻ D ⪰ C. Then
ID |= C ⇒ ID′ |= C and IN |= C.
Corollary 35
Theorem 36 (Compactness)
Let N be a set of propositional formulas. Then N unsatisfiable if, and
only if, there exists M ⊆ N, with |M| < ∞, and M unsatisfiable.
Proof. “⇐”: trivial.
Resolving P(f (a)) ∨ P(f (a)) ∨ ¬Q(z) with ¬P(f (a)) yields ¬Q(z) (with
implicit factorization); resolving ¬P(g(b, x)) with P(g(b, x)) ∨ Q(x)
yields Q(x). Instantiating with [a/z] resp. [a/x] gives the complementary
ground literals ¬Q(a) and Q(a).
Lifting Principle
Problem: Make saturation of infinite sets of clauses as they arise
from taking the (ground) instances of finitely many
general clauses (with variables) effective and efficient.
Idea (Robinson 65): • Resolution for general clauses
• Equality of ground atoms is generalized to unifiability
of general atoms
• Only compute most general (minimal) unifiers
Significance: The advantage of the method in (Robinson 65)
compared with (Gilmore 60) is that unification enumerates
only those instances of clauses that participate in an
inference. Moreover, clauses are not right away
instantiated into ground clauses. Rather they are
instantiated only as far as required for an inference.
Inferences with non-ground clauses in general represent
infinite sets of ground inferences which are computed
simultaneously in a single step.
General Resolution Lifting Principle
C∨A∨B
(C ∨ A)σ
if σ = mgu(A, B) [factorization]
General resolution RIF with implicit factorization:
C ∨ A1 ∨ . . . ∨ An D ∨ ¬B
(C ∨ D)σ
if σ = mgu(A1 , . . . , An , B)
We additionally assume that the variables in one of the two premises
of the resolution rule are (bijectively) renamed such that they become
different from any variable in the other premise. We do not formalize
this. Which names one uses for variables is otherwise irrelevant.
General Resolution Unification
Unification
Definition 37
Let E = {s1 ≐ t1 , . . . , sn ≐ tn } (si , ti terms or atoms) be a multi-set of
equality problems. A substitution σ is called a unifier of E :⇔
∀1 ≤ i ≤ n : si σ = ti σ.
If a unifier exists, E is called unifiable. If a unifier of E is more general
than any other unifier of E, then we speak of a most general unifier
(mgu) of E. Hereby a substitution σ is called more general than a
substitution τ , written σ ≤ τ , :⇔ there exists a substitution ρ such that
ρ ◦ σ = τ.
Theorem 38
(Exercise)
(i) ≤ is a quasi-ordering on substitutions, and ◦ is associative.
(ii) If σ ≤ τ and τ ≤ σ (we write σ ∼ τ in this case), then xσ and xτ are
equal up to (bijective) variable renaming, for any x in X.
Theorem 41
1 If E ⇒MM E′ then σ is a unifier of E iff σ is a unifier of E′ .
2 If E ⇒∗MM ⊥ then E is not unifiable.
3 If E ⇒∗MM E′ , with E′ a solved form, then σE′ is an mgu of E.
Proof of (1). We have to show this for each of the rules; let us treat the case of ...
Theorem 42
E unifiable ⇔ there exists a most general unifier σ of E, such that σ is
idempotent and dom(σ) ∪ codom(σ) ⊆ var(E).
Notation: σ = mgu(E)
Problem: exponential growth of terms possible
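A sketch of syntactic unification in the style of the ⇒MM rules (decompose, orient, clash, occur check), with an assumed encoding: variables are strings, compound terms are tuples headed by the function symbol. The naive substitution application is exactly what can cause the exponential growth just mentioned.

```python
def occurs(x, t):
    return t == x or (isinstance(t, tuple) and any(occurs(x, a) for a in t[1:]))

def apply(sigma, t):
    """Apply sigma to t, resolving bindings through chains of variables."""
    if isinstance(t, str):
        return apply(sigma, sigma[t]) if t in sigma else t
    return (t[0],) + tuple(apply(sigma, a) for a in t[1:])

def unify(eqs):
    """mgu of a list of equations (s, t), or None if not unifiable."""
    sigma, eqs = {}, list(eqs)
    while eqs:
        s, t = eqs.pop()
        s, t = apply(sigma, s), apply(sigma, t)
        if s == t:
            continue                          # trivial equation: delete
        if isinstance(s, str):
            if occurs(s, t):
                return None                   # occur check: not unifiable
            sigma[s] = t                      # solve s ≐ t
        elif isinstance(t, str):
            eqs.append((t, s))                # orient
        elif s[0] == t[0] and len(s) == len(t):
            eqs.extend(zip(s[1:], t[1:]))     # decompose
        else:
            return None                       # clash of function symbols
    return sigma

# h(x, x, y, z) ≐ h(g(y), g(g(z)), g(g(a)), g(a)); 'a' encoded as ('a',)
lhs = ('h', 'x', 'x', 'y', 'z')
rhs = ('h', ('g', 'y'), ('g', ('g', 'z')), ('g', ('g', ('a',))), ('g', ('a',)))
sigma = unify([(lhs, rhs)])
```

This solves the example problem with x ↦ g(g(g(a))), y ↦ g(g(a)), z ↦ g(a); the linear-time algorithms cited below avoid the term copying by working on shared dags.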
Lifting Lemma
Lemma 43
Let C and D be variable-disjoint clauses. If Cσ and D̺ have a
propositional resolution inference with conclusion C′ , then there exists
a substitution τ and a general resolution inference from C and D with
conclusion C′′ such that C′ = C′′ τ .
Same for factorization.
General Resolution Saturation of Sets of General Clauses
Herbrand’s Theorem
Theorem 45 (Herbrand)
Theorem 46 (Löwenheim-Skolem)
Theorem 47
Let N be a set of general clauses where Res(N) ⊆ N. Then
N |= ⊥ ⇔ ⊥ ∈ N.
Proof. Let Res(N) ⊆ N. By Corollary 44: Res(GΣ (N)) ⊆ GΣ (N)
Complexity of Unification
Literature:
1. Paterson, Wegman: Linear Unification, JCSS 17, 348-375 (1978)
2. Dwork, Kanellakis, Mitchell: On the sequential nature of
unification, Journal Logic Prog. 1, 35-50 (1984)
3. Baader, Nipkow: Term Rewriting and All That. Cambridge U. Press
1998, Chapter 4.8
Theorem 49 (Paterson, Wegman 1978)
Unifiability is decidable in linear time. A most general unifier can be
computed in linear time.
[Figures (d)–(f): term dags for unification; equal function-symbol nodes
with numbered argument edges are shared, equivalence edges are
propagated to corresponding arguments, and a clash of distinct symbols
is marked as a conflict.]
Derived equations are propagated transitively: u ≐ v, v ≐ w ⇒ u ≐ w.
The resulting equivalence relation on the dag must be acyclic for E to be
unifiable.
(Otherwise a term would have to be unified with a proper subterm of itself.)
Another Example
problem: h(x, x, y, z) ≐ h(g(y), g(g(z)), g(g(a)), g(a))
after propagation:
[Figure: the shared term dag after propagation, with
x ≐ g(y) ≐ g(g(z)), y ≐ g(g(a)), and z ≐ g(a).]
Analysis
Matching
Motivation: Search space for Res very large. Idea for improvement:
1. In the completeness proof (Model Existence Theorem 34) one only
needs to resolve and factor maximal atoms ⇒ order restrictions
2. Choice of negative literals is don’t-care ⇒ selection
A selection function is a mapping S : C ↦ set of occurrences of
negative literals in C. Example (selected literals were underlined in the
original slides):
¬A ∨ ¬A ∨ B
¬B0 ∨ ¬B1 ∨ A
C∨A∨B
(C ∨ A)σ
[ordered factoring]
if σ = mgu(A, B) and Aσ is maximal wrt. Cσ and nothing is selected in
C.
Ordered Resolution with Selection
1) A∨B
2) A ∨ ¬B
3) ¬A ∨ B
4) ¬A ∨ ¬B
5) B ∨ B (1 & 3)
6) B (5)
7) ¬A (6 & 4)
8) A (6 & 2)
9) ⊥ (8 & 7)
We assume A ≻ B and S as indicated (in the original slides selected
literals were marked and the maximal atom in a clause was depicted in
red).
With this ordering and selection function the refutation proceeds strictly
deterministically in this example. Generally, proof search will still be
non-deterministic but the search space will be much smaller than with
unrestricted resolution.
another proof of the same clause. In large proofs many rotations are
possible. However, if A ≻ B, then the second proof does not fulfill the
orderings restrictions.
Conclusion: In the presence of orderings restrictions (however one
chooses ≻) no rotations are possible. In other words, orderings identify
exactly one representative in any class of rotation-equivalent proofs.
Let C and D be variable-disjoint clauses. If Cσ and Dρ have a
propositional inference in Res≻S with conclusion C′ , and
S(Cσ) ≃ S(C), S(Dρ) ≃ S(D) (that is, “corresponding” literals are
selected), then there exists a substitution τ and an inference in Res≻S
from C and D with conclusion C′′ such that C′ = C′′ τ .
Res≻S′ (GΣ (N)) ⊆ GΣ (N).
Theorem 54
Let ≻ be an atom ordering and S a selection function such that
Res≻S (N) ⊆ N. Then
N |= ⊥ ⇔ ⊥ ∈ N
Proof. “⇐”: trivial.
“⇒”:
(i) propositional level: construction of a candidate model IN as for
unrestricted resolution, except that clauses C in N that have selected
literals are not productive, even when they are false in IC and when
their maximal atom occurs only once and positively.
(ii) general clauses: (i) + Corollary 53.
Craig-Interpolation
A theoretical application of ordered resolution is Craig-Interpolation:
Theorem 55 (Craig 57)
Let F and G be two propositional formulas such that F |= G. Then
there exists a formula H (called the interpolant for F |= G), such that H
contains only prop. variables occurring both in F and in G, and such
that F |= H and H |= G.
Proof. Translate F and ¬G into CNF; let N and M, resp., denote the
resulting clause set. Choose an atom ordering ≻ for which the prop. variables
that occur in F but not in G are maximal. Saturate N into N ∗ wrt. Res≻S
with an empty selection function S. Then saturate N ∗ ∪ M wrt. Res≻S to
derive ⊥. As
N ∗ is already saturated, due to the ordering restrictions only inferences need
to be considered where premises, if they are from N ∗ , only contain symbols
that also occur in G. The conjunction of these premises is an interpolant H.
Craig-Interpolation
Resolution Prover RP
Backward reduction
N | P ∪ {C ∨ L} | O ⊲ N | P ∪ {C} | O
N | P | O ∪ {C ∨ L} ⊲ N | P ∪ {C} | O
if there exists D ∨ L′ ∈ N such that L̄ = L′ σ and Dσ ⊆ C (where L̄ is
the complement of L)
Clause processing
N ∪ {C} | P | O ⊲ N | P ∪ {C} | O
Inference computation
∅ | P ∪ {C} | O ⊲ N | P | O ∪ {C}, with N = Res≻S (O ∪ {C})
Theorem 57
N |= ⊥ ⇔ N | ∅ | ∅ ⊲∗ N ′ ∪ {⊥} | _ | _
Proof in
L. Bachmair, H. Ganzinger: Resolution Theorem Proving
appeared in the Handbook on Automated Theorem Proving, 2001
Basis for the completeness proof is a formal notion of redundancy as
defined subsequently.
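The transition rules can be read as a given-clause loop. Below is a minimal ground sketch, not the full RP of the slides: only two clause sets instead of the N | P | O triple, subsumption as the sole redundancy check, and no ordering or selection restrictions.

```python
def resolvents(c, d):
    """Ground resolvents of clauses c, d (frozensets of (sign, atom))."""
    return {frozenset((c - {(sg, at)}) | (d - {(not sg, at)}))
            for (sg, at) in c if (not sg, at) in d}

def rp(clauses):
    """Given-clause loop: True iff ⊥ is derived."""
    new, processed = set(clauses), set()      # "new" plays N ∪ P, "processed" plays O
    while new:
        given = new.pop()
        if given == frozenset():
            return True                       # derived ⊥
        if any(d <= given for d in processed):
            continue                          # subsumed: redundant, delete
        processed.add(given)
        for d in processed:                   # infer against all processed clauses
            new |= resolvents(given, d) - processed
    return False                              # saturated up to redundancy, no ⊥

Q = ('Q(b)',)
N = [frozenset({(False, 'P'), (True, 'Q')}),
     frozenset({(True, 'P'), (True, 'Q')}),
     frozenset({(False, 'Pg'), (False, 'Q')}),
     frozenset({(True, 'Pg')})]
```

Deleting subsumed clauses is justified precisely by the redundancy results above: redundant clauses are never needed as premises of essential inferences.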
Examples of Redundancy
Theorem 58
Saturation up to Redundancy
N is called saturated up to redundancy (wrt. Res≻S )
:⇔ Res≻S (N \ Red(N)) ⊆ N ∪ Red(N)
Theorem 59
Let N be saturated up to redundancy. Then
N |= ⊥ ⇔ ⊥ ∈ N
Proof (Sketch).
(i) Ground case:
• consider the construction of the candidate model I≻N for Res≻S
• redundant clauses are not productive
• redundant clauses in N are not minimal counterexamples for IN≻
The premises of “essential” inferences are either minimal
counterexamples or productive.
(ii) Lifting: no additional problems over the proof of Theorem 54.
Resolution Prover RP Redundancy
Theorem 60
(i) N ⊆ M ⇒ Red(N) ⊆ Red(M)
(ii) M ⊆ Red(N) ⇒ Red(N) ⊆ Red(N \ M)
Proof: Exercise.
We conclude that redundancy is preserved when, during a theorem
proving process, one adds (derives) new clauses or deletes redundant
clauses.
Theorems 59 and 60 are the basis for the completeness proof of
our prover RP.
Hyperresolution (cont’d)