
A Machine Checked Proof that Ackermann's Function is not Primitive Recursive

Nora Szasz
Department of Computer Sciences
Chalmers University of Technology and University of Göteborg

1 Introduction
In logic, many different languages have been suggested for the formalization of mathematics. Some of these languages are of interest to computer science, since they might be used as programming logics. As a consequence, editors and automatic tools for using such formalized languages have been developed. This work is an experiment with proposals arising from both these fields: Martin-Löf's set theory [ML84] as a foundational framework for mathematics, and ALF (Another Logical Framework) [ACN90] as editor and checker of a basic formalized language in which the former can be expressed. The example we chose is the proof that Ackermann's function is not primitive recursive. The proof is mathematically simple in the sense that it only uses induction and some basic properties of addition and of the order on the natural numbers. The result obviously also involves the definition of the primitive recursive functions. This definition makes critical use of the notion of tuples and of tupling of functions, which is not usually considered in standard presentations. The formalization of these two notions constituted a major problem, and finally led to a generalization of the definition of primitive recursive functions to functions of type N^n → N^m. In section 2 we extend the set of primitive recursive functions to internalize the notion of tuples of functions, and we prove an extension of Ackermann's result for these functions. In section 3 we present the formalization of this proof in Martin-Löf's set theory and explain how it was written and structured using ALF. In section 4 we make some comments about the development of the proof and design decisions. The completely formalized proof can be found in [Sza91].

This work is a shorter version of the thesis with the same name, presented for the Licentiate Degree in Computer Science at Chalmers University of Technology and University of Göteborg, Sweden, 1991.

2 Primitive Recursive Functions, t-Primitive Recursive Functions and Ackermann's Result


In the first part of this section, we will define the set of primitive recursive functions. Then we will define Ackermann's function and show how the proof that it grows faster than any primitive recursive function is carried out. In the second part of the section, the definition of primitive recursive functions will be extended to include functions yielding values in N^n for n > 0. This will be done by adding an appending operator for functions to the set of operators defining the primitive recursive functions. For this set of functions (called t-primitive recursive functions) we will state and prove an extension of Ackermann's result. A detailed proof is given, although it is just a simple generalization of the original one, which is available in many textbooks on recursive function theory (see [Her65]). The structure of the proof presented here will be used later in the description of the formalization.

2.1 Primitive Recursive Functions and Ackermann's Result

The primitive recursive functions are built by first considering the most fundamental functions over the natural numbers: the constant 0 function and the successor function. Then come the projections, which select elements of tuples. On top of these, a principle of definition of functions by recursion on one of the arguments is added, and finally the class is closed under composition.

Definition 1. A function f : N^n → N is primitive recursive if and only if it is

1. Z, the constant function 0 of zero arguments, defined by Z() = 0,
2. the successor function S, defined by S(x) = x+1,
3. one of the projections π^i_n, defined by π^i_n(x1, ..., xn) = xi, for 0 < i ≤ n,
4. defined by composition of primitive recursive functions h : N^m → N and g1, ..., gm : N^n → N, that is, f(x1, ..., xn) = h(g1(x1, ..., xn), ..., gm(x1, ..., xn)),
5. defined by primitive recursion from primitive recursive functions g : N^{n-1} → N (n > 0) and h : N^{n+1} → N, by

   f(0, x2, ..., xn) = g(x2, ..., xn)
   f(k+1, x2, ..., xn) = h(f(k, x2, ..., xn), k, x2, ..., xn).

This set of functions describes a large variety of the functions used in mathematics. They are computable and total, but not all computable total functions belong to this set. Ackermann defined [Ack28] a function which is computable and total, but not primitive recursive. Let us now define¹ a family Ack_n : N → N (n ∈ N) of primitive recursive functions by:

  Ack_0 = S
  Ack_{n+1}(0) = Ack_n(1)
  Ack_{n+1}(y+1) = Ack_n(Ack_{n+1}(y))

It is easy to show by induction that for all n ∈ N, Ack_n is primitive recursive. Now define A : N^2 → N by A(x, y) = Ack_x(y). This function is computable and total (because every Ack_n is) but not primitive recursive. This is proved by first showing that for any primitive recursive function f : N^n → N there exists a constant k_f such that f(x1, ..., xn) < A(k_f, Σ_{i=1}^n xi) for all x1, ..., xn ∈ N, which is done by structural induction on the definition of primitive recursive functions. Then, assuming A primitive recursive yields a contradiction: if A is primitive recursive, so is the function F : N → N defined by F(x) = A(x, x), and then F(x) < A(k_F, x) holds for all x ∈ N; in particular, for x = k_F we get A(k_F, k_F) = F(k_F) < A(k_F, k_F).
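For readers who want to experiment, the family above is easy to run. The following is a minimal Haskell sketch, not part of the original formalization; the names ack, a and main are ours, and the function is the simplified (Péter) variant defined above, not Ackermann's original three-argument function.

```haskell
-- ack n y corresponds to Ack_n(y); a x y corresponds to A(x, y) = Ack_x(y).
ack :: Integer -> Integer -> Integer
ack 0 y = y + 1                        -- Ack_0 = S
ack n 0 = ack (n - 1) 1                -- Ack_{n+1}(0)   = Ack_n(1)
ack n y = ack (n - 1) (ack n (y - 1))  -- Ack_{n+1}(y+1) = Ack_n(Ack_{n+1}(y))

a :: Integer -> Integer -> Integer
a x y = ack x y

-- A few small values; a 3 y already grows like 2^(y+3) - 3.
main :: IO ()
main = print [ a x y | x <- [0 .. 3], y <- [0 .. 3] ]
```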

In every formalization we need to consider details which would normally be ignored in informal mathematics. A main point in the definition of the primitive recursive functions is the notion of tuples, on which some operations strongly depend: when considering the definition of a function f by composition of h and g1, ..., gm, there is an implicit operation of tupling performed over g1, ..., gm, i.e. f is defined as the composition of two functions, h and (g1, ..., gm), where (g1, ..., gm) is the function defined by (g1, ..., gm)(t) = (g1(t), ..., gm(t)), for t ∈ N^n. So, in order to be uniform in the treatment of the objects we are dealing with, we must internalize the notion of tuples and tuples of functions. We will do this by extending the primitive recursive functions to functions in N^n → N^m, for n, m ∈ N. Instead of adding a tupling operator for functions (as described above), we will add an appending operator for functions, which is based on the appending operation ⟨·, ·⟩ for tuples. The appending operator for functions has the same effect for our purposes as a tupling one, but it is simpler in the sense that it takes a constant number of functions (two) to yield a new one (see section 4.1). This motivates the following definition:
¹ The function to be defined is not Ackermann's original function, but a simplification of it that preserves its major properties [Pet35].

2.2 Tuples of Primitive Recursive Functions and the Extension of Ackermann's Result

Definition 2. A function f : N^n → N^m (m > 0) is t-primitive recursive if and only if it is

1. Z, the constant function 0 of zero arguments, defined by Z() = 0,
2. the successor function S, defined by S(x) = x+1,
3. one of the projections π^i_n, defined by π^i_n(x1, ..., xn) = xi for 0 < i ≤ n,
4. defined by composition of t-primitive recursive functions h : N^k → N^m and g : N^n → N^k, that is, f(x1, ..., xn) = h(g(x1, ..., xn)),
5. defined by primitive recursion from t-primitive recursive functions g : N^{n-1} → N^m (n > 0) and h : N^{n+m} → N^m, by:

   f(0, x2, ..., xn) = g(x2, ..., xn)
   f(k+1, x2, ..., xn) = h(⟨f(k, x2, ..., xn), (k, x2, ..., xn)⟩)

6. defined by appending t-primitive recursive functions g : N^n → N^{m1} and h : N^n → N^{m2} such that m1+m2 = m, that is, f(x1, ..., xn) = ⟨g(x1, ..., xn), h(x1, ..., xn)⟩.
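As an executable illustration of Definition 2 (a hedged sketch of ours, not the ALF formalization described in section 3), the codes of t-primitive recursive functions can be represented as a Haskell datatype and given meaning on tuples modelled as integer lists. Arity constraints are not tracked here; the formal development instead indexes the codes by domain and codomain arities.

```haskell
-- Codes for t-primitive recursive functions (Definition 2), untyped sketch.
data TPR
  = Z            -- constant 0 of zero arguments
  | S            -- successor
  | Proj Int     -- i-th projection (1-based)
  | Comp TPR TPR -- composition: Comp h g means h after g
  | Pr   TPR TPR -- primitive recursion from g and h
  | App  TPR TPR -- appending of g and h
  deriving Show

-- Meaning of a code as a function on tuples (lists of naturals).
eval :: TPR -> [Integer] -> [Integer]
eval Z          _        = [0]
eval S          [x]      = [x + 1]
eval (Proj i)   xs       = [xs !! (i - 1)]
eval (Comp h g) xs       = eval h (eval g xs)
eval (Pr g _)   (0 : xs) = eval g xs
eval (Pr g h)   (k : xs) = eval h (eval (Pr g h) (k - 1 : xs) ++ (k - 1 : xs))
eval (App g h)  xs       = eval g xs ++ eval h xs
eval _          _        = error "arity mismatch"

-- Example: addition as a t-primitive recursive code, f(x, y) = x + y.
addCode :: TPR
addCode = Pr (Proj 1) (Comp S (Proj 1))
-- eval addCode [3, 4] == [7]
```

The Pr clause mirrors equation 5: at k+1 it applies h to the value at k appended with the tuple (k, x2, ..., xn).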

The function A can be defined in the same way as before, and an extension of Ackermann's result for t-primitive recursive functions can be formulated, by measuring the result of applying a function to a tuple by the sum of the tuple's components. So, we define the norm of a tuple (x1, ..., xn) to be:

  ‖(x1, ..., xn)‖ = Σ_{i=1}^{n} xi.

Corresponding to Ackermann's result for t-primitive recursive functions is the following:

Theorem. For any t-primitive recursive function f : N^n → N^m, there exists a constant k_f such that, for all t ∈ N^n:

  ‖f(t)‖ < A(k_f, ‖t‖).


Then, as before, assuming that A is t-primitive recursive will yield a contradiction. In order to prove this theorem, we need to prove the following properties of A:

For all x, y ∈ N:

A1. A(0, y) = y+1.
Proof. By definition of A.

A2. A(x+1, 0) = A(x, 1).
Proof. By definition of A.

A3. A(x+1, y+1) = A(x, A(x+1, y)).
Proof. By definition of A.

A4. y < A(x, y).
Proof. By induction on x: For x = 0, we have that for all y ∈ N, y < y+1 = A(0, y). Suppose that y < A(x, y) holds for all y ∈ N (IH1). Then y < A(x+1, y) for all y, by induction on y: For the base case, we have 0 < 1 < A(x, 1) (by IH1), but A(x, 1) = A(x+1, 0), and hence 0 < A(x+1, 0). Suppose now that y < A(x+1, y) (IH2). Then A(x+1, y) < A(x, A(x+1, y)) (by IH1 for A(x+1, y)), hence y+1 ≤ A(x+1, y) < A(x, A(x+1, y)) = A(x+1, y+1).

A5. For all x, y1, y2 ∈ N, if y1 < y2, then A(x, y1) < A(x, y2). (Monotonicity in the second argument.)
Proof. It is enough to show that A(x, y) < A(x, y+1) (*). Then the monotonicity follows by induction on the difference between y1 and y2. We prove (*) by induction on x: For x = 0 we have A(0, y) = y+1 < A(0, y+1), by A4. Else, also by A4, A(x+1, y) < A(x, A(x+1, y)) = A(x+1, y+1).

A6. A(x, y+1) ≤ A(x+1, y).
Proof. By induction on y: For y = 0, we have that A(x, 1) = A(x+1, 0). Suppose now A(x, y+1) ≤ A(x+1, y) for all x ∈ N (IH). Then y+1 < A(x, y+1), by A4, and so y+2 ≤ A(x, y+1) ≤ A(x+1, y) (by IH). Finally, by A5, A(x, y+2) ≤ A(x, A(x+1, y)) = A(x+1, y+1).

A7. For all x1, x2, y ∈ N, if x1 < x2, then A(x1, y) < A(x2, y). (Monotonicity in the first argument.)
Proof. It is enough to show that A(x, y) < A(x+1, y) and prove the claim by induction on the difference between x1 and x2. This follows from A(x, y) < A(x, y+1), by A5, and A(x, y+1) ≤ A(x+1, y), by A6.

A8. A(1, y) = y+2.
Proof. By induction on y.

A9. A(2, y) = 2y+3.
Proof. By induction on y, using A8.

A10. For all x1, x2 ∈ N, there exists k*_{x1,x2} ∈ N such that A(x1, A(x2, y)) < A(k*_{x1,x2}, y) for all y ∈ N.
Proof.
  A(x1, A(x2, y)) < A(x1+x2, A(x1+x2+1, y))   (by monot. of A)
                  = A(x1+x2+1, y+1)           (by A3)
                  ≤ A(x1+x2+2, y)             (by A6)
So, we can choose k*_{x1,x2} to be x1+x2+2.

A11. For all x1, x2 ∈ N, there exists k+_{x1,x2} ∈ N such that A(x1, y)+A(x2, y) < A(k+_{x1,x2}, y) for all y ∈ N.
Proof.
  A(x1, y)+A(x2, y) ≤ 2·A(x1+x2, y)           (by monot. of A)
                    < 2·A(x1+x2, y)+3
                    = A(2, A(x1+x2, y))       (by A9)
                    < A(k*_{2,x1+x2}, y)      (by A10)
So, taking k+_{x1,x2} = k*_{2,x1+x2}, the proposition holds.

A12. For all x, y ∈ N, if k ∈ N is such that x < A(k, y), then there exists k' ∈ N such that x+y < A(k', y).
Proof. y < A(0, y) by A4, and so x+y < A(k, y)+A(0, y) < A(k+_{k,0}, y) (by A11). Hence, we can take k' = k+_{k,0}.
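The growth and monotonicity lemmas above are also easy to spot-check computationally. The following hedged Haskell snippet (our own, re-declaring the simplified Ackermann function from the sketch in section 2.1) tests A4, A5, A6, A8 and A9 on a small finite range; it is of course no substitute for the inductive proofs.

```haskell
-- Simplified Ackermann function, as in the earlier sketch.
a :: Integer -> Integer -> Integer
a 0 y = y + 1
a x 0 = a (x - 1) 1
a x y = a (x - 1) (a x (y - 1))

-- Finite spot checks of A4, A5 (step case), A6, A8 and A9.
checks :: Bool
checks = and
  [ and [ y < a x y                  | x <- rx, y <- ry ]  -- A4
  , and [ a x y < a x (y + 1)        | x <- rx, y <- ry ]  -- A5, step case
  , and [ a x (y + 1) <= a (x + 1) y | x <- rx, y <- ry ]  -- A6
  , and [ a 1 y == y + 2             | y <- ry ]           -- A8
  , and [ a 2 y == 2 * y + 3         | y <- ry ]           -- A9
  ]
  where rx = [0 .. 2]
        ry = [0 .. 4]

main :: IO ()
main = print checks
```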

Now we give the complete proof of the theorem, which is by induction on the definition of t-primitive recursive functions. For the basic functions:

  ‖Z()‖ = 0 < 1 = A(0, 0) = A(0, ‖()‖),
  ‖S(x)‖ = x+1 < x+2 = A(1, x) = A(1, ‖(x)‖),
  ‖π^i_n(x1, ..., xn)‖ = xi ≤ ‖(x1, ..., xn)‖ < A(0, ‖(x1, ..., xn)‖), by A4.

For the induction steps, we assume the corresponding induction hypothesis in each case. Let f be defined by composition of h and g, and let t ∈ N^n. Then

  ‖h(g(t))‖ < A(k_h, ‖g(t)‖)          (by ind. hyp. for h)
            < A(k_h, A(k_g, ‖t‖))     (by ind. hyp. for g and monot. of A)
            < A(k*_{k_h,k_g}, ‖t‖)    (by A10)

For functions defined by primitive recursion, a stronger result will be proved: if f is defined by primitive recursion from g and h, then there exists a constant k'_f such that, for all t ∈ N^n:

  ‖f(t)‖ + ‖t‖ < A(k'_f, ‖t‖)    (1)

By the induction hypothesis, we know that there exist k_g and k_h such that the theorem holds for g and h respectively, and so, by A12, there exist k'_g and k'_h such that (1) holds for g and h also. We will prove that (1) holds by taking k'_f = k'_g + k'_h + 1. Let t = (x, t1) ∈ N^n (n > 0). The proof is by induction on x.

For x = 0 we have:

  ‖f(0, t1)‖ + ‖(0, t1)‖ = ‖g(t1)‖ + ‖t1‖       (by def. of f and ‖·‖)
                         < A(k'_g, ‖t1‖)         (by ind. hyp. for g)
                         < A(k'_f, ‖t1‖)         (by monot. of A and def. of k'_f)
                         = A(k'_f, ‖(0, t1)‖)    (by def. of ‖·‖)

Assume now that ‖f(x, t')‖ + ‖(x, t')‖ < A(k'_f, ‖(x, t')‖) holds for all tuples of the form (x, t') (IH). Then we have that

  ‖f(x+1, t1)‖ + ‖(x+1, t1)‖ = ‖h(f(x, t1), x, t1)‖ + ‖(x, t1)‖ + 1            (by def. of f and ‖·‖)
                             ≤ ‖h(f(x, t1), x, t1)‖ + ‖(f(x, t1), x, t1)‖ + 1  (by def. of ‖·‖)
                             < A(k'_h, ‖(f(x, t1), x, t1)‖) + 1                (by ind. hyp. for h)

and so,

  ‖f(x+1, t1)‖ + ‖(x+1, t1)‖ ≤ A(k'_h, ‖(f(x, t1), x, t1)‖)
                             = A(k'_h, ‖f(x, t1)‖ + ‖(x, t1)‖)                 (by def. of ‖·‖)
                             < A(k'_h, A(k'_f, ‖(x, t1)‖))                     (by IH and monot. of A)
                             ≤ A(k'_f − 1, A(k'_f, ‖(x, t1)‖))                 (by monot. of A and def. of k'_f)
                             = A(k'_f, ‖(x, t1)‖ + 1)                          (by A3)
                             = A(k'_f, ‖(x+1, t1)‖)                            (by def. of ‖·‖)

Finally, if f is obtained by appending g and h, then

  ‖⟨g(t), h(t)⟩‖ = ‖g(t)‖ + ‖h(t)‖               (by def. of ‖·‖)
                 < A(k_g, ‖t‖) + A(k_h, ‖t‖)     (by ind. hyp. for g and h)
                 < A(k+_{k_h,k_g}, ‖t‖)          (by A11)

The proof of the theorem is now complete.

To conclude, let us see that the original result for primitive recursive functions can be obtained from this one. The set of primitive recursive functions corresponds to the subset of t-primitive recursive functions of type N^n → N^1. The basic functions are the same for both sets, and the instance of the primitive recursion schema for t-primitive recursive functions when m = 1 is exactly the schema for primitive recursive functions. In order to define a function f : N^n → N by composition of h : N^m → N and g1, ..., gm : N^n → N, a new function G : N^n → N^m must be defined by appending the functions g1, ..., gm (which is done in m−1 steps), and then f is defined by composition of h and G. Now, by the definition of ‖·‖, the result for t-primitive recursive functions, when instantiated to functions in the subset N^n → N^1, is the same as the result for primitive recursive functions.
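To make the last step concrete, here is a hedged continuation of the earlier Haskell sketch (reusing the TPR type and eval from the sketch in section 2.2; the names tupleOf and classicComp are ours): the function G = (g1, ..., gm) is built by m−1 applications of App, and the classical composition is then a single Comp.

```haskell
-- Build (g1, ..., gm) by m-1 appending steps (requires a non-empty list).
tupleOf :: [TPR] -> TPR
tupleOf = foldr1 App

-- Classical composition f = h o (g1, ..., gm), expressed with Comp and App.
classicComp :: TPR -> [TPR] -> TPR
classicComp h gs = Comp h (tupleOf gs)
```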

3 A Complete Formalization of the Proof in Martin-Löf's Set Theory


In this section, we will present a complete formalization of the proof given above for t-primitive recursive functions. The formalization is done in Martin-Löf's set theory. The proof was written in an implementation of the (monomorphic) set theory in ALF [ACN90], which was also used to edit and check the proof. A complete presentation of Martin-Löf's set theory can be found in [NPS90]. We will start by recalling some notational conventions. Then we will show how we formalized the proof in Martin-Löf's set theory. We will first present the definitions of the order of the natural numbers, the set of tuples of natural numbers, and the set of codes of t-primitive recursive functions. Then we will define some operations over the tuples which will be used to define the t-primitive recursive functions, and mention some lemmas about properties of the operations and propositions defined. Finally, we will explain how the proof was written and structured using ALF.

3.1 Notational Conventions

We refer to [NPS90] concerning Martin-Löf's set theory. We will use the same names as appear there for sets, constants, and rules. A proposition is identified with the set of its proofs. So we will write A prop when we want to see the set A as a proposition, and will write A true whenever we know that there is an element a ∈ A. Assumptions are of the form x ∈ A, where x is a variable and A is a set, and they are written [x ∈ A]. The syntax we will use for expressions is the following: if e is an expression and x a variable, then λx.e is the abstraction of x from e. The application of an expression f to another expression a is written f(a).

Successive abstractions and applications will be denoted λx.y.e and f(a, b) respectively, instead of the corresponding λx.(λy.e) and f(a)(b). In the case of the non-canonical constants, we will change the order of the arguments, putting the argument corresponding to the eliminated element last (instead of putting it in the first place). We abbreviate the proposition Id(A, a, b) by a =_A b. Rules have the form:

    P1    P2    ...    Pn
    ----------------------
              C

where the premises P1, P2, ..., Pn and the conclusion C are (possibly hypothetical) judgements. In the formalization of the proof, we will define some new inductive families of sets and propositions. We will first introduce the order of the natural numbers, which will be defined as a binary relation. Then we will define two more families of sets: the tuples of natural numbers, and the t-primitive recursive functions. Justifications of the rules are straightforward and can be given in the same way as for the basic part of the theory, as shown in [ML84] and [NPS90]. These inductive definitions are all of a simple kind, and follow the patterns in [CP90, Dyb90]. Hence, elimination and equality rules can be deduced automatically from the formation and introduction rules.

3.2 Basic Sets and Relations

Order

To define the order of the natural numbers we will first define an inductive family of sets Less by:

Less-formation:

    n ∈ N    m ∈ N
    ------------------
    Less(n, m) Set

Less-introduction 1:

    m ∈ N
    -------------------------------
    z-Less(m) ∈ Less(0, succ(m))

Less-introduction 2:

    n ∈ N    m ∈ N    p ∈ Less(n, m)
    ------------------------------------------
    s-Less(n, m, p) ∈ Less(succ(n), succ(m))

Less-elimination:

    C(x, y, z) Set  [x, y ∈ N, z ∈ Less(x, y)]
    zL(y) ∈ C(0, succ(y), z-Less(y))  [y ∈ N]
    sL(x, y, z, u) ∈ C(succ(x), succ(y), s-Less(x, y, z))  [x, y ∈ N, z ∈ Less(x, y), u ∈ C(x, y, z)]
    n ∈ N    m ∈ N    p ∈ Less(n, m)
    ---------------------------------------
    LessE(zL, sL, n, m, p) ∈ C(n, m, p)

We will not state the equality rules, which are straightforward from the elimination and introduction rules. With this definition, we can define the proposition < by: n < m ≡ Less(n, m). From this we obtain the rules for < by forgetting the proof elements:

<-formation:

    n ∈ N    m ∈ N
    ------------------
    n < m prop

<-introduction 1:

    m ∈ N
    --------------------
    0 < succ(m) true

<-introduction 2:

    n ∈ N    m ∈ N    n < m true
    --------------------------------
    succ(n) < succ(m) true

<-elimination:

    C(x, y) prop  [x, y ∈ N, x < y true]
    C(0, succ(y)) true  [y ∈ N]
    C(succ(x), succ(y)) true  [x, y ∈ N, x < y true, C(x, y) true]
    n ∈ N    m ∈ N    n < m true
    --------------------------------
    C(n, m) true

Finally, we define ≤ by: n ≤ m ≡ (n < m ∨ n =_N m), which is a proposition under the assumptions n, m ∈ N.
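As an aside of ours (not part of the paper), the inductive family Less has a direct counterpart as a Haskell GADT, which may help readers more familiar with that setting: the constructors correspond to the two introduction rules, and pattern matching plays the role of Less-elimination.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

data Nat = Zero | Succ Nat

-- Less n m is inhabited exactly when n < m; compare the rules above.
data Less :: Nat -> Nat -> Type where
  ZLess :: Less 'Zero ('Succ m)                  -- z-Less(m)     : Less(0, succ m)
  SLess :: Less n m -> Less ('Succ n) ('Succ m)  -- s-Less(n,m,p) : Less(succ n, succ m)

-- A closed proof object, for example of 1 < 3.
oneLessThree :: Less ('Succ 'Zero) ('Succ ('Succ ('Succ 'Zero)))
oneLessThree = SLess ZLess
```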

Tuples of Natural Numbers


In order to represent the tuples of natural numbers, we will define a family of sets T, depending on N, in such a way that an element in T(n) will be a tuple of n elements. The family is defined using the constructors nil and cons:

T-formation:

    n ∈ N
    -----------
    T(n) Set

T-introduction 1:

    nil ∈ T(0)

T-introduction 2:

    n ∈ N    a ∈ N    t ∈ T(n)
    ------------------------------
    cons(a, t) ∈ T(succ(n))

T-elimination:

    C(x, y) Set  [x ∈ N, y ∈ T(x)]
    d ∈ C(0, nil)
    e(y, z, u) ∈ C(succ(x), cons(y, z))  [x, y ∈ N, z ∈ T(x), u ∈ C(x, z)]
    n ∈ N    t ∈ T(n)
    ------------------------------
    Telim(d, e, t) ∈ C(n, t)
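Similarly (again only as an illustrative aside of ours), the family T(n) corresponds to a length-indexed vector GADT, with Telim playing the role of structural recursion. The norm of section 2.2 is included as an example of a Telim-style definition; the names Tup, norm and example are assumptions of this sketch.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

data Nat = Zero | Succ Nat

-- Tup n: tuples of natural numbers of length n (the family T(n)).
data Tup :: Nat -> Type where
  TNil  :: Tup 'Zero
  TCons :: Integer -> Tup n -> Tup ('Succ n)

-- The norm of a tuple, defined by structural recursion (Telim).
norm :: Tup n -> Integer
norm TNil         = 0
norm (TCons x xs) = x + norm xs

-- Example: norm (2, 3, 5) = 10.
example :: Integer
example = norm (TCons 2 (TCons 3 (TCons 5 TNil)))
```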

t-Primitive Recursive Functions


The next set to define is the set of t-primitive recursive functions. By looking at the definition of these functions in the previous section, it is clear that this is an inductively defined set, defined by Z, S, the projections π, composition, primitive recursion and appending. There are two things to take into account when defining this set: the types of the functions (domain and codomain), and their behavior when applied to tuples of the correct arity. The types can be made explicit by defining a family of sets TPR, such that an element in TPR(n, m) will represent a function in N^n → N^m. The behavior of the elements of this family as functions will be defined later, when we define the meaning of each constant as a function. So, with this definition, we only define the set of codes of t-primitive recursive functions. From this definition we obtain an induction principle, which we will use later both to define the meaning of, and to prove propositions over, the elements in the sets TPR.

TPR-formation:

    n ∈ N    m ∈ N
    ------------------
    TPR(n, m) Set

We define a canonical constant for each possible way of constructing a t-primitive recursive function:

TPR-introduction 1:

    Z ∈ TPR(0, 1)

TPR-introduction 2:

    S ∈ TPR(1, 1)

TPR-introduction 3:

    n ∈ N    i ∈ N
    --------------------------------
    π(n, i) ∈ TPR(succ(i+n), 1)

Remember that π^j_m is defined only when 0 < j ≤ m. In this case, a very simple way to make this condition hold is to re-parameterize the indices. So, we define the new constant in such a way that π(n, i) is equivalent to the old π^{i+1}_{i+n+1}, and thus 0 < i+1 ≤ i+n+1 will hold for all n, i ∈ N.

TPR-introduction 4:

    n ∈ N    k ∈ N    m ∈ N    h ∈ TPR(k, m)    g ∈ TPR(n, k)
    ------------------------------------------------------------
    comp(k, h, g) ∈ TPR(n, m)

TPR-introduction 5:

    n ∈ N    m ∈ N    g ∈ TPR(n, m)    h ∈ TPR(succ(m+n), m)
    ------------------------------------------------------------
    pr(g, h) ∈ TPR(succ(n), m)

TPR-introduction 6:

    n ∈ N    m1 ∈ N    m2 ∈ N    g ∈ TPR(n, m1)    h ∈ TPR(n, m2)
    ----------------------------------------------------------------
    app(m1, m2, g, h) ∈ TPR(n, m1+m2)

In these last two cases, the conditions in the definition of t-primitive recursive functions (n > 0 and m1+m2 = m respectively) are again forced by re-parameterization.


TPR-elimination:

    C(x, y, w) Set  [x, y ∈ N, w ∈ TPR(x, y)]
    z ∈ C(0, 1, Z)
    s ∈ C(1, 1, S)
    r(n, i) ∈ C(succ(i+n), 1, π(n, i))  [n, i ∈ N]
    c(n, k, m, h, g, u, v) ∈ C(n, m, comp(k, h, g))  [n, k, m ∈ N, h ∈ TPR(k, m), g ∈ TPR(n, k), u ∈ C(k, m, h), v ∈ C(n, k, g)]
    p(n, m, g, h, v, u) ∈ C(succ(n), m, pr(g, h))  [n, m ∈ N, g ∈ TPR(n, m), h ∈ TPR(succ(m+n), m), v ∈ C(n, m, g), u ∈ C(succ(m+n), m, h)]
    a(n, m1, m2, g, h, v, u) ∈ C(n, m1+m2, app(m1, m2, g, h))  [n, m1, m2 ∈ N, g ∈ TPR(n, m1), h ∈ TPR(n, m2), v ∈ C(n, m1, g), u ∈ C(n, m2, h)]
    n ∈ N    m ∈ N    f ∈ TPR(n, m)
    ------------------------------------------------
    TPRe(z, s, r, c, p, a, n, m, f) ∈ C(n, m, f)

In order to improve readability we will omit the variable k in comp(k, h, g), and the variables m1 and m2 in app(m1, m2, g, h), whenever they can be inferred from the context.

3.3 Operations and Properties

In this section, we will first discuss the definition of some of the operations on tuples that are used in the proof. Then, we will give a meaning function for the elements of the sets TPR(n, m), to obtain functions that behave as the ones defined in Definition 2 in section 2.2. Finally, we will show which mathematical properties involving these operations and the order and equality over natural numbers we have to prove.

Operations on Tuples
The following operations on tuples are needed in the formalization of the proof:

hd(t) ≡ Telim(0, λy.z.u.y, t) ∈ N  [n ∈ N, t ∈ T(n)]
    gives the first element of a tuple (0 in the nil case),

tl(t) ≡ Telim(nil, λy.z.u.z, t) ∈ T(pred(n))  [n ∈ N, t ∈ T(n)]
    gives the tail of a tuple (nil if the tuple is nil),

[k] ≡ cons(k, nil) ∈ T(1)  [k ∈ N]
    the constructor of a tuple with one element,

chgfst(a, t) ≡ cons(a, tl(t)) ∈ T(succ(n))  [a, n ∈ N, t ∈ T(succ(n))]
    changes the first element of the (non-empty) tuple t to a,

⟨t1, t2⟩ ≡ Telim(t2, λy.z.u.cons(y, u), t1) ∈ T(n1+n2)  [n1, n2 ∈ N, t1 ∈ T(n1), t2 ∈ T(n2)]
    the appending operation,

proj^n_i(t) ≡ apply(natrec(λt1.hd(t1), λu.v.λt1.apply(v, tl(t1)), i), t) ∈ N  [n, i ∈ N, t ∈ T(succ(i+n))]
    gives the (i+1)-th element of the tuple,

‖t‖ ≡ Telim(0, λy.z.u.y+u, t) ∈ N  [n ∈ N, t ∈ T(n)]
    the norm of a tuple.

Remarks

First, note that, although hd and tl are usually defined only for non-empty tuples, we defined them as total functions. This is because in this framework all functions are total, so, if the domain is a family of sets, the operation must be defined over all the sets of the family. In the definition of proj, we use the re-parameterization in the definition of π to ensure that the conditions on the indices are satisfied. Note also that in the definition of proj it was necessary to use higher order functions in the following sense: with natrec we define an element in the set T(succ(i+n)) → N, and then apply it to the tuple t to get the desired result. This is done in order to be able to make the definition of proj^n_i(t) by recursion on i, on which t depends. In fact, alternative definitions of ⟨·, ·⟩ and ‖·‖ can be given by recursion on natural numbers using the same idea. Let us define:

⟨t1, t2⟩_{n1} ≡ apply(natrec(λt.t2, λu.v.λt.cons(hd(t), apply(v, tl(t))), n1), t1) ∈ T(n1+n2)  [n1, n2 ∈ N, t1 ∈ T(n1), t2 ∈ T(n2)],

‖t‖_n ≡ apply(natrec(λt1.0, λu.v.λt1.hd(t1)+apply(v, tl(t1)), n), t) ∈ N  [n ∈ N, t ∈ T(n)].

We will use these definitions instead of the former ones (although they are more complicated) because with a definition by recursion on natural numbers some equalities are easier to prove (see section 4.2). We will write neither the variable n1 in ⟨t1, t2⟩_{n1} nor n in ‖t‖_n (although the constants depend on them) whenever they can be inferred from the context.
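If one drops the length index and works with plain lists (a simplification of ours, matching the untyped sketch in section 2.2), the operations above read as follows; note how hd and tl are totalized exactly as in the remark.

```haskell
-- Totalized head and tail, returning 0 and [] on the empty tuple.
hd :: [Integer] -> Integer
hd []      = 0
hd (x : _) = x

tl :: [Integer] -> [Integer]
tl []       = []
tl (_ : xs) = xs

-- Singleton tuple, change of first element, appending, projection, norm.
single :: Integer -> [Integer]
single k = [k]

chgfst :: Integer -> [Integer] -> [Integer]
chgfst a t = a : tl t

append :: [Integer] -> [Integer] -> [Integer]
append = (++)

proj :: Int -> [Integer] -> Integer  -- proj i t is the (i+1)-th element
proj 0 t = hd t
proj i t = proj (i - 1) (tl t)

norm :: [Integer] -> Integer
norm = sum
```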

Meaning of t-Primitive Recursive Functions


To complete the definition of t-primitive recursive functions, we will now define the meaning of the elements (codes) in TPR as functions. This will be done by defining, for each n, m ∈ N and f ∈ TPR(n, m), a constant TPRfun(n, m, f) in T(n) → T(m) that will behave as the t-primitive recursive function f represents, according to Definition 2 in section 2.2. The definition of TPRfun is by induction on the definition of the elements in TPR, using TPR-elimination. So, in order to define it, we must provide a function for each way of constructing an element in TPR(n, m), under the corresponding induction hypothesis.

For Z, the constant 0 function zf is defined: zf ≡ λt.[0] ∈ T(0) → T(1).

For S, we define the function sf, which applied to a tuple with one element returns a tuple formed with the successor of that element: sf ≡ λt.[succ(hd(t))] ∈ T(1) → T(1).

For π(n, i), we use proj, already defined for tuples, to define the function rf: rf(n, i) ≡ λt.[proj^n_i(t)] ∈ T(succ(i+n)) → T(1)  [n, i ∈ N].

For the composition of h ∈ TPR(k, m) and g ∈ TPR(n, k), we must define a function which composes the function defined by h (fh) and the function defined by g (fg). So, we define the function cf:

cf(n, k, m, h, g, fh, fg) ≡ λt.apply(fh, apply(fg, t)) ∈ T(n) → T(m)
  [n, k, m ∈ N, h ∈ TPR(k, m), g ∈ TPR(n, k), fh ∈ T(k) → T(m), fg ∈ T(n) → T(k)].

For a function f defined by primitive recursion on g and h, we need to do induction on the first element of the tuple to which we apply the function. In the 0 case, the result is the function defined by g (fg) applied to the tail of the tuple. In the successor case, we need to build a new tuple (the result of f applied to the "preceding" tuple, appended with the preceding tuple) and apply the function defined by h (fh) to it. We then define the function pf by:

pf(n, m, g, h, fg, fh) ≡ λt.natrec(apply(fg, tl(t)), λu.v.apply(fh, ⟨v, chgfst(u, t)⟩), hd(t)) ∈ T(succ(n)) → T(m)
  [n, m ∈ N, g ∈ TPR(n, m), h ∈ TPR(succ(m+n), m), fg ∈ T(n) → T(m), fh ∈ T(succ(m+n)) → T(m)].

For the appending of g and h, we use the constant ⟨·, ·⟩ to define the function af, which appends the results of the application of the functions defined by g (fg) and h (fh) to a tuple:

af(n, m1, m2, g, h, fg, fh) ≡ λt.⟨apply(fg, t), apply(fh, t)⟩ ∈ T(n) → T(m1+m2)
  [n, m1, m2 ∈ N, g ∈ TPR(n, m1), h ∈ TPR(n, m2), fg ∈ T(n) → T(m1), fh ∈ T(n) → T(m2)].

Now, for n, m ∈ N and f ∈ TPR(n, m), we can define the t-primitive recursive function represented by f, which is a constant in the corresponding function set:

TPRfun(n, m, f) ≡ TPRe(zf, sf, rf, cf, pf, af, n, m, f) ∈ T(n) → T(m),

and then the result of the application of f in TPR(n, m) to t in T(n) is just:

Apply^m_n(f, t) ≡ apply(TPRfun(n, m, f), t) ∈ T(m).

Properties of Apply
From the definition of Apply we get immediately the following (definitional) equalities (we omit the variables n and m in Apply^m_n(f, t) in order to improve readability):

Apply(Z, t) = [0] ∈ T(1)  [t ∈ T(0)],
Apply(S, [x]) = [succ(x)] ∈ T(1)  [x ∈ N],
Apply(π(n, i), t) = [proj^n_i(t)] ∈ T(1)  [n, i ∈ N, t ∈ T(succ(i+n))],
Apply(comp(h, g), t) = Apply(h, Apply(g, t)) ∈ T(m)  [n, k, m ∈ N, h ∈ TPR(k, m), g ∈ TPR(n, k), t ∈ T(n)],
Apply(pr(g, h), cons(0, t)) = Apply(g, t) ∈ T(m)  [n, m ∈ N, g ∈ TPR(n, m), h ∈ TPR(succ(m+n), m), t ∈ T(n)],
Apply(pr(g, h), cons(succ(x), t)) = Apply(h, ⟨Apply(pr(g, h), cons(x, t)), cons(x, t)⟩) ∈ T(m)  [n, m ∈ N, g ∈ TPR(n, m), h ∈ TPR(succ(m+n), m), t ∈ T(n), x ∈ N],
Apply(app(g, h), t) = ⟨Apply(g, t), Apply(h, t)⟩ ∈ T(m1+m2)  [n, m1, m2 ∈ N, g ∈ TPR(n, m1), h ∈ TPR(n, m2), t ∈ T(n)].

These equalities correspond exactly to the equations 1–6 of Definition 2 in section 2.2, which proves the correctness of the representation.

Some Elementary Arithmetical Lemmas


In the proof of Ackermann's result we need only some basic properties of the addition and the order of the natural numbers. Here is a list of the main lemmas we have to prove:

- For the propositional equality: reflexivity, symmetry, transitivity, substitutivity and congruence.

- The addition of two elements x and y of the set N is defined by the expression x+y ≡ natrec(y, λu.v.succ(v), x), from which we can prove the following propositions (a small executable rendering of natrec and + is sketched after this list):
  0+x =_N x  [x ∈ N],
  succ(y)+x =_N succ(y+x)  [x, y ∈ N],
  x+y =_N y+x  [x, y ∈ N] (commutativity),
  x+(y+z) =_N (x+y)+z  [x, y, z ∈ N] (associativity).

- The correctness of the definition of < can be shown by proving the following proposition:
  x < y ⇔ ∃k ∈ N. succ(k)+x =_N y  [x, y ∈ N],
  and using this proposition, we can prove the following properties of < and ≤:
  x < succ(x),
  (x < y & y < z) ⊃ x < z,
  (x < y & y < z) ⊃ succ(x) < z,
  x < x+succ(y),
  x < y ⊃ x+z < y+z,
  ¬(x < x),
  x ≤ y ⇔ ∃k ∈ N. k+x =_N y,
  0 ≤ x,
  (x ≤ y & y ≤ z) ⊃ x ≤ z,
  x < y ⊃ succ(x) ≤ y,
  x ≤ y ⊃ x+z ≤ y+z.

- Using the above lemmas, the following properties of ‖·‖ are proved:
  ‖t‖ =_N 0  [t ∈ T(0)],
  ‖t‖ =_N hd(t)+‖tl(t)‖  [n ∈ N, t ∈ T(succ(n))],
  ‖[k]‖ =_N k  [k ∈ N],
  ‖chgfst(a, t)‖ =_N a+‖tl(t)‖  [a, n ∈ N, t ∈ T(succ(n))],
  ‖⟨t1, t2⟩‖ =_N ‖t1‖+‖t2‖  [n1, n2 ∈ N, t1 ∈ T(n1), t2 ∈ T(n2)],
  proj^n_i(t) ≤ ‖t‖  [n, i ∈ N, t ∈ T(succ(i+n))],
  ‖t2‖ ≤ ‖⟨t1, t2⟩‖  [n1, n2 ∈ N, t1 ∈ T(n1), t2 ∈ T(n2)].
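The recursor natrec and the addition defined with it have the following direct Haskell rendering (a hedged sketch of ours; the real definitions live in the Nat logic of section 3.5):

```haskell
-- natrec d e n computes by recursion on n:
--   natrec d e 0     = d
--   natrec d e (n+1) = e n (natrec d e n)
natrec :: a -> (Integer -> a -> a) -> Integer -> a
natrec d _ 0 = d
natrec d e n = e (n - 1) (natrec d e (n - 1))

-- x + y defined as natrec(y, \u v -> succ v, x), as in the lemma list above.
plus :: Integer -> Integer -> Integer
plus x y = natrec y (\_ v -> v + 1) x
-- plus 2 3 == 5
```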

3.4 The Proof of the Main Theorem

The formal proof of the theorem is carried out as a direct translation of the proof presented in section 2.2. The definition of Ackermann's function is done in the same way as in section 2.1. We define a family of functions Ack, such that Ack(n) ∈ TPR(1, 1) (n ∈ N), by induction on n: Ack_0 is the successor function, and to define Ack_{n+1} in terms of Ack_n we use the primitive recursion schema, with the constant function yielding Ack_n(1) in the 0 case, and comp(Ack_n, π^1_2) in the successor case. So, we first define the constant functions K_k ∈ TPR(0, 1) by induction on k:

K_k ≡ natrec(Z, λu.v.comp(S, v), k) ∈ TPR(0, 1)  [k ∈ N],

from which we can prove Apply(K_k, t) =_{T(1)} [k]  [k ∈ N, t ∈ T(0)]. And now we can define:

Ack(n) ≡ natrec(S, λu.v.pr(K_{hd(Apply(v, [1]))}, comp(v, π(1, 0))), n) ∈ TPR(1, 1)  [n ∈ N],
A(x, y) ≡ hd(Apply(Ack(x), [y])) ∈ N  [x, y ∈ N].

With this last definition, we can prove the propositions corresponding to properties A1 to A12 in section 2.2. The proofs of these properties are generally by induction on the natural numbers, and have the form of long chains of equalities and inequalities; so, although very tedious, they are easily carried out: the properties of <, ≤, + and =_N that we listed above are enough. Then, to prove that for any t-primitive recursive function f : N^n → N^m there exists a constant k_f such that, for all t ∈ N^n, ‖f(t)‖ < A(k_f, ‖t‖), we define a new constant:

C_th(n, m, f) ≡ ∃k ∈ N. (∀t ∈ T(n). ‖Apply(f, t)‖ < A(k, ‖t‖)),

which is a set (proposition) under the assumptions n, m ∈ N, f ∈ TPR(n, m). To prove that C_th(n, m, f) is true for all elements in the sets TPR, we use TPR-elimination, that is, structural induction over the t-primitive recursive functions. This is just the same as we did in the proof in section 2.2, so the translation of the proof is straightforward, using the lemmas and the corresponding translations of A1 to A12. To complete the proof, we have to show that A is not in TPR(2, 1). This is done by proving the following proposition:

¬(∃f ∈ TPR(2, 1). (∀x, y ∈ N. A(x, y) =_N ‖Apply(f, [x, y])‖))

(where [x, y] ≡ cons(x, [y])). This is proved following the same steps as in section 2.1, for which we use the asymmetry of <.
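Continuing the untyped Haskell sketch from section 2.2 (reusing its TPR datatype and eval; the names kk, ackCode and aViaCode are ours), the codes K_k and Ack(n) can be written out and compared with the direct definition of A.

```haskell
-- K_k : the constant function of zero arguments yielding k.
kk :: Integer -> TPR
kk 0 = Z
kk k = Comp S (kk (k - 1))

-- Ack(0) = S; Ack(n+1) = pr(K_{Ack_n(1)}, comp(Ack(n), first projection)).
ackCode :: Integer -> TPR
ackCode 0 = S
ackCode n = Pr (kk (head (eval prev [1]))) (Comp prev (Proj 1))
  where prev = ackCode (n - 1)

-- A(x, y) read off from the code, to be compared with the direct definition.
aViaCode :: Integer -> Integer -> Integer
aViaCode x y = head (eval (ackCode x) [y])
-- e.g. aViaCode 2 3 == 9 and aViaCode 3 3 == 61
```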

3.5 Editing the Proof

We used ALF [ACN90] both to represent Martin-Löf's set theory and to edit and check the proof. Using ALF's type system, a version of Martin-Löf's (monomorphic) set theory was developed². To each set corresponds an ALF type; the membership relation between elements and sets is identified with the membership relation between objects and types; equal sets correspond to equal types; and equal elements in a set to equal objects in the corresponding type. Propositions (interpreted as sets) are then also represented by types, and proof objects become explicit as objects of the corresponding type. The judgements in the set theory are translated to ALF judgements according to these rules. To the hypothetical judgements, function types are assigned: for instance, the judgement a ∈ A [x ∈ B] is directly translated to a : A [x : B], from which an object in a function type can be obtained: [x]a : (x:B)A. The rules are translated into constants belonging to function types, from the types of the premises to the type of the conclusion. So, the new sets are defined by constants corresponding to the rules defining them in the set theory.

² We followed the lines of the representation of Martin-Löf's monomorphic set theory in Martin-Löf's logical framework [NPS90].

In order to structure the theory, the definitions were grouped in "logics", each one (generally) depending on previously defined ones. For instance, to define the set N of natural numbers, we define the logic Nat, which has the following constants:

N : Type,
0 : N,
succ : (n:N)N,
natrec : (C:(n:N)Type)(d:C(0))(e:(u:N)(v:C(u))C(succ(u)))(n:N)C(n).

The logic T of tuples, based on the logic Nat, has the following declarations:

T : (n:N)Type,
nil : T(0),
cons : (n:N)(a:N)(t:T(n))T(succ(n)),
Telim : (C:(n:N)(t:T(n))Type)
        (d:C(0, nil))
        (e:(n:N)(a:N)(t:T(n))(u:C(n, t))C(succ(n), cons(n, a, t)))
        (n:N)(t:T(n))C(n, t).

These logics can be used as modules, letting us choose between different implementations of the basic ones (see section 4.3). The operations and properties mentioned in this and the previous sections were respectively defined and proved using ALF's proof editor. All the theorems were proved completely: that is, the only constants which are not abbreviations are the ones corresponding to formation, introduction and elimination rules of the sets presented. The proof consists of about 200 theorems, of which around 50 are part of the proof of the growth of Ackermann's function. The rest are abbreviations, definitions and the lemmas about properties of primitive recursive functions and natural numbers. The complete proof can be found in [Sza91].

4 Design Decisions
In this section we will comment on the development of the proof we presented. We will mention some problems we had to face when formalizing the proof, and show how we solved (some of) them. We will also show alternative implementations of the sets we defined in section 3.2.

4.1 First Attempt

The first proof we tried to formalize was the one mentioned in section 2.1 for primitive recursive functions. When trying to formalize it in Martin-Löf's set theory, we had difficulty in defining the primitive recursive functions. We defined an inductive family of sets PR over N, in such a way that an element in PR(n) would represent a primitive recursive function in N^n → N. The introduction rules had to reflect Definition 1 in section 2.1, so we introduced one canonical constant for each way of building a primitive recursive function. But, as mentioned in section 2.2, in the case of the definition of a function f : N^n → N by composition of h : N^m → N and g1, ..., gm : N^n → N, we need to express that we have m functions g1, ..., gm, where m is a variable in the rule. One way to do this was to have a family of functions g(i) over N, for i ≤ m. Then, the introduction rule for the composition case takes a function h ∈ PR(m) and a functional expression g such that g(i) ∈ PR(n) whenever i ≤ m³. This seemed to be a good choice, but when proving equalities (the Id proposition represents an intensional equality), we could not prove that two functional expressions were equal just by knowing that they yielded equal values when applied to equal elements of the domain. This problem was critical when proving that the application of the constant function K_k to a tuple in T(0) yields [k], which is used to prove that A(x+1, y+1) = A(x, A(x+1, y)), and in the induction step for the composition case of the proof that for any t-primitive recursive function f : N^n → N^m there exists a constant k_f such that for all x1, ..., xn ∈ N, f(x1, ..., xn) < A(k_f, Σ_{i=1}^n xi). The extension proposed in section 2.2 seems to be an elegant way to get rid of these problems: we internalize the notion of tuples by considering functions in N^n → N^m, and with this the composition operator becomes uniform in the sense that it always takes two functions to yield a new one. For the same reason we chose an appending rather than a tupling operation on functions.

³ In order to avoid having an introduction rule depending on a proposition being true, we defined a family of sets Le over N, in such a way that the elements in Le(n) are "copies" of the natural numbers which are less than or equal to n (i.e., define inductively Le(n) by zz ∈ Le(n) for all n ∈ N, and if k ∈ Le(n), then ss(k) ∈ Le(succ(n))). The introduction rule for the composition case then took h ∈ PR(m) and g(i) ∈ PR(n) [i ∈ Le(m)].

4.2 Inductive Definitions

In section 3.3 we gave alternative definitions of some operations on tuples using higher order functions, so that equalities involving these operations were easier to prove. When defining families of sets, we can consider some of the sets appearing in the formation and introduction rules as parameters, in which case we obtain a set (or a family of sets) for each element in the parameter set. As a consequence, we get one elimination rule for each set (family) we defined, that is, one for each element in the parameter set of the definition. For instance, if in the definition of the sets T(n) in section 3.2 we consider n to be a parameter, we obtain one set T(n) for each n ∈ N. This is done by defining independently a set Tz, which is the singleton {nil}, and a set forming operator Ts taking a set as argument and such that the elements in Ts(A) (where A is a set) are of the form cons(a, t), for a ∈ N and t ∈ A. Then, using universes, we can define:

T(n) ≡ Set(natrec(Tz, λu.v.Ts(v), n))

which is a set whenever n ∈ N. With these definitions we can define partial functions over elements of these sets, like hd and tl (see [Sza91]). The order of the natural numbers was also defined as a binary relation over N. If we consider the first index of the family as a parameter, we obtain, for each n ∈ N, a family of sets Less(n) over N such that an element in Less(n)(m) is a proof that n < m. From these, we obtain a definition of n < m by induction on n, with which some properties of <, like transitivity, are much easier to prove (remember that to prove the transitivity of < we first had to prove that n < m is (logically) equivalent to ∃k ∈ N. succ(k)+n =_N m, and then use this property to prove the transitivity). It is not yet clear whether it is convenient to consider indices as parameters, and the relation between the rules with and without parameters is not known in the general case either. In the case of T(n), if we consider n to be a parameter, we can prove some "closure" properties, like ∀t ∈ T(0). t =_{T(0)} nil, in a very direct form, from which some properties needed in the proof, like ∀t ∈ T(0). ‖t‖ =_N 0, follow without having to use higher order functions in the definition of ‖·‖ (we can prove the same property with the original definition of the family T using universes⁴). In the case of <, both ways of defining the relation seem to be useful when proving its properties.

⁴ This was pointed out by T. Altenkirch.

4.3 The Abstract Specification Approach

As is common in programming, it is possible in ALF to work with abstract sets and operations rather than with their actual inductive and recursive definitions. Thus, we may specify abstract structures and operations, which consist of a collection of propositions implicitly characterizing a number of sets and operations on them (such a specification is usually given by a set of equations). When writing the proof, we used this approach in order to work in a modular style, and so independently from the exact definitions of some sets and propositions. The usual definition of tuples, for instance, is given by requiring (a family of) sets T(n) for n ∈ N, having the following operations:

nil ∈ T(0),
cons : N × T(n) → T(succ(n)), for all n ∈ N,
hd : T(succ(n)) → N, for all n ∈ N,
tl : T(succ(n)) → T(n), for all n ∈ N,

such that the following equalities hold:

hd(cons(a, t)) = a, for all a ∈ N, t ∈ T(n) (n ∈ N),
tl(cons(a, t)) = t, for all a ∈ N, t ∈ T(n) (n ∈ N).

Using hd and tl, we can define all the operations on tuples of section 3.3, and all the properties of these operations can be proved from these definitions using only the two equalities stated above. We have three different implementations of the tuples: the two already presented, plus a definition using universes that defines the set T(n) as N × N × ... × N, n times⁵. We defined hd and tl for each of these three definitions and proved that the equalities stated above hold for each implementation. For the order of the natural numbers we used the same approach, and identified the set of properties of this relation that we used. We then implemented it in three different ways: the two already presented, plus the definition of n < m as the proposition ∃k ∈ N. succ(k)+n =_N m.
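As an illustration of ours of what such an abstract specification looks like in a more familiar setting, the tuple interface can be rendered as a Haskell type class whose laws are the two equations above; the list instance below corresponds to dropping the length index, and the names Tuples, nil, cons, hd and tl are assumptions of this sketch.

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Abstract tuple interface: the two laws below are required but, unlike in
-- ALF, not enforced by the type system here.
class Tuples t where
  nil  :: t
  cons :: Integer -> t -> t
  hd   :: t -> Integer  -- law: hd (cons a t) == a
  tl   :: t -> t        -- law: tl (cons a t) == t

-- One of several possible implementations: plain lists, totalized as before.
instance Tuples [Integer] where
  nil         = []
  cons        = (:)
  hd []       = 0
  hd (x : _)  = x
  tl []       = []
  tl (_ : xs) = xs
```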


⁵ The definition is by induction on n, and for the case n = 0 the set with only one element (⊤) was chosen.


References
[Ack28] W. Ackermann. Zum Hilbertschen Aufbau der reellen Zahlen. Mathematische Annalen, 99:118–133, 1928.

[ACN90] L. Augustsson, T. Coquand, and B. Nordström. A short description of Another Logical Framework. In G. Huet and G. Plotkin, editors, Informal Proceedings of the First Workshop on Logical Frameworks, pages 39–42. Esprit Basic Research Action 3245, May 1990.

[CP90] Thierry Coquand and Christine Paulin. Inductively defined types. In Proceedings of COLOG-88, number 417 in Lecture Notes in Computer Science. Springer-Verlag, 1990.

[Dyb90] Peter Dybjer. Inductive sets and families in Martin-Löf's type theory and their set-theoretic semantics. In G. Huet and G. Plotkin, editors, Informal Proceedings of the First Workshop on Logical Frameworks, pages 213–230. Esprit Basic Research Action 3245, May 1990. To appear in G. Huet and G. Plotkin, editors, Logical Frameworks.

[Her65] H. Hermes. Enumerability, Decidability, Computability. Springer-Verlag, Berlin, 1965.

[ML84] Per Martin-Löf. Intuitionistic Type Theory. Bibliopolis, Napoli, 1984.

[NPS90] Bengt Nordström, Kent Petersson, and Jan M. Smith. Programming in Martin-Löf's Type Theory. An Introduction. Oxford University Press, 1990.

[Pet35] R. Péter. Konstruktion nichtrekursiver Funktionen. Mathematische Annalen, 111:42–60, 1935.

[Sza91] Nora Szasz. A Machine Checked Proof that Ackermann's Function is not Primitive Recursive. Licentiate Thesis, Chalmers University of Technology and University of Göteborg, Sweden, June 1991.
