Why Functional Programming Matters

John Hughes
The University, Glasgow
Abstract
As software becomes more and more complex, it is more and more
important to structure it well. Well-structured software is easy to write
and to debug, and provides a collection of modules that can be reused
to reduce future programming costs. In this paper we show that two fea-
tures of functional languages in particular, higher-order functions and lazy
evaluation, can contribute significantly to modularity. As examples, we
manipulate lists and trees, program several numerical algorithms, and im-
plement the alpha-beta heuristic (an algorithm from Artificial Intelligence
used in game-playing programs). We conclude that since modularity is the
key to successful programming, functional programming offers important
advantages for software development.
1 Introduction
This paper is an attempt to demonstrate to the larger community of (non-
functional) programmers the significance of functional programming, and also
to help functional programmers exploit its advantages to the full by making it
clear what those advantages are.
Functional programming is so called because its fundamental operation is
the application of functions to arguments. A main program itself is written as
a function that receives the program’s input as its argument and delivers the
program’s output as its result. Typically the main function is defined in terms of
other functions, which in turn are defined in terms of still more functions, until
at the bottom level the functions are language primitives. All of these functions
are much like ordinary mathematical functions, and in this paper they will be
defined by ordinary equations. We are following Turner’s language Miranda [4]
here, but the notation should be readable without specific knowledge of this.
(Miranda is a trademark of Research Software Ltd.)

(An earlier version of this paper appeared in The Computer Journal, 32(2):98–107,
April 1989. Copyright belongs to The British Computer Society, who grant permission
to copy for educational purposes only without fee, provided the copies are not made
for direct commercial advantage and this BCS copyright notice appears.)
The special characteristics and advantages of functional programming are
often summed up more or less as follows. Functional programs contain no
assignment statements, so variables, once given a value, never change. More
generally, functional programs contain no side-effects at all. A function call
can have no effect other than to compute its result. This eliminates a major
source of bugs, and also makes the order of execution irrelevant — since no side-
effect can change an expression’s value, it can be evaluated at any time. This
relieves the programmer of the burden of prescribing the flow of control. Since
expressions can be evaluated at any time, one can freely replace variables by
their values and vice versa — that is, programs are “referentially transparent”.
This freedom helps make functional programs more tractable mathematically
than their conventional counterparts.
Such a catalogue of “advantages” is all very well, but one must not be sur-
prised if outsiders don’t take it too seriously. It says a lot about what functional
programming isn’t (it has no assignment, no side effects, no flow of control) but
not much about what it is. The functional programmer sounds rather like a
mediæval monk, denying himself the pleasures of life in the hope that it will
make him virtuous. To those more interested in material benefits, these “ad-
vantages” are totally unconvincing.
Functional programmers argue that there are great material benefits — that
a functional programmer is an order of magnitude more productive than his
or her conventional counterpart, because functional programs are an order of
magnitude shorter. Yet why should this be? The only faintly plausible reason
one can suggest on the basis of these “advantages” is that conventional programs
consist of 90% assignment statements, and in functional programs these can be
omitted! This is plainly ridiculous. If omitting assignment statements brought
such enormous benefits then Fortran programmers would have been doing it
for twenty years. It is a logical impossibility to make a language more powerful
by omitting features, no matter how bad they may be.
Even a functional programmer should be dissatisfied with these so-called
advantages, because they give no help in exploiting the power of functional lan-
guages. One cannot write a program that is particularly lacking in assignment
statements, or particularly referentially transparent. There is no yardstick of
program quality here, and therefore no ideal to aim at.
Clearly this characterization of functional programming is inadequate. We
must find something to put in its place — something that not only explains the
power of functional programming but also gives a clear indication of what the
functional programmer should strive towards.
2 An Analogy with Structured Programming
It’s helpful to draw an analogy between functional and structured programming.
In the past, the characteristics and advantages of structured programming have
been summed up more or less as follows. Structured programs contain no goto
statements. Blocks in a structured program do not have multiple entries or exits.
Structured programs are more tractable mathematically than their unstructured
counterparts. These “advantages” of structured programming are very similar in
spirit to the “advantages” of functional programming we discussed earlier. They
are essentially negative statements, and have led to much fruitless argument
about “essential gotos” and so on.
With the benefit of hindsight, it’s clear that these properties of structured
programs, although helpful, do not go to the heart of the matter. The most im-
portant difference between structured and unstructured programs is that struc-
tured programs are designed in a modular way. Modular design brings with
it great productivity improvements. First of all, small modules can be coded
quickly and easily. Second, general-purpose modules can be reused, leading to
faster development of subsequent programs. Third, the modules of a program
can be tested independently, helping to reduce the time spent debugging.
The absence of gotos, and so on, has very little to do with this. It helps with
“programming in the small”, whereas modular design helps with “programming
in the large”. Thus one can enjoy the benefits of structured programming in
Fortran or assembly language, even if it is a little more work.
It is now generally accepted that modular design is the key to successful
programming, and recent languages such as Modula-2 [6] and Ada [5] include
features specifically designed to help improve modularity. However, there is a
very important point that is often missed. When writing a modular program to
solve a problem, one first divides the problem into subproblems, then solves the
subproblems, and finally combines the solutions. The ways in which one can
divide up the original problem depend directly on the ways in which one can glue
solutions together. Therefore, to increase one’s ability to modularize a problem
conceptually, one must provide new kinds of glue in the programming language.
Complicated scope rules and provision for separate compilation help only with
clerical details — they can never make a great contribution to modularization.
We shall argue in the remainder of this paper that functional languages pro-
vide two new, very important kinds of glue. We shall give some examples of
programs that can be modularized in new ways and can thereby be simplified.
This is the key to functional programming’s power — it allows improved mod-
ularization. It is also the goal for which functional programmers must strive
— smaller and simpler and more general modules, glued together with the new
glues we shall describe.
3 Gluing Functions Together
The first of the two new kinds of glue enables simple functions to be glued
together to make more complex ones. It can be illustrated with a simple list-
processing problem — adding the elements of a list. We can define lists by
listof ∗ ::= Nil | Cons ∗ (listof ∗)
which means that a list of ∗s (whatever ∗ is) is either Nil , representing a list
with no elements, or a Cons of a ∗ and another list of ∗s. A Cons represents
a list whose first element is the ∗ and whose second and subsequent elements
are the elements of the other list of ∗s. Here ∗ may stand for any type — for
example, if ∗ is “integer” then the definition says that a list of integers is either
empty or a Cons of an integer and another list of integers. Following normal
practice, we will write down lists simply by enclosing their elements in square
brackets, rather than by writing Conses and Nils explicitly. This is simply a
shorthand for notational convenience. For example,
[] means Nil
[1] means Cons 1 Nil
[1, 2, 3] means Cons 1 (Cons 2 (Cons 3 Nil ))
The elements of a list can be added by a recursive function sum. The function
sum must be defined for two kinds of argument: an empty list (Nil ), and a
Cons. Since the sum of no numbers is zero, we define
sum Nil = 0
and since the sum of a Cons can be calculated by adding the first element of
the list to the sum of the others, we can define
sum (Cons n list) = n + sum list
Examining this definition, we see that only the 0 in the first equation and
the + in the second are specific to computing a sum:
sum Nil = 0
sum (Cons n list) = n + sum list
This means that the computation of a sum can be modularized by gluing together
a general recursive pattern and those two operations. The recursive pattern is
conventionally called foldr, and sum can be expressed as

sum = foldr (+) 0
The definition of foldr can be derived just by parameterizing the definition of
sum, giving
(foldr f x) Nil = x
(foldr f x) (Cons a l ) = f a ((foldr f x) l )
Here we have written brackets around (foldr f x) to make it clear that it replaces
sum. Conventionally the brackets are omitted, and so ((foldr f x) l) is written
as (foldr f x l). A function of three arguments such as foldr, applied to only
two, is taken to be a function of the one remaining argument, and in general,
a function of n arguments applied to only m of them (m < n) is taken to be a
function of the n − m remaining ones. We will follow this convention in future.
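For readers who want to experiment, here is a minimal transcription of these
definitions into Haskell, a close modern relative of Miranda. This is a sketch;
the primed names are ours, chosen to avoid the Prelude’s own foldr and sum.

-- foldr' replaces every Cons in a list with f, and the final Nil with x
foldr' :: (a -> b -> b) -> b -> [a] -> b
foldr' f x []      = x
foldr' f x (a : l) = f a (foldr' f x l)

-- sum as a specialization of foldr'
sum' :: [Integer] -> Integer
sum' = foldr' (+) 0

-- e.g. sum' [1, 2, 3] evaluates to 6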
Having modularized sum in this way, we can reap benefits by reusing the
parts. The most interesting part is foldr, which can be used to write down a
function for multiplying together the elements of a list with no further program-
ming:
product = foldr (∗) 1
It can also be used to test whether any of a list of booleans is true
anytrue = foldr (∨) False
or whether they are all true
alltrue = foldr (∧) True
One way to understand (foldr f a) is as a function that replaces all occurrences
of Cons in a list by f , and all occurrences of Nil by a. Taking the list [1, 2, 3]
as an example, since this means
Cons 1 (Cons 2 (Cons 3 Nil ))
then (foldr (+) 0) converts it into
(+) 1 ((+) 2 ((+) 3 0)) = 6
and (foldr (∗) 1) converts it into
(∗) 1 ((∗) 2 ((∗) 3 1)) = 6
Now it’s obvious that (foldr Cons Nil ) just copies a list. Since one list can be
appended to another by Consing its elements onto the front, we find
append a b = foldr Cons b a
As an example,
append [1, 2] [3, 4] = foldr Cons [3, 4] [1, 2]
= foldr Cons [3, 4] (Cons 1 (Cons 2 Nil ))
= Cons 1 (Cons 2 [3, 4])
(replacing Cons by Cons and Nil by [3, 4])
= [1, 2, 3, 4]
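In Haskell, where Cons is written (:) and Nil is [], these reuses of foldr read
as follows. Again a sketch, with primed names to avoid Prelude clashes:

product' :: [Integer] -> Integer
product' = foldr (*) 1          -- replace (:) with (*), [] with 1

anytrue, alltrue :: [Bool] -> Bool
anytrue = foldr (||) False      -- True if any element is True
alltrue = foldr (&&) True       -- True only if every element is True

append' :: [a] -> [a] -> [a]
append' a b = foldr (:) b a     -- re-Cons a's elements onto the front of b

-- e.g. append' [1, 2] [3, 4] == [1, 2, 3, 4]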
We can count the number of elements in a list using the function length, defined
by

length = foldr count 0
count a n = n + 1

because count increments 0 as many times as there are Conses. A function that
doubles all the elements of a list could be written as

doubleall = foldr doubleandcons Nil
doubleandcons n list = Cons (2 ∗ n) list

The function doubleandcons can be modularized further, first into

doubleandcons = fandcons double
double n = 2 ∗ n
fandcons f el list = Cons (f el ) list

and then by

fandcons f el = (Cons . f ) el

where “.” (function composition, a standard operator) is defined by

(f . g) h = f (g h)

We can see that the new definition of fandcons is correct by applying it to some
arguments:

fandcons f el = (Cons . f ) el
= Cons (f el )

so

fandcons f el list = Cons (f el ) list
The final version is
doubleall = foldr (Cons . double) Nil
With one further modularization we arrive at
doubleall = map double
map f = foldr (Cons . f ) Nil
where map — another generally useful function — applies any function f to all
the elements of a list.
We can even write a function to add all the elements of a matrix, represented
as a list of lists. It is
summatrix = sum . map sum
The function map sum uses sum to add up all the rows, and then the leftmost
sum adds up the row totals to get the sum of the whole matrix.
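Both definitions can be checked in Haskell; map' also demonstrates that map
really is foldr in disguise (primed names again to avoid the Prelude’s map and
sum):

map' :: (a -> b) -> [a] -> [b]
map' f = foldr ((:) . f) []     -- replace each Cons with "apply f, then Cons"

doubleall :: [Integer] -> [Integer]
doubleall = map' (* 2)

summatrix :: [[Integer]] -> Integer
summatrix = sum . map' sum      -- sum each row, then sum the row totals

-- e.g. summatrix [[1, 2], [3, 4]] == 10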
These examples should be enough to convince the reader that a little mod-
ularization can go a long way. By modularizing a simple function (sum) as a
combination of a “higher-order function” and some simple arguments, we have
arrived at a part (foldr) that can be used to write many other functions on lists
with no more programming effort.
We do not need to stop with functions on lists. As another example, consider
the datatype of ordered labeled trees, defined by
treeof ∗ ::= Node ∗ (listof (treeof ∗))
This definition says that a tree of ∗s is a node, with a label which is a ∗, and a
list of subtrees which are also trees of ∗s. For example, the tree
[Figure: a tree whose root is labeled 1 and has two children labeled 2 and 3;
the node labeled 3 has a single child labeled 4]

would be represented by
Node 1
(Cons (Node 2 Nil )
(Cons (Node 3
(Cons (Node 4 Nil ) Nil ))
Nil ))
Instead of considering an example and abstracting a higher-order function from
it, we will go straight to a function foldtree analogous to foldr. Recall that foldr
took two arguments: something to replace Cons with and something to replace
Nil with. Since trees are built using Node, Cons, and Nil , foldtree must take
three arguments — something to replace each of these with. Therefore we define
foldtree f g a (Node label subtrees) =
f label (foldtree f g a subtrees)
foldtree f g a (Cons subtree rest) =
g (foldtree f g a subtree) (foldtree f g a rest)
foldtree f g a Nil = a
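Transcribing foldtree into Haskell takes one small liberty: because the
equations above pattern-match on both nodes and lists of subtrees, a typed
language wants the fold split into two mutually recursive parts, one for trees
and one for lists of trees. A sketch:

-- an ordered labeled tree: a label and a list of subtrees
data Tree a = Node a [Tree a]

-- foldtree f g z replaces Node with f, Cons with g, and Nil with z
foldtree :: (a -> c -> b) -> (b -> c -> c) -> c -> Tree a -> b
foldtree f g z (Node label subtrees) = f label (foldsubs subtrees)
  where
    foldsubs (t : ts) = g (foldtree f g z t) (foldsubs ts)
    foldsubs []       = z

sumtree :: Tree Integer -> Integer
sumtree = foldtree (+) (+) 0

labels :: Tree a -> [a]
labels = foldtree (:) (++) []

maptree :: (a -> b) -> Tree a -> Tree b
maptree f = foldtree (Node . f) (:) []

-- e.g. sumtree (Node 1 [Node 2 [], Node 3 [Node 4 []]]) == 10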
Many interesting functions can be defined by gluing foldtree and other functions
together. For example, all the labels in a tree of numbers can be added together
using
sumtree = foldtree (+) (+) 0
Taking the tree we wrote down earlier as an example, sumtree gives
(+) 1
((+) ((+) 2 0)
((+) ((+) 3
((+) ((+) 4 0) 0))
0))
= 10
A list of all the labels in a tree can be computed using

labels = foldtree Cons append Nil

Here each label is Consed onto the appended lists of labels of its subtrees.
Taking the same tree as an example, labels gives

Cons 1
(append (Cons 2 Nil )
(append (Cons 3
(append (Cons 4 Nil ) Nil ))
Nil ))
= [1, 2, 3, 4]
Finally, one can define a function analogous to map which applies a function f
to all the labels in a tree:
maptree f = foldtree (Node . f ) Cons Nil
All this can be achieved because functional languages allow functions that are
indivisible in conventional programming languages to be expressed as combina-
tions of parts — a general higher-order function and some particular specializing
functions. Once defined, such higher-order functions allow many operations to
be programmed very easily. Whenever a new datatype is defined, higher-order
functions should be written for processing it. This makes manipulating the
datatype easy, and it also localizes knowledge about the details of its repre-
sentation. The best analogy with conventional programming is with extensible
languages — in effect, the programming language can be extended with new
control structures whenever desired.
4 Gluing Programs Together
The other new kind of glue that functional languages provide enables whole
programs to be glued together. Recall that a complete functional program is
just a function from its input to its output. If f and g are such programs, then
(g . f ) is a program that, when applied to its input, computes
g (f input)
The program f computes its output, which is used as the input to program g.
This might be implemented conventionally by storing the output from f in a
temporary file. The problem with this is that the temporary file might occupy
so much memory that it is impractical to glue the programs together in this way.
Functional languages provide a solution to this problem. The two programs f
and g are run together in strict synchronization. Program f is started only
when g tries to read some input, and runs only for long enough to deliver the
output g is trying to read. Then f is suspended and g is run until it tries to read
another input. As an added bonus, if g terminates without reading all of f ’s
output, then f is aborted. Program f can even be a nonterminating program,
producing an infinite amount of output, since it will be terminated forcibly as
soon as g is finished. This allows termination conditions to be separated from
loop bodies — a powerful modularization.
Since this method of evaluation runs f as little as possible, it is called “lazy
evaluation”. It makes it practical to modularize a program as a generator that
constructs a large number of possible answers, and a selector that chooses the
appropriate one. While some other systems allow programs to be run together
in this manner, only functional languages (and not even all of them) use lazy
evaluation uniformly for every function call, allowing any part of a program to
be modularized in this way. Lazy evaluation is perhaps the most powerful tool
for modularization in the functional programmer’s repertoire.
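The generator/selector split is easy to demonstrate in Haskell (a made-up toy
example, not from the paper): the generator produces an infinite list of
candidates, and the selector decides how much of it to consume.

-- generator: the infinite list of powers of two
powers :: [Integer]
powers = iterate (* 2) 1

-- selector: the first power of two exceeding a bound
firstOver :: Integer -> Integer
firstOver n = head (dropWhile (<= n) powers)

-- e.g. firstOver 1000 == 1024; only eleven elements are ever computed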
We have described lazy evaluation in the context of functional languages,
but surely so useful a feature should be added to nonfunctional languages —
or should it? Can lazy evaluation and side-effects coexist? Unfortunately, they
cannot: Adding lazy evaluation to an imperative notation is not actually impos-
sible, but the combination would make the programmer’s life harder, rather than
easier. Because lazy evaluation’s power depends on the programmer giving up
any direct control over the order in which the parts of a program are executed,
it would make programming with side effects rather difficult, because predicting
in what order —or even whether— they might take place would require knowing
a lot about the context in which they are embedded. Such global interdepen-
dence would defeat the very modularity that —in functional languages— lazy
evaluation is designed to enhance.
4.1 Newton-Raphson Square Roots

Our first example is the Newton-Raphson algorithm for computing
square roots. This algorithm computes the square root of a number n by starting
from an initial approximation a0 and computing better and better ones using
the rule
a_{i+1} = (a_i + n/a_i)/2
If the approximations converge to some limit a, then
a = (a + n/a)/2
so

2a = a + n/a
a = n/a
a ∗ a = n
a = √n
A square root finder takes a tolerance eps and stops when two successive
approximations differ by no more than eps. Since the algorithm computes a
sequence of approximations, it is natural to represent this sequence explicitly
as a list. Each approximation is derived from the previous one by the function

next n x = (x + n/x )/2

so (next n) maps one approximation onto the next, and the sequence of
approximations is

[a0, f a0, f (f a0), f (f (f a0)), . . .] where f = next n

We can define a function to compute this:

repeat f a = Cons a (repeat f (f a))
so that the list of approximations can be computed by
repeat (next n) a0
The function repeat is an example of a function with an “infinite” output — but
it doesn’t matter, because no more approximations will actually be computed
than the rest of the program requires. The infinity is only potential: All it means
is that any number of approximations can be computed if required; repeat itself
places no limit.
The remainder of a square root finder is a function within, which takes a
tolerance and a list of approximations and looks down the list for two successive
approximations that differ by no more than the given tolerance. It can be
defined by

within eps (Cons a (Cons b rest))
= b, if abs (a − b) ≤ eps
= within eps (Cons b rest), otherwise

Putting the parts together, we have

sqrt a0 eps n = within eps (repeat (next n) a0)

The tolerance test is itself a reusable part: for very large or very small
numbers an absolute tolerance is inappropriate, and a relative test can be
defined instead,

relative eps (Cons a (Cons b rest))
= b, if abs (a − b) ≤ eps ∗ abs b
= relative eps (Cons b rest), otherwise

and a new version of sqrt defined with relative in place of within. The part
that generates the approximations need not be touched.
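For readers following along in Haskell, the whole square-root finder fits in a
few lines; the standard function iterate plays the role of repeat. This is a
sketch, not the paper’s code:

-- one Newton-Raphson step for the square root of n
next :: Double -> Double -> Double
next n x = (x + n / x) / 2

-- selector: the first approximation within eps of its predecessor
-- (the approximation lists here are infinite, so the patterns suffice)
within :: Double -> [Double] -> Double
within eps (a : b : rest)
  | abs (a - b) <= eps = b
  | otherwise          = within eps (b : rest)
within _ _ = error "within: finite list"

squareRoot :: Double -> Double -> Double -> Double
squareRoot a0 eps n = within eps (iterate (next n) a0)

-- e.g. squareRoot 1 1e-12 2 gives 1.4142135623730951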
4.2 Numerical Differentiation

A good way to estimate the derivative of a function is by evaluating the function
at the given point and at another point nearby and computing the slope of a
straight line between the two points. This assumes that if the two points are
close enough together, then the graph of the function will not curve much in
between. This gives the definition
easydiff f x h = (f (x + h) − f x)/h
In order to get a good approximation the value of h should be very small.
Unfortunately, if h is too small then the two values f (x + h) and f (x) are very
close together, and so the rounding error in the subtraction may swamp the
result. How can the right value of h be chosen? One solution to this dilemma
is to compute a sequence of approximations with smaller and smaller values of
h, starting with a reasonably large one. Such a sequence should converge to the
value of the derivative, but will become hopelessly inaccurate eventually due to
rounding error. If (within eps) is used to select the first approximation that
is accurate enough, then the risk of rounding error affecting the result can be
much reduced. We need a function to compute the sequence:
differentiate h0 f x = map (easydiff f x ) (repeat halve h0)
halve x = x /2

Here h0 is the initial value of h, and successive values are obtained by repeated
halving. Given this function, the derivative at any point can be computed by

within eps (differentiate h0 f x )
Even this solution is not wholly satisfactory, because the sequence of
approximations converges fairly slowly. A little mathematics can help. The
elements of the sequence can be written as the right answer A plus an error term
roughly proportional to a power of h. Since each approximation is computed with
a value of h twice that used for the next one, two successive approximations can
be written as

a_i = A + B ∗ 2^n ∗ h^n
a_{i+1} = A + B ∗ h^n

The error term can then be eliminated, giving

A = (a_{i+1} ∗ 2^n − a_i )/(2^n − 1)
Of course, since the error term is only roughly a power of h this conclusion is
also approximate, but it is a much better approximation. This improvement
can be applied to all successive pairs of approximations using the function
elimerror n (Cons a (Cons b rest))
= Cons ((b ∗ 2^n − a)/(2^n − 1)) (elimerror n (Cons b rest))

Before elimerror can be used, the right value of n must be known. This is
difficult to predict in general but easy to measure; the following function
estimates it from three successive approximations:

order (Cons a (Cons b (Cons c rest)))
= round (log2 ((a − c)/(b − c) − 1))

A general function to improve any such sequence of approximations can now be
defined:

improve s = elimerror (order s) s

The derivative of a function f can be computed more efficiently with

within eps (improve (differentiate h0 f x ))

Moreover, an improved sequence still has an error term of the same form, so it
can be improved again, and again. An even more efficient way to compute
derivatives uses

super s = map second (repeat improve s)
second (Cons a (Cons b rest)) = b

The function super uses repeat improve to get a sequence of more and more improved se-
quences of approximations and constructs a new sequence of approximations
by taking the second approximation from each of the improved sequences (it
turns out that the second one is the best one to take — it is more accurate
than the first and doesn’t require any extra work to compute). This algorithm
is really very sophisticated — it uses a better and better numerical method as
more and more approximations are computed. One could compute derivatives
very efficiently indeed with the program:
within eps (super (differentiate h0 f x))
This is probably a case of using a sledgehammer to crack a nut, but the point
is that even an algorithm as sophisticated as super is easily expressed when
modularized using lazy evaluation.
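The differentiation pipeline also transcribes directly to Haskell. A sketch;
within is the same selector as in the square-root example:

easydiff :: (Double -> Double) -> Double -> Double -> Double
easydiff f x h = (f (x + h) - f x) / h

-- approximations to the derivative of f at x, with h halved at every step
differentiate :: Double -> (Double -> Double) -> Double -> [Double]
differentiate h0 f x = map (easydiff f x) (iterate (/ 2) h0)

within :: Double -> [Double] -> Double
within eps (a : b : rest)
  | abs (a - b) <= eps = b
  | otherwise          = within eps (b : rest)
within _ _ = error "within: finite list"

-- e.g. within 1e-6 (differentiate 0.1 sin 0) is approximately 1.0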
4.3 Numerical Integration

The last example in this section is numerical integration: given a real-valued
function f of one real argument and two points a and b, estimate the area under
the curve that f describes between the points. The simplest estimate assumes
that f is nearly a straight line between a and b, in which case the area is

easyintegrate f a b = (f a + f b) ∗ (b − a)/2

This estimate is accurate only if a and b are close together. A better estimate
divides the interval in two, estimates the area on each half, and adds the
results; a sequence of better and better approximations can be computed by the
recursive definition

integrate f a b = Cons (easyintegrate f a b)
(map addpair (zip2 (integrate f a mid )
(integrate f mid b)))
where mid = (a + b)/2
addpair (a, b) = a + b

The function zip2 is another standard list-processing function. It takes two lists
and returns a list of pairs, each pair consisting of corresponding elements of the
two lists. Thus the first pair consists of the first element of the first list and the
first element of the second, and so on. We can define zip2 by
zip2 (Cons a s) (Cons b t) = Cons (a, b) (zip2 s t)
In integrate, zip2 computes a list of pairs of corresponding approximations to
the integrals on the two subintervals, and map addpair adds the elements of the
pairs together to give a list of approximations to the original integral.
Actually, this version of integrate is rather inefficient because it continually
recomputes values of f . As written, easyintegrate evaluates f at a and at b,
and then the recursive calls of integrate re-evaluate each of these. Also, (f mid)
is evaluated in each recursive call. It is therefore preferable to use the following
version, which never recomputes a value of f :
integrate f a b = integ f a b (f a) (f b)
integ f a b fa fb = Cons ((fa + fb) ∗ (b − a)/2)
(map addpair (zip2 (integ f a m fa fm)
(integ f m b fm fb)))
where m = (a + b)/2
fm = f m
The function integrate computes an infinite list of better and better approxi-
mations to the integral, just as differentiate did in the section above. One can
therefore just write down integration routines that integrate to any required
accuracy, as in

within eps (integrate f a b)
relative eps (integrate f a b)
This integration algorithm suffers from the same disadvantage as the first dif-
ferentiation algorithm in the preceding subsection — it converges rather slowly.
Once again, it can be improved. The first approximation in the sequence is
computed (by easyintegrate) using only two points, with a separation of b − a.
The second approximation also uses the midpoint, so that the separation be-
tween neighboring points is only (b − a)/2. The third approximation uses this
method on each half-interval, so the separation between neighboring points is
only (b − a)/4. Clearly the separation between neighboring points is halved
between each approximation and the next. Taking this separation as h, the
sequence is a candidate for improvement using the function improve defined
in the preceding section. Therefore we can now write down quickly converging
sequences of approximations to integrals, for example,
super (integrate sin 0 4)
and
improve (integrate f 0 1)
where f x = 1/(1 + x ∗ x)
(This latter sequence is an eighth-order method for computing π/4. The second
approximation, which requires only five evaluations of f to compute, is correct
to five decimal places.)
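The efficient integrate transcribes to Haskell as below; zipWith (+) does the
work of map addpair composed with zip2. Again a sketch under the same
assumptions as before:

-- an infinite list of better and better trapezoid estimates
-- of the integral of f over [a, b]
integrate :: (Double -> Double) -> Double -> Double -> [Double]
integrate f a b = integ a b (f a) (f b)
  where
    -- flo and fhi are carried along so f is never evaluated twice
    -- at the same point
    integ lo hi flo fhi =
      (flo + fhi) * (hi - lo) / 2
        : zipWith (+) (integ lo m flo fm) (integ m hi fm fhi)
      where
        m  = (lo + hi) / 2
        fm = f m

within :: Double -> [Double] -> Double
within eps (a : b : rest)
  | abs (a - b) <= eps = b
  | otherwise          = within eps (b : rest)
within _ _ = error "within: finite list"

-- e.g. within 1e-4 (integrate sin 0 pi) is approximately 2.0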
In this section we have taken a number of numerical algorithms and pro-
grammed them functionally, using lazy evaluation as glue to stick their parts
together. Thanks to this, we have been able to modularize them in new ways,
into generally useful functions such as within, relative, and improve. By com-
bining these parts in various ways we have programmed some quite good nu-
merical algorithms very simply and easily.
5 An Example from Artificial Intelligence

We have argued that functional languages are powerful primarily because they
provide two new kinds of glue: higher-order functions and lazy evaluation. In
this section we take a larger example from Artificial Intelligence, the
alpha-beta heuristic, an algorithm for estimating how good a position a
game-player is in, and show how it can be programmed quite simply using these
two kinds of glue. The algorithm works by looking ahead to see how the game
might develop, but it avoids pursuing unprofitable lines.

Let game positions be represented by objects of the type position, and assume
there is a function

moves :: position → listof position

that, given a position, returns the list of positions reachable in one move.
The game that can develop from a given position is naturally represented by a
game tree, in which the nodes are labeled by positions and the children of a
node are labeled by the positions reachable in one move.

[Figure 2: Part of a game tree for tic-tac-toe.]

Such game trees are exactly like the trees we defined in Section 3 — each node has a
label (the position it represents) and a list of subnodes. We can therefore use
the same datatype to represent them.
A game tree is built by repeated applications of moves. Starting from the
root position, moves is used to generate the labels for the subtrees of the root.
It is then used again to generate the subtrees of the subtrees and so on. This
pattern of recursion can be expressed as a higher-order function,
reptree f a = Node a (map (reptree f ) (f a))
Using this function another can be defined which constructs a game tree from
a particular position:
gametree p = reptree moves p
As an example, consider Fig. 2. The higher-order function used here (reptree) is
analogous to the function repeat used to construct infinite lists in the preceding
section.
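In Haskell, reptree is the tree analogue of the standard iterate. A sketch, in
which moves stands for whatever move generator a particular game supplies:

data Tree a = Node a [Tree a]

-- the (potentially infinite) tree of everything reachable from a seed
reptree :: (a -> [a]) -> a -> Tree a
reptree f a = Node a (map (reptree f) (f a))

-- a game tree: positions, with one subtree per legal move
gametree :: (p -> [p]) -> p -> Tree p
gametree moves = reptree moves

-- e.g. reptree (\n -> [2 * n, 2 * n + 1]) 1
--      is the infinite tree of binary-heap indices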
The alpha-beta algorithm looks ahead from a given position to see whether
the game will develop favorably or unfavorably, but in order to do so it must be
able to make a rough estimate of the value of a position without looking ahead.
This “static evaluation” must be used at the limit of the look-ahead, and may
be used to guide the algorithm earlier. The result of the static evaluation
is a measure of the promise of a position from the computer’s point of view
(assuming that the computer is playing the game against a human opponent).
The larger the result, the better the position for the computer. The smaller the
result, the worse the position. The simplest such function would return (say)
1 for positions where the computer has already won, −1 for positions where
the computer has already lost, and 0 otherwise. In reality, the static evaluation
function measures various things that make a position “look good”, for example
material advantage and control of the center in chess. Assume that we have
such a function,
static :: position → number
Since a game tree is a (treeof position), it can be converted into a (treeof number)
by the function (maptree static), which statically evaluates all the positions in
the tree (which may be infinitely many). This uses the function maptree defined
in Section 3.
Given such a tree of static evaluations, what is the true value of the positions
in it? In particular, what value should be ascribed to the root position? Not
its static value, since this is only a rough guess. The value ascribed to a node
must be determined from the true values of its subnodes. This can be done by
assuming that each player makes the best moves possible. Remembering that
a high value means a good position for the computer, it is clear that when it is
the computer’s move from any position, it will choose the move leading to the
subnode with the maximum true value. Similarly, the opponent will choose the
move leading to the subnode with the minimum true value. Assuming that the
computer and its opponent alternate turns, the true value of a node is computed
by the function maximize if it is the computer’s turn and minimize if it is not:
maximize (Node n sub) = max (map minimize sub)
minimize (Node n sub) = min (map maximize sub)

Here max and min are functions on lists of numbers that return the maximum
and minimum of the list respectively. These definitions are not complete because
they recurse forever — there is no base case. We must define the value of a node
with no successors, and we take it to be the static evaluation of the node (its
label). Therefore the static evaluation is used when either player has already
won, or at the limit of look-ahead. The complete definitions of maximize and
minimize are
maximize (Node n Nil ) = n
maximize (Node n sub) = max (map minimize sub)
minimize (Node n Nil ) = n
minimize (Node n sub) = min (map maximize sub)
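In Haskell, with maximum and minimum from the Prelude, the pair reads as
follows (a sketch over the same Tree type as before):

data Tree a = Node a [Tree a]

maximize, minimize :: Tree Int -> Int
maximize (Node n [])  = n                           -- limit of look-ahead
maximize (Node _ sub) = maximum (map minimize sub)  -- computer picks the best
minimize (Node n [])  = n
minimize (Node _ sub) = minimum (map maximize sub)  -- opponent picks the worst

-- e.g. maximize (Node 0 [Node 0 [Node 1 [], Node 2 []],
--                        Node 0 [Node 0 [], Node 7 []]])  ==  1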
One could almost write a function at this stage that would take a position and
return its true value. This would be:
evaluate = maximize . maptree static . gametree
There are two problems with this definition. First of all, it doesn’t work for
infinite trees, because maximize keeps on recursing until it finds a node with
no subtrees — an end to the tree. If there is no end then maximize will return
no result. The second problem is related — even finite game trees (like the one
for tic-tac-toe) can be very large indeed. It is unrealistic to try to evaluate the
whole of the game tree — the search must be limited to the next few moves.
This can be done by pruning the tree to a fixed depth:

prune 0 (Node a x ) = Node a Nil
prune n (Node a x ) = Node a (map (prune (n − 1)) x )
The function (prune n) takes a tree and “cuts off” all nodes further than n from
the root. If a game tree is pruned it forces maximize to use the static evaluation
for nodes at depth n, instead of recursing further. The function evaluate can
therefore be defined by
evaluate = maximize . maptree static . prune 5 . gametree
which looks (say) five moves ahead.
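Putting the development so far into one self-contained Haskell sketch: here the
game-specific parts (moves and static) are passed as parameters, a packaging
choice of ours rather than the paper’s.

data Tree a = Node a [Tree a]

reptree :: (a -> [a]) -> a -> Tree a
reptree f a = Node a (map (reptree f) (f a))

maptree :: (a -> b) -> Tree a -> Tree b
maptree f (Node a sub) = Node (f a) (map (maptree f) sub)

prune :: Int -> Tree a -> Tree a
prune 0 (Node a _) = Node a []                      -- stop looking ahead here
prune n (Node a x) = Node a (map (prune (n - 1)) x)

maximize, minimize :: Tree Int -> Int
maximize (Node n [])  = n
maximize (Node _ sub) = maximum (map minimize sub)
minimize (Node n [])  = n
minimize (Node _ sub) = minimum (map maximize sub)

-- lazy evaluation builds only the parts of the tree maximize inspects
evaluate :: (p -> [p]) -> (p -> Int) -> p -> Int
evaluate moves static =
  maximize . maptree static . prune 5 . reptree moves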
Already in this development we have used higher-order functions and lazy
evaluation. Higher-order functions reptree and maptree allow us to construct
and manipulate game trees with ease. More importantly, lazy evaluation permits
us to modularize evaluate in this way. Since gametree has a potentially infinite
result, this program would never terminate without lazy evaluation. Instead of
writing
prune 5 . gametree
we would have to fold these two functions together into one that constructed
only the first five levels of the tree. Worse, even the first five levels may be too
large to be held in memory at one time. In the program we have written, the
function
maptree static . prune 5 . gametree
constructs parts of the tree only as maximize requires them. Since each part
can be thrown away (reclaimed by the garbage collector) as soon as maximize
has finished with it, the whole tree is never resident in memory. Only a small
part of the tree is stored at a time. The lazy program is therefore efficient.
This efficiency depends on an interaction between maximize (the last function
in the chain of compositions) and gametree (the first); without lazy evaluation,
therefore, it could be achieved only by folding all the functions in the chain
together into one big one. This would be a drastic reduction in modularity,
but it is what is usually done. We can make improvements to this evaluation
algorithm by tinkering with each part; this is relatively easy. A conventional
programmer must modify the entire program as a unit, which is much harder.
So far we have described only simple minimaxing. The heart of the alpha-
beta algorithm is the observation that one can often compute the value returned
by maximize or minimize without looking at the whole tree. Consider the tree:
[Figure: a maximum node with two minimum nodes below it; the first minimum
node has leaves 1 and 2, and the second has leaves 0 and ?]
Strangely enough, it is unnecessary to know the value of the question mark
in order to evaluate the tree. The left minimum evaluates to 1, but the right
minimum clearly evaluates to something at most 0. Therefore the maximum of
the two minima must be 1. This observation can be generalized and built into
maximize and minimize.
The first step is to separate maximize into an application of max to a list
of numbers; that is, we decompose maximize as
maximize = max . maximize′
(We decompose minimize in a similar way. Since minimize and maximize are
entirely symmetrical we shall discuss maximize and assume that minimize is
treated similarly.) Once decomposed in this way, maximize can use minimize′,
rather than minimize itself, to discover which numbers minimize would take
the minimum of. It may then be able to discard some of the numbers without
looking at them. Thanks to lazy evaluation, if maximize doesn’t look at all of
the list of numbers, some of them will not be computed, with a potential saving
in computer time.
It’s easy to “factor out” max from the definition of maximize, giving

maximize′ (Node n Nil ) = Cons n Nil
maximize′ (Node n l ) = mapmin (map minimize′ l )
where mapmin = map min

Since minimize′ returns a list of numbers, the minimum of which is the result of
minimize, (map minimize′ l ) returns a list of lists of numbers, and maximize′
should return a list of those lists’ minima. Only the maximum of this list
matters, however. We shall define a new version of mapmin that omits the
minima of lists whose minimum doesn’t matter.
mapmin (Cons nums rest)
= Cons (min nums) (omit (min nums) rest)
The function omit is passed a “potential maximum” — the largest minimum
seen so far — and omits any minima that are less than this:
omit pot Nil = Nil
omit pot (Cons nums rest)
= omit pot rest, if minleq nums pot
= Cons (min nums) (omit (min nums) rest), otherwise

The function minleq tests whether the minimum of a list of numbers is less than
or equal to the potential maximum; as soon as it finds one element that is, it
need look no further:

minleq Nil pot = False
minleq (Cons n rest) pot = True, if n ≤ pot
= minleq rest pot, otherwise

Thanks to lazy evaluation, minleq inspects no more of the list than necessary,
and the numbers it never inspects are never computed.

The alpha-beta algorithm works best if the best moves are considered first,
since one good move found early makes the remaining alternatives cheap to
dismiss. The subtrees of each node can therefore be sorted so that, at the
computer’s moves, the highest values come first, and at the opponent’s moves,
the lowest:

highfirst (Node n sub) = Node n (sort higher (map lowfirst sub))
lowfirst (Node n sub) = Node n (sort (not . higher) (map highfirst sub))
higher (Node n1 sub1 ) (Node n2 sub2 ) = n1 > n2

where sort is a general-purpose sorting function. The evaluator can now be
defined by

evaluate
= max . maximize′ . highfirst . maptree static . prune 8 . gametree

Since bad moves are dismissed quickly, we can afford a deeper search, here
eight moves ahead instead of five.
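For completeness, here is the pruned search in Haskell. This sketch implements
maximize′, minimize′, omit, and minleq as in the text (with a dual omit for the
minimizing side), but leaves out the highfirst reordering and depth heuristics:

data Tree a = Node a [Tree a]

-- maximize' returns a list whose maximum is the true value; thanks to
-- laziness, omit can discard whole sublists without evaluating them
maximize', minimize' :: Tree Int -> [Int]
maximize' (Node n [])  = [n]
maximize' (Node _ sub) = mapmin (map minimize' sub)
  where
    mapmin (nums : rest) = minimum nums : omit (minimum nums) rest
    mapmin []            = []
    -- skip any sublist whose minimum cannot beat the potential maximum
    omit _   []          = []
    omit pot (nums : rest)
      | minleq nums pot = omit pot rest
      | otherwise       = minimum nums : omit (minimum nums) rest
    -- is the minimum of nums <= pot?  stops at the first witness
    minleq []       _   = False
    minleq (n : ns) pot = n <= pot || minleq ns pot

minimize' (Node n [])  = [n]
minimize' (Node _ sub) = mapmax (map maximize' sub)
  where
    mapmax (nums : rest) = maximum nums : omit (maximum nums) rest
    mapmax []            = []
    -- dually, skip any sublist whose maximum cannot undercut the minimum
    omit _   []          = []
    omit pot (nums : rest)
      | maxgeq nums pot = omit pot rest
      | otherwise       = maximum nums : omit (maximum nums) rest
    maxgeq []       _   = False
    maxgeq (n : ns) pot = n >= pot || maxgeq ns pot

evaluate :: Tree Int -> Int
evaluate = maximum . maximize'

-- on the text's example the "?" leaf is never forced:
-- evaluate (Node 0 [Node 0 [Node 1 [], Node 2 []],
--                   Node 0 [Node 0 [], Node undefined []]])  ==  1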
One might regard it as sufficient to consider only the three best moves for the
computer or the opponent, in order to restrict the search. To program this, it
is necessary only to replace highfirst with (taketree 3 . highfirst), where

taketree n = foldtree (nodett n) Cons Nil
nodett n label sub = Node label (take n sub)
The function taketree replaces all the nodes in a tree with nodes that have at
most n subnodes, using the function (take n), which returns the first n elements
of a list (or fewer if the list is shorter than n).
Another improvement is to refine the pruning. The program above looks
ahead a fixed depth even if the position is very dynamic — it may decide to
look no further than a position in which the queen is threatened in chess, for
example. It’s usual to define certain “dynamic” positions and not to allow look-
ahead to stop in one of these. Assuming a function dynamic that recognizes
such positions, we need only add one equation to prune to do this:

prune 0 (Node pos sub)
= Node pos (map (prune 0) sub), if dynamic pos
6 Conclusion
In this paper, we’ve argued that modularity is the key to successful program-
ming. Languages that aim to improve productivity must support modular pro-
gramming well. But new scope rules and mechanisms for separate compilation
are not enough — modularity means more than modules. Our ability to de-
compose a problem into parts depends directly on our ability to glue solutions
together. To support modular programming, a language must provide good
glue. Functional programming languages provide two new kinds of glue —
higher-order functions and lazy evaluation. Using these glues one can modular-
ize programs in new and useful ways, and we’ve shown several examples of this.
Smaller and more general modules can be reused more widely, easing subsequent
programming. This explains why functional programs are so much smaller and
easier to write than conventional ones. It also provides a target for functional
programmers to aim at. If any part of a program is messy or complicated, the
programmer should attempt to modularize it and to generalize the parts. He or
she should expect to use higher-order functions and lazy evaluation as the tools
for doing this.
Of course, we are not the first to point out the power and elegance of higher-
order functions and lazy evaluation. For example, Turner shows how both can
be used to great advantage in a program for generating chemical structures [3].
Abelson and Sussman stress that streams (lazy lists) are a powerful tool for
structuring programs [1]. Henderson has used streams to structure functional
operating systems [2]. But perhaps we place more stress on functional programs’
modularity than previous authors.
This paper is also relevant to the present controversy over lazy evaluation.
Some believe that functional languages should be lazy; others believe they
should not. Some compromise and provide only lazy lists, with a special syntax
for constructing them (as, for example, in Scheme [1]). This paper provides
further evidence that lazy evaluation is too important to be relegated to second-
class citizenship. It is perhaps the most powerful glue functional programmers
possess. One should not obstruct access to such a vital tool.
Acknowledgments
This paper owes much to many conversations with Phil Wadler and Richard
Bird in the Programming Research Group at Oxford. Magnus Bondesson at
Chalmers University, Göteborg, pointed out a serious error in an earlier version
of one of the numerical algorithms, and thereby prompted development of many
of the others. Ham Richards and David Turner did a superb editorial job,
including converting the notation to Miranda. This work was carried out with
the support of a Research Fellowship from the U.K. Science and Engineering
Research Council.
References
[1] Abelson, H. and Sussman, G. J. The Structure and Interpretation of Com-
puter Programs. MIT Press, Cambridge, Mass., 1984.
[2] Henderson, P. “Purely functional operating systems”. In Functional Pro-
gramming and its Applications. Cambridge University Press, Cambridge,
1982.
[3] Turner, D. A. “The semantic elegance of applicative languages”. In ACM
Symposium on Functional Languages and Computer Architecture (Went-
worth, N.H.). ACM, New York, 1981.
[4] Turner, D. A. “An Overview of Miranda”. SIGPLAN Notices, December
1986 (this and other papers about Miranda are at: http://miranda.org.uk).
[5] United States Department of Defense. The Programming Language Ada Ref-
erence Manual. Springer-Verlag, Berlin, 1980.
[6] Wirth, N. Programming in Modula-2. Springer-Verlag, Berlin, 1982.