Eight Theses Reflecting on Stephen Toulmin

John Woods
Department of Philosophy
University of British Columbia
1866 Main Mall
Vancouver B.C.
V6T1Z1
e-mail: jhwoods@interchange.ubc.ca
Web-page: www.johnwoods.ca
and
Department of Computer Science
King’s College
Strand
London
WC2R 2LS
UK
e-mail: woodsj@dcs.kcl.ac.uk


My title embodies an ambiguity that I hope to make something of. In one sense, it
suggests a thesis that Stephen Toulmin himself espouses or is committed to. In another, it
suggests a thesis held by me, but occasioned, in whole or in part, by reflecting on
Toulmin’s writings. Of course, the two senses are not robustly disjoint. My principal purpose is to lend these theses some degree of favour, if not in every case as theses of Toulmin’s own making, then perhaps as theses prompted by Toulmin’s example.

Thesis one. The validity standard is nearly always the wrong standard for real-life
reasoning. It is widely assumed that valid argument is nearly the best there is, improved
upon only by argument that is sound. When made to note that actual reasoners hardly
ever attain the validity standard, the received response is to make the best of a bad thing,
insisting that, for beings like us, reasoning is best when it most closely approximates to
the strict canons of deduction. Against this, cooler heads counsel that the validity
standard is best only when a reasoner’s target is such as to call for it, as when, for
example, one seeks a proof of a proposition of set theory. But even this is wrong. It is
wrong in the sense that it fails to make clear how deeply the validity standard is
embedded in a network of constraints. When a mathematician wants a proof, it is always
as a proof of some proposition P. Further, it is nearly always wanted as a proof that draws
its premisses from the settled lore of mathematics. Taken alone, validity is useless. Its
value rests entirely on its indissoluble link with the other components of proof.
Even when an argument attains it, validity is a rather passive standard. It can be
rendered impotent with a single disruption of the reasoner’s knowledge-set. On receipt of
new information K that contradicts the desired-to-be-proved P, the reasoner’s present
valid argument remains valid, but the proof is lost. Axiomatic approaches to mathematics
owe a good deal of their motivation to an interest in dealing with the passivity of validity.
For if the inputs to the theory’s deductive apparatus could carry assurances of their truth,
no K would emerge to contradict any P for which a valid argument for it exists. But, long
since, such optimism about axiomatics has been driven by well-known paradoxes into the
proffered harbours of system-relativity or other forms of ad hoc sanctuary. So validity
remains an oddly inert standard.
Validity’s station ensures that proof is a brittle accomplishment. Validity is
wholly indifferent to new information. The premissory successor of a valid argument is a
valid argument. This means that if ever we were mad enough to set validity as the target
of our reasoning, achieving it would constitute wholly adequate grounds for shutting the
enquiry down. This makes us see how wrong it is to think of validity as the goal of good
argument. The truth is that a proof-of-P is (sometimes) the goal of good argument. But it
is an expensive goal. It cannot tolerate any case in which, though a K exists that
contradicts P, the argument for P retains some positive force. Most reasoning is unlike
this. Arguments for are met with arguments against, often in circumstances that leave in
contention both the one reasoner’s P and the other reasoner’s contrary of it. But in a
proof-context for which a K exists, not only must P be abandoned, but the proof-
principles that abetted its derivation have to be revised, or precious theorems surrendered.
This is expensive work in the cognitive economy. Very often, it is work of a kind that we
haven’t time for.
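
To fix ideas, the property just described, namely that a valid argument remains valid under any addition of premisses, is the monotonicity of classical consequence. The display below is a standard textbook formulation supplied by way of gloss, not drawn verbatim from the text:

    % Monotonicity of classical consequence (a sketch; Gamma is a premiss set,
    % P a conclusion, K any further item of information).
    \[
      \text{If } \Gamma \vDash P \text{, then } \Gamma \cup \{K\} \vDash P .
    \]
    % Even a K that contradicts P leaves the original argument valid;
    % what it destroys is the proof-of-P, not the validity.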

If the argument here displayed is taken deductively:

1) Ocelots are four-legged
2) Ozzie is an ocelot
3) Therefore, Ozzie is four-legged

then (3) is defeated if Ozzie is three-legged, and so is (1). But if, instead of being seen as
a universally quantified conditional, (1) is taken as a generic claim ([Carlson and
Pelletier, 1995]), then Ozzie’s three-leggedness defeats (3) but leaves (1) standing. Re-
writing (1) as a universally quantified conditional that is impervious to Ozzie’s
discouragement is a notoriously difficult business ([Gabbay and Woods, 2005b]). But if we take it generically (and re-write “therefore” as a default operator), we achieve two advantages.
We avoid the cost of exceptionalizing (1), and we conform to how we reason about such
things anyway. This is not to say that a proof-of-P target is always wrong for us. Far from
it. But it is brittle, that is, easy to wreck, and expensive, that is, difficult to fix.
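
For concreteness, the two readings of (1) just contrasted can be displayed as follows. The notation is supplied by way of gloss, with the generic rendered, for illustration only, as a Reiter-style default rule; nothing here is drawn verbatim from the text:

    % (1a) The universal reading: falsified outright by a three-legged Ozzie.
    \[
      \forall x\,(\mathrm{Ocelot}(x) \rightarrow \mathrm{FourLegged}(x))
    \]
    % (1b) The generic reading as a default: if x is an ocelot and it is consistent
    % to assume that x is four-legged, conclude by default that x is four-legged.
    \[
      \frac{\mathrm{Ocelot}(x) \;:\; \mathrm{FourLegged}(x)}{\mathrm{FourLegged}(x)}
    \]
    % Ozzie's three-leggedness blocks the default for Ozzie alone; (1b) itself stands.
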
Although a part of the proof standard in mathematics, validity has no purchase in
any reasoning that aims (however tentatively) at conclusions carrying information not
fully conveyed by the premisses. One of the first tasks of deductivism is to establish that
the divide between truth-preservation and ampliation is not as chasmatic as usually
supposed. I lack time for deductivists here. So I will say only that I wish them luck. They
will need it.
Perhaps this would be a good place to make a certain rather general point. It is
that in the situations in which real-life participants actually find themselves, targets are
usually contextually-cued. Rather than announcing themselves, they have to be attributed.
Very often such attributions are made in conditions of uncertainty. When this fact is laid
beside a second general fact, interesting consequences ensue. This second fact is that, by
and large, beings like us are cognitive adepts. We tend to get most of the right things
right enough to matter. So we survive, we prosper, and occasionally build great
civilizations. Taken together this pair of facts suggests that in most situations we should
hesitate to attribute to other parties cognitive targets that their behaviour then and there
runs foul of. If this is right, then it is a kind of default position that the invalidity of a
person’s argument is reason not to attribute to him a validity-demanding target. Another
way of saying much the same thing is that an argument’s invalidity is typically no
grounds for judging it negatively. (This is the essential import of the Charity Principle. Of
course, it’s got nothing to do with charity.)
But then why is it, we might ask, that since its inception logic has so steadfastly
thrown in its lot with deduction? Here, too, there is more to be said than can be said in this note. Even so, it cannot seriously be doubted that logic’s favouritism towards
deduction is explained in large part by its impressive success at getting deduction to
surrender its secrets. Deductive logic flourishes because logicians have long since figured
out how to do it. Nothing succeeds like success. (But see just below.)

Thesis two. Little in good reasoning is topic-neutral. Aristotle was a master reductionist,
and a strategically adroit compacter of complexity. In On Interpretation Aristotle
asserted that everything stateable in Greek is stateable without relevant loss in the
language of (what we know as) categorical propositions. In the Analytics, Aristotle said,

and repeated, that all correct reasoning reduces to syllogistic. Had Aristotle been right in
the first instance, he would have achieved a striking economy in logical grammar. Had he
been right in the second instance, he would have had (thanks to the perfectibility proof)
something approaching effective recognizability for all reasoning. But not even Aristotle
thought that correct reasoning could be detached from the protocols of premiss-selection,
concerning which he would insist on what he called premissory “appropriateness”. In one
strain of universalism espoused by more recent logicians, an argument’s soundness owes
nothing to premiss-content. This topic-neutrality of modern formal logic is strictly a
matter of the uninterpretedness of the atomic elements of the language. This, by the way, is a huge encumbrance for any logic that seeks a regulatory role in reasonings transacted in human languages. (The principal reason is that the atomic components of any natural language often stand in logical relations to one another.) Logicians seek to minimize the gap between logic’s uninterpreted derivations and natural language reasoning by supplementing the formal language with logical particles, which do admit of interpretation after a fashion. The
upshot is a proliferation of systems, concomitant with the growth of logical particles; and
although valid arguments remain valid under any valuation of atomic components, the
validity is now system-relative, not universal.
Before and after Stephen Toulmin’s heretical insistence that logically correct
reasoning be made sensitive to disciplinary peculiarities, post-Fregean logicians could
hardly have been unaware that the purported universality of pure logic would be of scant
use to the deductive sciences.1 Arising therefrom was the idea that set theory was part of
logic. My view is that either it is or it isn’t. If it is, pure logic is not universal even in the
sense of system-relative topic-neutrality. If it isn’t, pure logic can’t capture all the
deductions even of mathematics, and so must abandon any pretence of being the über-
theorie of the deductive sciences.
One might think that there is an attenuated sense in which something like first-
order logic does achieve a kind of universality. This is the sense in which the analyses it
makes of its target properties – validity, entailment, logical truth, consistency – are
correct for any context in which these properties are either invoked or attributed. On this
view, entailment is entailment, whether in macroeconomics or biochemistry or politics.
But no one who has even a nodding acquaintance with the sheer scope of today’s
rivalrous pluralism in logic can make this claim for universality with any prospect of
serenity. Every one of these properties is the subject of unsettled land claims; and even on
those few occasions when treaties have been signed, they all take the low road (by
universalists’ lights) of domain-relativity.
Weeks ago there occurred in Montreux the first international conference on
Universal Logic. Part of what the organizers sought to achieve was some indication of the
kind of all-embracing logical architecture that might offer, however guardedly, promise
of universalizing logic’s pluralistic sprawl. Judging from the papers read there, including
my own, the organizers will have been left with no alternative but to trudge home and
kick the cat.

1
“No more than any other science can mathematics be founded by logic alone; rather . . . something must already be given to us in our faculty of representation . . . .” ([Hilbert, 1927, p. 464])

Thesis three. The probability calculus distorts much of probabilistic reasoning. In 1953,
Stephen Toulmin wrote as follows: “Starting with a study of the syllogism, the
probability calculus and the calculus of classes, and then coming to the physical sciences,
logicians have been misled by their earlier preoccupations and interests, vested as they
are in formal systems of considerable refinement and elaboration, into looking for the
wrong things.” ([Toulmin, 1953, p. 49]) Later in the same work he expands upon this
point.

The mathematical theory of probability has some place in the process of theory-
establishing, certainly; but is a more restricted one than logicians have thought. It
has a central place only in limited branches of theory, such as statistical
mechanics and parts of quantum mechanics . . . . The application of the calculus
of probability in this sort of way raises no general questions of a philosophical
kind, but only particular questions of statistical technique: questions to be
answered in terms of the theory of curve-fitting, significant deviations and so on
([Toulmin, 1953, pp. 112-113]).

If the behaviour of individual agents is anything to go on, the standard accounts of inductive inference constitute a significant distortion of the actual record. Can the same
be said for the linked issue of probabilistic reasoning in the here-and-now? James
Franklin sees in probability an interesting parallel with the concepts of continuity and
perspective ([Franklin, 2001]). All three of these things took a long time before yielding
to mathematical formulation, and before that happened, judgements of them tended to be
unconscious and mistaken.
I have a somewhat different version of this story. Sometimes a conceptually
inchoate idea is cleaned up by a subsequent explication of it. Sometimes these
clarifications are achieved by modelling the target notion mathematically. Sometimes the
clarification could not have been achieved save for the mathematics. We may suppose
that something like this proved to be the case with perspective and continuity. To the
extent that this is so, anything we used to think of these things which didn’t make its way
into the mathematical model could be considered inessential if not just mistaken. It is
interesting to reflect on how well this line of thought fits the case of probability.
In raising the matter, we are calling attention to two questions. (1) What was
probability like before Pascal? (2) How do we now find it to be? Concerning the first of
this pair of questions, I think that we may say that, in their judgements under conditions
of uncertainty, people routinely smudged such distinctions as may have obtained between
‘it is probable that,’ ‘it is plausible that,’ and ‘it is possible that.’ If we run a strict version
of the present line over this trio, then not making it into the calculus of probability leaves
all that is left of these blurred idioms in a probabilistically defective state. There is a
sense in which this is not the wrong thing to conclude, but it is a trivial one. For if what
we sometimes intend by ‘probability’ fails to find a welcome in the probability calculus,
then it is not a fact about probability that the probability calculus honours. But unlike
what may have been the case with perspective and continuity, we must take care not to
say without further ado that those inferences that don’t make the Pascalian cut are
mistakes of reason.
Let us take it that, unlike perspective and continuity, idioms of probability (or
probability/plausibility/possibility) that don’t cut the Pascalian mustard leave residues of
philosophically interesting usage. If this were so, there might well be philosophically

important issues, the successful handling of which requires the wherewithal of this
conceptual residue. Again, standard answers to Kahneman-Tversky questions don’t make
the grade of aleatory probability, but they might well comport with conditions on
plausible reasoning. What, then, are we to say? That these bright, well-educated subjects
are Pascalian misfits or that they are more comfortably at home (though unconsciously)
with a plausibility construal of their proffered tasks? If we say the second, we take on an
onus we might not quite know how to discharge. It is the task of certifying the conditions
under which these non-Pascalian manoeuvres are well-justified. In lots of cases, we won’t
have much of a clue as to how to achieve these elucidations. Small wonder, then, that
what I call the Can Do Principle beckons so attractively ([Woods, 2003],
[Gabbay and Woods, 2003, 2005a]). This is the principle that bids the theorist who is
trying to solve a problem P to stick with what he knows and, if possible, to adapt what he
knows to the requirements of P. One of the great attractions of Pascalian probability is
that we know how to axiomatize it. Can Do is right to emphasize the advantage to be
gained if we could somehow bend the probability calculus to the task to hand. But
sometimes, the connection just can’t be made.
It is worth repeating in any event that targets usually have to be attributed. When
a reasoner’s behaviour is flect with the idioms of probability, it is safe to assume that his
reasoning embeds a given concept, K, of probability. But in general, this alone leaves the
identity of the embedded probability concept underdetermined. Here, too, a certain
caution is called for. If the other party’s behaviour turns out to mismanage K – if it is
defective K-reasoning – that is some reason not to attribute to him an interest in K-
reasoning.
How, then, did Kahneman and Tversky know that their subjects were working
with an aleatory conception of probability (and making a bad fist of it)? The received
answer is that they instructed their subjects so to do. Given the experimental results, their
subjects turned out to be either insubordinate or aleatory misfits. An alternative
possibility is that the Kahneman-Tversky subjects surrendered to non-aleatory urges
triggered by the propositional content of the experimental information, in ways that call
into question the calculus’ assumption of probabilistic independence.
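
For reference, and as a gloss rather than a claim about what Kahneman and Tversky themselves say, the independence assumption in question is the textbook one:

    % Events A and B are probabilistically independent just in case
    \[
      \Pr(A \wedge B) \;=\; \Pr(A)\,\Pr(B),
      \qquad\text{equivalently}\qquad
      \Pr(A \mid B) \;=\; \Pr(A) \quad (\Pr(B) > 0).
    \]
    % If the vignette's propositional content B is evidentially relevant to the
    % target event A, then Pr(A | B) differs from Pr(A) and independence lapses.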

Thesis four. Scant resources have a benign influence on human reasoning. St. Augustine
speaks of “the eros of the mind”. Here is Peirce to the same effect, though with less
passion, in which love is demoted to an itch:

The action of thought is excited by the irritation of doubt and ceases when belief
is attained; so that the production of belief is the sole function of thought ([Peirce,
1931-1958]).

This is a salutary reminder. In a way, it is so obvious as to be effortlessly forgettable that reasoning is wholly without point or value except as facilitating something else. When
Leibniz proposed Calculemus, the last thing he had in mind was calculation for its own
sake. In the main, we value reasoning for the role it plays in belief-fixation and decision-
making. Accordingly, reasoning is judgeable only in relation to an agent’s cognitive
agenda and the cognitive resources available for him in advancing it. Reasoning is also
sometimes involved in more purely dialectical or rhetorical ends. Argumentation theorists

are quite right to take note of this; and quite wrong to give it so central a place in their
speculations. In its most basic employment, reasoning is an aid to cognition. In contexts
of cognitive disagreement, it is unavoidable that various dialectical constraints be
honoured, if only to discourage question-begging and what Aristotle called “babbling”.
But these constraints flow not from the nature of reasoning but rather from the nature of
conflict management.
Agents of all stripes, ourselves as well as NASA and MI5, operate under press of
scant resources. These include information, time and computational capacity, and, often
enough, infrastructural and cultural encouragement, and, of course, money. There is, however, a marked difference between individual and institutional agents. In most
matters, institutional agents command resources in quantities that individuals could not
begin to manage. Agents tend to set their targets in light of the resources available for
facilitating their attainment. This serves to mark off a further difference between
individual and institutional agents. Given the comparative scantness of his cognitive
resources, an individual agent will set targets of concomitantly comparable modesty.
In most things, then, individuals fall considerably short of the standards
championed by mainstream logicians. But it is a considerable mistake to equate these
shortfalls with the cognitively subpar, still less with a failure of rationality. The reason
for saying so, in large part anyhow, is that in most things the standards of deductive and
inductive logic are embedded in cognitive agendas that it would be unreasonable for the
individual to set for himself.
J.S. Mill was on to something important when he observed in A System of Logic
([Mill, 1959]) that inductive reasoning is the proper preserve of societies rather than
individuals. Yet it seems that we simply cannot forbear in telling our students, year in and
year out, that their ampliative reasonings are subpar to the extent that they fall below the
standard of inductive strength.
Reasoning, I say, facilitates cognition. Cognition succeeds when certain
conditions are met. One has only to look at actual – and largely successful – human
practice to see that our cognitive behaviour implies a fallibilist epistemology. That being
so, it is part of a human reasoner’s rationality that he reason with a circumspection
appropriate to his recognition, going in, that his procedures embed the practical certainty
of occasional error.2 If logic is a science of reasoning, it must take into account – indeed
it must honour – the fallibility of the human reasoner when reasoning as he should. The
two mainstream logics leave this duty largely unperformed. Deductive logic embeds an
epistemology of Cartesian error-elimination, and inductive logic embeds an epistemology
of scientific rectitude that attends such things as drug trials by Health Canada. It cannot
be surprising, therefore, that these logics are in the main massively wrong for cognitive
beings like us. Pascalianized inductive strength may be fine for some of what NASA
does, but the individual who is presently seized with an on-rushing tiger experience
would be well-advised to forgo the experimental method.

2
For present purposes, let’s say that an error is something that its committer has an implied interest in
reversing himself on. One of the great virtues of fallibilism is the pressure that it puts on theorists to pay
serious attention to the multiple things that collect under the name of “error”. Fallacy theorists have a
considerable stake in this.

Thesis five. Theoretical progress and conceptual change are connected. Now that the
Humanities are awash in the various scepticisms of what laughably is called “post
modernism”, Stephen Toulmin’s heterodoxies about conceptual change may strike us as
small beer. The trouble with post modernism is not its relativities and its constructivisms;
the trouble rests with the pinheads in English Departments, Faculties of Law and
Education, and elsewhere, who construct the arguments on their behalf. If there is a
“fallacy of understatement”, it could not better be instanced than by the assertion that
Stephen Toulmin is no pinhead. In Toulmin’s hands, and later in Paul Thagard’s,
([Thagard, 1992]) a central idea is that scientific advancement is driven by conceptual
change. This is a principal thesis of The Philosophy of Science, where it is defended with
subtlety and power. In Toulmin’s telling, conceptual change comes with new ways of
modelling correlations. Modelling is a way of seeing things, and how one sees things is a
function of what one is able to see, and what one is interested in seeing. These are a large
part of what makes for the restrained historicism that flavours Toulmin’s epistemology.
When The Philosophy of Science made its brazen, cocky appearance, the
philosophical mainstream was scandalized. It wasn’t that the book didn’t receive some
good reviews (all of Toulmin’s books have found receptive critics), but rather that the
book’s doctrines remained decidedly a minority position among philosophers of science.3
This is decidedly odd. At mid-century, English-speaking philosophy – especially its more
technical branches – was agog over revolutionary attainments in arithmetic, semantics
and physics. In each case, the prime movers of these transformations were quite aware of
the conceptual changes that drove their theories forward – Cantor’s “infinite”, Tarski’s
“truth”, and Heisenberg’s “particle”.

3
In the early 1960s, Toulmin gave a standing-room-only lecture at the University of Michigan. He was introduced by the benign William Alston, who banteringly averred that the visitor was the most refuted philosophical writer of the day.

The conceptual changes occasioned by modelling phenomena in new ways arise
from a form of ambiguation. This is meaning-change on purpose. Logicians and
argumentation theorists of all stripes are determinedly hostile to ambiguity, a lingering
influence of Aristotle no doubt. Perhaps the most common complaint made by these practitioners against those who make progress by changing the subject is that they commit a kind of red herring fallacy. But the trouble, if trouble there is, is not with the meaning-changer’s reasoning; the trouble rests with the fallacy theorist’s affection for naïve realism. If the
history of science has anything of metaphysical moment to tell us, it can only be the
incompatibility of scientific progress with that kind of realism. Argumentation theorists,
accordingly, should lighten up and give their attention to the subtleties of reasonings that
both occasion and flow from this creative kind of ambiguation.

Thesis six. Logic should investigate the cognitive aspects of reasoning and arguing. If
what we have said about reasoning as a facilitator of cognition is so, and if logic retains
(or re-engages with) its historic mission as a science of reasoning, then logic must take
account of what cognitive agents are like, what they are interested in and what they are
capable of. Given that beings like us come rigged with psychologies as standard
equipment, the hostility of logic to psychologism is risibly inapposite. Of course,
speaking of ambiguity, psychologism has attracted its own hefty multivocality, ably
sorted out in [Jacquette, 2003]. Not everything that “psychologism” has meant or might
come to mean is right for the logic of cognitive systems. But it can be said with some
confidence that logic’s toleration of psychologism must embrace the idea of reasoning in
the service of cognition and take due notice of reasoners as performers of cognitive
tasks. Frege’s contempt was another thing. He associated psychologism with two
(inequivalent) views that he detested. One is the “Millean” doctrine that the principles of
logic are empirical generalizations. The other asserts that the principles of logic can attain
no greater degree of objectivity than that rendered by what has come to be called
“intersubjective agreement”. We find ourselves oddly positioned. At first blush, we
would have thought Frege right in his insistence that there is no place for psychology in
the theory of sets. Tarski could say the same for model theory, Post for recursion theory,
and Gentzen for proof theory. If, as a great many mainline logicians assert, this is all there
is to logic, then logic has no room for psychology because logic makes no room for
cognitive agents. But, upon reflection, given what he actually takes psychologism to be, it
is striking how difficult it has become in the last fifty years to sustain Frege’s claims with
confidence and assurance. In the one case, Frege has Putnam to contend with, in his
insistence that quantum theory has given to logic an empirical cast ([Putnam, 1975]). In
the second, Russell and Hilbert demand a hearing, each arguing, for somewhat different
reasons, the theoretical legitimacy of stipulations when accepted by the requisite research
communities ([Russell, 1903], [Hilbert, 1935]).
Whatever may be said about the four princely domains of mathematical logic --
set theory, model theory, proof theory and recursion theory -- the past thirty years has
seen the re-engagement by logic of agent-based reasoning. If I may put it this way, logic
proper has had a role in this transformation, what with the emergence of belief dynamics,
situation logics, dialogue logics, time and action logics, among many others.4 A second
source of change is computer science and AI, what with their emphasis on non-
monotonicity, default and defeasible reasoning. A third has been informal logic, and
argumentation theory more generally.5 A theme that runs through all these developments
is that theories of reasoning must attend to how human beings actually do reason. Where
disagreement exists, it turns on the problem (or anyhow the challenge) of validating those
norms of reasoning which nearly everyone seems to agree are, on occasion, violated in
actual practice. I shall return to this important issue when we examine our next thesis,
just below. For now, it suffices to say that the admissibility of psychological factors into
any logic that is serious about cognitive agency is a no-brainer.

4
We might note in passing that the papers of the so-called Woods-Walton Approach to the fallacies – against which Michael Scriven would bray with characteristic élan – were set largely in this sector of logic’s own transformations ([Woods and Walton, 1989]). In those early days, Walton and I would repose
the burdens of petitio principii on Kripke models for intuitionistic logic. We would take it as given that
relevant logic à la Pittsburgh and Canberra would suffice for irrelevancies of inference. And so on. The two
of us have since wised up, and now pursue somewhat different, and more comprehensive, methodological
paths, concerning which it is unlikely that we are both right ([Walton, 1995], [Woods, 2004]).
5
Other important sources are cognitive psychology and empirical economics. Case-based analyses of
administrative and corporate decision-making have been with us for a hundred years. The common law is
centuries old. Logicians are just starting to take note of these. Stephen Toulmin is an exception. They have
been in his sights, one way or another, since The Place of Reason in Ethics. See also The Abuses of
Casuistry and Return to Reason.

More generally, it is helpful to conceive of a cognitive agent as a device that
executes a certain cognitive psychology. An open question is the extent to which
normative considerations could be handled by a requisite epistemology.

Thesis seven. Ideal models are unsuitable for normativity. Writing at mid-century,
Toulmin allows that

[i]n practice, of course, we do not always adopt the most satisfactory methods of
argument – we generalize hastily, ignore conflicting evidence, misinterpret
ambiguous observations and so on. We know very well that there are reliable
standards of evidence to be observed, but we do not always observe them. In other
words, we are not always rational; for to be ‘rational’ is to employ always these
reliable, self-consistent methods of forming one’s scientific beliefs, and to fail to
be ‘rational’ is to entertain the hypothesis concerned with a degree of confidence
out of proportion to its ‘probability’. ([Toulmin, 1950, p. 164])6

Here is Toulmin in his first book, The Place of Reason in Ethics, making a point that to
this day is gospel among theorists of human behaviour. ([Cohen, 1982] and [Gigerenzer
and Selten, 2001] are conspicuous exceptions.) This is the idea that when humans reason
in ways that fail to comport with the relevant theories – deductive and inductive logic,
probability theory, decision theory, among others – they perform irrationally.7 Perhaps
you will agree with me that just three years later, Toulmin had wisely modified his
position. (See again the quotations from Philosophy of Science at the beginning of our
discussion of thesis three). What Toulmin was saying in 1953 is that when a piece of
human reasoning fails to honour the requisite theory, then two possibilities are open for
consideration. One is that the defections reveal a lack of rationality. The other is that the
theory whose principles are defected from doesn’t apply to the cases in question. Much of
our discussion so far has it that it is a theme of Toulminian import that when these
theories are first-order logic, inductive logic, the probability calculus and rational
decision theory, it is nearly always a mistake to suppose that human reasoning, upon pain
of irrationality, must conform to their theorems.

6
It is not wholly clear whether Toulmin’s quotation marks are intended to be admonitory rather than merely emphatic.
7
Even more emphatic is Reichenbach in The Rise of Scientific Philosophy ([Reichenbach, 1951, p. 308]): What gives priority to science? (he asks). “Who can judge about the theory of knowledge if he has not seen knowledge in its most successful form?” Malcolm, reviewing the book, thought this “primitive nonsense”.

Against this is the insistence by a great many practitioners of these disciplines that
the defected-from principles are normatively valid, and that it follows from this that a
failure to comport with them is indeed rationally subpar. If this is right, it embodies a
hugely important insight into human rationality, one that sustains a pessimism rivalling
the anti-cognitivism of Genesis, running roughshod over the Charity Principle, which
“requires that we make the best, rather than the worst possible interpretation” ([Scriven,
1976, p. 71]). This is something to pause over. It convicts us all of widespread
irrationality, and it makes us massive cognitive misfits.
Still, these theorists have reasons for their view of the matter, the two most
prominent of which are: first, that the principles of these disciplines are articulable in
ideal models in which they are analytic; and, second, that their normative force is
guaranteed by a kind of reflective equilibrium. Both of these claims are problematic. The
analyticity claim is beggared by question-begging. If, for example, someone says that it is
not true that belief is closed under consequence, it is unavailing to counter with the
assurance that there is an ideal model in which it is analytically true. Not only does
“it’s analytically true” not work as a rejoinder to “it’s not true”, but being true-in-a-
model makes no claim, just as it stands, on being true in the world, so to speak.8 Close
kin of the analyticity rejoinder is the true-in-a-model response. So, again, someone says
that various would-be normative principles don’t hold. The response is that there is a
model M in which these principles are true. This might be true, but it is irrelevant.
Anything true-in-M might actually be false.
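
For definiteness, the closure principle in dispute can be put as follows; this is a standard formulation offered as a gloss, not a quotation from any of the parties:

    % Closure of belief under consequence: if agent a believes P and P entails Q,
    % then a believes Q.
    \[
      \text{If } B_a(P) \text{ and } P \vDash Q \text{, then } B_a(Q).
    \]
    % It is this principle, analytically true in certain ideal models of belief,
    % whose truth for beings like us is what the objector denies.
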
The reflective equilibrium defence of the normativity of ideal models falters at the
starting gate. It may be true that by and large the cognitive behaviour of beings like us is
in reflective equilibrium with what could with justice be called the principles of right
reasoning. But there is not a jot of support in this for the proposition that the principles
privileged by these ideal models are the principles of right reasoning for beings like us.
Accordingly, we may think it best to abandon these defences, and direct our quest for the
normative elsewhere. On one approach, a good place to look is indeed our general
practice, but not because it is in reflective equilibrium with the proffered ideal models,
but, again, rather more because our run-of-the-mill reasoning doesn’t kill us: We survive,
we pass on our ratiocinative devices to the descendent class, we prosper, we do particle
physics, and occasionally we build great civilizations. Either we do these things on the
basis of such knowledge as our cognitive processes are capable of attaining for us, or in
the absence of it. If the former is true, the pessimism of the ideal model approach is
unjustified. If the latter is true, the value of knowledge is debased.

Thesis eight. The Can Do Principle should be applied with caution. In most things
cognitive, the Can Do Principle bids the individual to try to tailor the advancement of his
agendas to principles he is already at home with and to problem-solving techniques over
which he has attained a certain mastery. Can Do requires the problem-solver not to start
from scratch if he can help it. Can Do is one of the principal canons of the cognitive
economies in which agents of all types operate. But there are limits. A hammer’s
usefulness in pulling nails gives it no leg up in the removal of paint from the walls of one’s
dining room. Using a valuable and versatile tool for a task for which it is unsuited is a
misbegotten, false economy. When the misapplication is inadvertent, there is room to
postulate the presence of the Make Do Principle. Make Do is a degenerate case of Can
Do. It has a twofold appeal. It allows the cognitive agent the comfort of doing something
rather than nothing. It is also abetted by the ideal model methodology of normativity, for
which it is tailor-made. In his more composed moments, the reasoner will see the
disutility of methods and applications of principles that have no standing in the problem-
space at hand. But not only is this something that he sometimes cannot see, but, not
seeing it, Make Do is given a degree of encouragement by the attractive example of
scientific progress via conceptual change.

8
Let us note for the record that Toulmin’s models are not normative but abductive. They are ways of seeing
which facilitate our accounting for data ready to hand.

Any probability theorist who knows his onions will be aware that after Pascal,
probability changed. This presents us with a fundamental question: When K is a new
conception of something, does it extinguish its predecessor-concept, or does it foster a
new ambiguity which leaves the old concept standing? The probability theorist’s
inclination is to see its axioms on the analogy of the principle of rectilinear propagation,
after which the scientific concept of light both changed and was laid open to an improved
scientific understanding. Why could we not proclaim a similar law for probability -- the
principle Pascalian compounding -- thanks to which the concept of probability both
changed and was made susceptible to a better theoretical (i.e., mathematical)
understanding? Perhaps we could, but doing so does not answer this fundamental
question. Certainly we don’t expect an individual’s ordinary reasoning about light to
comport with the rectilinear propagation principle. An individual who managed to live
his life wholly innocent of the disclosures of optical geometry might well run up an error-
free record in his reasonings about light and shadow in rainy Vancouver. This tells us that
the laws of optical geometry are not normative for ordinary reasoning about light.
Probability theory (and deductive logic and decision theory) are quite different in their
normative presumptions. I wonder about this. Granted that the aleatory theorems are
binding on certain patterns of reasoning “in statistical mechanics and parts of quantum
mechanics … [or concerning] particular questions of statistical technique: questions to be
answered in terms of the theory of curve-fitting, significant deviations and so on”, it is
left wide open as to what principles apply to individual agents when they estimate the
probability of Lisa being a lawyer with a big firm who works on environmental issues.
Unreflective resort to the calculus of probability gives Toulminian offence for its
procrusteanism (cf. [Toulmin, 1953, p. 126]).9 By my lights, and to the same effect, it is a
matter of Make Do. It is “looking for the wrong things.”
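
For the record, and reading Lisa’s description as a conjunction of attributes (a gloss of the familiar conjunction point, not something asserted in the text), the aleatory calculus itself fixes only this much:

    % A conjunction is never more probable than either of its conjuncts.
    \[
      \Pr(\mathrm{Lawyer} \wedge \mathrm{BigFirm} \wedge \mathrm{Environmental})
      \;\le\; \Pr(\mathrm{Lawyer}).
    \]
    % What the calculus does not settle is whether this norm binds an individual's
    % on-the-spot estimate, which may be tracking plausibility rather than
    % aleatory probability.
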
Although they never put it in these terms, one of the central accomplishments of
advancers of the informal logic research programme, which The Uses of Argument did so
much to motivate and shape, is the stiff resistance they have shown to procrusteanism and
Make Do in the analysis of human reasoning and argument.10 It is a welcome turning in
the human sciences, but not as yet one graced by a place in the mainstream. More needs
to be done. Professor Toulmin, I trust that you are listening.

Bibliography

Gregory N. Carlson and Francis Jeffry Pelletier, editors, The Generic Book, Chicago IL:
The University of Chicago Press, 1995.

Jonathan Cohen, “Are people programmed to commit fallacies: further thoughts about the
interpretation of experimental data and probability judgement”, Journal for the Theory of Social Behaviour, volume 12, pp. 251-274, 1982.

James Franklin, The Science of Conjecture: Evidence and Probability before Pascal,
Baltimore MD: The Johns Hopkins University Press, 2001.

Dov M. Gabbay and John Woods, Agenda Relevance: A Study in Formal Pragmatics,
Amsterdam: North Holland, 2003.

Dov M. Gabbay and John Woods, The Reach of Abduction: Insight and Trial,
Amsterdam: North Holland, 2005a.

Dov M. Gabbay and John Woods, “Fallacies as cognitive virtues”, in Ahti-Veikko Pietarinen, editor, Logic, Games and Philosophy: Foundational Perspectives, Amsterdam: Springer, to appear in 2005b.

Gerhard Gentzen, “Untersuchungen über das logische Schliessen”, Mathematische Zeitschrift, 39, pp. 176-210, 405-431, 1935.

G. Gigerenzer and R. Selten, “Rethinking rationality”, in Gigerenzer and Selten, editors, The Adaptive Toolbox, pp. 1-12, Cambridge MA: MIT Press, 2001.

David Hilbert, “The foundations of mathematics”, in Jean van Heijenoort, editor, From Frege to Gödel, pp. 464-479, Cambridge MA: Harvard University Press, 1967; first published in 1927.

David Hilbert, Gesammelte Abhandlungen, Band III, Berlin and Heidelberg: Springer
Verlag, 1935.

Dale Jacquette, editor, Philosophy, Psychology and Psychologism: Critical and Historical
Readings on the Psychological Turn in Philosophy, Dordrecht and Boston: Kluwer, 2003.

A.R. Jonsen and Stephen Toulmin, The Abuses of Casuistry, Berkeley and Los Angeles:
University of California Press, 1990.

Daniel Kahneman and Amos Tversky, “Judgement under uncertainty: Heuristics and
biases”, Science, volume 185, pp. 1124-1131, 1974.

J.S. Mill, A System Of Logic, London: Longman’s Green, 1959.

C.S. Peirce, Collected Works, Cambridge MA: Harvard University Press, 1931--1958.

E.L. Post, “Recursively enumerable sets of positive integers and their decision
problems”, Bulletin of the American Mathematical Society, 50, pp. 305-357, 1944.

Hilary Putnam, “The logic of quantum mechanics”, in Mathematics, Matter and Method, pp.
174-197, Cambridge: Cambridge University Press, 1975.

Hans Reichenbach, The Rise of Scientific Philosophy, Berkeley and Los Angeles:
University of California Press, 1951.

Bertrand Russell, The Principles of Mathematics, London: George Allen and Unwin,
1903.

Michael Scriven, Reasoning, New York: McGraw-Hill, 1976.

Paul Thagard, Conceptual Revolutions, Princeton NJ: Princeton University Press, 1992.

Stephen Toulmin, The Place of Reason in Ethics, Cambridge: Cambridge University Press, 1950.

Stephen Toulmin, The Philosophy of Science: An Introduction, London: The Hutchinson University Library, 1953.

Stephen Toulmin, Return to Reason, Cambridge, MA: Harvard University Press, 2001.

Douglas Walton, A Pragmatic Theory of Fallacy, Tuscaloosa: University of Alabama Press, 1995.

John Woods and Douglas Walton, Fallacies: Selected Papers 1972-1982, Berlin and
New York: Foris de Gruyter, 1989.

John Woods, Paradox and Paraconsistency: Conflict Resolution in the Abstract Sciences,
Cambridge and New York: Cambridge University Press, 2003.

John Woods, The Death of Argument: Fallacies in Agent-Based Reasoning, Dordrecht and Boston: Kluwer, 2004.
